
YOU SAID IT

More content for non-IT readers
I have been reading your magazine for the last few years. The company I work in is in the manufacturing industry. Similarly, your subscribers' database may have more individuals like me from companies that are not directly related to the IT industry. Currently, your primary focus is on technical matters, and the magazine carries articles written by skilled technical individuals, so OSFY is really helpful for open source developers. However, you also have some non-IT subscribers like us, who can understand that something great is available in the open source domain, which they can deploy to reduce their IT costs. But, unfortunately, your magazine does not inform us about open source solutions providers.
I request you to introduce the companies that provide end-to-end IT solutions on open source platforms, including thin clients, desktops, servers, virtualisation, embedded customised OSs, ERP, CRM, MRP, emails and file servers, etc. Kindly publish relevant case studies, with the overall cost savings and benefits. Just as you feature job vacancies, do give us information about the solutions providers I mentioned above.
—Shekhar Ranjankar; [email protected]

ED: Thank you for your valuable feedback. We do carry case studies of companies deploying open source, from time to time. We also regularly carry a list of different solutions providers from different open source sectors. We will surely take note of your suggestion and try to continue carrying content that interests non-IT readers too.

Requesting an article on Linux server migration
I am glad to receive my first copy of OSFY. I have a suggestion to make: if possible, please include an article on migrating to VMware (from a Linux physical server to VMware ESX). Also, do provide an overview of some open source tools (like Ghost for Linux) to take an image of a physical Linux server.
—Rohit Rajput; [email protected]

ED: It's great to hear from you. We will definitely cover the topics suggested by you in one of our forthcoming issues. Keep reading our magazine. And do feel free to get in touch with us if you have any such valuable feedback.

A request for the Backtrack OS to be bundled on the DVD
I am a huge fan of Open Source For You. Thank you for bundling the Ubuntu DVD with the May 2014 issue. Some of my team members and I require the Backtrack OS. Could you provide this in your next edition? I am studying information sciences for my undergrad degree. Please suggest the important programming languages that I should become proficient in.
—Aravind Naik; [email protected]

ED: Thanks for writing in to us. We're pleased to know that you liked the DVD. Backtrack is no longer being maintained. The updated version for penetration testing is now known as 'Kali Linux' and we bundled it with the April 2014 issue of OSFY. For career related queries, you can refer to older OSFY issues or you can find related articles on www.opensourceforu.com

Overseas subscriptions
Previously, I used to get copies of LINUX For You/Open Source For You and Electronics For You from local book stores but, lately, none of them carry these magazines any more. So how can I get copies of all these magazines in Malaysia, and where can I get previous issues too?
—Abdullah Abd. Hamid; [email protected]

ED: Thank you for reaching out to us. Currently, we do not have any reseller or distributor in Malaysia for news stand sales, but you can always subscribe to the print edition or the e-zine version of the magazines. You can find the details of how to subscribe to the print editions on www.pay.efyindia.com and for the e-zine version, please go to www.ezines.efyindia.com

Please send your comments or suggestions to:
The Editor, Open Source For You,
D-87/1, Okhla Industrial Area, Phase I, New Delhi 110020
Phone: 011-26810601/02/03, Fax: 011-26817563
Email: [email protected]


OFFERS OF THE MONTH

One Month Free: Free Dedicated Server Hosting
Subscribe for our Annual Package of Dedicated Server Hosting and enjoy one month of free service.
Offer valid till 31st August 2014! For more information, call us on 1800-209-3006/ +91-253-6636500
www.esds.co.in

2000 Rupees Coupon: Free Trial of Our Cloud Platform
No conditions attached for a trial of our cloud platform. Enjoy, and please share feedback at [email protected]
Offer valid till 31st August 2014! For more information, call us on 1800-212-2022 / +91-120-666-7718
www.cloudoye.com

Get 10% Discount: Reseller Package Special Offer!
Free Dedicated hosting/VPS for one month. Subscribe for the annual package of Dedicated hosting/VPS and get one month FREE.
Offer valid till 31st August 2014! Contact us at 09841073179 or write to [email protected]
www.space2host.com

35% Off & More: "Do not wait! Be a part of the winning team"
Get 35% off on course fees, and if you appear for two Red Hat exams, the second shot is free.
Offer valid till 31st August 2014! Contact us @ 98409 82184/85 or write to [email protected]
www.vectratech.in

Get 12 Months Free: Pay Annually for Dedicated Server Hosting
Subscribe for the Annual Packages of Dedicated Server Hosting and enjoy the next 12 months of free services.
Offer valid till 31st August 2014! For more information, call us on 1800-212-2022 / +91-120-666-7777
www.goforhosting.com

Get 25% Off on ProX Plans: Time to Go PRO Now (PackWeb Hosting)
Considering a VPS or a Dedicated Server? Save big and go with our ProX Plans, ideal for running high-traffic or e-commerce websites. Coupon Code: OSFY2014
Offer valid till 31st August 2014! Contact us at 98769-44977 or write to [email protected]
www.prox.packwebhosting.com

To advertise here, contact Omar on +91-995 888 1862 or 011-26810601/02/03, or write to [email protected]
www.opensourceforu.com | Email: [email protected]
FOSSBYTES Powered by www.efytimes.com

CentOS 7 now available
The CentOS Project has announced the general availability of CentOS 7, the first release of the free Linux distro based on the source code for Red Hat Enterprise Linux (RHEL) 7. It is the first major release after the collaboration between the CentOS Project and Red Hat. CentOS 7 is built from the freely available RHEL 7 source code tree, so its features closely resemble those of Red Hat's latest operating system. Just like RHEL 7, it is now powered by version 3.10.0 of the Linux kernel, with XFS as the default file system. It is also the first version to include the systemd management engine, the dynamic firewall system called firewalld, and the GRUB2 boot loader. The default Java Development Kit has also been upgraded to OpenJDK 7, and the system now ships with open VMware tools and 3D graphics drivers, out-of-the-box. Also, like RHEL 7, this is the first version of CentOS that claims to offer an in-place upgrade path. Soon, users will be able to upgrade from CentOS 6.5 to CentOS 7 without reformatting their systems.
The CentOS team has launched a new build process, in which the entire distro is built from code hosted at the CentOS Project's own Git repository. Source code packages (SRPMs) are created as a side effect of the build cycle, and will be hosted on the main CentOS download servers.
Disc images of CentOS 7, which include separate builds for the GNOME and KDE desktops, a live CD image and a network-installable version, are also now available.

A rare SMS worm is attacking your Android device!
Android does get attacked with Trojan apps that have no self-propagation mechanism, so users don't notice the malfunction. But here's a different, rather rare, mode of attack that Android devices are now facing. Selfmite is an SMS worm, the second such virus found in the past two months. Selfmite automatically sends SMSs to users with their name in the message. The SMS contains a shortened URL which prompts users to install a third-party APK file called TheSelfTimerV1.apk. The SMS says, "Dear [name], Look the Self-time..". A remote server hosts this malware application. Users can find SelfTimer installed in the app drawer of their Android devices.
The Selfmite worm shows a pop-up to download mobogenie_122141003.apk, which offers synchronisation between Android devices and PCs. The app has over 50 million downloads on Play Store, but all are through various paid referral schemes and promotion programmes. Researchers at Adaptive Mobile believe that a number of Mobogenie downloads are promoted through some malicious software used by an unknown advertising platform. A popular vendor of security solutions in North America detected dozens of devices that were infected with Selfmite. The attack campaign was launched using Google's shortlink format, in which the shortened URL of this malicious app was distributed. The APK link was visited 2,140 times before Google disabled it.
Android devices block apps from unknown and unauthorised developers by default, but some users enable installation of apps from 'unknown sources'. Their devices become the targets for worms like this.

Google to launch Android One smartphones with MediaTek chipset
Google made an announcement about its Android One program at the recent Google I/O 2014 in San Francisco, California. The company plans to launch devices powered by Android One in India first, with companies like Micromax, Spice and Karbonn. Android One has been launched to reduce the production costs of phones. The manufacturers mentioned earlier will be able to launch US$ 100 phones based on this platform. Google will handle the software part, using Android One, so phones will get firmware updates directly from Google. This is surprising because low budget phones usually don't receive any software updates. Sundar Pichai, Android head at Google, showcased a Micromax device at the show. The Micromax Android One phone has an 11.43 cm (4.5 inch) display, FM radio, an SD card slot and a dual SIM slot. Google has reportedly partnered with MediaTek for chipsets to power the Android One devices. We speculate that it is MediaTek's MT6575 dual core processor that has been packed into Micromax's Android One phone.
It is worth mentioning here that 78 per cent of the smartphones launched in Q1 of 2014 were priced around US$ 200 in India. So Google's Android One will definitely herald major changes in this market. Google will also provide manufacturers with guidelines on hardware designs. And it has tied up with hardware component companies to provide high volume parts to manufacturers at a lower cost, in order to bring out budget Android smartphones.



OSFY Classifieds
Classifieds for Linux & Open Source IT Training Institutes

WESTERN REGION

Linux Lab (empowering linux mastery)
Courses Offered: Enterprise Linux & VMware
Address (HQ): 1104, D' Gold House, Nr. Bharat Petrol Pump, Ghyaneshwer Paduka Chowk, FC Road, Shivajinagar, Pune - 411005
Contact Person: Mr. Bhavesh M. Nayani
Contact No.: +020 60602277, +91 8793342945
Email: [email protected]
Branch(es): coming soon
Website: www.linuxlab.org.in

Linux Training & Certification
Courses Offered: RHCSA, RHCE, RHCVA, RHCSS, NCLA, NCLP, Linux Basics, Shell Scripting, (coming soon) MySQL
Address (HQ): 104B Instant Plaza, Behind Nagrik Stores, Near Ashok Cinema, Thane Station West - 400601, Maharashtra, India
Contact Person: Ms. Swati Farde
Contact No.: +91-22-25379116/ +91-9869502832
Email: [email protected]
Website: www.ltcert.com

SOUTHERN REGION

*astTECS Academy
Courses Offered: Basic Asterisk Course, Advanced Asterisk Course, Free PBX Course, Vici Dial Administration Course
Address (HQ): 1176, 12th B Main, HAL 2nd Stage, Indiranagar, Bangalore - 560008, India
Contact Person: Lt. Col. Shaju N. T.
Contact No.: +91-9611192237
Email: [email protected]
Website: www.asttecs.com, www.asterisk-training.com

IPSR Solutions Ltd.
Courses Offered: RHCE, RHCVA, RHCSS, RHCDS, RHCA
Produced the highest number of Red Hat professionals in the world
Address (HQ): Merchant's Association Building, M.L. Road, Kottayam - 686001, Kerala, India
Contact Person: Benila Mendus
Contact No.: +91-9447294635
Email: [email protected]
Branch(es): Kochi, Kozhikode, Thrissur, Trivandrum
Website: www.ipsr.org

Advantage Pro
A prominent player in Open Source Technology
Courses Offered: RHCSS, RHCVA, RHCE, PHP, Perl, Python, Ruby, Ajax
Address (HQ): 1 & 2, 4th Floor, Jhaver Plaza, 1A Nungambakkam High Road, Chennai - 600034, India
Contact Person: Ms. Rema
Contact No.: +91-9840982185
Email: [email protected]
Website(s): www.vectratech.in

Linux Learning Centre
Courses Offered: Linux OS Admin & Security Courses for Migration, Courses for Developers, RHCE, RHCVA, RHCSS, NCLP
Address (HQ): 635, 6th Main Road, Hanumanthnagar, Bangalore - 560019, India
Contact Person: Mr. Ramesh Kumar
Contact No.: +91-80-22428538, 26780762, 65680048 / +91-9845057731, 9449857731
Email: [email protected]
Branch(es): Bangalore
Website: www.linuxlearningcentre.com

Duestor Technologies
Courses Offered: Solaris, AIX, RHEL, HP-UX, SAN Administration (NetApp, EMC, HDS, HP), Virtualisation (VMware, Citrix, OVM), Cloud Computing, Enterprise Middleware
Address (HQ): 2-88, 1st Floor, Sai Nagar Colony, Chaitanyapuri, Hyderabad - 060
Contact Person: Mr. Amit
Contact Number(s): +91-9030450039, +91-9030450397
E-mail id(s): [email protected]
Website(s): www.duestor.com

NORTHERN REGION

GRRAS Linux Training and Development Center
Courses Offered: RHCE, RHCSS, RHCVA, CCNA, PHP, Shell Scripting (online training is also available)
Address (HQ): GRRAS Linux Training and Development Center, 219, Himmat Nagar, Behind Kiran Sweets, Gopalpura Turn, Tonk Road, Jaipur, Rajasthan, India
Contact Person: Mr. Akhilesh Jain
Contact No.: +91-141-3136868/ +91-9983340133, 9785598711, 9887789124
Email: [email protected]
Branch(es): Nagpur, Pune
Website(s): www.grras.org, www.grras.com

EASTERN REGION

Academy of Engineering and Management (AEM)
Courses Offered: RHCE, RHCVA, RHCSS, Clustering & Storage, Advanced Linux, Shell Scripting, CCNA, MCITP, A+, N+
Address (HQ): North Kolkata, 2/80 Dumdum Road, Near Dumdum Metro Station, 1st & 2nd Floor, Kolkata - 700074
Contact Person: Mr. Tuhin Sinha
Contact No.: +91-9830075018, 9830051236
Email: [email protected]
Branch(es): North & South Kolkata
Website: www.aemk.org

Expect Android Wear app section along with Google Play Services update
Google recently started rolling out its Google Play Services update 5.0 to all devices. This version is an advance on the existing 4.4, bringing the Android wearable services API and much more. Mainly focused on developers, this version was announced earlier in 2014. According to the search giant's blog, the newest version of the Google Play store includes many updates that can increase app performance. These include wearable APIs, a dynamic security provider, and improvements in Drive, Wallet and Google Analytics, etc. The main focus is on the Android Wearable platform and APIs, which will enable more applications on these devices. In addition to this, Google has announced a separate section for Android Wear apps in the Play store.
These apps for the Android Wear section in the Google Play store come from Google itself. The collection includes official companion apps for Android devices, Hangouts and Google Maps. The main purpose of the Android Wear Companion app is to let users manage their devices from Android smartphones. It provides voice support, notifications and more. There are third party apps as well, from Pinterest, Banjo and Duolingo.

Calendar of forthcoming events

4th Annual Datacenter Dynamics Converged; September 18, 2014; Bengaluru: The event aims to assist the community in the data centre domain by exchanging ideas, accessing market knowledge and launching new initiatives. Contact: Praveen Nair; Email: [email protected]; Ph: +91 9820003158; Website: http://www.datacenterdynamics.com/

Gartner Symposium IT Xpo; October 14-17, 2014; Grand Hyatt, Goa: CIOs and senior IT executives from across the world will gather at this event, which offers talks and workshops on new ideas and strategies in the IT industry. Website: http://www.gartner.com

Open Source India; November 7-8, 2014; NIMHANS Center, Bengaluru: Asia's premier open source conference, which aims to nurture and promote the open source ecosystem across the sub-continent. Contact: Omar Farooq; Email: [email protected]; Ph: 09958881862; Website: http://www.osidays.com

CeBit; November 12-14, 2014; BIEC, Bengaluru: One of the world's leading business IT events, offering a combination of services and benefits that will strengthen the Indian IT and ITES markets. Website: http://www.cebit-india.com/

5th Annual Datacenter Dynamics Converged; December 9, 2014; Riyadh: The event aims to assist the community in the data centre domain by exchanging ideas, accessing market knowledge and launching new initiatives. Contact: Praveen Nair; Email: [email protected]; Ph: +91 9820003158; Website: http://www.datacenterdynamics.com/

Hostingconindia; December 12-13, 2014; NCPA, Jamshedji Bhabha Theatre, Mumbai: This event will be attended by Web hosting companies, Web design companies, domain and hosting resellers, ISPs and SMBs from across the world. Website: http://www.hostingcon.com/contact-us/

New podcast app for Linux is now ready for testing
An all-new podcast app for Ubuntu was launched recently. This app, called 'Vocal', has a great UI and design. Nathan Dyer, the developer of this project, has released unstable beta builds of the app for Ubuntu 14.04 and 14.10, for testing purposes.
Only next-gen easy-to-use desktops are capable of running the beta version of Vocal. Installing beta versions of the app on Ubuntu is not as difficult as installing them on KDE, GNOME or Unity, but users can't try the beta version of Vocal without installing the unstable elementary desktop PPA. Vocal is an open source app, and one can easily port it to mainstream Linux versions from Ubuntu. However, Dyer suggests users wait until the first official beta version of the app for the easy-to-use desktops is available.
The official developer's blog has a detailed report on the project.

Google plans to remove QuickOffice from app stores
Google has announced the company's future plans for Google Docs, Slides and Sheets. It has integrated the QuickOffice service into Google Docs now, so there is no longer a need for the separate Google QuickOffice app. QuickOffice was acquired by Google in 2012. It served free document viewing and editing on Android and iOS for two years. Google has now decided to discontinue this free service.
The firm has integrated QuickOffice into the Google Docs, Sheets and Slides app. The QuickOffice app will be removed from the Play Store and Apple's App Store soon, and users will not be able to see or install it. Existing users will be able to continue to use the old version of the app.

CoreOS Linux comes out with Linux containers as a service!
CoreOS has launched a commercial service to ease the workload of systems administrators. The new commercial Linux distribution service can update automatically, so systems administrators do not have to perform any major update manually. Linux-based companies like RedHat and SUSE use open source and free applications and libraries for their operations, yet offer commercial subscription services for enterprise editions of Linux. These services cover software, updates, integration and technical support, bug fixes, etc.
CoreOS has a different strategy compared to competitive services offered by other players in the service, support and distribution industries. Users will not receive any major updates, since CoreOS wants to save you the hassle of manually updating all packages. The company plans to stream copies of updates directly to the OS. CoreOS has named the software 'CoreUpdate'. It controls and monitors




software packages and their updates, and also provides controls for administrators to manually update a few packages if they want to. It has a roll-back feature in case an update causes any malfunction in a machine. CoreUpdate can manage multiple systems at a time.
CoreOS was designed to promote the use of the open source OS kernel, which is used in a lot of cloud based virtual servers. A CoreOS instance consumes less than half the resources of comparable Linux distributions. Applications run in a virtualised container called Docker, and they can start instantly. CoreOS was launched in December last year. It uses two partitions, which help in easily updating distributions. One partition contains the current OS, while the other is used to store the updated OS. This smoothens out the entire process of upgrading a package or an entire distribution. The service can be directly installed and run on the system, or via cloud services like Amazon, Google or Rackspace. The venture capital firm Kleiner Perkins Caufield & Byers has invested over US$ 8 million in CoreOS. The company was also backed by Sequoia Capital and Fuel Capital in the past.

Linux Foundation releases Automotive Grade Linux to power cars
The Linux Foundation recently released Automotive Grade Linux (AGL) to power automobiles, a move that marks its first steps into the automotive industry. The Linux Foundation is sponsoring the AGL project to collaborate with the automotive, computing hardware and communications industries, apart from academia and other sectors. The first release of this system is available for free on the Internet. A Linux-based platform called Tizen IVI is used to power AGL. Tizen IVI was primarily designed for a broad range of devices, from smartphones and TVs to cars and laptops.
Here is the list of features that you can experience in the first release of AGL: a dashboard, Bluetooth calling, Google Maps, HVAC, audio controls, Smartphone Link Integration, media playback, home screen and news reader. The Linux Foundation and its partners are expecting this project to change the future of open source software. They hope to see next-generation car entertainment, navigation and other tools powered by open source software. The Linux Foundation expects collaborators to add new features and capabilities in future releases. Development of AGL is expected to continue steadily.

Mozilla to launch Firefox-based streaming dongle, Netcast
After the successful launch of Google's Chromecast, which sold in millions, everyone else has discovered the potential of streaming devices. Recently, Amazon and Roku launched their devices. According to GigaOM, Mozilla will soon enter the market with its Firefox-powered streaming device. A Mozilla enthusiast, Christian Heilmann, recently uploaded a photo of Mozilla's prototype streaming device on Twitter. People at GigaOM managed to dig out more on it and even got their hands on the prototype as soon as that leaked photo went viral on Twitter. The device provides better functionality and options than Chromecast. Mozilla has partnered with an as yet unknown manufacturer to build this device. The prototype has been sent to some developers for testing and reviews. The device, which is called Netcast, has a hackable open bootloader, which makes it run some Chromecast apps.
Mozilla has always looked for an open environment for its products. It is expected that the company's streaming stick will come with open source technology, which will help developers to develop HDTV streaming apps for smartphones.



Microsoft to abandon X-series Android smartphones too
It hasn't been long since Microsoft ventured into the Android market with its X series devices, and the company has already revealed plans to abandon the series. With the announcement of up to 18,000 job cuts, the company is also phasing out its feature phones and the recently launched Nokia X Android smartphones.
Here are excerpts from an internal email sent to Microsoft employees by Jo Harlow, who heads the phone business under Microsoft Devices:
"Placing Mobile Phone services in maintenance mode: With the clear focus on Windows Phones, all Mobile Phones-related services and enablers are planned to move into maintenance mode; effective: immediately. This means there will be no new features or updates to services on any Mobile Phones platform as a result of these plans. We plan to consider strategic options for Xpress Browser to enable continuation of the service outside of Microsoft. We are committed to supporting our existing customers, and will ensure proper operation during the controlled shutdown of services over the next 18 months. A detailed plan and timeline for each service will be communicated over the coming weeks.
"Transitioning developer efforts and investments: We plan to transition developer efforts and investments to focus on the Windows ecosystem while improving the company's financial performance. To focus on the growing momentum behind Windows Phone, we plan to immediately begin ramping down developer engagement activities related to Nokia X, Asha and Series 40 apps, and shift support to maintenance mode."

Opera is once again available on Linux
Norwegian Web browser company Opera has finally released a beta version of its browser for Linux. This Opera 24 version for Linux has the same features as Opera 24 on the Windows and Mac platforms. Chrome and Firefox are currently the two most used browsers on the Linux platform; Opera 24 will be a good alternative to them.
As of now, only the developer or beta version of Opera for Linux is available. We are hoping to see a stable version in the near future. In this beta version, Linux users will get to experience popular Opera features like Speed Dial, Discover, Stash, etc. Speed Dial is a home page that gives users an overview of their history, folders and bookmarks. Discover is an RSS reader embedded within the browser, which makes gathering and reading articles of interest easier. Stash is like Pinterest within a browser: its UI is inspired by Pinterest, and it allows users to collect websites and categorise them. Stash is designed to enable users to plan their travel, work and personal lives with a collection of links.

Unlock your Moto X with your tattoo
Motorola is implementing an alternative security system for the Moto X. It is frustrating to remember difficult passwords, while simpler passwords are easy to crack. To counter this, VivaLnk has launched digital tattoos: a tattoo that automatically unlocks the Moto X when applied to the skin.
The technology is based on Near Field Communication (NFC) to connect with smartphones and authenticate access. Motorola is working on optimising digital tattoos with Google's Advanced Technology and Projects group.
The pricing is on the higher side, but this is a great initiative in wearable technology. Developing user friendly alternatives to the password and PIN has been a major focus of tech companies. Motorola had talked about this in the introductory session of the D11 conference in California, when it discussed the idea of passwords in pills or tattoos. The idea may seem like a gimmick, but you never know when it will become commonly used. VivaLnk is working on making this technology compatible with other smartphones too. It is considering entering the domain of creating tattoos of different types and designs.

OpenSSL flaws fixed by PHP
PHP recently pushed out new versions of its popular scripting language, which fix many crucial bugs; two of them are in OpenSSL-related code. The flaws are not as serious as Heartbleed, which popped up a couple of months back. One flaw is directly related to OpenSSL's handling of time stamps and the other is related to the same thing in a different way. PHP 5.5.14 and 5.4.30 have fixed both flaws.




The other bugs that were fixed were not security related, but of a more general type.

iberry introduces the Auxus Linea L1 smartphone and Auxus AX04 tablet in India
In a bid to expand its portfolio in the
Indian market, iberry has introduced two
new Android KitKat-powered devices in
the country—a smartphone and a tablet.
The Auxus Linea L1 smartphone is priced
at Rs 6,990 and the Auxus AX04 tablet
is priced at Rs 5,990. Both have been
available from the online megastore eBay
India, since June 25, this year.
The iberry Auxus Linea L1
smartphone features an 11.43 cm (4.5
inch) display with OGS technology and
Gorilla Glass protection. It is powered
by a 1.3 GHz quad-core MediaTek
(MT6582) processor coupled with 1 GB
of DDR3 RAM. It sports a 5 MP rear camera with an LED flash and a 2 MP front-
facing camera. It comes with 4 GB of inbuilt storage expandable up to 64 GB via
microSD card. The dual-SIM device runs Android 4.4 KitKat, out-of-the-box. The
3G-supporting smartphone has a 2000mAh battery.
Meanwhile, the iberry Auxus AX04 tablet features a 17.78 cm (7 inch) IPS
display. It is powered by a 1.5 GHz dual-core processor (unspecified chipset)
coupled with 512 MB of RAM. The voice-supporting tablet sports a 2 MP rear
camera and a 0.3 MP front-facing camera. It comes with 4 GB of built-in storage
expandable up to 64 GB via micro-SD card slot. The dual-SIM device runs Android
4.4 KitKat out-of-the-box. It has a 3000mAh battery.

Google to splurge a whopping Rs 1,000 million on marketing Android One
Looks like global search engine giant Google wants to leave no stone
unturned in its quest to make its ambitious Android One smartphone-for-
the-masses project reach its vastly dispersed target audience in emerging
economies (including India). The buzz is that Google is planning to splurge
over a whopping Rs 1,000 million with its official partners on advertising
and marketing for the platform.
Even as Sundar Pichai, senior
VP at Google who is in charge
of Android, Chrome and Apps, is all set to launch the first batch of low
budget Android smartphones in India sometime in October this year, the
latest development shows how serious Google is about the project.
It was observed that Google’s OEM partners were forced into launching a new
smartphone every nine months to stay ahead in the cut-throat competition. However,
thanks to Google’s new Android hardware and software reference platform, its
partners will now be able to save money and get enough time to choose the right
components, before pushing their smartphones into the market. Android One will
also allow them to push updates to their Android devices, offering an optimised stock
Android experience. With the Android One platform falling into place, Google will be
able to ensure a minimum set of standards for Android-based smartphones.




With the Android One platform, Google aims to reach the 5 billion people across the world who still do not own a smartphone. According to Pichai, less than 10 per cent of the population owns smartphones in emerging countries. The promise of a stock Android experience at a low price point is what Android One aims to provide. Home-grown manufacturers such as Micromax, Karbonn and Spice will create and sell these Android One phones, for which hardware reference points, software and subsequent updates will be provided by Google. Even though the spec sheet of Android One phones hasn't been officially released, Micromax is already working on its next low budget phone, which many believe will be an Android One device.

SQL injection vulnerability patched in Ruby on Rails
Two SQL injection vulnerabilities were patched in Ruby on Rails, the open source Web development framework now used by many developers; some high profile websites also use it. The Ruby on Rails developers recently launched versions 3.2.19, 4.0.7 and 4.1.3, and advised users to upgrade to these versions as soon as possible. A few hours later, they released versions 4.0.8 and 4.1.4 to fix problems caused by the 4.0.7 and 4.1.3 updates.
One of the two SQL injection vulnerabilities affects applications running on Ruby on Rails versions 2.0.0 through 3.2.18 that use the PostgreSQL database system and query bit string data types. The other vulnerability affects applications running on Ruby on Rails versions 4.0.0 to 4.1.2 that use PostgreSQL and query range data types. Despite affecting different versions, these two flaws are related, and both allow attackers to inject arbitrary SQL code using crafted values.

The city of Munich adopts Linux in a big way!
It's certainly not a case of an overnight conversion: the city of Munich began to seek open source alternatives way back in 2003.
With a population of about 1.5 million citizens and thousands of employees, this German city took its time to adopt open source. Tens of thousands of government workstations were to be considered for the change. Its initial shopping list had suitably rigid specifications, spanning everything from avoiding vendor lock-in and receiving regular hardware support updates, to having access to an expansive range of free applications.
In its first stage of migration, in 2006, Debian was introduced across a small percentage of government workstations, with the remaining Windows computers switching to OpenOffice.org, followed by Firefox and Thunderbird.
Debian was replaced by a custom Ubuntu-based distribution named 'LiMux' in 2008, after the team handling the project 'realised Ubuntu was the platform that could satisfy our requirements best.'

Linux kernel 3.2.61 LTS officially released
The launch of the Linux kernel 3.2.61 LTS, the brand-new maintenance release of the 3.2 kernel series, has been officially announced by Ben Hutchings, the maintainer of the Linux 3.2 kernel branch. While highlighting the slew of changes that come bundled with the latest release, Hutchings advised users to upgrade to it as early as possible.
The Linux kernel 3.2.61 is an important release in the cycle, according to Hutchings. It introduces better support for the x86, ARM, PowerPC, s390 and MIPS architectures. At the same time, it also improves support for the EXT4, ReiserFS, Btrfs, NFS and UBIFS file systems. It also comes with updated drivers for wireless connectivity, InfiniBand, USB, ACPI, Bluetooth, SCSI, Radeon and Intel i915, among others.
Meanwhile, Linux founder Linus Torvalds has officially announced the fifth Release Candidate (RC) version of the upcoming Linux kernel 3.16. RC5 is the successor to Linux 3.16-rc4. It is now available for download and testing. However, since it is a development version, it should not be installed on production machines.

Motorola brings out Android 4.4.4 KitKat upgrade for Moto E, Moto G and Moto X
Motorola has unveiled the Android 4.4.4 KitKat update for its devices in India: the Moto E, Moto G and Moto X. This latest version of Android has an extra layer of security for browsing Web content on the phone.
With this phased rollout, users will receive notifications that enable them to update their OS; alternatively, the update can also be accessed by way of the settings menu. This release shores up Motorola's commitment to offering its customers a pure, bloatware-free and seamless Android experience.



In The News

SUSE Partners with Karunya University to Make Engineers Employable

In one of the very first initiatives of its kind, SUSE and Novell have partnered with Karunya University, Coimbatore, to ensure its students are industry-ready.

Out of the many interviews that we have conducted with recruiters, asking them what they look for in a candidate, one common requirement seems to be knowledge of open source technology. As per NASSCOM reports, between 20 and 33 per cent of the million students that graduate from India's engineering colleges every year run the risk of being unemployed.
The Attachmate Group, along with Karunya University, has taken a step forward to address this issue. Novell India, in association with Karunya University, has introduced Novell's professional courses as part of the university's curriculum. Students enrolled in the university's M.Tech course for Information Technology will be offered industry-accepted courses. Apart from this, another company of the Attachmate Group, SUSE, has also pitched in to make the students familiar with the world of open source technology.
Speaking about the initiatives, Dr J Dinesh Peter, associate professor and HoD I/C, Department of Information Technology, said, "We have already started with our first batch of students, who are learning SUSE. I think adding open source technology to the curriculum is a great idea because nowadays, most of the tech companies expect knowledge of open source technology for the jobs that they offer. Open source technology is the future, and I think all universities must have it incorporated in their curriculum in some form or the other."
The university has also gone ahead to provide professional courses from Novell to the students. Dr Peter said, "In India, where the problem of employability of technical graduates is acute, this initiative could provide the much needed shot in the arm. We are pleased to be associated with Novell, which has offered its industry-relevant courses to our students. With growing competition and demand for skilled employees in the technology industry, it is imperative that the industry and academia work in sync to address the lacuna that currently exists in our system."
Growth in the amount of open source software that enterprises use has been much faster than growth in proprietary software usage over the past 2-3 years. One major reason for this is that open source technology has helped companies slash huge IT budgets, while maintaining higher performance standards than they did with proprietary technologies. This trend makes it even more critical to incorporate open source technologies in the college curriculum.
Speaking about the initiative, Venkatesh Swaminathan, country head, The Attachmate Group (Novell, NetIQ, SUSE and Attachmate), said, "This is one of the first implementations of its kind but we do have engagements with universities in various other formats. Regarding this partnership with Karunya, we came out with a kind of joint strategy to make engineering graduates ready for the jobs enterprises offer today. We thought about the current curriculum and how we could modify it to make it more effective. Our current education system places more emphasis on theory rather than the practical aspects of engineering. With our initiative, we aim to bring more practical aspects into the curriculum. So we have looked at what enterprises want from engineers when they deploy some solutions. Today, though many enterprises want to use open source technologies effectively, the unavailability of adequate talent to handle those technologies is a major issue. So, the idea was to bridge the gap between what enterprises want and what they are getting, with respect to the talent they require to implement and manage new technologies."
Going forward, the company aims to partner with at least another 15-20 universities this year to integrate its courseware into curricula, to benefit the maximum number of students in India. "The onus of ensuring that the technical and engineering students who graduate every year in our country are world-class and employable lies on both the academia as well as the industry. With this collaboration, we hope to take a small but important step towards achieving this objective," Swaminathan added.

About The Attachmate Group
Headquartered in Houston, Texas, The Attachmate Group is a privately-held software holding company comprising distinct IT brands. Principal holdings include Attachmate, NetIQ, Novell and SUSE.

By: Diksha P Gupta
The author is senior assistant editor at EFY.



Buyers’ Guide

SSDs Move Ahead to Overtake Hard Disk Drives
High speed, durable and sleek SSDs are moving in to replace ‘traditional’ HDDs.

A solid state drive (SSD) is a data storage device that uses integrated circuit assemblies as its memory to store data. Now that everyone is switching over to thin tablets and high performance notebooks, carrying heavy, bulky hard disks may be difficult. SSDs, therefore, play a vital role in today's world as they combine high speed, durability and smaller sizes with vast storage and power efficiency.
SSDs consume minimal power because they do not have any movable parts inside, which leads to less consumption of internal power.

HDDs vs SSDs
The new technologies embedded in SSDs make them costlier than HDDs. "SSDs, with their new technology, will gradually overtake hard disk drives (HDDs), which have been around ever since PCs came into prominence. It takes time for a new technology to completely take over the traditional one. Also, new technologies are usually expensive. However, users are ready to pay a little more for a new technology because it offers better performance," explains Rajesh Gupta, country head and director, Sandisk Corporation India.
SSDs use integrated circuit assemblies as memory for storing data. The technology uses an electronic interface which is compatible with traditional block input/output HDDs, so SSDs can easily replace HDDs in commonly used applications.
An SSD uses a flash-based medium for storage. It is believed to have a longer life than an HDD and also consumes less power. "SSDs are the next stage in the evolution of PC storage. They run faster, and are quieter and cooler than the aging technology inside hard drives. With no moving parts, SSDs are also more durable and reliable than hard drives. They not only boost performance but can also be used to breathe new life into older systems," says Vishal Parekh, marketing director, Kingston Technology India.

How to select the right SSD
If you're a videographer, or have a studio dedicated to audio/video post-production work, or are in the banking sector, you can look at ADATA's latest launch, which is featured later in this article. Kingston, too, has introduced SSDs for all possible purposes. SSDs are great options even for gamers, or those who want to ensure their data is saved on a secure medium. Kingston offers an entire range of SSDs, including entry level variants as well as options for general use.
There are a lot of factors to keep in mind when you are planning to buy an SSD: durability, portability, power consumption and speed. Gupta adds, "The performance of SSDs is typically indicated by their IOPS (input/output operations per second), so one should look at the specifications of the product. Also, check the storage capacity. If you're looking for an SSD when you already have a PC or laptop, then double check the compatibility between your system and the SSD you've shortlisted. If you're buying a new system, then you can always check with the vendors as to what SSD options are available. Research the I/O speeds and get updates about how reliable the product is."
"For PC users, some of the important performance parameters of SSDs are related to battery life, heating of the device and portability. An SSD is 100 per cent solid state technology and has no motor inside, so the advantage is that it consumes less energy; hence, it extends the battery life of the device and is quite portable," explains Gupta.
Listed below are a few broad specifications of SSDs, which can help buyers decide which variant to go in for.

Portability
Portability is one of the major concerns when buying an external hard drive because, as discussed earlier, everyone is gradually shifting to tablets, iPads and notebooks, and so would not want to carry around an external hard disk that is heavier than the computing device. The overall portability of an SSD is evaluated on the basis of its size, shape, how much it weighs and its ruggedness.

High speed
Speed is another factor people look for while buying an SSD. If it is not fast, it is not worth the buy. SSDs offer data transfer read speeds that range from approximately 530 MBps to 550 MBps, whereas a HDD offers only around 30 to 50 MBps. SSDs can also boot any operating system almost four times faster than a traditional 7200 RPM 500 GB hard disk drive. With SSDs, applications respond up to 12 times faster compared to an HDD. A system equipped with an SSD also launches applications faster and offers high performance overall.
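To put those figures in perspective, here is a rough, back-of-the-envelope calculation of our own (it assumes the quoted sequential speeds are sustained and ignores file system overheads): copying a 5 GB (5,120 MB) video file at 540 MBps takes about 5,120/540, or roughly 9.5 seconds, on an SSD, while the same copy at 50 MBps takes about 5,120/50, or roughly 102 seconds, on an HDD. Similarly, random performance quoted in IOPS can be converted to throughput by multiplying by the transfer size: an illustrative 90,000 IOPS at 4 KB per operation works out to about 350 MBps of random I/O.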




Durability
As an SSD does not have any moving parts like a motor, and uses a flash-based medium for storing data, it is more likely to keep the data secure and safe. Some SSDs are coated with metal, which extends their life. There are almost no chances of their getting damaged. Even if you drop your laptop or PC, the data stays safe and does not get affected.

Power consumption
In comparison to a HDD, a solid state drive consumes minimal power. "Usually, a PC user faces the challenge of a limited battery life. But since an SSD is 100 per cent solid state technology and has no motor inside, it consumes less energy; hence, it extends the life of the battery and the PC," adds Rajesh Gupta.
There are plenty of other reasons for choosing an SSD over a HDD. These include the warranty, cost, efficiency, etc. "Choosing an SSD can save you the cost of buying a new PC by reviving the system you already own," adds Parekh.

A few options to choose from
Many companies, including Kingston, ADATA and Sandisk, have launched their SSDs, and it is quite a task trying to choose the best among them. Kingston has always stood out in terms of delivering good products, not just to the Indian market but worldwide. Ashu Mehrotra, marketing manager, ADATA, speaks about his firm's SSDs: "ADATA has been putting a lot of resources into R&D for SSDs, because of which its products provide unique advantages to customers." Gupta says, "Sandisk is a completely vertically integrated solutions provider and is also a key manufacturer of flash-based storage systems, which are required for SSDs. Because of this, we are very conscious about the categories to be used in the SSD. We also make our own controllers and do our own integration."

HyperX Fury from Kingston Technology
• A 6.35 cm (2.5 inch), 7 mm solid state drive (SSD)
• Delivers impressive performance at an affordable price
• Speeds up system boot-up, application loading time and file execution
• Controller: SandForce SF-2281
• Interface: SATA Rev 3.0 (6 Gbps)
• Read/write speed: 500 MBps, to boost overall system responsiveness and performance
• Reliability: cool, rugged and durable drive to push your system to the limits
• Warranty: Three years

Extreme PRO SSD from Sandisk
• Consistently fast data transfer speeds
• Lower latency times
• Reduced power consumption
• Comes in the following capacities: 64 GB, 128 GB and 256 GB
• Speed: 520 MBps
• Compatibility: SATA Revision 3.0 (6 Gbps)
• Warranty: Three years



Premier Pro SP920 from ADATA
• Designed to meet the high-performance requirements of multimedia file transfers
• Provides up to 7 per cent more usable space than comparable SSDs, due to the right combination of controller and high quality flash
• Weighs 70 grams; dimensions: 100 x 69.85 x 7 mm
• Controller: Marvell
• Comes in the following capacities: 128 GB, 256 GB, 512 GB and 1 TB
• NAND flash: synchronous MLC
• Interface: SATA 6 Gbps
• Read/write speed: up to 560 MBps/180 MBps
• Power consumption: 0.067 W idle/0.15 W active

SSD 840 EVO from Samsung
• Capacity: 500 GB (1 GB = 1 billion bytes)
• Dimensions: 100 x 69.85 x 6.80 mm
• Weight: max 53 g
• Interface: SATA 6 Gbps (compatible with SATA 3 Gbps and SATA 1.5 Gbps)
• Controller: Samsung 3-core MEX controller
• Warranty: Three years

1200 SSD from Seagate
• Designed for applications demanding fast, consistent performance; has a dual-port 12 Gbps SAS interface
• Comes in 800 GB capacity
• Random read/write performance of up to 110K/40K IOPS
• Sequential read/write performance from 500 MBps to 750 MBps

By: Manvi Saxena


The author is a part of the editorial team at EFY.
With inputs from ADATA, Kingston and Sandisk.



Developers How To

An Introduction to the Linux Kernel

This article provides an introduction to the Linux kernel, and demonstrates how
to write and compile a module.

Have you ever wondered how a computer manages the most complex tasks with such efficiency and accuracy? The answer is, with the help of the operating system. It is the operating system that uses hardware resources to perform various tasks and ultimately makes life easier. At a high level, the OS can be divided into two parts—the first being the kernel and the other being the utility programs. Various user space processes ask for system resources such as the CPU, storage, memory, network connectivity, etc, and the kernel services these requests. This column will explore loadable kernel modules in GNU/Linux.
The Linux kernel is monolithic, which means that the entire OS runs solely in supervisor mode. Though the kernel is a single process, it consists of various subsystems, and each subsystem is responsible for performing certain tasks. Broadly, any kernel performs the following main tasks.
Process management: This subsystem handles the process 'life-cycle'. It creates and destroys processes, allowing communication and data sharing between processes through inter-process communication (IPC). Additionally, with the help of the process scheduler, it schedules processes efficiently and enables resource sharing.
Memory management: This subsystem handles all memory related requests. Available memory is divided into chunks of a fixed size called 'pages', which are allocated to or de-allocated from the process, on demand. With the help of the memory management unit (MMU), it maps the process' virtual address space to a physical address space and creates the illusion of a contiguous large address space.
File system: The GNU/Linux system is heavily dependent on the file system. In GNU/Linux, almost everything is a file. This subsystem handles all storage related requirements, like the creation and deletion of files, compression and journaling of data, the organisation of data in a hierarchical manner, and so on. The Linux kernel supports all major file systems, including MS Windows' NTFS.




Device control: Any computer system requires various devices. But to make the devices usable, there should be a device driver, and this layer provides that functionality. There are various types of drivers present, like graphics drivers, a Bluetooth driver, audio/video drivers and so on.
Networking: Networking is one of the important aspects of any OS. It allows communication and data transfer between hosts. It collects, identifies and transmits network packets. Additionally, it also enables routing functionality.

Dynamically loadable kernel modules
We often install kernel updates and security patches to make sure our system is up-to-date. In the case of MS Windows, a reboot is often required, but this is not always acceptable; for instance, the machine cannot be rebooted if it is a production server. Wouldn't it be great if we could add or remove functionality to/from the kernel on-the-fly, without a system reboot? The Linux kernel allows dynamic loading and unloading of kernel modules. Any piece of code that can be added to the kernel at runtime is called a 'kernel module'. Modules can be loaded or unloaded while the system is up and running, without any interruption. A kernel module is object code that can be dynamically linked to the running kernel using the 'insmod' command, and unlinked using the 'rmmod' command.

A few useful utilities
GNU/Linux provides various user-space utilities that provide useful information about kernel modules. Let us explore them.
lsmod: This command lists the currently loaded kernel modules. It is a very simple program which reads the /proc/modules file and displays its contents in a formatted manner.
insmod: This is also a trivial program, which inserts a module into the kernel. This command doesn't handle module dependencies.
rmmod: As the name suggests, this command is used to unload modules from the kernel. Unloading is done only if the current module is not in use. rmmod also supports the -f or --force option, which can unload modules forcibly. But this option is extremely dangerous. There is a safer way to remove modules: with the -w or --wait option, rmmod will isolate the module and wait until it is no longer used.
modinfo: This command displays information about the module passed as a command-line argument. If the argument is not a filename, then it searches the /lib/modules/<version> directory for the module. modinfo shows each attribute of the module in the field:value format.

Note: <version> is the kernel version. We can obtain it by executing the uname -r command.

dmesg: Any user-space program displays its output on the standard output stream, i.e., /dev/stdout, but the kernel uses a different methodology. The kernel appends its output to the ring buffer, and by using the 'dmesg' command, we can manage the contents of the ring buffer.
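To get a feel for these utilities, here is a short illustrative session. The module name used here (ext4) is only an example, and the sizes and messages shown will differ from system to system:

[root]# lsmod | head -3
Module                  Size  Used by
ext4                  374380  1
jbd2                   93428  1 ext4

[root]# modinfo -F description ext4
Fourth Extended Filesystem

[root]# dmesg | tail -1
EXT4-fs (sda1): mounted filesystem with ordered data mode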
Preparing the system
Now it's time for action. Let's create a development environment. In this section, let's install all the required packages on an RPM-based GNU/Linux distro like CentOS, and on a Debian-based GNU/Linux distro like Ubuntu.

Installing on CentOS
First install the gcc compiler by executing the following command as the root user:

[root]# yum -y install gcc

Then install the kernel development packages:

[root]# yum -y install kernel-devel

Finally, install the 'make' utility:

[root]# yum -y install make

Installing on Ubuntu
First install the gcc compiler:

[mickey] sudo apt-get install gcc

After that, install the kernel development packages:

[mickey] sudo apt-get install kernel-package

And, finally, install the 'make' utility:

[mickey] sudo apt-get install make

Our first kernel module
Our system is ready now. Let us write our first kernel module. Open your favourite text editor and save the file as hello.c with the following contents:

#include <linux/kernel.h>
#include <linux/module.h>

int init_module(void)
{
    printk(KERN_INFO "Hello, World !!!\n");

    return 0;
}

void cleanup_module(void)
{
    printk(KERN_INFO "Exiting ...\n");
}




MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Hello world module.");
MODULE_VERSION("1.0");

Any module must have at least two functions. The first is the initialisation function and the second is the clean-up function. In our case, init_module() is the initialisation function and cleanup_module() is the clean-up function. The initialisation function is called as soon as the module is loaded, and the clean-up function is called just before the module is unloaded. MODULE_LICENSE and the other macros are self-explanatory.
There is a printk() function, the syntax of which is similar to the user-space printf() function. But unlike printf(), it doesn't print messages on the standard output stream; instead, it appends messages to the kernel's ring buffer. Each printk() statement comes with a priority. In our example, we used the KERN_INFO priority. Please note that there is no comma (,) between 'KERN_INFO' and the format string. In the absence of an explicit priority, the DEFAULT_MESSAGE_LOGLEVEL priority will be used. The last statement in init_module() is return 0, which indicates success.
The names of the initialisation and clean-up To build modules, kernel headers are required. The
functions are init_module() and cleanup_module() above makefile invokes the kernel’s build system from the
respectively. But with the new kernel (>= 2.3.13) kernel’s source and finally the kernel’s makefile invokes
we can use any name for the initialisation and clean- our Makefile to compile the module. Now that we have
up functions. These old names are still supported everything to build our module, just execute the make
for backward compatibility. The kernel provides command, and this will compile and create the kernel
module_init and module_exit macros, which register module named hello.ko:
initialisation and clean-up functions. Let us rewrite
the same module with names of our own choice for [mickey] $ ls
initialisation and cleanup functions: hello.c Makefile

#include <linux/kernel.h> [mickey]$ make


#include <linux/module.h> make -C /lib/modules/2.6.32-358.el6.x86_64/build M=/home/
static int __init hello_init(void) mickey modules
{ make[1]: Entering directory `/usr/src/kernels/2.6.32-358.
printk(KERN_INFO “Hello, World !!!\n”); el6.x86_64’
CC [M] /home/mickey/hello.o
return 0; Building modules, stage 2.
} MODPOST 1 modules
CC /home/mickey/hello.mod.o
static void __exit hello_exit(void) LD [M] /home/mickey/hello.ko.unsigned
{ NO SIGN [M] /home/mickey/hello.ko
printk(KERN_INFO “Exiting ...\n”); make[1]: Leaving directory `/usr/src/kernels/2.6.32-358.el6.
} x86_64’

module_init(hello_init); [mickey]$ ls
module_exit(hello_exit); hello.c hello.ko hello.ko.unsigned hello.mod.c hello.
MODULE_LICENSE(“GPL”); mod.o hello.o Makefile modules.order Module.symvers
MODULE_AUTHOR(“Narendra Kangralkar.”);
MODULE_DESCRIPTION(“Hello world module.”); We have now successfully compiled our first kernel


module. Now, let us look at how to load and unload this module in the kernel. Please note that you must have super-user privileges to load/unload kernel modules. To load a module, switch to the super-user mode and execute the insmod command, as shown below:

[root]# insmod hello.ko

insmod has done its job successfully. But where is the output? It is appended to the kernel's ring buffer. So let's verify it by executing the dmesg command:

[root]# dmesg
Hello, World !!!

We can also check whether our module is loaded or not. For this purpose, let's use the lsmod command:

[root]# lsmod | grep hello
hello 859 0

To unload the module from the kernel, just execute the rmmod command as shown below and check the output of the dmesg command. Now, dmesg shows the message from the clean-up function:

[root]# rmmod hello

[root]# dmesg
Hello, World !!!
Exiting ...

In this module, we have used a couple of macros, which provide information about the module. The modinfo command displays this information in a nicely formatted fashion:

[mickey]$ modinfo hello.ko
filename:       hello.ko
version:        1.0
description:    Hello world module.
author:         Narendra Kangralkar.
license:        GPL
srcversion:     144DCA60AA8E0CFCC9899E3
depends:
vermagic:       2.6.32-358.el6.x86_64 SMP mod_unload modversions

Finding the PID of a process
Let us write one more kernel module to find out the Process ID (PID) of the current process. The kernel stores all process related information in the task_struct structure, which is defined in the <linux/sched.h> header file. It provides a current variable, which is a pointer to the current process. To find out the PID of the current process, just print the value of the current->pid variable. Given below is the complete working code (pid.c):

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sched.h>

static int __init pid_init(void)
{
        printk(KERN_INFO "pid = %d\n", current->pid);

        return 0;
}

static void __exit pid_exit(void)
{
        /* Don't do anything */
}

module_init(pid_init);
module_exit(pid_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Kernel module to find PID.");
MODULE_VERSION("1.0");

The Makefile is almost the same as the first makefile, with a minor change in the object file's name:

obj-m += pid.o

all:
        make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
        make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

Now compile and insert the module and check the output using the dmesg command:

[mickey]$ make

[root]# insmod pid.ko

[root]# dmesg
pid = 6730

A module that spans multiple files
So far we have explored how to compile a module from a single file. But in a large project, there are several source files for a single module and, sometimes, it is convenient to


divide the module into multiple files. Let us understand the procedure of building a module that spans two files. Let's divide the initialisation and clean-up functions from the hello.c file into two separate files, namely startup.c and cleanup.c. Given below is the source code for startup.c:

#include <linux/kernel.h>
#include <linux/module.h>

static int __init hello_init(void)
{
        printk(KERN_INFO "Function: %s from %s file\n", __func__, __FILE__);

        return 0;
}

module_init(hello_init);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Startup module.");
MODULE_VERSION("1.0");

And cleanup.c will look like this:

#include <linux/kernel.h>
#include <linux/module.h>

static void __exit hello_exit(void)
{
        printk(KERN_INFO "Function %s from %s file\n", __func__, __FILE__);
}

module_exit(hello_exit);

MODULE_LICENSE("BSD");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Cleanup module.");
MODULE_VERSION("1.1");

Now, here is the interesting part -- the Makefile for these modules:

obj-m += final.o
final-objs := startup.o cleanup.o

all:
        make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
        make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

The Makefile is self-explanatory. Here, we are saying: "Build the final kernel object by using startup.o and cleanup.o." Let us compile and test the module:

[mickey]$ ls
cleanup.c Makefile startup.c

[mickey]$ make

Then, let's display the module information using the modinfo command:

[mickey]$ modinfo final.ko
filename:       final.ko
version:        1.0
description:    Startup module.
author:         Narendra Kangralkar.
license:        GPL
version:        1.1
description:    Cleanup module.
author:         Narendra Kangralkar.
license:        BSD
srcversion:     D808DB9E16AC40D04780E2F
depends:
vermagic:       2.6.32-358.el6.x86_64 SMP mod_unload modversions

Here, the modinfo command shows the version, description, licence and author-related information from each module.
Let us load and unload the final.ko module and verify the output:

[mickey]$ su -
Password:

[root]# insmod final.ko

[root]# dmesg
Function: hello_init from /home/mickey/startup.c file

[root]# rmmod final

[root]# dmesg
Function: hello_init from /home/mickey/startup.c file
Function hello_exit from /home/mickey/cleanup.c file

Passing command-line arguments to the module
In user-space programs, we can easily manage command-line arguments with argc/argv. But to achieve the same functionality through modules, we have to put in more of an effort.

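For contrast, recall how little is needed in user space. The short C program below is an illustrative sketch of my own (it is not part of the article's module code); it simply prints whatever arguments it receives:

#include <stdio.h>

int main(int argc, char *argv[])
{
        int i;

        /* argv[0] is the program name; the real arguments follow it */
        for (i = 1; i < argc; i++)
                printf("arg %d: %s\n", i, argv[i]);

        return 0;
}

Kernel modules have no main() and no argc/argv, which is why the module_param() machinery described next is needed.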

To achieve command-line handling in modules, we first need to declare global variables and use the module_param() macro, which is defined in the <linux/moduleparam.h> header file. There is also the MODULE_PARM_DESC() macro, which provides descriptions about the arguments. Without going into lengthy theoretical discussions, let us write the code:

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/moduleparam.h>

static char *name = "Narendra Kangralkar";
static long roll_no = 1234;
static int total_subjects = 5;
static int marks[5] = {80, 75, 83, 95, 87};

module_param(name, charp, 0);
MODULE_PARM_DESC(name, "Name of a student");

module_param(roll_no, long, 0);
MODULE_PARM_DESC(roll_no, "Roll number of a student");

module_param(total_subjects, int, 0);
MODULE_PARM_DESC(total_subjects, "Total number of subjects");

module_param_array(marks, int, &total_subjects, 0);
MODULE_PARM_DESC(marks, "Subjectwise marks of a student");

static int __init param_init(void)
{
        static int i;

        printk(KERN_INFO "Name : %s\n", name);
        printk(KERN_INFO "Roll no : %ld\n", roll_no);
        printk(KERN_INFO "Subjectwise marks ");

        for (i = 0; i < total_subjects; ++i) {
                printk(KERN_INFO "Subject-%d = %d\n", i + 1, marks[i]);
        }

        return 0;
}

static void __exit param_exit(void)
{
        /* Don't do anything */
}

module_init(param_init);
module_exit(param_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Module with command line arguments.");
MODULE_VERSION("1.0");

After compilation, first insert the module without any arguments, which displays the default values of the variables. But after providing command-line arguments, the default values will be overridden. The output below illustrates this:

[root]# insmod parameters.ko

[root]# dmesg
Name : Narendra Kangralkar
Roll no : 1234
Subjectwise marks
Subject-1 = 80
Subject-2 = 75
Subject-3 = 83
Subject-4 = 95
Subject-5 = 87

[root]# rmmod parameters

Now, let us reload the module with command-line arguments and verify the output:

[root]# insmod ./parameters.ko name="Mickey" roll_no=1001 marks=10,20,30,40,50

[root]# dmesg
Name : Mickey
Roll no : 1001
Subjectwise marks
Subject-1 = 10
Subject-2 = 20
Subject-3 = 30
Subject-4 = 40
Subject-5 = 50

If you want to learn more about modules, the Linux kernel's source code is the best place to do so. You can download the latest source code from https://www.kernel.org/. Additionally, there are a few good books available in the market, like 'Linux Kernel Development' (3rd Edition) by Robert Love and 'Linux Device Drivers' (3rd Edition). You can also download the free book from http://lwn.net/Kernel/LDD3/.

By: Narendra Kangralkar
The author is a FOSS enthusiast and loves exploring anything related to open source. He can be reached at [email protected]

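As a closing tip, the MODULE_PARM_DESC() descriptions declared above can be listed with modinfo's -p option, which prints only the parameter information. The transcript below is illustrative and abridged (the exact formatting depends on the modinfo version), assuming the parameters.ko module built in this article:

[mickey]$ modinfo -p parameters.ko
name:Name of a student (charp)
roll_no:Roll number of a student (long)
total_subjects:Total number of subjects (int)
marks:Subjectwise marks of a student (array of int)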

Write Better jQuery Code for Your Project

jQuery, the cross-platform JavaScript library designed to simplify the client-side scripting
of HTML, is used by over 80 per cent of the 10,000 most popularly visited websites. jQuery
is free open source software which has a wide range of uses. In this article, the author
suggests some best practices for writing jQuery code.

This article aims to explain how to use jQuery in a rapid and more sophisticated manner. Websites focus not only on backend functions like user registration, adding new friends or validation, but also on how their Web pages will get displayed to the user, how their pages will behave in different situations, etc. For example, doing a mouse-over on the front page of a site will either show beautiful animations, properly formatted error messages or interactive hints to the user on what can be done on the site.
jQuery is a very handy, interactive, powerful and rich client-side framework built on JavaScript. It is able to handle powerful operations like HTML manipulation, events handling and beautiful animations. Its most attractive feature is that it works across browsers. When using plain JavaScript, one of the things we need to ensure is whether the code we write tends towards perfection. It should handle any exception. If the user enters an invalid type of value, the script should not just hang or behave badly. However, in my career, I have seen many junior developers using plain JavaScript solutions instead of rich frameworks like jQuery, and writing numerous lines of code to do some fairly minor task.
For example, if one wants to write code to show the datepicker selection on an onclick event in plain JavaScript, the flow is:
1. For the onclick event, create one div element.
2. Inside that div, add content for dates, month and year.
3. Add navigation for changing the months and year.
4. Make sure that, on the first click, the div can be seen, and on the second click, the div is hidden; and this should not affect any other HTML elements.
Just creating a datepicker is a slightly more difficult task, and if this needs to be implemented many times in the same page, it becomes more complex. If the code is not properly implemented, then making modifications can be a nightmare.
This is where jQuery comes to our rescue. By using it, we can show the datepicker as follows:

$("#id").datepicker();

That's it! We can reuse the same code multiple times by


just changing the id(s); and without any kind of collision, we can show multiple datepickers in the same page. That is the beauty of jQuery. In short, by using it, we can focus more on the functionality of the system and not just on small parts of the system. And we can write more complex code like a rich text editor and lots of other operations. But if we write jQuery code without proper guidance and proper methodology, we end up writing bad code; and sometimes that can become a nightmare for other team members to understand and modify for minor changes.
Developers often make silly mistakes during jQuery code implementation. So, based on some silly mistakes that I have encountered, here are some general guidelines that every developer should keep in mind while implementing jQuery code.

General guidelines for jQuery
1. Try to use 'this' instead of just using the id and class of the DOM elements. I have seen that most developers are happy with just using $('#id') or $('.class') everywhere:

//What developers are doing:
$('#id').click(function(){
        var oldValue = $('#id').val();
        var newValue = (oldValue * 10) / 2;
        $('#id').val(newValue);
});

//What should be done: Try to use more $(this) in your code.
$('#id').click(function(){
        $(this).val(($(this).val() * 10) / 2);
});

2. Avoid conflicts: When working with a CMS like WordPress or Magento, which might be using other JavaScript frameworks alongside jQuery, you need to work with jQuery inside that CMS or project. In that case, use jQuery's noConflict():

var $abc = jQuery.noConflict();
$abc('#id').click(function(){
        //do something
});

3. Take care of absent elements: Make sure that the element which your jQuery code is working on/manipulating is not absent. If the element your code manipulates is added dynamically, then first check whether that element has been added to the DOM:

$('#divId').find('#someId').length

This code returns 0 if there isn't an element with 'someId' found; else it will return the total number of such elements that are inside 'divId'.
4. Use proper selectors and try to use more 'find()', because find() can traverse the DOM faster. For example, if we want to find the content of the div with class 'divClass' inside '#id1':

//demo code snippet
<div id='id1'>
        <span id='id2'></span>
        <div class='divClass'>Here is the content.</div>
</div>

//developer generally uses
var content = $('#id1 .divClass').html();

//the better way is [this is faster in execution]
var content = $('#id1').find('div.divClass').html();

5. Write functions wherever required: Generally, developers write the same code multiple times. To avoid this, we can write functions. To write functions, let's find the block that will repeat. For example, if there is a validation of an entry for a text box and the same gets repeated for many similar text boxes, then we can write a function for the same. Given below is a simple example of a text box entry. If the value is left empty in the entry, then the function returns 0; else, if the user has entered some value, then it should return the same value.

//Javascript
function doValidation(elementId){
        //get value using elementId
        //check and return value
}

//simple jQuery
$("input[type='text']").blur(function(){
        //get value using $(this)
        //check and return value
});

//best way to implement
//now you can use this function easily with the click event also
$.doValidation = function(){
        //get value
        //check and return value
};

$("input[type='text']").blur($.doValidation);

6. Object organisation: This is another thing that each developer needs to keep in mind. If one bunch of variables is related to one task and another bunch of variables is related to another task, then get them better organised, as shown below:


//bad way
var disableTask1 = false;
var defaultTask1 = 5;
var pointerTask1 = 2;
var disableTask2 = true;
var defaultTask2 = 10;
var currentValueTask2 = 10;
//like that, many other variables

//better way
var task1 = {
        disable: false,
        default: 5,
        pointer: 2,
        getNewValue: function(){
                //do something
                return task1.default + 5;
        }
};

var task2 = {
        disable: true,
        default: 10,
        currentValue: 10
};

//how to use them
if(task1.disable){
        //do something
        return task1.default;
}

7. Use of callbacks: When multiple functions are used in your code, and the second function is dependent on the effects of the first one's output, then callbacks are required to be written.
For example, task2 needs to be executed after completion of task1; or, in other words, you need to halt execution of task2 until task1 is executed. I have noticed that many developers are not aware of callback functions. So, they either initialise one variable for checking [like a mutex in the operating system] or set a timeout for execution. Below, I have explained how easily this can be implemented using a callback.

//Javascript way
task1(function(){
        task2();
});

function task1(callback){
        //do something
        if (callback && typeof (callback) === "function") {
                callback();
        }
}

function task2(callback){
        //do something
        if (callback && typeof (callback) === "function") {
                callback();
        }
}

//Better jQuery way
$.task1 = function(){
        //do something
};

$.task2 = function(){
        //do something
};

var callbacks = $.Callbacks();
callbacks.add($.task1);
callbacks.add($.task2);
callbacks.fire();

8. Use of 'each' for iteration: The snippet below shows how each can be used for iteration.

var array;
//javascript way
var length = array.length;
for(var i = 0; i < length; i++){
        var key = array[i].key;
        //likewise, fetching other values.
}

//jQuery way
$.each(array, function(key, value){
        alert(key);
});

9. Don't repeat code: Never write any code again and again. If you find yourself doing so, halt your coding and read the eight points listed above, all over again.
Next time, I'll explain how to write more effective plugins, using some examples.

By: Savan Koradia
The author works as a senior PHP Web developer at Multidots Solutions Pvt Ltd. He writes tutorials to help other developers to write better code. You can contact him at: [email protected]; Skype: savan.koradia.multidots

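A small footnote to point 7: $.Callbacks() can also pass data to every registered callback through fire(). The snippet below is an illustrative sketch of my own (the task functions here are hypothetical):

$.task1 = function(msg){
        console.log("task1 received: " + msg);
};

$.task2 = function(msg){
        console.log("task2 received: " + msg);
};

var callbacks = $.Callbacks();
callbacks.add($.task1);
callbacks.add($.task2);
//fire() invokes every added callback with the same arguments
callbacks.fire("done");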

Back Up a Sharded Server in MongoDB


Continuing the series on MongoDB, in this article, readers learn how to set up a backup for the
sharded environment that was set up over the previous two articles.

In the previous article in this series, we set up a sharded environment in MongoDB. This article deals with one of the most intriguing and crucial topics in database administration—backups. The article will demonstrate the MongoDB backup process and will make a backup of the sharded server that was configured earlier. So, to proceed, you must set up your sharded environment as per our previous article, as we'll be using the same configuration.
Before we move on with the backup, make sure that the balancer is not running. The balancer is the process that ensures that data is distributed evenly in a sharded cluster. This is an automated process in MongoDB and at most times, you won't be bothered with it. In this case, though, it needs to be stopped so that no chunk migration takes place while we back up the server. If you're wondering what the term 'chunk migration' means, let me tell you that if one shard in a sharded MongoDB environment has more data stored than its peers, then the balancer process migrates some data to other shards. Evenly distributed data ensures optimal performance in a sharded environment.
So now connect to a Mongo process by opening a command prompt, going to the MongoDB root directory and typing 'Mongo'. Type sh.getBalancerState() to find out the balancer's status. If you get true as the output, your balancer is running. Type sh.stopBalancer() to stop the balancer.

Figure 1: Balancer status

The next step is to back up the config server, which stores metadata about shards. In the previous article, we set up three config servers for our shard. Since all the config servers store the same metadata, and since we have three of them just to ensure availability, we'll be backing up just one config server for demonstration purposes. So open a command prompt and type the following command to back up the config database of our config server:

C:\Users\viny\Desktop\mongodb-win32-i386-2.6.0\bin>mongodump --host localhost:59020 --db config

This command will dump your config database under the dump directory of your MongoDB root directory.
Now let's back up our actual data by taking backups of all of our shards. Issue the following commands, one by one, and take a backup of all the three replica sets of both the shards that we configured earlier:

mongodump --host localhost:38020 --out .\shard1\replica1
mongodump --host localhost:38021 --out .\shard1\replica2
mongodump --host localhost:38022 --out .\shard1\replica3
mongodump --host localhost:48020 --out .\shard2\replica1
mongodump --host localhost:48021 --out .\shard2\replica2
mongodump --host localhost:48022 --out .\shard2\replica3

The --out parameter defines the directory where MongoDB will place the dumps. Now you can start the balancer by issuing the sh.startBalancer() command and resume normal operations. So we're done with our backup operation.
If you want to explore a bit more about backups and restores in MongoDB, you can check the MongoDB documentation and the article at http://www.thegeekstuff.com/2013/09/mongodump-mongorestore/ which will give you some good insights into the mongodump and mongorestore commands.

By: Vinayak Pandey
The author is an experienced database developer, with exposure to various database and data warehousing tools and techniques, including Oracle, Teradata, Informatica PowerCenter and MongoDB.

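A closing note on the restore side: dumps like the ones taken above can be loaded back with the mongorestore command. The commands below are only a sketch, assuming the same hosts and dump directories used in this article; consult the MongoDB documentation before restoring into a live sharded cluster:

mongorestore --host localhost:59020 --db config dump\config
mongorestore --host localhost:38020 .\shard1\replica1

The first command reloads the config database from the default dump directory, while the second feeds a shard member the backup that was written to .\shard1\replica1.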


CodeSport
Sandya Mannarswamy

This month's column continues the discussion of natural language processing.

For the past few months, we have been discussing information retrieval and natural language processing (NLP), as well as the algorithms associated with them. In this month's column, let's continue our discussion on NLP while also covering an important NLP application called 'Named Entity Recognition' (NER). As mentioned earlier, given a large number of text documents, NLP techniques are employed to extract information from the documents. One of the most common sources of textual information is newspaper articles. Let us consider a simple example wherein we are given all the newspaper articles that appeared in the last one year. The task that is assigned to us is related to the world of business. We are asked to find out all the mergers and acquisitions of businesses. We need to extract information on which companies bought over other firms, as well as the companies that merged with each other. Our first rudimentary step towards getting this information will perhaps be to look for keyword-based searches that use terms such as 'merger' or 'buys'. Once we find the sentences containing those keywords, we could then perhaps look for the names of the companies, if any occur in those sentences. Such a task requires us to identify all company names present in the document.
For a person reading the newspaper article, such a task seems simple and straightforward. Let us first try to list down the ways in which a human being would try to identify the company names that could be present in a text document. We need to use heuristics such as: (a) company names typically would begin with capital letters; (b) they can contain words such as 'Corporation' or 'Ltd'; (c) they can be represented by letters of the alphabet separated by full stops, such as I.B.M. We could also use contextual clues such as 'X's stock price went up' to infer that X is a business or company. Now, the question we are left with is whether it is possible to convert what constitutes our intuitive knowledge about how to look for a company's name in a text document into rules that can be automatically checked by a program. This is the task that is faced by NLP applications which try to do Named Entity Recognition (NER). The point to note is that while the simple heuristics we use to identify the names of companies do work well in many cases, it is also quite possible that they miss out extracting the names of companies in certain other cases. For instance, consider the possibility of the company's name being represented as IBM instead of I.B.M, or as International Business Machines. The rule-based system could potentially miss out on recognising it. Similarly, consider a sentence like, "Indian Oil and Natural Gas Company decided that…" In this case, it is difficult to figure out whether there are two independent entities, namely, 'Indian Oil' and 'Natural Gas Company', being referred to in the sentence, or if it is a single entity whose name is 'Indian Oil and Natural Gas Company'. It requires considerable knowledge about the business world to resolve the ambiguity. We could perhaps consult the World Wide Web or Wikipedia to clear our doubts. The use of such sources of knowledge is quite common in Named Entity Recognition (NER) systems. Now let us look a bit deeper into NER systems and their uses.

Types of entities
What are the types of entities that are of interest to a NER system? Named entities are, by definition, proper nouns, i.e., nouns that refer to a particular person, place, organisation, thing, date or time, such as Sandya, Star Wars, Pride and Prejudice, Cubbon Park, March, Friday, Wipro Ltd, Boy Scouts, and the Statue of Liberty. Note that a named entity can span more than one word, as in the case of 'Cubbon Park'. Each of these entities is assigned a different tag, such



as Person, Company, Location, Month, Day, Book, etc. If the above example is tagged with entities, it will be tagged as <Person> Sandya </Person>, <Movie> Star Wars </Movie>, <Book> Pride and Prejudice </Book>, <Location> Cubbon Park </Location>, etc.
It is not only important that the NER system recognises a phrase correctly as an entity, but also that it labels it with the right entity type. Consider the sentence, "Washington Jr went to school in England, but for graduate studies, he moved to the United States and studied at Washington." This sentence contains two references to the noun 'Washington', one as a person: 'Washington Jr', and another as a location: 'Washington, United States'. While it may appear that if an NER system has a list of all proper nouns, it can correctly extract all entities, in reality, this is not true. Consider the two sentences, "Jobs are hard to find…" and "Jobs said that the employment rate is picking up…" Even if the NER system has an exhaustive list of proper nouns, it needs to figure out that the word 'Jobs' appearing in the first sentence does not refer to an entity, whereas the reference 'Jobs' in the second sentence is an entity.
Given our discussion so far, it is clear to us that NER systems can be built in a number of ways, though no single method can be considered to be superior to others, and a combination of techniques is needed. We saw that rule-based NER systems tend to be incomplete and have the disadvantage of requiring manual extension quite frequently. Rule-based systems use typical pattern matching techniques to identify the entities. On the other hand, it is possible to extract features associated with named entities and use them to train classifiers that can tag entities, using machine learning techniques. Machine learning approaches for identifying entities can be based on: (a) supervised learning techniques; (b) semi-supervised learning techniques; and (c) unsupervised learning techniques.
The third kind of NER system can be based on gazetteers, wherein a lexicon or gazette for names is constructed and made available to the NER system, which then tags the text, identifying entities in the text based on the lexicon entries. Once a gazetteer is available, all that the NER needs to do is to have an efficient lookup in the gazetteer for each phrase it identifies in the text, and tag it based on the information it finds in the gazette. A gazette can also help to embed external world information, which can help in named entity resolution. But first, the gazette needs to be built for it to be available to the NER system. Building a gazette can consume considerable manual effort. One of the alternatives is to build the lexicon or gazetteer itself through automatic means, which brings us back to the problem of recognising named entities automatically from various document sources. Typically, external world sources such as Wikipedia or Twitter can be used as the information sources from which the gazette can be built. Sometimes a combination of approaches can be used with a lexicon, in conjunction with a rules-based or machine learning approach.
While rule-based NER systems and gazetteer approaches work well for a domain-specific NER, machine learning approaches generally perform well when applied across multiple domains. Many of the machine learning based approaches use supervised learning techniques, by which a large corpus of text is annotated manually with named entities, and the goal is to use the annotated data to train the learner. These systems use statistical models and some form of feature identification to make predictions about named entities in unlabelled text, based on what they have learnt from the annotated text. Typically, supervised learning systems study the features of positive and negative examples, which have been tagged as named entities in the hand-annotated training set. They use that information to come up with statistical models, which can predict whether a newly encountered phrase is a named entity or not. If it is a named entity, supervised learning systems predict its type as well. In the next column, we will continue our discussion on how hidden Markov models and maximum entropy models can be used to construct learner systems.

My 'must-read book' for this month
This month's book suggestion comes from one of our readers, Jayshankar, and his recommendation is very appropriate for this month's column. He recommends an excellent resource for text mining—a book called 'Taming Text' by Ingersol, Morton and Farris. The book describes different algorithms for text search, text clustering and classification. There is also a detailed chapter on Named Entity Recognition, which will be useful supplementary reading for this month's column. Thank you, Jay, for sharing this book link.
If you have a favourite programming book or article that you think is a must-read for every programmer, please do send me a note with the book's name, and a short write-up on why you think it is useful, so I can mention it in the column. This would help many readers who want to improve their software skills.
If you have any favourite programming questions or software topics that you would like to discuss on this forum, please send them to me, along with your solutions and feedback, at sandyasm_AT_yahoo_DOT_com. Till we meet again next month, happy programming!

By: Sandya Mannarswamy
The author is an expert in systems software and is currently working with Hewlett Packard India Ltd. Her interests include compilers, multi-core and storage systems. If you are preparing for systems software interviews, you may find it useful to visit Sandya's LinkedIn group 'Computer Science Interview Training India' at http://www.linkedin.com/groups?home=&gid=2339182

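To make the gazetteer idea concrete, here is a toy sketch in Python (entirely my own illustration, with made-up gazette entries, and not taken from any production NER system). It greedily matches the longest known phrase at each position of a tokenised sentence:

# A toy gazetteer-based tagger.
GAZETTE = {
    ("Cubbon", "Park"): "Location",
    ("Wipro", "Ltd"): "Company",
    ("Sandya",): "Person",
}

MAX_LEN = max(len(key) for key in GAZETTE)

def tag(tokens):
    tagged, i = [], 0
    while i < len(tokens):
        # try the longest phrase first, then shorter ones
        for n in range(min(MAX_LEN, len(tokens) - i), 0, -1):
            phrase = tuple(tokens[i:i + n])
            if phrase in GAZETTE:
                tagged.append((" ".join(phrase), GAZETTE[phrase]))
                i += n
                break
        else:
            tagged.append((tokens[i], None))  # not a known entity
            i += 1
    return tagged

print(tag("Sandya walked around Cubbon Park".split()))
# [('Sandya', 'Person'), ('walked', None), ('around', None), ('Cubbon Park', 'Location')]

A real system must, of course, also handle ambiguity (the 'Jobs' examples above), which is exactly where the statistical approaches discussed in this column come in.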


Exploring Software
Anil Seth

Big Data on a Desktop: A Virtual Machine in an OpenStack Cloud

OpenStack is a worldwide collaboration between developers and cloud computing technologists aimed at developing the cloud computing platform for public and private clouds. Let's install it on our desktop.

Installing OpenStack using Packstack is very simple. After a test installation in a virtual machine, you will find that the basic operations for creating and using virtual machines are now quite simple when using a Web interface.

The environment
It is important to understand the virtual environment. While everything is running on a desktop, the setup consists of multiple logical networks interconnected via virtual routers and switches. You need to make sure that the routes are defined properly because otherwise, you will not be able to access the virtual machines you create.
On the desktop, the virt-manager creates a NAT-based network by default. NAT assures that if your desktop can access the Internet, so can the virtual machine. The Internet access had been used when the OpenStack distribution was installed in the virtual machine.
The Packstack installation process creates a virtual public network for use by the various networks created within the cloud environment. The virtual machine on which OpenStack is installed is the gateway to the physical network.

Virtual network on the desktop (virbr0 interface): 192.168.122.0/24
IP address of the eth0 interface on the OpenStack VM: 192.168.122.54
Public virtual network created by Packstack on the OpenStack VM: 172.24.4.224/28
IP address of the br-ex interface on the OpenStack VM: 172.24.4.225

Testing the environment
In the OpenStack VM console, verify the network addresses. In my case, I had to explicitly give an IP to the br-ex interface, as follows:

# ifconfig
# ip addr add 172.24.4.225/28 dev br-ex

On the desktop, add a route to the public virtual network on the OpenStack VM:

# route add -net 172.24.4.224 netmask 255.255.255.240 gw 192.168.122.54

Now, browse http://192.168.122.54/dashboard and create a new project and a user associated with the project.
1. Sign in as the admin.
2. Under the Identity panel, create a user (youser) and a project (Bigdata). Sign out and sign in as youser to create and test a cloud VM.
3. Create a private network for the project under Project/Network/Networks:
• Create the private network 192.168.10.0/24 with the gateway 192.168.10.254.
• Create a router and set a gateway to the public network. Add an interface to the private network with the IP address 192.168.10.254.
4. To be able to sign in using ssh, under Project/Compute/Access & Security, in the Security Groups tab, add the following rules to the default security group:
• Allow ssh access: a Custom TCP Rule allowing traffic on Port 22.
• Allow icmp access: a Custom ICMP Rule with Type and Code value -1.
5. For password-less signing into the VM, under Project/Compute/Access & Security, in the Key Pairs tab, do the following:
• Select the Import Key Pair option and give it a name, e.g., 'desktop user login'.
• In your desktop terminal window, use ssh-keygen to create a public/private key pair in case you don't already have one.
• Copy the contents of ~/.ssh/id_rsa.pub from your desktop account and paste them in the public key.
6. Allocate a public IP for accessing the VM under Project/Compute/Access & Security in the Floating IPs tab, and allocate an IP to the project. You may get a value like 172.24.4.229.
7. Now launch the instance under Project/Compute/Instance:

• Give it a name - test - and choose the m1-tiny flavour.
• Select the boot source as 'Boot from image' with the image name 'cirros', a very small image included in the installation.
• Once it is launched, associate the floating IP obtained above with this instance.

Figure 1: Simplified network diagram

Now, you are ready to log in to the VM created in your local cloud. In a terminal window, type:

ssh [email protected]

You should be signed into the virtual machine without needing a password.
You can experiment with importing the Fedora VM image you used for the OpenStack VM and launching it in the cloud. Whether you succeed or not will depend on the resources available in the OpenStack VM.

Installing only the needed OpenStack services
You will have observed that OpenStack comes with a very wide range of services, some of which are not likely to be very useful for your experiments on the desktop, e.g., the additional networks and router created in the tests above. Here is a part of the dialogue for installing the required services on the desktop:

[root@amd ~]# packstack
Welcome to Installer setup utility
Enter the path to your ssh Public key to install on servers:
Packstack changed given value to required value /root/.ssh/id_rsa.pub
Should Packstack install MySQL DB [y|n] [y] : y
Should Packstack install OpenStack Image Service (Glance) [y|n] [y] : y
Should Packstack install OpenStack Block Storage (Cinder) service [y|n] [y] : n
Should Packstack install OpenStack Compute (Nova) service [y|n] [y] : y
Should Packstack install OpenStack Networking (Neutron) service [y|n] [y] : n
Should Packstack install OpenStack Dashboard (Horizon) [y|n] [y] : y
Should Packstack install OpenStack Object Storage (Swift) [y|n] [y] : n
Should Packstack install OpenStack Metering (Ceilometer) [y|n] [n] : n
Should Packstack install OpenStack Orchestration (Heat) [y|n] [n] : n
Should Packstack install OpenStack client tools [y|n] [y] : y

The answers to the other questions will depend on the network interface and the IP address of your desktop, but there is no ambiguity here. You should answer with the interface 'lo' for CONFIG_NOVA_COMPUTE_PRIVIF and CONFIG_NOVA_NETWORK_PRIVIF. You don't need an extra physical interface as the compute services are running on the same server.
Now, you are ready to test your OpenStack installation on the desktop. You may want to create a project and add a user to the project. Under Project/Compute/Access & Security, you will need to add firewall rules and key pairs, as above. However, you will not need to create any additional private network or a router.
Import a basic cloud image, e.g., from http://fedoraproject.org/get-fedora#clouds under Project/Compute/Images.
You may want to create an additional flavour for a virtual machine. The m1.tiny flavour has 512MB of RAM and 4GB of disk and is too small for running Hadoop. The m1.small flavour has 2GB of RAM and 20GB of disk, which will restrict the number of virtual machines you can run for testing Hadoop. Hence, you may create a mini flavour with 1GB of RAM and 10GB of disk. This will need to be done as the admin user.
Now, you can create an instance of the basic cloud image. The default user is fedora and your setup is ready for exploration of Hadoop data.

By: Dr Anil Seth
The author has earned the right to do what interests him. You can find him online at http://sethanil.com, http://sethanil.blogspot.com, and reach him via email at [email protected]

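For reference, the mini flavour mentioned above can also be created from the command line instead of the dashboard. This is a sketch under my own assumptions about the nova CLI of that era (the arguments being name, ID, RAM in MB, disk in GB and vCPU count), to be run with the admin credentials that Packstack generates:

$ source keystonerc_admin
$ nova flavor-create mini auto 1024 10 1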

MariaDB: The MySQL Fork that Google has Adopted

MariaDB is a community developed fork of MySQL, which has overtaken MySQL. That many leading corporations in the cyber environment, including Google, have migrated to MariaDB speaks for its importance as a player in the database firmament.

MariaDB is a high performance, open source database that helps the world's busiest websites deliver more content, faster. It has been created by the developers of MySQL with the help of the FOSS community and is a fork of MySQL. It offers various features and enhancements like alternate storage engines, server optimisations and patches.
The lead developer of MariaDB is Michael 'Monty' Widenius, who is also the founder of MySQL and Monty Program AB.
No single person or company nurtures MariaDB/MySQL development. The guardian of the MariaDB community, the MariaDB Foundation, drives it. It states that it has the trademark of the MariaDB server and owns mariadb.org, which ensures that the official MariaDB development tree is always open to the developer community. The MariaDB Foundation assures the community that all the patches, as well as MySQL source code, are merged into MariaDB. The Foundation also provides a lot of documentation. MariaDB is a registered trademark of SkySQL Corporation and is used by the MariaDB Foundation with permission. It is a good choice for database professionals looking for the best and most robust SQL server.

History
In 2008, Sun Microsystems bought MySQL for US$ 1 billion. But the original developer, Monty Widenius, was quite disappointed with the way things were run at Sun, and founded his own new company and his own fork of MySQL - MariaDB. It is named after Monty's younger daughter, Maria. Later, when Oracle announced the acquisition of Sun, most of the MySQL developers jumped to its forks: MariaDB and Drizzle.
MariaDB version numbers follow MySQL numbers till 5.5. Thus, all the features in MySQL are available in MariaDB. After MariaDB 5.5, its developers started a new branch numbered MariaDB 10.0, which is the development version of MariaDB. This was done to make it clear that MariaDB 10.0 will not import all the features from MySQL 5.6. Also, at times, some of these features do not seem to be solid enough for MariaDB's standards. Since new specific features have been developed in MariaDB, the team decided to go for a major version number. The currently used version, MariaDB 10.0, is built on the MariaDB 5.5 series and has back-ported features from MySQL 5.6 along with entirely new developments.


Why MariaDB is better than MySQL
When comparing MariaDB and MySQL, we are comparing different development cultures, features and performance. The patches developed by MariaDB focus on bug fixing and performance. By supporting the features of MySQL, MariaDB implements more improvements and delivers better performance without restrictions on compatibility with MySQL. It also provides more storage engines than MySQL. What makes MariaDB different from MySQL is better testing, fewer bugs and fewer warnings. The goal of MariaDB is to be a drop-in replacement for MySQL, with better developments.
Navicat is a strong and powerful MariaDB administration and development tool. It is graphic database management and development software produced by PremiumSoft CyberTech Ltd. It provides a native environment for MariaDB database management and supports extra features like new storage engines, microseconds and virtual columns.
It is easy to convert from MySQL to MariaDB, as we need not convert any data, and all our old connectors to other languages work unchanged. As of now, MariaDB is capable of handling data in terabytes, but more needs to be done for it to handle data in petabytes.

Features
Here is a list of features that MariaDB provides:
• Since it has been released under the GPL version 2, it is free.
• It is completely open source.
• Open contributions and suggestions are encouraged.
• MariaDB is one of the fastest databases available.
• Its syntax is pretty simple, flexible and easy to manage.
• Data can be easily imported or exported from CSV and XML.
• It is useful for both small as well as large databases, containing billions of records and terabytes of data in hundreds of thousands of tables.
• MariaDB includes pre-installed storage engines like Aria, XtraDB, PBXT, FederatedX and SphinxSE.
• The use of the Aria storage engine makes complex queries faster. Aria is usually faster since it caches row data in memory and normally doesn't have to write the temporary rows to disk.
• Some storage engines and plugins are pre-installed in MariaDB.
• It has a very strong community.

Installing MariaDB
Now let's look at how MariaDB is installed.
Step 1: First, make sure that the required packages are installed along with the apt-get key for the MariaDB repository, by using the following commands:

$ sudo apt-get install software-properties-common
$ sudo apt-key adv --recv-keys --keyserver hkp://keyserver.ubuntu.com:80 0xcbcb082a1bb943db

Now, add the apt-get repository as per your Ubuntu version.
For Ubuntu 13.10:

$ sudo add-apt-repository 'deb http://ftp.kaist.ac.kr/mariadb/repo/5.5/ubuntu saucy main'

For Ubuntu 13.04:

$ sudo add-apt-repository 'deb http://ftp.kaist.ac.kr/mariadb/repo/5.5/ubuntu raring main'

For Ubuntu 12.10:

$ sudo add-apt-repository 'deb http://ftp.kaist.ac.kr/mariadb/repo/5.5/ubuntu quantal main'

For Ubuntu 12.04 LTS:

$ sudo add-apt-repository 'deb http://ftp.kaist.ac.kr/mariadb/repo/5.5/ubuntu precise main'

Step 2: Install MariaDB using the following commands:

$ sudo apt-get update
$ sudo apt-get install mariadb-server

Provide the root account password as shown in Figure 1.

Figure 1: Configuring MariaDB

Step 3: Log in to MariaDB using the following command, after installation:

mysql -u root -p

Figure 2: Logging into MariaDB

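Once logged in, a quick sanity check confirms that the server answering is MariaDB and not a leftover MySQL instance. The transcript below is illustrative; the exact version string will differ on your system:

MariaDB [(none)]> SELECT VERSION();
+---------------+
| VERSION()     |
+---------------+
| 5.5.x-MariaDB |
+---------------+
1 row in set (0.00 sec)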

Creating a database in MariaDB
When entering the account administrator password set up during installation, you will be given a MariaDB prompt.
Create a database on students by using the following command:

CREATE DATABASE students;

Switch to the new database using the following command (this is to make sure that you are currently working on this database):

USE students;

Now that the database has been created, create a table:

CREATE TABLE details(student_id int(5) NOT NULL AUTO_INCREMENT,
                     name varchar(20) DEFAULT NULL,
                     age int(3) DEFAULT NULL,
                     marks int(5) DEFAULT NULL,
                     PRIMARY KEY(student_id)
                     );

To see what we have done, use the following command:

show columns in details;

Figure 3: A sample table created

Each column in the table creation command is separated by a comma and is in the following format:

Column_Name Data_Type[(size_of_data)] [NULL or NOT NULL] [DEFAULT default_value] [AUTO_INCREMENT]

These columns can be defined as:
• Column Name: Describes the attribute being assigned.
• Data Type: Specifies the type of data in the column.
• Null: Defines whether null is a valid value for that field - it can be 'null' or 'not null'.
• Default Value: Sets the initial value of all newly created records that do not specify a value.
• auto_increment: MySQL will handle the sequential numbering of any column marked with this option, internally, in order to provide a unique value for each record.
Ultimately, before closing the table definition, we need to set the primary key by typing PRIMARY KEY(column_name). It guarantees that this column will serve as a unique field.

Inserting data into a MariaDB table
To insert data into a MariaDB table, use the following commands:

INSERT INTO details(name,age,marks) VALUES("anu",15,450);

INSERT INTO details(name,age,marks) VALUES("Bob",15,400);

The output will be as shown in Figure 4.

Figure 4: Inserting data into a table

We need not add values to student_id. It is automatically incremented. All other values are given in quotes.

Deleting a table
To delete a table, type the following command:

DROP TABLE table_name;

Once the table is deleted, the data inside it cannot be recovered.
We can view the current tables using the show tables command, which gives all the tables inside the database:

SHOW tables;

After deleting the table, use the following commands:

DROP TABLE details;
Query OK, 0 rows affected (0.02 sec)

SHOW tables;

The output will be:

Empty set (0.00 sec)

Google waves goodbye to MySQL
Google has now switched to MariaDB and dumped MySQL. "For the Web community, Google's big move might be a paradigm shift in the DBMS ecosystem," said a Google engineer. Major Linux distributions, like Red Hat and SUSE, and well-known websites such as Wikipedia, have also switched from MySQL to MariaDB. This is a great blow to MySQL.
Google has migrated applications that were previously


running on MySQL on to MariaDB without changing the application code. There are five Google technicians working part-time on MariaDB patches and bug fixes, and Google continues to maintain its internal branch of MySQL to have complete control over the improvement. Google running thousands of MariaDB servers can only be good news for those who feel more comfortable with a non-Oracle future for MySQL.
Though multinational corporations like Google have switched to MariaDB, it does have a few shortcomings. MariaDB's performance is slightly better on multi-core machines, but one suspects that MySQL could be tweaked to match the performance. All it requires is for Oracle to improve MySQL by adding some new features that are not present in MariaDB yet. And then it will be difficult to switch back to the previous database.
MySQL has the advantage of being bigger, in terms of the number of users, than its forks and clones. MySQL took a lot of time and effort before emerging as the choice of many companies. So, it is a little hard to introduce MariaDB in the commercial field. Being a new open source standard, we can only hope that MariaDB will overtake other databases in a short span of time.

Figure 5: Tables in the database

By: Amrutha S.
The author is currently studying for a bachelor's degree in Computer Science and Engineering at Amrita University in Kerala, India. She is an open source enthusiast and also an active member of the Amrita FOSS club. She can be contacted at [email protected].


Haskell: The Purely Functional Programming Language
Haskell, an open source programming language, is the outcome of 20 years of research.
Named after the logician, Haskell Curry, it has all the advantages of functional programming
and an intuitive syntax based on mathematical notation. This second article in the series on
Haskell explores a few functions.

Consider the function sumInt to compute the sum of two integers. It is defined as:

sumInt :: Int -> Int -> Int
sumInt x y = x + y

The first line is the type signature, in which the function name, arguments and return types are separated using a double colon (::). The arguments and the return types are separated by the symbol (->). Thus, the above type signature tells us that the sum function takes two arguments of type Int and returns an Int. Note that the function names must always begin with the letters of the alphabet in lower case. The names are usually written in CamelCase style.
You can create a Sum.hs Haskell source file using your favourite text editor, and load the file on to the Glasgow Haskell Compiler interpreter (GHCi) using the following code:

$ ghci
GHCi, version 7.6.3: http://www.haskell.org/ghc/ :? for help
Loading package ghc-prim ... linking ... done.
Loading package integer-gmp ... linking ... done.
Loading package base ... linking ... done.

Prelude> :l Sum.hs
[1 of 1] Compiling Main ( Sum.hs, interpreted )
Ok, modules loaded: Main.

*Main> :t sumInt
sumInt :: Int -> Int -> Int

*Main> sumInt 2 3
5

If we check the type of sumInt with arguments, we get the following output:

*Main> :t sumInt 2 3
sumInt 2 3 :: Int


*Main> :t sumInt 2
sumInt 2 :: Int -> Int

The value of sumInt 2 3 is an Int as defined in the type signature. We can also partially apply the function sumInt with one argument, and its return type will be Int -> Int. In other words, sumInt 2 takes an integer and will return an integer with 2 added to it.
Every function in Haskell takes only one argument. So, we can think of the sumInt function as one that takes an argument and returns a function that takes another argument and computes their sum. This returned function can be defined as a sumTwoInt function that adds a 2 to an Int using the sumInt function, as shown below:

sumTwoInt :: Int -> Int
sumTwoInt x = sumInt 2 x

The '=' sign in Haskell signifies a definition and not a variable assignment as seen in imperative programming languages. We can thus omit the 'x' on either side and the code becomes even more concise:

sumTwoInt :: Int -> Int
sumTwoInt = sumInt 2

By loading Sum.hs again in the GHCi prompt, we get the following:

*Main> :l Sum.hs
[1 of 1] Compiling Main ( Sum.hs, interpreted )
Ok, modules loaded: Main.

*Main> :t sumTwoInt
sumTwoInt :: Int -> Int

*Main> sumTwoInt 3
5

Let us look at some examples of functions that operate on lists. Consider list 'a', which is defined as [1, 2, 3, 4, 5] (a list of integers) in the Sum.hs file (re-load the file in GHCi before trying the list functions):

a :: [Int]
a = [1, 2, 3, 4, 5]

The head function returns the first element of a list:

*Main> head a
1

*Main> :t head
head :: [a] -> a

The tail function returns everything except the first element from a list:

*Main> tail a
[2,3,4,5]

*Main> :t tail
tail :: [a] -> [a]

The last function returns the last element of a list:

*Main> last a
5

*Main> :t last
last :: [a] -> a

The init function returns everything except the last element of a list:

*Main> init a
[1,2,3,4]

*Main> :t init
init :: [a] -> [a]

The length function returns the length of a list:

*Main> length a
5

*Main> :t length
length :: [a] -> Int

The take function picks the first 'n' elements from a list:

*Main> take 3 a
[1,2,3]

*Main> :t take
take :: Int -> [a] -> [a]

The drop function drops 'n' elements from the beginning of a list, and returns the rest:

*Main> drop 3 a
[4,5]

*Main> :t drop
drop :: Int -> [a] -> [a]

The zip function takes two lists and creates a new list of tuples with the respective pairs from each list. For example:

*Main> let b = ["one", "two", "three", "four", "five"]


*Main> zip a b 1
[(1,"one"),(2,"two"),(3,"three"),(4,"four"),(5,"five")] *Main> factorial 1
1
*Main> :t zip *Main> factorial 2
zip :: [a] -> [b] -> [(a, b)] 2
*Main> factorial 3
The let expression defines the value of ‘b' in the GHCi 6
prompt. You can also define it in a way that’s similar to the *Main> factorial 4
definition of the list ‘a' in the source file. 24
The lines function takes input text and splits it at new lines: *Main> factorial 5
120
*Main> let sentence = "First\nSecond\nThird\nFourth\nFifth"
Functions operating on lists can also be called recursively.
*Main> lines sentence To compute the sum of a list of integers, you can write the
["First","Second","Third","Fourth","Fifth"] sumList function as:

*Main> :t lines sumList :: [Int] -> Int


lines :: String -> [String] sumList [] = 0
sumList (x:xs) = x + sumList xs
The words function takes input text and splits it on
white space: The notation (x:xs) represents a list, where ‘x' is the first
element in the list and ‘xs' is the rest of the list. On running
*Main> words "hello world" sumList with GHCi, you get the following:
["hello","world"]
*Main> sumList []
*Main> :t words 0
words :: String -> [String] *Main> sumList [1,2,3]
6
The map function takes a function and a list, and applies
the function to every element in the list: Sometimes, you will need a temporary function for a
computation, which you will not need to use elsewhere.
*Main> map sumTwoInt a You can then write an anonymous function. A function to
[3,4,5,6,7] increment an input value can be defined as:

*Main> :t map *Main> (\x -> x + 1) 3


map :: (a -> b) -> [a] -> [b] 4

The first argument to map is a function that is enclosed These are called Lambda functions, and the '\' represents
within parenthesis in the type signature (a -> b). This function the notation for the symbol Lambda. Another example is
takes an input of type ‘a' and returns an element of type ‘b'. given below:
Thus, when operating over a list [a], it returns a list of type [b].
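Since 'a' and 'b' are independent type variables, the function passed to map can also change the element type. For example, applying the standard show function (which converts a value to a String) to our list of integers returns a list of strings:

*Main> map show a
["1","2","3","4","5"]

*Main> :t map show
map show :: Show a => [a] -> [String]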
Recursion provides a means of looping in functional programming languages. The factorial of a number, for example, can be computed in Haskell, using the following code:

factorial :: Int -> Int
factorial 0 = 1
factorial n = n * factorial (n-1)

The definition of factorial with different input use cases is called pattern matching on the function. On running the above example with GHCi, you get the following output:

*Main> factorial 0
1
*Main> factorial 1
1
*Main> factorial 2
2
*Main> factorial 3
6
*Main> factorial 4
24
*Main> factorial 5
120

Functions operating on lists can also be called recursively. To compute the sum of a list of integers, you can write the sumList function as:

sumList :: [Int] -> Int
sumList [] = 0
sumList (x:xs) = x + sumList xs

The notation (x:xs) represents a list, where 'x' is the first element in the list and 'xs' is the rest of the list. On running sumList with GHCi, you get the following:

*Main> sumList []
0
*Main> sumList [1,2,3]
6

Sometimes, you will need a temporary function for a computation, which you will not need to use elsewhere. You can then write an anonymous function. A function to increment an input value can be defined as:

*Main> (\x -> x + 1) 3
4

These are called Lambda functions, and the '\' represents the notation for the symbol Lambda. Another example is given below:

*Main> map (\x -> x * x) [1, 2, 3, 4, 5]
[1,4,9,16,25]

It is a good practice to write the type signature of the function first when composing programs, and then write the body of the function. Haskell is a functional programming language, and understanding the use of functions is very important.

By: Shakthi Kannan
The author is a free software enthusiast and blogs at shakthimaan.com


Qt-WebKit, a major engine that can render Web pages and execute JavaScript code, is the answer to the developer's prayer. Let's take a look at a few examples that will aid developers in making better use of this engine.

This article is for Qt developers. It is assumed that the intended audience is aware of the famous Signals and Slots mechanisms of Qt. Creating an HTML page is very quick compared to any other way of designing a GUI. An HTML page is nothing but a fancy page that doesn't have any logic in its build. With the amalgamation of JavaScript, however, the HTML page builds in some intelligence. As everything cannot be collated in JavaScript, we need a back-end for it. Qt provides a way to mingle HTML and JavaScript with C++. Thus, you can call the C++ methods through JavaScript and vice versa. This is possible by using the Qt-WebKit framework. The applications developed in Qt are not just limited to various desktop platforms. They are even ported over several mobile platforms. Thus, you can design apps that fit into the Windows, iOS and Android worlds seamlessly.

What is Qt-WebKit?
In simple words, Qt-WebKit is the Web-browsing module of Qt. It can be used to display live content from the Internet as well as local HTML files.

Programming paradigm
In Qt-WebKit, the top-level class is QWebView. Each QWebView holds a QWebPage, which in turn holds a QWebFrame. This is useful while adding the desired class object to the JavaScript window object. In short, this class object will be visible to JavaScript once it is added to the JavaScript window object. However, JavaScript can invoke only the public Q_INVOKABLE methods. The Q_INVOKABLE restriction was introduced to make the applications being developed using Qt even more secure.

Q_INVOKABLE
This is a macro that is similar to Slot, except that it has a return type. Thus, we will prefix Q_INVOKABLE to the methods that can be called by the JavaScript. The advantage here is that we can have a return type with Q_INVOKABLE, as compared to Slot.

Developing a sample HTML page with JavaScript intelligence
Here is a sample form in HTML-JavaScript that will allow us to multiply any two given numbers. However, the logic of multiplication should reside in the C++ method only.

<html>
<head>
<script>
function Multiply()
{
/** MultOfNumbers is a C++ Invokable method **/
var result = myoperations.MultOfNumbers(document.forms["DEMO_FORM"]["Multiplicant_A"].value, document.forms["DEMO_FORM"]["Multiplicant_B"].value);
document.getElementById("answer").value = result;
}
</script>
</head>
<body>
<form name="DEMO_FORM">
Multiplicant A: <input type="number" name="Multiplicant_A"><br>
Multiplicant B: <input type="number" name="Multiplicant_B"><br>
Result: <input type="number" id="answer" name="Multiplicant_C"><br>
<input type="button" value="Multiplication_compute_on_C++" onclick="Multiply()">
</form>
</body>
</html>
#include "main.moc"
Please note that in the above HTML code, myoperations
is a class object. And MultOfNumbers is its public Q_ The output is given in Figure 1.
Invokable class method.
How to install a callback from C++ code to the
How to call the C++ methods from the Web Web page using the Qt-WebKit framework
page using the Qt-WebKit framework We have already seen the call to C++ methods by
Let's say, I have the following class that has the JavaScript. Now, how about a callback from C++ to
Q_Invokable method, MultOfNumbers. JavaScript? Yes, it is possible with the Qt-WebKit. There
are two ways to do so. However, for the sake of neatness in
class MyJavaScriptOperations : public QObject { design, let’s discuss only the Signals and Slots mechanisms
Q_OBJECT for the JavaScript callback.
public:
Q_INVOKABLE qint32 MultOfNumbers(int a, int b) { Installing Signals and Slots for the JavaScript
qDebug() << a * b; function
return (a*b); Here are the steps that need to be taken for the callback to be
} installed:
}; a) Add a JavaScript window object to the
javaScriptWindowObjectCleared slot.
This class object should be added to the JavaScript b) Declare a signal in the class.
window object by the following API: c) Emit the signal.
d) In JavaScript, connect the signal to the JavaScript
addToJavaScriptWindowObject("name of the object", new (class function slot.
that can be accessed)) Here is the syntax to help you connect:

Here is the entire program: <JavaScript_window_object>.<signal_name>.connect(<JavaScript


function name>);
#include <QtGui/QApplication>
#include <QApplication> Note, you can make a callback to JavaScript only after
#include <QDebug> the Web page is loaded. This can be ensured by connecting
#include <QWebFrame> to the Slot emitted by the Signal loadFinished() in the C++
#include <QWebPage> application.
#include <QWebView> Let’s look at a real example now. This will fire a callback
once the Web page is loaded.
class MyJavaScriptOperations : public QObject { The callback should be addressed by the JavaScript
Q_OBJECT function, which will show up an alert window.
public:
Q_INVOKABLE qint32 MultOfNumbers(int a, int b) {
qDebug() << a * b;
return (a*b);
}
};

int main(int argc, char *argv[])


{
QApplication a(argc, argv); Figure 1: QT DEMO output

52  |  August 2014  |  OPEN SOURCE For You  |  www.OpenSourceForU.com


<html>
<head>
<script>
function alert_click()
{
alert("you clicked");
}

function JavaScript_function()
{
alert("Hello");
}
myoperations.alert_script_signal.connect(JavaScript_function);
</script>
</head>
<body>
<form name="myform">
<input type="button" value="Hit me" onclick="alert_click()">
</form>
</body>
</html>

Here is the main file:

#include <QtGui/QApplication>
#include <QApplication>
#include <QDebug>
#include <QWebFrame>
#include <QWebPage>
#include <QWebView>

class MyJavaScriptOperations : public QObject {
Q_OBJECT
public:
QWebView *view;
MyJavaScriptOperations();
signals:
void alert_script_signal();
public slots:
void JS_ADDED();
void loadFinished(bool);
};

void MyJavaScriptOperations::JS_ADDED()
{
qDebug()<<__PRETTY_FUNCTION__;
view->page()->mainFrame()->addToJavaScriptWindowObject("myoperations", this);
}

void MyJavaScriptOperations::loadFinished(bool oper)
{
qDebug()<<__PRETTY_FUNCTION__<< oper;
emit alert_script_signal();
}

MyJavaScriptOperations::MyJavaScriptOperations()
{
qDebug()<<__PRETTY_FUNCTION__;
view = new QWebView();
view->resize(400, 500);
connect(view->page()->mainFrame(), SIGNAL(javaScriptWindowObjectCleared()), this, SLOT(JS_ADDED()));
connect(view, SIGNAL(loadFinished(bool)), this, SLOT(loadFinished(bool)));
view->load(QUrl("./index.html"));
view->show();
}

int main(int argc, char *argv[])
{
QApplication a(argc, argv);
MyJavaScriptOperations *jvs = new MyJavaScriptOperations;
return a.exec();
}
#include "main.moc"

The output is shown in Figure 2.

Figure 2: QT DEMO callback output (the 'Hit me' button triggers a JavaScript alert showing 'Hello')

Qt is a rich framework for C++ developers. It not only provides these amazing features, but also has some interesting attributes like in-built SQLite, D-Bus and various containers. It's easy to develop an entire GUI application with it. You can even port an existing HTML page to Qt. This makes Qt a wonderful choice to develop a cross-platform application quickly. It is now getting popular in the mobile world too.

By: Shreyas Joshi
The author is a technology enthusiast and software developer at Pace Micro Technology. You can connect with him at [email protected].
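To build either example, the sources need to go through qmake so that moc generates the main.moc included by the Q_OBJECT class. A minimal project file could look like the sketch below; the target name is an assumption, and QT += webkit matches the Qt 4.x module naming used by the classes in this article (Qt 5 moved them to webkitwidgets):

# qtdemo.pro - a minimal sketch; the file and target names are assumptions
QT += core gui webkit
TARGET = qtdemo
TEMPLATE = app
SOURCES += main.cpp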


This article focuses on Yocto – a complete embedded Linux development environment that offers tools, metadata and documentation.

The Yocto Project helps developers and companies get their project off the ground. It is an open source collaboration project that provides templates, tools and methods to create custom Linux-based systems for embedded products, regardless of the hardware architecture.
While building Linux-based embedded products, it is important to have full control over the software running on the embedded device. This doesn't happen when you are using a normal Linux OS for your device. The software should have full access as per the hardware requirements. That's where the Yocto Project comes in handy. It helps you create custom Linux-based systems for any hardware architecture, and makes the device easier to use and faster than expected.
The Yocto Project was founded in 2010 as a solution for embedded Linux development by many open source vendors, hardware manufacturers and electronics companies. The project aims at helping developers build their own Linux distributions, specific to their own environments. The project provides developers with interoperable tools, methods and processes that help in the development of Linux-based embedded systems. The central goal of the project is to enable the user to reuse and customise tools and working code. It encourages interaction with embedded projects and has been a steady contributor to the OpenEmbedded core, BitBake, the Linux kernel development process and several other projects. It not only deals with building Linux-based embedded systems, but also the tool chain for cross compilation and software development kits (SDKs), so that users can choose the package manager format they intend to use.

The goals of the Yocto Project
Although the main aim is to help developers of customised Linux systems supporting various hardware architectures, it also has a key role in several other fields where it supports and encourages the Linux community. Its goals are:
• To develop custom Linux-based embedded systems regardless of the architecture.
• To provide interoperability between tools and working code, which will reduce the money and time spent on the project.
• To develop licence-aware build systems that make it possible to include or remove software components based on specific licence groups and the corresponding restriction levels.
• To provide a place for open source projects that help in the development of Linux-based embedded systems and customisable Linux platforms.
• To focus on creating single build systems that address the needs of all users, so that other software components can later be tethered to them.
• To ensure that the tools developed are architecturally independent.
• To provide a better graphical user interface to the build system, which eases access.
• To provide resources and information, catering to both new and experienced users.
• To provide core system component recipes provided by the OpenEmbedded project.
• To further educate the community about the benefits of this standardisation and collaboration in the Linux community and in the industry.

The Yocto Project community
The community shares many common traits with a typical open source organisation. Anyone who is interested can contribute to the development of the project. The Yocto Project is developed and governed as a collaborative effort by an open community of professionals, volunteers and contributors.

Figure 1: YP community


The project's governance is mainly divided into two wings—administrative and technical. The administrative board includes executive leaders from organisations that participate on the advisory board, and also several sub-groups that perform non-technical services, including community management, financial management, infrastructure management, advocacy and outreach. The technical board includes several sub-groups, which oversee tasks that range from submitting patches to the project architect to deciding on who is the final authority on the project.
The building of the project requires the coordinated efforts of many people, who work in several roles. These roles are listed below.
• Architect: One who holds the final authority and provides overall leadership to the project's development.
• Sub-system maintainers: The project is further divided into several sub-projects, and the maintainers are assigned to these sub-projects.
• Layer maintainers: Those who ensure the components' excellence and functionality.
• Technical leaders: Those who work within the sub-projects, doing the same thing as the layer maintainers.
• Upstream projects: Many Yocto Project components, such as the Linux kernel, are dependent on the upstream projects.
• Advisory board: The advisory board gives direction to the project and helps in setting the requirements for the project.

Latest updates
Yocto Project 1.6: The latest release of Yocto Project (YP) 1.6 'Daisy' has a great set of features to help developers build with a very good user interface. The Toaster, a new UI to the YP build system, enables detailed examination of the build output, with great control over the view of the data. The Linux kernel update and the GCC update to 4.8.2 add further functionality to the latest release. It also supports building Python 3. The new client for reporting errors to a central Web interface helps developers to focus on problem management.
AMD and LG Electronics partner with Yocto: The introduction of new standardised features to ensure quick access to the latest Board Support Packages (BSP) for the AMD 64-bit x86 architecture has made AMD a new gold member in the YP community. LG Electronics, joining as a new member organisation to help support and guide the project, is of great importance.
Embedded Linux Conference 2014: The Yocto Project is one of the silver sponsors of this premier vendor-neutral technical conference for companies and developers that use Linux in embedded products. Sponsored by the Linux Foundation, it has a key role in encouraging newcomers to the world of open source and embedded products.
Toaster prototype: Toaster, a part of the latest YP 1.6 release, is a Web interface for BitBake, the build system. Toaster collects all kinds of data about the building process, so that it is easy to search and query through this data in a specific way.

Layers
The build system is composed of different layers, which are the containers for the building blocks used to construct the system. The layers are grouped according to functionality, which makes the management of extensions and customisations easier.

Figure 2: YP layers (developer-specific layer, commercial layer, UI-specific layer, hardware-specific BSP, Yocto-specific layer metadata, and OpenEmbedded core metadata)
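As a concrete illustration of how extensions are grouped this way, a custom layer announces itself to BitBake through a conf/layer.conf file. The following is a minimal sketch, not taken from the article; the layer name meta-custom and the priority value are assumptions:

# meta-custom/conf/layer.conf - a minimal sketch of a custom layer
BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"
BBFILE_COLLECTIONS += "meta-custom"
BBFILE_PATTERN_meta-custom = "^${LAYERDIR}/"
BBFILE_PRIORITY_meta-custom = "6"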
By: Vishnu N K
The author, an open source enthusiast, is in the midst of his B.Tech degree in Computer Science at Amrita Vishwa Vidyapeetham and contributes to Mediawiki. Contact him at [email protected]


What is Linux Kernel Porting?

One of the aspects of hacking a Linux kernel is to port it. While this might sound difficult, it won't be once you read this article. The author explains porting techniques in a simplified manner.

With the evolution of embedded systems, porting has become extremely important. Whenever you have new hardware at hand, the first and the most critical thing to be done is porting. For hobbyists, what has made this even more interesting is the open source nature of the Linux kernel. So, let's dive into porting and understand the nitty-gritty of it.
Porting means making something work in an environment it is not designed for. Embedded Linux porting means making Linux work on an embedded platform, for which it was not designed. Porting is a broader term and when I say 'embedded Linux porting', it not only involves Linux kernel porting, but also porting a first stage bootloader, a second stage bootloader and, last but not the least, the applications. Porting differs from development. Usually, porting doesn't involve as much coding as development does. This means that there is already some code available and it only needs to be fine-tuned to the desired target. There may be a need to change a few lines here and there, before it is up and running. But the key thing to know is what needs to be changed, and where.

What Linux kernel porting involves
Linux kernel porting involves two things at a higher level: architecture porting and board porting. Architecture, in Linux terminology, refers to the CPU. So, architecture porting means adapting the Linux kernel to the target CPU, which may be ARM, PowerPC, MIPS, and so on. In addition to this, SOC porting can also be considered as part of architecture porting. As far as the Linux kernel is concerned, most of the time you don't need to port it for the architecture, as this would already be supported in Linux. However, you still need to port Linux for the board, and this is where the major focus lies. Architecture porting entails porting of the initial start-up code, interrupt service routines, the dispatcher routine, the timer routine, memory management, and so on, whereas board porting involves writing custom drivers and initialisation code for devices specific to the board.


Building a Linux kernel for the target platform
Kernel building is a two-step process: first, the kernel needs to be configured for the target platform. There are many ways to configure the kernel, based on the preferred configuration interface. Given below are some of the common methods.
To run the text-based configuration, execute the following command:

$ make config

This will show the configuration options on the console, as seen in Figure 1. It is a little cumbersome to configure the kernel with this, as it prompts for every configuration option, in order, and doesn't allow the reversion of changes.

Figure 1: Plain text-based kernel configuration

To run the menu-driven configuration, execute the following command:

$ make menuconfig

This will show the menu options for configuring the kernel, as seen in Figure 2. This requires the ncurses library to be installed on the system. This is the most popular interface used to configure the kernel.

Figure 2: Menu-driven kernel configuration

To run the window-based configuration, execute the following command:

$ make xconfig

This allows configuration using the mouse. It requires QT to be installed on the system.
For details on other options, execute the following command in the kernel top directory:

$ make help

Once the kernel is configured, the next step is to build the kernel with the make command. A few commonly used commands are given below:

$ make vmlinux - Builds the bare kernel
$ make modules - Builds the modules
$ make modules_prepare - Sets up the kernel for building modules external to the kernel

If the above commands are executed as stated, the kernel will be configured and compiled for the host system, which is generally the x86 platform. But, for porting, the intention is to configure and build the kernel for the target platform, which in turn requires configuration of the makefile. Two things that need to be changed in the makefile are given below:

ARCH=<architecture>
CROSS_COMPILE=<toolchain prefix>

The first line defines the architecture the kernel needs to be built for, and the second line defines the cross compilation toolchain prefix. So, if the architecture is ARM and the toolchain is, say, from CodeSourcery, then it would be:

ARCH=arm
CROSS_COMPILE=arm-none-linux-gnueabi-

Optionally, make can be invoked as shown below:

$ make ARCH=arm menuconfig - For configuring the kernel
$ make ARCH=arm CROSS_COMPILE=arm-none-linux-gnueabi- - For compiling the kernel

The kernel image generated after the compilation is usually vmlinux, which is in ELF format. This image can't be used directly with embedded system bootloaders such as u-boot. So convert it into a format suitable for a second stage bootloader. Conversion is a two-step process and is done with the following commands:

arm-none-linux-gnueabi-objcopy -O binary vmlinux vmlinux.bin
mkimage -A arm -O linux -T kernel -C none -a 0x80008000 -e 0x80008000 -n linux-3.2.8 -d vmlinux.bin uImage

-A ==> set architecture
-O ==> set operating system
-T ==> set image type
-C ==> set compression type
-a ==> set load address (hex)
-e ==> set entry point (hex)
-n ==> set image name
-d ==> use image data from file

The first command converts the ELF into a raw binary. This binary is then passed to mkimage, which is a utility to generate a u-boot specific kernel image. mkimage is the utility provided by u-boot. The generated kernel image is named uImage.

The Linux kernel build system
One of the beautiful things about the Linux kernel is that it is highly configurable, and the same code base can be used for a variety of applications, ranging from high end servers to tiny embedded devices. And the infrastructure which plays an important role in achieving this in an efficient manner is the kernel build system, also known as kbuild. The kernel build system has two main components - makefile and Kconfig.
Makefile: Every sub-directory has its own makefile, which is used to compile the files in that directory and generate the object code out of that. The top level makefile percolates recursively into its sub-directories and invokes the corresponding makefile to build the modules and, finally, the Linux kernel image. The makefile builds only the files for which the configuration option is enabled through the configuration tool.
Kconfig: As with the makefile, every sub-directory has a Kconfig file. Kconfig is in configuration language, and the Kconfig files located inside each sub-directory are the programs. Kconfig contains the entries, which are read by configuration targets such as make menuconfig to show a menu-like structure.
So we have covered makefile and Kconfig, and at present they seem to be pretty much disconnected. For kbuild to work properly, there has to be some link between the Kconfig and the makefile. And that link is nothing but the configuration symbols, which generally have the prefix CONFIG_. These symbols are generated by a configuration target such as menuconfig, based on entries defined in the Kconfig file. And based on what the user has selected in the menu, these symbols can have the values 'y', 'n', or 'm'.
Now, as most of us are aware, Linux supports hot plugging of the drivers, which means we can dynamically add and remove the drivers from the running kernel. The drivers which can be added/removed dynamically are known as modules. However, drivers that are part of the kernel image can't be removed dynamically. So, there are two ways to have a driver in the kernel. One is to build it as a part of the kernel, and the other is to build it separately as a module for hot-plugging. The value 'y' for a CONFIG_ symbol means the corresponding driver will be part of the kernel image; the value 'm' means it will be built as a module; and the value 'n' means it won't be built at all. Where are these values stored? There is a file called .config in the top level directory, which holds these values. So, the .config file is the output of a configuration target such as menuconfig.
Where are these symbols used? In the makefile, as shown below:

obj-$(CONFIG_MY_DRIVER) += my_driver.o

So, if CONFIG_MY_DRIVER is set to the value 'y', the driver my_driver.c will be built as part of the kernel image; if set to the value 'm', it will be built as a module with the extension .ko; and for the value 'n', it won't be compiled at all.
As you now know a little more about kbuild, let's consider adding a simple character driver to the kernel tree. The first step is to write the driver and place it at the correct location. I have a file named my_driver.c. Since it's a character driver, I will prefer adding it at the drivers/char/ sub-directory. So copy this to the location drivers/char in the kernel.
The next step is to add a configuration entry in the drivers/char/Kconfig file. Each entry can be of type bool, tristate, int, string or hex. bool means that the configuration symbol can have the values 'y' or 'n', while tristate means it can have the values 'y', 'm' or 'n'. And 'int', 'string' and 'hex' mean that the value can be an integer, string or hexadecimal, respectively. Given below is the segment of code added in drivers/char/Kconfig to add this small driver to the kernel for demonstrating kbuild:

config MY_DRIVER
	tristate "Demo for My Driver"
	default m
	help
	  Adds this small driver to the kernel for demonstrating the kbuild.

The first line defines the configuration symbol. The second specifies the type for the symbol and the text which will be shown as the menu. The third specifies the default value for this symbol, and the last two lines are for the help message. Another thing that you will generally find in a Kconfig file is 'depends on'. This is very useful when you want to select a particular feature only if its dependency is selected. For example, if we are writing a driver for an i2c EEPROM, then the menu option for the driver should appear only if the i2c driver is selected. This can be achieved with the 'depends on' entry.
plugging of the drivers, which means, we can dynamically After saving the above changes in Kconfig, execute the
add and remove the drivers from the running kernel. The following command:
drivers which can be added/removed dynamically are known
as modules. However, drivers that are part of the kernel $ make menuconfig
image can't be removed dynamically. So, there are two ways
to have a driver in the kernel. One is to build it as a part of Now, navigate to Device Drivers->Character devices and
the kernel, and the other is to build it separately as a module you will see an entry for My Driver.
for hot-plugging. The value ‘y' for CONFIG_, means the By default, it is supposed to be built as a module. Once
corresponding driver will be part of the kernel image; the you are done with configuration, exit the menu and save the
value ‘m' means it will be built as a module and value ‘n' configuration. This saves the configuration in .config file. Now,

58  |  August 2014  |  OPEN SOURCE For You  |  www.OpenSourceForU.com


Now, open the .config file, and there will be an entry as shown below:

CONFIG_MY_DRIVER=m

Here, the driver is configured to be built as a module. Also, one thing worth noting is that the symbol 'MY_DRIVER' in Kconfig is prefixed with CONFIG_.
Now, just adding an entry in the Kconfig file and configuration alone won't compile the driver. There has to be a corresponding change in the makefile as well. So, add the following line to the makefile:

obj-$(CONFIG_MY_DRIVER) += my_driver.o

After the kernel is compiled, the module my_driver.ko will be placed at drivers/char/. This module can be inserted in the kernel with the following command:

$ insmod my_driver.ko

Aren't these configuration symbols needed in the C code? Yes, or else how will the conditional compilation be taken care of? How are these symbols included in C code? During the kernel compilation, the Kconfig and .config files are read, and are used to generate the C header file named autoconf.h. This is placed at include/generated and contains the #defines for the configuration symbols. These symbols are used by the C code to conditionally compile the required code.
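As a quick sketch of what this enables (not code from the article; CONFIG_MY_DRIVER is the demo symbol defined above), a source file can test the generated symbols with the preprocessor, or with the kernel's IS_ENABLED() helper:

#include <linux/kernel.h>	/* brings in IS_ENABLED() via linux/kconfig.h */

static void my_driver_report(void)
{
#ifdef CONFIG_MY_DRIVER		/* defined by autoconf.h only for 'y' (built-in) */
	pr_info("my_driver is built into the kernel\n");
#endif
	if (IS_ENABLED(CONFIG_MY_DRIVER))	/* true for both 'y' and 'm' */
		pr_info("my_driver is enabled in the .config\n");
}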
Now, let’s suppose I have configured the kernel and that it on the board. Also, some boards have an audio codec and you
works fine with this configuration. And, if I make some new need to have a driver for that codec. Likewise, there would be
changes in the kernel configuration, the earlier ones will be switch interfaces, a matrix keypad, external eeprom, and so on.
overwritten. In order to avoid this from happening, we can save
.config file in the arch/arm/configs directory with a name like LSP placement
my_config, for instance. And next time, we can execute the LSP is placed under a specific <arch> folder of the kernel's
following command to configure the kernel with older options: arch folder. For example, architecture-specific code for ARM
resides in the arch/arm directory. This is about the code, but
$ make my_config_defconfig you also need the headers which are placed under arch/arm/
include/asm. However, board-specific code is placed at arch/
Linux Support Packages (LSP)/Board Support arm/mach-<board_name> and corresponding headers are
Packages (BSP) placed at arch/arm/mach-<soc architecture>/include. For
One of the most important and probably the most example, LSP for Beagle Board is placed at arch/arm/mach-
challenging thing in porting is the development of Board omap2/board-omap3beagle.c and corresponding headers
Support Packages (BSP). BSP development is a one- are placed at arch/arm/mach-omap2/include/mach/. This is
time effort during the product development lifecycle and, shown in figure 4.
obviously, the most critical. As we have discussed, porting
involves architecture porting and board porting. Board Machine ID
porting involves board-specific initialisation code that Every board in the kernel is identified by a machine ID.
includes initialisation of the various interfaces such as This helps the kernel maintainers to manage the boards
memory, peripherals such as serial, and i2c, which in turn, based on ARM architecture in the source tree. This ID is
involves the driver porting. passed to the kernel from the second stage bootloader such
There are two categories of drivers. One is the standard as u-boot. For the kernel to boot properly, there has to be a
device driver such as the i2c driver and block driver match between the kernel and the second stage boot loader.
located at the standard directory location. Another is the This information is available in arch/arm/tools/mach-types
custom interface or device driver, which includes the and is used to generate the file linux/include/generated/

www.OpenSourceForU.com  |  OPEN SOURCE For You  |  August 2014  |  59


The macros defined by mach-types.h are used by the rest of the kernel code. For example, the machine ID for the Beagle Board is 1546, and this is the number which the second stage bootloader passes to the kernel. For registering a new board for ARM, provide the board details at http://www.arm.linux.org.uk/developer/machines/?action=new.

Figure 4: LSP placement in kernel source

Note: The porting concepts described over here are specific to boards based on the ARM platform and may differ for other architectures.

MACHINE_START macro
One of the steps involved in kernel porting is to define the initialisation functions for the various interfaces on the board, such as serial, Ethernet, GPIO, etc. Once these functions are defined, they need to be linked with the kernel so that it can invoke them during boot-up. For this, the kernel provides the macro MACHINE_START. Typically, a MACHINE_START macro looks like what's shown below:

MACHINE_START(MY_BOARD, "My Board for Demo")
	.atag_offset = 0x100,
	.init_early = my_board_early,
	.init_irq = my_board_irq,
	.init_machine = my_board_init,
MACHINE_END

Let's understand this macro. MY_BOARD is the machine ID defined in arch/arm/tools/mach-types. The second parameter to the macro is a string describing the board. The next few lines specify the various initialisation functions, which the kernel has to invoke during boot-up. These include the following:
.atag_offset: Defines the offset in RAM where the boot parameters will be placed. These parameters are passed from the second stage bootloader, such as u-boot.
my_board_early: Calls the SOC initialisation functions. This function will be defined by the SOC vendor, if the kernel is ported for it.
my_board_irq: Initialisation related to interrupts is done over here.
my_board_init: All the board-specific initialisation is done here. This function should be defined during the board porting. This includes things such as setting up the pin multiplexing, initialisation of the serial console, initialisation of RAM, initialisation of Ethernet, USB and so on.
MACHINE_END ends the macro. This macro is defined in arch/arm/include/asm/mach/arch.h.
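To make the .init_machine hook concrete, here is a minimal sketch of what my_board_init might do. The helper name and the two device tables are placeholders invented for illustration, while platform_add_devices() and spi_register_board_info() are the standard kernel registration calls (declared in linux/platform_device.h and linux/spi/spi.h):

/* A sketch of a board init hook; my_board_mux_init(), my_board_devices[]
 * and my_board_spi_info[] are placeholders, not code from the article. */
static void __init my_board_init(void)
{
	my_board_mux_init();		/* pin multiplexing and console setup */
	platform_add_devices(my_board_devices,
			     ARRAY_SIZE(my_board_devices));	/* on-board devices */
	spi_register_board_info(my_board_spi_info,
				ARRAY_SIZE(my_board_spi_info));	/* SPI slaves */
}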

How to begin with porting
The most common and recommended way to begin with porting is to start with some reference board which closely resembles yours. So, if you are porting for a board based on the OMAP3 architecture, take the Beagle Board as a reference. Also, for porting, you should understand the system very well. Depending on the features available on your board, configure the kernel accordingly. To start with, just enable the minimal set of features required to boot the kernel. This may include, but not be limited to, the initialisation of RAM, GPIO subsystems, serial interfaces, and the filesystem drivers for mounting the root filesystem. Once the kernel boots up with the minimal configuration, start adding the new features, as required.
So, let's summarise the steps involved in porting:
1. The first step is to register the machine with the kernel maintainer and get the unique ID for your board. While this is not necessary to begin with porting, it needs to be done eventually, if patches are to be submitted to the mainline. Place the machine ID in arch/arm/tools/mach-types.
2. Create the board-specific file 'board-<board_name>' at arch/arm/mach-<soc> and define the MACHINE_START for the new board. For example, the board-specific file for the Panda Board resides at arch/arm/mach-omap2/board-omap4panda.c.
3. Update the Kconfig file at arch/arm/mach-<soc> to add an entry for the new board, as shown below:

config MACH_MY_BOARD
	bool "My Board for Demo"
	depends on ARCH_OMAP3
	default y

4. Update the corresponding makefile, so that the board-specific file gets compiled. This is shown below:

obj-$(CONFIG_MACH_MY_BOARD) += board-my_board.o

5. Create a default configuration file for the new board. To begin with, take any .config file as a starting point and customise it for the new board. Place the working .config file at arch/arm/configs/my_board_defconfig.

By: Pradeep Tewani
The author works at Intel, Bangalore. He shares his learnings on Linux & embedded systems through his weekend workshops. Learn more about his experiments at http://sysplay.in. He can be reached at [email protected].


Writing an RTC Driver Based on the SPI Bus

Most computers have one or more hardware clocks that display the current time. These are 'Real Time Clocks' or RTCs. Battery backup is provided for one of these clocks so that time is tracked even when the computer is switched off. RTCs can be used for alarms and other functions like switching computers on or off. This article explains how to write Linux device drivers for SPI-based RTC chips.

We will focus on the RTC DS1347 to explain how device drivers are written for RTC chips. You can refer to the RTC DS1347 datasheet for a complete understanding of this driver.

Linux SPI subsystem
In Linux, the SPI subsystem is designed in such a way that the system running Linux is always an SPI master. The SPI subsystem has three parts, which are listed below.
The SPI master driver: For each SPI bus in the system, there will be an SPI master driver in the kernel, which has routines to read and write on that SPI bus. Each SPI master driver in the kernel is identified by an SPI bus number. For the purposes of this article, let's assume that the SPI master driver is already present in the system.
The SPI slave device: This interface provides a way of describing the SPI slave device connected to the system. In this case, the slave device is RTC DS1347. Describing the SPI slave device is an independent task that can be done as discussed in the section on 'Registering the RTC DS1347 as an SPI slave device'.
The SPI protocol driver: This interface provides methods to read and write the SPI slave device (RTC DS1347). Writing an SPI protocol driver is described in the section on 'Registering the RTC DS1347 SPI protocol driver'.
The steps for writing an RTC DS1347 driver based on the SPI bus are as follows:
1. Register RTC DS1347 as an SPI slave device with the SPI master driver, based on the SPI bus number to which the SPI slave device is connected.
2. Register the RTC DS1347 SPI protocol driver.
3. Once the probe routine of the protocol driver is called, register the RTC DS1347 protocol driver's read and write routines with the Linux RTC subsystem.


After all this, the Linux RTC subsystem can use the registered protocol driver's read and write routines to read and write the RTC.

Figure 1: RTC DS1347 driver block diagram

RTC DS1347 hardware overview
RTC DS1347 is a low current, SPI compatible real time clock. The information it provides includes the seconds, minutes and hours of the day, as well as what day, date, month and year it is. This information can either be read from or be written to the RTC DS1347 using the SPI interface. RTC DS1347 acts as a slave SPI device and the microcontroller connected to it acts as the SPI master device. The CS pin of the RTC is asserted 'low' by the microcontroller to initiate the transfer, and de-asserted 'high' to terminate the transfer. The DIN pin of the RTC transfers data from the microcontroller to the RTC, and the DOUT pin transfers data from the RTC to the microcontroller. The SCLK pin is used to provide a clock by the microcontroller to synchronise the transfer between the microcontroller and the RTC.
The RTC DS1347 works in SPI Mode 3. Any transfer between the microcontroller and the RTC requires the microcontroller to first send the command/address byte to the RTC. Data is then transferred out of the DOUT pin if it is a read operation; else, data is sent by the microcontroller to the DIN pin of the RTC if it is a write operation. If the MSB of the address is one, then it is a read operation; and if it is zero, then it is a write operation. All the clock information is mapped to SPI addresses as shown in Table 1.

Read address   Write address   RTC register   Range
0x81           0x01            Seconds        0 - 59
0x83           0x03            Minutes        0 - 59
0x85           0x05            Hours          0 - 23
0x87           0x07            Date           1 - 31
0x89           0x09            Month          1 - 12
0x8B           0x0B            Day            1 - 7
0x8D           0x0D            Year           0 - 99
0x8F           0x0F            Control        00H - 81H
0x97           0x17            Status         03H - E7H
0xBF           0x3F            Clock burst    -

Table 1: RTC DS1347 SPI register map

When the clock burst command is given to the RTC, the latter will give out the values of seconds, minutes, hours, the date, month, day and year, one by one, and continuously. The clock burst command is used in the driver to read the RTC.

The Linux RTC subsystem
The Linux RTC subsystem is the interface through which Linux manages the time of the system. The following procedure is what the driver goes through to register the RTC with the Linux RTC subsystem:
1. Specify the driver's RTC read and write routines through the function pointer interface provided by the RTC subsystem.
2. Register with the RTC subsystem using the devm_rtc_device_register API.
The RTC subsystem requires that the driver fill the struct rtc_class_ops structure, which has the following function pointers:
read_time: This routine is called by the kernel when the user application executes a system call to read the RTC time.
set_time: This routine is called by the kernel when the user application executes a system call to set the RTC time.
There are other function pointers in the structure, but the above two are the minimum an interface requires for an RTC driver. Whenever the kernel wants to perform any operation on the RTC, it calls the above function pointers, which will call the driver's RTC routines.
After the above RTC operations structure has been filled, it has to be registered with the Linux RTC subsystem. This is done through the kernel API:

devm_rtc_device_register(struct device *dev, const char *name, const struct rtc_class_ops *ops, struct module *owner);

The first parameter is the device object, the second is the name of the RTC driver, the third is the driver RTC operations structure that has been discussed above, and the last is the owner, which is the THIS_MODULE macro.


Registering the RTC DS1347 as an SPI slave device
The Linux kernel requires a description of all the devices connected to it. Each subsystem in the Linux driver model has a way of describing the devices related to that subsystem. Similarly, the SPI subsystem represents devices based on the SPI bus as a struct spi_device. This structure defines the SPI slave device connected to the processor running the Linux kernel. The device structure is written in the board file in the Linux kernel, which is a part of the board support package. The board file resides in the arch/ directory in Linux (for example, the board file for the Beagle board is in arch/arm/mach-omap2/board-omap3beagle.c). The struct spi_device is not written directly; instead, a different structure called struct spi_board_info is filled and registered, which creates the struct spi_device in the kernel automatically and links it to the SPI master driver that contains the routines to read and write on the SPI bus. The struct spi_board_info for RTC DS1347 can be written in the board file as follows:

struct spi_board_info spi_board_info[] __initdata = {
	{
		.modalias = "ds1347",
		.bus_num = 1,
		.chip_select = 1,
	},
};

modalias is the name of the driver, used to identify the driver that is related to this SPI slave device—in which case the driver will have the same name. bus_num is the number of the SPI bus. It is used to identify the SPI master driver that controls the bus to which this SPI slave device is connected. chip_select is used in case the SPI bus has multiple chip select pins; this number is then used to identify the chip select pin to which this SPI slave device is connected.
The next step is to register the struct spi_board_info with the Linux kernel. In the board file initialisation code, the structure is registered as follows:

spi_register_board_info(spi_board_info, 1);

The first parameter is the array of struct spi_board_info, and the second parameter is the number of elements in the array. In the case of RTC DS1347, it is one. This API will check if the bus number specified in the spi_board_info structure matches any of the master driver bus numbers that are registered with the Linux kernel. If any of them match, it will create the struct spi_device and initialise the fields of the spi_device structure as follows:

master = the SPI master driver which has the same bus number as bus_num in the spi_board_info structure
chip_select = chip_select of spi_board_info
modalias = modalias of spi_board_info

After initialising the above fields, the structure is registered with the Linux SPI subsystem. The following are the fields of the struct spi_device which will be initialised by the SPI protocol driver as needed by the driver and, if not needed, will be left empty:

max_speed_hz = the maximum rate of transfer on the bus
bits_per_word = the number of bits per transfer
mode = the mode in which the SPI device works

In the above specified manner, any SPI slave device is registered with the Linux kernel, and the struct spi_device is created and linked to the Linux SPI subsystem to describe the device. This spi_device struct will be passed as a parameter to the SPI protocol driver probe routine when the SPI protocol driver is loaded.

Registering the RTC DS1347 SPI protocol driver
The driver is the medium through which the kernel interacts with the device connected to the system. In the case of the SPI device, it is called the SPI protocol driver. The first step in writing an SPI protocol driver is to fill the struct spi_driver structure. For RTC DS1347, the structure is filled as follows:

static struct spi_driver ds1347_driver = {
	.driver = {
		.name = "ds1347",
		.owner = THIS_MODULE,
	},
	.probe = ds1347_probe,
};

The name field has the name of the driver (this should be the same as in the modalias field of the struct spi_board_info). 'Owner' is the module that owns the driver; THIS_MODULE is the macro that refers to the current module in which the driver is written (the 'owner' field is used for reference counting of the module owning the driver). The probe is the most important routine; it is called when the device and the driver are both registered with the kernel.
The next step is to register the driver with the kernel. This is done by the macro module_spi_driver(struct spi_driver *). In the case of RTC DS1347, the registration is done as follows:

module_spi_driver(ds1347_driver);

The probe routine of the driver is called if any of the following cases are satisfied:
1. If the device is already registered with the kernel and the driver is then registered with the kernel.
2. If the driver is registered first; then, when the device is registered with the kernel, the probe routine is called.
In the probe routine, we need to read and write on the SPI bus, for which certain common steps need to be followed. These steps are written in generic routines, which are called throughout to avoid duplicating steps. The generic write routine works as follows:
1. First, the address of the SPI slave device is written on the SPI bus. In the case of the RTC DS1347, the address should have its most significant bit reset for the write operation (as per the DS1347 datasheet).
2. Then the data is written to the SPI bus.


Since this is a common operation, a separate routine, ds1347_write_reg, is written as follows:

static int ds1347_write_reg(struct device *dev, unsigned char address, unsigned char data)
{
	struct spi_device *spi = to_spi_device(dev);
	unsigned char buf[2];

	buf[0] = address & 0x7F;
	buf[1] = data;

	return spi_write_then_read(spi, buf, 2, NULL, 0);
}

The parameters to the routine are the address to which the data has to be written, and the data which has to be written to the device. spi_write_then_read is the routine that has the following parameters:
struct spi_device: the slave device to be written to
tx_buf: the transmission buffer (this can be NULL if receiving only)
tx_no_bytes: the number of bytes in the tx buffer
rx_buf: the receive buffer (this can be NULL if transmitting only)
rx_no_bytes: the number of bytes in the receive buffer
In the case of the RTC DS1347 write routine, only two bytes are to be written: one is the address and the other is the data for that address.
The reading of the SPI bus is done as follows:
1. First, the address of the SPI slave device is written on the SPI bus. In the case of RTC DS1347, the address should have its most significant bit set for the read operation (as per the DS1347 datasheet).
2. Then the data is read from the SPI bus.
Since this is a common operation, a separate routine, ds1347_read_reg, is written as follows:

static int ds1347_read_reg(struct device *dev, unsigned char address, unsigned char *data)
{
	struct spi_device *spi = to_spi_device(dev);

	*data = address | 0x80;

	return spi_write_then_read(spi, data, 1, data, 1);
}

In the case of RTC DS1347, only one byte, which is the address, is written on the SPI bus, and one byte is to be read from the SPI device.

RTC DS1347 driver probe routine
When the probe routine is called, it passes an spi_device struct, which was created when spi_board_info was registered. The first thing the probe routine does is to set the SPI parameters to be used to write on the bus. The first parameter is the mode in which the SPI device works. In the case of RTC DS1347, it works in Mode 3 of the SPI:

spi->mode = SPI_MODE_3;

bits_per_word is the number of bits transferred. In the case of RTC DS1347, it is 8 bits.

spi->bits_per_word = 8;

After changing the parameters, the kernel has to be informed of the changes, which is done by calling the spi_setup routine as follows:

spi_setup(spi);

The following steps are carried out to check and configure the RTC DS1347:
1. First, the RTC control register is read to see if the RTC is present and if it responds to the read command.
2. Then the write protection of the RTC is disabled so that the code is able to write to the RTC registers.
3. Then the oscillator of the RTC DS1347 is started so that the RTC starts working.
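A sketch of the first two steps using the helpers defined above is given below. The control register address (0x0F) comes from Table 1; the write-protect bit position (bit 7) is taken from the DS1347 datasheet, and this is a simplified illustration rather than the actual driver's code:

/* Inside ds1347_probe(): a simplified sketch of steps 1 and 2.
 * 0x0F is the control register (Table 1); bit 7 is the write-protect
 * bit as per the DS1347 datasheet. */
unsigned char data;
int res;

res = ds1347_read_reg(&spi->dev, 0x0F, &data);	/* step 1: is the chip responding? */
if (res)
	return res;

data &= ~(1 << 7);				/* step 2: clear the write-protect bit */
res = ds1347_write_reg(&spi->dev, 0x0F, data);
if (res)
	return res;

/* Step 3 (starting the oscillator) manipulates the status register
 * (0x17) in the same read-modify-write fashion. */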
Till this point, the kernel is informed that the RTC is on an SPI bus and it is configured. After the RTC is ready to be read and written by the user, the read and write routines of the RTC are to be registered with the Linux kernel RTC subsystem as follows:

rtc = devm_rtc_device_register(&spi->dev, "ds1347", &ds1347_rtc_ops, THIS_MODULE);

The parameters are the name of the RTC driver, the RTC operations structure that contains the read and write operations of the RTC, and the owner of the module. After this registration, the Linux kernel will be able to read and write the RTC of the system. The RTC operations structure is filled as follows:

static const struct rtc_class_ops ds1347_rtc_ops = {
	.read_time = ds1347_read_time,
	.set_time = ds1347_set_time,
};

The RTC read routine is implemented as follows. The RTC read routine has two parameters: one is the device object and the other is a pointer to the Linux RTC time structure, struct rtc_time.
The rtc_time structure has the following fields, which have to be filled by the driver:
tm_sec: seconds (0 to 59, same as RTC DS1347)
tm_min: minutes (0 to 59, same as RTC DS1347)
tm_hour: hour (0 to 23, same as RTC DS1347)
tm_mday: day of month (1 to 31, same as RTC DS1347)
tm_mon: month (0 to 11, but RTC DS1347 provides months from 1 to 12, so the value returned by the RTC needs to have 1 subtracted from it)
tm_year: year (years since 1900; RTC DS1347 stores years from 0 to 99, and the driver considers the RTC valid from 2000 to 2099, so 100 is added to the value returned from the RTC to get the offset from 1900)
First, the clock burst command is executed on the RTC, which gives out all the date and time registers through the SPI interface, i.e., a total of 8 bytes:

buf[0] = DS1347_CLOCK_BURST | 0x80;
err = spi_write_then_read(spi, buf, 1, buf, 8);
if (err)
	return err;

Then the read date and time are stored in the Linux date and time structure of the RTC. The time in Linux is in binary format, so the conversion from BCD is also done:

dt->tm_sec = bcd2bin(buf[0]);
dt->tm_min = bcd2bin(buf[1]);
dt->tm_hour = bcd2bin(buf[2] & 0x3F);
dt->tm_mday = bcd2bin(buf[3]);
dt->tm_mon = bcd2bin(buf[4]) - 1;
dt->tm_wday = bcd2bin(buf[5]) - 1;
dt->tm_year = bcd2bin(buf[6]) + 100;

After storing the date and time of the RTC in the Linux RTC date and time structure, the date and time are validated through the rtc_valid_tm API. After validation, the validation status from the API is returned—if the date and time are valid, then the kernel will return the date and time in the structure to the user application; else it will return an error:

return rtc_valid_tm(dt);

The RTC write routine is implemented as follows. First, the local buffer is filled with the clock burst write command, and the date and time passed to the driver write routine. The clock burst command informs the RTC that the date and time will follow this command, and this is to be written to the RTC. Also, the time in the RTC is in the BCD format, so the conversion is also done:

buf[0] = DS1347_CLOCK_BURST & 0x7F;
buf[1] = bin2bcd(dt->tm_sec);
buf[2] = bin2bcd(dt->tm_min);
buf[3] = (bin2bcd(dt->tm_hour) & 0x3F);
buf[4] = bin2bcd(dt->tm_mday);
buf[5] = bin2bcd(dt->tm_mon + 1);
buf[6] = bin2bcd(dt->tm_wday + 1);

/* year in linux is from 1900, i.e., in the range of 100;
   in the rtc it is from 00 to 99 */
dt->tm_year = dt->tm_year % 100;

buf[7] = bin2bcd(dt->tm_year);
buf[8] = bin2bcd(0x00);

After this, the data is sent to the RTC device, and the status of the write is returned to the kernel as follows:

return spi_write_then_read(spi, buf, 9, NULL, 0);

Contributing to the RTC subsystem
The RTC DS1347 is a Maxim (Dallas) RTC. There are various other RTCs in the Maxim database that are not supported by the Linux kernel, just as it is with various other manufacturers of RTCs. All the RTCs that are supported by the Linux kernel are present in the drivers/rtc directory of the kernel. The following steps can be taken to write support for an RTC in the Linux kernel:
1. Pick any RTC from the manufacturer's (e.g., Maxim) database which does not have support in the Linux kernel (see the drivers/rtc directory for supported RTCs).
2. Download the datasheet of the RTC and study its features.
3. Refer to rtc-ds1347.c and other RTC files in the drivers/rtc directory in the Linux kernel, and go over this article for how to implement RTC drivers.
4. Write the support for the RTC.
5. Use git (see 'References' below) to create a patch for the RTC driver written.
6. Submit the patch by mailing it to the Linux RTC mailing lists:
• [email protected]
• [email protected]
• [email protected]
7. The patch will be reviewed, any changes required will be suggested and, if everything is fine, the driver will be acknowledged and added to the Linux tree.
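For steps 5 and 6, the typical flow looks like the sketch below (commonly used git commands; the file, subject and patch names are hypothetical):

$ git add drivers/rtc/rtc-myrtc.c
$ git commit -s -m "rtc: add support for the myrtc chip"
$ git format-patch -1
$ git send-email --to [email protected] \
      --cc [email protected] 0001-rtc-add-support-for-the-myrtc-chip.patch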
return rtc_valid_tm(dt);

The RTC write routine is implemented as follows. References


First, the local buffer is filled with the clock burst write
command, and the date and time passed to the driver write [1] DS1347 datasheet, datasheets.maximintegrated.com/en/ds/
DS1347.pdf
routine. The clock burst command informs the RTC that [2] DS1347 driver file https://git.kernel.org/cgit/linux/kernel/git/
the date and time will follow this command, which is to torvalds/linux.git/tree/drivers/rtc/rtc-ds1347.c
be written to the RTC. Also, the time in RTC is in the bcd [3] Writing and submitting your first Linux kernel patch video,
format; so the conversion is also done: https://www.youtube.com/watch?v=LLBrBBImJt4
[4] Writing and submitting your first Linux kernel patch text file
and presentation, https://github.com/gregkh/kernel-tutorial
buf[0] = DS1347_CLOCK_BURST & 0x7F;
buf[1] = bin2bcd(dt->tm_sec);
By: Raghavendra Chandra Ganiga
buf[2] = bin2bcd(dt->tm_min);
buf[3] = (bin2bcd(dt->tm_hour) & 0x3F); The author is an embedded firmware development engineer
at General Industrial Controls Pvt Ltd, Pune. His interests lie in
buf[4] = bin2bcd(dt->tm_mday);
microcontrollers, networking firmware, RTOS development and
buf[5] = bin2bcd(dt->tm_mon + 1); Linux device drivers.
buf[6] = bin2bcd(dt->tm_wday + 1);




Use GIT for Linux Kernel Development

This article is aimed at newbie developers who are planning to set up a development
environment or move their Linux kernel development environment to GIT.

GIT is a free, open source distributed version control tool. It is easy to learn and is also fast, as most of the operations are performed locally. It has a very small footprint. For a comparison of GIT with SVN (Subversion), another version control tool, refer to http://git-scm.com/about/small-and-fast.
GIT allows multiple local copies (branches), each totally different from the others, and it allows the making of clones of the entire repository, so each user will have a full backup of the main repository. Figure 1 gives one among the many pictorial representations of GIT. Developers can clone the main repository, maintain their own local copies (branch and branch1) and push the code changes (branch1) to the main repository. For more information on GIT, refer to http://git-scm.com/book.

Note: GIT is under development and hence changes are often pushed into its repositories. To get the latest GIT code, use the following command:
$ git clone git://git.kernel.org/pub/scm/git/git.git

The kernel
The kernel is the lowest level program that manages communications between the software and hardware, using IPC and system calls. It resides in the main memory (RAM) whenever an operating system is loaded.
The kernel is mainly of two types - the micro kernel and the monolithic kernel. The Linux kernel is monolithic, as is depicted clearly in Figure 2. Based on that diagram, the kernel can be viewed as a resource manager; the managed resource could be a process, hardware, memory or storage devices. More details about the internals of the Linux kernel can be found at http://kernelnewbies.org/LinuxVersions and https://www.kernel.org/doc/Documentation/.

Linux kernel files and modules
In Ubuntu, kernel files are stored under the /boot/ directory (run ls /boot/ from the command prompt). Inside this directory, the kernel file will look something like this:

'vmlinuz-A.B.C-D'

…where A.B is 3.2, C is your version and D is a patch or fix. Let's delve deeper into certain aspects depicted in Figure 3:
• Vmlinuz-3.2.0-29-generic: In vmlinuz, 'z' indicates the

'compressed' Linux kernel. With the development of virtual memory, the prefix vm was used to indicate that the kernel supports virtual memory.
• Initrd.img-3.2.0-29-generic: An initial 'ramdisk' for your kernel.
• Config-3.2.0-29-generic: The 'config' file is used to configure the kernel. We can configure it, define options and determine which modules to load into the kernel image while compiling.
• System.map-3.2.0-29-generic: This is used for memory management before the kernel loads.

Figure 1: GIT

Figure 2: Linux kernel architecture

Kernel modules
The interesting thing about kernel modules is that they can be loaded or unloaded at runtime. These modules typically add functionality to the kernel—file systems, devices and system calls. They are located under /lib/modules, with the extension .ko.

Setting up a development environment
Let's set up the host machine with Ubuntu 14.04. Building the Linux kernel requires a few tools like GIT, make, gcc and ctags/ncurses-dev. Run the following command:

sudo apt-get install git-core gcc make libncurses5-dev exuberant-ctags

Once GIT is installed on the local machine (I am using Ubuntu), open a command prompt and issue the following commands to identify yourself to GIT:

git config --global user.name "Vinay Patkar"
git config --global user.email [email protected]

Let's set up our own local repository for the Linux kernel.

Note: 1. Multiple Linux kernel repositories exist online. Here, we pull Linus Torvalds' Linux-2.6 GIT code:
git clone http://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux-2.6.git
2. In case you are behind a proxy server, set the proxy by running git config --global https.proxy https://domain\username:password@proxy:port.

Now you can see a directory named linux-2.6 in the current directory. Do a GIT pull to update your repository:

cd linux-2.6
git pull

Figure 4: GIT pull

Note: Alternatively, you can clone the latest stable build as shown below:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux-stable.git
cd linux-stable

Next, find the latest stable kernel tag by running the following:

git tag -l | less
git checkout -b stable v3.9

Figure 5: GIT checkout

Note: RC is the release candidate; it is a functional but not a stable build.

Once you have the latest kernel code pulled, create your own local branch using GIT. Make some changes to the code and, to commit the changes, run git commit -a.
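As a quick sketch of that local-branch workflow (the branch name and commit message below are just placeholders):

git checkout -b my-feature        # create and switch to a local topic branch
# ... edit the source files ...
git add -p                        # review and stage the changes hunk by hunk
git commit -s -m "subsystem: describe the change"   # -s adds the Signed-off-by line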



Setting up the kernel configuration
Many kernel drivers can be turned on or off, or be built as modules. The .config file in the kernel source directory determines which drivers are built. When you download the source tree, it doesn't come with a .config file. You have several options for generating a .config file; the easiest is to duplicate your current config.


There are multiple files that start with config; find the one that is associated with your kernel by running uname -r. Then run:

cp /boot/config-`uname -r`* .config
or
cp /boot/config-3.13.0-24-generic .config

make defconfig <---- for the default configuration
or
make nconfig <---- for a minimal configuration, where we can enable or disable features
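One more option worth noting (a common practice, though not part of the original listing): if you start from a copied .config, running the following will prompt only for options that are new in this source tree:

make oldconfig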

At this point, edit the Makefile as shown below:


VERSION = 3
PATCHLEVEL = 9
SUBLEVEL = 0
EXTRAVERSION = -rc9 <-- [edit this part]
NAME = Saber-toothed Squirrel
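To verify the version string that the tree will now build (a quick check using the kernel's own make target), run the following; it should print something like 3.9.0-rc9:

$ make kernelversion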

Now run:

make

Figure 6: make

This will take some time and, if everything goes well, you can then install the newly built kernel by running the following commands:

sudo make modules_install
sudo make install

At this point, you should have your own version of the


kernel, so reboot the machine and log in as the super user (root)
and check uname -a. It should list your own version of the Linux kernel (something like 'Linux 3.9.0-rc9').

Figure 7: Modules_install and install

References
[1] http://linux.yyz.us/git-howto.html
[2] http://kernelnewbies.org/KernelBuild
[3] https://www.kernel.org/doc/Documentation/
[4] http://kernelnewbies.org/LinuxVersions
By: Vinay Patkar
The author works as a software development engineer at Dell India R&D Centre, Bengaluru, and has close to two years' experience in automation and Windows Server OS. He is interested in virtualisation and cloud computing technologies.




Managing Your IT Infrastructure Effectively with Zentyal
Zentyal (formerly eBox Platform) is a program for servers used by small and medium businesses
(SMBs). It plays multiple roles—as a gateway, network infrastructure manager, unified threat
manager, office server, unified communications server or a combination of all of the above. This is
the third and last article in our series on Zentyal.

In previous articles in this series, we discussed various scenarios that included DHCP, DNS and setting up a captive portal. In this article, let's discuss the HTTP proxy, traffic shaping and the setting up of the 'Users and Computers' modules.

The HTTP proxy set-up
We will start with the set-up of the HTTP proxy module of Zentyal. This module will be used to filter out unwanted traffic from our network. The steps for the configuration are as follows:
1. Open the Zentyal dashboard by using the domain name set up in the previous article, or use the IP address.
2. The URL will be https://domain-name.
3. Enter the user ID and password.
4. From the dashboard, select HTTP Proxy under the Gateway section. This will show different options like General settings, Access rules, Filter profiles, Categorized Lists and Bandwidth throttling.
5. Select General settings to configure some basic parameters.
6. Under General settings, select Transparent Proxy. This option is used to manage proxy settings without making clients aware of the proxy server.
7. Check Ad Blocking, which will block all the advertisements from the HTTP traffic.
8. Cache size defines the stored HTTP traffic storage area. Mention the size in MBs.
9. Click Change and then click Save changes.
10. To filter the unwanted sites from the network, block




them using Filter profiles. Click Filter profiles under HTTP proxy.
11. Click Add new.
12. Enter the name of the profile. In our case, we used Spam. Click Add and save the changes.
13. Click the button under Configuration.
14. To block all spam sites, let's use the Threshold option. The various options of Threshold decide how strictly to block the enlisted sites. Let's select Very strict under Threshold and click Change. Then click Save changes to save the changes permanently.
15. Select Use antivirus to block all incoming files which may be viruses. Click the Change and then the Save changes buttons.
16. To add a site to be blocked by the proxy, click Domain and URLs and, under Domain and URL rules, click the Add new button.
17. You will then be asked for the domain name. Enter the domain name of the site which is to be blocked. The Decision option will instruct the proxy to allow or deny the specified site. Then click Add and Save changes.
18. To activate the Spam profile, click Access rules under HTTP proxy.
19. Click Add new. Define the time period and the days when the profile is to be applied.
20. Select Any from the Source dropdown menu and then select Apply filter profile from the Decision dropdown menu. You will see a Spam profile.
21. Click Add and Save changes.
With all the above steps, you will be able to either block or allow sites, depending on what you want your clients to have access to. All the other settings can be experimented with, as per your requirements.

Bandwidth throttling
This setting under HTTP proxy is used to add delay pools, so that a big file that one user wishes to download does not hamper the download speed of the other users. To do this, follow the steps mentioned below:
1. First create the network object on which you wish to apply the rule. Click Network and select Objects under the Network options.
2. Click Add new to add the network object.
3. Enter the name of the object, like LAN. Click Add, and then Save changes.
4. After you have added the network object, you have to configure members under that object. Click the icon under Members.
5. Click Add new to add members.
6. Enter the names of the members. We will use LAN users.
7. Under IP address, select the IP address range.
8. Enter your DHCP address range, since we would like to apply the rule to all the users in the network.
9. Click Add and then Save changes.
10. Till now, we have added all the users of the network on which we wish to apply the bandwidth throttling rule. Now we will apply the rule. To do this, click HTTP Proxy and select Bandwidth throttling.
11. This setting will be used to set the total amount of bandwidth that a single client can use. Click Enable per client limit.
12. Enter the Maximum unlimited size per client, to be set as a limit for a user under the network object. Enter '50 MB'. A client can now download a 50 MB file at maximum speed, but if the client tries to download a file of a size greater than the specified limit, the throttling rule will limit the speed to the maximum download rate per client. This speed option is set in the next step.
13. Enter the maximum download rate per client (for our example, enter 20). This means that if a download reaches the threshold, the speed will be decreased to 20 KBps.
14. Click Add and Save changes.

Traffic shaping set-up
With bandwidth throttling, we have set the upper limit for downloads but, to effectively manage our bandwidth, we have to use the Traffic shaping module. Follow the steps shown below:
1. Click on Traffic shaping under the Gateway section.
2. Click on Rules. This will display two sections: rules for internal interfaces and rules for external interfaces.
3. Follow the example rules given in Table 1 -- these can be used to shape the bandwidth on eth1.
Table 1
Based on the firewall | Service | Source | Destination | Priority | Guaranteed rate (KBps) | Limited rate (KBps)
Yes | - | Any | Any | 2 | 512 | 0
Yes | - | Any | Any | 1 | 512 | 0
Yes | - | Any | Any | 3 | 1024 | 2048
Yes | - | Any | Any | 3 | 1024 | 2048
Yes | - | Any | Any | 7 | 0 | 10



Table 2
Based on the firewall | Service | Source | Destination | Priority | Guaranteed rate (KBps) | Limited rate (KBps)
Yes | - | Any | Any | 7 | 0 | 10
No (Prioritise small packets) | - | - | - | 0 | 60 | 200

The rules mentioned in Table 1 will set protocols with their priority over other protocols, for guaranteed speed.
4. The rules given in Table 2 will manage the upload speed for the protocols on eth0.
5. After adding all the rules, click on Save changes.
6. With these steps, you have set the priorities of the protocols and applications. One last thing to be done here is to set the upload and download rates of the server. To do this, click Interface rates under Traffic Shaping.
7. Click Action. Change the upload and download speed of the server, as supplied by your service provider. Click Change and then Save changes.

Setting up Users and Computers
Setting up of groups and users can be done as follows.
Group set-up: For this, follow the steps shown below.
1. Click Users and Computers under the Office section.
2. Click Manage. Select groups from the LDAP tree. Click on the plus sign to add groups.
Users' set-up: To set up users for the domain system and captive portal, follow the steps shown below.
1. Click Users and Computers under the Office section.
2. Click Manage. Here you will see the LDAP tree. Select Users and click on the plus sign.
With all the information entered and passed, users can log in to the system through the captive portal.

References
[1] http://en.wikipedia.org/wiki/Proxy_server#Transparent_proxy
[2] http://en.wikipedia.org/wiki/Bandwidth_throttling
[3] http://doc.zentyal.org/en/qos.html

By: Gaurav Parashar
The author is a FOSS enthusiast, and loves to work with open source technologies like Moodle and Ubuntu. He works as an assistant dean (for IT students) at Inmantec Institutions, Ghaziabad, UP. He can be reached at [email protected]
on the plus sign to add groups.

Customer Feedback Form


Open Source For You

None

OSFY?

You can mail us at [email protected] You can send this form to


‘The Editor’, OSFY, D-87/1, Okhla Industrial Area, Phase-1, New Delhi-20. Phone No. 011-26810601/02/03, Fax: 011-26817563

www.OpenSourceForU.com  |  OPEN SOURCE For You  |  july 2014  |  71


• Windows as the host OS
  • VMware Workstation (any guest OS)
  • VirtualBox (any guest OS)
  • Hyper-V (any guest OS)
• Linux as the host OS
  • VMware Workstation
  • Microsoft Virtual PC
  • VMLite Workstation
  • VirtualBox
  • Xen

A hypervisor or virtual machine monitor (VMM) is a piece of computer software, firmware or hardware that creates and runs virtual machines. A computer on which a hypervisor is running one or more VMs is defined as a host machine. Each VM is called a guest machine. The hypervisor presents the guest OSs with a virtual operating platform, and manages the execution of the guest operating systems. Multiple instances of a variety of operating systems may share the virtualised hardware resources.

Hypervisors of Type 1 (bare metal installation) and Type 2 (hosted installation)
When implementing and deploying a cloud service, Type 1 hypervisors are used. These are associated with the concept of bare metal installation, which means no host operating system is needed to install the hypervisor. When using this technology, there is no risk of corrupting the host OS. These hypervisors are installed directly on the hardware, without the need for any other OS, and multiple VMs are created on this hypervisor.
A Type 1 hypervisor interacts directly with the hardware that is being virtualised. It is completely independent of the operating system, unlike a Type 2 hypervisor, and boots before the OS. Currently, Type 1 hypervisors are being used by all the major players in the desktop virtualisation space, including but not limited to VMware, Microsoft and Citrix.
The classical virtualisation software or Type 2 hypervisor is always installed on a host OS. If the host OS gets corrupted or crashes for any reason, the virtualisation software or Type 2 hypervisor will also crash and, obviously, all VMs and other resources will be lost. That's why the hypervisor technology with bare metal installation is very popular in the cloud computing world.
Type 2 (hosted) hypervisors execute within a conventional OS environment. With the hypervisor layer as a distinct second software level, guest OSs run at the third level above the hardware. A Type 2 hypervisor sits on top of an OS and, unlike a Type 1 hypervisor, relies heavily on the operating system. It cannot boot until the OS is already up and running and, if for any reason the OS crashes, all end users are affected. This is a big drawback of Type 2 hypervisors, as they are only as secure as the OS on which they rely. Also, since Type 2 hypervisors depend on an OS, they are not in full control of the end user's machine.

Hypervisor Type 1 products
• VMware ESXi
• Citrix Xen
• KVM (Kernel Virtual Machine)
• Hyper-V

Hypervisor Type 2 products
• VMware Workstation
• VirtualBox

Table 1: Hypervisors and their cloud service providers
Hypervisor | Cloud service provider
Xen | Amazon EC2, IBM SoftLayer, Fujitsu Global Cloud Platform, Linode, OrionVM
ESXi | VMware Cloud
KVM | Red Hat, HP, Dell, Rackspace
Hyper-V | Microsoft Azure

Data centres and uptime tier levels
Just as a virtual machine is mandatory for cloud computing, the data centre is also an essential part of the technology. All the cloud computing infrastructure is located in remote data centres where resources like computer systems and associated components, such as telecommunications and storage systems, reside. Data centres typically include redundant or backup power supplies, redundant data communications connections, environmental controls, air conditioning and fire suppression systems, as well as security devices.
The tier level is the rating or evaluation aspect of a data centre. Large data centres are used for industrial scale operations that can use as much electricity as a small town. The standards comprise a four-tiered scale, with Tier 4 being the most robust and full-featured (Table 2).

Cloud simulations
Cloud service providers charge users depending upon the service provided. In R&D, it is not always possible to have the actual cloud infrastructure for performing experiments. For any research scholar, academician or scientist, it is not feasible to hire cloud services every time and then execute their algorithms or implementations.
For the purpose of research, development and testing, open source libraries are available, which give the feel of cloud services. Nowadays, in the research market, cloud simulators are widely used by research scholars and practitioners, without the need to pay any amount to a cloud service provider.



Table 2
Tier Level | Requirements | Possible unavailability in a given year
1 | Single non-redundant distribution path serving the IT equipment; non-redundant capacity components; basic site infrastructure with expected availability of 99.671 per cent | 1729.224 minutes (28.8 hours)
2 | Meets or exceeds all Tier 1 requirements; redundant site infrastructure capacity components with expected availability of 99.741 per cent | 1361.304 minutes (22.6 hours)
3 | Meets or exceeds all Tier 1 and Tier 2 requirements; multiple independent distribution paths serving the IT equipment; all IT equipment must be dual-powered and fully compatible with the topology of a site's architecture; concurrently maintainable site infrastructure with expected availability of 99.982 per cent | 94.608 minutes (1.5 hours)
4 | Meets or exceeds all Tier 1, Tier 2 and Tier 3 requirements; all cooling equipment is independently dual-powered, including chillers, heaters, ventilation and air-conditioning (HVAC) systems; fault-tolerant site infrastructure with electrical power storage and distribution facilities with expected availability of 99.995 per cent | 26.28 minutes (0.4 hours)
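These downtime figures follow directly from the availability percentages: a year has 365 x 24 x 60 = 525,600 minutes, so Tier 1's 99.671 per cent availability permits 525,600 x (100 - 99.671)/100 minutes of downtime. A quick check from the shell:

$ echo "scale=3; 525600 * (100 - 99.671) / 100" | bc
1729.224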

Using cloud simulators, researchers can execute their algorithmic approaches on a software-based library and can get results for different parameters, including energy optimisation, security, integrity, confidentiality, bandwidth, power and many others.

Tasks performed by cloud simulators
The following tasks can be performed with the help of cloud simulators:
• Modelling and simulation of large scale cloud computing data centres
• Modelling and simulation of virtualised server hosts, with customisable policies for provisioning host resources to VMs
• Modelling and simulation of energy-aware computational resources
• Modelling and simulation of data centre network topologies and message-passing applications
• Modelling and simulation of federated clouds
• Dynamic insertion of simulation elements, stopping and resuming simulation
• User-defined policies for allocation of hosts to VMs, and policies for allotting host resources to VMs

Scope and features of cloud simulations
The scope and features of cloud simulations include:
• Data centres
• Load balancing
• Creation and execution of cloudlets
• Resource provisioning
• Scheduling of tasks
• Storage and cost factors
• Energy optimisation, and many others

Cloud simulation tools and plugins
Cloud simulation tools and plugins include:
• CloudSim
• CloudAnalyst
• GreenCloud
• iCanCloud
• MDCSim
• NetworkCloudSim
• VirtualCloud
• CloudMIG Xpress
• CloudAuction
• CloudReports
• RealCloudSim
• DynamicCloudSim
• WorkFlowSim

CloudSim
CloudSim is a famous simulator for cloud parameters, developed in the CLOUDS Laboratory at the Computer Science and Software Engineering Department of the University of Melbourne.
The CloudSim library is used for the following operations:
• Large scale cloud computing at data centres
• Virtualised server hosts with customisable policies
• Support for modelling and simulation of large scale cloud computing data centres
• Support for modelling and simulation of virtualised server hosts, with customisable policies for provisioning host resources to VMs
• Support for modelling and simulation of energy-aware computational resources
• Support for modelling and simulation of data centre network topologies and message-passing applications
• Support for modelling and simulation of federated clouds
• Support for dynamic insertion of simulation elements, as well as stopping and resuming simulation
• Support for user-defined policies to allot hosts to VMs, and policies for allotting host resources to VMs
• User-defined policies for allocation of hosts to virtual machines



The major limitation of CloudSim is the lack of a graphical user interface (GUI). But despite this, CloudSim is still used in universities and the industry for the simulation of cloud-based algorithms.

Downloading, installing and integrating CloudSim
CloudSim is free and open source software available at http://www.cloudbus.org/CloudSim/. It is a code library based on Java. This library can be used directly by integrating it with the JDK to compile and execute the code.
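For instance, one of the bundled examples can be compiled and run straight from the unpacked CloudSim directory (a sketch; the JAR name below assumes the 3.0 release, so adjust the version to match your download):

$ javac -cp jars/cloudsim-3.0.jar examples/org/cloudbus/cloudsim/examples/CloudSimExample1.java
$ java -cp jars/cloudsim-3.0.jar:examples org.cloudbus.cloudsim.examples.CloudSimExample1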
For rapid applications development and testing, CloudSim
is integrated with Java-based IDEs (Integrated Development
Environment) including Eclipse or NetBeans.
Using Eclipse or NetBeans IDE, the CloudSim library can
be accessed and the cloud algorithm implemented.
The directory structure of the CloudSim toolkit is given
below:
CloudSim/ -- CloudSim root directory
docs/ -- API documentation
examples/ -- Examples
jars/ -- JAR archives
sources/ -- Source code
tests/ -- Unit tests
CloudSim needs to be unpacked for installation. To
uninstall CloudSim, the whole CloudSim directory needs to
be removed.
There is no need to compile CloudSim source code. The
JAR files with the CloudSim package have been provided to
compile and run CloudSim applications:
• jars/CloudSim-<CloudSimVersion>.jar -- contains the CloudSim class files
• jars/CloudSim-<CloudSimVersion>-sources.jar -- contains the CloudSim source code files
• jars/CloudSim-examples-<CloudSimVersion>.jar -- contains the CloudSim examples class files
• jars/CloudSim-examples-<CloudSimVersion>-sources.jar -- contains the CloudSim examples source code files

Figure 1: Creating a new Java Project in Eclipse
Figure 2: Assigning a name to the Java Project
Figure 3: Build path for CloudSim library

Steps to integrate CloudSim with Eclipse
After installing the Eclipse IDE, let's create a new project and integrate CloudSim into it.
1. Create a new project in Eclipse.
2. This can be done via File->New->Project->Java Project.



3. Give a name to your project.
4. Configure the build path for adding the CloudSim library.
5. Search for and select the CloudSim JAR files.
In the integration and implementation of Java code and CloudSim, the Java-based methods and packages can be used. In this approach, the Java library is directly associated with the CloudSim code.

Figure 6: JAR files of CloudSim visible in the referenced libraries of Eclipse with Java Project

After executing the code in Eclipse, the following output will be generated, which makes it evident that the dynamic key exchange is integrated with the CloudSim code:

Starting Cloud Simulation with Dynamic and Hybrid Secured Key
Initialising...
MD5 Hash Digest (in Hex. format):: 6e47ed33cde35ef1cc100a78d3da9c9f
Hybrid Approach (SHA+MD5) Hash Hex format:
b0a309c58489d6788262859da2e7da45b6ac20a052b6e606ed1759648e43e40b
Hybrid Approach Based (SHA+MD5) Security Key Transmitted => ygcxsbyybpr4...
Starting CloudSim version 3.0
CloudDatacentre-1 is starting...
CloudDatacentre-2 is starting...
Broker is starting...
Entities started.
0.0: Broker: Cloud Resource List received with 2 resource(s)
0.0: Broker: Trying to Create VM #0 in CloudDatacentre-1
0.0: Broker: Trying to Create VM #1 in CloudDatacentre-1
[VmScheduler.vmCreate] Allocation of VM #1 to Host #0 failed by MIPS
0.1: Broker: VM #0 has been created in Datacentre #2, Host #0
0.1: Broker: Creation of VM #1 failed in Datacentre #2
0.1: Broker: Trying to Create VM #1 in CloudDatacentre-2
0.2: Broker: VM #1 has been created in Datacentre #3, Host #0
0.2: Broker: Sending cloudlet 0 to VM #0
0.2: Broker: Sending cloudlet 1 to VM #1
0.2: Broker: Sending cloudlet 2 to VM #0
160.2: Broker: Cloudlet 1 received
320.2: Broker: Cloudlet 0 received
320.2: Broker: Cloudlet 2 received
320.2: Broker: All Cloudlets executed. Finishing...
320.2: Broker: Destroying VM #0
320.2: Broker: Destroying VM #1
Broker is shutting down...
Simulation: No more future events
CloudInformationService: Notify all CloudSim entities for shutting down.
CloudDatacentre-1 is shutting down...
CloudDatacentre-2 is shutting down...
Broker is shutting down...
Simulation completed.
Simulation completed.

============================= OUTPUT =============================
Cloudlet ID  STATUS   Data centre ID  VM ID  Time  Start Time  Finish Time
1            SUCCESS  3               1      160   0.2         160.2
0            SUCCESS  2               0      320   0.2         320.2
2            SUCCESS  2               0      320   0.2         320.2

Cloud Simulation Finish
Simulation Scenario Finish with Successful Matching of the Keys
Simulation Scenario Execution Time in MillSeconds => 5767
Security Parameter => 30.959372773933122
2014-07-09 16:15:21.19



The CloudAnalyst cloud simulator
CloudAnalyst is another cloud simulator that is completely GUI-based and supports the evaluation of social network tools according to the geographic distribution of users and data centres. Communities of users and the data centres supporting the social networks are characterised based on their location. Parameters such as user experience while using the social network application and the load on the data centre are obtained/logged.
CloudAnalyst is used to model and analyse real world problems through case studies of social networking applications deployed on the cloud.
The main features of CloudAnalyst are:
• User friendly graphical user interface (GUI)
• Simulation with a high degree of configurability and flexibility
• Performs different types of experiments with repetitions
• Connectivity with Java for extensions

The GreenCloud cloud simulator
GreenCloud is also getting famous in the international market as the cloud simulator that can be used for energy-aware cloud computing data centres, with the main focus on cloud communications. It provides features for detailed, fine-grained modelling of the energy consumed by data centre IT equipment like servers, communication switches and communication links. The GreenCloud simulator allows researchers to investigate, observe, interact with and measure the cloud's performance based on multiple parameters. Most of the code of GreenCloud is written in C++; TCL is also included in the GreenCloud library.
GreenCloud is an extension of the network simulator ns-2, which is widely used for creating and executing network scenarios. It provides a simulation environment that enables energy-aware cloud computing data centres. GreenCloud mainly focuses on the communications within a cloud; here, all the processes related to communication are simulated at the packet level.

Figure 7: Create a new Java program for integration with CloudSim
Figure 8: Writing the Java code with the import of CloudSim packages
Figure 9: Execution of the Java code integrated with CloudSim

By: Dr Gaurav Kumar
The author is associated with various academic and research institutes, delivering lectures and conducting technical workshops on the latest technologies and tools. Contact him at [email protected]




Docker is an open source project, which packages applications and their dependencies
in a virtual container that can run on any Linux server. Docker has immense possibilities
as it facilitates the running of several OSs on the same server.

Technology is changing faster than styles in the fashion world, and there are many new entrants specific to the open source, cloud, virtualisation and DevOps technologies. Docker is one of them. The aim of this article is to give you a clear idea of Docker, its architecture and its functions, before getting started with it.
Docker is a new open source tool based on Linux container technology (LXC), designed to change how you think about workload/application deployments. It helps you to easily create light-weight, self-sufficient, portable application containers that can be shared, modified and easily deployed to different infrastructures such as cloud/compute servers or bare metal servers. The idea is to provide a comprehensive abstraction layer that allows developers to 'containerise' or 'package' any application and have it run on any infrastructure.
Docker is based on container virtualisation, and that is not new. There is no better tool than Docker to help manage kernel level technologies such as LXC, cgroups and a copy-on-write filesystem; it helps us manage these complicated kernel layer technologies through tools and APIs.

What is LXC (Linux Container)?
I will not delve too deeply into what LXC is and how it works, but will just describe some major components.
LXC is an OS level virtualisation method for running multiple isolated Linux operating systems or containers on a single host. LXC does this by using kernel level name spaces, which help to isolate containers from the host.
Now questions might arise about security. If I am logged in to my container as the root user, I can hack my base OS; so is it not secured? This is not the case, because the user name space separates the users of the containers and the host, ensuring that the container root user does not have the root privilege to log in to the host OS. Likewise, there are the process name space and the network name space, which ensure that the processes displayed and managed are those running in the container and not on the host, and that the container has its own network device and IP addresses.

Cgroups
Cgroups, also known as control groups, help to implement resource accounting and limiting. They help to limit resource utilisation or consumption by a container, such as memory, the CPU and disk I/O, and also provide metrics around resource consumption for the various processes within the container.

Copy-on-write filesystem
Docker leverages a copy-on-write filesystem (currently AUFS, but other filesystems are being investigated). This allows Docker to spawn containers quickly (to put it simply, instead of having to make full copies, it basically uses 'pointers' back to existing files).
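As a small illustration of these cgroup-backed limits (a sketch; the centos image is the one used later in this article, and the 256 MB cap is an arbitrary choice), Docker exposes memory limiting directly on docker run:

[root@localhost ~] # docker run -i -t -m 256m centos /bin/bash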




Figure 1: Linux Container

Containerisation vs virtualisation
What is the rationale behind the container-based approach, and how is it different from virtualisation? Figure 2 speaks for itself.
Containers virtualise at the OS level, whereas both Type-1 and Type-2 hypervisor-based solutions virtualise at the hardware level. Both virtualisation and containerisation are a kind of virtualisation; in the case of VMs, a hypervisor (both for Type-1 and Type-2) slices the hardware, but containers make available protected portions of the OS—they effectively virtualise the OS. If you run multiple containers on the same host, no container will come to know that it is sharing the same resources, because each container has its own abstraction. LXC takes the help of name spaces to provide the isolated regions known as containers. Each container runs in its own allocated name space and does not have access outside of it. Technologies such as cgroups, union filesystems and container formats are also used for different purposes throughout the containerisation.

Figure 2: Virtualisation

Linux containers
Unlike virtual machines, with the help of LXC you can share multiple containers from a single source disk OS image. LXC is very lightweight, has a faster start-up and needs fewer resources.

Installation of Docker
Before we jump into the installation process, we should be aware of certain terms commonly used in the Docker documentation.
Image: An image is a read-only layer used to build a container.
Container: This is a self-contained runtime environment that is built using one or more images. It also allows us to commit changes to a container and create an image.
Docker registry: These are the public or private servers where anyone can upload their repositories so that they can be easily shared.
The detailed architecture is outside the scope of this article. Have a look at http://docker.io for detailed information.

Note: I am using CentOS, so the following instructions are applicable for CentOS 6.5.

Docker is part of Extra Packages for Enterprise Linux (EPEL), which is a community repository of non-standard packages for the RHEL distribution. First, we need to install the EPEL repository using the command shown below:

[root@localhost ~] # rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

As per best practice, update the system:

[root@localhost ~] # yum update -y

docker-io is the package that we need to install. As I am using CentOS, Yum is my package manager; so, depending on your distribution, ensure that the correct command is used, as shown below:

[root@localhost ~] # yum -y install docker-io

Once the above installation is done, start the Docker service with the help of the command below:

[root@localhost ~] # service docker start

To ensure that the Docker service starts at each reboot, use the following command:

[root@localhost ~] # chkconfig docker on




To check the Docker version, use the following command:

[root@localhost ~] # docker version

How to create a LAMP stack with Docker
We are going to create a LAMP stack on a CentOS VM. However, you can work on different variants as well. First, let's get the latest CentOS image. The command below will help us to do so:

[root@localhost ~] # docker pull centos:latest

Next, let's make sure that we can see the image by running the following command:

[root@localhost ~] # docker images centos
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
centos latest 0c752394b855 13 days ago 124.1 MB

Running a simple bash shell to test the image also helps you to start a new container:

[root@localhost ~] # docker run -i -t centos /bin/bash

If everything is working properly, you'll get a simple bash prompt. Now, as this is just a base image, we need to install the PHP, MySQL and Apache parts of the LAMP stack:

[root@localhost ~] # yum install php php-mysql mysql-server httpd

The container now has the LAMP stack. Type 'exit' to quit from the bash shell.
We are going to create this as a golden image, so that the next time we need another LAMP container, we don't need to install it all again.
Run the following command and please note the 'CONTAINER ID' of the image. In my case, the ID was '4de5614dd69c':

[root@localhost ~] # docker ps -a

The ID shown in the listing is used to identify the container you are using, and you can use this ID to tell Docker to create an image.
Run the command below to make an image of the previously created LAMP container. The syntax is docker commit <CONTAINER ID> <name>. I have used the previous container ID, which we got in the earlier step:

[root@localhost ~] # docker commit 4de5614dd69c lamp-image

Run the following command to see your new image in the list. You will find the newly created image 'lamp-image' in the output:

[root@localhost ~] # docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
lamp-image latest b71507766b2d 2 minutes ago 339.7 MB
centos latest 0c752394b855 13 days ago 124.1 MB

Let's log in to this image/container to check the PHP version:

[root@localhost ~] # docker run -i -t lamp-image /bin/bash
bash-4.1# php -v
PHP 5.3.3 (cli) (built: Dec 11 2013 03:29:57)
Zend Engine v2.3.0 Copyright (c) 1998-2010 Zend Technologies

Now, let us configure Apache. Log in to the container and create a file called index.php in Apache's document root (the original text says index.html, but the PHP snippet below only executes from a .php file). If you don't want to install vi or vim, use the echo command to redirect the following content to the index.php file:

<?php echo "Hello world"; ?>

Start the Apache process with the following command:

[root@localhost ~] # /etc/init.d/httpd start

Then test it with the help of the browser/curl/links utilities. If you're running Docker inside a VM, you'll need to forward port 80 on the VM to another port on the VM's host machine. Docker itself has the feature to forward ports from containers to the host; the following command publishes the container's port 80:

[root@localhost ~] # docker run -i -t -p :80 lamp-image /bin/bash
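A more explicit variant (a sketch; host port 8080 is an arbitrary choice) maps the container's port 80 to a fixed host port, after which the page can be fetched from the host once httpd is started inside the container:

[root@localhost ~] # docker run -i -t -p 8080:80 lamp-image /bin/bash
bash-4.1# /etc/init.d/httpd start

Then, from the host:

$ curl http://localhost:8080/index.php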

For detailed information on Docker and other technologies related to container virtualisation, check out the links given under 'References'.

References
[1] Docker: https://docs.docker.com/
[2] LXC: https://linuxcontainers.org/

By: Pradyumna Dash
The author is an independent consultant, and works as a cloud/DevOps architect. An open source enthusiast, he loves to cook good food and brew ideas. He is also the co-founder of the site http:/www.sillycon.org/




Wireshark: Essential for a Network Professional's Toolbox
This article, the second in the series, presents further experiments with Wireshark, the
open source packet analyser. In this part, Wireshark will be used to analyse packets
captured from an Ethernet hub.

The first article in the Wireshark series, published in the July 2014 issue of OSFY, covered Wireshark architecture, its installation on Windows and Ubuntu, as well as various ways to capture traffic in a switched environment. Interpretation of DNS and ICMP Ping protocol captures was also covered. Let us now carry the baton forward and understand additional Wireshark features and protocol interpretation.
To start with, capture some traffic from a network connected to an Ethernet hub—which is the simplest way to capture complete network traffic.
Interested readers may purchase an Ethernet hub from a second hand computer dealer at a throwaway price and go ahead and capture a few packets in their test environment. The aim of this is to acquire better hands-on practice of using Wireshark. So start the capture and, once you have sufficient packets, stop and view the packets before you continue reading.
An interesting observation about this capture is that, unlike the broadcast-and-host-only traffic seen in a switched environment, it contains packets from all source IP addresses connected in the network. Did you notice this?
The traffic thus contains:
• Broadcast packets
• Packets from all systems towards the Internet
• PC-to-PC communication packets
• Multicast packets
Now, at this point, imagine analysing traffic captured from hundreds of computers in a busy network—the sheer volume of captured packets will be baffling. Here, an important Wireshark




feature called 'Display Filter' can be used very effectively.

Figure 1: Traffic captured using HUB

Wireshark's Display Filter
This helps to sort/view the network traffic using various parameters, such as the traffic originating from a particular IP or MAC address, traffic with a particular source or destination port, ARP traffic and so on. It is impossible to imagine Wireshark without display filters!
Click on 'Expressions' or go to 'Analyse – Display filters' to find the list of pre-defined filters available with Wireshark. You can create custom filters depending upon the analysis requirements—the syntax is really simple.
As seen in Figure 2, the background colours of the display filter box offer ready help while creating proper filters. A green background indicates a correct filter or syntax, while a red background indicates an incorrect or incomplete one. Use these background colours to quickly identify syntax errors and gain confidence in creating the desired display filters.

Figure 2: Default Wireshark display filters

A few simple filters are listed below:
tcp: Displays TCP traffic only
arp: Displays ARP traffic
eth.addr == aa:bb:cc:dd:ee:ff: Displays traffic where the Ethernet MAC address is aa:bb:cc:dd:ee:ff
ip.src == 192.168.51.203: Displays traffic where the source IP address is 192.168.51.203
ip.dst == 4.2.2.1: Displays traffic where the destination IP address is 4.2.2.1
ip.addr == 192.168.51.1: Displays traffic where the source or the destination IP address is 192.168.51.1
Click on 'Save' to store the required filter for future use. By default, the top 10 custom filters created are available for ready use under the dropdown menu of the 'Filter' dialogue box.
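Individual filters can also be combined with the logical operators && (and), || (or) and ! (not). For example, using the sample addresses above:
ip.addr == 192.168.51.1 && tcp: Displays TCP traffic to or from 192.168.51.1
eth.addr == aa:bb:cc:dd:ee:ff && !arp: Displays non-ARP traffic for that MAC address
ip.src == 192.168.51.203 || ip.dst == 4.2.2.1: Displays traffic matching either condition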
With this background, let us look at two simple protocols—ARP and DHCP.

Address Resolution Protocol (ARP)
This is used to find the MAC address from the IP address. It works in two steps—the ARP request and the ARP reply. Here are the details.
Apply the appropriate display filter (arp) and view only the ARP traffic from the complete capture. Also, refer to Figure 3 - the ARP protocol.

Figure 3: ARP protocol

The protocol consists of the ARP request and the ARP reply.
ARP request: This is used to find the MAC address of a system with a known IP address. For this, an ARP request is sent as a broadcast towards the MAC broadcast address:

Sender MAC address – 7c:05:07:ad:42:53
Sender IP address – 192.168.51.208
Target MAC address – 00:00:00:00:00:00
Target IP address – 192.168.51.1

Note: The target IP address indicates the IP address for which the MAC address is requested.

Wireshark displays the ARP request under the 'Info' box as: Who has 192.168.51.1? Tell 192.168.51.208
ARP reply: The ARP request broadcast is received by all systems connected to the network segment of the sender (below the router); mind you, this broadcast also reaches the router port connected to this segment.
The system with the destination IP address mentioned in the ARP request packet replies with its MAC address via an ARP reply. The important contents of the ARP reply are:




Sender MAC Address – Belonging to the system which replies to the ARP request; updated by the system – 00:21:97:88:28:21
Sender IP Address – Belonging to the system which replies to the ARP request – 192.168.51.1
Target MAC Address – Source MAC of the ARP request packet – 7c:05:07:ad:42:53
Target IP Address – Source IP address of the ARP request packet – 192.168.51.208

Wireshark displays the ARP reply under the ‘Info’ box as:
192.168.51.1 is at 00:21:97:88:28:21.
Thus, with the help of an ARP request and reply, system
192.168.51.208 has detected the MAC address belonging to
192.168.51.1.

Dynamic Host Configuration Protocol (DHCP)


This protocol saves a lot of time for network engineers by
offering a unique dynamic IP address to a system without
an IP address, which is connected in a network. This also
helps to avoid IP conflicts (the use of one IP address by
multiple systems) to a certain extent. The computer users
also benefit by the ability to connect to various networks
without knowing the corresponding IP address range and
the unused IP address.
This DHCP protocol consists of four phases—DHCP
discover, DHCP offer, DHCP request and DHCP ACK. Let us
understand the protocol and interpret how these packets are
seen in Wireshark.

Figure 4: DHCP protocol
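To isolate this exchange in a capture, apply the display filter bootp (the name Wireshark releases of this vintage use for DHCP traffic), or udp.port == 67, before stepping through the phases below.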




When a system configured with the 'Obtain an IP address automatically' setting is connected to a network, it uses DHCP to get an IP address from the DHCP server. Thus, this is a client–server protocol. To capture DHCP packets, users may start Wireshark on such a system, then start the packet capture and, finally, connect the network cable.
Please refer to Figures 4 and 5, which give a diagram and a screenshot of the DHCP protocol, respectively.

Figure 5: Screenshot of DHCP protocol

Discovering DHCP servers: To discover the DHCP server(s) in the network, the client sends a broadcast to 255.255.255.255 with the source IP as 0.0.0.0, using UDP port 68 (bootpc) as the source port and UDP port 67 (bootps) as the destination. This message also contains the source MAC address as that of the client, and ff:ff:ff:ff:ff:ff as the destination MAC.
The DHCP offer: The nearest DHCP server receives this 'discover' broadcast and replies with an offer containing the offered IP address, the subnet mask, the lease duration, the default gateway and the IP address of the DHCP server. The source MAC address is that of the DHCP server and the destination MAC address is that of the requesting client. Here, the UDP source and destination ports are reversed.
DHCP requests: Remember that there can be more than one DHCP server in a network. Thus, a client can receive multiple DHCP offers. The DHCP request packet is broadcast by the client with parameters similar to those used for discovering a DHCP server, with two major differences:
1. The DHCP Server Identifier field, which specifies the IP address of the accepted server.
2. The host name of the client computer.
Use Pane 2 of Wireshark to view these parameters under 'Bootstrap Protocol' – Options 54 and 12.
The DHCP request packet also contains additional client requests for the server to provide more configuration parameters, such as the default gateway, the DNS (Domain Name Server) address, etc.
DHCP acknowledgement: The server acknowledges the DHCP request by sending information on the lease duration and other configurations, as requested by the client during the DHCP request phase, thus completing the DHCP cycle.
For better understanding, capture a few packets, use Wireshark 'Display Filters' to filter and view ARP and DHCP, and read them using the Wireshark panes.

Saving packets
Packets captured using Wireshark can be saved from the menu 'File – Save as' in different formats such as Wireshark, Novell LANalyzer and Sun Snoop, to name a few.
In addition to saving all captured packets in various file formats, the 'File – Export Specified Packets' option offers users the choice of saving 'Display Filtered' packets or a range of packets.
Please feel free to download the pcap files used for preparing this article from opensourceforu.com. I believe all OSFY readers will enjoy this interesting world of Wireshark, packet capturing and various protocols!

Troubleshooting tips
Capturing ARP traffic could reveal ARP poisoning (or ARP spoofing) in the network. This will be discussed in more detail at a later stage. Similarly, studying the capture of the DHCP protocol may lead to the discovery of an unintentional or a rogue DHCP server within the network.

A word of caution
Packets captured using the test scenarios described in this series of articles are capable of revealing sensitive information such as login names and passwords. Some scenarios, such as using ARP spoofing, may disrupt the network temporarily. Make sure to use these techniques only in a test environment. If at all you wish to use them in a live environment, do not forget to get explicit written permission before doing so.

By: Rajesh Deodhar
The author has been an IS auditor and network security consultant-trainer for the last two decades. He is a BE in Industrial Electronics, and holds CISA, CISSP, CCNA and DCL certifications. He can be contacted at [email protected]




Building the Android Platform: Compile the Kernel
Tired of stock ROMs? Build and flash your own version
of Android on your smartphone. This new series of
articles will see you through from compiling your
kernel to flashing it on your phone.

Many of us are curious and eager to learn how to
port or flash a new version of Android to our
phones and tablets. This article is the first step
towards creating your own custom Android system. Here,
you will learn to set up the build environment for the
Android kernel and build it on Linux.
Let us start by understanding what Android is. Is it an
application framework or is it an operating system? It can be
called a mobile operating system based on the Linux kernel,
for the sake of simplicity, but it is much more than that. It
consists of the operating system, middleware, and application
software that originated from a group of companies led by
Google, known as the Open Handset Alliance.

Android system architecture
Before we begin building an Android platform, let's understand how it works at a higher level. Figure 1 illustrates how Android works at the system level.
We will not get into the finer details of the architecture in this article since the primary goal is to build the kernel. Here is a quick summary of what the architecture comprises.
• Application framework: Applications written in Java directly interact with this layer.
• Binder IPC: It is an Android-specific IPC mechanism.
• Android system services: To access the underlying hardware, application framework APIs often communicate via system services.
• HAL: This acts as a glue between the Android system and the underlying device drivers.
• Linux kernel: At the bottom of the stack is a Linux kernel, with some architectural changes/additions including binder, ashmem, pmem, logger, wakelocks, different out-of-memory (OOM) handling, etc.

Figure 1: Android system architecture

In this article, I describe how to compile the kernel for the Samsung Galaxy Star Duos (GT-S5282) with Android version 4.1.2. The build process was performed on an Intel i5 core processor running 64-bit Ubuntu Linux 14.04 LTS (Trusty Tahr). However, the process should work with any Android kernel and device, with minor modifications. The handset details are shown in the screenshot (Figure 2) taken from the Setting -> About device menu of the phone.

Figure 2: Handset details for GT-S5282




System and software requirements
Before you download and build the Android kernel, ensure that your system meets the following requirements:
• Linux system (Linux running on a virtual machine will also work, but is not recommended). The steps explained in this article are for Ubuntu 14.04 LTS, to be specific; other distributions should also work.
• Around 5 GB of free space to install the dependent software and build the kernel.
• Pre-built toolchain.
• Dependent software, which should include GNU Make, libncurses5-dev, etc.
• Android kernel source (as mentioned earlier, this article describes the steps for the Samsung Galaxy Star kernel).
• Optionally, if you are planning to compile the whole Android platform (not just the kernel), a 64-bit system is required for Gingerbread (2.3.x) and newer versions.
It is assumed that the reader is familiar with Linux commands and the shell. Commands and file names are case sensitive. The Bash shell is used to execute the commands in this article.

Step 1: Getting the source code
The Android Open Source Project (AOSP) maintains the complete Android software stack, which includes everything except for the Linux kernel. The Android Linux kernel is developed upstream and also by various handset manufacturers.
The kernel source can be obtained from:
1. Google Android kernel sources: Visit https://source.android.com/source/building-kernels.html for details. The kernel for a select set of devices is available here.
2. The handset manufacturer's or OEM's website: I am listing a few links to the developer sites where you can find the kernel sources. Please understand that the links may change in the future.
• Samsung: http://opensource.samsung.com/
• HTC: https://www.htcdev.com/
• Sony: Most of the kernel is available on github.
3. Developers: They provide non-official kernels.
This article will use the second method—we will get the official Android kernel for the Samsung Galaxy Star (GT-S5282). Go to the URL http://opensource.samsung.com/ and search for GT-S5282. Download the file GT-S5282_SEA_JB_Opensource.zip (184 MB).
Let's assume that the file is downloaded to the ~/Downloads/kernel directory.

Step 2: Extract the kernel source code
Let us create a directory 'android' to store all relevant files in the user's home directory. The kernel and the Android NDK will be stored in the kernel and ndk directories, respectively.

$ mkdir ~/android
$ mkdir ~/android/kernel
$ mkdir ~/android/ndk

Now extract the archive:

$ cd ~/Downloads/kernel
$ unzip GT-S5282_SEA_JB_Opensource.zip
$ tar -C ~/android/kernel -zxf Kernel.tar.gz

The unzip command will extract the zip archive, which contains the following files:
• Kernel.tar.gz: The kernel to be compiled.
• Platform.tar.gz: Android platform files.
• README_Kernel.txt: Readme for kernel compilation.
• README_Platform.txt: Readme for Android platform compilation.
If the unzip command is not installed, you can extract the files using any other file extraction tool.
By running the tar command, we are extracting the kernel source to ~/android/kernel. While creating a sub-directory for extraction is recommended, let's avoid it here for the sake of simplicity.

Step 3: Install and set up the toolchain
There are several ways to install the toolchain. We will use the Android NDK to compile the kernel. Please visit https://developer.android.com/tools/sdk/ndk/index.html to get details about the NDK.
For 64-bit Linux, download the Android NDK archive android-ndk-r9-linux-x86_64-legacy-toolchains.tar.bz2 from http://dl.google.com/android/ndk/android-ndk-r9-linux-x86_64-legacy-toolchains.tar.bz2. Ensure that the file is saved in the ~/android/ndk directory.

Note: To be specific, we need the GCC 4.4.3 version to compile the downloaded kernel. Using the latest version of the Android NDK will lead to compilation errors.

Extract the NDK to ~/android/ndk:

$ cd ~/android/ndk
# For the 64 bit version
$ tar -jxf android-ndk-r9-linux-x86_64-legacy-toolchains.tar.bz2

Add the toolchain path to the PATH environment variable in .bashrc or the equivalent:
86  |  August 2014  |  OPEN SOURCE For You  |  www.OpenSourceForU.com


Let's Try Open Gurus

you are unsure.

Step 5: Build the kernel


Finally, we are ready to fire the build. Run the make
command, as follows:

$ make zImage

If you want to speed up the build, specify the -j option to


the make command. For example, if you have four processor
Figure 3: Kernel configuration – making changes cores, you can specify the -j4 option to make:

#Set the path for Android build env (64 bit) $ make -j4 zImage
export PATH=${HOME}/android/ndk/android-ndk-r9/toolchains/
arm-linux-androideabi-4.4.3/prebuilt/linux-x86_64/ The compilation process will take time to complete, based
bin:$PATH on the options available in the kernel configuration (.config)
and the performance of the build system. On completion, the
Step 4: Configure the Android kernel kernel image (zImage) will be generated in the arch/arm/boot/
Install the necessary dependencies, as follows: directory of the kernel source.
Compile the modules:
$ sudo apt-get install libncurses5-dev build-essential
$ make modules
Set up the architecture and cross compiler, as follows:
This will trigger the build for kernel modules, and .ko files
$ export ARCH=arm should be generated in the corresponding module directories.
$ export CROSS_COMPILE=arm-linux-androideabi- Run the find command to get a list of .ko files in the kernel
directory:
The kernel Makefile refers to the above variables
to select the architecture and cross compile. The cross $ find . -name “*.ko”
compiler command will be ${CROSS_COMPILE}gcc
which is expanded to arm-linux-androideabi-gcc. The same What next?
applies for other tools like g++, as, objdump, gdb, etc. Now that you have set up the Android build environment,
Configure the kernel for the device: and compiled an Android kernel and necessary modules,
how do you flash it to the handset so that you can see the
$ cd ~/android/kernel kernel working? This requires the handset to be rooted first,
$ make mint-vlx-rev03_defconfig followed by flashing the kernel and related software. It turns
out that there are many new concepts to understand before
The device-specific configuration files for ARM we get into this. So be sure to follow the next article on
architecture are available in the arch/arm/configs directory. rooting and flashing your custom Android kernel.
Executing the configuration command may throw a
few warnings. You can ignore these warnings now. The
command will create a .config file, which contains the References
kernel configuration for the device. https://source.android.com/
To view and edit the kernel configuration, run the https://developer.android.com/
following command: http://xda-university.com

$ make menuconfig By: Mubeen Jukaku


Mubeen is technology head at Emertxe Information Technologies
Next, let’s assume you want to change lcd overlay (http://www.emertxe.com). His area of expertise is the architecture
support. and design of Linux-based embedded systems. He has vast
Navigate to Drivers → Graphics → Support for experience in kernel internals, device drivers and application porting,
framebuffer devices. The option to support lcd overlay and is passionate about leveraging the power of open source for
building innovative products and solutions. He can be reached at
should be displayed as shown in Figure 3. [email protected]
Skip the menuconfig step or do not make any changes if

www.OpenSourceForU.com  |  OPEN SOURCE For You  |  August 2014  |  87


For U & Me Let’s Try

Lists: The Building Blocks of Maxima


This 20th article in our series on Mathematics in Open Source showcases the list manipulations
in Maxima, the programming language with an ALGOL-like syntax but Lisp-like semantics.

L
ists are the basic building blocks of Maxima. The its arguments. Note that makelist() is limited by the variation
fundamental reason is that Maxima is implemented in it can have, which to be specific, is just one – ‘i’ in the first
Lisp, the building blocks of which are also lists. two examples and ‘x’ in the last one. If we want more, the
To begin with, let us walk through the ways of creating create_list() function comes into play.
a list. The simplest method to get a list in Maxima is to just create_list(f, x1, L1, ..., xn, Ln) creates and returns a list
define it, using []. So, [x, 5, 3, 2*y] is a list consisting of four with members of the form ‘f’, evaluated for the variables x1,
members. However, Maxima provides two powerful functions ..., xn using the values from the corresponding lists L1, ..., Ln.
for automatically generating lists: makelist() and create_list(). Here is just a glimpse of its power:
makelist() can take two forms. makelist (e, x, x0, xn)
creates and returns a list using the expression ‘e’, evaluated $ maxima -q
for ‘x’ using the values ranging from ‘x0’ to ‘xn’. makelist(e, (%i1) create_list(concat(x, y), x, [p, q], y, [1, 2]);
x, L) creates and returns a list using the expression ‘e’, (%o1) [p1, p2, q1, q2]
evaluated for ‘x’ using the members of the list L. Check out (%i2) create_list(concat(x, y, z), x, [p, q], y, [1, 2], z, [a,
the example below for better clarity: b]);
(%o2) [p1a, p1b, p2a, p2b, q1a, q1b, q2a, q2b]
$ maxima -q (%i3) create_list(concat(x, y, z), x, [p, q], y, [1, 2, 3], z,
(%i1) makelist(2 * i, i, 1, 5); [a, b]);
(%o1) [2, 4, 6, 8, 10] (%o3) [p1a, p1b, p2a, p2b, p3a, p3b, q1a, q1b, q2a, q2b, q3a,
(%i2) makelist(concat(x, 2 * i - 1), i, 1, 5); q3b]
(%o2) [x1, x3, x5, x7, x9] (%i4) quit();
(%i3) makelist(concat(x, 2), x, [a, b, c, d]);
(%o3) [a2, b2, c2, d2] Note that ‘all possible combinations’ are created using the
(%i4) quit(); values for the variables ‘x’, ‘y’ and ‘z’.
Once we have created the lists, Maxima provides a host of
Note the interesting usage of concat() to just concatenate functions to play around with them. Let’s take a look at these.

88  |  August 2014  |  OPEN SOURCE For You  |  www.OpenSourceForU.com


Let’s Try For U & Me
Testing the lists members in the list L
The following set of functions demonstrates the various checks ƒƒ reverse(L) - returns a list with members of the list L in
on lists: reverse order
ƒƒ atom(v) - returns ‘true’ if ‘v’ is an atomic element; ‘false’
otherwise $ maxima -q
ƒƒ listp(L) - returns ‘true’ if ‘L’ is a list; ‘false’ otherwise (%i1) L: makelist(i, i, 1, 10);
ƒƒ member(v, L) - returns ‘true’ if ‘v’ is a member of list L; (%o1) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
‘false’ otherwise (%i2) cons(0, L);
ƒƒ some(p, L) - returns ‘true’ if predicate ‘p’ is true for at least (%o2) [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
one member of list L; ‘false’ otherwise (%i3) endcons(11, L);
ƒƒ every(p, L) - returns ‘true’ if predicate ‘p’ is true for all (%o3) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11]
members of list L; ‘false’ otherwise (%i4) rest(L);
(%o4) [2, 3, 4, 5, 6, 7, 8, 9, 10]
$ maxima -q (%i5) rest(L, 3);
(%i1) atom(5); (%o5) [4, 5, 6, 7, 8, 9, 10]
(%o1) true (%i6) rest(L, -3);
(%i2) atom([5]); (%o6) [1, 2, 3, 4, 5, 6, 7]
(%o2) false (%i7) join(L, [a, b, c, d]);
(%i3) listp(x); (%o7) [1, a, 2, b, 3, c, 4, d]
(%o3) false (%i8) delete(6, L);
(%i4) listp([x]); (%o8) [1, 2, 3, 4, 5, 7, 8, 9, 10]
(%o4) true (%i9) delete(4, delete(6, L));
(%i5) listp([x, 5]); (%o9) [1, 2, 3, 5, 7, 8, 9, 10]
(%o5) true (%i10) delete(4, delete(6, join(L, L)));
(%i6) member(x, [a, b, c]); (%o10) [1, 1, 2, 2, 3, 3, 5, 5, 7, 7, 8, 8, 9, 9, 10, 10]
(%o6) false (%i11) L1: rest(L, 7);
(%i7) member(x, [a, x, c]); (%o11) [8, 9, 10]
(%o7) true (%i12) L2: rest(rest(L, -3), 3);
(%i8) some(primep, [1, 4, 9]); (%o12) [4, 5, 6, 7]
(%o8) false (%i13) L3: rest(L, -7);
(%i9) some(primep, [1, 2, 4, 9]); (%o13) [1, 2, 3]
(%o9) true (%i14) append(L1, L2, L3);
(%i10) every(integerp, [1, 2, 4, 9]); (%o14) [8, 9, 10, 4, 5, 6, 7, 1, 2, 3]
(%o10) true (%i15) reverse(L);
(%i11) every(integerp, [1, 2, 4, x]); (%o15) [10, 9, 8, 7, 6, 5, 4, 3, 2, 1]
(%o11) false (%i16) join(reverse(L), L);
(%i12) quit(); (%o16) [10, 1, 9, 2, 8, 3, 7, 4, 6, 5, 5, 6, 4, 7, 3, 8, 2, 9, 1,
10]
List recreations (%i17) unique(join(reverse(L), L));
Next is a set of functions operating on list(s) to create and return (%o17) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
new lists: (%i18) L;
ƒƒ cons(v, L) - returns a list with ‘v’, followed by members of L (%o18) [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ƒƒ endcons(v, L) - returns a list with members of L followed by ‘v’ (%i19) quit();
ƒƒ rest(L, n) - returns a list with members of L, except the first ‘n’
members (if ‘n’ is non-negative), otherwise except the last ‘-n’ Note that the list L is still not modified. For that matter, ,
members. ‘n’ is optional, in which case, it is taken as 1 even L1, L2, L3 are not modified. In fact, that is what is meant
ƒƒ join(L1, L2) - returns a list with members of L1 and L2 when we state that all these functions recreate new modified
interspersed lists, rather than modify the existing ones.
ƒƒ delete(v, L, n) - returns a list like L but with the first ‘n’
occurrences of ‘v’ deleted from it. ‘n’ is optional, in which List extractions
case all occurrences of ‘v’ are deleted Here is a set of functions extracting the various members of a
ƒƒ append(L1, ..., Ln) - returns a list with members of L1, ..., list. first(L), second(L), third(L), fourth(L), fifth(L), sixth(L),
Ln, one after the other seventh(L), eight(L), ninth(L), and tenth(L), respectively return
ƒƒ unique(L) - returns a list obtained by removing the duplicate the first, second, ... member of the list L. last(L) returns the last

www.OpenSourceForU.com  |  OPEN SOURCE For You  |  August 2014  |  89


For U & Me Let’s Try
member of the list L. (%o8) [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
(%i9) K: copylist(L);
$ maxima -q (%o9) [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
(%i1) L: create_list(i * x, x, [a, b, c], i, [1, 2, 3, 4]); (%i10) length(L);
(%o1) [a, 2 a, 3 a, 4 a, b, 2 b, 3 b, 4 b, c, 2 c, 3 c, 4 c] (%o10) 10
(%i2) first(L); (%i11) pop(L);
(%o2) a (%o11) 2
(%i3) seventh(L); (%i12) length(L);
(%o3) 3 b (%o12) 9
(%i4) last(L); (%i13) K;
(%o4) 4 c (%o13) [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
(%i5) third(L); last(L); (%i14) L;
(%o5) 3 a (%o14) [4, 6, 8, 10, 12, 14, 16, 18, 20]
(%o6) 4 c (%i15) pop([1, 2, 3]); /* Actual list is not allowed */
(%i7) L; arg must be a symbol [1, 2, 3]
(%o7) [a, 2 a, 3 a, 4 a, b, 2 b, 3 b, 4 b, c, 2 c, 3 c, 4 c] #0: symbolcheck(x=[1,2,3])(basic.mac line 22)
(%i8) quit(); #1: pop(l=[1,2,3])(basic.mac line 26)
-- an error. To debug this try: debugmode(true);
Again, note that the list L is still not modified. However, we (%i16) quit();
may need to modify the existing lists, and none of the above
functions will do that. It could be achieved by assigning the Advanced list operations
return values of the various list recreation functions back to And finally, here is a bonus of two sophisticated list operations:
the original list. However, there are a few functions, which do ƒƒ sublist_indices(L, p) - returns the list indices for the
modify the list right away. members of the list L, for which predicate ‘p’ is ‘true’.
ƒƒ assoc(k, L, d) - L must have all its members in the form
List manipulations of x op y, where op is some binary operator. Then, assoc()
The following are the two list manipulating functions provided searches for ‘k’ in the left operand of the members of
by Maxima: L. If found, it returns the corresponding right operand,
ƒƒ push(v, L) - inserts ‘v’ at the beginning of the list L otherwise it returns‘d’; or it returns false, if ‘d’ is missing.
ƒƒ pop(L) - removes and returns the first element from list L Check out the demonstration below for both the above
L must be a symbol bound to a list, not the list itself, in operations
both the above functions, for them to modify it. Also, these
functionalities are not available by default, so we need to load $ maxima -q
the basic Maxima file. Check out the demonstration below. (%i1) sublist_indices([12, 23, 57, 37, 64, 67], primep);
We may display L after doing these operations, or even check the (%o1) [2, 4, 6]
length of L to verify the actual modification of L. In case we need to (%i2) sublist_indices([12, 23, 57, 37, 64, 67], evenp);
preserve a copy of the list, the function copylist() can be used. (%o2) [1, 5]
(%i3) sublist_indices([12, 23, 57, 37, 64, 67], oddp);
$ maxima -q (%o3) [2, 3, 4, 6]
(%i1) L: makelist(2 * x, x, 1, 10); (%i4) sublist_indices([2 > 0, -2 > 0, 1 = 1, x = y], identity);
(%o1) [2, 4, 6, 8, 10, 12, 14, 16, 18, 20] (%o4) [1, 3]
(%i2) push(0, L); /* This doesn’t work */ (%i5) assoc(2, [2^r, x+y, 2=4, 5/6]);
(%o2) push(0, [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]) (%o5) r
(%i3) pop(L); /* Nor does this work */ (%i6) assoc(6, [2^r, x+y, 2=4, 5/6]);
(%o3) pop([2, 4, 6, 8, 10, 12, 14, 16, 18, 20]) (%o6) false
(%i4) load(basic); /* Loading the basic Maxima file */ (%i7) assoc(6, [2^r, x+y, 2=4, 5/6], na);
(%o4) /usr/share/maxima/5.24.0/share/macro/basic.mac (%o7) na
(%i5) push(0, L); /* Now, this works */ (%i8) quit();
(%o5) [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
(%i6) L;
By: Anil Kumar Pugalia
(%o6) [0, 2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
The
By:author
Anil is aKumar
gold medallist from NIT, Warangal and IISc,
Pugalia
(%i7) pop(L); /* Even this works */ Bengaluru. Mathematics and knowledge-sharing are two of
(%o7) 0 his many passions. Learn more about him at http://sysplay.in.
He can be reached at [email protected].
(%i8) L;

90  |  August 2014  |  OPEN SOURCE For You  |  www.OpenSourceForU.com


Overview For U & Me

Replicant: A Truly Free


Version of Android
Replicant is a free and open source mobile operating system based on the Android
platform. It aims at replacing proprietary Android apps and components with open source
alternatives. It is security focused, as it blocks all known Android backdoors.

S
martphones have evolved from being used just for will help you if you are stuck with a problem.
communicating with others to offering a wide range When Android was first launched in 2007, Google also
of functions. The fusion between the Internet and announced the ‘Open Handset Alliance (OHA)’ to work with
smartphones has made these devices very powerful and useful other mobile vendors to create an open source mobile operating
to us. Android had been a grand success in the smartphone system, which would allow anyone to work on it. This seemed
business. It’s no exaggeration to say that more than 80 per cent of to be a good deal for the mobile vendors, because Apple’s
the smartphone market is now occupied by Android, which has iPhone practically owned the smartphone market at that time.
become the preference of most mobile vendors today. The mobile vendors needed another player, or ‘game changer’,
The reason is simple, Android is free and available to public. in the smartphone market and they got Android.
But there’s a catch. Have you ever wondered how well When Google releases the Android source code to the public
Android respects ‘openness’ ? And how much Android for free, it is called ‘stock Android’. This comprises only the
respects your freedom? If you haven’t thought about it, please very basic system. The mobile vendors take this stock Android
take a moment to do so. When you’re done, you will realise and tailor it according to their device’s specifications—featuring
that Android is not completely open to everyone. unique visual aspects such as themes, graphics and so on.
That’s why we’re going to explore Replicant –- a truly OHA has many terms and conditions, so if you want to use
free version of Android. Android in your devices, you have to play by Google’s rules.
The following aspects are mandatory for each Android phone:
Android and openness ƒƒ Google setup-wizard
Let’s talk about openness first. The problem with a closed ƒƒ Google phone-top search
source program is that you cannot feel safe with it. There have ƒƒ Gmail apps
been many incidents, which suggest that people can easily be ƒƒ Google calendar
spied upon through closed source programs. ƒƒ Google Talk
On the other hand, since open source code is open and ƒƒ Google Hangouts
available to everyone, one cannot plant a bug in an open source ƒƒ YouTube
program because the bug can easily be found. Apart from that ƒƒ Google maps for mobiles
aspect, open source programs can be continually improved by ƒƒ Google StreetView
people contributing to them—enhancing a feature and writing ƒƒ Google Play store
software patches, also there are many user communities that ƒƒ Google voice search

www.OpenSourceForU.com  |  OPEN SOURCE For You  |  August 2014  |  91


For U & Me Overview
These specifications are in Google’s ‘Mobile Application The following list features devices supported by Replicant and
Distribution Agreement- (MADA)’ which was leaked in their corresponding Replicant versions.
• HTC Dream/HTC Magic: Replicant 2.2
February 2014.
• Nexus One: Replicant 2.3
There are some exceptions in the market such as • Nexus S: Replicant 4.2
Amazon’s Kindle Fire, which is based on the Android OS but • Galaxy S: Replicant 4.2
doesn’t feature the usual Google stuff and has Amazon’s App • Galaxy S2: Replicant 4.2
• Galaxy Note: Replicant 4.2
Store instead of Google Play.
• Galaxy Nexus: Replicant 4.2
For a while, we were all convinced that Android was • Galaxy Tab 2 7.0: Replicant 4.2
free and open to everyone. It may seem so on the surface but • Galaxy Tab 2 10.1: Replicant 4.2
under the hood, Android is not so open. We all know that, • Galaxy S3: Replicant 4.2
• Galaxy Note 2: Replicant4.2
at its core, Android has a Linux kernel, which is released
• GTA04: Replicant 2.3
under the GNU Public License, but that’s only a part of Separate installation instructions for these devices can be found
Android. Many other components are licensed under the on the Replicant website.
Apache licence, which allows the source code of Android to
be distributed freely and not necessarily to be released to the confidential data such as bank account numbers, passwords,
public. Some mobile vendors make sure that their devices etc, on it. It’s not an exaggeration to state that our smartphones
run their very own tailored Android version by preventing contain more confidential data than any other secure vault in
users from installing any other custom ROMs. A forcibly this world. In today’s world, the easiest way to track people’s
installed custom ROM in your Android will nullify the whereabouts is via their phones. So you should realise that
warranty of the device. So, most users are forced to keep the you are holding a powerful device in your hands, and you are
Android version shipped with the device. responsible for keeping your data safe.
Another frustrating aspect for Android users is with respect People use smartphones to stay organised, set reminders
to the updates. In Android, updates are very complex, because or keep notes about ideas. Some of the apps use centralised
there is no uniformity among the various devices running the servers to store the data. What users do not realise is that you
Android OS. Even closed OSs support their updates—for lose control of your data when you trust a centralised server
example, Apple’s iOS 5 supports iPhone 4, 4s, iPad and iPad 2; that is owned by a corporation you don’t know. You are kept
and Microsoft allows its users to upgrade to Windows 7 from ignorant about how your data is being used and protected. If
Windows XP without hassles. As you have probably noticed, an attacker can compromise that centralised server, then your
only a handful of devices receive the new Android version. data could be at risk. To make things even more complicated,
The rest of the users are forced to change their phones. Most an attacker could erase all that precious data and you wouldn’t
users are alright with that, because today, the life expectancy of even know about it.
mobiles is a maximum of about two years. People who want to Most of the apps in the Google Play store are closed source.
stay updated as much as possible, change their phones within a Some apps are malicious in nature, working against the interests
year. The reason behind this mess is that updates depend mostly of the user. Some apps keep tabs on you, or worse, they can
on the hardware, the specs of which differ from vendor to steal the most confidential data from your device without your
vendor. Most vendors upgrade their hardware specs as soon as knowledge. Some apps act as tools for promoting non-free
a new Android version hits the market. So the next time you try services or software by carrying ads. Several studies reveal
to install an app which doesn’t work well on your device, just that these apps track their users’ locations and store other
remember, “It’s time to change your phone!” background information about them.
You may think of this as paranoia, but the thing is that cyber
Android and freedom criminals thrive on the ignorance of the public. It may be argued
Online privacy is becoming a myth, since security threats pose that most users do not have any illegal secrets in the phone,
a constant challenge. No matter how hard we work to make nor are they important people, so why should they worry about
our systems secure, there’s always some kind of threat arising being monitored? Thinking along those lines resembles the man
daily. That’s why systems administrators continually evaluate who ignores an empty gun at his door step. He may not use that
security and take the necessary steps to mitigate threats. gun, but is completely ignorant of the fact that someone else
Not long ago, we came to know about PRISM –- an NSA might use that gun and frame him for murder.
(USA) spy program that can monitor anyone, anywhere in the
world, at any time. Thanks to Edward Snowden, who leaked Replicant
this news, we now realise how vulnerable we are online. Despite the facts that stack up against Android, it is almost
Although some may think that worrying about this borders on impossible to underestimate its benefits. For a while, Linux was
being paranoid, there’s sufficient proof that all this is happening considered a ‘nerdy’ thing, used only by developers, hackers
as you read this article. Many of us use smartphones for almost and others in research. Typically, those in the ‘normal’ user
everything. We keep business contacts, personal details, and community did not know much about Linux. After the arrival of

92  |  August 2014  |  OPEN SOURCE For You  |  www.OpenSourceForU.com


Overview For U & Me
Android, everyone has the Linux kernel in their hands. Android mentioned earlier, people who value their freedom above all
acts as a gateway for Linux to reach all kinds of people. The else, find Replicant very appealing.
FOSS community believes in Android, but since Android poses The Replicant team is gradually making progress in adding
a lot of problems due to the closed nature of its source code, support for more devices. For some devices, the conversion
some people thought of creating a mobile operating system from closed source to open source becomes cumbersome,
without relying on any closed or proprietary code or services. which is why these devices are rejected by the Replicant team.
That’s how Replicant was born.
Most of Android’s non-free code deals with hardware F-Droid
such as the camera, GPS, RIL (Radio interface layer), etc. So, One of the reasons for the grand success of Android is the
Replicant attempts to build a fully functional Android operating wide range of apps that is readily available on the Google
system that relies completely on free and open source code. Play store for anyone to download.
The project began in 2010—named after the fictional For Replicant, you cannot use Google Play but you can
Replicant androids in the movie ‘Blade Runner’. Denis use an alternative—F-Droid, which has only free and open
‘GNUtoo’ Carikli and Paul Kocialkowski are the current lead source software.
developers for the Replicant. The problem with Google Play is that many apps on it are
In the beginning, they began by writing code for the closed source. So since we may not be able to look at their source
HTC ‘Dream’ in order to make it a fully functional phone code, there’s a great possibility of an app that could spy on you
that did not rely on any non-free code. They made a little or worse, steal your data being installed on it. By installing
progress such as getting the audio to work with fully apps from Google Play, users inadvertently promote non-free
free and open source code, and after that they succeeded software. Some apps also track their users’ whereabouts.
in making and receiving calls. You can find a video of F-Droid, on the other hand, makes sure all apps are built
Replicant working on the HTC Dream on YouTube. from their source code. When an application is submitted to
The earlier versions of Replicant were based on AOSP F-Droid, it is in the form of source code. The F-Droid team
(Android Open Source Project) but in order to support more builds it into a nice APK package from the source, so the user
devices, the base was changed to Cynogenmod—another is assured that no other malicious code is added to that app
custom ROM which is free but still has some proprietary since you can view the source code.
drivers. The Replicant version 4.2 was released on January The F-Droid client app can be downloaded from
22, 2014, which is based on Cynogenmod 10.1. the F-Droid website. This app is extremely handy for
On January 3, 2014, the Replicant team released its downloading and installing apps without hassle. You don’t
full-libre Replicant SDK. You’ve probably noticed that the need an account but can install various versions of apps
Android SDK is no longer open source software. When you provided there. You can choose the one that works best for
try to download it, you will be presented with lengthy ‘terms you and also easily get automatic updates.
and conditions’, clearly stating that you must agree to that If you’re an Android user but want FOSS on your device,
license’s terms or you are not allowed to use that SDK. F-Droid is available to you. You have to allow your device
Replicant is all about freedom. As you can see, the to install apps from sources other than Google Play (which
Replicant team is labelling it the truly free version of Android. would be F-Droid). Using the single F-Droid client, you can
The team didn’t focus much on open source, although the easily browse through various sections of apps and easily
source code for Replicant is open to everyone. When it comes remove the installed apps in your device or update your apps.
to freedom, from the users’ perspective, the word simply Using Replicant doesn’t grant your device complete
means that they are given complete control over their device, protection, but it can make your device less vulnerable to
even though they might not know what to do with that control. threats. It can offer you real control over your device and
The Replicant team isn’t making any compromises when it you can enjoy true freedom. If your device doesn’t support
comes to the user’s freedom. Although there may be some Replicant, you can use Cynogenmod instead, which is
trade-offs concerning freedom, the biggest challenge for the officially prescribed as an alternative to Replicant.
Replicant team is to write hardware drivers and firmware that As Benjamin Franklin put it, “Those who give up
can support various devices. This is a difficult task since one essential liberty to purchase a little temporary safety, deserve
Android device may differ from another. It’s not surprising that neither liberty nor safety.” It’s up to you to choose between
they mainly differ in their hardware capabilities. That is why liberty and temporary safety.
some apps that work well on one device may not necessarily
work well on another. This problem could be solved if device
manufacturers decide that the drivers and firmware should be By: Magimai Prakash
given to the public, but we all know that’s not going to happen. By:
The Anil
author hasKumar
completed aPugalia
B.E. in computer science. As he
That’s why there are some devices running on Replicant that is deeply interested in Linux, he spends most of his leisure time
exploring open source.
still don’t have 3D graphics, GPS, camera access, etc, but as

www.OpenSourceForU.com  |  OPEN SOURCE For You  |  August 2014  |  93


For U & Me Open Strategy

Firefox May Change the


Mobile Market!
TCL Communication’s smartphone brand, Alcatel One Touch, launched the Alcatel
One Touch Fire smartphone ‘globally’ last year. Fire was the first ever phone to run
the Firefox OS, an open source operating system created by Mozilla. According to
many, this OS is in some ways on par with Android, if not better. Sadly, Fire has failed to
see the light of day in India, because our smartphone market has embraced Android
on such a large scale that other OSs find it hard to make an impact. In a candid chat,
Piyush A Garg, project manager, APAC BU India, spoke to Saurabh Singh from Open
Source For You about how the Firefox OS could be the next big thing and why Alcatel
One Touch has not yet given up on it.

I
t was not very long ago (July 25, 2011, to be precise) that but there’s such a big hoo-ha about Android. Last year, it
Andreas Gal, director of research at Mozilla Corporation, was a big thing. First, you have to create some space for the
announced the ‘Boot to Gecko’ project (B2G) to build a OS itself, and then create a buzz,” revealed Piyush A Garg,
complete, standalone operating system for the open Web, which project manager, APAC BU India.
could provide a community-based alternative to commercially According to Garg, there’s still a basic lack of awareness
developed operating systems such as Apple’s iOS and regarding the Firefox OS in India. “Techies might be aware of
Microsoft’s Windows Phone. Besides, the Linux-based operating what the Firefox OS is but the average end user may not. And
system for smartphones and tablets (among others) also aimed ultimately, it is the end user who has to purchase the phone.
to give Google’s Android, Jolla’s Sailfish OS as well as other We have to communicate the advantages of Mozilla Firefox
community-based open source systems such to the end user, create awareness and only then launch a
as Ubuntu Touch, a run for their money product based on it,” he said.
(pun intended!). Although, on
paper, the project boasts of Alcatel’s plans for Firefox-based
tremendous potential, it has smartphones
failed to garner the kind of So the bottom line is, India will not
response its developers had see the Alcatel One Touch Fire any
initially hoped for. The time soon; or maybe not see it at
relatively few devices in a all. “Sadly, yes. Fire is not coming
market that is flooded with to India at all. It’s not going to
the much-loved Android come to India because Fire was
OS could be one possible an 8.89 cm (3.5 inch) product.
reason. Companies like Instead, we might be coming up
ZTE, Telefónica and with an 8.89-10.16 cm (3.5-4 inch)
GeeksPhone have taken the product. Initially, we were considering
onus of launching Firefox OS- a 12.7-13.97 cm (5-5.5 inch) device.
based devices; however, giants However, we are looking to come
in the field have shied away from up with a low-end phone and such
adopting it, until now. a device cannot come in the 12.7 cm
Hong Kong’s Alcatel One (5 inch) segment. So, once the product is
Touch is one of the few companies launched with an 8.89—10.16 cm (3.5-4 inch)
that has bet on Firefox by launching the screen with the Firefox OS, we may launch a whole series of
Alcatel One Touch Fire smartphone globally, last year. Firefox OS-based devices,” said Garg.
The Firefox OS 1.0-based Fire was primarily intended for
emerging markets with the aim of ridding the world of The Firefox OS ecosystem needs a push in India
feature phones. Sadly, the Indian market was left out when With that said, it has taken a fairly long time for the company to
the first Firefox OS-based smartphone was tested—could realise that the Firefox OS could be a deal-breaker in an extensive
Android dominance be the reason? “Alcatel Fire (Alcatel market such as India. “Firefox OS may change the mobile game.
4012) was launched globally last year. We tried everything, However, it still needs to grow in India. Considering the fact

94  |  August 2014  |  OPEN SOURCE For You  |  www.OpenSourceForU.com


Open Strategy For U & Me

that Android has such a huge base in India, we are waiting ideas. Then either we accept them, which means we buy the idea,
for the right time to launch the Firefox-based smartphones or we work out some kind of association with which developers
here,” he said. But is the Firefox OS really a ‘deal-breaker’ get revenue out of the collaboration. In China, more than 100,000
for customers? “The Firefox OS can be at par with Android. developers are engaged in building apps for Alcatel. India is on
The major advantages of Mozilla Firefox are primarily the our to-do list for building a community of app dvelopers. It’s
memory factor and the space that it takes—the entire OS as currently at an ‘amateur stage’; however, we expect things to
well as the applications. It’s not basically an API kind of OS; happen eventually,” he said.
it’s an installation directly coming from HTML. That’s a major Although there’s no definite time period for the launch
advantage. Also, apps for the OS are of Alcatel’s One Touch Firefox
built using HTML5, which means OS-based smartphone in India
that, in theory, they run on the Web (Garg is confident it will be here
and on your phone or tablet. What by the end of 2014, followed by
made Android jump from Jelly a whole series, depending upon
Bean to KitKat (which requires low how it’s received), one thing that
memory) is the fact that the end user is is certain is that the device will
looking at a low memory OS. Mozilla be very affordable. Cutting costs
Firefox is also easy to use. I won’t while developing such low-end
say ‘better’ or ‘any less’, but at par devices is certainly a challenge
with Android,” said Garg, evidently for companies, since customers do
confident of the platform. tend to choose ‘value for money’
To take things forward, vis-à-vis when making their purchases.
the platform, Alcatel One Touch is “We are not allowed to do any
also planning to come up with an ‘trimming’ with respect to the
exclusive App Store, with its own set hardware quality—since we
of apps. “We have already planned our are FCC-compliant, we cannot
‘play store’, and tied up with a number compromise on that,” said Garg.
of developers to build our own apps. I cannot comment on the So what do companies like Alcatel One Touch actually
timeline of the app store but it’s in the pipeline. We currently do to cut manufacturing costs? “We look at larger quantities
have as many as five R&D centres in China. We are not yet in that we can sell at a low cost, using competitive chipsets
India, although we are looking to engage developers here as that are offered at a low price. On the hardware side, we
well. We’re already in the discussion phase on that front,” said may not give lamination in a low-cost phone, or we may not
Garg. So, what’s the company’s strategy to engage developers in offer Corning glass or an IPS, and instead give a TFT, for
particular? “We invite developers to come up and give in their instance,” Garg added.

OSFY Magazine Attractions During 2014-15


Month Theme Featured List buyers’ guide
March 2014 Network monitoring Security -------------------
April 2014 Android Special Anti Virus Wifi Hotspot Devices
May 2014 Backup and Data Storage Certification External Storage
June 2014 Open Source on Windows Mobile Apps UTMs fo SMEs
July 2014 Firewall and Network security Web Hosting Solutions Providers MFD Printers for SMEs
August 2014 Kernel Development Big Data solution Providers SSDs for Servers
September 2014 Open Source for Start-ups Cloud Android Devices
October 2014 Mobile App Development Training on Programming Languages Projectors
November 2014 Cloud Special Virtualisation Solutions Providers Network Switches and Routers
December 2014 Web Development Leading Ecommerce Sites AV Conferencing
January 2015 Programming Languages IT Consultancy Service Providers Laser Printers for SMEs
February 2015 Top 10 of Everything on Open Source Storage Solutions Providers Wireless Routers

www.OpenSourceForU.com  |  OPEN SOURCE For You  |  August 2014  |  95


Interview For U & Me
HP’s latest mantra is the ‘new style of IT’. Conventional It is the time for converged systems,
servers and data storage systems do not work for the company which are opening up an altogether
and its style of IT any longer. This is about the evolution of
converged systems that have taken over the traditional forms
new dimension of IT. With converged
of IT. The company is taking its mantra forward in every systems, you get three different
possible way. systems comprising the compute part,
HP has recently launched the HP Apollo family of
high-performance computing (HPC) systems. The company
and the storage and the networking
claims that HP Apollo is capable of delivering up to four parts, to work together.
times the performance of standard rack servers while using
less space and energy. The new offerings reset data centre oriented database like Vertica, we have a converged system
expectations by combining a modular design with improvised for virtualisation. Some time back, servers were a sprawl, but
power distribution and cooling techniques. Apart from this, these days, virtual machines are a big sprawl.
the company claims that HP Apollo has a higher density at
a lower total cost of ownership. The air-cooled HP Apollo
6000 System maximises performance efficiency and makes
HPC capabilities accessible to a wide range of enterprise
Q Converged systems have been around for about 18
months now. Can you throw some light on customers’
experiences with these systems?
customers. It is a supercomputer that combines high levels of Yes, converged systems have been around for a while now
processing power with a water-cooling design for ultra-low and we have incrementally improved on their management.
energy usage. What we have today as a CSM for virtualisation or CSM for
These servers add to the fast pace of changes going on in Hanna, wasn’t there a year back. The journey has been good
the IT space today. Vikram K from HP shares his deep insight and plenty of enterprises have expressed interest in such
into how IT is changing. Read on... evolved IT. With respect to the adoption rate, the IT/ITES
segment has been the first large adopter of converged systems,

Q Since you have just launched your latest servers here, what
is your take on the Indian server market?
From a server standpoint, we are very excited, because virtually
primarily because it has a huge issue about just doing the
systems integration of ‘X’ computers that compute ‘Y’ storage
while somebody else takes care of the networks . Now, it
every month and a half, we’ve been offering a new enhancement is the time for systems that come integrated with all three
or releasing a new product, which is different from the previous elements, and the best part is that it is very workload specific.
one. So the question is - how are these different? Well, we have We see a lot of converged systems being adopted in the
basically gone back and looked at things through the eyes of the area of manufacturing also. People who had deployed SAP
customer to understand what they expect from IT. They want earlier have some issues. One of them is that it is multi-tier,
to get away from conventional IT and move to an improvised i.e., it has multiple application servers and multiple instances
level of IT. So we see three broad areas: admin controlled IT; in the database. So when they want to run analytics, it gets
user controlled IT, which is more like the cloud and is workload extremely slow because a lot of tools are used to extract
specific; and then there is application-specific ‘compute and information. We came up with a solution, which customers
serve’ IT. These are the three distinct combinations. Within across the manufacturing and IT/ITES segments are now
these three areas, we have had product launches, one after the discovering. That is why we see a very good adoption of
other. The first one, of course, is an area where we dominate. So, converged systems across segments.
we decided to extend the lead and that is how the innovations
continue to happen.
Q We hear a lot about software defined data centres
(SDCs). Many players like VMware are investing a lot in

Q What do you mean by ‘new style of IT’?


It is the time for converged systems, which are opening up
an altogether new dimension of IT. With converged systems,
this domain. How do you think SDCs are evolving in India?
The software-defined data centre really does have the potential
to transform the entire IT paradigm and the infrastructure and
you get three different systems comprising the compute part, application landscape. We have recently launched new products
and the storage and the networking parts, to work together. and services in the networking, high-performance computing,
A variety of IT heads are opting for this primarily because storage and converged infrastructure areas. They will allow
they want to either centralise IT, consolidate or improve enterprises to build software-defined data centres and hybrid
the overall efficiency and performance. When they do that, cloud infrastructures. Big data, mobility, security and cloud
they need to have better converged systems management. computing are forcing organisations to rethink their approach to
So we have combined our view of converged systems and technology, causing them to invest heavily in IT infrastructure.
made them workload specific. These days we have workload So, when we are talking about software defined data centres, we
specific systems. For example, with something like a column- are talking about a scenario in which it can be a heterogeneous

www.OpenSourceForU.com  |  OPEN SOURCE For You  |  August 2014  |  97


For U & Me Interview
setup of hypervisors, infrastructure, et al, which will help you
migrate from one to another, seamlessly. Q So, are SMBs ready to jump onto the integrated systems’
bandwagon?
Yes, there are quite a few SMBs in India that are very

Q So, software defined data centres could replace


traditional data centres in the future? Therefore, can we
consider them a part of new-age IT?
positive about integrated systems. Customers, irrespective
of the segment that they belong to, look at it from the angle
of how the business functions, and what kind of specificity
Well, I don’t believe that is so. We have been living with old TP they want to get to. I wouldn’t be particularly concerned
for about 30-35 years. As the cloud, big data and mobility pick about the segment, but I would look at it from the context of
up even more, and are used in the context of analytics, you will what workload specificity a customer wants.
still have two contexts residing together, which is old TP and
old AP. Then you would have more converged systems and will
talk about converged system management. That is exactly our
version of how we want to define software defined data centres.
Q What are the issues that you have seen IT heads face
while adopting converged IT systems?
Fortunately, we have not heard of many challenges that
the IT heads have faced while adopting converged IT

Q We talk a lot about integrated and converged systems. It


sounds like a great idea as it would involve all the solutions
coming in from one vendor. But does that not lead to some kind
solutions. In fact, it has eased things for them, primarily
because they have been told in advance about what they
are getting into. They are no more dealing with three
of vendor lock-in? separate items. They are getting into one whole thing,
No it doesn’t, primarily because these are workload specific. which is getting deployed and what they used to take
So, one would not implement a converged system just for months to achieve, is done in two or three days. This
the sake of it. As I mentioned, it has to be workload specific. is because we run the app maps prior to the actual sale
So, if you want to virtualise, then you would do one type and tell them what exactly will reach them, how it will
of converged system or integrated system. If you want to run and what kind of performance it will deliver. The
do Hanna, that is an entirely different converged system. major challenges are related to the fact that they are on
What helps the customers is that it breaks down the cycle of the verge of a transition (from the business perspective),
project deployment and hence, frees up a lot of resources that and they see any transition as being slightly risky.
would otherwise be consumed for mere active deployment or Hence, they thoroughly check on the ROI and are
transitioning from one context to another. generally very cautious.

98  |  August 2014  |  OPEN SOURCE For You  |  www.OpenSourceForU.com


Overview For U & Me

Swap Space for Linux:


How Much is Really Needed?
Linux divides its physical memory (RAM) into chunks called pages. Swapping is the process
whereby pages get transferred to a preconfigured hard disk area. The quantum of swap space
is determined during the Linux installation process. This article is all about swap space, and
explains the term in detail so that newbies don’t find it a problem choosing the right amount of it
when installing Linux.

T
he virtual memory of any system is a combination swap space should be double the amount of physical memory
of two things - physical memory, which can be (RAM) available, i.e., if we have 16 GB of RAM, then we
accessed, i.e., RAM, and swap space. The latter ought to allot 32 GB to the swap space. But this is not very
holds the inactive pages that are not accessed by any effective these days.
running application. Swap space is used when the RAM Actually, the amount of swap space depends on the kind
has insufficient space for active processes, but it has of application you run and the kind of user you are. If you are
certain spaces which are inactive at that point in time. a hacker, you need to follow the old rule. If you frequently
These inactive pages are temporarily transferred to the use hibernation, then you would need more swap space
swap space, which frees up space in the RAM for active because during hibernation, the kernel transfers all the files
processes. Hence, the swap space acts as temporary from the memory to the swap area.
storage that is required if there is insufficient space So how can the swap space improve the performance of
in your RAM for active processes. But as soon as the Linux? Sometimes, RAM is used as a disk cache rather than
application is closed, the files that were temporarily to store program memory. It is, therefore, better to swap out
stored in the swap space are transferred back to the RAM. a program that is inactive at that moment and, instead, keep
The access time for swap space is less. In short, swapping the often-used files in cache. Responsiveness is improved by
is required for two reasons: swapping pages out when the system is idle, rather than when
ƒƒ When more memory than is available in physical memory the memory is full.
(RAM) is required by the system, the kernel swaps less- Even though we know that swapping has many
used pages and gives the system enough memory to run advantages, it does not necessarily improve the performance
the application smoothly. of Linux on your system, always. Swapping can even make
ƒƒ Certain pages are required by the application only at your system slow if the right quantity of it is not allotted.
the time of initialisation and never again. Such files are There are certain basic concepts behind this also. Compared
transferred to the swap space as soon as the application to memory, disks are very slow. Memory can be accessed in
accesses these pages. nanoseconds, while disks are accessed by the processor in
After understanding the basic concept of swap space, milliseconds. Accessing the disk can be many times slower
one should know what amount of space needs to be actually than accessing the physical memory. Hence, the more the
allotted to the swap space so that the performance of Linux swapping, the slower the system. We should know the
actually improves. An earlier rule stated that the amount of amount of space that we need to allot for swapping. The

www.OpenSourceForU.com  |  OPEN SOURCE For You  |  july 2014  |  99


For U & Me Overview

following rules can effectively help to improve Linux’s cat /proc/sys/vm/swappiness


performance on your system.
For normal servers: A temporary change (lost at reboot) in a swappiness value
ƒƒ Swap space should be equal to RAM size if RAM size is of 10, for example, can be done with the following command:
less than 2 GB.
ƒƒ Swap space should be equal to 2 GB if RAM size is sudosysctlvm.swappiness=10
greater than 2 GB.
For heavy duty servers with fast storage requirements: For a permanent change, edit the configuration file as follows:
ƒƒ Swap space should be equal to RAM size if RAM size is
less than 8 GB. gksudogedit /etc/sysctl.conf
ƒƒ Swap space should be equal to 0.5 times the size of the
RAM if the RAM size is greater than 8 GB. If the swappiness value is 0, then the kernel restricts the
If you have already installed Linux, you can check swapping process; and if the value is 100, the kernel swaps
your swap space by using the following command in the very aggressively.
Linux terminal: So, while Linux as an operating system has great powers,
you should know how to use those powers effectively so that
cat /proc/swaps you can improve the performance of your system.

Swappiness and how to change it By: Roopak T J


Swappiness is a parameter that controls the tendency of the
The author is an open source contributor and enthusiast. He
kernel to transfer the processes from physical memory to has contributed to a couple of open source organisations
‘swap space’. It has a value between 0 to 100 and in Ubuntu, including Mediawiki and LibreOffice. He is currently in his
it has a default value of 60. To check the swappiness value, second year at Amrita University (B. Tech). You can contact him
at [email protected]
use the following command:

THE COMPLETE MAGAZINE


ON OPEN SOURCE

www.electronicsforu.com www.eb.efyindia.com www.OpenSourceForu.com www.ffymag.com www.efyindia.com

100  |  july 2014  |  OPEN SOURCE For You  |  www.OpenSourceForU.com


TIPS
& TRICKS
Booting an ISO directly from the hard Now reboot the system. The new menu entry will be
drive using GRUB 2 added in the Grub boot option.
We often find ourselves in a situation in which we have —Kiran P S,
an ISO image of Ubuntu on our hard disk and we need [email protected]
to test it by first running it. Try out this method for using
the ISO image. Playing around with arguments
Create a GRUB menu entry by editing the /etc/ While writing shell scripts, we often need to use
grub.d/40_custom file. Add the text given below just after different arguments passed along with the command. Here is
the existing text in the file: a simple tip to display the argument of the last command.
Use ‘!!:n’ to select the nth argument of the last command,
#gksu gedit /etc/grub.d/40_custom and ‘!$’ for the last argument.

Add the menu entry: dev@home$ echo a b c d

menuentry “Ubuntu 12.04.2 ISO” a b c d

{ dev@home$ echo !$

set isofile=”/home/<username>/Downloads/ubuntu-12.04.2- echo d


desktop-amd64.iso” #path of isofile
d
loopback loop (X,Y)$isofile
dev@home$ echo a b c d
linux (loop)/casper/vmlinuz.efi boot=casper iso-scan/
filename=$isofile noprompt noeject a b c d

initrd (loop)/casper/initrd.lz dev@home$ echo !!:3

} echo c
c
isofile variable is not required but simplifies the —Shivam Kotwalia,
creation of multiple Ubuntu ISO menu entries. [email protected]
The loopback line must reflect the actual location of the
ISO file. In the example, the ISO file is stored in the user’s Retrieving disk information from
Downloads folder. X is the drive number, starting with 0; the command line
Y is the partition number, starting with 1. sda5 would be Want to know details of your hard disk even without physically
designated as (hd0,5) and sdb1 would be (hd1,1). Do not touching it? Here are a few commands that will do the trick. I
use (X,Y) in the menu entry but use something like (hd0,5). will use /dev/sda as my disk device, for which I want the details.
Thus, it all depends on your system’s configuration.
Save the file and update the GRUB 2 menu: smartctl -i /dev/sda

#sudo update-grub smartctl is a command line utility designed to perform

102  |  August 2014  |  OPEN SOURCE For You  |  www.OpenSourceForU.com


SMART (Self-Monitoring, Analysis and Reporting is that a single command does a single operation and does it well.
Technology) tasks such as printing the SMART self-test and
error logs, enabling and disabling SMART automatic testing, —Pankaj Rane,
and initiating device self-tests. When the command is used [email protected]
with the ‘ –i ’ switch, it gives information about the disk.
The output of the above command will show the model Downloading/converting HTML pages to PDF
family, device model, serial number, firmware version, user wkhtmltopdf is a software package that converts
capacity, etc, of the hard disk (sda). HTML pages to PDF. If this is not installed on your system,
You can also use the hdparm command: use the following command to do so:

hdparm -I /dev/sda $sudo apt-get install wkhtmltopdf

hdparm can give much more information than smartctl. After installing, you can run the command using the
following syntax:
—Munish Kumar,
[email protected] $wkhtmltopdf URL[oftheHTMLfile] NAME[of the PDF file].pdf

Writing an ISO image file to a CD-ROM from the command line
We usually download ISO images of popular Linux distros for installation or as live media, but end up using a GUI CD burning tool to create a bootable CD or DVD ROM. But, if you're feeling a bit geeky, you could try doing so from the command line too:

# cdrecord -v speed=0 driveropts=burnfree -eject dev=1,0,0 <src_iso_file>

speed=0 instructs the program to write the disk at the lowest possible drive speed. But, if you are in a hurry, you can try speed=1 or speed=2. Keep in mind that these are relative speeds.
The -eject switch instructs the program to eject the disk after the operation is complete.
Now, the most important part to specify is the device ID. It is absolutely important that you specify the device ID of your CD ROM drive correctly, or you may end up writing the ISO to some other place on the disk and corrupting your entire hard disk.
To find out the device ID of your CD ROM drive, just run the following command prior to running the first command:

#cdrecord -scanbus

Your CD ROM's device ID should look something like what's shown below:

1,0,0

Also, note that you cannot create a bootable DVD disk using this command. But do not be disheartened; there is another, simpler command to burn a bootable DVD:

# growisofs -dvd-compat -speed=0 -Z /dev/dvd=myfile.iso

Here, /dev/dvd is the device file that represents your DVD ROM. It is quite likely to be the same on your system as well. Do not use growisofs to burn a CD ROM. The beauty of Linux is that a single command does a single operation and does it well.
—Pankaj Rane,
[email protected]
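Once the burn completes, you may want to verify the disc against the image. Here is a minimal sketch, assuming your drive shows up as /dev/cdrom and the ISO is a whole number of 2048-byte blocks (ISO 9660 images normally are); it reads back exactly as many blocks as the image contains and compares checksums:

ISO=myfile.iso
dd if=/dev/cdrom bs=2048 count=$(( $(stat -c %s "$ISO") / 2048 )) | md5sum
md5sum "$ISO"

If the two checksums match, the disc was written correctly.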


Downloading/converting HTML pages to PDF
wkhtmltopdf is a software package that converts HTML pages to PDF. If it is not installed on your system, use the following command to install it:

$sudo apt-get install wkhtmltopdf

After installing, you can run the command using the following syntax:

$wkhtmltopdf URL[oftheHTMLfile] NAME[of the PDF file].pdf

For example, by using:

$wkhtmltopdf opensourceforu.com OSFY.pdf

…the file OSFY.pdf will be saved to the current working directory. You can read the documentation to know more about this.
—Manu Prasad,
[email protected]
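wkhtmltopdf also accepts layout options. As a small sketch (flag names may vary between versions, so check wkhtmltopdf --help on your system), the following asks for A4 paper in landscape orientation:

$wkhtmltopdf --page-size A4 --orientation Landscape opensourceforu.com OSFY.pdf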
Going invisible on the terminal
Did you ever think that you could type commands that would be invisible on your screen but would still execute, provided you typed them correctly? This can easily be done by changing the terminal settings using the following command:

stty -echo

To restore the visibility of your commands, just type the following command:

stty echo

Note: Only the 'minus' sign has been removed.
—Sumit Agarwal,
[email protected]
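A common practical use of this trick is reading a password in a shell script without echoing the keystrokes. A minimal sketch (the variable name is just an example):

printf "Password: "
stty -echo
read password
stty echo
printf "\n"

In Bash, you can get the same effect in one step with read -s password.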
Share Your Linux Recipes!
The joy of using Linux is in finding ways to get around problems: take them head on, defeat them! We invite you to share your tips and tricks with us for publication in OSFY so that they can reach a wider audience. Your tips could be related to administration, programming, troubleshooting or general tweaking. Submit them at www.linuxforu.com. The sender of each published tip will get a T-shirt.



For U & Me | Overview

The Mozilla Location Service: Addressing Privacy Concerns
Dubbed a research project, Mozilla Location Service is the crowd-sourced mapping
of wireless networks (Wi-Fi access points, cell phone towers, etc) around the world.
This information is commonly used by mobile devices and computers to ascertain
their location when GPS services are not available. The entry of Mozilla into this field
is expected to be a game changer. So get to know more about Mozilla’s MozStumbler
mobile app as well as Ichnaea.

The Mozilla mission statement expresses a desire to promote openness, innovation and opportunity on the Web, and Mozilla is trying to live up to this pretty seriously. Firefox, Thunderbird, Firefox OS… the list of Mozilla's open source products keeps growing. Yet there are several areas in which tech giants like Google, Nokia and Apple are dominant, and the mobile ecosystem is one of them. Mozilla is now trying to break into this space. After Firefox OS, the foundation now offers a new service for mobile users.

There are several services that a user might not even be aware of while using a cell phone. The network-based location service is one of the services cell phone owners use most to determine their location when GPS is not available. Several companies currently offer this service, but there are major privacy concerns associated with it. It is no secret that advertising companies track a user's location history and offer ads or services based on it.

[Figure 1: The MozStumbler app | Figure 2: MozStumbler options | Figure 3: MozStumbler settings]
Till now, there was no transparent option among these services, but Mozilla has come to our rescue, to prevent the tech giants from sniffing out our locations. As stated on Mozilla's location service website, "The Mozilla Location Service is a research project to investigate crowd-sourced mapping of wireless networks (Wi-Fi access points, cell towers, etc) around the world. Mobile devices and desktop computers commonly use this information to figure out their location when GPS satellites are not accessible."

In the same statement, Mozilla acknowledges the presence of, and the challenges presented by, the other services: "There are few high-quality sources for this kind of geolocation data currently open to the public. The Mozilla Location Service aims to address this issue by providing an open service to provide location data."

This service provides geolocation lookups based on publicly observable cell tower and Wi-Fi access point information. Mozilla has come out with an Android app, called MozStumbler, to collect publicly observable cell tower and Wi-Fi data.

This app scans and uploads information about cell towers and Wi-Fi access points to Mozilla's servers. The latest stable version of the app, 0.20.5, is ready for download. MozStumbler provides the option to upload the scanned data over a Wi-Fi or cellular network, but you don't need to be online while scanning; you can upload the data afterwards.

Note: 1. This app is not available on the Google Play store, but you can download it from https://github.com/MozStumbler/releases/
2. The Firefox OS version of this app is on its way too. You can stay abreast of what's happening with the Firefox OS app at http://github.com/FxStumbler/

You can optionally give your username in this app to track your contributions. Mozilla has also created a leader board to let users track and rank their contributions, apart from the more detailed statistics available on the website. No user-identifiable information is collected through this app.

Mozilla is not only collecting the data but also providing users with a publicly accessible API, code named 'Ichnaea', which means 'the tracker'. The API can be used to submit data, search the data or look up your own location. As data collection is still in progress, it is not recommended that you use this service for commercial applications, but you can try it out on your own just for fun.

Note: Mozilla Ichnaea can be accessed at https://mozilla-ichnaea.readthedocs.org
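To get a feel for what a lookup involves, here is a hypothetical curl sketch of a search request. The endpoint path, the JSON fields and the MAC addresses shown are illustrative only; consult the Ichnaea documentation linked above for the actual API contract:

curl -X POST https://location.services.mozilla.com/v1/search \
  -H "Content-Type: application/json" \
  -d '{"wifi": [{"key": "01:23:45:67:89:ab"}, {"key": "01:23:45:67:89:cd"}]}'

If the service recognises the access points, it replies with an estimated latitude, longitude and accuracy radius.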
The MozStumbler app provides an option for geofencing, which means you can pause the scanning within a one-km radius of a desired location. This addresses user concerns about collecting behavioural commute data, such as home, work and travelling habits.

In short, Mozilla is trying to provide a high-quality location service to the general public at no cost! Recently, Mozilla India held a competition, the 'Mozilla Geolocation Pilot Project India', which encouraged more and more users to scan their areas. To contribute to this project, you can fork the repository on GitHub or just install the app; you will be welcomed aboard.

By: Vinit Wankhede
The author is a fan of free and open source software. He is currently contributing to the translation of the MozStumbler app for Mozilla location services.

