Open Source For You
More content for non-IT readers
I have been reading your magazine for the past few years. The company I work in is in the manufacturing industry. Similarly, your subscribers' database may have more individuals like me, from companies that are not directly related to the IT industry. Currently, your primary focus is on technical matters, and the magazine carries articles written by skilled technical individuals, so OSFY is really helpful for open source developers. However, you also have some non-IT subscribers like us, who understand that something great is available in the open source domain, which they can deploy to reduce their IT costs. But, unfortunately, your magazine does not inform us about open source solutions providers.
I request you to introduce the companies that provide end-to-end IT solutions on open source platforms, including thin clients, desktops, servers, virtualisation, embedded customised OSs, ERP, CRM, MRP, email and file servers, etc. Kindly publish relevant case studies, with the overall cost savings and benefits. Just as you feature job vacancies, do give us information about the solutions providers mentioned above.
—Shekhar Ranjankar; [email protected]

ED: Thank you for your valuable feedback. We do carry case studies of companies deploying open source, from time to time. We also regularly carry lists of solutions providers from different open source sectors. We will surely take note of your suggestion and try to continue carrying content that interests non-IT readers too.

Requesting an article on Linux server migration
I am glad to receive my first copy of OSFY. I have a suggestion to make: if possible, please include an article on migrating to VMware (from a physical Linux server to VMware ESX). Also, do provide an overview of some open source tools (like Ghost for Linux) to take an image of a physical Linux server.
—Rohit Rajput; [email protected]

ED: It's great to hear from you. We will definitely cover the topics suggested by you in one of our forthcoming issues. Keep reading our magazine. And do feel free to get in touch with us if you have any such valuable feedback.

A request for the Backtrack OS to be bundled on the DVD
I am a huge fan of Open Source For You. Thank you for bundling the Ubuntu DVD with the May 2014 issue. Some of my team members and I require the Backtrack OS. Could you provide this in your next edition? I am studying information sciences for my undergrad degree. Please suggest the important programming languages that I should become proficient in.
—Aravind Naik; [email protected]

ED: Thanks for writing in to us. We're pleased to know that you liked the DVD. Backtrack is no longer being maintained; its updated successor for penetration testing is known as 'Kali Linux', and we bundled it with the April 2014 issue of OSFY. For career-related queries, you can refer to older OSFY issues, or you can find related articles on www.opensourceforu.com

Overseas subscriptions
Previously, I used to get copies of LINUX For You/Open Source For You and Electronics For You from local book stores but, lately, none of them carry these magazines any more. How can I get copies of all these magazines in Malaysia, and where can I get previous issues?
—Abdullah Abd. Hamid; [email protected]

ED: Thank you for reaching out to us. Currently, we do not have any reseller or distributor in Malaysia for news stand sales, but you can always subscribe to the print edition or the e-zine version of the magazines. You can find the details of how to subscribe to the print editions on www.pay.efyindia.com and, for the e-zine version, please go to www.ezines.efyindia.com
FOSSBYTES Powered by www.efytimes.com
Play Services update 5.0 rolled out to all devices
This version is an advance on the existing 4.4, bringing the Android wearable services API and much more. Mainly focused on developers, this version was announced in 2014. According to the search giant's blog, the newest version of the Google Play store includes many updates that can increase app performance. These include wearable APIs, a dynamic security provider, and improvements in Drive, Wallet and Google Analytics, among others. The main focus is on the Android Wearable platform and APIs, which will enable more applications on these devices. In addition to this, Google has announced a separate section for Android Wear apps in the Play store.
The apps for the Android Wear section in the Google Play store come from Google itself. The collection includes official companion apps for Android devices, Hangouts and Google Maps. The main purpose of the Android Wear Companion app is to let users manage their devices from Android smartphones. It provides voice support, notifications and more. There are third party apps as well, from Pinterest, Banjo and Duolingo.

Events
4th Annual Datacenter Dynamics Converged; September 18, 2014; Bengaluru
The event aims to assist the community in the data centre domain by exchanging ideas, accessing market knowledge and launching new initiatives. Contact: Praveen Nair; Email: [email protected]; Ph: +91 9820003158; Website: http://www.datacenterdynamics.com/

Gartner Symposium IT Xpo; October 14-17, 2014; Grand Hyatt, Goa
CIOs and senior IT executives from across the world will gather at this event, which offers talks and workshops on new ideas and strategies in the IT industry. Website: http://www.gartner.com

Open Source India; November 7-8, 2014; NIMHANS Center, Bengaluru
Asia's premier open source conference, which aims to nurture and promote the open source ecosystem across the sub-continent. Contact: Omar Farooq; Email: [email protected]; Ph: 09958881862; Website: http://www.osidays.com

CeBit; November 12-14, 2014; BIEC, Bengaluru
One of the world's leading business IT events, offering a combination of services and benefits that will strengthen the Indian IT and ITES markets. Website: http://www.cebit-india.com/

5th Annual Datacenter Dynamics Converged; December 9, 2014; Riyadh
The event aims to assist the community in the data centre domain by exchanging ideas, accessing market knowledge and launching new initiatives. Contact: Praveen Nair; Email: [email protected]; Ph: +91 9820003158; Website: http://www.datacenterdynamics.com/

Hostingconindia; December 12-13, 2014; NCPA, Jamshedji Bhabha Theatre, Mumbai
This event will be attended by Web hosting companies, Web design companies, domain and hosting resellers, ISPs and SMBs from across the world. Website: http://www.hostingcon.com/contact-us/

New podcast app for Linux is now ready for testing
An all-new podcast app for Ubuntu was launched recently. This app, called 'Vocal', has a great UI and design. Nathan Dyer, the developer of the project, has released unstable beta builds of the app for Ubuntu 14.04 and 14.10, for testing purposes.
Only next-gen, easy-to-use desktops are capable of running the beta version of Vocal. Installing beta versions of the app on Ubuntu is not as difficult as installing them on KDE, GNOME or Unity, but users can't try the beta version of Vocal without installing the unstable elementary desktop PPA. Vocal is an open source app, and one can easily port it from Ubuntu to mainstream Linux versions. However, Dyer suggests users wait until the first official beta version of the app for the easy-to-use desktops is available.
The official developer's blog has a detailed report on the project.

Google plans to remove QuickOffice from app stores
Google has announced the company's future plans for Google Docs, Slides and Sheets. It has now integrated the QuickOffice service into Google Docs, so there is no longer a need for the separate Google QuickOffice app. QuickOffice was acquired by Google in 2012. It offered free document viewing and editing on Android and iOS for two years. Google has decided to discontinue this free service.
The firm has integrated QuickOffice into the Google Docs, Sheets and Slides app. The QuickOffice app will be removed from the Play Store and Apple's App Store soon, and users will not be able to see or install it. Existing users will be able to continue to use the old version of the app.

Linux Foundation releases Automotive Grade Linux to power cars
The Linux Foundation recently released Automotive Grade Linux (AGL) to power automobiles, a move that marks its first steps into the automotive industry. The Linux Foundation is sponsoring the AGL project to collaborate with the automotive, computing hardware and communications industries, apart from academia and other sectors. The first release of this system is available for free on the Internet. A Linux-based platform called Tizen IVI is used to power AGL. Tizen IVI was primarily designed for a broad range of devices—from smartphones and TVs to cars and laptops.
Here is the list of features that you can experience in the first release of AGL: a dashboard, Bluetooth calling, Google Maps, HVAC, audio controls, Smartphone Link Integration, media playback, a home screen and a news reader. The Linux Foundation and its partners expect this project to change the future of open source software. They hope to see next-generation car entertainment, navigation and other tools powered by open source software. The Linux Foundation expects collaborators to add new features and capabilities in future releases. Development of AGL is expected to continue steadily.

Mozilla to launch Firefox-based streaming dongle, Netcast
After the successful launch of Google's Chromecast, which has sold in millions, everyone else has discovered the potential of streaming devices. Recently, Amazon and Roku launched their own devices. According to GigaOM, Mozilla will soon enter the market with its Firefox-powered streaming device. A Mozilla enthusiast, Christian Heilmann, recently uploaded a photo of Mozilla's prototype streaming device on Twitter. People at GigaOM managed to dig out more on it, and even got their hands on the prototype as soon as that leaked photo went viral on Twitter. The device provides better functionality and options than Chromecast. Mozilla has partnered with an as yet unknown manufacturer to build the device. The prototype has been sent to some developers for testing and reviews. This device, which is called Netcast, has a hackable open bootloader, which lets it run some Chromecast apps.
Mozilla has always looked for an open environment for its products. It is expected that the company's streaming stick will come with open source technology, which will help developers create HDTV streaming apps for smartphones.

Microsoft to abandon X-Series Android smartphones too
It hasn't been long since Microsoft ventured into the Android market with its X series devices, and the company has already revealed plans to abandon the series. With the announcement of up to 18,000 job cuts, the company is also phasing out its feature phones and the recently launched Nokia X Android smartphones.
Here are excerpts from an internal email sent by Jo Harlow, who heads the phone business under Microsoft devices, to Microsoft employees:
"Placing Mobile Phone services in maintenance mode: With the clear focus on Windows Phones, all Mobile Phones-related services and enablers are planned to move into maintenance mode; effective: immediately. This means there will be no new features or updates to services on any Mobile Phones platform as a result of these plans. We plan to consider strategic options for Xpress Browser to enable continuation of the service outside of Microsoft. We are committed to supporting our existing customers, and will ensure proper operation during the controlled shutdown of services over the next 18 months. A detailed plan and timeline for each service will be communicated over the coming weeks.
"Transitioning developer efforts and investments: We plan to transition developer efforts and investments to focus on the Windows ecosystem while improving the company's financial performance. To focus on the growing momentum behind Windows Phone, we plan to immediately begin ramping down developer engagement activities related to Nokia X, Asha and Series 40 apps, and shift support to maintenance mode."

Opera is once again available on Linux
Norwegian Web browser company Opera has finally released a beta version of its browser for Linux. This Opera 24 version for Linux has the same features as Opera 24 on the Windows and Mac platforms. Chrome and Firefox are currently the two most used browsers on the Linux platform; Opera 24 will be a good alternative to them.
As of now, only the developer or beta version of Opera for Linux is available. We are hoping to see a stable version in the near future. In this beta version, Linux users will get to experience popular Opera features like Speed Dial, Discover, Stash, etc. Speed Dial is a home page that gives users an overview of their history, folders and bookmarks. Discover is an RSS reader embedded within the browser; gathering and reading articles of interest is more engaging with it. Stash is like Pinterest within a browser; its UI is inspired by Pinterest. It allows users to collect websites and categorise them. Stash is designed to enable users to plan their travel, work and personal lives with a collection of links.

Unlock your Moto X with your tattoo
Motorola is implementing an alternative security system for the Moto X. It is frustrating to remember difficult passwords, while simpler passwords are easy to crack. To counter this, VivaLnk has launched digital tattoos. The tattoo will automatically unlock the Moto X when applied to the skin.
This technology is based on Near Field Communication to connect with smartphones and authenticate access. Motorola is working on optimising digital tattoos with Google's Advanced Technology and Projects group.
The pricing is on the higher side, but this is a great initiative in wearable technology. Developing user friendly alternatives to the password and PIN has been a major focus of tech companies. Motorola had talked about this in the introductory session of the D11 conference in California this May, when it discussed the idea of passwords in pills or tattoos. The idea may seem like a gimmick, but you never know when it will become commonly used. VivaLnk is working on making this technology compatible with other smartphones too. It is considering entering the domain of creating tattoos of different types and designs.

OpenSSL flaws fixed by PHP
PHP recently pushed out new versions of its popular scripting language, which fix many crucial bugs; two of these are in OpenSSL. The flaws are not as serious as Heartbleed, which popped up a couple of months back. One flaw is directly related to OpenSSL's handling of time stamps, and the other is related to the same area in a different way. PHP 5.5.14 and 5.4.30 fix both flaws.
The other bugs that were fixed were not security related but of a more general type.

CoreOS Linux comes out with Linux containers as a service!
CoreOS has launched a commercial service to ease the workload of systems administrators. The new commercial Linux distribution service can update automatically, so systems administrators do not have to perform any major update manually. Linux companies like Red Hat and SUSE use open source and free applications and libraries for their operations, yet offer commercial subscription services for enterprise editions of Linux. These services cover software, updates, integration, technical support, bug fixes, etc.
CoreOS has a different strategy compared to the competitive services offered by other players in the service, support and distribution industries. Users will not receive any major updates to apply themselves, since CoreOS wants to save them the hassle of manually updating all packages. The company plans to stream copies of updates directly to the OS. CoreOS has named the software 'CoreUpdate'. It controls and monitors software packages and their updates, and also gives administrators the controls to manually update a few packages if they want to. It has a roll-back feature in case an update causes any malfunction in a machine. CoreUpdate can manage multiple systems at a time.
CoreOS was designed to promote the use of the open source OS kernel, which is used in a lot of cloud based virtual servers. CoreOS consumes less than half the resources of a typical instance compared with other Linux distributions. Applications run in a virtualised container called Docker, and they can start instantly. CoreOS was launched in December last year. It uses two partitions, which help in easily updating the distribution: one partition contains the current OS, while the other is used to store the updated OS. This smoothens out the entire process of upgrading a package or an entire distribution. The service can be directly installed and run on a system, or via cloud services like Amazon, Google or Rackspace. The venture capital firm Kleiner Perkins Caufield & Byers has invested over US$ 8 million in CoreOS. The company was also backed by Sequoia Capital and Fuel Capital in the past.
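The dual-partition scheme described in the CoreOS item can be sketched as a tiny state machine: an update is staged to the inactive slot, a reboot switches slots, and a failed boot falls back to the previous one. This is a hypothetical illustration only; the class, slot names and version strings are invented and are not CoreOS's actual CoreUpdate code.

```python
# Illustrative sketch of an A/B (dual-partition) update scheme,
# loosely modelled on the CoreOS approach described above.
# Slot names and logic are hypothetical, not CoreOS internals.

class ABUpdater:
    def __init__(self):
        self.slots = {"A": "v1.0", "B": None}  # installed OS version per slot
        self.active = "A"                      # slot the machine boots from

    def inactive(self):
        return "B" if self.active == "A" else "A"

    def stage_update(self, version):
        # Write the new OS image to the inactive slot; the running
        # system is never modified in place.
        self.slots[self.inactive()] = version

    def reboot(self, boot_ok=True):
        # Switch to the freshly staged slot; if the new OS fails to
        # boot, stay on (roll back to) the previous slot.
        candidate = self.inactive()
        if boot_ok and self.slots[candidate] is not None:
            self.active = candidate
        return self.slots[self.active]

u = ABUpdater()
u.stage_update("v1.1")
print(u.reboot())               # boots the updated slot -> v1.1
u.stage_update("v1.2")
print(u.reboot(boot_ok=False))  # bad update: rolls back, stays on v1.1
```

Because the running system is never patched in place, a broken update costs only a reboot, which is the property the roll-back feature in the story relies on.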
Linux kernel 3.2.61 LTS officially released
The launch of the Linux kernel 3.2.61 LTS, the brand-new maintenance release of the 3.2 kernel series, has been officially announced by Ben Hutchings, the maintainer of the Linux 3.2 kernel branch. While highlighting the slew of changes that come bundled with the latest release, Hutchings advised users to upgrade to it as early as possible.
The Linux kernel 3.2.61 is an important release in the cycle, according to Hutchings. It introduces better support for the x86, ARM, PowerPC, s390 and MIPS architectures. At the same time, it also improves support for the EXT4, ReiserFS, Btrfs, NFS and UBIFS file systems. It also comes with updated drivers for wireless connectivity, InfiniBand, USB, ACPI, Bluetooth, SCSI, Radeon and Intel i915, among others.
Meanwhile, Linux founder Linus Torvalds has officially announced the fifth Release Candidate (RC) version of the upcoming Linux kernel 3.16. RC5 is the successor to Linux 3.16-rc4 and is now available for download and testing. However, since it is a development version, it should not be installed on production machines.

Motorola brings out Android 4.4.4 KitKat upgrade for Moto E, Moto G and Moto X
Motorola has unveiled the Android 4.4.4 KitKat update for its devices in India: the Moto E, Moto G and Moto X. This latest version of Android has an extra layer of security for browsing Web content on the phone.
With this phased rollout, users will receive notifications that will enable them to update their OS; alternatively, the update can also be accessed by way of the settings menu. This release goes on to shore up Motorola's commitment to offering its customers a pure, bloatware-free and seamless Android experience.

Android One: stock Android at a low price
With the Android One platform, Google aims to reach the 5 billion people across the world who still do not own a smartphone. According to Pichai, less than 10 per cent of the population in emerging countries owns smartphones. The promise of a stock Android experience at a low price point is what Android One aims to provide. Home-grown manufacturers such as Micromax, Karbonn and Spice will create and sell these Android One phones, for which hardware reference points, software and subsequent updates will be provided by Google. Even though the spec sheet of Android One phones hasn't been officially released, Micromax is already working on its next low budget phone, which many believe will be an Android One device.

The city of Munich adopts Linux in a big way!
It's certainly not a case of an overnight conversion: the city of Munich began to seek open source alternatives way back in 2003.
With a population of about 1.5 million citizens and thousands of employees, this German city took its time to adopt open source. Tens of thousands of government workstations were to be considered for the change. Its initial shopping list had suitably rigid specifications, spanning everything from avoiding vendor lock-in and receiving regular hardware support updates, to having access to an expansive range of free applications.
In the first stage of migration, in 2006, Debian was introduced across a small percentage of government workstations, with the remaining Windows computers switching to OpenOffice.org, followed by Firefox and Thunderbird.
Debian was replaced by a custom Ubuntu-based distribution named 'LiMux' in 2008, after the team handling the project "realised Ubuntu was the platform that could satisfy our requirements best."

SQL injection vulnerabilities patched in Ruby on Rails
Two SQL injection vulnerabilities were patched in Ruby on Rails, an open source Web development framework now used by many developers. Some high profile websites also use this framework. The Ruby on Rails developers recently launched versions 3.2.19, 4.0.7 and 4.1.3, and advised users to upgrade to these versions as soon as possible. A few hours later, they released versions 4.0.8 and 4.1.4 to fix problems caused by the 4.0.7 and 4.1.3 updates.
One of the two SQL injection vulnerabilities affects applications running on Ruby on Rails versions 2.0.0 through 3.2.18 that use the PostgreSQL database system and query bit string data types. The other affects applications running on Ruby on Rails versions 4.0.0 to 4.1.2 that use PostgreSQL and query range data types.
Despite affecting different versions, these two flaws are related, and both allow attackers to inject arbitrary SQL code using crafted values.
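The Rails flaws above boil down to crafted values being interpreted as SQL. As a generic illustration of the difference between interpolating user input into a query and binding it as a parameter, here is a sketch in Python with sqlite3 (not Rails or PostgreSQL, and the table and crafted value are invented for the demo; the actual Rails flaws involved bit string and range type quoting).

```python
import sqlite3

# Generic illustration of SQL injection via a crafted value.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
conn.execute("INSERT INTO users VALUES ('alice', 0), ('root', 1)")

crafted = "x' OR '1'='1"

# Unsafe: the crafted value rewrites the query's logic,
# leaking every row instead of matching none.
unsafe = conn.execute(
    "SELECT name FROM users WHERE name = '%s'" % crafted).fetchall()
print(unsafe)   # both rows leak

# Safe: a bound parameter is treated purely as data, never as SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (crafted,)).fetchall()
print(safe)     # no rows match
```

The patched Rails versions close the same class of hole at the type-quoting layer, so crafted values reach the database as data rather than as executable SQL.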
Out of the many interviews that we have conducted with recruiters, asking them about what they look for in a candidate, one common requirement seems to be knowledge of open source technology. As per NASSCOM reports, between 20 and 33 per cent of the million students that graduate from India's engineering colleges every year run the risk of being unemployed.
The Attachmate Group, along with Karunya University, has taken a step forward to address this issue. Novell India, in association with Karunya University, has introduced Novell's professional courses as part of the university's curriculum. Students enrolled in the university's M.Tech course for Information Technology will be offered industry-accepted courses. Apart from this, another company of the Attachmate Group, SUSE, has also pitched in to make the students familiar with the world of open source technology.
Speaking about the initiatives, Dr J Dinesh Peter, associate professor and HoD I/C, Department of Information Technology, said, "We have already started with our first batch of students, who are learning SUSE. I think adding open source technology to the curriculum is a great idea because, nowadays, most of the tech companies expect knowledge of open source technology for the jobs that they offer. Open source technology is the future, and I think all universities must have it incorporated in their curriculum in some form or the other."
The university has also gone ahead to provide professional courses from Novell to the students. Dr Peter said, "In India, where the problem of employability of technical graduates is acute, this initiative could provide the much needed shot in the arm. We are pleased to be associated with Novell, which has offered its industry-relevant courses to our students. With growing competition and demand for skilled employees in the technology industry, it is imperative that the industry and academia work in sync to address the lacuna that currently exists in our system."
Growth in the amount of open source software that enterprises use has been much faster than growth in proprietary software usage over the past 2-3 years. One major reason for this is that open source technology has helped companies slash huge IT budgets, while maintaining higher performance standards than they did with proprietary technologies. This trend makes it even more critical to incorporate open source technologies in the college curriculum.
Speaking about the initiative, Venkatesh Swaminathan, country head, The Attachmate Group (Novell, NetIQ, SUSE and Attachmate), said, "This is one of the first implementations of its kind, but we do have engagements with universities in various other formats. Regarding this partnership with Karunya, we came out with a kind of a joint strategy to make engineering graduates ready for the jobs enterprises offer today. We thought about the current curriculum and how we could modify it to make it more effective. Our current education system places more emphasis on theory rather than the practical aspects of engineering. With our initiative, we aim to bring more practical aspects into the curriculum. So we have looked at what enterprises want from engineers when they deploy some solutions. Today, though many enterprises want to use open source technologies effectively, the unavailability of adequate talent to handle those technologies is a major issue. So, the idea was to bridge the gap between what enterprises want and what they are getting, with respect to the talent they require to implement and manage new technologies."
Going forward, the company aims to partner with at least another 15-20 universities this year to integrate its courseware into the curriculum, to benefit the maximum number of students in India. "The onus of ensuring that the technical and engineering students who graduate every year in our country are world-class and employable lies on both the academia as well as the industry. With this collaboration, we hope to take a small but important step towards achieving this objective," Swaminathan added.

About The Attachmate Group
Headquartered in Houston, Texas, The Attachmate Group is a privately-held software holding company, comprising distinct IT brands. Principal holdings include Attachmate, NetIQ, Novell and SUSE.

By: Diksha P Gupta
The author is senior assistant editor at EFY.
A solid state drive (SSD) is a data storage device that uses integrated circuit assemblies as its memory to store data. Now that everyone is switching over to thin tablets and high performance notebooks, carrying heavy, bulky hard disks may be difficult. SSDs, therefore, play a vital role in today's world, as they combine high speed, durability and smaller sizes with vast storage and power efficiency.
SSDs consume minimal power because they do not have any movable parts inside, which leads to less consumption of internal power.

HDDs vs SSDs
The new technologies embedded in SSDs make them costlier than HDDs. "SSDs, with their new technology, will gradually overtake hard disk drives (HDDs), which have been around ever since PCs came into prominence. It takes time for a new technology to completely take over the traditional one. Also, new technologies are usually expensive. However, users are ready to pay a little more for a new technology because it offers better performance," explains Rajesh Gupta, country head and director, SanDisk Corporation India.
SSDs use integrated circuit assemblies as memory for storing data. The technology uses an electronic interface that is compatible with traditional block input/output HDDs, so SSDs can easily replace HDDs in commonly used applications.
An SSD uses a flash-based medium for storage. It is believed to have a longer life than an HDD and also consumes less power. "SSDs are the next stage in the evolution of PC storage. They run faster, and are quieter and cooler than the ageing technology inside hard drives. With no moving parts, SSDs are also more durable and reliable than hard drives. They not only boost performance but can also be used to breathe new life into older systems," says Vishal Parekh, marketing director, Kingston Technology India.

How to select the right SSD
If you're a videographer, or have a studio dedicated to audio/video post-production work, or are in the banking sector, you can look at ADATA's latest launch, which has been featured later in the article. Kingston, too, has introduced SSDs for all possible purposes. SSDs are great options even for gamers, or those who want to ensure their data is saved on a secure medium. Kingston offers an entire range of SSDs, including entry level variants as well as options for general use.
There are a lot of factors to keep in mind when you are planning to buy an SSD: durability, portability, power consumption and speed. Gupta adds, "The performance of SSDs is typically indicated by their IOPS (input/output operations per second), so one should look at the specifications of the product. Also, check the storage capacity. If you're looking for an SSD when you already have a PC or laptop, then double check the compatibility between your system and the SSD you've shortlisted. If you're buying a new system, then you can always check with the vendors as to what SSD options are available. Research the I/O speeds and get updates about how reliable the product is."
"For PC users, some of the important performance parameters of SSDs are related to battery life, heating of the device and portability. An SSD is 100 per cent solid state technology and has no motor inside, so the advantage is that it consumes less energy; hence, it extends the battery life of the device and is quite portable," explains Gupta.
Listed below are a few broad specifications of SSDs, which can help buyers decide which variant to go in for.

Portability
Portability is one of the major concerns when buying an external drive because, as discussed earlier, everyone is gradually shifting to tablets, iPads and notebooks, and so would not want to carry around an external hard disk that is heavier than the computing device. The overall portability of an SSD is evaluated on the basis of its size, shape, weight and ruggedness.

High speed
Speed is another factor people look for while buying an SSD. If it is not fast, it is not worth the buy. SSDs offer data transfer read speeds that range from approximately 530 MBps to 550 MBps, whereas an HDD offers only around 30 to 50 MBps. SSDs can also boot any operating system almost four times faster than a traditional 7200 RPM 500 GB hard disk drive. With SSDs, applications respond up to 12 times faster than with an HDD. A system equipped with an SSD also launches applications faster and offers high performance overall.
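The figures quoted above translate into rough transfer-time estimates. Here is a quick back-of-the-envelope calculation, using the article's approximate sequential speeds of 540 MBps for an SSD and 40 MBps for an HDD (real drives vary with controller, interface and workload).

```python
# Rough transfer-time estimate using the approximate sequential
# speeds quoted in the article (SSD ~540 MBps, HDD ~40 MBps).
# Real-world speeds vary; these are illustrative numbers only.
file_size_mb = 10 * 1024          # a 10 GB file, in MB

ssd_mbps, hdd_mbps = 540, 40      # sequential read speeds, MB per second

ssd_seconds = file_size_mb / ssd_mbps
hdd_seconds = file_size_mb / hdd_mbps

print(f"SSD: {ssd_seconds:.0f} s, HDD: {hdd_seconds:.0f} s, "
      f"speed-up: {hdd_seconds / ssd_seconds:.1f}x")
```

Copying a 10 GB file takes roughly 19 seconds at SSD speeds versus over four minutes at HDD speeds, which is consistent with the order-of-magnitude responsiveness gains the article describes.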
HyperX Fury from Kingston Technology
• A 6.35 cm (2.5 inch), 7 mm solid state drive (SSD)
• Delivers impressive performance at an affordable price
• Speeds up system boot-up, application loading time and file execution
• Controller: SandForce SF-2281
• Interface: SATA Rev 3.0 (6 Gbps)
• Read/write speed: 500 MBps, to boost overall system responsiveness and performance
• Reliability: a cool, rugged and durable drive to push your system to the limits
• Warranty: three years
1200 SSD
from Seagate
• Designed for applications demanding fast, consistent performance
• Dual-port 12 Gbps SAS interface
• Capacity: 800 GB
• Random read/write performance: up to 110K/40K IOPS
• Sequential read/write performance: 500 MBps to 750 MBps
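IOPS and sequential MBps measure different things; a random-I/O figure only converts to rough bandwidth once you pick a block size. A quick sanity check (the 4 KB block size here is an assumption for illustration, not a Seagate figure):

```python
# Convert random-I/O figures (IOPS) to approximate bandwidth,
# assuming 4 KB blocks (a common random-I/O benchmark size).
def iops_to_mbps(iops, block_bytes=4096):
    """Approximate MB/s achieved at the given IOPS with a fixed block size."""
    return iops * block_bytes / 1_000_000

read_mbps = iops_to_mbps(110_000)    # 110K random read IOPS
write_mbps = iops_to_mbps(40_000)    # 40K random write IOPS
print(f"~{read_mbps:.0f} MB/s random read, ~{write_mbps:.0f} MB/s random write")
```

At 4 KB blocks, 110K read IOPS is roughly 450 MB/s, which sits plausibly below the drive's 500-750 MBps sequential range.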
This article provides an introduction to the Linux kernel, and demonstrates how
to write and compile a module.
Have you ever wondered how a computer manages the most complex tasks with such efficiency and accuracy? The answer is: with the help of the operating system. It is the operating system that uses hardware resources to perform various tasks and ultimately makes life easier. At a high level, the OS can be divided into two parts—the first being the kernel, and the other the utility programs. Various user space processes ask for system resources such as the CPU, storage, memory, network connectivity, etc, and the kernel services these requests. This column will explore loadable kernel modules in GNU/Linux.

The Linux kernel is monolithic, which means that the entire OS runs solely in supervisor mode. Though the kernel is a single process, it consists of various subsystems, and each subsystem is responsible for performing certain tasks. Broadly, any kernel performs the following main tasks.

Process management: This subsystem handles the process 'life-cycle'. It creates and destroys processes, allowing communication and data sharing between processes through inter-process communication (IPC). Additionally, with the help of the process scheduler, it schedules processes efficiently and enables resource sharing.

Memory management: This subsystem handles all memory-related requests. Available memory is divided into chunks of a fixed size called 'pages', which are allocated to, or de-allocated from, a process on demand. With the help of the memory management unit (MMU), it maps the process' virtual address space to a physical address space and creates the illusion of a large contiguous address space.

File system: The GNU/Linux system is heavily dependent on the file system. In GNU/Linux, almost everything is a file. This subsystem handles all storage-related requirements, like the creation and deletion of files, compression and journaling of data, the organisation of data in a hierarchical manner, and so on. The Linux kernel supports all major file systems, including MS Windows' NTFS.
Device control: Any computer system requires various devices. But to make the devices usable, there should be a device driver, and this layer provides that functionality. There are various types of drivers present, like graphics drivers, Bluetooth drivers, audio/video drivers and so on.

Networking: Networking is one of the important aspects of any OS. It allows communication and data transfer between hosts. It collects, identifies and transmits network packets. Additionally, it also enables routing functionality.

Dynamically loadable kernel modules
We often install kernel updates and security patches to make sure our system is up-to-date. In the case of MS Windows, a reboot is often required, but this is not always acceptable; for instance, the machine cannot be rebooted if it is a production server. Wouldn't it be great if we could add or remove functionality to/from the kernel on-the-fly, without a system reboot? The Linux kernel allows dynamic loading and unloading of kernel modules. Any piece of code that can be added to the kernel at runtime is called a 'kernel module'. Modules can be loaded or unloaded while the system is up and running, without any interruption. A kernel module is object code that can be dynamically linked to the running kernel using the 'insmod' command, and unlinked using the 'rmmod' command.

Preparing the system
Now it's time for action. Let's create a development environment. In this section, let's install all the required packages on an RPM-based GNU/Linux distro like CentOS, and on a Debian-based GNU/Linux distro like Ubuntu.

Installing on CentOS
First, install the gcc compiler by executing the following command as the root user:

[root]# yum -y install gcc

Then install the kernel development packages:

[root]# yum -y install kernel-devel

Finally, install the 'make' utility:

[root]# yum -y install make

Installing on Ubuntu
First, install the gcc compiler:
dmesg: Any user-space program displays its output on the standard output stream, i.e., /dev/stdout, but the kernel uses a different methodology. The kernel appends its output to the ring buffer, and by using the 'dmesg' command, we can manage the contents of the ring buffer.

void cleanup_module(void)
{
	printk(KERN_INFO "Exiting ...\n");
}

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Hello world module.");
MODULE_VERSION("1.0");
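Only the tail of the hello.c listing survived the page layout above. Based on the surrounding description (an init_module() that logs at KERN_INFO and ends with return 0, and the "Hello, World !!!" message that dmesg shows later), the missing first half would look roughly like this sketch; it requires the kernel headers installed above to build:

```c
#include <linux/kernel.h>
#include <linux/module.h>

/* Called when the module is loaded; logs to the kernel ring buffer. */
int init_module(void)
{
	printk(KERN_INFO "Hello, World !!!\n");
	return 0;  /* 0 indicates successful initialisation */
}
```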
Any module must have at least two functions: the first is the initialisation function and the second is the clean-up function. In our case, init_module() is the initialisation function and cleanup_module() is the clean-up function. The initialisation function is called as soon as the module is loaded, and the clean-up function is called just before the module is unloaded. MODULE_LICENSE and the other macros are self-explanatory.

There is a printk() function, the syntax of which is similar to the user-space printf() function. But unlike printf(), it doesn't print messages on a standard output stream; instead, it appends messages to the kernel's ring buffer. Each printk() statement comes with a priority. In our example, we used the KERN_INFO priority. Please note that there is no comma (,) between 'KERN_INFO' and the format string. In the absence of an explicit priority, the DEFAULT_MESSAGE_LOGLEVEL priority will be used. The last statement in init_module() is return 0, which indicates success.

The names of the initialisation and clean-up functions are init_module() and cleanup_module(), respectively. But with newer kernels (>= 2.3.13), we can use any name for the initialisation and clean-up functions; the old names are still supported for backward compatibility. The kernel provides the module_init and module_exit macros, which register the initialisation and clean-up functions. Let us rewrite the same module with names of our own choice for the initialisation and clean-up functions:

module_init(hello_init);
module_exit(hello_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Hello world module.");
MODULE_VERSION("1.0");

Here, the __init and __exit keywords imply the initialisation and clean-up functions, respectively.

Compiling and loading the module
Now, let us understand the module compilation procedure. To compile a kernel module, we are going to use the kernel's build system. Open your favourite text editor, write down the following compilation steps, and save the file as Makefile. Please note that the module source hello.c and the Makefile must exist in the same directory.

obj-m += hello.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean

To build modules, kernel headers are required. The above Makefile invokes the kernel's build system from the kernel's source, and finally the kernel's makefile invokes our Makefile to compile the module. Now that we have everything needed to build our module, just execute the make command, and this will compile and create the kernel module named hello.ko:

[mickey]$ ls
hello.c  Makefile

[mickey]$ make

[mickey]$ ls
hello.c  hello.ko  hello.ko.unsigned  hello.mod.c  hello.mod.o  hello.o  Makefile  modules.order  Module.symvers

We have now successfully compiled our first kernel
module. Now, let us look at how to load and unload this module in the kernel. Please note that you must have superuser privileges to load/unload kernel modules. To load a module, switch to the superuser mode and execute the insmod command, as shown below:

[root]# insmod hello.ko

insmod has done its job successfully. But where is the output? It is appended to the kernel's ring buffer. So let's verify it by executing the dmesg command:

[root]# dmesg
Hello, World !!!

We can also check whether our module is loaded or not. For this purpose, let's use the lsmod command:

[root]# lsmod | grep hello
hello    859    0

To unload the module from the kernel, just execute the rmmod command as shown below, and check the output of the dmesg command. Now, dmesg shows the message from the clean-up function:

[root]# rmmod hello

[root]# dmesg
Hello, World !!!
Exiting ...

Let us now write a module that prints the process ID of the current process, which the kernel makes available in the current->pid variable. Given below is the complete working code (pid.c):

#include <linux/kernel.h>
#include <linux/module.h>
#include <linux/sched.h>

static int __init pid_init(void)
{
	printk(KERN_INFO "pid = %d\n", current->pid);
	return 0;
}

static void __exit pid_exit(void)
{
	/* Don't do anything */
}

module_init(pid_init);
module_exit(pid_exit);

MODULE_LICENSE("GPL");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Kernel module to find PID.");
MODULE_VERSION("1.0");

The Makefile is almost the same as the first Makefile, with a minor change in the object file's name:

obj-m += pid.o
We can also divide a module into multiple files. Let us understand the procedure of building a module that spans two files. Let's divide the initialisation and clean-up functions from the hello.c file into two separate files, namely, startup.c and cleanup.c. Given below is the source code for the clean-up part, cleanup.c:

static void __exit hello_exit(void)
{
	printk(KERN_INFO "Function %s from %s file\n", __func__, __FILE__);
}

module_exit(hello_exit);

MODULE_LICENSE("BSD");
MODULE_AUTHOR("Narendra Kangralkar.");
MODULE_DESCRIPTION("Cleanup module.");
MODULE_VERSION("1.1");

Now, here is the interesting part -- the Makefile for these modules. The Makefile is self-explanatory. Here, we are saying: "Build the final kernel object by using startup.o and cleanup.o." Let us compile and test the module. Here, the modinfo command shows the version, description, licence and author-related information from each module.

Let us load and unload the final.ko module and verify the output:

[mickey]$ su -
Password:

[root]# insmod final.ko

[root]# dmesg
Function: hello_init from /home/mickey/startup.c file

[root]# rmmod final
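The two-file Makefile itself did not survive the page layout. Based on the description above ("build the final kernel object by using startup.o and cleanup.o"), the usual kbuild form would be roughly the following sketch; the module name final is taken from the insmod final.ko step, and the <name>-objs variable is kbuild's way of listing the objects that make up a composite module:

```make
# Build final.ko by linking two objects; kbuild's <name>-objs lists its parts.
obj-m += final.o
final-objs := startup.o cleanup.o

all:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

clean:
	make -C /lib/modules/$(shell uname -r)/build M=$(PWD) clean
```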
jQuery, the cross-platform JavaScript library designed to simplify the client-side scripting of HTML, is used by over 80 per cent of the 10,000 most-visited websites. jQuery is free, open source software with a wide range of uses. In this article, the author suggests some best practices for writing jQuery code.
This article aims to explain how to use jQuery in a rapid and more sophisticated manner. Websites focus not only on backend functions like user registration, adding new friends or validation, but also on how their Web pages get displayed to the user, how their pages behave in different situations, etc. For example, doing a mouse-over on the front page of a site will show beautiful animations, properly formatted error messages or interactive hints to the user on what can be done on the site.

jQuery is a very handy, interactive, powerful and rich client-side framework built on JavaScript. It is able to handle powerful operations like HTML manipulation, event handling and beautiful animations. Its most attractive feature is that it works across browsers. When using plain JavaScript, one of the things we need to ensure is whether the code we write tends towards perfection. It should handle any exception; if the user enters an invalid type of value, the script should not just hang or behave badly. However, in my career, I have seen many junior developers using plain JavaScript solutions instead of rich frameworks like jQuery, and writing numerous lines of code to do some fairly minor task.

For example, if one wants to write code to show a datepicker on an onclick event in plain JavaScript, the flow is:
1. For the onclick event, create one div element.
2. Inside that div, add content for the dates, month and year.
3. Add navigation for changing the months and year.
4. Make sure that on the first click the div can be seen, and on the second click the div is hidden; and this should not affect any other HTML elements.

Just creating a datepicker is a slightly difficult task, and if it needs to be implemented many times in the same page, it becomes more complex. If the code is not properly implemented, then making modifications can be a nightmare. This is where jQuery comes to our rescue. By using it, we can show the datepicker as follows:

$("#id").datepicker();

That's it! We can reuse the same code multiple times by
just changing the id(s); and without any kind of collision, we can show multiple datepickers in the same page. That is the beauty of jQuery. In short, by using it, we can focus more on the functionality of the system and not just on small parts of it. And we can write more complex code, like a rich text editor and lots of other operations. But if we write jQuery code without proper guidance and proper methodology, we end up writing bad code, and sometimes that can become a nightmare for other team members to understand and modify for minor changes.

Developers often make silly mistakes during jQuery code implementation. So, based on some silly mistakes that I have encountered, here are some general guidelines that every developer should keep in mind while implementing jQuery code.

General guidelines for jQuery
1. Try to use 'this' instead of just using the id and class of the DOM elements. I have seen that most developers are happy with just using $('#id') or $('.class') everywhere:

//What developers are doing:
$('#id').click(function(){
	var oldValue = $('#id').val();
	var newValue = (oldValue * 10) / 2;
	$('#id').val(newValue);
});

//What should be done: try to use more $(this) in your code.
$('#id').click(function(){
	$(this).val(($(this).val() * 10) / 2);
});

2. Avoid conflicts: When working with a CMS like WordPress or Magento, which might be using other JavaScript frameworks apart from jQuery, you need to work with jQuery inside that CMS or project. In that case, use jQuery's noConflict():

var $abc = jQuery.noConflict();
$abc('#id').click(function(){
	//do something
});

3. Take care of absent elements: Make sure that the element your jQuery code is working on or manipulating is not absent. If the element your code manipulates is added dynamically, then first check whether that element has actually been added to the DOM.

4. Use proper selectors, and try to use find() more, because find() can traverse the DOM faster. For example, if we want to find the content of the inner div below:

//demo code snippet
<div id='id1'>
	<span id='id2'></span>
	<div class='divClass'>Here is the content.</div>
</div>

//what developers generally use
var content = $('#id1 .divClass').html();

//the better way [this is faster in execution]
var content = $('#id1').find('div.divClass').html();

5. Write functions wherever required: Generally, developers write the same code multiple times. To avoid this, we can write functions. To write functions, let's find the block that will repeat. For example, if there is validation of an entry for a text box, and the same gets repeated for many similar text boxes, then we can write a function for it. Given below is a simple example of a text box entry: if the value is left empty, the function returns 0; else, if the user has entered some value, it returns the same value.

//JavaScript
function doValidation(elementId){
	//get value using elementId
	//check and return value
}

//simple jQuery
$("input[type='text']").blur(function(){
	//get value using $(this)
	//check and return value
});

//best way to implement
//now you can use this function easily with the click event also
$.doValidation = function(){
	//get value
	//check and return value
};

$("input[type='text']").blur($.doValidation);
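The validation logic in point 5 can be sketched without jQuery, so it runs (and can be tested) anywhere. The behaviour — return 0 for an empty entry, otherwise return the value — is taken from the description above; the function name simply mirrors the snippet:

```javascript
// Return 0 when the entry is empty (or only whitespace); otherwise return the value.
function doValidation(value) {
    if (value === undefined || value === null || String(value).trim() === "") {
        return 0;
    }
    return value;
}

// In jQuery this would typically be wired to a blur handler, e.g.:
// $("input[type='text']").blur(function () { doValidation($(this).val()); });
console.log(doValidation(""));    // 0
console.log(doValidation("42"));  // the value passes through unchanged
```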
task1(function(){
	task1();
});

function task1(callback){
	//do something
}

By: Savan Koradia
The author works as a senior PHP Web developer at Multidots Solutions Pvt Ltd. He writes tutorials to help other developers write better code. You can contact him at: [email protected]; Skype: savan.koradia.multidots
In the previous article in this series, we set up a sharded environment in MongoDB. This article deals with one of the most intriguing and crucial topics in database administration—backups. It will demonstrate the MongoDB backup process by making a backup of the sharded server that was configured earlier. So, to proceed, you must set up your sharded environment as per the previous article, as we'll be using the same configuration.

Before we move on with the backup, make sure that the balancer is not running. The balancer is the process that ensures that data is distributed evenly in a sharded cluster. This is an automated process in MongoDB, and most of the time you won't be bothered with it. In this case, though, it needs to be stopped so that no chunk migration takes place while we back up the server. If you're wondering what the term 'chunk migration' means, let me tell you that if one shard in a sharded MongoDB environment has more data stored than its peers, the balancer process migrates some data to other shards. Evenly distributed data ensures optimal performance in a sharded environment.

So now connect to a Mongo process by opening a command prompt, going to the MongoDB root directory and typing 'mongo'. Type sh.getBalancerState() to find out the balancer's status. If you get true as the output, your balancer is running. Type sh.stopBalancer() to stop the balancer.

The next step is to back up the config server, which stores metadata about shards. In the previous article, we set up three config servers for our shard. Since all the config servers store the same metadata, and since we have three of them just to ensure availability, we'll be backing up just one config server for demonstration purposes. So open a command prompt and type the following command to back up the config database of our config server:

C:\Users\viny\Desktop\mongodb-win32-i386-2.6.0\bin>mongodump --host localhost:59020 --db config

This command will dump your config database under the dump directory of your MongoDB root directory. Now let's back up our actual data by taking backups of all of our shards. Issue the following commands, one by one, and take a backup of all three replica sets of both the shards that we configured earlier:

mongodump --host localhost:38020 --out .\shard1\replica1
mongodump --host localhost:38021 --out .\shard1\replica2
mongodump --host localhost:38022 --out .\shard1\replica3
mongodump --host localhost:48020 --out .\shard2\replica1
mongodump --host localhost:48021 --out .\shard2\replica2
mongodump --host localhost:48022 --out .\shard2\replica3

The --out parameter defines the directory where MongoDB will place the dumps. Now you can start the balancer by issuing the sh.startBalancer() command and resume normal operations. So we're done with our backup operation.

If you want to explore a bit more about backups and restores in MongoDB, you can check the MongoDB documentation and the article at http://www.thegeekstuff.com/2013/09/mongodump-mongorestore/, which will give you some good insights into the mongodump and mongorestore commands.
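The six per-replica commands above follow a regular host:port pattern, so they can be generated rather than typed by hand. A small sketch (the ports and output paths are the ones from this example setup):

```python
# Generate the mongodump command for every replica of every shard,
# following the port layout used in this example (38020-22 and 48020-22).
shards = {"shard1": [38020, 38021, 38022],
          "shard2": [48020, 48021, 48022]}

commands = []
for shard, ports in shards.items():
    for i, port in enumerate(ports, start=1):
        commands.append(
            f"mongodump --host localhost:{port} --out .\\{shard}\\replica{i}")

for cmd in commands:
    print(cmd)
```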
For the past few months, we have been discussing information retrieval and natural language processing (NLP), as well as the algorithms associated with them. In this month's column, let's continue our discussion on NLP while also covering an important NLP application called 'Named Entity Recognition' (NER). As mentioned earlier, given a large number of text documents, NLP techniques are employed to extract information from the documents. One of the most common sources of textual information is newspaper articles. Let us consider a simple example wherein we are given all the newspaper articles that appeared in the last one year. The task that is assigned to us is related to the world of business. We are asked to find out all the mergers and acquisitions of businesses. We need to extract information on which companies bought over other firms, as well as the companies that merged with each other. Our first rudimentary step towards getting this information will perhaps be to look for keyword-based searches that use terms such as 'merger' or 'buys'. Once we find the sentences containing those keywords, we could then perhaps look for the names of the companies, if any occur in those sentences. Such a task requires us to identify all company names present in the document.

For a person reading the newspaper article, such a task seems simple and straightforward. Let us first try to list the ways in which a human being would try to identify the company names that could be present in a text document. We need to use heuristics such as: (a) company names typically begin with capital letters; (b) they can contain words such as 'Corporation' or 'Ltd'; (c) they can be represented by letters of the alphabet separated by full stops, such as I.B.M. We could also use contextual clues such as 'X's stock price went up' to infer that X is a business or company. Now, the question we are left with is whether it is possible to convert what constitutes our intuitive knowledge about how to look for a company's name in a text document into rules that can be automatically checked by a program. This is the task that is faced by NLP applications which try to do Named Entity Recognition (NER). The point to note is that while the simple heuristics we use to identify the names of companies do work well in many cases, it is also quite possible that they miss extracting the names of companies in certain other cases. For instance, consider the possibility of the company's name being represented as IBM instead of I.B.M., or as International Business Machines. The rule-based system could potentially miss recognising it. Similarly, consider a sentence like, "Indian Oil and Natural Gas Company decided that…" In this case, it is difficult to figure out whether there are two independent entities, namely, 'Indian Oil' and 'Natural Gas Company', being referred to in the sentence, or if it is a single entity whose name is 'Indian Oil and Natural Gas Company'. It requires considerable knowledge about the business world to resolve the ambiguity. We could perhaps consult the World Wide Web or Wikipedia to clear our doubts. The use of such sources of knowledge is quite common in Named Entity Recognition (NER) systems. Now let us look a bit deeper into NER systems and their uses.

Types of entities
What are the types of entities that are of interest to a NER system? Named entities are, by definition, proper nouns, i.e., nouns that refer to a particular person, place, organisation, thing, date or time, such as Sandya, Star Wars, Pride and Prejudice, Cubbon Park, March, Friday, Wipro Ltd, Boy Scouts, and the Statue of Liberty. Note that a named entity can span more than one word, as in the case of 'Cubbon Park'. Each of these entities is assigned different tags, such
Installing OpenStack using Packstack is very simple. After a test installation in a virtual machine, you will find that the basic operations for creating and using virtual machines are quite simple when using a Web interface.

The environment
It is important to understand the virtual environment. While everything is running on a desktop, the setup consists of multiple logical networks interconnected via virtual routers and switches. You need to make sure that the routes are defined properly, because otherwise you will not be able to access the virtual machines you create.

On the desktop, virt-manager creates a NAT-based network by default. NAT assures that if your desktop can access the Internet, so can the virtual machine. The Internet access had been used when the OpenStack distribution was installed in the virtual machine.

The Packstack installation process creates a virtual public network for use by the various networks created within the cloud environment. The virtual machine on which OpenStack is installed is the gateway to the physical network.

Virtual network on the desktop (virbr0 interface): 192.168.122.0/24
IP address of the eth0 interface on the OpenStack VM: 192.168.122.54
Public virtual network created by Packstack on the OpenStack VM: 172.24.4.224/28
IP address of the br-ex interface on the OpenStack VM: 172.24.4.225

Testing the environment
In the OpenStack VM console, verify the network addresses. In my case, I had to explicitly give an IP to the br-ex interface, as follows:

# ifconfig
# ip addr add 172.24.4.225/28 dev br-ex

On the desktop, add a route to the public virtual network on the OpenStack VM:

# route add -net 172.24.4.224 netmask 255.255.255.240 gw 192.168.122.54

Now, browse http://192.168.122.54/dashboard and create a new project and a user associated with the project.
1. Sign in as the admin.
2. Under the Identity panel, create a user (youser) and a project (Bigdata). Sign out and sign in as youser to create and test a cloud VM.
3. Create a private network for the project under Project/Network/Networks:
• Create the private network 192.168.10.0/24 with the gateway 192.168.10.254.
• Create a router and set a gateway to the public network. Add an interface to the private network with the IP address 192.168.10.254.
4. To be able to sign in using ssh, under Project/Compute/Access & Security, in the Security Groups tab, add the following rules to the default security group:
• Allow ssh access: a Custom TCP Rule allowing traffic on Port 22.
• Allow icmp access: a Custom ICMP Rule with Type and Code value -1.
5. For password-less sign-in to the VM, under Project/Compute/Access & Security, in the Key Pairs tab, do the following:
• Select the Import Key Pair option and give it a name, e.g., 'desktop user login'.
• In your desktop terminal window, use ssh-keygen to create a public/private key pair in case you don't already have one.
• Copy the contents of ~/.ssh/id_rsa.pub from your desktop account and paste them into the public key field.
6. Allocate a public IP for accessing the VM under Project/Compute/Access & Security, in the Floating IPs tab, and allocate the IP to the project. You may get a value like 172.24.4.229.
7. Now launch the instance under Project/Compute/Instance:

ssh [email protected]

You should be signed into the virtual machine without needing a password.

Figure 1: Simplified network diagram
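The /28 public network and the netmask in the route command above describe the same thing. Python's ipaddress module is a quick way to double-check such values (the addresses are the ones used in this setup):

```python
# Verify that 172.24.4.224/28 matches netmask 255.255.255.240 and
# contains both the br-ex gateway and the allocated floating IP.
import ipaddress

net = ipaddress.ip_network("172.24.4.224/28")
print(net.netmask)                                   # the value passed to `route add`
print(ipaddress.ip_address("172.24.4.225") in net)   # br-ex gateway
print(ipaddress.ip_address("172.24.4.229") in net)   # floating IP from step 6
```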
You can experiment with importing the Fedora VM image you used for the OpenStack VM and launching it in the cloud. Whether you succeed or not will depend on the resources available in the OpenStack VM.

Installing only the needed OpenStack services
You will have observed that OpenStack comes with a very wide range of services, some of which are not likely to be very useful for your experiments on the desktop, e.g., the additional networks and router created in the tests above. Here is a part of the dialogue for installing only the required services on the desktop:

[root@amd ~]# packstack
Welcome to Installer setup utility
Enter the path to your ssh Public key to install on servers:
Packstack changed given value to required value /root/.ssh/id_rsa.pub
Should Packstack install MySQL DB [y|n] [y] : y
Should Packstack install OpenStack Image Service (Glance) [y|n] [y] : y
Should Packstack install OpenStack Block Storage (Cinder) service [y|n] [y] : n
Should Packstack install OpenStack Compute (Nova) service [y|n] [y] : y
Should Packstack install OpenStack Networking (Neutron) service [y|n] [y] : n
Should Packstack install OpenStack Dashboard (Horizon) [y|n] [y] : y
Should Packstack install OpenStack Object Storage (Swift) [y|n] [y] : n
Should Packstack install OpenStack Metering (Ceilometer) [y|n] [y] : n
Should Packstack install OpenStack Orchestration (Heat) [y|n] [n] : n
Should Packstack install OpenStack client tools [y|n] [y] : y

The answers to the other questions will depend on the network interface and the IP address of your desktop, but there is no ambiguity here. You should answer with the interface 'lo' for CONFIG_NOVA_COMPUTE_PRIVIF and CONFIG_NOVA_NETWORK_PRIVIF. You don't need an extra physical interface, as the compute services are running on the same server.

Now, you are ready to test your OpenStack installation on the desktop. You may want to create a project and add a user to the project. Under Project/Compute/Access & Security, you will need to add firewall rules and key pairs, as above. However, you will not need to create any additional private network or a router.

Import a basic cloud image, e.g., from http://fedoraproject.org/get-fedora#clouds, under Project/Compute/Images. You may want to create an additional flavour for a virtual machine. The m1.tiny flavour has 512 MB of RAM and 4 GB of disk, and is too small for running Hadoop. The m1.small flavour has 2 GB of RAM and 20 GB of disk, which will restrict the number of virtual machines you can run for testing Hadoop. Hence, you may create a mini flavour with 1 GB of RAM and 10 GB of disk. This will need to be done as the admin user.

Now, you can create an instance of the basic cloud image. The default user is fedora, and your setup is ready for exploration of Hadoop data.

By: Dr Anil Seth
The author has earned the right to do what interests him. You can find him online at http://sethanil.com, http://sethanil.blogspot.com, and reach him via email at [email protected]
MariaDB
The MySQL Fork
that Google has Adopted
MariaDB is a community-developed fork of MySQL which has overtaken MySQL. That many leading corporations in the cyber environment, including Google, have migrated to MariaDB speaks for its importance as a player in the database firmament.
MariaDB is a high performance, open source database that helps the world's busiest websites deliver more content, faster. It has been created by the developers of MySQL with the help of the FOSS community, and is a fork of MySQL. It offers various features and enhancements like alternate storage engines, server optimisations and patches.

The lead developer of MariaDB is Michael 'Monty' Widenius, who is also the founder of MySQL and Monty Program AB. No single person or company nurtures MariaDB/MySQL development; the guardian of the MariaDB community, the MariaDB Foundation, drives it. It states that it has the trademark of the MariaDB server and owns mariadb.org, which ensures that the official MariaDB development tree is always open to the developer community. The MariaDB Foundation assures the community that all the patches, as well as MySQL source code, are merged into MariaDB. The Foundation also provides a lot of documentation. MariaDB is a registered trademark of SkySQL Corporation and is used by the MariaDB Foundation with permission. It is a good choice for database professionals looking for the best and most robust SQL server.

History
In 2008, Sun Microsystems bought MySQL for US$ 1 billion. But the original developer, Monty Widenius, was quite disappointed with the way things were run at Sun, and founded his own new company and his own fork of MySQL - MariaDB. It is named after Monty's younger daughter, Maria. Later, when Oracle announced the acquisition of Sun, most of the MySQL developers jumped to its forks: MariaDB and Drizzle.

MariaDB version numbers follow MySQL numbers till 5.5; thus, all the features in MySQL 5.5 are available in MariaDB. After MariaDB 5.5, its developers started a new branch numbered MariaDB 10.0, which is the development version of MariaDB. This was done to make it clear that MariaDB 10.0 will not import all the features from MySQL 5.6; at times, some of those features do not seem solid enough for MariaDB's standards. Since new MariaDB-specific features have been developed, the team decided to go for a major version number. The currently used version, MariaDB 10.0, is built on the MariaDB 5.5 series, and has back-ported features from MySQL 5.6 along with entirely new developments.
Why MariaDB is better than MySQL
When comparing MariaDB and MySQL, we are comparing different development cultures, features and performance. The patches developed by MariaDB focus on bug fixing and performance. By supporting the features of MySQL, MariaDB implements more improvements and delivers better performance without restrictions on compatibility with MySQL. It also provides more storage engines than MySQL. What makes MariaDB different from MySQL is better testing, fewer bugs and fewer warnings. The goal of MariaDB is to be a drop-in replacement for MySQL, with better developments.
Navicat is a strong and powerful MariaDB administration and development tool. It is graphic database management and development software produced by PremiumSoft CyberTech Ltd. It provides a native environment for MariaDB database management and supports extra features like new storage engines, microseconds and virtual columns.
It is easy to convert from MySQL to MariaDB, as we need not convert any data, and all our old connectors to other languages work unchanged. As of now, MariaDB is capable of handling data in terabytes, but more needs to be done for it to handle data in petabytes.

Features
Here is a list of features that MariaDB provides:
- Since it has been released under GPL version 2, it is free.
- It is completely open source.
- Open contributions and suggestions are encouraged.
- MariaDB is one of the fastest databases available.
- Its syntax is pretty simple, flexible and easy to manage.
- Data can be easily imported from, or exported to, CSV and XML.
- It is useful for both small and large databases, containing billions of records and terabytes of data in hundreds of thousands of tables.
- MariaDB includes pre-installed storage engines like Aria, XtraDB, PBXT, FederatedX and SphinxSE.
- The use of the Aria storage engine makes complex queries faster. Aria is usually faster since it caches row data in memory and normally doesn't have to write the temporary rows to disk.
- Some storage engines and plugins are pre-installed in MariaDB.
- It has a very strong community.

Installing MariaDB
Now let's look at how MariaDB is installed.
Step 1: First, make sure that the required packages are installed along with the apt-get key for the MariaDB repository. Then add the apt-get repository as per your Ubuntu version.
For Ubuntu 13.10:

$ sudo add-apt-repository 'deb http://ftp.kaist.ac.kr/mariadb/repo/5.5/ubuntu saucy main'

For Ubuntu 13.04:

$ sudo add-apt-repository 'deb http://ftp.kaist.ac.kr/mariadb/repo/5.5/ubuntu raring main'

For Ubuntu 12.10:

$ sudo add-apt-repository 'deb http://ftp.kaist.ac.kr/mariadb/repo/5.5/ubuntu quantal main'

For Ubuntu 12.04 LTS:

$ sudo add-apt-repository 'deb http://ftp.kaist.ac.kr/mariadb/repo/5.5/ubuntu precise main'

Step 2: Install MariaDB using the following commands:

$ sudo apt-get update
$ sudo apt-get install mariadb-server

Provide the root account password as shown in Figure 1.

Figure 1: Configuring MariaDB

Step 3: Log in to MariaDB using the following command, after installation:

mysql -u root -p
Switch to the new database using the following command (this is to make sure that you are currently working on this database):

USE students;

Now that the database has been created, create a table:

CREATE TABLE details(student_id int(5) NOT NULL AUTO_INCREMENT,
name varchar(20) DEFAULT NULL,
age int(3) DEFAULT NULL,
marks int(5) DEFAULT NULL,
PRIMARY KEY(student_id)
);

To see what we have done, use the following command:

show columns in details;

Insert a row into the table as follows:

INSERT INTO details(name,age,marks) VALUES("Bob",15,400);

The output will be as shown in Figure 4. We need not add a value for student_id; it is automatically incremented. String values such as the name are given in quotes.

Deleting a table
To delete a table, type the following command:

DROP TABLE table_name;

Once the table is deleted, the data inside it cannot be recovered.
We can view the current tables using the show tables command, which lists all the tables inside the database:
The '=' sign in Haskell signifies a definition and not a variable assignment as seen in imperative programming languages. We can thus omit the 'x' on either side and the code becomes even more concise:

sumTwoInt :: Int -> Int
sumTwoInt = sumInt 2

By loading Sum.hs again in the GHCi prompt, we get the following:

*Main> :t head
head :: [a] -> a

The tail function returns everything except the first element. The init function returns everything except the last element of a list:

*Main> init a
[1,2,3,4]

*Main> :t init
init :: [a] -> [a]

The length function returns the length of a list. The zip function takes two lists and creates a new list of tuples with the respective pairs from each list. For example:

*Main> let b = ["one", "two", "three", "four", "five"]
*Main> zip a b
[(1,"one"),(2,"two"),(3,"three"),(4,"four"),(5,"five")]

*Main> :t zip
zip :: [a] -> [b] -> [(a, b)]

The let expression defines the value of 'b' in the GHCi prompt. You can also define it in a way that's similar to the definition of the list 'a' in the source file.
The lines function takes input text and splits it at new lines:

*Main> let sentence = "First\nSecond\nThird\nFourth\nFifth"
*Main> lines sentence
["First","Second","Third","Fourth","Fifth"]

The first argument to map is a function that is enclosed within parentheses in the type signature (a -> b). This function takes an input of type 'a' and returns an element of type 'b'. Thus, when operating over a list [a], it returns a list of type [b].
Recursion provides a means of looping in functional programming languages. The factorial of a number, for example, can be computed in Haskell, using the following code:

factorial :: Int -> Int
factorial 0 = 1
factorial n = n * factorial (n-1)

The definition of factorial with different input use cases is called pattern matching on the function. On running the above example with GHCi, you get the following output:

*Main> factorial 0
1
*Main> factorial 1
1
*Main> factorial 2
2
*Main> factorial 3
6
*Main> factorial 4
24
*Main> factorial 5
120

Functions operating on lists can also be called recursively. To compute the sum of a list of integers, you can write the sumList function as:

sumList :: [Int] -> Int
sumList [] = 0
sumList (x:xs) = x + sumList xs

Haskell also supports anonymous functions. These are called Lambda functions, and the '\' represents the notation for the symbol Lambda. An example is given below:

*Main> map (\x -> x * x) [1, 2, 3, 4, 5]
[1,4,9,16,25]

It is a good practice to write the type signature of the function first when composing programs, and then write the body of the function. Haskell is a functional programming language and understanding the use of functions is very important.

By: Shakthi Kannan
The author is a free software enthusiast and blogs at shakthimaan.com.
This article is for Qt developers. It is assumed that the intended audience is aware of the famous Signals and Slots mechanism of Qt. Creating an HTML page is very quick compared to any other way of designing a GUI. An HTML page is nothing but a fancy page that doesn't have any logic in its build. With the amalgamation of JavaScript, however, the HTML page builds in some intelligence. As everything cannot be collated in JavaScript, we need a back-end for it. Qt provides a way to mingle HTML+JavaScript with C++. Thus, you can call C++ methods from JavaScript and vice versa. This is possible by using the Qt-WebKit framework.
The applications developed in Qt are not just limited to desktop platforms. They are even ported over several mobile platforms. Thus, you can design your apps to fit into the Windows, iOS and Android worlds, seamlessly.

What is Qt-WebKit?
In simple words, Qt-WebKit is the Web-browsing module of Qt. It can be used to display live content from the Internet as well as local HTML files.

Programming paradigm
In Qt-WebKit, the base class is known as QWebView. A QWebView displays a QWebPage, and each QWebPage has a main QWebFrame. The QWebFrame is what we use while adding the desired class object to the JavaScript window object. In short, this class object will be visible to JavaScript once it is added to the JavaScript window object. However, JavaScript can invoke only the public Q_INVOKABLE methods. The Q_INVOKABLE restriction was introduced to make the applications being developed using Qt even more secure.

Q_INVOKABLE
This is a macro that is similar to a Slot, except that it has a return type. Thus, we will prefix Q_INVOKABLE to the methods that can be called by JavaScript. The advantage here is that we can have a return type with Q_INVOKABLE, as compared to a Slot.

Developing a sample HTML page with JavaScript intelligence
Here is a sample form in HTML-JavaScript that will allow us to multiply any two given numbers. However, the logic of multiplication should reside in the C++ method only.

<html>
<head>
<script>
function Multiply()
{
/** MultOfNumbers a C++ Invokable method **/
var result = myoperations.MultOfNumbers(document.forms["DEMO_FORM"]["Multiplicant_A"].value, document.forms["DEMO_FORM"]["Multiplicant_B"].value);
document.getElementById("answer").value = result;
}
</script>
</head>
<body>
<form name="DEMO_FORM">
Multiplicant A: <input type="number" name="Multiplicant_A"><br>
Multiplicant B: <input type="number" name="Multiplicant_B"><br>
<html>
<head>
<script>
function alert_click()
{
	alert("you clicked");
}

function JavaScript_function()
{
	alert("Hello");
}

myoperations.alert_script_signal.connect(JavaScript_function);
</script>
</head>
<body>
<form name="myform">
<input type="button" value="Hit me" onclick="alert_click()">
</form>
</body>
</html>

Figure 2: QT DEMO callback output

Here is the main file:

#include <QtGui/QApplication>
#include <QApplication>
#include <QDebug>
#include <QWebFrame>
#include <QWebPage>
#include <QWebView>

class MyJavaScriptOperations : public QObject {
	Q_OBJECT
public:
	QWebView *view;
	MyJavaScriptOperations();
signals:
	void alert_script_signal();
public slots:
	void JS_ADDED();
	void loadFinished(bool);
};

void MyJavaScriptOperations::JS_ADDED()
{
	qDebug()<<__PRETTY_FUNCTION__;
	view->page()->mainFrame()->addToJavaScriptWindowObject("myoperations", this);
}

void MyJavaScriptOperations::loadFinished(bool ok)
{
	qDebug()<<__PRETTY_FUNCTION__<<ok;
}

MyJavaScriptOperations::MyJavaScriptOperations()
{
	qDebug()<<__PRETTY_FUNCTION__;
	view = new QWebView();
	view->resize(400, 500);
	connect(view->page()->mainFrame(), SIGNAL(javaScriptWindowObjectCleared()), this, SLOT(JS_ADDED()));
	connect(view, SIGNAL(loadFinished(bool)), this, SLOT(loadFinished(bool)));
	view->load(QUrl("./index.html"));
	view->show();
}

int main(int argc, char *argv[])
{
	QApplication a(argc, argv);
	MyJavaScriptOperations *jvs = new MyJavaScriptOperations;
	return a.exec();
}

#include "main.moc"

The output is shown in Figure 2.
Qt is a rich framework for C++ developers. It not only provides these amazing features, but also has some interesting attributes like in-built SQLite, D-Bus and various containers. It's easy to develop an entire GUI application with it. You can even port an existing HTML page to Qt. This makes Qt a wonderful choice for developing a cross-platform application quickly. It is now getting popular in the mobile world too.
The Yocto Project helps developers and companies get their project off the ground. It is an open source collaboration project that provides templates, tools and methods to create custom Linux-based systems for embedded products, regardless of the hardware architecture.
While building Linux-based embedded products, it is important to have full control over the software running on the embedded device. This doesn't happen when you are using a normal Linux OS for your device. The software should have full access as per the hardware requirements. That's where the Yocto Project comes in handy. It helps you create custom Linux-based systems for any hardware architecture and makes the device easier to use and faster than expected.
The Yocto Project was founded in 2010 as a solution for embedded Linux development by many open source vendors, hardware manufacturers and electronics companies. The project aims at helping developers build their own Linux distributions, specific to their own environments. It provides developers with interoperable tools, methods and processes that help in the development of Linux-based embedded systems. The central goal of the project is to enable the user to reuse and customise tools and working code. It encourages interaction with embedded projects and has been a steady contributor to the OpenEmbedded core, BitBake, the Linux kernel development process and several other projects. It not only deals with building Linux-based embedded systems, but also the tool chain for cross compilation and software development kits (SDKs), so that users can choose the package manager format they intend to use.

The goals of the Yocto Project
Although the main aim is to help developers of customised Linux systems supporting various hardware architectures, it also has a key role in several other fields where it supports and encourages the Linux community. Its goals are:
- To develop custom Linux-based embedded systems regardless of the architecture.
- To provide interoperability between tools and working code, which will reduce the money and time spent on the project.
- To develop licence-aware build systems that make it possible to include or remove software components based on specific licence groups and the corresponding restriction levels.
- To provide a place for open source projects that help in the development of Linux-based embedded systems and customisable Linux platforms.
- To focus on creating single build systems that address the needs of all users, and that other software components can later be tethered to.
- To ensure that the tools developed are architecturally independent.
- To provide a better graphical user interface to the build system, which eases access.
- To provide resources and information, catering to both new and experienced users.
- To provide core system component recipes provided by the OpenEmbedded project.
- To further educate the community about the benefits of this standardisation and collaboration in the Linux community and in the industry.

The Yocto Project community
The community shares many common traits with a typical open source organisation. Anyone who is interested can contribute to the development of the project. The Yocto Project is developed and governed as a collaborative effort by an open community of professionals, volunteers and contributors. The project's governance is mainly divided into two wings
One of the aspects of hacking a Linux kernel is to port it. While this might sound difficult, it won't be once you read this article. The author explains porting techniques in a simplified manner.

With the evolution of embedded systems, porting has become extremely important. Whenever you have new hardware at hand, the first and the most critical thing to be done is porting. For hobbyists, what has made this even more interesting is the open source nature of the Linux kernel. So, let's dive into porting and understand the nitty-gritty of it.
Porting means making something work on an environment it is not designed for. Embedded Linux porting means making Linux work on an embedded platform, for which it was not designed. Porting is a broader term and when I say 'embedded Linux porting', it not only involves Linux kernel porting, but also porting a first stage bootloader, a second stage bootloader and, last but not the least, the applications. Porting differs from development. Usually, porting doesn't involve as much coding as development. This means that there is already some code available and it only needs to be fine-tuned to the desired target. There may be a need to change a few lines here and there, before it is up and running. But the key thing to know is what needs to be changed, and where.

What Linux kernel porting involves
Linux kernel porting involves two things at a higher level: architecture porting and board porting. Architecture, in Linux terminology, refers to the CPU. So, architecture porting means adapting the Linux kernel to the target CPU, which may be ARM, PowerPC, MIPS, and so on. In addition to this, SOC porting can also be considered as part of architecture porting. As far as the Linux kernel is concerned, most of the time you don't need to port it for the architecture, as this would already be supported in Linux. However, you still need to port Linux for the board, and this is where the major focus lies. Architecture porting entails porting of the initial start-up code, interrupt service routines, the dispatcher routine, the timer routine, memory management, and so on.
$ make config

$ make menuconfig

This will show the menu options for configuring the kernel, as seen in Figure 2. It requires the ncurses library to be installed on the system. This is the most popular interface used to configure the kernel.

Figure 2: Menu-driven kernel configuration

To run the window-based configuration, execute the following command:

$ make xconfig

This allows configuration using the mouse. It requires QT to be installed on the system.
For details on other options, execute the following command in the kernel top directory:

$ make help

Once the kernel is configured, the next step is to build the kernel with the make command. A few commonly used commands are given below:

$ make vmlinux - Builds the bare kernel
$ make modules - Builds the modules
$ make modules_prepare - Sets up the kernel for building modules external to the kernel.

If the above commands are executed as stated, the kernel will be configured and compiled for the host system, which is generally the x86 platform. But, for porting, the intention is to configure and build the kernel for the target platform, which in turn requires configuration of the makefile. Two things that need to be changed in the makefile are given below:

ARCH=<architecture>
CROSS_COMPILE=<toolchain prefix>

The first line defines the architecture the kernel needs to be built for, and the second line defines the cross compilation toolchain prefix. So, if the architecture is ARM and the toolchain is, say, from CodeSourcery, then it would be:

ARCH=arm
CROSS_COMPILE=arm-none-linux-gnueabi-

Optionally, make can be invoked as shown below:

$ make ARCH=arm menuconfig - For configuring the kernel
$ make ARCH=arm CROSS_COMPILE=arm-none-linux-gnueabi- - For compiling the kernel

The kernel image generated after the compilation is usually vmlinux, which is in ELF format. This image can't be used directly with embedded system bootloaders such as u-boot, so convert it into a format suitable for the second stage bootloader. Conversion is a two-step process and is done with the following commands:

arm-none-linux-gnueabi-objcopy -O binary vmlinux vmlinux.bin
mkimage -A arm -O linux -T kernel -C none -a 0x80008000 -e 0x80008000 -n linux-3.2.8 -d vmlinux.bin uImage

-A ==> set architecture
-O ==> set operating system
-T ==> set image type
-C ==> set compression type
-a ==> set load address (hex)
-e ==> set entry point (hex)
-n ==> set image name
-d ==> use image data from file

The first command converts the ELF into a raw binary. This binary is then passed to mkimage, a utility provided by u-boot to generate the u-boot specific kernel image. The generated kernel image is named uImage.

The Linux kernel build system
One of the beautiful things about the Linux kernel is that it is highly configurable and the same code base can be used for a variety of applications, ranging from high-end servers to tiny embedded devices. The infrastructure which plays an important role in achieving this in an efficient manner is the kernel build system, also known as kbuild. The kernel build system has two main components - makefile and Kconfig.
Makefile: Every sub-directory has its own makefile, which is used to compile the files in that directory and generate the object code out of that. The top level makefile percolates recursively into its sub-directories and invokes the corresponding makefile to build the modules and, finally, the Linux kernel image. The makefile builds only the files for which the configuration option is enabled through the configuration tool.
Kconfig: As with the makefile, every sub-directory has a Kconfig file. Kconfig is a configuration language, and the Kconfig files located inside each sub-directory are the programs. Kconfig contains the entries, which are read by configuration targets such as make menuconfig to show a menu-like structure.
So we have covered makefile and Kconfig, and at present they seem to be pretty much disconnected. For kbuild to work properly, there has to be some link between Kconfig and the makefile. And that link is nothing but the configuration symbols, which generally have the prefix CONFIG_. These symbols are generated by a configuration target such as menuconfig, based on entries defined in the Kconfig file. And based on what the user has selected in the menu, these symbols can have the values 'y', 'n' or 'm'.
Now, as most of us are aware, Linux supports hot plugging of drivers, which means we can dynamically add and remove drivers from the running kernel. The drivers which can be added/removed dynamically are known as modules. However, drivers that are part of the kernel image can't be removed dynamically. So, there are two ways to have a driver in the kernel. One is to build it as a part of the kernel, and the other is to build it separately as a module for hot-plugging. The value 'y' for CONFIG_ means the corresponding driver will be part of the kernel image; the value 'm' means it will be built as a module; and the value 'n' means it won't be built at all. Where are these values stored? There is a file called .config in the top level directory, which holds these values. So, the .config file is the output of a configuration target such as menuconfig.
Where are these symbols used? In the makefile, as shown below:

obj-$(CONFIG_MYDRIVER) += my_driver.o

So, if CONFIG_MYDRIVER is set to the value 'y', the driver my_driver.c will be built as part of the kernel image; if set to the value 'm', it will be built as a module with the extension .ko; and for the value 'n', it won't be compiled at all.
As you now know a little more about kbuild, let's consider adding a simple character driver to the kernel tree. The first step is to write the driver and place it at the correct location. I have a file named my_driver.c. Since it's a character driver, I will prefer adding it at the drivers/char/ sub-directory. So copy it to the location drivers/char in the kernel.
The next step is to add a configuration entry in the drivers/char/Kconfig file. Each entry can be of type bool, tristate, int, string or hex. bool means that the configuration symbol can have the values 'y' or 'n', while tristate means it can have the values 'y', 'm' or 'n'. And 'int', 'string' and 'hex' mean that the value can be an integer, string or hexadecimal, respectively. Given below is the segment of code added in drivers/char/Kconfig:

config MY_DRIVER
	tristate "Demo for My Driver"
	default m
	help
	  Adding this small driver to kernel for
	  demonstrating the kbuild

The first line defines the configuration symbol. The second specifies the type for the symbol and the text which will be shown as the menu entry. The third specifies the default value for this symbol, and the last two lines are for the help message. Another thing that you will generally find in a Kconfig file is 'depends on'. This is very useful when you want a particular feature to be selectable only if its dependency is selected. For example, if we are writing a driver for an i2c EEPROM, then the menu option for the driver should appear only if the i2c driver is selected. This can be achieved with the 'depends on' entry.
After saving the above changes in Kconfig, execute the following command:

$ make menuconfig

Now, navigate to Device Drivers->Character devices and you will see an entry for My Driver. By default, it is supposed to be built as a module. Once you are done with the configuration, exit the menu and save the configuration. This saves the configuration in the .config file. Now, open the .config file, and there will be an entry as shown below:

CONFIG_MY_DRIVER=m
After the kernel is compiled, the module my_driver.ko will be placed at drivers/char/. This module can be inserted into the kernel with the following command:

$ insmod my_driver.ko

Aren't these configuration symbols needed in the C code? Yes, or else how will the conditional compilation be taken care of? How are these symbols included in C code? During the kernel compilation, the Kconfig and .config files are read and used to generate the C header file named autoconf.h. This is placed at include/generated and contains the #defines for the configuration symbols. These symbols are used by the C code to conditionally compile the required code.
Now, let's suppose I have configured the kernel and that it works fine with this configuration. If I make some new changes in the kernel configuration, the earlier ones will be overwritten. In order to avoid this, we can save the .config file in the arch/arm/configs directory with a name like my_config, for instance. And next time, we can execute the following command to configure the kernel with the older options:

$ make my_config_defconfig

Linux Support Packages (LSP)/Board Support Packages (BSP)
One of the most important and probably the most challenging things in porting is the development of Board Support Packages (BSP). BSP development is a one-time effort during the product development lifecycle and, obviously, the most critical. As we have discussed, porting involves architecture porting and board porting. Board porting involves board-specific initialisation code, which includes initialisation of the various interfaces such as memory, and peripherals such as serial and i2c, which in turn involves driver porting.
There are two categories of drivers. One is the standard device driver, such as the i2c driver and block driver, located at the standard directory location. The other is the custom interface or device driver, which includes the board-specific custom code and needs to be specifically brought in with the kernel. And this collection of board-specific initialisation and custom code is referred to as a Board Support Package or, in Linux terminology, an LSP. In simple words, whatever software code you require (which is specific to the target platform) to boot up the target with the operating system can be called the LSP.

Components of LSP
As the name itself suggests, a BSP is dependent on the things that are specific to the target board. So, it consists of the code which is specific to that particular board, and it applies only to that board. The usual list includes Interrupt Request Numbers (IRQs), which are dependent on how the various devices are connected on the board. Also, some boards have an audio codec, and you need to have a driver for that codec. Likewise, there would be switch interfaces, a matrix keypad, external EEPROM, and so on.

LSP placement
The LSP is placed under a specific <arch> folder of the kernel's arch folder. For example, architecture-specific code for ARM resides in the arch/arm directory. This is about the code, but you also need the headers, which are placed under arch/arm/include/asm. However, board-specific code is placed at arch/arm/mach-<board_name> and the corresponding headers are placed at arch/arm/mach-<soc architecture>/include. For example, the LSP for the Beagle Board is placed at arch/arm/mach-omap2/board-omap3beagle.c and the corresponding headers are placed at arch/arm/mach-omap2/include/mach/. This is shown in Figure 4.

Machine ID
Every board in the kernel is identified by a machine ID. This helps the kernel maintainers to manage the boards based on ARM architecture in the source tree. This ID is passed to the kernel from the second stage bootloader, such as u-boot. For the kernel to boot properly, there has to be a match between the kernel and the second stage bootloader. This information is available in arch/arm/tools/mach-types and is used to generate the file linux/include/generated/

Let's understand this macro. MY_BOARD is the machine ID defined in arch/arm/tools/mach-types. The second parameter to the macro is a string describing the board. The next few lines specify the various initialisation functions, which the kernel has to invoke during boot-up. These include the following:
.atag_offset: Defines the offset in RAM where the boot parameters will be placed. These parameters are passed from the second stage bootloader, such as u-boot.
my_board_early: Calls the SOC initialisation functions. This function will be defined by the SOC vendor, if the kernel is ported for it.
my_board_irq: Initialisation related to interrupts is done over here.
my_board_init: All the board-specific initialisation is done here.
4. Update the corresponding makefile, so that the board-specific file gets compiled. This is shown below:

obj-$(CONFIG_MACH_MY_BOARD) += board-my_board.o

5. Create a default configuration file for the new board. To begin with, take any .config file as a starting point and customise it for the new board. Place the working .config file at arch/arm/configs/my_board_defconfig.

By: Pradeep Tewani
The author works at Intel, Bangalore. He shares his learnings on Linux & embedded systems through his weekend workshops. Learn more about his experiments at http://sysplay.in. He can be reached at [email protected].
We will focus on the RTC DS1347 to explain how device drivers are written for RTC chips. You can refer to the RTC DS1347 datasheet for a complete understanding of this driver.

Linux SPI subsystem
In Linux, the SPI subsystem is designed in such a way that the system running Linux is always an SPI master. The SPI subsystem has three parts, which are listed below.
The SPI master driver: For each SPI bus in the system, there will be an SPI master driver in the kernel, which has routines to read and write on that SPI bus. Each SPI master driver in the kernel is identified by an SPI bus number. For the purposes of this article, let's assume that the SPI master driver is already present in the system.
The SPI slave device: This interface provides a way of describing the SPI slave device connected to the system. In this case, the slave device is the RTC DS1347. Describing the SPI slave device is an independent task that can be done as discussed in the section on 'Registering RTC DS1347 as an SPI slave device'.
The SPI protocol driver: This interface provides methods to read and write the SPI slave device (RTC DS1347). Writing an SPI protocol driver is described in the section on 'Registering the RTC DS1347 SPI protocol driver'.
The steps for writing an RTC DS1347 driver based on the SPI bus are as follows:
1. Register RTC DS1347 as an SPI slave device with the SPI master driver, based on the SPI bus number to which the SPI slave device is connected.
2. Register the RTC DS1347 SPI protocol driver.
3. Once the probe routine of the protocol driver is called, register the RTC DS1347 protocol driver's read and write routines with the Linux RTC subsystem.
After all this, the Linux RTC subsystem can use the registered protocol driver's read and write routines to read and write the RTC.

[Figure: Linux kernel SPI subsystem block diagram - the CPU registers the SPI slave device (struct spi_board_info, struct spi_device) and the SPI protocol driver (struct spi_driver); the protocol driver's RTC read/write operations go through the SPI master driver and the SPI bus to the RTC DS1347.]

RTC DS1347 is a low-current, SPI-compatible real time clock. The information it provides includes the seconds, minutes and hours of the day, as well as what day, date, month and year it is. This information can either be read from or be written to the RTC DS1347 using the SPI interface.
In Linux, the SPI slave device is described in the board file of the Linux kernel, which is a part of the board support package. The board file resides in the arch/ directory in Linux (for example, the board file for the Beagle board is arch/arm/mach-omap2/board-omap3beagle.c). The struct spi_device is not written directly; instead, a different structure, struct spi_board_info, is filled and registered, which creates the struct spi_device in the kernel automatically and links it to the SPI master driver that contains the routines to read and write on the SPI bus. The struct spi_board_info for RTC DS1347 can be written in the board file as follows:

struct spi_board_info spi_board_info[] __initdata = {
	{
		.modalias = "ds1347",
		.bus_num = 1,
		.chip_select = 1,
	},
};

modalias is the name of the driver; it identifies the driver that is related to this SPI slave device, so the driver will have the same name. bus_num is the number of the SPI bus; it identifies the SPI master driver that controls the bus to which this SPI slave device is connected. chip_select is used when the SPI bus has multiple chip select pins; this number then identifies the chip select pin to which this SPI slave device is connected.
The next step is to register the struct spi_board_info with the Linux kernel. In the board file initialisation code, the structure is registered as follows:

spi_register_board_info(spi_board_info, 1);

The first parameter is the array of struct spi_board_info and the second parameter is the number of elements in the array; in the case of RTC DS1347, it is one. This API will check whether the bus number specified in the spi_board_info structure matches any of the master driver bus numbers registered with the Linux kernel. If one matches, it will create the struct spi_device and initialise its fields as follows:

master = the SPI master driver that has the same bus number as bus_num in the spi_board_info structure
chip_select = chip_select of spi_board_info
modalias = modalias of spi_board_info

After initialising the above fields, the structure is registered with the Linux SPI subsystem. The following fields of the struct spi_device will be initialised by the SPI protocol driver as needed by the driver, and left empty if not needed:

max_speed_hz = the maximum rate of transfer on the bus
bits_per_word = the number of bits per transfer
mode = the mode in which the SPI device works

In the above specified manner, any SPI slave device is registered with the Linux kernel: the struct spi_device is created and linked to the Linux SPI subsystem to describe the device. This spi_device struct will be passed as a parameter to the SPI protocol driver probe routine when the SPI protocol driver is loaded.

Registering the RTC DS1347 SPI protocol driver
The driver is the medium through which the kernel interacts with the device connected to the system. In the case of an SPI device, it is called the SPI protocol driver. The first step in writing an SPI protocol driver is to fill the struct spi_driver structure. For RTC DS1347, the structure is filled as follows:

static struct spi_driver ds1347_driver = {
	.driver = {
		.name = "ds1347",
		.owner = THIS_MODULE,
	},
	.probe = ds1347_probe,
};

The name field has the name of the driver (this should be the same as the modalias field of the struct spi_board_info). owner is the module that owns the driver; THIS_MODULE is the macro that refers to the current module in which the driver is written (the owner field is used for reference counting of the module owning the driver). probe is the most important routine; it is called when both the device and the driver are registered with the kernel.
The next step is to register the driver with the kernel. This is done by the macro module_spi_driver(struct spi_driver *). In the case of RTC DS1347, the registration is done as follows:

module_spi_driver(ds1347_driver);

The probe routine of the driver is called when either of the following cases is satisfied:
1. The device is already registered with the kernel, and then the driver is registered with the kernel.
2. The driver is registered first; then, when the device is registered with the kernel, the probe routine is called.
In the probe routine, we need to read and write on the SPI bus, for which certain common steps need to be followed. These steps are written in generic routines, which are called throughout to avoid duplicating them. Writing to the device is done as follows:
1. First, the register address of the SPI slave device is written on the SPI bus. In the case of the RTC DS1347, the address should have its most significant bit reset (cleared) for the write operation (as per the DS1347 datasheet).
2. Then the data is written on the SPI bus.
Since this is a common operation, a separate routine, ds1347_write_reg, is written as follows:
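The by-name binding rule described above (the driver's name must match the device's modalias) can be sketched in user space; this is not kernel code, just an illustration, and the struct and function names below are invented for the sketch:

```c
#include <assert.h>
#include <string.h>

/* User-space sketch (not kernel code) of how the SPI core binds a
 * protocol driver to a slave device: it compares the device's
 * modalias with the driver's name. */
struct board_info_sketch {
	const char *modalias;
	int bus_num;
	int chip_select;
};

struct driver_sketch {
	const char *name;
};

static int spi_match_sketch(const struct board_info_sketch *dev,
			    const struct driver_sketch *drv)
{
	/* A match means the probe routine of this driver will be
	 * called for this device. */
	return strcmp(dev->modalias, drv->name) == 0;
}
```

With a device registered as "ds1347", only a driver named "ds1347" matches; any other name leaves the device unbound.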
static int ds1347_write_reg(struct device *dev, unsigned char address,
			    unsigned char data)
{
	struct spi_device *spi = to_spi_device(dev);
	unsigned char buf[2];

	buf[0] = address & 0x7F;
	buf[1] = data;

	return spi_write_then_read(spi, buf, 2, NULL, 0);
}

The parameters to the routine are the address to which the data has to be written, and the data itself. spi_write_then_read is the routine that takes the following parameters:
struct spi_device: the slave device to be written to
tx_buf: the transmission buffer; this can be NULL if receiving only
tx_no_bytes: the number of bytes in the tx buffer
rx_buf: the receive buffer; this can be NULL if transmitting only
rx_no_bytes: the number of bytes in the receive buffer
In the case of the RTC DS1347 write routine, only two bytes are to be written: one is the address and the other is the data for that address.
Reading from the SPI bus is done as follows:
1. First, the register address of the SPI slave device is written on the SPI bus. In the case of RTC DS1347, the address should have its most significant bit set for the read operation (as per the DS1347 datasheet).
2. Then the data is read from the SPI bus.
Since this is a common operation, a separate routine, ds1347_read_reg, is written as follows:

static int ds1347_read_reg(struct device *dev, unsigned char address,
			   unsigned char *data)
{
	struct spi_device *spi = to_spi_device(dev);

	*data = address | 0x80;

	return spi_write_then_read(spi, data, 1, data, 1);
}

In the case of RTC DS1347, only one byte (the address) is written on the SPI bus, and one byte is read back from the SPI device.

RTC DS1347 driver probe routine
When the probe routine is called, it is passed the spi_device struct that was created when the spi_board_info was registered. The first thing the probe routine does is to set the SPI parameters to be used on the bus. One parameter is the mode in which the SPI device works; RTC DS1347 works in Mode 3 of SPI:

spi->mode = SPI_MODE_3;

bits_per_word is the number of bits transferred at a time; in the case of RTC DS1347, it is 8 bits:

spi->bits_per_word = 8;

After changing the parameters, the kernel has to be informed of the changes, which is done by calling the spi_setup routine as follows:

spi_setup(spi);

The following steps are then carried out to check and configure the RTC DS1347:
1. First, the RTC control register is read to see if the RTC is present and responds to the read command.
2. Then the write protection of the RTC is disabled, so that the code is able to write to the RTC registers.
3. Then the oscillator of the RTC DS1347 is started, so that the RTC starts working.
Till this point, the kernel has been informed that the RTC is on an SPI bus, and the RTC has been configured. Once the RTC is ready to be read and written by the user, the read and write routines of the RTC are registered with the Linux kernel RTC subsystem as follows:

rtc = devm_rtc_device_register(&spi->dev, "ds1347", &ds1347_rtc_ops, THIS_MODULE);

The parameters are the device, the name of the RTC driver, the RTC operations structure that contains the read and write operations of the RTC, and the owner module. After this registration, the Linux kernel will be able to read and write the RTC of the system. The RTC operations structure is filled as follows:

static const struct rtc_class_ops ds1347_rtc_ops = {
	.read_time = ds1347_read_time,
	.set_time = ds1347_set_time,
};

The RTC read routine is implemented as follows. It has two parameters: one is the device object and the other is a pointer to the Linux RTC time structure, struct rtc_time. The rtc_time structure has the following fields, which have to be filled by the driver:
tm_sec: seconds (0 to 59, same as RTC DS1347)
tm_min: minutes (0 to 59, same as RTC DS1347)
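The register-address convention used by the two routines above (most significant bit cleared for a write, set for a read, as per the DS1347 datasheet) can be captured in two tiny helpers; a user-space sketch, with helper names invented for illustration:

```c
#include <assert.h>

/* Sketch of the DS1347 register-address convention: the MSB of the
 * address byte selects the operation (per the DS1347 datasheet). */
static unsigned char ds1347_write_addr(unsigned char reg)
{
	return reg & 0x7F;	/* clear MSB: write operation */
}

static unsigned char ds1347_read_addr(unsigned char reg)
{
	return reg | 0x80;	/* set MSB: read operation */
}
```

So a register at 0x01 is addressed as 0x01 for a write and 0x81 for a read, which is exactly the `address & 0x7F` and `address | 0x80` seen in ds1347_write_reg and ds1347_read_reg.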
tm_hour: hour (0 to 23, same as RTC DS1347)
tm_mday: day of the month (1 to 31, same as RTC DS1347)
tm_mon: month (0 to 11; RTC DS1347 provides months from 1 to 12, so the value returned by the RTC needs to have 1 subtracted from it)
tm_year: years since 1900 (RTC DS1347 stores years from 0 to 99, and the driver considers the RTC valid from 2000 to 2099, so 100 is added to the value returned from the RTC to get the offset from 1900)
First, the clock burst command is executed on the RTC, which gives out all the date and time registers through the SPI interface, i.e., a total of 8 bytes:

buf[0] = DS1347_CLOCK_BURST | 0x80;
err = spi_write_then_read(spi, buf, 1, buf, 8);
if (err)
	return err;

Then the date and time that have been read are stored in the Linux RTC date and time structure. The time in Linux is kept in binary format, so a BCD-to-binary conversion is also done:

dt->tm_sec = bcd2bin(buf[0]);
dt->tm_min = bcd2bin(buf[1]);
dt->tm_hour = bcd2bin(buf[2] & 0x3F);
dt->tm_mday = bcd2bin(buf[3]);
dt->tm_mon = bcd2bin(buf[4]) - 1;
dt->tm_wday = bcd2bin(buf[5]) - 1;
dt->tm_year = bcd2bin(buf[6]) + 100;

After storing the date and time of the RTC in the Linux RTC date and time structure, the date and time are validated through the rtc_valid_tm API, and its status is returned. If the date and time are valid, the kernel will return the date and time in the structure to the user application; else it will return an error:

return rtc_valid_tm(dt);

In the set_time routine, the conversion runs in the opposite direction, from binary to BCD, and the year again needs special handling:

/* year in linux is from 1900 i.e in range of 100
   in rtc it is from 00 to 99 */
dt->tm_year = dt->tm_year % 100;

buf[7] = bin2bcd(dt->tm_year);
buf[8] = bin2bcd(0x00);

After this, the data is sent to the RTC device, and the status of the write is returned to the kernel as follows:

return spi_write_then_read(spi, buf, 9, NULL, 0);

Contributing to the RTC subsystem
The RTC DS1347 is a Maxim (Dallas) RTC. There are various other RTCs in the Maxim catalogue that are not supported by the Linux kernel, as is the case with RTCs from various other manufacturers. All the RTCs supported by the Linux kernel are present in the drivers/rtc directory of the kernel source. The following steps can be taken to write support for an RTC in the Linux kernel:
1. Pick any RTC from a manufacturer's (e.g., Maxim's) catalogue which does not have support in the Linux kernel (see the drivers/rtc directory for supported RTCs).
2. Download the datasheet of the RTC and study its features.
3. Refer to rtc-ds1347.c and other RTC files in the drivers/rtc directory of the Linux kernel, and go over this article, for how to implement RTC drivers.
4. Write the support for the RTC.
5. Use git (see 'References' below) to create a patch for the RTC driver written.
6. Submit the patch by mailing it to the Linux RTC mailing lists:
• [email protected]
• [email protected]
• [email protected]
7. The patch will be reviewed and any required changes will be suggested; if everything is fine, the driver will be acknowledged and added to the Linux tree.
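The bcd2bin()/bin2bcd() helpers used above come from the kernel's linux/bcd.h. A user-space sketch of the same conversions, together with the year offset described earlier (assuming the kernel's usual BCD packing, one decimal digit per nibble):

```c
#include <assert.h>

/* User-space sketch of the BCD conversions the driver gets from
 * linux/bcd.h, plus the year mapping: the RTC stores 0-99, the
 * driver treats this as 2000-2099, and struct rtc_time counts
 * years from 1900, hence the +100 offset. */
static unsigned char bcd2bin(unsigned char val)
{
	return (val & 0x0F) + (val >> 4) * 10;
}

static unsigned char bin2bcd(unsigned char val)
{
	return (unsigned char)(((val / 10) << 4) | (val % 10));
}

static int rtc_year_to_tm_year(unsigned char bcd_year)
{
	/* e.g., RTC register 0x14 means 2014, i.e., tm_year 114 */
	return bcd2bin(bcd_year) + 100;
}
```

This is why the read routine does `bcd2bin(buf[6]) + 100` and the set routine does `tm_year % 100` before bin2bcd().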
This article is aimed at newbie developers who are planning to set up a development
environment or move their Linux kernel development environment to GIT.
GIT is a free, open source distributed version control tool. It is easy to learn and is also fast, as most of the operations are performed locally, and it has a very small footprint. For a comparison of GIT with another SVN (source version control) tool, refer to http://git-scm.com/about/small-and-fast.
GIT allows multiple local copies (branches), each totally different from the others, and it allows the making of clones of the entire repository, so each user has a full backup of the main repository. Figure 1 gives one among the many pictorial representations of GIT: developers can clone the main repository, maintain their own local copies (branch and branch1) and push their code changes (branch1) to the main repository. For more information on GIT, refer to http://git-scm.com/book.

Note: GIT is under development and hence changes are often pushed into GIT repositories. To get the latest GIT code, use the following command:
$ git clone git://git.kernel.org/pub/scm/git/git.git

The kernel
The kernel is the lowest level program that manages communications between the software and hardware using IPC and system calls. It resides in the main memory (RAM) whenever an operating system is loaded.
The kernel is mainly of two types - the micro kernel and the monolithic kernel. The Linux kernel is monolithic, as is depicted clearly in Figure 2. Based on that diagram, the kernel can be viewed as a resource manager; the managed resource could be a process, hardware, memory or storage devices. More details about the internals of the Linux kernel can be found at http://kernelnewbies.org/LinuxVersions and https://www.kernel.org/doc/Documentation/.

Linux kernel files and modules
In Ubuntu, kernel files are stored under the /boot/ directory (run ls /boot/ from the command prompt). Inside this directory, the kernel file will look something like this:

'vmlinuz-A.B.C-D'

…where A.B is 3.2, C is your version and D is a patch or fix. Let's delve deeper into certain aspects depicted in Figure 3:
Vmlinuz-3.2.0-29-generic: In vmlinuz, 'z' indicates the
Let’s set up our own local repository for the Linux kernel.
Now you can see a directory named linux-2.6 in the Next, find the latest stable kernel tag by running the
current directory. Do a GIT pull to update your repository: following code:
Now run:
Make
References
[1] http://linux.yyz.us/git-howto.html
[2] http://kernelnewbies.org/KernelBuild
Figure 7: Modules_install and Install [3] https://www.kernel.org/doc/Documentation/
[4] http://kernelnewbies.org/LinuxVersions
Setting up the kernel configuration
Many kernel drivers can be turned on or off, or be built By: Vinay Patkar
on modules. The .config file in the kernel source directory The author works as a software development engineer at Dell
determines which drivers are built. When you download the India R&D Centre, Bengaluru, and has close to two years’
source tree, it doesn’t come with a .config file. You have several experience in automation and Windows Server OS. He is
interested in virtualisation and cloud computing technologies.
options for generating a .config file. The easiest is to duplicate
In previous articles in this series, we discussed various scenarios that included DHCP, DNS and setting up a captive portal. In this article, let's discuss the HTTP proxy, traffic shaping and the setting up of the 'Users and Computers' modules.

The HTTP proxy set-up
We will start with the set-up of the HTTP proxy module of Zentyal. This module will be used to filter out unwanted traffic from our network. The steps for the configuration are as follows:
1. Open the Zentyal dashboard by using the domain name set up in the previous article, or use the IP address.
2. The URL will be https://domain-name.
3. Enter the user ID and password.
4. From the dashboard, select HTTP Proxy under the Gateway section. This will show different options like General settings, Access rules, Filter profiles, Categorized Lists and Bandwidth throttling.
5. Select General settings to configure some basic parameters.
6. Under General settings, select Transparent Proxy. This option is used to manage proxy settings without making clients aware of the proxy server.
7. Check Ad Blocking, which will block all the advertisements in the HTTP traffic.
8. Cache size defines the storage area for cached HTTP traffic. Mention the size in MBs.
9. Click Change and then click Save changes.
10. To filter unwanted sites from the network, block
them using Filter profiles. Click Filter profiles under HTTP proxy.
11. Click Add new.
12. Enter the name of the profile. In our case, we used Spam. Click Add and save the changes.
13. Click the button under Configuration.
14. To block all spam sites, let's use the Threshold option. The various options of Threshold decide how the listed sites are blocked. Let's select Very strict under Threshold and click Change. Then click Save changes to save the changes permanently.
15. Select Use antivirus to block all incoming files which may be viruses. Click the Change and then the Save changes buttons.
16. To add a site to be blocked by the proxy, click Domain and URLs, and under Domain and URL rules, click the Add new button.
17. You will then be asked for the domain name. Enter the domain name of the site which is to be blocked. The Decision option instructs the proxy to allow or deny the specified site. Then click Add and Save changes.
18. To activate the Spam profile, click Access rules under HTTP proxy.
19. Click Add new. Define the time period and the days when the profile is to be applied.
20. Select Any from the Source dropdown menu and then select Apply filter profile from the Decision dropdown menu. You will see the Spam profile.
21. Click Add and Save changes.
With all the above steps, you will be able to either block or allow sites, depending on what you want your clients to have access to. All the other settings can be experimented with, as per your requirements.

Bandwidth throttling
This setting under HTTP proxy is used to add delay pools, so that a big file that one user wishes to download does not hamper the download speed of the other users. To do this, follow the steps mentioned below:
1. First create the network object on which you wish to apply the rule. Click Network and select Objects under Network options.
2. Click Add new to add the network object.
3. Enter the name of the object, like LAN. Click Add, and then Save changes.
4. After you have added the network object, you have to configure members under that object. Click the icon under Members.
5. Click Add new to add members.
6. Enter the names of the members. We will use LAN users.
7. Under IP address, select the IP address range.
8. Enter your DHCP address range, since we would like to apply the rule to all the users in the network.
9. Click Add and then Save changes.
10. Till now, we have added all the users of the network on which we wish to apply the bandwidth throttling rule. Now we will apply the rule. To do this, click HTTP Proxy and select Bandwidth throttling.
11. This setting will be used to set the total amount of bandwidth that a single client can use. Click Enable per client limit.
12. Enter the Maximum unlimited size per client, to be set as a limit for a user under the network object. Enter '50 MB'. A client can now download a 50 MB file at maximum speed, but if the client tries to download a file greater than the specified limit, the throttling rule will limit the speed to the maximum download rate per client. This speed option is set in the next step.
13. Enter the maximum download rate per client (for our example, enter 20). This means that if a download reaches the threshold, the speed will be decreased to 20 KBps.
14. Click Add and Save changes.

Traffic shaping set-up
With bandwidth throttling, we have set the upper limit for downloads, but to effectively manage our bandwidth we have to use the Traffic shaping module. Follow the steps shown below:
1. Click on Traffic shaping under the Gateway section.
2. Click on Rules. This will display two sections: rules for internal interfaces and rules for external interfaces.
3. Follow the example rules given in Table 1; these can be used to shape the bandwidth on eth1.
Table 1
Based on the firewall | Service | Source | Destination | Priority | Guaranteed rate (KBps) | Limited rate (KBps)
Yes | Any | Any | | 2 | 512 | 0
The rules mentioned in Table 1 will give the listed protocols priority over other protocols, with a guaranteed speed.
4. The rules given in Table 2 will manage the upload speed for the protocols on eth0.
5. After adding all the rules, click on Save changes.
6. With these steps, you have set the priorities of the protocols and applications. One last thing to be done here is to set the upload and download rates of the server. To do this, click Interface rates under Traffic Shaping.
7. Click Action. Change the upload and download speeds of the server to those supplied by your service provider. Click Change and then Save changes.

Setting up Users and Computers
Setting up of groups and users can be done as follows.
Group set-up: For this, follow the steps shown below.
1. Click Users and Computers under the Office section.
2. Click Manage. Select Groups from the LDAP tree. Click on the plus sign to add groups.
Users' set-up: To set up users for the domain system and the captive portal, follow the steps shown below.
1. Click Users and Computers under the Office section.
2. Click Manage. Here you will see the LDAP tree. Select Users and click on the plus sign.
With all the information entered and passed, users can log in to the system through the captive portal.

References
[1] http://en.wikipedia.org/wiki/Proxy_server#Transparent_proxy
[2] http://en.wikipedia.org/wiki/Bandwidth_throttling
[3] http://doc.zentyal.org/en/qos.html

By: Gaurav Parashar
The author is a FOSS enthusiast, and loves to work with open source technologies like Moodle and Ubuntu. He works as an assistant dean (for IT students) at Inmantec Institutions, Ghaziabad, UP. He can be reached at [email protected]
Docker is an open source project, which packages applications and their dependencies
in a virtual container that can run on any Linux server. Docker has immense possibilities
as it facilitates the running of several OSs on the same server.
Technology is changing faster than styles in the fashion world, and there are many new entrants specific to open source, cloud, virtualisation and DevOps technologies. Docker is one of them. The aim of this article is to give you a clear idea of Docker, its architecture and its functions, before getting started with it.
Docker is a new open source tool based on Linux container technology (LXC), designed to change how you think about workload/application deployments. It helps you easily create lightweight, self-sufficient, portable application containers that can be shared, modified and easily deployed to different infrastructures such as cloud/compute servers or bare metal servers. The idea is to provide a comprehensive abstraction layer that allows developers to 'containerise' or 'package' any application and have it run on any infrastructure.
The container virtualisation on which Docker is based is not new, but there is no better tool than Docker for managing kernel-level technologies such as LXC, cgroups and a copy-on-write filesystem. It helps us manage these complicated kernel-layer technologies through tools and APIs.

What is LXC (Linux Container)?
I will not delve too deeply into what LXC is and how it works, but will just describe some major components.
LXC is an OS-level virtualisation method for running multiple isolated Linux operating systems, or containers, on a single host. LXC does this by using kernel-level name spaces, which help to isolate containers from the host.
Now questions might arise about security. If I am logged in to my container as the root user, can I hack my base OS; is it not secured? This is not the case, because the user name space separates the users of the containers from those of the host, ensuring that the container root user does not have the root privilege to log in to the host OS. Likewise, there are the process name space and the network name space, which ensure that processes are displayed and managed within the container and not on the host, and that each container has its own network devices and IP addresses.

Cgroups
Cgroups, also known as control groups, help to implement resource accounting and limiting. They help to limit resource utilisation or consumption by a container, such as memory, CPU and disk I/O, and also provide metrics on resource consumption by the various processes within the container.

Copy-on-write filesystem
Docker leverages a copy-on-write filesystem (currently AUFS, but other filesystems are being investigated). This allows Docker to spawn containers quickly: to put it simply, instead of having to make full copies, it basically uses 'pointers' back to existing files.
[Figure 1: Linux Container: LXC building blocks in the Linux kernel (namespaces, cgroups, netlink, netfilter, SELinux, AppArmor) on top of the hardware (Intel, AMD)]

Containerisation vs virtualisation
What is the rationale behind the container-based approach, and how is it different from virtualisation? Figure 2 speaks for itself.
Containers virtualise at the OS level, whereas both Type-I and Type-II hypervisor-based solutions virtualise at the hardware level. Both virtualisation and containerisation are kinds of virtualisation; in the case of VMs, a hypervisor (whether Type-I or Type-II) slices the hardware, while containers make available protected portions of the OS: they effectively virtualise the OS. If you run multiple containers on the same host, no container will come to know that it is sharing the same resources, because each container has its own abstraction. LXC takes the help of name spaces to provide these isolated regions known as containers. Each container runs in its own allocated name space and does not have access outside of it. Technologies such as cgroups, union filesystems and container formats are also used for different purposes throughout the containerisation.

Figure 2: Virtualisation

Linux containers: Unlike virtual machines, with the help of LXC you can share multiple containers from a single source disk OS image. LXC is very lightweight, has a faster start-up and needs fewer resources.

Installation of Docker
Before we jump into the installation process, we should be aware of certain terms commonly used in the Docker documentation.

Note: I am using CentOS, so the following instructions are applicable to CentOS 6.5.

Docker is part of Extra Packages for Enterprise Linux (EPEL), which is a community repository of non-standard packages for the RHEL distribution. First, we need to install the EPEL repository using the command shown below:

[root@localhost ~] # rpm -ivh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm

As per best practice, update the system:

[root@localhost ~] # yum update -y

docker-io is the package that we need to install. As I am using CentOS, Yum is my package manager; depending on your distribution, ensure that the correct command is used, as shown below:

[root@localhost ~] # yum -y install docker-io

Once the above installation is done, start the Docker service with the help of the command below:

[root@localhost ~] # service docker start

To ensure that the Docker service starts at each reboot, use the following command:

[root@localhost ~] # chkconfig docker on
To check the Docker version, use the following command:

[root@localhost ~] # docker version

How to create a LAMP stack with Docker
We are going to create a LAMP stack on a CentOS VM; however, you can work with different variants as well. First, let's get the latest CentOS image. The command below will help us do so:

[root@localhost ~] # docker pull centos:latest

Next, let's make sure that we can see the image by running the following command:

[root@localhost ~] # docker images centos
REPOSITORY   TAG      IMAGE ID       CREATED       VIRTUAL SIZE
centos       latest   0c752394b855   13 days ago   124.1 MB

Running a simple bash shell to test the image also starts a new container:

[root@localhost ~] # docker run -i -t centos /bin/bash

If everything is working properly, you'll get a simple bash prompt. Now, as this is just a base image, we need to install PHP, MySQL and Apache to complete the LAMP stack:

[root@localhost ~] # yum install php php-mySQL mySQL-server httpd

The container now has the LAMP stack. Type 'exit' to quit the bash shell.
We are going to save this as a golden image, so that the next time we need another LAMP container, we don't need to install everything again. Run the following command and note the 'CONTAINER ID' of the container; in my case, the ID was '4de5614dd69c':

[root@localhost ~] # docker ps -a

Run the following command to see your new image in the list. You will find the newly created image 'lamp-image' shown in the output:

[root@localhost ~] # docker images
REPOSITORY   TAG      IMAGE ID       CREATED         VIRTUAL SIZE
lamp-image   latest   b71507766b2d   2 minutes ago   339.7 MB
centos       latest   0c752394b855   13 days ago     124.1 MB

Let's log in to this image/container to check the PHP version:

[root@localhost ~] # docker run -i -t lamp-image /bin/bash
bash-4.1# php -v
PHP 5.3.3 (cli) (built: Dec 11 2013 03:29:57)
Zend Engine v2.3.0 Copyright (c) 1998-2010 Zend Technologies

Now, let us configure Apache. Log in to the container and create a file called index.php (a .php file, so that Apache passes it to the PHP interpreter). If you don't want to install VI or VIM, use the echo command to redirect the following content to the index.php file:

<?php echo "Hello world"; ?>

Start the Apache process with the following command:

[root@localhost ~] # /etc/init.d/httpd start

Then test it with the help of the browser/curl/links utilities.
If you're running Docker inside a VM, you'll need to forward port 80 on the VM to another port on the VM's host machine. The following command might help you configure port forwarding; Docker has the feature of forwarding ports from containers to the host:

[root@localhost ~] # docker run -i -t -p :80 lamp-image /bin/bash

For detailed information on Docker and other technologies related to container virtualisation, check out the links given under 'References'.
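The command that actually creates the golden image from the noted container appears to have been lost in this extract; with the Docker CLI of that era, committing the container under the image name used above would look something like the following (a hypothetical transcript reconstructed from the container ID and image name in the article):

```
[root@localhost ~] # docker commit 4de5614dd69c lamp-image
```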
The first article in the Wireshark series, published in the July 2014 issue of OSFY, covered Wireshark architecture, its installation on Windows and Ubuntu, as well as various ways to capture traffic in a switched environment. Interpretation of DNS and ICMP Ping protocol captures was also covered. Let us now carry the baton forward and understand additional Wireshark features and protocol interpretation.
To start with, capture some traffic from a network connected to an Ethernet hub, which is the simplest way to capture complete network traffic. Interested readers may purchase an Ethernet hub from a second-hand computer dealer at a throwaway price and go ahead and capture a few packets in their test environment. The aim of this is to acquire better hands-on practice in using Wireshark. So start the capture and, once you have sufficient packets, stop and view them before you continue reading.
An interesting observation about this capture is that, unlike the broadcast-and-host-only traffic seen in a switched environment, it contains packets from all the source IP addresses connected in the network. Did you notice this? The traffic thus contains:
• Broadcast packets
• Packets from all systems towards the Internet
• PC-to-PC communication packets
• Multicast packets
Now, at this point, imagine analysing traffic captured from hundreds of computers in a busy network: the sheer volume of captured packets will be baffling. Here, an important Wireshark
Wireshark displays the ARP reply under the ‘Info’ box as:
192.168.51.1 is at 00:21:97:88:28:21.
Thus, with the help of an ARP request and reply, system
192.168.51.208 has detected the MAC address belonging to
192.168.51.1.
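To make the frame contents above concrete, here is a small, self-contained Python sketch (our own illustration, not code from the article) that packs and unpacks the 28-byte payload of an ARP reply carrying the addresses quoted above; the target MAC used here is a made-up placeholder:

```python
import struct
import socket

def build_arp_reply(sender_mac, sender_ip, target_mac, target_ip):
    """Return the 28-byte ARP payload of an 'is-at' (reply) message.

    Field layout follows RFC 826 for Ethernet/IPv4: hardware type,
    protocol type, hardware/protocol address lengths, opcode, then the
    sender and target MAC/IP addresses.
    """
    htype, ptype, hlen, plen, oper = 1, 0x0800, 6, 4, 2  # Ethernet, IPv4, reply
    return (struct.pack('!HHBBH', htype, ptype, hlen, plen, oper)
            + bytes.fromhex(sender_mac.replace(':', ''))
            + socket.inet_aton(sender_ip)
            + bytes.fromhex(target_mac.replace(':', ''))
            + socket.inet_aton(target_ip))

# The reply from the capture above: 192.168.51.1 is at 00:21:97:88:28:21.
pkt = build_arp_reply('00:21:97:88:28:21', '192.168.51.1',
                      'aa:bb:cc:dd:ee:ff', '192.168.51.208')
oper = struct.unpack('!H', pkt[6:8])[0]
sender_ip = socket.inet_ntoa(pkt[14:18])
print(oper, sender_ip)  # 2 192.168.51.1
```

Unpacking the same bytes recovers the opcode 2 (reply) and the sender's IP, which is exactly what Wireshark shows in its 'Info' column.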
Saving packets
Packets captured using Wireshark can be saved from the menu
‘File – Save as’ in different formats such as Wireshark, Novell
LANalyzer and Sun Snoop, to name a few.
In addition to saving all captured packets in various file
formats, the ‘File – Export Specified Packets’ option offers
users the choice of saving ‘Display Filtered’ packets or a
range of packets.
When a system configured with the 'Obtain an IP address automatically' setting is connected to a network, it uses DHCP to get an IP address from the DHCP server. Thus, this is a client–server protocol. To capture DHCP packets, users may start Wireshark on such a system, then start packet capture and, finally, connect the network cable.
Please refer to Figures 4 and 5, which give a diagram and a screenshot of the DHCP protocol, respectively.

Figure 5: Screenshot of the DHCP protocol

Discovering DHCP servers: To discover DHCP server(s) in the network, the client sends a broadcast on 255.255.255.255 with the source IP as 0.0.0.0, using UDP port 68 (bootpc) as the source port and UDP 67 (bootps) as the destination. This message also contains the source MAC address as that of the client and ff:ff:ff:ff:ff:ff as the destination MAC.
A DHCP offer: The nearest DHCP server receives this 'discover' broadcast and replies with an offer containing the offered IP address, the subnet mask, the lease duration, the default gateway and the IP address of the DHCP server. The source MAC address is that of the DHCP server and the destination MAC address is that of the requesting client. Here, the UDP source and destination ports are reversed.
DHCP requests: Remember that there can be more than one DHCP server in a network. Thus, a client can receive multiple DHCP offers. The DHCP request packet is broadcast by the client with parameters similar to discovering a DHCP server, with two major differences:

Please feel free to download the pcap files used for preparing this article from opensourceforu.com. I believe all OSFY readers will enjoy this interesting world of Wireshark, packet capturing and various protocols!

Troubleshooting tips
Capturing ARP traffic could reveal ARP poisoning (or ARP spoofing) in the network. This will be discussed in more detail at a later stage. Similarly, studying the capture of the DHCP protocol may lead to the discovery of an unintentional or a rogue DHCP server within the network.

A word of caution
Packets captured using the test scenarios described in this series of articles are capable of revealing sensitive information, such as login names and passwords. Some scenarios, such as ARP spoofing, may disrupt the network temporarily. Make sure to use these techniques only in a test environment. If at all you wish to use them in a live environment, do not forget to get explicit written permission before doing so.

By: Rajesh Deodhar
The author has been an IS auditor and network security consultant-trainer for the last two decades. He is a BE in Industrial Electronics, and holds CISA, CISSP, CCNA and DCL certifications. He can be contacted at [email protected]
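As an aside, the DHCP addressing facts described in this article can be captured in a tiny sanity-check. This sketch is our own illustration (the field names are ours); it only restates the port and address invariants from the text:

```python
# DHCP DISCOVER addressing, as described in the article (RFC 2131):
# the client broadcasts from 0.0.0.0:68 (bootpc) to 255.255.255.255:67
# (bootps), with ff:ff:ff:ff:ff:ff as the destination MAC.
discover = {
    'src_ip': '0.0.0.0',             # client has no address yet
    'dst_ip': '255.255.255.255',     # limited broadcast
    'src_port': 68,                  # bootpc (client)
    'dst_port': 67,                  # bootps (server)
    'dst_mac': 'ff:ff:ff:ff:ff:ff',  # layer-2 broadcast
}

def offer_ports(d):
    """The server's OFFER simply reverses the UDP ports of the DISCOVER."""
    return d['dst_port'], d['src_port']

print(offer_ports(discover))  # (67, 68)
```

A display filter such as `bootp` (or `udp.port == 67`) in Wireshark will show exactly these packets in a capture.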
Many of us are curious and eager to learn how to
port or flash a new version of Android to our
phones and tablets. This article is the first step
towards creating your own custom Android system. Here,
you will learn to set up the build environment for the
Android kernel and build it on Linux.
Let us start by understanding what Android is. Is it an
application framework or is it an operating system? It can be
called a mobile operating system based on the Linux kernel,
for the sake of simplicity, but it is much more than that. It
consists of the operating system, middleware, and application
software that originated from a group of companies led by
Google, known as the Open Handset Alliance.
Figure 1: Android system architecture (application services and managers, HALs such as the camera, audio and graphics HALs, and the Linux kernel with the corresponding drivers)

The build process described here was performed on an Intel i5 core processor running 64-bit Ubuntu Linux 14.04 LTS (Trusty Tahr). However, the process should work with any Android kernel and device, with minor modifications. The handset details are shown in the screenshot (Figure 2), taken from the Settings -> About device menu of the phone.

Figure 2: Handset details for GT-S5282
#Set the path for Android build env (64 bit)
export PATH=${HOME}/android/ndk/android-ndk-r9/toolchains/arm-linux-androideabi-4.4.3/prebuilt/linux-x86_64/bin:$PATH

Step 4: Configure the Android kernel
Install the necessary dependencies, as follows:

$ sudo apt-get install libncurses5-dev build-essential

Set up the architecture and cross compiler, as follows:

$ export ARCH=arm
$ export CROSS_COMPILE=arm-linux-androideabi-

The kernel Makefile refers to the above variables to select the architecture and the cross compiler. The cross compiler command will be ${CROSS_COMPILE}gcc, which is expanded to arm-linux-androideabi-gcc. The same applies for other tools like g++, as, objdump, gdb, etc.
Configure the kernel for the device:

$ cd ~/android/kernel
$ make mint-vlx-rev03_defconfig

The device-specific configuration files for the ARM architecture are available in the arch/arm/configs directory. Executing the configuration command may throw a few warnings. You can ignore these warnings for now. The command will create a .config file, which contains the kernel configuration for the device.
To view and edit the kernel configuration, run the following command:

$ make menuconfig

Build the kernel image (use -j4 to run four jobs in parallel):

$ make zImage
$ make -j4 zImage

The compilation process will take time to complete, based on the options enabled in the kernel configuration (.config) and the performance of the build system. On completion, the kernel image (zImage) will be generated in the arch/arm/boot/ directory of the kernel source.
Compile the modules:

$ make modules

This will trigger the build for kernel modules, and .ko files should be generated in the corresponding module directories. Run the find command to get a list of .ko files in the kernel directory:

$ find . -name "*.ko"

What next?
Now that you have set up the Android build environment, and compiled an Android kernel and the necessary modules, how do you flash it to the handset so that you can see the kernel working? This requires the handset to be rooted first, followed by flashing the kernel and related software. It turns out that there are many new concepts to understand before we get into this. So be sure to follow the next article on rooting and flashing your custom Android kernel.

References
https://source.android.com/
https://developer.android.com/
http://xda-university.com
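As a small aside on the CROSS_COMPILE mechanism used above, the prefix-plus-tool expansion can be illustrated in a few lines (our own sketch, not part of the kernel build itself):

```python
# The kernel build derives each toolchain command by prefixing the tool
# name with CROSS_COMPILE: ${CROSS_COMPILE}gcc -> arm-linux-androideabi-gcc.
cross_compile = 'arm-linux-androideabi-'
tools = ['gcc', 'g++', 'as', 'objdump', 'gdb']
commands = [cross_compile + t for t in tools]
print(commands[0])  # arm-linux-androideabi-gcc
```

This is why a single CROSS_COMPILE variable is enough to select the whole ARM toolchain.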
Lists are the basic building blocks of Maxima. The fundamental reason is that Maxima is implemented in Lisp, the building blocks of which are also lists.
To begin with, let us walk through the ways of creating a list. The simplest method to get a list in Maxima is to just define it, using []. So, [x, 5, 3, 2*y] is a list consisting of four members. However, Maxima provides two powerful functions for automatically generating lists: makelist() and create_list().
makelist() can take two forms. makelist(e, x, x0, xn) creates and returns a list using the expression 'e', evaluated for 'x' using the values ranging from 'x0' to 'xn'. makelist(e, x, L) creates and returns a list using the expression 'e', evaluated for 'x' using the members of the list L. Check out the example below for better clarity:

$ maxima -q
(%i1) makelist(2 * i, i, 1, 5);
(%o1) [2, 4, 6, 8, 10]
(%i2) makelist(concat(x, 2 * i - 1), i, 1, 5);
(%o2) [x1, x3, x5, x7, x9]
(%i3) makelist(concat(x, 2), x, [a, b, c, d]);
(%o3) [a2, b2, c2, d2]
(%i4) quit();

Note the interesting usage of concat() to just concatenate its arguments. Note that makelist() is limited by the variation it can have, which, to be specific, is just one – 'i' in the first two examples and 'x' in the last one. If we want more, the create_list() function comes into play.
create_list(f, x1, L1, ..., xn, Ln) creates and returns a list with members of the form 'f', evaluated for the variables x1, ..., xn using the values from the corresponding lists L1, ..., Ln. Here is just a glimpse of its power:

$ maxima -q
(%i1) create_list(concat(x, y), x, [p, q], y, [1, 2]);
(%o1) [p1, p2, q1, q2]
(%i2) create_list(concat(x, y, z), x, [p, q], y, [1, 2], z, [a, b]);
(%o2) [p1a, p1b, p2a, p2b, q1a, q1b, q2a, q2b]
(%i3) create_list(concat(x, y, z), x, [p, q], y, [1, 2, 3], z, [a, b]);
(%o3) [p1a, p1b, p2a, p2b, p3a, p3b, q1a, q1b, q2a, q2b, q3a, q3b]
(%i4) quit();

Note that 'all possible combinations' are created using the values for the variables 'x', 'y' and 'z'.
Once we have created the lists, Maxima provides a host of functions to play around with them. Let's take a look at these.
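For readers more familiar with Python, the create_list() behaviour shown above is close to itertools.product; this comparison is ours, not part of Maxima:

```python
from itertools import product

# A rough Python analogue of create_list(concat(x, y), x, [p, q], y, [1, 2]):
# all combinations are generated, with the rightmost variable varying fastest.
result = [f'{x}{y}' for x, y in product(['p', 'q'], [1, 2])]
print(result)  # ['p1', 'p2', 'q1', 'q2']
```

The ordering matches Maxima's output above: p1, p2, q1, q2.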
Smartphones have evolved from being used just for communicating with others to offering a wide range of functions. The fusion between the Internet and smartphones has made these devices very powerful and useful to us. Android has been a grand success in the smartphone business. It's no exaggeration to say that more than 80 per cent of the smartphone market is now occupied by Android, which has become the preference of most mobile vendors today. The reason is simple: Android is free and available to the public.
But there's a catch. Have you ever wondered how well Android respects 'openness'? And how much Android respects your freedom? If you haven't thought about it, please take a moment to do so. When you're done, you will realise that Android is not completely open to everyone. That's why we're going to explore Replicant – a truly free version of Android.

Android and openness
Let's talk about openness first. The problem with a closed source program is that you cannot feel safe with it. There have been many incidents which suggest that people can easily be spied upon through closed source programs. On the other hand, since open source code is open and available to everyone, one cannot quietly plant a bug in an open source program, because the bug can easily be found. Apart from that aspect, open source programs can be continually improved by people contributing to them—enhancing a feature or writing software patches. Also, there are many user communities that will help you if you are stuck with a problem.
When Android was first launched in 2007, Google also announced the 'Open Handset Alliance (OHA)' to work with other mobile vendors to create an open source mobile operating system, which would allow anyone to work on it. This seemed to be a good deal for the mobile vendors, because Apple's iPhone practically owned the smartphone market at that time. The mobile vendors needed another player, or 'game changer', in the smartphone market, and they got Android.
When Google releases the Android source code to the public for free, it is called 'stock Android'. This comprises only the very basic system. The mobile vendors take this stock Android and tailor it according to their device's specifications—featuring unique visual aspects such as themes, graphics and so on.
OHA has many terms and conditions, so if you want to use Android in your devices, you have to play by Google's rules. The following aspects are mandatory for each Android phone:
Google setup-wizard
Google phone-top search
Gmail apps
Google calendar
Google Talk
Google Hangouts
YouTube
Google maps for mobiles
Google StreetView
Google Play store
Google voice search
It was not very long ago (July 25, 2011, to be precise) that Andreas Gal, director of research at Mozilla Corporation, announced the 'Boot to Gecko' project (B2G) to build a complete, standalone operating system for the open Web, which could provide a community-based alternative to commercially developed operating systems such as Apple's iOS and Microsoft's Windows Phone. Besides, the Linux-based operating system for smartphones and tablets (among others) also aimed to give Google's Android, Jolla's Sailfish OS, as well as other community-based open source systems such as Ubuntu Touch, a run for their money (pun intended!). Although, on paper, the project boasts of tremendous potential, it has failed to garner the kind of response its developers had initially hoped for. The relatively few devices in a market that is flooded with the much-loved Android OS could be one possible reason. Companies like ZTE, Telefónica and GeeksPhone have taken the onus of launching Firefox OS-based devices; however, giants in the field have shied away from adopting it, until now.
Hong Kong's Alcatel One Touch is one of the few companies that has bet on Firefox by launching the Alcatel One Touch Fire smartphone globally, last year. The Firefox OS 1.0-based Fire was primarily intended for emerging markets, with the aim of ridding the world of feature phones. Sadly, the Indian market was left out when the first Firefox OS-based smartphone was tested—could Android dominance be the reason? "Alcatel Fire (Alcatel 4012) was launched globally last year. We tried everything, but there's such a big hoo-ha about Android. Last year, it was a big thing. First, you have to create some space for the OS itself, and then create a buzz," revealed Piyush A Garg, project manager, APAC BU India.
According to Garg, there's still a basic lack of awareness regarding the Firefox OS in India. "Techies might be aware of what the Firefox OS is but the average end user may not. And ultimately, it is the end user who has to purchase the phone. We have to communicate the advantages of Mozilla Firefox to the end user, create awareness and only then launch a product based on it," he said.

Alcatel's plans for Firefox-based smartphones
So the bottom line is, India will not see the Alcatel One Touch Fire any time soon; or maybe not see it at all. "Sadly, yes. Fire is not coming to India at all. It's not going to come to India because Fire was an 8.89 cm (3.5 inch) product. Instead, we might be coming up with an 8.89-10.16 cm (3.5-4 inch) product. Initially, we were considering a 12.7-13.97 cm (5-5.5 inch) device. However, we are looking to come up with a low-end phone and such a device cannot come in the 12.7 cm (5 inch) segment. So, once the product is launched with an 8.89-10.16 cm (3.5-4 inch) screen with the Firefox OS, we may launch a whole series of Firefox OS-based devices," said Garg.

The Firefox OS ecosystem needs a push in India
With that said, it has taken a fairly long time for the company to realise that the Firefox OS could be a deal-breaker in an extensive market such as India. "Firefox OS may change the mobile game. However, it still needs to grow in India. Considering the fact that Android has such a huge base in India, we are waiting for the right time to launch the Firefox-based smartphones here," he said. But is the Firefox OS really a 'deal-breaker' for customers? "The Firefox OS can be at par with Android. The major advantages of Mozilla Firefox are primarily the memory factor and the space that it takes—the entire OS as well as the applications. It's not basically an API kind of OS; it's an installation directly coming from HTML. That's a major advantage. Also, apps for the OS are built using HTML5, which means that, in theory, they run on the Web and on your phone or tablet. What made Android jump from Jelly Bean to KitKat (which requires low memory) is the fact that the end user is looking at a low memory OS. Mozilla Firefox is also easy to use. I won't say 'better' or 'any less', but at par with Android," said Garg, evidently confident of the platform.
To take things forward, vis-à-vis the platform, Alcatel One Touch is also planning to come up with an exclusive App Store, with its own set of apps. "We have already planned our 'play store', and tied up with a number of developers to build our own apps. I cannot comment on the timeline of the app store but it's in the pipeline. We currently have as many as five R&D centres in China. We are not yet in India, although we are looking to engage developers here as well. We're already in the discussion phase on that front," said Garg. So, what's the company's strategy to engage developers in particular? "We invite developers to come up and give in their ideas. Then either we accept them, which means we buy the idea, or we work out some kind of association with which developers get revenue out of the collaboration. In China, more than 100,000 developers are engaged in building apps for Alcatel. India is on our to-do list for building a community of app developers. It's currently at an 'amateur stage'; however, we expect things to happen eventually," he said.
Although there's no definite time period for the launch of Alcatel's One Touch Firefox OS-based smartphone in India (Garg is confident it will be here by the end of 2014, followed by a whole series, depending upon how it's received), one thing that is certain is that the device will be very affordable. Cutting costs while developing such low-end devices is certainly a challenge for companies, since customers do tend to choose 'value for money' when making their purchases. "We are not allowed to do any 'trimming' with respect to the hardware quality—since we are FCC-compliant, we cannot compromise on that," said Garg. So what do companies like Alcatel One Touch actually do to cut manufacturing costs? "We look at larger quantities that we can sell at a low cost, using competitive chipsets that are offered at a low price. On the hardware side, we may not give lamination in a low-cost phone, or we may not offer Corning glass or an IPS, and instead give a TFT, for instance," Garg added.
Q Since you have just launched your latest servers here, what is your take on the Indian server market?
From a server standpoint, we are very excited, because virtually every month and a half, we've been offering a new enhancement or releasing a new product, which is different from the previous one. So the question is - how are these different? Well, we have basically gone back and looked at things through the eyes of the customer to understand what they expect from IT. They want to get away from conventional IT and move to an improvised level of IT. So we see three broad areas: admin controlled IT; user controlled IT, which is more like the cloud and is workload specific; and then there is application-specific 'compute and serve' IT. These are the three distinct combinations. Within these three areas, we have had product launches, one after the other. The first one, of course, is an area where we dominate. So, we decided to extend the lead and that is how the innovations continue to happen.

…primarily because it has a huge issue about just doing the systems integration of 'X' computers that compute 'Y' storage while somebody else takes care of the networks. Now, it is the time for systems that come integrated with all three elements, and the best part is that it is very workload specific. We see a lot of converged systems being adopted in the area of manufacturing also. People who had deployed SAP earlier have some issues. One of them is that it is multi-tier, i.e., it has multiple application servers and multiple instances in the database. So when they want to run analytics, it gets extremely slow because a lot of tools are used to extract information. We came up with a solution, which customers across the manufacturing and IT/ITES segments are now discovering. That is why we see a very good adoption of converged systems across segments.

Q We hear a lot about software defined data centres (SDCs). Many players like VMware are investing a lot in
The virtual memory of any system is a combination of two things: physical memory (RAM), which can be accessed directly, and swap space. The latter holds the inactive pages that are not accessed by any running application. Swap space is used when the RAM has insufficient space for active processes but holds certain pages which are inactive at that point in time. These inactive pages are temporarily transferred to the swap space, which frees up space in the RAM for active processes. Hence, the swap space acts as temporary storage that is required if there is insufficient space in your RAM for active processes. But as soon as the application is closed, the pages that were temporarily stored in the swap space are transferred back to the RAM. Note that the access time for swap space is much higher than that for RAM. In short, swapping is required for two reasons:
When more memory than is available in physical memory (RAM) is required by the system, the kernel swaps out less-used pages and gives the system enough memory to run the application smoothly.
Certain pages are required by the application only at the time of initialisation and never again. Such pages are transferred to the swap space as soon as the application stops accessing them.
After understanding the basic concept of swap space, one should know what amount of space needs to be actually allotted to the swap space so that the performance of Linux actually improves. An earlier rule stated that the amount of swap space should be double the amount of physical memory (RAM) available, i.e., if we have 16 GB of RAM, then we ought to allot 32 GB to the swap space. But this is not very effective these days.
Actually, the amount of swap space depends on the kind of applications you run and the kind of user you are. If you are a hacker, you need to follow the old rule. If you frequently use hibernation, then you would need more swap space, because during hibernation the kernel transfers all the pages from memory to the swap area.
So how can the swap space improve the performance of Linux? Sometimes, RAM is used as a disk cache rather than to store program memory. It is, therefore, better to swap out a program that is inactive at that moment and, instead, keep the often-used files in cache. Responsiveness is improved by swapping pages out when the system is idle, rather than when the memory is full.
Even though swapping has many advantages, it does not always improve the performance of Linux on your system. Swapping can even make your system slow if the right quantity is not allotted. There are certain basic concepts behind this. Compared to memory, disks are very slow. Memory can be accessed in nanoseconds, while disks are accessed in milliseconds; accessing the disk can be many times slower than accessing physical memory. Hence, the more the swapping, the slower the system. We should know the amount of space that we need to allot for swapping.
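The sizing discussion above can be reduced to a couple of lines of arithmetic. This is our own illustration of the old rule and the hibernation constraint, not a recommendation from the article:

```python
def suggested_swap_gb(ram_gb, hibernate=False):
    """Old '2 x RAM' rule of thumb for swap sizing.

    Hibernation additionally requires swap to be at least as large as
    RAM, since the kernel writes the whole memory image to swap.
    """
    swap = 2 * ram_gb
    if hibernate:
        swap = max(swap, ram_gb)
    return swap

print(suggested_swap_gb(16))  # 32
```

For the 16 GB example in the text, the old rule indeed suggests 32 GB of swap; current swap usage on a live system can be inspected with `swapon --show` or `free -h`.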
The isofile variable is not required, but it simplifies the creation of multiple Ubuntu ISO menu entries. The loopback line must reflect the actual location of the ISO file. In the example, the ISO file is stored in the user's Downloads folder. X is the drive number, starting with 0; Y is the partition number, starting with 1. sda5 would be designated as (hd0,5) and sdb1 would be (hd1,1). Do not use (X,Y) in the menu entry but use something like (hd0,5). Thus, it all depends on your system's configuration.
Save the file and update the GRUB 2 menu:

$ sudo update-grub

dev@home$ echo !$
echo c
c

—Shivam Kotwalia, [email protected]

Retrieving disk information from the command line
Want to know the details of your hard disk even without physically touching it? Here are a few commands that will do the trick. I will use /dev/sda as my disk device, for which I want the details.

smartctl -i /dev/sda

hdparm can give much more information than smartctl.

—Munish Kumar, [email protected]

After installing, you can run the command using the following syntax:

$ wkhtmltopdf URL[of the HTML file] NAME[of the PDF file].pdf
The Mozilla mission statement expresses a desire to promote openness, innovation and opportunity on the Web. And Mozilla is trying to comply with this pretty seriously.
Firefox, Thunderbird, Firefox OS… the list of Mozilla's open source products is growing. Yet there are several areas in which tech giants like Google, Nokia and Apple are dominant, and the mobile ecosystem is one of them. Mozilla is now trying to break into this space. After Firefox OS, the foundation now offers a new service for mobile users.
There are several services that a user might not even be aware of while using a cell phone. The network-based location service is one of the most used services by cell phone owners to determine their location if the GPS service is not available. Several companies currently offer this service but there are major privacy concerns associated with it. It is no secret that advertising companies track a user's location history and offer ads or services based on it.
Till now, there was no transparent option among these services, but Mozilla has come to our rescue, to prevent tech giants sniffing out our locations. As stated on Mozilla's location service website, "The Mozilla Location Service is a research project to investigate crowd-sourced mapping of wireless networks (Wi-Fi access points, cell towers, etc) around the world. Mobile devices and desktop computers commonly use this information to figure out their location when GPS satellites are not accessible."

Figure 1: The MozStumbler app
Figure 2: MozStumbler options
Figure 3: MozStumbler settings

In the same statement, Mozilla acknowledges the presence of and the challenges presented by the other services, saying, "There are few high-quality sources for this kind of geolocation data currently open to the public. The Mozilla Location Service aims to address this issue by providing an open service to provide location data."
This service provides geolocation lookups based on publicly observable cell tower and Wi-Fi access point information. Mozilla has come out with an Android app to collect publicly observable cell tower and Wi-Fi data; it's called MozStumbler.
This app scans and uploads information of cell towers and Wi-Fi access points to Mozilla servers. The latest stable version of this app is ver 0.20.5, which is ready for download. MozStumbler provides the option to upload this scanned data over a Wi-Fi or cellular network. But you don't need to be online while scanning; you can upload this data afterwards.

Note: 1. This app is not available on the Google Play store but you can download it from https://github.com/MozStumbler/releases/
2. The Firefox OS version of this app is on its way too. You can stay abreast of what's happening with the Firefox OS app at http://github.com/FxStumbler/

You can optionally give your username in this app to track your contributions. Mozilla has also created a leader board to let users track and rank their contributions, apart from more detailed statistics that are available on this website. No user identifiable information is collected through this app.
Mozilla is not only collecting the data but also providing users with a publicly accessible API. It has code named the API 'Ichnaea', which means 'the tracker'. This API can be accessed to submit data, search data or search your location. As the data collection is still in progress, it is not recommended to use this service for commercial applications, but you can try it out on your own just for fun.

Note: Mozilla Ichnaea can be accessed at https://mozilla-ichnaea.readthedocs.org

The MozStumbler app provides an option for geofencing, which means you can pause the scanning within a one km radius of the desired location. This deals with user concerns over collecting behavioural commute data such as Home, Work and travelling habits.
In short, Mozilla is trying to provide a high quality location service to the general public at no cost! Recently, Mozilla India held a competition, 'Mozilla Geolocation Pilot Project India', which encouraged more and more users to scan their area. To contribute to this project, you can fork the repository on GitHub or just install the app; you will be welcomed aboard.

By: Vinit Wankhede
The author is a fan of free and open source software. He is currently contributing to the translation of the MozStumbler app for Mozilla location services.