Mobile Application Development
CT 53
SECTION 5
CERTIFIED
INFORMATION COMMUNICATION
TECHNOLOGISTS
(CICT)
STUDY TEXT
CONTENT
15.1 Mobile devices and applications
Definition of mobile computing
Types of mobile devices
Uses of mobile devices
Overview of mobile applications
Mobile browsers
Mobile computing is a generic term used to refer to a variety of devices that allow people to
access data and information from wherever they are.
Also known as: mobile device
Examples: A mobile device can use cell phone connections to make phone calls as well as to
connect to the Internet.
Some survey participants used the "other" field to answer with laptops and e-readers. I also
considered whether portable game consoles and digital audio guides (as some museums use)
should be considered mobile devices.
Wikipedia's definition is pretty broad. To them a mobile device is a "small, hand-held computing
device, typically having a display screen with touch input and/or a miniature keyboard and less
than 2 pounds (0.91 kg)". Wikipedia lists calculators, digital cameras, and MP3 players as mobile
devices. I normally love Wikipedia, but I think they are stretching the term to mean pretty much
any portable electronic device.
Perhaps these are all just types of handheld computing devices. I think my definition fits the core
functionality of what a device needs as a category term.
Handheld devices have become ruggedized for use in mobile field management. Uses include
digitizing notes, sending and receiving invoices, asset management, recording signatures,
managing parts, and scanning barcodes.
Recent developments in mobile collaboration systems employ handheld devices that combine
video, audio and on-screen drawing capabilities to enable multi-party conferencing in real-time,
independent of location.
Handheld computers are available in a variety of form factors, including smartphones on the low
end, handheld PDAs, Ultra-Mobile PCs and Tablet PCs (Palm OS, WebOS).
Users can watch television through Internet on mobile devices. Mobile television receivers have
existed since the 1960s, and in the 21st century mobile phone providers began making television
available on cellular phones.
Nowadays, mobile devices can create, sync, and share everything we want regardless of distance
or the specifications of the devices. In the medical field, mobile devices are quickly becoming
essential tools for accessing clinical information such as drugs, treatment, and even medical
calculation.
Due to the popularity of Candy Crush and other mobile device games, online casinos are also
offering casino games on mobile devices. The casino games are available on iOS, Android,
Windows Phone and Windows. Available games are roulette, blackjack and several different
types of slots. Most casinos have a play for free option.
In the military field, mobile devices have created new opportunities for the Army to deliver
training and educational materials to soldiers around the world.
Mobile application development is a term used to denote the act or process by which
application software is developed for handheld devices, such as personal digital assistants,
enterprise digital assistants or mobile phones. These applications can be pre-installed on phones
during manufacturing, or delivered as web applications using server-side or client-side
processing (e.g. JavaScript) to provide an "application-like" experience within a Web browser.
Application software developers also have to consider a lengthy array of screen sizes, hardware
specifications and configurations because of intense competition in mobile software and changes
within each of the platforms. Mobile app development has been steadily growing, both in terms
of revenues and jobs created. A 2013 analyst report estimates there are 529,000 direct App
Economy jobs within the EU 28 members, 60% of which are mobile app developers.
As part of the development process, Mobile User Interface (UI) Design is also essential in the
creation of mobile apps. Mobile UI considers constraints & contexts, screen, input and mobility
as outlines for design. The user is often the focus of interaction with their device, and the
interface entails components of both hardware and software. User input allows for the users to
manipulate a system, and device's output allows the system to indicate the effects of the users'
manipulation. Mobile UI design constraints include limited attention and form factors, such as a
mobile device's screen size for a user's hand(s). Mobile UI contexts signal cues from user
activity, such as location and scheduling that can be shown from user interactions within a
mobile application.
Overall, mobile UI design's goal is primarily for an understandable, user-friendly interface. The
UI of mobile apps should: consider users' limited attention, minimize keystrokes, and be task-
oriented with a minimum set of functions. This functionality is supported by Mobile enterprise
application platforms or Integrated development environments (IDEs).
Mobile UIs, or front-ends, rely on mobile back-ends to support access to enterprise systems. The
mobile back-end facilitates data routing, security, authentication, authorization, working off-line,
and service orchestration. This functionality is supported by a mix of middleware components
including mobile app servers, Mobile Backend as a service (MBaaS), and SOA infrastructure.
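To make the front-end/back-end split concrete, the following Java sketch shows a mobile client calling a back-end service over HTTPS with an authentication token. The endpoint URL, resource name and token handling are assumptions for illustration only, not a specific MBaaS vendor's API.

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    public class BackendClient {
        // Hypothetical back-end endpoint used for illustration only.
        private static final String ENDPOINT = "https://mbaas.example.com/api/tasks";

        public static String fetchTasks(String authToken) throws Exception {
            HttpURLConnection conn = (HttpURLConnection) new URL(ENDPOINT).openConnection();
            conn.setRequestMethod("GET");
            // Authentication/authorization is enforced by the back-end; the client only presents a token.
            conn.setRequestProperty("Authorization", "Bearer " + authToken);
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                StringBuilder body = new StringBuilder();
                String line;
                while ((line = in.readLine()) != null) {
                    body.append(line);
                }
                return body.toString(); // e.g. JSON routed and assembled by the mobile back-end
            } finally {
                conn.disconnect();
            }
        }
    }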
1 Platform
  1.1 Front-end development tools
  1.2 Back-end servers
  1.3 Security add-on layers
  1.4 System software
  1.5 Mobile application testing
2 Application stores
3 Patents
Platform
The platform that organizations need in order to develop, deploy and manage mobile apps is
made up of many components and tools, which allow a developer to write, test and deploy
applications into the target platform environment.
Front-end development tools are focused on the user interface and user experience (UI/UX) and
provide the following capabilities:
UI design tools
SDKs to access device features
Cross-platform accommodations/support
Back-end servers
Back-end tools pick up where the front-end tools leave off, and provide a set of reusable services
that are centrally managed and controlled and provide the following capabilities:
Security add-on layers
With BYOD becoming the norm within more enterprises, IT departments often need stop-gap,
tactical solutions that layer on top of existing apps, phones, and platform components.
System software
There are many system-level components that are required to have a functioning platform for
developing mobile apps.
Criteria for selecting a development platform usually contain the target mobile platforms,
existing infrastructure and development skills. When targeting more than one platform with
cross-platform development it is also important to consider the impact of the tool on the user
experience. Performance is another important criterion, as research on mobile applications
indicates a strong correlation between application performance and user satisfaction. In addition
to performance and other criteria, the availability of the technology and the project's requirements
may drive the choice between native and cross-platform environments. To aid the choice
between native and cross-platform environments, some guidelines and benchmarks have been
published. Typically, cross-platform environments are reusable across multiple platforms,
leveraging a native container while using HTML, CSS, and JavaScript for the user interface. In
contrast, native environments are targeted at one platform for each of those environments. For
example, Apple iOS applications are developed using Xcode with Objective-C and/or Swift,
Android development is done in the Eclipse IDE with the ADT (Android Developer Tools)
plugins, and Windows and BlackBerry also have their own development environment.
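As an illustration of the cross-platform "native container" approach described above, the following Android (Java) sketch hosts an HTML/CSS/JavaScript user interface inside a WebView; the bundled asset file name is an assumption.

    import android.app.Activity;
    import android.os.Bundle;
    import android.webkit.WebView;

    public class HybridActivity extends Activity {
        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            WebView webView = new WebView(this);          // the native container
            webView.getSettings().setJavaScriptEnabled(true);
            // The UI itself is plain HTML/CSS/JavaScript bundled with the app (file name assumed).
            webView.loadUrl("file:///android_asset/index.html");
            setContentView(webView);
        }
    }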
Mobile applications are first tested within the development environment using emulators and
later subjected to field testing. Emulators provide an inexpensive way to test applications on
mobile phones to which developers may not have physical access. The following are examples of
tools used for testing applications across the most popular mobile operating systems.
Tools include
eggPlant: A GUI-based automated test tool for mobile applications across all operating
systems and devices.
Ranorex: Test automation tools for mobile, web and desktop apps.
Testdroid: Real mobile devices and test automation tools for testing mobile and web
apps.
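The tools listed above are commercial products with their own interfaces. As a generic illustration of automated UI testing on Android, the sketch below uses the open-source Espresso framework; the widget text is an assumption, and a real test would also declare a test runner and an activity rule for the app under test.

    import static androidx.test.espresso.Espresso.onView;
    import static androidx.test.espresso.action.ViewActions.click;
    import static androidx.test.espresso.assertion.ViewAssertions.matches;
    import static androidx.test.espresso.matcher.ViewMatchers.isDisplayed;
    import static androidx.test.espresso.matcher.ViewMatchers.withText;

    import org.junit.Test;

    public class LoginScreenTest {
        @Test
        public void tappingLogInShowsWelcomeMessage() {
            // Widget text is hypothetical; a real test targets the app's actual views.
            onView(withText("Log in")).perform(click());
            onView(withText("Welcome")).check(matches(isDisplayed()));
        }
    }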
Application stores
Several initiatives exist both from mobile vendor and mobile operators around the world.
Application developers can propose and publish their applications on the stores, being rewarded
by a revenue sharing of the selling price. An example is Apple's App Store, where only approved
applications may be distributed and run on iOS devices (otherwise known as a walled garden).
There are approximately 700,000 iOS applications. Google's Android Market (now known as
the "Play Store") has a large number of apps running on devices with Android OS. HP / Palm,
Inc. have also created the Palm App Catalog, where webOS device users can download
applications directly from the device or send a link to the application via a web distribution
method. Mobile operators such as Telefonica Group and Telecom Italia have launched
cross-platform application stores for their subscribers.
Patents
There are many patent applications pending for new mobile phone apps. Most of these are in the
technological fields of Business methods, Database management, Data transfer and Operator
interface.
On May 31, 2011, Lodsys asserted two of its four patents: U.S. Patent No. 7,620,565 ("the '565
patent") on a "customer-based design module" and U.S. Patent No. 7,222,078 ("the '078 patent")
on "Methods and Systems for Gathering Information from Units of a Commodity Across a
Network." against the following application developers:
Combay
Iconfactory
Illusion Labs
Shovelmate
Quickoffice
Richard Shinderman of Brooklyn, New York
Wulven Game Studios of Hanoi, Vietnam
Mobile browsers
A mobile browser is a web browser designed for use on a mobile device such as a mobile phone
or PDA. Mobile browsers are optimized so as to display Web content most effectively for small
screens on portable devices. Mobile browser software must be small and efficient to
accommodate the low memory capacity and low-bandwidth of wireless handheld devices.
Typically they were stripped-down web browsers, but some more modern mobile browsers can
handle more recent technologies like CSS 2.1, JavaScript, and Ajax.
Websites designed for access from these browsers are referred to as wireless portals or
collectively as the Mobile Web. They may automatically create "mobile" versions of each page.
Over the next few years, improving the convenience of mobile services will depend on
improving the use of context in delivering mobile experiences. Your business will need to
address challenges such as the following.
1. Context.
Your customer's mobile context consists of:
Preferences: The history and personal decisions the customer has shared with you or with social
networks.
Situation: The current location, of course, but other relevant factors could include the altitude,
environmental conditions and even the speed the customer is experiencing.
Attitude: The feelings or emotions implied by the customer's actions and logistics.
Delivering a good contextual experience will require aggregating information from many
sources. It could be from the devices customers are carrying, the local context of devices and
sensors around them (e.g. a geofence that knows which airport gate they're at), an extended
network of things they care about (e.g. the maintenance status of the incoming airplane they are
about to take for their next flight, and the probability it will leave on time) and the historical
context of their preferences. Gathering this data is a major challenge because it will be stored on
multiple systems of record to which your app will need to connect.
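One way to think about this aggregation is as a single context object assembled from several sources. The Java sketch below is purely illustrative; the class and field names are assumptions, not an actual product API.

    import java.util.List;
    import java.util.Map;

    /** Illustrative model of a customer's mobile context (all names are assumptions). */
    public class MobileContext {
        // Preferences: history and decisions the customer has shared.
        Map<String, String> preferences;

        // Situation: location plus other measurable conditions.
        double latitude;
        double longitude;
        double altitudeMeters;
        double speedMetersPerSecond;

        // Attitude: feelings implied by recent actions (e.g. "rushed", "browsing").
        String inferredAttitude;

        // Extended context gathered from nearby devices, sensors and systems of record.
        List<String> nearbySensorReadings;
    }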
2. Device Proliferation.
Another challenge facing mobile developers is device proliferation. It might seem like today's
mobile app development process is pretty well defined: Build your app, make sure it looks pretty
on a 4-inch smartphone and a 10-inch tablet, and then submit it to an app store. It's not quite that
easy now, and it'll be much tougher in the near future. A wide range of new device sizes and
changes to the nature of the apps themselves will increase the need for flexibility, especially on
the client. We're already seeing 5-inch phablets, 7-inch tablets, and Windows 8 devices of 20
inches or more. Collectively, these new devices will significantly expand the potential for
collecting contextual data about your customers. Here are some ideas of what changes you'll
face:
4. Heads-Up Interfaces.
Expect to see heads-up displays such as Google Glass go mainstream in the next five years as
Moore's law pushes processors to the point where such gadgets can be made powerful,
lightweight and perhaps even stylish. Augmented-reality apps that don't work well on a phone or
tablet could be transformative when ported to a device like Google Glass. A compelling example
would be an app that provides real-time information about the people you are talking to but
whose names you've forgotten.
But heads-up displays will create a whole new slate of problems for developers. We'll have to
adapt to peripheral cues such as reminders and alerts that don't block the user's vision. We'll also
need to integrate tactile and aural feedback such as voice commands and vibrating sensors that
alert users they need to take action.
9. Cloud-Powered Development.
The construction of modern applications will move onto the public cloud and public devices,
because the elasticity of services such as Amazon, Microsoft Azure and the Google Cloud
Platform mesh nicely with the unpredictable demand that mobile apps exact on server-side
infrastructure. With the move to a public cloud, the traditional organizational model that
separates development from IT operations will break down. Why? Mobile development requires
a rapid feedback cycle, and it's hard to execute if developers have to wait for IT operations teams
to respond to their change requests. Developer self-provisioning will re-balance the relationship
in favor of developers, away from traditional IT organizations -- because control over hardware
and infrastructure resources will no longer be absolute.
But with greater developer power comes greater responsibility for security and performance.
Expect more developers to be on call for application support in the new model, using triage to
handle defects and investigate degradation to production services. Those tasks have traditionally
been the domain of systems administrators. Expect IT operations personnel to become integrated
into development teams and to start their work at the inception of an idea.
While debate rages on among various mobile development camps, businesses still have to create
and maintain mobile apps for their employees, business partners, and customers. The pure
HTML5/JavaScript/CSS3 mobile Web faction, the native-code purists, the hybrid mobile app
fans -- they all offer compelling arguments and approaches, but the one conclusion everyone
seems to reach, eventually, is that there is no single panacea. Each approach and tool set has
advantages and drawbacks.
The difficulty and cost of mobile app development has not escaped the notice of innovative
companies. We present here 10 low-code or no-code builders for mobile applications. Some
target more than one mobile platform, some target Web applications as well, but all are aimed at
getting your organization’s mobile project up and running quickly.
1 Alpha Anywhere
A low-code, rapid, wizard-driven, end-to-end builder with a Windows-based IDE, Alpha
Anywhere supports many databases and targets Web, mobile (iOS, Android, and Windows
Phone), and desktop applications. HTML apps can be built using a component-based designer
and responsively adapt to screen sizes from 4 inches to 4 feet. Alpha Anywhere integrates with
PhoneGap and Adobe PhoneGap Build, allowing the easy creation of hybrid mobile apps without
requiring the developer to install multiple native development environments or purchase a Mac.
The company is currently testing a unique solution for occasionally connected mobile apps that
rely on remote databases.
2 App Press
App Press is a Web-based no-code app creator that targets iPhone, iPad, and Android
applications. Geared for designers, App Press uses a Photoshop-like user interface for
assembling screens from visual assets using layers. On the back end, App Press is an Amazon
cloud-based service and platform. The company claims that designers can produce their first app
in one day, that with experience designers can create five apps a day, and that experienced
designers can train new designers on the platform.
3 AppArchitect
AppArchitect is a Web-based, no-code, drag-and-drop builder and platform for native iPhone
and iPad apps, which can be previewed in the AppArchitect Preview App, downloadable from
the iTunes App Store; finished binaries can be downloaded to submit to the App Store. It
assembles plug-in building blocks that are written in Objective-C, and an AppArchitect SDK will
be available to extend the product's capabilities. The company plans to expand the product to
additional platforms.
4 Form.com
Form.com is a Web-based enterprise platform for Web and mobile form solutions with a drag-
and-drop forms builder and flexible back-end technology. The builder can create new forms or
replicate existing paper forms, set up process-specific workflow and API integration, embed
logical transitions, allow the capture of images within the forms, capture digital signatures, and
enable form field autofill. Finished mobile forms can collect information when disconnected and
transfer it when connection has been restored.
5 iBuildApp
iBuildApp is a Web builder that offers customizable templates for iPhone, iPad, and Android
apps and promises that you can create an app in five minutes. Your app can be free if you accept
iBuildApp branding and very tight limits on the number of users and site visits; unlimited-user,
white-labeled tablet apps cost $299 a month; and there are several plans in between the extremes.
For common app types, template-based systems like iBuildApp can sometimes produce usable
results, as long as the selection of widgets includes the functionality you need.
6 QuickBase
QuickBase is an online builder and platform for Web and mobile Web database applications. It
offers more than 300 customizable application templates, including the Complete Project
Manager shown in the slide. Users can build applications "from scratch" starting with a data
design and all QuickBase websites can also be viewed as mobile websites. While Mobile
QuickBase is not currently available in app form, the mobile website is eminently usable.
7 Salesforce1
Salesforce1 gives you the ability to accelerate the development and deployment of HTML5, iOS,
and Android mobile apps, as well as Web apps. In the simplest model, you use a mobile website
or downloadable generic Salesforce viewer app to work with your Force.com Web application.
One step up from that is to create a jQuery Mobile (shown in the slide), Angular.js, Backbone.js,
or Knockout HTML5 mobile app using a Salesforce Mobile Pack. At the most complicated level,
you can create native or hybrid apps for iOS and Android using the Salesforce Mobile SDK for
your mobile platform combined with the Native SDK tools. These apps all communicate with the
back end through a Connected App in Salesforce.
8 ViziApps
ViziApps combines an online visual designer and customizable sample apps with code
generation for mobile Web, as well as iOS and Android native apps.
10 Appcelerator
Appcelerator combines an IDE, SDK, multiple frameworks, and back-end cloud services into an
enterprise-level system for mobile development. The Titanium SDK lets you develop native,
hybrid, and mobile Web applications from a single codebase.
11 Titanium Studio
Titanium Studio is an extensible, Eclipse-based IDE for building Titanium and Web apps, and Appcelerator
Cloud Services provide an array of automatically scaled network features and data objects for
your app. The Alloy framework is an Appcelerator framework designed to rapidly develop
Titanium applications, based on the MVC architecture and containing built-in support for
Backbone.js and Underscore.js. While Appcelerator is not a no-code solution, it provides
JavaScript-based tooling for iOS, Android, Tizen, BlackBerry, and mobile Web applications in
one place.
One of the very first steps in the app development process is choosing which programming
language to use. It seems like a simple decision, but different operating systems favor different
programming languages. If you want to immerse yourself in the app development world, below
are the top 5 programming languages that you should learn (or review if you are already a senior
developer).
JavaScript:
JavaScript is probably the most common and most recognizable of the programming languages
needed for app development. It is used extensively in web browsing, and it has made the
transition to the mobile world. JavaScript is beneficial because it can be used across a variety of
platforms.
Java:
Not to be confused with JavaScript, Java is an object-oriented programming language that is
platform independent (meaning it can be used across different operating systems), but it is used
extensively with Google’s Android mobile operating system. Object-oriented programming
languages are organized around objects and data rather than logic and actions. Java works by
categorizing objects and data together based on similar function as well as similar properties.
Because it shares a similar structure with basic C-based languages, Java is a great transition
language for intermediate developers because the syntax is much simpler than languages like
C++ and there are extensive libraries for beginners.
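To make the object-oriented idea concrete, here is a minimal, self-contained Java example that groups related data and the behaviour that operates on it into one class; the example itself is illustrative.

    /** A simple object grouping data (properties) with the actions that operate on it. */
    public class Contact {
        private final String name;    // data
        private String phoneNumber;   // data

        public Contact(String name, String phoneNumber) {
            this.name = name;
            this.phoneNumber = phoneNumber;
        }

        /** Behaviour kept together with the data it manipulates. */
        public void updatePhoneNumber(String newNumber) {
            this.phoneNumber = newNumber;
        }

        public String describe() {
            return name + ": " + phoneNumber;
        }

        public static void main(String[] args) {
            Contact c = new Contact("Asha", "+254700000000");
            c.updatePhoneNumber("+254711111111");
            System.out.println(c.describe());
        }
    }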
C#:
C# (pronounced C-sharp) is the default (and recommended) programming language for
Windows-based apps. With Windows Phone poised to make a comeback with Windows 10
Mobile, and the Windows App Store still desperately in need of well-made apps, learning C#
could give you a leg up in the Windows marketplace. C# is an object-oriented programming
language like Java, and it is based on the classical C-type languages. If you have a background in
basic programming languages, C# shouldn’t be hard to pick up.
C# plays the same role in the Microsoft universe that Objective-C plays in the Apple cosmos: It's an
expansion of C that directly addresses many of the unique features of the environment. The
Windows Mobile platform hasn't been the market-changer that many had predicted (and hoped),
but there's no denying the gravitational pull of Windows across multiple platforms. If your fleet
of mobile devices includes Windows then your suite of development languages should include
C#.
Swift:
Created by Apple, Swift was introduced at Apple's 2014 WWDC showcase. Swift is a multi-
paradigm, compiled programming language designed to work with Apple’s iOS and OS X
systems. Swift is meant to be easier to learn and less bug-prone than Objective-C, but it works
with Apple’s Cocoa/Cocoa Touch frameworks, as well as existing Objective-C code, without
issue. Swift was developed with the idea of creating fast, high-performing apps simply and
easily.
Apple's latest APIs are Cocoa and Cocoa Touch. The language to write code for them is Swift.
According to Apple, Swift is written to work along with Objective-C, though it's obvious that the
company intends for many developers to turn to Swift for complete programming. Among other
things, Swift has been designed to eliminate the possibility for many of the security
vulnerabilities associated with Objective-C.
PHP:
PHP is a server-side programming language which shares similar syntax with other C-based
programming languages, making it easy to pick up for C-based developers. PHP supports a large
range of database types, making it ideal for any application that needs access to a database. PHP
is also extremely flexible, allowing it to be used with object-oriented programming, although it
can also function well without it. PHP is a great choice for creating the interfaces for mobile
applications, and PHP is very useful for simplifying the codes and functions of other languages.
Compared to other languages, PHP applications do tend to run a bit slower than others. But, as
PHP is open-source, improvements are being made constantly.
HTML5
If you want to build a Web-fronted app for mobile devices, the one near-certainty is HTML5.
The eventual standard will make various data types simple to insert, rationalize input parameters,
level the browser playing field, account for different screen sizes, and probably freshen your
breath and give you lush, manageable hair.
The problem is that HTML5 is still a proposed standard that is currently supported in a lot of
different ways by a lot of different browsers. It’s certainly possible to write HTML5 Web pages
now, and many people are doing just that. They just have to know that there might be slight
tweaks in the language in months to come and more substantial changes in the way browsers
handle HTML5.
From a cost and efficiency standpoint HTML5 has the advantage of building on the current
version of HTML so the learning curve is much shallower than that for a completely new
language. If you can cope with a bit of uncertainty and want to walk the browser-based path,
HTML5 is an obvious choice for a primary language.
Objective-C
While most of the world was developing software using C++, Apple went with Objective C as its
primary programming language. Like C++, Objective C is a C-language superset. It does many
of the same things for C that C++ does, though it has a number of functions that specifically deal
with graphics, I/O, and display functions. Objective-C is part of the Apple development
framework and is fully integrated into all iOS and MacOS frameworks. It is in the process,
though, of being replaced in the Apple ecosystem -- by Swift.
Mobile application management (MAM) describes software and services responsible for
provisioning and controlling access to internally developed and commercially available mobile
apps used in business settings on both company-provided and “bring your own” smartphones and
tablet computers (BYOD).
Mobile application management differs from mobile device management (MDM). As the
names suggest, MAM focuses on application management: it provides a lower degree of control
over the device, but a higher level of control over applications. MDM solutions manage the
device itself, down to device firmware and configuration settings, and can include management
of all applications and application data.
App wrapping
App wrapping was initially a favored method of applying policy to applications as part of mobile
application management solutions.
App wrapping sets up a dynamic library and adds it to an existing binary in order to control certain
aspects of an application. For instance, at startup, you can change an app so that it requires
authentication using a local passkey. Or you could intercept a communication so that it would be
forced to use your company's virtual private network (VPN) or prevent that communication from
reaching a particular application that holds sensitive data.
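App wrapping itself modifies an existing binary, which cannot be reproduced here; the Java sketch below only illustrates, at a conceptual level, the kind of startup policy a wrapper typically injects: a local passkey check before the original entry point runs. All names and the passkey check are hypothetical.

    /** Conceptual illustration of a wrapper-enforced startup policy (not a real wrapping SDK). */
    public class WrappedEntryPoint {

        public static void main(String[] args) {
            String suppliedPasskey = args.length > 0 ? args[0] : "";
            if (!policyAllowsLaunch(suppliedPasskey)) {
                System.out.println("Policy check failed: authentication required.");
                return; // the wrapped app never starts
            }
            originalAppMain(); // control passes to the original, unmodified application code
        }

        private static boolean policyAllowsLaunch(String passkey) {
            // A real MAM wrapper would validate against securely stored policy, possibly over a VPN.
            return "1234".equals(passkey); // hypothetical local passkey
        }

        private static void originalAppMain() {
            System.out.println("Original application running.");
        }
    }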
Increasingly, the likes of Apple and Samsung are overcoming the issue of app wrapping. Aside
from the fact that app wrapping is a legal grey zone, and may not meet its actual aims, it is not
possible to adapt the entire operating system to deal with numerous wrapped apps. In general,
wrapped apps available in the app stores have also not proven to be successful due to their
inability to perform without MDM.
System features
An end-to-end MAM solution provides the ability to control the provisioning, updating and
removal of mobile applications via an enterprise app store, and to monitor application
performance and usage.
With the advent of mobile devices, a new industry came into existence. Mobile devices are now
so popular that many users no longer buy desktop or laptop computers. Advertisers, seeing the
value of this new medium, are taking full advantage of it, offering products, games, apps and
more. In this article you’ll learn about 10 design practices for building mobile apps. These
practices will help you get the results you seek and also satisfy your customers.
1. Before You Begin, Consider Your Audience: Before you take any time to build an app,
consider your audience. What do you hope to achieve? How do you envision your audience
using your app? These are important questions to consider up-front.
2. Check the App Stores: Many times people come up with a great idea for an app and start
to brainstorm how to build it. There’s only one problem. Despite how unique you might think
your idea is, there’s an excellent chance that someone might have already built it, or something
similar to it. If that’s the case, you would be wasting a ton of time (and money). If an app already
exists, you can use it as a template to create your own product, or you might consider partnering
with the creator(s) of that app and using it as part of your strategy.
3. Involve Potential Users in the Design Process: One danger of any design process is
working only with your team and not involving the end users at all. Then, when the design is
done and is released to the public, some or many aspects of your design might not translate well
to the real world. To avoid this problem, involve potential end users in the design process and
use their feedback to make changes as necessary.
4. Create a Storyboard: The storyboard is one of the most important aspects of the design
process. This is where you lay out the complete functionality of your app on paper. If there are
problems, you can resolve them at this stage. The storyboard allows you to plan out all aspects of
the design, including future components, such as plug-ins.
5. Make the App Easy to Understand: The app should be easy to understand with
descriptions to accompany graphics (if necessary) and additional instructions.
6. Avoid Overuse of Graphics and Animations: Both graphics and animations can add a
nice “Wow” factor to your app but there’s a major downside – slow loading times which
translate into a poor user experience. Whenever possible, either avoid the use of bitmaps or
animations or limit their use to only essential features. And if you do use graphics, use vector
graphics whenever possible. The file sizes for these are much smaller, so they'll load faster.
7. Consider the Sizes of Buttons and Icons: When working with a mobile interface, you
have a limited amount of space and some designers add too many buttons/icons. Another
consideration is the size of the human fingertip. If the buttons/icons are too small, users could
make errors by selecting the wrong one. Likewise, if there's not enough space between the
buttons/icons, that can cause trouble as well. If in doubt, test your layouts and get feedback.
8. Create a Core Application: This means taking the most important features and building
those into a core application experience. Additional functionality can be created by building
plug-ins that can be purchased as necessary by the user. This avoids overloading the core part of
the app with too many features.
9. Create a Consistent Workflow: This translates into making sure the user experience
remains the same on all platforms. If you change that for each device, you’ll confuse and annoy
your users.
10. Test the Design: With any design, this is the most important aspect. If you’ve been
following the strategies listed in this article you’ll be testing your app every step of the way.
Still, it’s important to test the finished product and not only once but several times with different
users. If there are problems, fix them, then test the result again.
Architecture Frame
The following table lists the key areas to consider as you develop your architecture. Refer to the
key issues in the table to understand where mistakes are most often made. The sections
following this table provide guidelines for each of these areas.
Mobile IP is an Internet Engineering Task Force (IETF) standard communications protocol that
is designed to allow mobile device users to move from one network to another while maintaining
their permanent IP address. Defined in Request for Comments (RFC) 2002, Mobile IP is an
enhancement of the Internet Protocol (IP) that adds mechanisms for forwarding Internet traffic to
mobile devices (known as mobile nodes) when they are connecting through a network other than
their home network.
All the variations of Mobile IP assign each mobile node a permanent home address on its home
network and a care-of address that identifies the current location of the device within a network
and its subnets. Each time a user moves the device to a different network, it acquires a new care-
of address. A mobility agent on the home network associates each permanent address with its
care-of address. The mobile node sends the home agent a binding update each time it changes its
care-of address using Internet Control Message Protocol (ICMP).
In Mobile IPv4, traffic for the mobile node is sent to the home network but is intercepted by the
home agent and forwarded via tunneling mechanisms to the appropriate care-of address. Foreign
agents on the visited network help to forward datagrams.
Enhancements to the Mobile IP standard, such as Mobile IPv6 and Hierarchical Mobile IPv6
(HMIPv6), were developed to advance mobile communications by making the processes
involved less cumbersome.
Further explanation
Definition of terms
Home network
The home network of a mobile device is the network within which the device receives its
identifying IP address (home address).
Home address
The home address of a mobile device is the IP address assigned to the device within its home
network.
Foreign network
A foreign network is the network in which a mobile node is operating when away from its home
network.
Care-of address
The care-of address of a mobile device is the network-native IP address of the device when
operating in a foreign network.
Home agent
A home agent is a router on a mobile node’s home network which tunnels datagrams for delivery
to the mobile node when it is away from home. It maintains current location (IP address)
information for the mobile node. It is used with one or more foreign agents.
Foreign agent
A foreign agent is a router that stores information about mobile nodes visiting its network.
Foreign agents also advertise care-of addresses, which are used by Mobile IP.
Binding
A binding is the association of the home address with a care-of address.
Mobile IP allows for location-independent routing of IP datagrams on the Internet. Each
mobile node is identified by its home address, regardless of its current location in the Internet.
While away from its home network, a mobile node is associated with a care-of address which
identifies its current location, and its home address is associated with the local endpoint of a
tunnel to its home agent. Mobile IP specifies how a mobile node registers with its home agent
and how the home agent routes datagrams to the mobile node through the tunnel.
Mobile IP is most often found in wired and wireless environments where users need to carry
their mobile devices across multiple LAN subnets. Examples of use are in roaming between
overlapping wireless systems, e.g., IP over DVB, WLAN, WiMAX and BWA.
Mobile IP is not required within cellular systems such as 3G, to provide transparency when
Internet users migrate between cellular towers, since these systems provide their own data link
layer handover and roaming mechanisms. However, it is often used in 3G systems to allow
seamless IP mobility between different packet data serving node (PDSN) domains.
Operational principles
The goal of IP Mobility is to maintain the TCP connection between a mobile host and a static
host while the mobile host moves around, reducing the effects of location changes without
having to change the underlying TCP/IP. To solve the problem, the RFC allows for a
kind of proxy agent to act as a middle-man between a mobile host and a correspondent host.
A mobile node has two addresses – a permanent home address and a care-of address (CoA),
which is associated with the network the mobile node is visiting. Two kinds of entities comprise
a Mobile IP implementation:
A home agent (HA) stores information about mobile nodes whose permanent home address is in
the home agent's network. The HA acts as a router on a mobile host's (MH) home network which
tunnels datagrams for delivery to the MH when it is away from home, and maintains a location
directory (LD) for the MH.
A foreign agent (FA) stores information about mobile nodes visiting its network. Foreign agents
also advertise care-of addresses, which are used by Mobile IP. If there is no foreign agent in the
host network, the mobile device has to take care of getting an address and advertising that
address by its own means. The FA acts as a router on a MH’s visited network which provides
routing services to the MH while registered. The FA detunnels and delivers datagrams to the MH
that were tunneled by the MH's HA.
The so-called care-of address (CoA) is a termination point of a tunnel toward a MH, for datagrams
forwarded to the MH while it is away from home. There are two kinds:
Foreign agent care-of address: the address of a foreign agent with which the MH registers.
Co-located care-of address: an externally obtained local address that the MH gets.
Mobile Nodes (MNs) are responsible for discovering whether they are connected to their home
network or have moved to a foreign network. HAs and FAs broadcast their presence on each network to
which they are attached. They are not solely responsible for discovery, they only play a part.
A node wanting to communicate with the mobile node uses the permanent home address of the
mobile node as the destination address to send packets to. Because the home address logically
belongs to the network associated with the home agent, normal IP routing mechanisms forward
these packets to the home agent. Instead of forwarding these packets to a destination that is
physically in the same network as the home agent, the home agent redirects these packets
towards the remote address through an IP tunnel by encapsulating the datagram with a new IP
header using the care-of address of the mobile node.
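A minimal Java sketch of this behaviour is shown below: a binding table associates each home address with its current care-of address, and forwarding wraps the datagram in an outer header addressed to the care-of address. This is a teaching illustration of the idea, not an implementation of the RFC.

    import java.util.HashMap;
    import java.util.Map;

    /** Conceptual model of a Mobile IP home agent (illustration only). */
    public class HomeAgent {
        // Binding: home address -> current care-of address.
        private final Map<String, String> bindings = new HashMap<>();

        /** Registration: the mobile node reports a new care-of address. */
        public void registerBinding(String homeAddress, String careOfAddress) {
            bindings.put(homeAddress, careOfAddress);
        }

        /** Forwarding: wrap the datagram in an outer header addressed to the care-of address. */
        public String forward(String homeAddress, String payload) {
            String careOf = bindings.get(homeAddress);
            if (careOf == null) {
                return "deliver locally: " + payload;   // node is at home, normal routing applies
            }
            return "outer-dst=" + careOf + " [inner-dst=" + homeAddress + " " + payload + "]";
        }

        public static void main(String[] args) {
            HomeAgent ha = new HomeAgent();
            ha.registerBinding("198.51.100.7", "203.0.113.42"); // binding update from the mobile node
            System.out.println(ha.forward("198.51.100.7", "hello"));
        }
    }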
When acting as transmitter, a mobile node sends packets directly to the other communicating
node, without sending the packets through the home agent, using its permanent home address as
the source address for the IP packets. This is known as triangular routing or "route optimization"
(RO) mode. If needed, the foreign agent could employ reverse tunneling by tunneling the mobile
node's packets to the home agent, which in turn forwards them to the communicating node. This
is needed in networks whose gateway routers check that the source IP address of the mobile host
belongs to their subnet or discard the packet otherwise. In Mobile IPv6 (MIPv6), "reverse
tunneling" is the default behaviour, with RO being an optional behaviour.
In scenarios when both sides of communication are mobile nodes, communicating via Mobile IP
solutions adds additional overhead that decreases efficient packet payloads. As a solution, in
2012 researchers developed a method to decrease the size of the overhead in such situations, so that
more payload can be transferred in each IP packet in the discussed scenarios. In the proposed method,
the tunnel manager is changed to act as a DNS, so that sending MN addresses are no longer
required.
Performance
A performance evaluation of Mobile IPv6, carried out by NEC Europe, can be found at the ACM
Digital Library, under the entry "A simulation study on the performance of mobile IPv6 in a
WLAN-based cellular network", from the Elsevier Computer Networks Journal (CNJ), special
issue on The New Internet Architecture, September 2002.
Additionally, a performance comparison between Mobile IPv6 and some of its proposed
enhancements (Hierarchical Mobile IPv6, Fast Handovers for Mobile IPv6 and their
Combination) is available under the entry "A performance comparison of Mobile IPv6,
Hierarchical Mobile IPv6, fast handovers for Mobile IPv6 and their combination", from the
ACM SIGMOBILE Mobile Computing and Communications Review (MC2R), Volume 7, Issue
4, October, 2003.
Researchers are creating support for mobile networking without requiring any pre-deployed
infrastructure, as is currently required by Mobile IP. One such example is the Interactive Protocol
for Mobile Networking (IPMN), which promises to support mobility on a regular IP network just
from the network edges, by intelligent signaling between IP at the end-points and an application
layer module, with improved quality of service.
Researchers are also working to create support for mobile networking between entire subnets
with support from Mobile IPv6. One such example is the Network Mobility (NEMO) Basic
Support Protocol from the IETF Network Mobility Working Group, which supports mobility for
entire mobile networks that move and attach to different points in the Internet. The protocol is an
extension of Mobile IPv6 and allows session continuity for every node in the mobile network as
the network moves.
Distribution
The two biggest app stores are Google Play for Android and App Store for iOS.
Google Play
Google Play (formerly known as the Android Market) is an international online software store
developed by Google for Android devices. It opened in October 2008. In August 2014, there
were approximately 1.3+ million apps available for Android and the estimated number of
applications downloaded from Google Play was 40 billion.
App Store
Apple's App Store for iOS was not the first app distribution service, but it ignited the mobile
revolution and was opened on July 10, 2008, and as of January 2011, reported over 10 billion
downloads. The original AppStore was first demonstrated to Steve Jobs in 1993 by Jesse Tayler
at NeXTWorld Expo. As of June 6, 2011, there were 425,000 apps available, which had been
downloaded by 200 million iOS users. During Apple's 2012 Worldwide Developers Conference,
Apple CEO Tim Cook announced that the App Store had 650,000 apps available for download,
and that 30 billion apps had been downloaded from the App Store up to that date. From an alternative
perspective, figures seen in July 2013 by the BBC from tracking service Adeven indicate over
two-thirds of apps in the store are "zombies", barely ever installed by consumers.
Others
Amazon Appstore is an alternative application store for the Android operating system. It was
opened in March 2011, with 3800 applications. The Amazon Appstore's Android Apps can also
run on BlackBerry 10 devices.
BlackBerry World is the application store for BlackBerry 10 and BlackBerry OS devices. It
opened in April 2009 as BlackBerry App World. BlackBerry 10 users can also run Android apps.
Ovi (Nokia) for Nokia phones was launched internationally in May 2009. In May 2011, Nokia
announced plans to rebrand its Ovi product line under the Nokia brand and Ovi Store was
renamed Nokia Store in October 2011. From January 2014, the Nokia Store no longer allowed
developers to publish new apps or app updates for its legacy Symbian and MeeGo operating
systems.
Windows Phone Store was introduced by Microsoft for its Windows Phone platform, which was
launched in October 2010. As of October 2012, it has over 120,000 apps available.
Windows Store was introduced by Microsoft for its Windows 8 and Windows RT platforms.
While it can also carry listings for traditional desktop programs certified for compatibility with
Windows 8, it is primarily used to distribute "Windows Store apps"—which are primarily built
for use on tablets and other touch-based devices (but can still be used with a keyboard and
mouse, and on desktop computers and laptops).
The Electronic AppWrapper was the first electronic distribution service to collectively provide
encryption and purchasing electronically.
There are many other independent app stores for Android devices.
The architectural approach is based on what features are needed. In turn, these features must be
based on the iOS platform. If there is only one approach that meets these requirements, the
decision making process is fairly simple. Typically, however, there are multiple architectural
approaches that could satisfy the requirements, and choosing the most appropriate design means
evaluating several factors, some of which are unique to mobile development.
Some of the most commonly considered factors are the deployment platforms being targeted, the
specific devices and user profiles, the contexts in which the application is most likely to be used,
and any off-line usability and connectivity profiles that the application must support. The
complexity of the workflow and the richness of the user experience that is required is probably
one of the most important factors that determine this choice. The choice of architecture will
undoubtedly have long term ramifications, and mobile app architects need to understand a
customer’s vision and road map for the application.
1. Native application
Pros:
Offers the best user experience; possible to build complex, rich, and responsive applications
that offer the best performance.
Has access to all the native features provided by the platform.
Fine-grained control over local data caching makes it possible to implement applications that
can function offline.
Ability to ensure transactional integrity in synchronizing offline data.
Cons:
Requires installation, upgrading, and uninstallation.
Is typically very device and platform specific.
Distribution of the application is more cumbersome and is often dependent on the App Store.
Subject to App Store approval policy, which is time consuming and might require several
iterations.
3. Hybrid application
Hybrid applications are built by combining native components and web components. Web
components are built using HTML, CSS, and JavaScript and wrapped by a native container
(internal browser) that not only displays them but also gives them access to native functionality
through JavaScript.
Pros:
Existing web assets can be used.
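To illustrate the container-plus-bridge idea described above, the following Android (Java) sketch exposes one piece of native functionality to the web layer through a JavaScript interface; the interface name and bundled HTML asset are assumptions.

    import android.app.Activity;
    import android.os.Bundle;
    import android.webkit.JavascriptInterface;
    import android.webkit.WebView;
    import android.widget.Toast;

    public class HybridBridgeActivity extends Activity {

        /** Native functionality exposed to the web layer (interface name is an assumption). */
        public class NativeBridge {
            @JavascriptInterface
            public void showToast(final String message) {
                // Bridge calls arrive on a background thread, so hop to the UI thread.
                runOnUiThread(() ->
                        Toast.makeText(HybridBridgeActivity.this, message, Toast.LENGTH_SHORT).show());
            }
        }

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            WebView webView = new WebView(this);
            webView.getSettings().setJavaScriptEnabled(true);
            webView.addJavascriptInterface(new NativeBridge(), "Native");
            // In the bundled page, JavaScript can now call: Native.showToast("Hello from the web layer");
            webView.loadUrl("file:///android_asset/index.html");
            setContentView(webView);
        }
    }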
This article discusses the software development lifecycle with respect to mobile applications, and
discusses some of the considerations required when building mobile projects. For developers
wishing to just jump right in and start building, this guide can be skipped and read later for a
more complete understanding of mobile development.
Overview
Building mobile applications can be as easy as opening up your IDE, throwing something
together, doing a quick bit of testing, and submitting to an App Store – all done in an afternoon.
Or it can be an extremely involved process that involves rigorous up-front design, usability
testing, QA testing on thousands of devices, a full beta lifecycle, and then deployment in a number
of different ways.
In this document, we’re going to take a thorough introductory examination of building mobile
applications, including:
1. Process – The process of software development is called the Software Development
Lifecycle (SDLC). We’ll examine all phases of the SDLC with respect to mobile application
development, including: Inspiration, Design, Development, Stabilization, Deployment,
and Maintenance.
2. Considerations – There are a number of considerations when building mobile applications,
especially in contrast to traditional web or desktop applications. We’ll examine these
considerations and how they affect mobile development.
This document is intended to answer fundamental questions about mobile app development, for
new and experienced application developers alike. It takes a fairly comprehensive approach to
introducing most of the concepts you’ll run into during the entire Software Development
Lifecycle (SDLC). However, this document may not be for everyone; if you're itching to just
start building applications, we recommend jumping ahead to the Introduction to Mobile
Development guides.
1. Inception
The ubiquity and level of interaction people have with mobile devices means that nearly
everyone has an idea for a mobile app. Mobile devices open up a whole new way to interact with
computing, the web, and even corporate infrastructure.
The inception stage is all about defining and refining the idea for an app. In order to create a
successful app, it’s important to ask some fundamental questions. For example, if you’re
developing an app for distribution in a public app store, some considerations are:
Competitive Advantage – Are there similar apps out there already? If so, how does this
application differentiate from others?
If you’re intending for the app to be distributed in the enterprise:
Infrastructure Integration – What existing infrastructure will it integrate with or extend?
Additionally, you should evaluate the usage of the app in a mobile form factor:
Value – What value does this app bring users? How will they use it?
To help with designing the functionality of an app, it can be useful to define Actors and Use
Cases. Actors are roles within an application and are often users. Use cases are typically actions
or intents.
For instance, if you’re building a task tracking application, you might have two Actors: User and
Friend. A User might Create a Task, and Share a Task with a Friend. In this case, creating a task
and sharing a task are two distinct use cases that, in tandem with the Actors, will inform what
screens you’ll need to build, as well as what business entities and logic will need to be
developed.
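A minimal sketch of the business entities these use cases imply might look like the following Java classes; the names simply mirror the example above and are purely illustrative.

    import java.util.ArrayList;
    import java.util.List;

    /** Entities implied by the "create a task" and "share a task" use cases (illustrative only). */
    class User {
        final String name;
        final List<Task> tasks = new ArrayList<>();

        User(String name) { this.name = name; }

        /** Use case: a User creates a Task. */
        Task createTask(String title) {
            Task t = new Task(title, this);
            tasks.add(t);
            return t;
        }

        /** Use case: a User shares a Task with a Friend (another User). */
        void shareTask(Task task, User friend) {
            friend.tasks.add(task);
        }
    }

    class Task {
        final String title;
        final User owner;

        Task(String title, User owner) {
            this.title = title;
            this.owner = owner;
        }
    }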
If you’ve captured the appropriate use cases and actors, it’s much easier to begin designing an
application because you know exactly what you need to design, so the question becomes, how to
design it, rather than what to design.
Furthermore, form factor also influences UX decisions. A tablet has far more real estate, so you
can fit more information, and often what needs multiple screens on a phone is compressed into
one for a tablet:
As with UX, it’s important to understand that each platform has its own design language, so a
well-designed application may still look different on each platform:
For good UI design inspiration, check out some of the following sites:
i. pttrns.com – (iOS only)
ii. androidpttrns.com - (Android only)
iii. lovelyui.com – (iOS, Android, and Windows Phone)
iv. mobiledesignpatterngallery.com – (iOS, Android, and Windows Phone)
Additionally, you can find graphic designer portfolios at sites such as Behance.com and
Dribbble.com. Designers from all over the world can be found there, often in places where the
exchange rate is favorable, so good graphic design doesn't necessarily have to cost a lot.
3 Development
The development phase usually starts very early. In fact, once an idea has some maturation in the
conceptual/inspiration phase, often a working prototype is developed that validates functionality,
assumptions, and helps to give an understanding of the scope of the work.
In the rest of the tutorials, we’ll focus largely on the development phase.
4 Stabilization
Stabilization is the process of working out the bugs in your app. Not just from a functional
standpoint, e.g.: "It crashes when I click this button,” but also Usability and Performance. It’s
best to start stabilization very early within the development process so that course corrections
can be made while they are still inexpensive.
It’s never too early to begin testing an application. For example, if a major issue is found in the
prototype stage, the UX of the app can still be modified to accommodate it. If a performance
issue is found in the alpha stage, it’s early enough to modify the architecture before a lot of code
has been built on top of false assumptions.
Typically, as an application moves further along in the lifecycle, it’s opened to more people to
try it out, test it, provide feedback, etc. For instance, prototype applications may only be shown
or made available to key stakeholders, whereas release candidate applications may be distributed
to customers that sign up for early access.
For early testing and deployment to relatively few devices, usually deploying straight from a
development machine is sufficient. However, as the audience widens, this can quickly become
cumbersome. As such, there are a number of test deployment options out there that make this
process much easier by allowing you to invite people to a testing pool, release builds over the
web, and provide tools that allow for user feedback.
Some of the most popular ones are:
a) Apple App Store – Apple’s App Store is a globally available online application
repository that is built into Mac OS X via iTunes. It’s by far the most popular
distribution method for applications and it allows developers to market and
distribute their apps online with very little effort.
b) Enterprise Deployment – Enterprise deployment is meant for internal
distribution of corporate applications that aren’t available publicly via the App
Store.
c) Ad-Hoc Deployment – Ad-hoc deployment is intended primarily for
development and testing and allows you to deploy to a limited number of properly
provisioned devices. When you deploy to a device via Xcode or Xamarin Studio,
it is known as ad-hoc deployment.
Android
All Android applications must be signed before being distributed. Developers sign their
applications by using their own certificate protected by a private key. This certificate can provide
a chain of authenticity that ties an application developer to the applications that developer has
built and released. It must be noted that while a development certificate for Android can be
signed by a recognized certificate authority, most developers do not opt to utilize these services,
and self-sign their certificates. The main purpose for certificates is to differentiate between
different developers and applications. Android uses this information to assist with enforcement
of delegation of permissions between applications and components running within the Android
OS.
Unlike other popular mobile platforms, Android takes a very open approach to app distribution.
Devices are not locked to a single, approved app store. Instead, anyone is free to create an app
store, and most Android phones allow apps to be installed from these third party stores.
This allows developers a potentially larger yet more complex distribution channel for their
applications. Google Play is Google’s official app store, but there are many others. A few
popular ones are:
i. AppBrain
ii. Amazon App Store for Android
iii. Handango
iv. GetJar
Windows Phone
Microsoft provides detailed instructions for deploying Windows Phone apps during
development.
Follow these steps to publish apps for beta testing and release to the store. Developers can
submit their apps and then provide an install link to testers, before the app is reviewed and
published.
Common Considerations
Multitasking
There are two significant challenges to multitasking (having multiple applications running at
once) on a mobile device. First, given the limited screen real estate, it is difficult to display
multiple applications simultaneously. Therefore, on mobile devices only one app can be in the
foreground at one time. Second, having multiple applications open and performing tasks can
quickly eat battery power.
Each platform handles multitasking differently, which we’ll explore in a bit.
Form Factor
Mobile devices generally fall into two categories, phones and tablets, with a few crossover
devices in between. Developing for these form factors is generally very similar; however,
designing applications for them can be very different. Phones have very limited screen space,
and tablets, while bigger, are still mobile devices with less screen space than even most laptops.
Because of this, mobile platform UI controls have been designed specifically to be effective on
smaller form factors.
Limited Resources
Mobile devices get more and more powerful all the time, but they are still mobile devices that
have limited capabilities in comparison to desktop or notebook computers. For instance, desktop
developers generally don’t worry about memory capacities; they’re used to having both physical
and virtual memory in copious quantities, whereas on mobile devices you can quickly consume
all available memory just by loading a handful of high-quality pictures.
Additionally, processor-intensive applications such as games or text recognition can really tax
the mobile CPU and adversely affect device performance.
Because of considerations like these, it’s important to code smartly and to deploy early and often
to actual devices in order to validate responsiveness.
iOS Considerations
Multitasking
Multitasking is very tightly controlled in iOS, and there are a number of rules and behaviors that
your application must conform to when another application comes to the foreground, otherwise
your application will be terminated by iOS.
Device-Specific Resources
Within a particular form factor, hardware can vary greatly between different models. For
instance, some devices have a rear-facing camera, some also have a front-facing camera, and
some have none.
Some older devices (iPhone 3G and older) don’t even allow multitasking.
Because of these differences between device models, it’s important to check for the presence of a
feature before attempting to use it.
Android Considerations
Multitasking
Multitasking in Android has two components; the first is the activity lifecycle. Each screen in an
Android application is represented by an Activity, and there is a specific set of events that occur
when an application is placed in the background or comes to the foreground. Applications must
adhere to this lifecycle in order to create responsive, well-behaved applications. For more
information, see the Activity Lifecycle guide.
The second component to multitasking in Android is the use of Services. Services are long-
running processes that exist independent of an application and are used to execute processes
while the application is in the background. For more information see the Creating Services guide.
Security Considerations
Applications in the Android OS all run under a distinct, isolated identity with limited
permissions. By default, applications can do very little. For example, without special
permissions, an application cannot send a text message, determine the phone state, or even
access the Internet! In order to access these features, applications must specify in their
application manifest file which permissions they would like, and when they’re being installed;
the OS reads those permissions, notifies the user that the application is requesting those
permissions, and then allows the user to continue or cancel the installation. This is an essential
step in the Android distribution model, because the application store model is open and apps are
not necessarily reviewed before publication.
Windows Phone Considerations
Device capabilities
Although Windows Phone hardware is fairly homogeneous due to the strict guidelines provided
by Microsoft, there are still components that are optional and therefore require special
consideration while coding. Optional hardware capabilities include the camera, compass and
gyroscope. There is also a special class of low-memory (256 MB) devices that requires special
consideration; alternatively, developers can opt out of supporting low-memory devices.
Database
Both iOS and Android include the SQLite database engine that allows for sophisticated data
storage that also works cross-platform. Windows Phone 7 did not include a database, while
Windows Phone 7.1 and 8 include a local database engine that can only be queried with LINQ to
SQL and does not support Transact-SQL queries. There is an open-source port of SQLite
available that can be added to Windows Phone applications to provide familiar Transact-SQL
support and cross-platform compatibility.
Security Considerations
Windows Phone applications are run with a restricted set of permissions that isolates them from
one another and limits the operations they can perform. Network access must be performed via
specific APIs and inter-application communication can only be done via controlled mechanisms.
Access to the file-system is also restricted; the Isolated Storage API provides key-value pair
storage and the ability to create files and folders in a controlled fashion (refer to the Isolated
Storage Overview for more information).
Summary
This guide gave an introduction to the SDLC as it relates to mobile development. It introduced
general considerations for building mobile applications and examined a number of platform-
specific considerations including design, testing, and deployment.
If you are just starting out with JavaScript, this section will help you understand two very
important features of the language: Object and Array literals. Knowing their syntax will not only
help you understand how Titanium works, but will speed up your understanding of CommonJS
and JSON.
Perhaps you don’t know them by name, but the truth is that when you work with Javascript,
you’re working with Objects all the time without realizing it. Take a look at the following
example:
var carmake='Honda';
console.log(carmake.toUpperCase());
Where did toUpperCase come from? Here you have used an Object. Every time you create a
String variable, you're actually creating a String Object. This object has properties and methods,
toUpperCase being just one of them. Get used to the word "Object", because it is the foundation
of modern programming.
Object Literals
Object Literals are Objects that you create on the fly. The syntax for Object literals is simple:
var person={
name: 'jack',
email: '[email protected]',
twitter: '@jack'
};
To access the values on this object, you can use “dot notation”, that is, the name of the object, a
dot, and the name of the property.
console.log(person.name);
console.log(person.twitter);
Array Literals
Just like objects, arrays can also be created on the fly. The syntax for array literals uses square
brackets:
var arr=['value1','value2'];
console.log(arr[0]);
The power of Javascript Object and Array literals comes by combining them. A combination of
Objects and Arrays make up a complex, multidimensional object.
var obj={
key:[
'value1',
'value2'
]
};
console.log(obj.key[1]);
var arr=[
{
key:[
'value1',
'value2'
]
}
];
console.log(arr[0].key[1]);
This syntax is very popular in today’s web services and perhaps you’ve heard of JSON
(Javascript Object Notation). JSON is an implementation of this syntax designed to be a way of
transporting data across the Internet.
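To make the connection to JSON concrete, the following sketch (the property values are
illustrative placeholders) converts an object literal to a JSON string and back using the standard
JSON.stringify and JSON.parse functions:
var person={
name: 'jack',
languages: ['english', 'spanish']
};
// Serialize the object into a JSON string suitable for sending over the network.
var json=JSON.stringify(person);
console.log(json); // {"name":"jack","languages":["english","spanish"]}
// Parse the string back into a live JavaScript object.
var copy=JSON.parse(json);
console.log(copy.languages[1]); // spanish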
Titanium itself is a JavaScript SDK (Software Development Kit) that works as an “Object
Factory". This means that it has methods that generate Objects, and most of the time these
methods receive Objects as arguments. It sounds more confusing than it is.
var win=Titanium.UI.createWindow({
backgroundColor: 'white',
fullscreen: false
});
The result of this operation is a Titanium Window Object stored in the variable win. However,
the createWindow method received an object as argument, and object with the properties
backgroundColor and fullscreen. As you can see, knowing how an object is constructed allows
you to understand that the createWindow() and the toUpperCase() methods are very similar. The
difference is that you are sending an Object Literal to the createWindow method.
What is an Array?
An array is a common data structure used to store an ordered list of items. The array elements are
typed. For example, you could create an array of characters to represent the vowels in the
alphabet:
char aVowels[] = { 'a', 'e', 'i', 'o', 'u' };
Much like C or C++, Java arrays are indexed numerically on a 0-based system. This means the
first element in the array (that is, ‘a’) is at index 0, the second (‘e’) is at index 1, and so on.
Java makes working with arrays easier than many other programming languages. The array itself
is an object (of type array), with all the benefits thereof. For example, you can always check the
size of an array using its length property:
int numVowels = aVowels.length;
You can store any object or primitive type in an array. For example, you can store integers in an
array:
int aNums[] = { 2, 4, 6 };
Or, you could store non-primitive types like Strings (or any other class) in an array:
String aStopLightColors[] = { "red", "yellow", "green" };
Sometimes, you may want to store objects of different types in an array. You can always take
advantage of inheritance and use a parent class for the array type. For example, the Object class
is the mother of all classes… so you could store different types in a single array like this:
Object aObjects[] = { "red", Integer.valueOf(7) };
The elements of a Java object array are references (or handles) to objects, not actual instances of
objects. An element value is null until it is assigned a valid instance of an object (that is, the
array is initialized automatically but you are responsible for assigning its values).
Declaring Arrays
There are a number of ways to declare an array in Java. As you’ve seen, you can declare an array
and immediately provide its elements using the C-style squiggly bracket syntax. For example,
the following Java code declares an array of integers of length 3 and initializes the array all in
one line:
int aNums[] = { 2, 4, 6 };
You can also declare an array of a specific size and then assign the value of each element
individually, like this:
int aNums[] = new int[3];
aNums[0] = 2;
aNums[1] = 4;
aNums[2] = 6;
There are several other ways to create arrays. For example, you can create the array variable and
assign it separately using the new keyword. You can also put the array brackets before the
variable name, if you desire (this is a style issue). For example, the following Java code defines
an array of String elements and then assigns them individually:
String [] aStopLightColors;
aStopLightColors = new String[3];
aStopLightColors[0] = new String("red");
aStopLightColors[1] = new String("yellow");
aStopLightColors[2] = new String("green");
As you have seen, you can assign array values by using the bracket syntax:
aStopLightColors[2] = "green";
You can retrieve array values by index as well. For example, you could access the second
element in the array called aStopLightColors (defined in the previous section) as follows:
String color = aStopLightColors[1]; // "yellow"
Iterating Arrays
Finally, arrays are often used as an ordered list of objects. Therefore, you may find that you want
to iterate through the array in order, accessing each element methodically.
There are a number of ways to do this in Java. Because you can always check the size of an array
programmatically, you can use any of the typical for or while loop methods you may find
familiar. For example, the following Java code declares a simple integer array of three numbers
and uses a simple for-loop to iterate through the items:
int aNums[] = { 2, 4, 6 };
for (int i = 0; i < aNums.length; i++) {
System.out.println(aNums[i]);
}
JavaScript/Control Structures
The control structures within JavaScript allow the program flow to change within a unit of code
or function. These statements can determine whether or not given statements are executed, as
well as repeated execution of a block of code.
Most of the statements listed below are so-called conditional statements that can operate either
on a single statement or on a block of code enclosed in braces ({ and }). The same structures use
Boolean logic to determine whether or not a block gets executed: any value that is not false, zero,
an empty string, null, undefined, or NaN is treated as true.
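As a quick illustration of this rule (the values below are arbitrary examples), a non-empty string
counts as true, while 0 does not:
if ("hello") {
console.log("a non-empty string is treated as true");
}
if (0) {
console.log("this line never runs, because 0 is treated as false");
}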
Conditional statements
if
The if statement is straightforward ‐ if the given expression is true, the statement or
statements will be executed. Otherwise, they are skipped.
if (a === b) {
document.body.innerHTML += "a equals b";
}
The if statement may also consist of multiple parts, incorporating else and else if sections. These
keywords are part of the if statement, and identify the code blocks that are executed, if the
preceding condition is false.
if (a === b) {
document.body.innerHTML += "a equals b";
} else if (a === c) {
document.body.innerHTML += "a equals c";
} else {
document.body.innerHTML += "a does not equal either b or c";
}
while
The while statement executes a given statement as long as a given expression is true. For
example, the code block below will increase the variable c to 10:
var c = 0;
while (c < 10) {
c += 1;
}
This control loop also recognizes the break and continue keywords. The break keyword causes
the immediate termination of the loop, allowing for the loop to terminate from anywhere within
the block.
The continue keyword finishes the current iteration of the while block or statement and then
checks the condition again; if it is still true, the loop begins the next iteration.
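For example, a sketch along these lines (the numbers are arbitrary) skips the rest of an iteration
with continue and stops the loop entirely with break:
var c = 0;
while (true) {
c += 1;
if (c % 2 === 0) {
continue; // skip the rest of this iteration for even numbers
}
if (c > 9) {
break; // leave the loop once c exceeds 9
}
console.log(c); // logs 1, 3, 5, 7, 9
}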
do … while
The do … while statement executes a given statement as long as a given expression is true -
however, unlike the while statement, this control structure will always execute the statement or
block at least once. For example, the code block below will increase the variable c to 10:
do {
c += 1;
} while (c < 10);
As with while, break and continue are both recognized and operate in the same manner. break
exits the loop, and continue checks the condition before attempting to restart the loop.
for
The for statement allows greater control over the condition of iteration. While it has a conditional
statement, it also allows a pre-loop statement, and post-loop increment without affecting the
condition. The initial expression is executed once, and the conditional is always checked at the
beginning of each loop. At the end of the loop, the increment statement executes before the
condition is checked once again. The syntax is:
var c;
for (c = 0; c < 10; c += 1) {
// …
}
While the increment statement is normally used to increase a variable by one per loop iteration, it
can contain any statement, such as one that decreases the counter.
Break and continue are both recognized. The continue statement will still execute the increment
statement before the condition is checked.
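As an example of a non-incrementing final statement, this small countdown loop decreases its
counter on each pass:
for (var i = 5; i > 0; i -= 1) {
console.log(i); // logs 5, 4, 3, 2, 1
}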
A second version of this loop is the for .. in statement, which has the following form:
for (var property in object) {
// statement, typically using object[property]
}
The order in which the properties are visited is arbitrary, so this form should not be used when
the object is of Array type and the element order matters.
switch
The switch statement evaluates an expression, and determines flow control based on the result of
the expression:
switch(i) {
case 1:
// …
break;
case 2:
// …
break;
default:
// …
break;
}
When i gets evaluated, its value is checked against each of the case labels. These case labels
appear in the switch statement and, if the value for a case matches i, execution continues at
that point. If none of the case labels match, execution continues at the default label (or skips the
switch statement entirely, if none is present).
The break keyword exits the switch statement, and appears at the end of each case in order to
prevent undesired code from executing. While the break keyword may be omitted (for example,
when you want a block of code executed for multiple cases), doing so is generally considered bad
practice unless it is intentional.
Omitting the break can be used to test for more than one value at a time:
switch(i) {
case 1:
case 2:
case 3:
// …
break;
case 4:
// …
break;
default:
// …
break;
}
In this case the program will run the same code in case i equals 1, 2 or 3.
with
The with statement is used to extend the scope chain for a block and has the following syntax:
with (expression) {
// statement
}
Pros
Reduce file size by reducing the need to repeat a lengthy object reference, and
Relieve the interpreter of parsing repeated object references.
Cons
The with statement forces the specified object to be searched first for all name lookups.
Therefore, all identifiers that aren't members of the specified object will be found more slowly
inside a with block; with should only be used to encompass code blocks that access members
of the specified object.
with makes it difficult for a human or a machine to find out which object was meant by
searching the scope chain.
Used with something other than a plain object, with may not be forward-compatible.
Therefore, the use of the with statement is not recommended, as it may be the source of
confusing bugs and compatibility issues.
Example
var a, x, y;
var r = 10;
with (Math) {
a = PI*r*r; // == a = Math.PI*r*r;
x = r*cos(PI); // == x = r*Math.cos(Math.PI);
y = r*sin(PI/2); // == y = r*Math.sin(Math.PI/2);
}
HTML
The language is written in the form of HTML elements consisting of tags enclosed in angle
brackets (like <html>). Browsers do not display the HTML tags and scripts, but use them to
interpret the content of the page.
HTML can embed scripts written in languages such as JavaScript which affect the behavior of
HTML web pages. Web browsers can also refer to Cascading Style Sheets (CSS) to define the
look and layout of text and other material. The World Wide Web Consortium (W3C), maintainer
of both the HTML and the CSS standards, has encouraged the use of CSS over explicit
presentational HTML since 1997.
Markup
HTML markup consists of several key components, including those called tags (and their
attributes), character-based data types, character references and entity references. HTML tags
most commonly come in pairs like <h1> and </h1>, although some represent empty
elements and so are unpaired, for example <img>. The first tag in such a pair is the start tag, and
the second is the end tag (they are also called opening tags and closing tags).
Another important component of the HTML is document type declaration, which triggers
standards mode rendering.
The following is an example of the classic Hello world program, a common test employed for
comparing programming languages, scripting languages and markup languages. This example is
made using 9 lines of code:
<!DOCTYPE html>
<html>
<head>
<title>This is a title</title>
</head>
<body>
<p>Hello world!</p>
</body>
</html>
The Document Type Declaration <!DOCTYPE html> is for HTML5. If a declaration is not
included, various browsers will revert to "quirks mode" for rendering.
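As an illustrative aside, a page can check from JavaScript which rendering mode its doctype
triggered; the standard document.compatMode property reports "CSS1Compat" in standards
mode and "BackCompat" in quirks mode:
if (document.compatMode === "CSS1Compat") {
console.log("The page is being rendered in standards mode.");
} else {
console.log("The page is being rendered in quirks mode.");
}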
Elements
HTML documents imply a structure of nested HTML elements. These are indicated in the
document by HTML tags, enclosed in angle brackets thus: <p>
In the simple, general case, the extent of an element is indicated by a pair of tags: a "start tag"
<p> and "end tag" </p>. The text content of the element, if any, is placed between these tags.
Tags may also enclose further tag markup between the start and end, including a mixture of tags
and text. This indicates further (nested) elements, as children of the parent element.
The start tag may also include attributes within the tag. These indicate other information, such as
identifiers for sections within the document, identifiers used to bind style information to the
presentation of the document, and for some tags such as the <img> used to embed images, the
reference to the image resource.
Some elements, such as the line break<br>, do not permit any embedded content, either text or
further tags. These require only a single empty tag (akin to a start tag) and do not use an end tag.
Many tags, particularly the closing end tag for the very commonly-used paragraph element <p>,
are optional. An HTML browser or other agent can infer the closure for the end of an element
from the context and the structural rules defined by the HTML standard. These rules are complex
and not widely understood by most HTML coders.
Header of the HTML document:<head>...</head>. The title is included in the head, for example:
<head>
<title>The Title</title>
</head>
Headings: HTML headings are defined with the <h1> to <h6> tags:
<h1>Heading level 1</h1>
<h2>Heading level 2</h2>
<h6>Heading level 6</h6>
Paragraphs:
<p>Paragraph 1</p> <p>Paragraph 2</p>
Line breaks: <br>. The difference between <br> and <p> is that "br" breaks a line without
altering the semantic structure of the page, whereas "p" sections the page into paragraphs. Note
also that "br" is an empty element in that, although it may have attributes, it can take no content
and it may not have an end tag.
Links: to create a link the <a> tag is used. The href attribute holds the URL address of the link:
<a href="https://www.wikipedia.org/">A link to Wikipedia!</a>
Comments:
<!-- This is a comment -->
Comments can help in the understanding of the markup and do not display in the webpage.
Presentational markup indicates the appearance of the text, regardless of its purpose
For example, <b>boldface</b> indicates that visual output devices should render
"boldface" in bold text, but gives little indication what devices that are unable to do this (such as
aural devices that read the text aloud) should do. In the case of both <b>bold</b> and
<i>italic</i>, there are other elements that may have equivalent visual renderings but that
are more semantic in nature, such as <strong>strong text</strong> and
<em>emphasised text</em> respectively. It is easier to see how an aural user agent
should interpret the latter two elements. However, they are not equivalent to their presentational
counterparts: it would be undesirable for a screen-reader to emphasize the name of a book, for
instance, but on a screen such a name would be italicized. Most presentational markup elements
have become deprecated under the HTML 4.0 specification in favor of using CSS for styling.
Attributes
Most of the attributes of an element are name-value pairs, separated by "=" and written within
the start tag of an element after the element's name. The value may be enclosed in single or
double quotes, although values consisting of certain characters can be left unquoted in HTML
(but not XHTML). Leaving attribute values unquoted is considered unsafe. In contrast with
name-value pair attributes, there are some attributes that affect the element simply by their
presence in the start tag of the element, like the ismap attribute for the img element.
The id attribute provides a document-wide unique identifier for an element. This is used
to identify the element so that style sheets can alter its presentational properties, and
scripts may alter, animate or delete its contents or presentation. Appended to the URL of
the page, it provides a globally unique identifier for the element, typically a sub-section
of the page. For example, the ID "Attributes" in
http://en.wikipedia.org/wiki/HTML#Attributes
The class attribute provides a way of classifying similar elements. This can be used for
semantic or presentation purposes. For example, an HTML document might semantically
use the designation class="notation" to indicate that all elements with this class
value are subordinate to the main text of the document. In presentation, such elements
might be gathered together and presented as footnotes on a page instead of appearing in
the place where they occur in the HTML source. Class attributes are used semantically in
microformats. Multiple class values may be specified; for example class="notation
important" puts the element into both the "notation" and the "important" classes.
An author may use the style attribute to assign presentational properties to a particular
element. It is considered better practice to use an element's id or class attributes to
select the element from within a stylesheet, though sometimes this can be too
cumbersome for a simple, specific, or ad hoc styling.
The title attribute is used to attach subtextual explanation to an element. In most
browsers this attribute is displayed as a tooltip.
The lang attribute identifies the natural language of the element's contents, which may
be different from that of the rest of the document. For example, in an English-language
document:
<p>Oh well, <span lang="fr">c'est la vie</span>, as they
say in France.</p>
The abbreviation element, abbr, can be used to demonstrate several of these attributes at once:
<abbr id="anId" class="jargon" style="color:purple;" title="Hypertext Markup Language">HTML</abbr>
This example displays as HTML; in most browsers, pointing the cursor at the abbreviation
should display the title text "Hypertext Markup Language."
Most elements take the language-related attribute dir to specify text direction, such as with "rtl"
for right-to-left text in, for example, Arabic, Persian or Hebrew.
As of version 4.0, HTML defines a set of 252 character entity references and a set of 1,114,050
numeric character references, both of which allow individual characters to be written via simple
markup, rather than literally. A literal character and its markup counterpart are considered
equivalent and are rendered identically.
The ability to "escape" characters in this way allows for the characters < and & (when written as
&lt; and &amp;, respectively) to be interpreted as character data, rather than markup. For
example, a literal < normally indicates the start of a tag, and & normally indicates the start of a
character entity reference or numeric character reference; writing it as &amp; or &#x26; or
&#38; allows & to be included in the content of an element or in the value of an attribute. The
double-quote character ("), when not used to quote an attribute value, must also be escaped as
&quot; or &#x22; or &#34; when it appears within the attribute value itself.
Equivalently, the single-quote character ('), when not used to quote an attribute value, must also
be escaped as &#x27; or &#39; (or as &apos; in HTML5 or XHTML documents) when it
appears within the attribute value itself. If document authors overlook the need to escape such
characters, some browsers can be very forgiving and try to use context to guess their intent. The
result is still invalid markup, which makes the document less accessible to other browsers and to
other user agents that may try to parse the document for search and indexing purposes for
example.
Escaping also allows for characters that are not easily typed, or that are not available in the
document's character encoding, to be represented within element and attribute content. For
example, the acute-accented e (é), a character typically found only on Western European and
South American keyboards, can be written in any HTML document as the entity reference
&eacute; or as the numeric references &#xE9; or &#233;, using characters that are
available on all keyboards and are supported in all character encodings. Unicode character
encodings such as UTF-8 are compatible with all modern browsers and allow direct access to
almost all the characters of the world's writing systems.
Data types
HTML defines several data types for element content, such as script data and stylesheet data, and
a plethora of types for attribute values, including IDs, names, URIs, numbers, units of length,
languages, media descriptors, colors, character encodings, dates and times, and so on. All of
these data types are specializations of character data.
HTML documents are required to start with a Document Type Declaration (informally, a
"doctype"). In browsers, the doctype helps to define the rendering mode—particularly whether to
use quirks mode.
The original purpose of the doctype was to enable parsing and validation of HTML documents
by SGML tools based on the Document Type Definition (DTD). The DTD to which the
DOCTYPE refers contains a machine-readable grammar specifying the permitted and prohibited
content for a document conforming to such a DTD. Browsers, on the other hand, do not
implement HTML as an application of SGML and by consequence do not read the DTD.
HTML5 does not define a DTD; therefore, in HTML5 the doctype declaration is simpler and
shorter:
<!DOCTYPE html>
An HTML 4.01 doctype, by contrast, references a DTD, for example:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN"
"http://www.w3.org/TR/html4/strict.dtd">
This declaration references the DTD for the "strict" version of HTML 4.01. SGML-based
validators read the DTD in order to properly parse the document and to perform validation. In
modern browsers, a valid doctype activates standards mode as opposed to quirks mode.
In addition, HTML 4.01 provides Transitional and Frameset DTDs, as explained below.
Transitional type is the most inclusive, incorporating current tags as well as older or "deprecated"
tags, with the Strict DTD excluding deprecated tags. Frameset has all tags necessary to make
frames on a page along with the tags included in transitional type.
Semantic HTML
Semantic HTML is a way of writing HTML that emphasizes the meaning of the encoded
information over its presentation (look). HTML has included semantic markup from its
inception, but has also included presentational markup, such as <font>, <i> and <center>
tags. There are also the semantically neutral span and div tags. Since the late 1990s when
Cascading Style Sheets were beginning to work in most browsers, web authors have been
encouraged to avoid the use of presentational HTML markup with a view to the separation of
presentation and content.
An important type of web agent that does crawl and read web pages automatically, without prior
knowledge of what it might find, is the web crawler or search-engine spider. These software
agents are dependent on the semantic clarity of web pages they find as they use various
techniques and algorithms to read and index millions of web pages a day and provide web users
with search facilities without which the World Wide Web's usefulness would be greatly reduced.
In order for search-engine spiders to be able to rate the significance of pieces of text they find in
HTML documents, and also for those creating mashups and other hybrids as well as for more
automated agents as they are developed, the semantic structures that exist in HTML need to be
widely and uniformly applied to bring out the meaning of published text.
Presentational markup tags are deprecated in current HTML and XHTML recommendations and
are illegal in HTML5.
Good semantic HTML also improves the accessibility of web documents (see also Web Content
Accessibility Guidelines). For example, when a screen reader or audio browser can correctly
ascertain the structure of a document, it will not waste the visually impaired user's time by
reading out repeated or irrelevant information when it has been marked up correctly.
Delivery
HTML documents can be delivered by the same means as any other computer file. However,
they are most often delivered either by HTTP from a web server or by email.
The World Wide Web is composed primarily of HTML documents transmitted from web servers
to web browsers using the Hypertext Transfer Protocol (HTTP). However, HTTP is used to serve
images, sound, and other content, in addition to HTML. To allow the web browser to know how
to handle each document it receives, other information is transmitted along with the document.
In modern browsers, the MIME type that is sent with the HTML document may affect how the
document is initially interpreted. A document sent with the XHTML MIME type is expected to
be well-formed XML; syntax errors may cause the browser to fail to render it. The same
document sent with the HTML MIME type might be displayed successfully, since some
browsers are more lenient with HTML.
The W3C recommendations state that XHTML 1.0 documents that follow guidelines set forth in
the recommendation's Appendix C may be labeled with either MIME Type. XHTML 1.1 also
states that XHTML 1.1 documents should be labeled with either MIME type.
HTML e-mail
Most graphical email clients allow the use of a subset of HTML (often ill-defined) to provide
formatting and semantic markup not available with plain text. This may include typographic
information like coloured headings, emphasized and quoted text, inline images and diagrams.
Many such clients include both a GUI editor for composing HTML e-mail messages and a
rendering engine for displaying them. Use of HTML in e-mail is criticized by some because of
compatibility issues, because it can help disguise phishing attacks, because of accessibility issues
for blind or visually impaired people, because it can confuse spam filters and because the
message size is larger than plain text.
Naming conventions
The most common filename extension for files containing HTML is .html. A common
abbreviation of this is .htm, which originated because some early operating systems and file
systems, such as DOS with its FAT data structure, limited file extensions to three letters.
HTML Application
An HTML Application (HTA; file extension ".hta") is a Microsoft Windows application that
uses HTML and Dynamic HTML in a browser to provide the application's graphical interface. A
regular HTML file is confined to the security model of the web browser's security,
communicating only to web servers and manipulating only webpage objects and site cookies. An
HTA runs as a fully trusted application and therefore has more privileges, like
creation/editing/removal of files and Windows Registry entries. Because they operate outside the
browser's security model, HTAs cannot be executed via HTTP, but must be downloaded (just
like an EXE file) and executed from local file system.
HTML is precisely what we were trying to PREVENT— ever-breaking links, links going
outward only, quotes you can't follow to their origins, no version management, no rights
management.
Ted Nelson
Since its inception, HTML and its associated protocols gained acceptance relatively quickly.
However, no clear standards existed in the early years of the language. Though its creators
originally conceived of HTML as a semantic language devoid of presentation details, practical
uses pushed many presentational elements and attributes into the language, driven largely by the
various browser vendors. The latest standards surrounding HTML reflect efforts to overcome the
sometimes chaotic development of the language and to create a rational foundation for building
both meaningful and well-presented documents. To return HTML to its role as a semantic
language, the W3C has developed style languages such as CSS and XSL to shoulder the burden
of presentation. In conjunction, the HTML specification has slowly reined in the presentational
elements.
There are two axes differentiating various variations of HTML as currently specified: SGML-
based HTML versus XML-based HTML (referred to as XHTML) on one axis, and strict versus
transitional (loose) versus frameset on the other axis.
One difference in the latest HTML specifications lies in the distinction between the SGML-based
specification and the XML-based specification. The XML-based specification is usually called
XHTML to distinguish it clearly from the more traditional definition. However, the root element
name continues to be "html" even in the XHTML-specified HTML. The W3C intended XHTML
1.0 to be identical to HTML 4.01 except where limitations of XML over the more complex
SGML require workarounds. Because XHTML and HTML are closely related, they are
sometimes documented in parallel. In such circumstances, some authors conflate the two names
as (X)HTML or X(HTML).
Like HTML 4.01, XHTML 1.0 has three sub-specifications: strict, transitional and frameset.
Aside from the different opening declarations for a document, the differences between an HTML
4.01 and XHTML 1.0 document—in each of the corresponding DTDs—are largely syntactic.
The underlying syntax of HTML allows many shortcuts that XHTML does not, such as elements
with optional opening or closing tags, and even empty elements which must not have an end tag.
By contrast, XHTML requires all elements to have an opening tag and a closing tag; XHTML
also allows an empty element to be written as a single tag with a trailing slash, such as <br />.
To understand the subtle differences between HTML and XHTML, consider the transformation
of a valid and well-formed XHTML 1.0 document that adheres to Appendix C (see below) into a
valid HTML 4.01 document. To make this translation requires the following steps:
1. The language for an element should be specified with a lang attribute rather than
the XHTML xml:lang attribute. XHTML uses XML's built-in language-defining
attribute.
2. Remove the XML namespace (xmlns=URI). HTML has no facilities for namespaces.
3. Change the document type declaration from XHTML 1.0 to HTML 4.01. (see DTD
section for further explanation).
4. If present, remove the XML declaration. (Typically this is: <?xml version="1.0"
encoding="utf-8"?>).
5. Ensure that the document's MIME type is set to text/html. For both HTML and
XHTML, this comes from the HTTP Content-Type header sent by the server.
6. Change the XML empty-element syntax to an HTML style empty element (<br/>
to <br>).
Those are the main changes necessary to translate a document from XHTML 1.0 to HTML 4.01.
To translate from HTML to XHTML would also require the addition of any omitted opening or
closing tags. Whether coding in HTML or XHTML it may just be best to always include the
optional tags within an HTML document rather than remembering which tags can be omitted.
A well-formed XHTML document adheres to all the syntax requirements of XML. A valid
document adheres to the content specification for XHTML, which describes the document
structure.
The W3C recommends several conventions to ensure an easy migration between HTML and
XHTML (see HTML Compatibility Guidelines). The following steps can be applied to XHTML
1.0 documents only:
Include both xml:lang and lang attributes on any elements assigning language.
Use the empty-element syntax only for elements specified as empty in HTML.
Include an extra space in empty-element tags: for example <br /> instead of <br/>.
By carefully following the W3C's compatibility guidelines, a user agent should be able to
interpret the document equally as HTML or XHTML. For documents that are XHTML 1.0 and
have been made compatible in this way, the W3C permits them to be served either as HTML
(with a text/html MIME type), or as XHTML (with an application/xhtml+xml or
application/xml MIME type). When delivered as XHTML, browsers should use an XML
parser, which adheres strictly to the XML specifications for parsing the document's contents.
HTML 4 defined three different versions of the language: Strict, Transitional (once called Loose)
and Frameset. The Strict version is intended for new documents and is considered best practice,
while the Transitional and Frameset versions were developed to make it easier to transition
documents that conformed to older HTML specification or didn't conform to any specification to
a version of HTML 4. The Transitional and Frameset versions allow for presentational markup,
which is omitted in the Strict version. Instead, cascading style sheets are encouraged to improve
the presentation of HTML documents. Because XHTML 1 only defines an XML syntax for the
language defined by HTML 4, the same differences apply to XHTML 1 as well.
The Transitional version allows the following parts of the vocabulary, which are not included in
the Strict version:
The Frameset version includes everything in the Transitional version, as well as the frameset
element (used instead of body) and the frame element.
In addition to the above transitional differences, the frameset specifications (whether XHTML
1.0 or HTML 4.01) specify a different content model, with frameset replacing body, that
contains either frame elements, or optionally noframes with a body.
As this list demonstrates, the loose versions of the specification are maintained for legacy
support. However, contrary to popular misconceptions, the move to XHTML does not imply a
removal of this legacy support. Rather the X in XML stands for extensible and the W3C is
modularizing the entire specification and opening it up to independent extensions. The primary
achievement in the move from XHTML 1.0 to XHTML 1.1 is the modularization of the entire
specification. The strict version of HTML is deployed in XHTML 1.1 through a set of modular
extensions to the base XHTML 1.1 specification. Likewise, someone looking for the loose
(transitional) or frameset specifications will find similar extended XHTML 1.1 support (much of
it is contained in the legacy or frame modules). The modularization also allows for separate
features to develop on their own timetable. So for example, XHTML 1.1 will allow quicker
migration to emerging XML standards such as MathML (a presentational and semantic math
language based on XML) and XForms—a new highly advanced web-form technology to replace
the existing HTML forms.
In summary, the HTML 4 specification primarily reined in all the various HTML
implementations into a single clearly written specification based on SGML. XHTML 1.0, ported
this specification, as is, to the new XML defined specification. Next, XHTML 1.1 takes
advantage of the extensible nature of XML and modularizes the whole specification. XHTML
2.0 was intended to be the first step in adding new features to the specification in a standards-
body-based approach.
HTML5 variations
The WHATWG considers its work a living standard of HTML, reflecting the state of the art in
major browser implementations by Apple (Safari), Google (Chrome), Mozilla (Firefox),
Opera (Opera), and others. HTML5 is specified by the HTML Working Group of the W3C
following the W3C process. As of 2013 both specifications were similar and mostly derived from
each other.
HTML lacks some of the features found in earlier hypertext systems, such as source tracking, fat
links and others. Even some hypertext features that were in early versions of HTML have been
ignored by most popular web browsers until recently, such as the link element and in-browser
Web page editing.
Sometimes Web services or browser manufacturers remedy these shortcomings. For instance,
wikis and content management systems allow surfers to edit the Web pages they visit.
WYSIWYG editors
There are some WYSIWYG editors (What You See Is What You Get), in which the user lays out
everything as it is to appear in the HTML document using a graphical user interface (GUI), often
similar to word processors. The editor renders the document rather than show the code, so
authors do not require extensive knowledge of HTML.
The WYSIWYG editing model has been criticized, primarily because of the low quality of the
generated code; there are voices advocating a change to the WYSIWYM model (What You See
Is What You Mean).
WYSIWYG editors remain a controversial topic because of their perceived flaws such as:
Relying mainly on layout as opposed to meaning, often using markup that does not
convey the intended meaning but simply copies the layout.
Often producing extremely verbose and redundant code that fails to make use of the
cascading nature of HTML and CSS.
Often producing ungrammatical markup, called tag soup or semantically incorrect
markup (such as <em> for italics).
As a great deal of the information in HTML documents is not in the layout, the model has
been criticized for its "what you see is all you get" nature.
What is CSS?
Cascading Style Sheets (CSS) is a style sheet language used for describing the presentation of a
document written in a markup language.
CSS is used to define styles for your web pages, including the design, layout and variations in
display for different devices and screen sizes.
HTML was NEVER intended to contain tags for formatting a web page. HTML was created to
describe the content of a web page, like:
<h1>This is a heading</h1>
When tags like <font> and color attributes were added to the HTML 3.2 specification, it started
a nightmare for web developers. Development of large websites, where fonts and color
information were added to every single page, became a long and expensive process.
To solve this problem, the World Wide Web Consortium (W3C) created CSS. CSS was created
to specify the document's style, not its content.
In HTML 4.0, and later, all formatting should be removed from the HTML page, and stored in
separate CSS files.
The style definitions are normally saved in external .css files. With an external stylesheet file,
you can change the look of an entire website by changing just one file!
CSS Syntax
CSS Example
A CSS declaration always ends with a semicolon, and declaration groups are surrounded by
curly braces:
p {color:red;text-align:center;}
To make the CSS code more readable, you can put one declaration on each line.
In the following example all <p> elements will be center-aligned, with a red text color:
Example
p{
color: red;
text-align: center;
}
CSS Comments
Comments are used to explain the code, and may help when you edit the source code at a later
date. Comments are ignored by browsers.
A CSS comment starts with /* and ends with */. Comments can also span multiple lines:
p{
color: red;
/* This is a single-line comment */
text-align: center;
}
/* This is
a multi-line
comment */
Introduction to CSS
As you’ve seen, browsers render certain HTML elements with distinct styles (for example,
headings are large and bold, paragraphs are followed by a blank line, and so forth). These styles
are very basic and are primarily intended to help the reader understand the structure and meaning
of the document.
To go beyond this simple structure-based rendering, you use Cascading Style Sheets (CSS). CSS
is a stylesheet language that you use to define the visual presentation of an HTML document.
You can use CSS to define simple things like the text color, size, and style (bold, italic, etc.), or
complex things like page layout, gradients, opacity, and much more.
Example 1-4 shows a CSS rule that instructs the browser to display any text in the body element
using the color red:
body { color: red; }
In this example, body is the selector (this specifies what is affected by the
rule) and the curly braces enclose the declaration (the rule itself). The declaration includes a set
of properties and their values. In this example, color is the property, and red is the value of
the color property.
Property names are predefined in the CSS specification, which means that you can’t just make
them up. Each property expects an appropriate value, and there can be lots of appropriate values
and value formats for a given property.
body {
color: red;
background-color: #808080;
font-size: 12px;
font-style: italic; font-weight: bold;
font-family: Arial;
}
Selectors come in a variety of flavors. If you want all of your hyperlinks (the a element) to
display in italics, add the following to your stylesheet:
a { font-style: italic; }
If you want to be more specific and only italicize the hyperlinks that are contained somewhere
within an h1 tag, add the following to your stylesheet:
h1 a { font-style: italic; }
You can also define your own custom selectors by adding id and/or class attributes to your
HTML tags. Consider the following HTML snippet (the same markup appears in the full page
listing later in this section):
<h1 class="loud">Hi there!</h1>
<p id="highlight">Thanks for visiting my web page.</p>
<ul>
<li class="loud">Pizza</li>
<li>Beer</li>
</ul>
If we add (more on this in a moment) .loud { font-style: italic; } to the CSS for
this HTML, Hi there! and Pizza will show up italicized because they both have the loud
class.
Applying CSS by id is similar. To add a yellow background fill to the highlight paragraph
tag, use the following rule:
#highlight { background-color: yellow; }
Here, the # symbol tells the CSS to look for an HTML tag with the ID highlight.
To recap, you can opt to select elements by tag name (e.g., body, h1, p), by class name (e.g.,
.loud, .subtle, .error), or by ID (e.g., #highlight, #login, #promo). And, you can
get more specific by chaining selectors together (e.g., h1 a, body ul .loud).
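The same selector strings also work from JavaScript through the standard
document.querySelector and document.querySelectorAll methods, which can be a handy way to
experiment with them. A small sketch, assuming the example page shown later in this section is
loaded:
// Select the single element whose id is "highlight".
var highlighted = document.querySelector('#highlight');
console.log(highlighted.textContent);
// Select every element carrying the "loud" class.
var loudItems = document.querySelectorAll('.loud');
console.log(loudItems.length); // 2 in the example page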
Note
There are differences between class and id. Use class attributes when you have more than
one item on the page with the same class value. Conversely, id values have to be unique to a
page.
When I first learned this, I figured I’d just always use class attributes so I wouldn’t have to worry
about whether I was duping an ID value. However, selecting elements by ID is much faster than
by class, so you can hurt your performance by overusing class selectors.
Applying a stylesheet
So now you understand the basics of CSS, but how do you apply a stylesheet to an HTML page?
Quite simple, actually! First, you save the CSS somewhere on your server (usually in the same
directory as your HTML file, though you can put it in a subdirectory). Next, link to the stylesheet
in the head of the HTML document, as shown in Example 1-6. The href attribute in this
example is a relative path, meaning it points to a text file named screen.css in the same directory
as the HTML page. You can also specify absolute links, such as the following:
http://example.com/screen.css
Note
If you are saving your HTML files on your local machine, you'll want to keep things simple: put
the CSS file in the same directory as the HTML file and use a relative path, as shown in
Example 1-6.
<html>
<head>
<title>My Awesome Page</title>
<link rel="stylesheet" href="screen.css" type="text/css" />
</head>
<body>
<h1 class="loud">Hi there!</h1>
<p id="highlight"> Thanks for visiting my web page.</p>
<p>I hope you like it.</p>
<ul>
<li class="loud">Pizza</li>
<li>Beer</li>
<li>Dogs</li>
</ul>
</body>
</html>
Example 1-7 shows the contents of screen.css. You should save this file in the same location as
the HTML file.
body {
font-size: 12px;
font-weight: bold;
font-family: Arial;
}
a { font-style: italic; }
h1 a { font-style: italic; }
Note
For a quick and thorough crash course in CSS, I highly recommend CSS Pocket Reference:
Visual Presentation for the Web by Eric Meyer (O’Reilly). Meyer is the last word when it comes
to CSS, and this particular book is short enough to read during the typical morning carpool
(unless you are the person driving, in which case it could take considerably longer—did I say
"crash" course?).
Introduction to JavaScript
At this point you know how to structure a document with HTML and how to modify its visual
presentation with CSS. Now I’ll show you how JavaScript can make the web do stuff.
JavaScript is a scripting language that you can add to an HTML page to make it more interactive
and convenient for the user. For example, you can write some JavaScript that will inspect the
values typed in a form to make sure they are valid. Or, you can have JavaScript show or hide
elements of a page depending on where the user clicks. JavaScript can even contact the web
server to execute database changes without refreshing the current web page.
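As a rough sketch of that last idea (the /save URL and the data sent here are hypothetical
placeholders), a page can send data to the server in the background with the standard
XMLHttpRequest object:
var xhr = new XMLHttpRequest();
xhr.open('POST', '/save'); // hypothetical endpoint on the same server
xhr.setRequestHeader('Content-Type', 'application/json');
xhr.onload = function () {
console.log('Server responded with status ' + xhr.status);
};
xhr.send(JSON.stringify({ name: 'jack' })); // the page is never reloaded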
Like any modern scripting language, JavaScript has variables, arrays, objects, and all the typical
control structures (e.g., if, while, for). Example 1-8 shows a snippet of JavaScript that
illustrates many core concepts of the language (don’t try putting this in your HTML file yet; I’ll
show you how to combine HTML and JavaScript in a moment).
Define an array (a list of values) named foods that contains three elements.
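A minimal sketch of the kind of snippet being described might look like the following (the
element values here are placeholders, not the original example's values):
// Define an array (a list of values) named foods that contains three elements.
var foods = ['Apples', 'Bananas', 'Oranges'];
// Loop over the array and log each element.
for (var i = 0; i < foods.length; i++) {
console.log('I like ' + foods[i]);
}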
Here are some points about JavaScript’s syntax that are worth noting:
For our purposes, the most important feature of JavaScript is that it can interact with the
elements of an HTML page (the cool kids call this “manipulating the DOM”). Example 1-9
shows a simple bit of JavaScript that changes some text on the page when the user clicks on the
h1. Create a new file in your text editor, save it as onclick.html, and open the document in your
browser. Click the text labeled “Click me!” and watch it change.
Note
DOM stands for Document Object Model and in this context it represents the browser’s
understanding of an HTML page. You can read more about the DOM here:
http://en.wikipedia.org/wiki/Document_Object_Model.
<html>
<head>
<title>My Awesome Page</title>
<script type="text/javascript" charset="utf-8">
function sayHello() {
document.getElementById('foo').innerHTML = 'Hi there!';
}
</script>
</head>
<body>
<h1 id="foo" onclick="sayHello()">Click me!</h1>
</body>
</html>
Here's an explanation: the script defines a function named sayHello() that replaces the contents
of the element whose id is foo, and the onclick attribute on the h1 calls that function when the
heading is clicked.
Back in the bad old days of web development, different browsers had different support for
JavaScript. This meant that your code might run in Safari 2 but not in Internet Explorer 6. You
had to take great pains to test each browser (and even different versions of the same browser) to
make sure your code would work for everyone. As the number of browsers and browser versions
grew, it became impossible to test and maintain your JavaScript code for every environment. At
that time, web programming with JavaScript was hell.
Note
<html>
<head>
<title>My Awesome Page</title>
<script type="text/javascript" src="jquery.js"></script>
<script type="text/javascript" charset="utf-8">
function sayHello() {
$('#foo').text('Hi there!');
}
</script>
</head>
<body>
<h1 id="foo" onclick="sayHello()">Click me!</h1>
</body>
</html>
This line includes the jquery.js library. It uses a relative path, meaning the file exists in the
same directory as the page that is using it (this example won't function correctly unless the
jQuery library, jquery.js, is there). However, you can include it directly from a variety of places
where it’s available.
Notice the reduction in the amount of code we need to write to replace the text in the h1
element. This might not seem like a big deal in such a trivial example, but I can assure you that
it’s a lifesaver in complex solutions.
Introduction
The Document Object Model (DOM) is an Application Programming Interface (API) for HTML
and XML documents. It defines the logical structure of documents and the way a document is
accessed and manipulated. In the DOM specification, the term "document" is used in the broad
sense: increasingly, XML is being used as a way of representing many different kinds of
information, and the DOM may be used to manage this data as well.
With the Document Object Model, programmers can build documents, navigate their structure,
and add, modify, or delete elements and content. Anything found in an HTML or XML
document can be accessed, changed, deleted, or added using the Document Object Model, with a
few exceptions - in particular, the DOM interfaces for the XML internal and external subsets
have not yet been specified.
As a W3C specification, one important objective for the Document Object Model is to provide a
standard programming interface that can be used in a wide variety of environments and
applications. The DOM is designed to be used with any programming language. In order to
provide a precise, language-independent specification of the DOM interfaces, we have chosen to
define the specifications in OMG IDL, as defined in the CORBA 2.2 specification. In addition to
the OMG IDL specification, we provide language bindings for Java and ECMAScript (an
industry-standard scripting language based on JavaScript and JScript). Note: OMG IDL is used
only as a language-independent and implementation-neutral way to specify interfaces. Various
other IDLs could have been used. In general, IDLs are designed for specific computing
environments. The Document Object Model can be implemented in any computing environment,
and does not require the object binding runtimes generally associated with such IDLs.
The DOM is a programming API for documents. It closely resembles the structure of the
documents it models. For instance, consider this table, taken from an HTML document:
<TABLE>
<TBODY>
<TR>
<TD>Shady Grove</TD>
<TD>Aeolian</TD>
</TR>
<TR>
<TD>Over the River, Charlie</TD>
<TD>Dorian</TD>
</TR>
</TBODY>
</TABLE>
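To make the object view of this table concrete, a small sketch (assuming the table above is part
of the loaded HTML page) walks the corresponding DOM nodes from JavaScript using standard
DOM methods:
// Find the table's first row and read the text of its two cells.
var firstRow = document.querySelector('table tr');
var cells = firstRow.getElementsByTagName('td');
console.log(cells[0].textContent); // "Shady Grove"
console.log(cells[1].textContent); // "Aeolian"
// Every cell is an element node whose parentNode is the enclosing row.
console.log(cells[0].parentNode.tagName); // "TR"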
In the DOM, documents have a logical structure which is very much like a tree; to be more
precise, it is like a "forest" or "grove", which can contain more than one tree. However, the DOM
does not specify that documents must be implemented as a tree or a grove, nor does it specify
how the relationships among objects be implemented. The DOM is a logical model that may be
implemented in any convenient manner. In this specification, we use the term structure model to
describe the tree-like representation of a document; we specifically avoid terms like "tree" or
"grove" in order to avoid implying a particular implementation. One important property of DOM
structure models is structural isomorphism: if any two Document Object Model implementations
are used to create a representation of the same document, they will create the same structure
model, with precisely the same objects and relationships.
The name "Document Object Model" was chosen because it is an "object model" in the
traditional object oriented design sense: documents are modeled using objects, and the model
encompasses not only the structure of a document, but also the behavior of a document and the
objects of which it is composed. In other words, the nodes in the above diagram do not represent
a data structure; they represent objects, which have functions and identity. As an object model,
the DOM identifies:
1. the interfaces and objects used to represent and manipulate a document;
2. the semantics of these interfaces and objects, including both behaviour and attributes;
3. the relationships and collaborations among these interfaces and objects.
The structure of SGML documents has traditionally been represented by an abstract data model,
not by an object model. In an abstract data model, the model is centered around the data. In
object oriented programming languages, the data itself is encapsulated in objects that hide the
data, protecting it from direct external manipulation; the functions associated with these objects
determine how the objects may be manipulated, and they are part of the object model.
The Document Object Model currently consists of two parts, DOM Core and DOM HTML. The
DOM Core represents the functionality used for XML documents, and also serves as the basis for
DOM HTML. A compliant implementation of the DOM must implement all of the fundamental
interfaces in the Core chapter with the semantics as defined. Further, it must implement at least
one of the HTML DOM and the extended (XML) interfaces with the semantics as defined.
This section is designed to give a more precise understanding of the DOM by distinguishing it
from other systems that may seem to be like it.
Although the Document Object Model was strongly influenced by "Dynamic HTML", in
Level 1, it does not implement all of "Dynamic HTML". In particular, events have not yet
been defined. Level 1 is designed to lay a firm foundation for this kind of functionality by
providing a robust, flexible model of the document itself.
The Document Object Model is not a binary specification. DOM programs written in the
same language will be source code compatible across platforms, but the DOM does not
define any form of binary interoperability.
The Document Object Model is not a way of persisting objects to XML or HTML.
Instead of specifying how objects may be represented in XML, the DOM specifies how
XML and HTML documents are represented as objects, so that they may be used in
object oriented programs.
The Document Object Model is not a set of data structures; it is an object model that
specifies interfaces. Although this document contains diagrams showing parent/child
relationships, these are logical relationships defined by the programming interfaces, not
representations of any particular internal data structures.
The Document Object Model does not define "the true inner semantics" of XML or
HTML. The semantics of those languages are defined by W3C Recommendations for
these languages. The DOM is a programming model designed to respect these semantics.
The DOM does not have any ramifications for the way you write XML and HTML
documents; any document that can be written in these languages can be represented in the
DOM.
The Document Object Model, despite its name, is not a competitor to the Component
Object Model (COM). COM, like CORBA, is a language independent way to specify
interfaces and objects; the DOM is a set of interfaces and objects designed for managing
HTML and XML documents. The DOM may be implemented using language-
independent systems like COM or CORBA; it may also be implemented using language-
specific bindings like the Java or ECMAScript bindings specified in this document.
The DOM originated as a specification to allow JavaScript scripts and Java programs to be
portable among Web browsers. "Dynamic HTML" was the immediate ancestor of the Document
Object Model, and it was originally thought of largely in terms of browsers. However, when the
DOM Working Group was formed at W3C, it was also joined by vendors in other domains,
including HTML or XML editors and document repositories. Several of these vendors had
worked with SGML before XML was developed; as a result, the DOM has been influenced by
SGML Groves and the HyTime standard. Some of these vendors had also developed their own
object models for documents in order to provide an API for SGML/XML editors or document
repositories, and these object models have also influenced the DOM.
In the fundamental DOM interfaces, there are no objects representing entities. Numeric character
references, and references to the pre-defined entities in HTML and XML, are replaced by the
single character that makes up the entity's replacement. For example, in a fragment such as:
<p>Cats &amp; dogs</p>
the "&amp;" will be replaced by the character "&", and the text in the P element will form a
single continuous sequence of characters. Since numeric character references and pre-defined
entities are not recognized as such in CDATA sections, or the SCRIPT and STYLE elements in
HTML, they are not replaced by the single character they appear to refer to. If the example above
were enclosed in a CDATA section, the "&amp;" would not be replaced by "&"; neither would
the <p> be recognized as a start tag. The representations of general entities, both internal and
external, are defined within the extended (XML) interfaces of the Level 1 specification.
The DOM specifies interfaces which may be used to manage XML or HTML documents. It is
important to realize that these interfaces are an abstraction - much like "abstract base classes" in
C++, they are a means of specifying a way to access and manipulate an application's internal
representation of a document. In particular:
1. Attributes defined in the IDL do not imply concrete objects which must have specific
data members - in the language bindings, they are translated to a pair of get()/set()
functions, not to a data member. (Read-only functions have only a get() function in the
language bindings).
2. DOM applications may provide additional interfaces and objects not found in this
specification and still be considered DOM compliant.
3. Because we specify interfaces and not the actual objects that are to be created, the DOM
can not know what constructors to call for an implementation. In general, DOM users call
the createXXX() methods on the Document class to create document structures, and
DOM implementations create their own internal representations of these structures in
their implementations of the createXXX() functions.
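As a concrete illustration of point 3, the ECMAScript (JavaScript) binding exposes these factory methods on the document object. A minimal sketch - the element and text chosen here are arbitrary - looks like this:

// Build a small piece of document structure using the Document factory methods.
var heading = document.createElement('h2');         // create an element node
var text = document.createTextNode('Shady Grove');  // create a text node
heading.appendChild(text);                          // attach the text to the element
document.body.appendChild(heading);                 // attach the element to the document tree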
Limitations of Level 1
The DOM Level 1 specification is intentionally limited to those methods needed to represent and
manipulate document structure and content. The plan is for future Levels of the DOM
specification to provide:
1. A structure model for the internal subset and the external subset.
2. Validation against a schema.
3. Control for rendering documents via style sheets.
4. Access control.
5. Thread-safety.
6. Events.
JavaScript
JavaScript runs on nearly every Operating System, and an engine is included in almost every
mainstream web browser. Developed in 1995 by Brendan Eich at Netscape Communications, it
was originally called LiveScript but was renamed to JavaScript due to Netscape's friendly
relationship with Sun Microsystems at the time.
Notable JavaScript engines and environments include:
SpiderMonkey - Mozilla's engine, the first JavaScript engine ever written, currently used in
Mozilla Firefox.
V8 - Google's JavaScript engine, used in Google Chrome.
Node.js - a platform which enables server-side applications to be written in JavaScript.
JScript - a JavaScript variant included with Windows Script Host.
Rhino - Mozilla's implementation of JavaScript written in Java, typically embedded into Java
applications to provide scripting to end users.
JavaScriptCore - the engine used by WebKit (except for the Chromium project).
ActionScript - originally influenced by HyperTalk, now an ECMAScript dialect that uses many
ECMAScript APIs.
Duktape - an embeddable, portable ECMAScript engine in C with a small memory footprint.
JavaScript is typically used to manipulate the Document Object Model (DOM) and Cascading
Style Sheets (CSS) within the browser, offering user interface scripting, animation, automation,
client-side validation, and much more.
However, with the recent emergence of platforms such as Node.js, JavaScript can now be used to
write server-side applications. In addition, it is also used in environments that aren't web-based,
like PDF documents, site-specific browsers, desktop widgets etc.
Nomenclature
The change of name from LiveScript to JavaScript roughly coincided with Netscape adding
support for Java technology in its Netscape Navigator web browser. The final choice of
name caused confusion, giving the impression that the language was a spin-off of the Java
programming language, and the choice has been characterized as a marketing ploy by Netscape
to give JavaScript the cachet of what was then the hot new web programming language.
People often use the term JavaScript informally. The language and the term originated from
Netscape. ECMAScript, JavaScript, and JScript are terms that are easy to confuse.
The differences today for those who use JavaScript are negligible; people generally do not
distinguish the JavaScript and JScript variations from ECMAScript.
ECMAScript versions
Most modern browsers implement JavaScript based on the ECMAScript 6 (ES2015)
specification, although some fail to implement certain ES6 features. However, older browsers
such as Internet Explorer 8 implement the ECMAScript 3 specification, which lacks functions
such as Function.prototype.bind and even JSON.parse, amongst others.
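Code that must still run on those older browsers therefore often checks for these functions before relying on them. A minimal sketch of that kind of feature detection is shown below; the fallback bodies are left as placeholders (libraries such as es5-shim and json2.js are commonly used to fill them in):

// Detect missing features before relying on them (e.g. on Internet Explorer 8).
if (typeof Function.prototype.bind !== 'function') {
    // Provide or load a bind() fallback here.
}
if (typeof JSON === 'undefined' || typeof JSON.parse !== 'function') {
    // Load a JSON implementation (such as json2.js) before calling JSON.parse().
}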
What is JavaScript?
JavaScript is a scripting language created for making HTML pages live. It turns the web into
something more powerful than just interlinked HTML pages.
JavaScript is not related to Java; it is a completely different language with a similar name. The
language specification that JavaScript follows is called ECMAScript.
Some people say JavaScript is like Python; some find it similar to Ruby or Self. The truth is that
JavaScript stands on its own: a really elegant but distinctive language.
In the browser, JavaScript can, for example (a short sketch follows this list):
Modify the HTML page: write text into it, add or remove tags, change styles, etc.
Execute code on events: mouse clicks and movements, keyboard input, etc.
Send requests to server and load data without reloading of the page. This technology is
often called "AJAX".
Get and set cookies, ask for data, output messages…
…And much, much more!
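A small sketch tying a few of these together - it rewrites part of the page when an element is clicked and stores a simple cookie (the element id, cookie name and messages are only illustrative):

// Runs when the element with id="greeting" is clicked.
document.getElementById('greeting').onclick = function () {
    this.innerHTML = 'Hello from JavaScript!';  // modify the page
    document.cookie = 'visited=yes';            // set a cookie
    alert('Cookie saved');                      // output a message
};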
Modern JavaScript is a general-purpose language; it is not browser-only. There are console
programs and server-side platforms such as Node.js written in JavaScript. In this tutorial we're
talking about in-browser JavaScript only.
In-browser JavaScript is deliberately limited, because you surely don't want a web-page script to
execute with your privileges: reading and writing the hard disk, installing software and so on. A
script must run under strict security limitations so that you can open a page and feel safe. There
are non-standard mechanisms for "signing" JavaScript, but they are not widely supported yet.
JavaScript can’t read/write to hard disk, copy files and call other programs. It doesn’t
have direct access to the OS. Newer browsers provide such abilities, but in a very limited
and secure way.
JavaScript in one tab can't affect other tabs or windows. There are exceptions, namely when
two windows come from the same domain.
A page with JavaScript can do network requests on its own domain without limitations. A
request to another domain is also possible, but security measures apply.
Also, remember that JavaScript is alive and under constant development: new features keep
arriving, the modern ECMAScript standard brings useful additions, and new JavaScript engines
work better and faster.
When you plan to invest your time in studying a technology, it is always good to review the
trends.
Besides the modern ECMAScript specification, which enhances the language itself, browser
makers are adopting features from HTML5. That is a related standard, or more precisely a pack
of standards, containing many features that people have wanted for ages.
Just a few examples: local storage for keeping data in the browser, geolocation, drawing on a
canvas, drag-and-drop, and background workers.
Most topics of HTML5 are still in "draft" stage, but browsers tend to adopt them. The title
"HTML5" is a bit misleading: as you can see, the new standard is not just about HTML, but about
interaction and advanced browser features.
The trend is: JavaScript is expanding its abilities. It is becoming more and more powerful, trying
to reach the level of desktop apps.
Modern browsers improve their engines to achieve higher JavaScript execution speed. They also
fix bugs and try to follow the standard. The trend is: JavaScript is becoming faster and more
stable.
Well, to be honest, there is a minor problem with HTML5, which could be called "browsers run
too fast". Sometimes browsers adopt a feature that is not yet fully described in the standard (still
at draft stage), just because it is so attractive they can't wait.
But then, with time, the standard evolves and changes, and browsers have to reimplement or
correct the feature. This breaks code that relied on the earlier version, so we should think twice
before relying on such draft-stage solutions. This mainly applies to advanced features.
jQuery's syntax is designed to make it easier to navigate a document, select DOM elements,
create animations, handle events, and develop Ajax applications. jQuery also provides
capabilities for developers to create plug-ins on top of the JavaScript library. This enables
developers to create abstractions for low-level interaction and animation, advanced effects and
high-level, theme-able widgets. The modular approach to the jQuery library allows the creation
of powerful dynamic web pages and web applications.
The set of jQuery core features—DOM element selections, traversal and manipulation—enabled
by its selector engine (named "Sizzle" from v1.3), created a new "programming style", fusing
algorithms and DOM data structures. This style influenced the architecture of other JavaScript
frameworks like YUI v3 and Dojo, later stimulating the creation of the standard Selectors API.
Microsoft and Nokia bundle jQuery on their platforms. Microsoft includes it with Visual Studio
for use within Microsoft's ASP.NET AJAX framework and ASP.NET MVC Framework while
Nokia has integrated it into the Web Run-Time widget development platform. jQuery has also
been used in MediaWiki since version 1.16.
jQuery Mobile is a touch-optimized web framework (also known as a mobile framework), more
specifically a JavaScript library, currently being developed by the jQuery project team. The
development focuses on creating a framework compatible with a wide variety of smartphones
and tablet computers, made necessary by the growing but heterogeneous tablet and smartphone
market. The jQuery Mobile framework is compatible with other mobile app frameworks and
platforms such as PhoneGap, Worklight and more.
Features
Compatible with all major desktop browsers as well as all major mobile platforms,
including Android, iOS, Windows Phone, Blackberry, WebOS, Symbian.
Built on top of jQuery core so it has a minimal learning curve for people already familiar
with jQuery syntax.
Theming framework that allows creation of custom themes.
Limited dependencies and lightweight to optimize speed.
The same underlying codebase will automatically scale to any screen.
HTML5-driven configuration for laying out pages with minimal scripting.
AJAX-powered navigation with animated page transitions that provides ability to create
semantic URLs through pushState.
UI widgets that are touch-optimized and platform-agnostic.
Example usage
$('div').on('tap', function (event) {
    alert('element tapped');
});

$(window).load(function () {   // better: $(document).ready(function () { ... });
    $('.List li').on('click touchstart', function () {
        $('.Div').slideDown(500);
    });
});
A basic example
What follows is a basic jQuery Mobile project utilizing HTML5 semantic elements. It is
important to link to the jQuery and jQuery Mobile JavaScript libraries, and stylesheet (the files
can be downloaded and hosted locally, but it is recommended to link to the files hosted on the
jQuery CDN).
A screen of the project is defined by a section HTML5 element, and data-role of page. Note
that data-role is a jQuery Mobile construct, and not an HTML5 one. A page may have header
and footer elements with data-role of header and footer, respectively. In between, there
may be an article element, with data-role of content. Lastly, a nav element, with data-
role of navbar may be present.
In the example below, two other data- attributes are used. The data-theme attribute tells the
browser what theme to render. The data-add-back-btn attribute adds a back button to the page
if set to true.
Lastly, icons can be added to elements via the data-icon attribute. jQuery Mobile has fifty
commonly-used icons built in.
data-role – Specifies the role of the element, such as header, content, footer, etc.
data-theme – Specifies which design theme to use for elements within a container. Can be set
to: a or b.
data-position – Specifies whether the element should be fixed, in which case it will render at
the top (for header) or bottom (for footer).
data-transition – Specifies one of ten built-in animations to use when loading new pages.
data-icon – Specifies one of fifty built-in icons that can be added to an element.
<!doctype html>
<html>
<head>
    <meta charset="utf-8">
    <title>jQuery Mobile Example</title>
    <meta name="viewport" content="initial-scale=1, user-scalable=no, width=device-width">
    <link rel="stylesheet" href="http://code.jquery.com/mobile/1.4.5/jquery.mobile-1.4.5.min.css">
    <script src="http://code.jquery.com/jquery-1.11.2.min.js"></script>
    <script src="http://code.jquery.com/mobile/1.4.5/jquery.mobile-1.4.5.min.js"></script>
</head>
<body>
    <!-- First page (the heading and footer text is illustrative) -->
    <section data-role="page" id="first" data-theme="a">
        <header data-role="header" data-position="fixed"><h1>First Page</h1></header>
        <article data-role="content">
            <h2>Hello, world!</h2>
            <a href="#second" data-role="button" data-inline="true" data-transition="flow" data-icon="carat-r" data-iconpos="right">Go to Page 2</a>
        </article>
        <footer data-role="footer" data-position="fixed"><h3>Footer</h3></footer>
    </section>
    <!-- Second page; data-add-back-btn adds a back button as described above -->
    <section data-role="page" id="second" data-theme="a" data-add-back-btn="true">
        <header data-role="header" data-position="fixed"><h1>Second Page</h1></header>
        <article data-role="content">
            <h2>Example Page</h2>
        </article>
    </section>
</body>
</html>
Theming
jQuery Mobile provides a powerful theming framework that allows developers to customize
color schemes and certain CSS aspects of UI features. Developers can use the jQuery Mobile
ThemeRoller application to customize these appearances and create highly branded experiences.
After developing a theme in the ThemeRoller application, programmers can download a custom
CSS file and include it in their project to use their custom theme.
Each theme can contain up to 26 unique color "swatches," each of which consists of a header bar,
content body, and button states. Combining different swatches allows developers to create a
wider range of visual effects than they would be able to with just one swatch per theme.
Switching between different swatches within a theme is as simple as adding an attribute called
"data-theme" to HTML elements.
The default jQuery Mobile theme comes with two different color swatches, named "a" and "b".
Here is an example of how to create a toolbar with the "b" swatch:
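A minimal version of that markup might look like this (the heading text is just a placeholder):

<div data-role="header" data-theme="b">
    <h1>Page Title</h1>
</div>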
There are already a handful of open source style themes that are developed and supported by
third-party organizations. One such open source style theme is the Metro style theme that was
developed and released by Microsoft Open Technologies, Inc. The Metro style theme is meant to
mimic the UI of the Metro (design language) that Microsoft uses in its mobile operating systems.
iOS is the operating system that runs on iPad, iPhone, and iPod touch devices. The operating
system manages the device hardware and provides the technologies required to implement native
apps. The operating system also ships with various system apps, such as Phone, Mail, and Safari,
that provide standard system services to the user.
The iOS Software Development Kit (SDK) contains the tools and interfaces needed to develop,
install, run, and test native apps that appear on an iOS device’s Home screen. Native apps are
built using the iOS system frameworks and Objective-C language and run directly on iOS.
Unlike web apps, native apps are installed physically on a device and are therefore always
available to the user, even when the device is in Airplane mode. They reside next to other system
apps, and both the app and any user data are synced to the user's computer through iTunes.
At a Glance
The iOS SDK provides the resources you need to develop native iOS apps. Understanding a little
about the technologies and tools contained in the SDK can help you make better choices about
how to design and implement your apps.
At the highest level, iOS acts as an intermediary between the underlying hardware and the apps
you create. Apps do not talk to the underlying hardware directly. Instead, they communicate with
the hardware through a set of well-defined system interfaces. These interfaces make it easy to
write apps that work consistently on devices having different hardware capabilities.
The implementation of iOS technologies can be viewed as a set of layers (from lowest to highest:
Core OS, Core Services, Media, and Cocoa Touch). Lower layers contain fundamental services
and technologies; higher-level layers build upon the lower layers and provide more sophisticated
services and technologies.
As you write your code, it is recommended that you prefer the use of higher-level frameworks
over lower-level frameworks whenever possible. The higher-level frameworks are there to
provide object-oriented abstractions for lower-level constructs. These abstractions generally
make it much easier to write code because they reduce the amount of code you have to write and
encapsulate potentially complex features, such as sockets and threads. You may use lower-level
frameworks and technologies, too, if they contain features not exposed by the higher-level
frameworks.
Approaches to Windows Mobile application development
There are several approaches which can be taken when developing applications for Windows
Mobile devices. In this topic, we'll look at the various options and provide links to sources of
more information.
Visual C++
Visual C++ is known as a "native" development language, as it talks directly to the hardware for
the Windows Mobile device, with no intervening layers (unlike Visual C#, for example).
Programming using C++ can be challenging, as it is not a trivial language to learn. Any errors in
a C++ program, for example, accessing memory that has been freed, or forgetting to free
memory, can potentially crash the entire device.
The advantages of using Visual C++ are execution speed, application size and flexibility.
Applications written in C++ run very quickly and consume minimal resources; fast-action games,
for example, are typically written in native C++.
A good way to learn Visual C++ is to investigate the free Visual Studio Visual C++ Express
Edition, watch the video training and WebCasts, and read through the documentation. Although
the Express Edition of Visual Studio does not allow you to develop applications for Windows
Mobile, almost everything you will learn about application development can be applied directly
to mobile devices.
Visual C++ applications can interact with the Windows Mobile device by calling the Win32
APIs (Application Program Interface functions). These APIs are functions that perform particular
actions, such as playing a sound or drawing a button on the screen. There are thousands of these
APIs (Windows Mobile supports a subset of the complete "desktop" Windows set of Win32
APIs), and they are documented in the section entitled Windows Mobile Features (Native).
When browsing this section, pay particular attention to the fact that some APIs are available only
in Windows Embedded CE - a platform that is related to, but separate from, Windows Mobile. A
table in the top right of each topic will clarify which API is supported by which platform.
If you have experience developing for Windows using Visual C++, you will not find the
transition to Windows Mobile particularly jarring. You should read the sections covering
installing and using the tools, and then the topic Making use of Device-Specific Features which
will highlight the unique abilities of Windows Mobile devices.
To begin a Visual C++ application, start Visual Studio, and select File, New, Project and select
Smart Device under the Visual C++ node.
If you are new to both programming and Windows Mobile, it may be a good idea to begin with
Visual C#, and then transition to Visual C++.
Visual C# and Visual Basic .NET are "managed" development languages. Not only are they
relatively easy to learn, but they support the .NET Compact Framework - a library of classes that
perform a lot of frequently used programming tasks, to greatly simplify application development.
The development tools for C# and Visual Basic .NET include a fully what-you-see-is-what-you-
get user interface designer. You can drag and drop buttons and other controls directly onto your
application's window (called a "form" in managed programming), and then double-click to
access the underlying code. This approach makes creating an application's user interface
extremely fast and easy.
If you have experience developing applications for Windows using Visual C#, the transition
should be relatively painless. The Compact Framework is a subset of the .NET Framework, so
some functionality may require a slight reworking of your code.
Visual C# is a great way to learn programming. You can learn more about using Visual C# on
MSDN: for example, here is a topic entitled the Visual C# Programming Guide. To learn more
about Visual Basic, here is another topic on MSDN: Getting Started with Visual Basic.
For more information, see the topic Developing with Managed Code.
To begin a Visual C# or Visual Basic .NET application, start Visual Studio, and select File, New,
Project and select Smart Device under the relevant language node.
Client-side JScript
The web browser included with Windows Mobile devices - Internet Explorer Mobile - supports
JScript. JScript is a superset of the language most commonly known as JavaScript. JScript
programs are plain text files that are executed by the web browser. They can be embedded in an
HTML page, or stored in separate files.
A JScript application is executed inside the web browser, and uses the web browser's window for
input and output. It is possible to make use of AJAX programming techniques to provide a
degree of user interaction, and to communicate with a remote server. Due to the nature of
JScript, applications cannot access local data other than through cookies, which will introduce
some limitations.
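As a rough sketch of the AJAX approach described above, the snippet below fetches a text file from the server and writes the response into the page without a reload; the URL and element id are illustrative only:

<script type="text/javascript">
// Request /status.txt from the server and display the result in the page.
function loadStatus() {
    var xhr = window.XMLHttpRequest ? new XMLHttpRequest()
                                    : new ActiveXObject("Microsoft.XMLHTTP"); // fallback for older IE
    xhr.onreadystatechange = function () {
        if (xhr.readyState === 4 && xhr.status === 200) {
            document.getElementById("status").innerHTML = xhr.responseText;
        }
    };
    xhr.open("GET", "/status.txt", true);
    xhr.send(null);
}
</script>
<div id="status" onclick="loadStatus()">Tap to load status</div>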
No developer tools other than a text editor are required to create a JScript application. The
program may be stored locally, or accessed from a Web Server. For more information, see the
section Programming with Internet Explorer Mobile and AJAX.
ASP.NET
For an introduction to developing for mobile devices using ASP.NET, see Creating ASP.NET
Mobile Web Pages.
Objective-C programming
Audience
This reference has been prepared for beginners to help them understand the basic to advanced
concepts of the Objective-C programming language.
Prerequisites
Before you start practising with the various examples given in this reference, I am assuming that
you are already aware of what a computer program is and what a computer programming
language is.
#import <Foundation/Foundation.h>

int main()
{
    /* my first program in Objective-C */
    NSLog(@"Hello, World! \n");
    return 0;
}
If you're reading this series then I'll hazard a guess that you already know what Objective-C is,
but for those of you who don't, don't worry, as by the end of this part you'll know it back-to-front
and inside-out.
Objective-C is an object oriented language which lies on top of the C language (but I bet you
guessed that part!). Its primary use in modern computing is on Mac OS X as a desktop language
and also on iPhone OS (or as it is now called: iOS). It was originally the main language for the
NeXTSTEP OS, the operating system Apple bought and from which Mac OS X descends, which
explains why its primary home today is on Apple's operating systems.
Because Objective-C is a strict superset of C, we are free to use C in an Objective-C file and it
will compile fine. Because any compiler of Objective-C will also compile any straight C code
passed into it, we have all the power of C along with the power of objects provided by Objective-
C.
If you’re a little confused at this point, think of it this way: everything C can do, Objective-C can
do too, but not the other way around.
Throughout this series, we will not focus on building applications for the iPhone. Instead, we
will concentrate more on the language itself and for this reason all you will need is a Mac with a
compiler such as GCC. If you’ve installed the developer tools from Apple (Xcode, Interface
Builder, etc), then GCC should already be installed. If not, then head to Appleʼs developer
website and get yourself a free copy.
As far as prerequisites go, while I don’t expect you to have a full background in computer
science, some knowledge of programming in general or of C in particular would definitely be a
bonus. If you don't have much prior programming experience, don't worry - you'll pick it up in
no time!
If you're running Windows (which is unlikely, as this tutorial is aimed at iPhone developers) you
can still compile Objective-C on your system using an environment such as Cygwin or MinGW,
which provides GCC on Windows. This tutorial series is catered to Mac users, but if you are
using Windows and encounter any problems then be sure to leave a comment and I'll see if I can
help.
NOTE:
Compiling is the process of "translating" a high-level computer language, like Objective-C, into
low-level machine code that can be executed by the computer when the program is run.
All the programs that we see running in our fancy Mac OS operating system consist of a series of
instructions that are visually displayed to us in a GUI, or Graphical User Interface. In contrast to
the GUI program interaction with a mouse that most of us are familiar with, it is possible to issue
commands directly to the operating system through a text-based interface known as a "terminal"
or "command line."
The command line application in Mac OS is called Terminal and can be found in Applications ->
Utilities. Go ahead and open Terminal now (you can also search for it in Spotlight). Terminal
has several basic commands you should be aware of in order to properly utilize it. One of the
most important commands to know is cd, which stands for "change directory." This command
allows us to change where in the filesystem Terminal is reading from. We canʼt just tell Terminal
to compile our file if we donʼt show it where the file is first! To switch to a desired directory,
you can use a full path such as:
cd /Users/MyName/Desktop/Test
You can also use relative paths, allowing you to only type a single folder name in some cases.
For example, if youʼre already in your Desktop folder, you could simply type:
cd Test
What if you want to see where you currently are? The immediate folder name is displayed before
the prompt (the bit where you type). For example, if your prompt says
Dan-Walkers-MacBook:Desktop iDemonix$ then you are in the Desktop folder. If you aren't sure,
you can also type pwd to display the absolute file path of the current location.
If you want to list what files and folders are in the current folder, use the list command: ls.
Finally, if you wish to go up a directory to a parent folder, type "cd ..". So, if we were in the
Test folder, which is inside the Desktop folder, but we wanted to go to the Desktop folder
instead, we could type cd .. to go up to the parent directory, Desktop. (To climb two levels at
once, cd ../.. works in the same way.)
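Once Terminal is reading from the folder that contains your source file, you compile it by giving GCC the input file and the name you want for the finished executable, for example:

gcc inputfile.m -o outputfile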
Youʼve probably already guessed how it works: inputfile.m contains our code (.m is the
extension used for Objective-C files) and -o tells gcc we want our executable to be called
whatever we specify next, which in the example above is outputfile. To run our creation after
compiling, we simply type:
./outputfile
Simple.
When you compile, the compiler will generate any errors, notifications or warnings related to the
syntax of your code. Errors generated when compiling are understandably referred to as
"compile-time" errors, and this is often the most stressful part of writing an application
(especially when your code isnʼt compiling because you put a single character in the wrong place
or forgot to end a line with a semi-colon). Compiling can also take time when youʼre writing
large applications consisting of multiple files, which is also another reason why compiling can be
a tedious experience. This fact has led to a ubiquitous programmer joke often seen on the t-shirts
of men with unkempt beards: "I'm not slacking off. My code is compiling."
The Basics
Objective-C itself isnʼt that hard to learn. Once you get to grips with the basic principles, you can
pick the rest up as you go along pretty easily. You do need to have an understanding of the
fundamentals of C programming though, and that is what the rest of this tutorial will cover.
#include <stdio.h>

int main() {
    printf("Hello World\n");
    return 0;
}
All this application will do when you run it is display the string "Hello World" in Terminal and
exit.
To try this for yourself, fire up Xcode and make a new Objective-C class. Delete all the code
Xcode gives you by default and paste in the code above. Once you've done that, you can compile
it using Terminal. Open Terminal and change to the location where your file is; if you saved to
the desktop then simply type cd desktop so that Terminal is now reading from your Desktop.
Then type this command:
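Assuming you saved the file as program1.m (adjust the name to match your own file), the GCC invocation mirrors the earlier example:

gcc program1.m -o program1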
Your program should compile with no errors. To run it, simply type:
./program1
Awesome, so what actually happened there? Well, first we imported a library called stdio which
manages the standard I/O (input/output) functions, like printf(). We then create a function called
main which should return an int, or integer, which is basically a number with no decimal point.
We then use the printf() function to output 'Hello World' to Terminal. The \n we use tells
Terminal to put a newline after the text. Finally, we return 0 (remember we said main should
return an integer), which tells the operating system everything went fine. We use the name main
because this is what is triggered automatically when the program is executed.
So far everything should be pretty simple: we wanted to write some text to Terminal, so we
imported a library with a function for writing text, then we used a function from that library to
write the text. Imagine that what you import is a physical library and printf() is one of the books
available.
Variables
Soldiering ahead, weʼre now on to variables. One of the fundamental things we need to be able
to do in our applications is temporarily store data. We do this using variables, which are
containers that can hold various types of data and be manipulated in various ways. We use
variables to store all sorts of data, but we must first tell the compiler what weʼre going to store in
it. Here are several of the most important variable types that you should know about for now:
int (whole numbers), float (numbers with a decimal point), double (like float but with double the
precision), and char (a single character).
When weʼre not using variables, weʼre often using constants. A constant will never change: we
always know what the value will be. If we combine constants we get a constant expression,
which we will always know the result of. For example:
123 + 2 = 125
This is a constant expression, 123 + 2 will always equal 125, no matter what. If we substituted a
constant for a variable, the new expression would look like this:
123 + i = ?
Because i is a dynamic variable, we do not definitely know the result of this equation. We can
change i to whatever we want and get a different result. This should give you an idea of how
variables work.
One thing we still need to know is how do we display variables like we displayed “Hello World”
above? We still use the printf() function, except it changes a little this time:
#include <stdio.h>

int main() {
    int someNumber = 123;
    printf("My number is %i \n", someNumber);
    return 0;
}
What weʼve done here is told the function printf() where we want our integer to appear, then
where it can be found. This is different to a lot of languages such as PHP where you could just
place the variable in the text.
We are not limited to just one variable in printf(). The function can accept multiple
parameters separated by commas, so we can pass in as many as we have format specifiers for in
the text. Above we used %i as the format specifier because we were including an integer. Other
variable types have their own format specifiers:
%i - integer
%f - float
%e - double
%c - char
Imagine you have a sentence that is 11 characters long (like 'Hello World' - don't forget to
include the space); a character array is like having 11 chars all glued together. This means
that the value of the character array overall is 'Hello World' but char[0] is 'H'. In brackets is the
index of the char you're after; because we put 0 we get the first character. Don't forget that
counting in arrays usually starts from 0, not 1.
Conditionals
When an application needs to make a decision, we use a conditional. Without conditionals, every
time you ran your application it would be exactly the same, like watching a movie. By making
decisions based on variables, input or anything else, we can make the application change - this
could be as simple as a user entering a serial number or pressing a button more than 10 times.
There are a few different types of conditionals, but for now weʼre just going to look at the most
common and basic: the if statement. An if statement does what it sounds like, it checks to see if
something is true, then acts either way. For example:
#include <stdio.h>

int main()
{
    if (1 == 1) { // This is always true
        // Do some stuff here
    }
    return 0;
}
If 1 is equal to 1, then whatever is between the brackets is executed. You might also be
wondering why we used two equals signs instead of one. Two equals signs form the equality
operator, which checks to see whether the two values are equal to each other. A single equals
sign is the assignment operator, which assigns the value on the right to the variable on the left.
Above, since 1 will always be the same as 1, whatever is in the brackets would be executed.
What if we wanted to do something if this wasnʼt true though? Thatʼs where else comes in. By
using else we can run code when the if conditional returns false, like so:
int main() {
    if (1 == 1) {
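        // This branch runs, because 1 == 1 is always true.
        // Do some stuff here
    } else {
        // This branch runs only when the condition is false.
        // Do some other stuff here (the branch bodies are left as placeholder comments)
    }
    return 0;
}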
Of course, in real life, we wouldn't be checking to make sure 1 is the same as 1, but the point is
made. Consider an application that closes if you press the close button three times (annoying but
relevant). You could check in the brackets to see how many times it has been pushed. If it is
lower than 3, then your else block could execute code to tell the user how many more times the
button must be pushed to exit. We’ll look at conditionals more when we come to use them in our
applications further along in the series.
Loops
Now let's investigate a programming loop. Loops, as the name suggests, let us loop through a
piece of code and execute it multiple times. This can come in very handy in situations such as
populating a list or repeating a piece of code until a conditional returns true.
There are three types of loops, in order of most common: for, while, and do. Each one is used to
repeat execution of a block of code, but they function differently. Here are examples of each:
// for loop
int main () {
    int i = 9;
    int x = 0;

    for (x = 0; x < i; x++) {
        printf("Count is: %i\n", x);
    }

    return 0;
}
This may look a little complex at first, but it really isnʼt. In the parentheses after for is the
initiator, a conditional, and the action. When the for loop starts it executes the initiator, which in
our case above sets x to 0. Each time the loop runs (including the very first time) it checks the
conditional, which is "is x smaller than i?" Finally, after each loop through the code, the loop
runs the action - which above increments x by one. Simple. Since x is increasing by one each
time through the loop, the code runs nine times, printing the count from 0 up to 8 before x stops
being smaller than i.
// while loop
int main () {
    int x = 0;
    while (x < 10) {
        printf("Count is: %i\n", x); // Watch OUT! Something is missing.
    }
    return 0;
}
Similar to the for loop, the while loop will execute the code between the brackets until the
conditional is false. Since x is 0 and we don't change it in the code block, the above would run
forever, creating an "infinite loop." If you wish to increment x, then in the case of our while loop
you would do this between the brackets:
// while loop
int main () {
    int x = 0;
    while (x < 10) {
        x++;
        printf("Count is: %i\n", x);
    }
    return 0;
}
The do loop is essentially the while loop, except the conditional runs after the block of code.
What this means is when using a do loop, the code is guaranteed to run at least once:
// do loop
int main () {
    int x = 0;
    do {
        x++;
        printf("Count is: %i\n", x);
    } while (x < 10);
    return 0;
}
Pointers can cause a lot of confusion with newcomers to programming or just newcomers to C. It
is also not immediately clear to some people how they are useful, but youʼll gradually learn this
over time. So, what is a pointer?
As the name implies, pointers point to a location. Specifically, locations in computer memory.
Think of it like this, when we create a variable (let's say it's an integer called ʻfooʼ as is so
popular with programming theory) and give it a value of, for example 123, we have just that - a
variable with a value of 123. Now, if we setup a pointer to foo, then we have a way of indirectly
accessing it. That is, we have a pointer of type int that points to foo which holds the value '123.'
This would be done in code like so:
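A minimal sketch of that idea (the pointer name fooPointer is just illustrative):

int foo = 123;                // an ordinary integer variable holding 123
int *fooPointer = &foo;       // a pointer of type int holding the memory address of foo
printf("%i\n", foo);          // prints 123 directly
printf("%i\n", *fooPointer);  // dereferences the pointer and also prints 123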
Clear as mud? Donʼt sweat it. Pointers are hard - often considered the hardest thing to learn when
picking up the C language. Pointers will eventually become second nature to you though, and
there will be more on pointers within Objective-C further in this series.
Wrapping Up
You've just been given a crash-course overview of the C language fundamentals. This part of the
series was intended to be a quick primer on C to get you ready and prepared for the rest of the
series, and should have been especially useful to those who are already familiar with
programming in another language. If you are new to programming in general or are still in doubt
about any basic of C, re-read the above and feel free to leave questions in the comments.
Before next time, be sure to try and compile your own programs using the code above. Set
yourself small challenges, such as making a loop execute 10 times and count each time through
the loop using printf. There's no harm in trying and experimenting; if it goes wrong, then it's
probably even better, as it'll get you on the right track to troubleshooting your own code.
Challenge
For this week, we will end on a simple challenge. You are to create three programs that count to
10 using each type of loop. Since we will use loops often in Objective-C, itʼs good that you learn
to create them by heart. That should be pretty easy, so try to count down from 10 to 1 afterwards
(if ++ increments by one, what could be the code to decrement by 1?).
Designing user experiences for handheld and mobile devices entails much more than scaling
traditional desktop interface paradigms to smaller screens. Effective handheld and mobile user
interface guidelines depend on an overlapping, but distinctive set of mobile user interface
guidelines that take into account mobile user interface factors including:
Quantity and visibility of information that can be displayed at smaller mobile device screen sizes
and lower resolutions.
Design of user interface control ergonomics and touch screen interactions that are constrained by
mobile / handheld device size and capabilities.
Task analysis of mobile user interaction behaviors. When interacting with a mobile or handheld
device, a user is more likely to be multitasking and have divided attention compared to
traditional desktop user interfaces.
The continuing convergence of digital interfaces with physical products is putting interaction
designers in a position where knowledge of anthropometrics, kinesthetics, and other non-
cognitive human capabilities is valuable for creating effective mobile user interface design
solutions.
Knowledge needed: Nothing but a keen eye for detail and basic Photoshop
Requires: Pen, paper, Photoshop (or any graphic editor that can output .pngs)
Project time: 2-5 hours
Your first project in iPhone UI design normally comes about in one of two ways. Maybe your
client has entrusted you with their iPhone app after completing a website for them. Or maybe
you’ve been a bit cheeky and enhanced your skill set a tad because it’s an area that interests you,
and you now find yourself with an app to design, wondering where to start. Either
way, designing for iOS4 devices is totally different to the web and shouldn’t be launched into by
simply opening Photoshop, as tempting as it may be.
Your mantra should be: keep it simple. You won’t be able to provide the exact same experience
someone might be used to on the web or deliver hundreds of complex functions in a single app
without overloading the user. Make peace with this and strip out functions as necessary. Work
out what will be the core functionality. What will people use your app for, above all else?
I always start with what’s called the “App Definition Statement” (ADS): a couple of sentences
that depict the core function of your app and the intended audience. It’s kept intentionally short
and should be referred back to throughout the process to curb adding unnecessary functions and
keep the project on track, avoiding the inevitable scope creep as people get overexcited about
future feature possibilities.
Start with a couple of sentences that summarize your app's core functions, known as the
Application Definition Statement
Once you’ve written your ADS, decide the design type that would best fit your app. There are
five main types of app: ‘serious tool’, ‘fun tool’, ‘serious entertainment’, ‘fun entertainment’ and
‘utility’. Each involves its own unique user interface decisions.
Serious tools always keep a minimal color palette and are driven by data rather than graphics.
Good examples are Mail, Dropbox (see middle below) and Instapaper. If you have a look at the
UI of all of these apps, you’ll quickly get an idea of how a serious tool should be presented.
Fun tools encourage leisurely productivity: they want you to pick up the app and explore, but
with a definite purpose. They have moderate use of color and graphics and are always fairly
simplistic in hierarchy. Good examples of fun tools are Deliveries (see left below), The Ocado
app and Where To?
Serious entertainment sounds a bit like an oxymoron: it took me a while to get my head around
this category. Basically, these are apps that use standard controls top and bottom, but the content
they hold is custom and meant to be entertaining. The best examples of these are the Sky News
app (see right above), Movies Now! and the iTunes app.
Last but not least comes my favourite category to design for – utility apps. Apple calls these the
“fast food” of apps, as they’re normally used for 30 seconds or less. Normally single-screen or
stacked up like a deck of cards one under each other, they’re great fun to design for as they’re
always graphically rich. Examples of utility apps are Weather, Ego and Phases (see left above).
03. Wireframing
You can wireframe however you like – paper, Fireworks, Photoshop – but don’t get too detailed:
keep to shades of grey and white and block elements. Don’t produce detailed icons or anything
that could steer your client away from the main task of signing off the functionality; they can
sign off design elements later.
Think about the gestures a user will need to use to get from one screen to another or to refresh a
page and sign these off at this stage as well. You might also want to bear in mind whether you’re
going to support a different landscape mode, in which case you’ll need to wireframe this too.
Giving the user a visually rich landscape mode can really add brownie points to your app.
Create your wireframe however you like: there's nothing wrong with good old-fashioned pen and
paper
You’re also going to need to think about the space for touch gestures, such as buttons. The
minimum hit size on an iPhone is 44 x 22px: anything smaller than this and a user might get
frustrated with mis-hitting their intended buttons. The ideal fingertip target is a comfortable 44 x
44px. You also have to think about the space between anything a user will need to touch. The
recommended minimum space between elements is between 12 and 22px.
If you’re presenting your wireframes to clients, take the time to produce comprehensive
documents, annotate where required and reinforce any design or UX decisions you’ve had to
make for the greater good of the app. By putting your thoughts down on paper and explaining
details concisely, you’ll minimise questions and queries from the client throughout the process.
I always embed my screens into a Keynote document, with the screen on the left and a paragraph
about the screen to the right. I never embed full quality artwork because I’ve had my fingers
burned too many times. It’s a good habit to slightly downsize the artwork for presentation.
If you're presenting your wireframes to clients, produce comprehensive documents to put your
design in context
You’ll notice that the apps that ship with your iPhone or iPad are of the highest quality, that
attention to detail is in abundance and that they’re pleasing to the eye. The apps that get the
greatest graphical reviews are those that follow suit. The iPhone and iPad are in such close
proximity to your eye level that it’s possible to make the most subtle of textures and gradients
noticeable. Flat, block colours can work well but just adding hints of gradients, texture and
realism can lift your app from good to fantastic.
Other elements that can make an app beautiful are text highlights, tactile backgrounds, subtle
shadows, high gloss finishes and clean, crisp, custom icons within your app (of course, all used
sparingly and appropriately).
Studying the UI of your favourite app will help you to see these little details but the best apps are
always those that get the UI and the UX right, such as the Twitter for iPhone app (formerly
Tweetie). The “pull to refresh” has become a standard gesture across many apps and coupled
with a beautiful user interface, it's no wonder it has become a favourite amongst Twitter iPhone
users.
While you're designing, keep your documents neat and tidy. If you're working in Photoshop,
make sure you group and name layers sensibly. Whilst Photoshop is an industry favourite, you
can work in any program that can export PNG files, as this is what is used for development.
It’s normal to hand over a Photoshop document to a developer at the end of the process: unlike
the web, this isn’t frowned upon. It’s always worth asking your developer whether they’d like
you to pre-slice the UI into PNG files.
05. Icons
I’d argue that your icon is one of the most important things you’ll design: it will be someone’s
first contact with your app on the App Store. Start on paper: it’s expendable and easy to let your
ideas flow without committing to anything. Once you have something you feel you could
develop further and better on the computer, move forward. The best icons always portray a
single, defined silhouette and tell a story of what your app represents.
Try to leave text out of the icon: it rarely works and is mostly unnecessary as your app name will
be presented below the icon anyway.
Spotlight icons often get forgotten. They’re 29 x 29px with a border radius of 5px. These are
seen on the search screens of the iPhone (see picture on the left).
Your standard iPhone icon is 57 x 57px with a border radius of 10px and iPad icons 72 x 72px
with a border radius of 12px.
Don’t forget about your icons for the App Store, which are a supersize 512 x 512px but normally
scaled to 175 x 175px for the App Store.
Apple’s approval process can sometimes take you down winding paths and leave you scratching
your head as to why your app has been rejected and a similar app has been let through. Although
there’s no holy grail for getting approval, there are certain things you can do to maximize your
chances.
If possible, use the SDK alone. Some "gap" methods use forms of JavaScript that Apple doesn't
always like. By sticking purely to the SDK, coupled with Cocoa and Objective-C, you'll know
your back-end code won't ever be a problem.
Something else Apple doesn’t like is use of its icons or imagery. I made this mistake once. I used
what I thought was a generic podcast icon but it actually belongs to Apple and they don’t want
you using it in any of your apps. By the same token, don't use any Apple trademarks.
Apple also isn’t too keen on you using popular stock icons found around the internet. I’ve seen
apps get rejected for this reason before, with the developers being asked to swap them out for
“something more custom”.
Once upon a time it was easy to estimate your design workload for the iPhone. Nowadays,
designing for this platform can be very time-consuming due to having to produce another set of
artwork for Retina display. And if you’re designing for an iPad version of an app, this is a third
set of artwork that needs to be completed.
On a normal app (by which I mean not a game, as games are the most time-consuming
graphically), I can usually get two screens completed per day - but never charge per screen. If
you’ve done your UX and wireframing correctly, you should have condensed the number of
screens to a nice, usable figure and so you’ll have shot yourself in the foot if you’re charging per
screen.
When wireframing don't produce detailed icons or anything that could steer your client away
from signing off the functionality
Finally, iPhone 4 Retina display artwork is automatically used when a developer adds the images
to the app package with @2x appended to the end of the filename. For example, if you had a logo,
you would need two files, logo.png and logo@2x.png, with @2x denoting the artwork suitable
for iPhone 4.
If you’re producing custom icons within your app, you should get into the habit of producing
them in vector format, as these can then be used across all screen sizes and simply added to your
Photoshop document.
08. Wrap up
Following these guidelines should stand you in good stead for your first iPhone app. Above all,
keep your solutions simple, elegant and clean. Take time at the start to work out the best design
route and you’ll be on your way to success.
No matter how you measure it, mobile is huge and growing. The convergence of cloud
computing, ubiquitous broadband, and affordable mobile devices has begun to transform every
aspect of our societies. Analysts predict that by 2015, mobile phones will overtake desktop
computers as our primary means for accessing the internet.
In order to keep pace with this rapidly changing landscape, designers and developers – and the
people who work with them – need to start thinking about mobile as a primary project goal; not
something tacked onto a desktop-centric project as an afterthought.
Mobile is different
Although they are often lumped together as computing devices, smartphones and desktop
computers are very different: small screen vs big screen, intermittent vs reliable connectivity,
low vs high bandwidth, battery powered vs plugged in, and so on. Given this list, one might be
tempted to think of mobile devices as underpowered versions of 'real' computers. But this would
be a mistake.
In fact, the reverse is true: smartphones are actually more powerful than desktops in many ways.
They are highly personal, always on, always with us, usually connected and directly addressable.
Given the many differences between mobile and desktop computing devices, it should come as
no shock that designing for mobile is very different than designing for the desktop. From my
workshops, I’ve compiled a list of 10 principles of mobile interface design that help people
familiar with desktop design and development unleash the unique power of the mobile platform.
Because of the differences between mobile and desktop, it’s imperative to get yourself into a
mobile mindset before getting started.
Be focused: More is not better. Edit your features ruthlessly. You are going to have to
leave stuff out.
Be unique: Know what makes your app different and amplify it. There are lots of fish in
the sea of mobile apps. If there’s nothing special about your app, why would anyone pick
it?
Be charming: Mobile devices are intensely personal. They are our constant companions.
Apps that are friendly, reliable and fun are a delight to use, and people will become quite
attached to the experience.
Be considerate: App developers too often focus on what would be fun to develop, their
own mental model of the app or their personal business goals. These are good places to
start, but you have to put yourself in your users' shoes if you ever hope to create an
engaging experience.
The image of the busy professional racing through the airport with a bag in one hand and
smartphone in the other is what lots of people picture when they think about mobile computing
context. It is certainly one context, but it’s not the only one. To begin to put ourselves in the
shoes of our users, we need to consider three major mobile contexts: Bored, Busy and Lost.
Bored: There are a lot of people using their smartphones on the couch at home. In this
context, immersive and delightful experiences geared toward a longer usage session are a
great fit. Still, interruptions are highly likely so be sure your app can pick up where your
user left off. Examples: Facebook, Twitter, Angry Birds, web browser.
Different apps call for different approaches, designs and techniques. That said, the inherent
nature of a pocket-sized touchscreen device suggests several global guidelines; ie, the stuff that
always matters.
Clear for iOS is a to-do list app that has no chrome at all; it’s pure content
Controls: When you do have to add controls, try to put them at the bottom of the screen
(in other words beneath the content). Think of an adding machine, a bathroom scale or
even a computer – the controls are beneath the display. If they weren’t, we wouldn’t be
able to see what was going on with the content while we were using them.
Contrast this real-world design consideration with traditional web or desktop software,
where navigation and menu bars are virtually always at the top. This makes sense in a
mouse context because the pointer is nearly invisible. Not so with the 'meat pointer' at the
end of your arm.
Scrolling: Avoid scrolling. I can assure you that 'below the fold' exists for mobile. Also,
having a non-scrolling screen has a more solid and dependable 'feel' than a scrolling view
because it’s more predictable. Of course, certain screens have to scroll, but it’s good to
avoid it where you can. If you think discoverability might be an issue, you can reverse
animate scrollable content into its default position to give a subtle but effective indication
that there is more content out of view.
There are plenty of novel navigation models for mobile apps (Path’s radial corner nav springs to
mind), but if you’re going to use one of the common navigation models, be sure to pick the one that
makes the most sense for your app.
Typing stinks even on the best devices, so you should do what you can to make it easier for your
users. For example (a short sketch follows this list):
There are about a dozen keyboard variations on popular smartphones (text, number,
email, URL and so on). Consider each of your input fields and be sure to display the
keyboard that will be most useful for the data entry being done.
Auto-correct can be so hilariously frustrating that there is a website devoted to it.
Consider each of your input fields and decide which auto entry options should be enabled
(such as auto-correct, auto-capitalisation and auto-complete).
If your app invites a lot of typing, you should ensure you support landscape orientation
for fat-thumbed folks like me.
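As a rough illustration of the first two points, assuming two hypothetical UITextField outlets named emailField and amountField:
//Email entry: show the email keyboard and switch off auto-correction and auto-capitalisation
emailField.keyboardType = UIKeyboardTypeEmailAddress;
emailField.autocorrectionType = UITextAutocorrectionTypeNo;
emailField.autocapitalizationType = UITextAutocapitalizationTypeNone;
//Numeric entry: show the number pad
amountField.keyboardType = UIKeyboardTypeNumberPad;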
06. Gestures
One of the most iconic aspects of modern touch interfaces is that they support gesture-based user
interaction. As cool as gestures are, there are several things you need to keep in mind:
Invisible: Gestures are invisible, so discovery is an issue. You have to decide how to
reveal their existence to the user. The cleverest approach I’ve seen is on the promotional
iPads mounted in Apple’s retail stores. When a page first loads, any scrollable areas do a
quick 'reverse scroll' into their default position. This immediately invites a swipe or flick
gesture from the user without having to explicitly indicate which areas are scrollable.
Two hands: Multi-touch gestures require two-handed operation. I find this particularly
evident in the native Maps app on iOS which uses a pinch open gesture to zoom out.
When I’m traveling in a foreign city with a coffee in one hand and my phone in the other,
this is an annoying limitation. Android addresses this issue by including zoom in/out
buttons overlaid on the map (which means you can continue enjoying your coffee while
hoofing it around London).
Nice to have: In most cases, I consider gestures a 'nice to have' but not critical. Sort of
like keyboard shortcuts – power users will love them, but most people won’t even know
they are there.
07. Orientation
Portrait is by far the most popular orientation so optimize for this case first.
If your app invites lots of typing, you should support landscape orientation so people can
access the larger keyboard.
When orientation changes unexpectedly, it’s, well… disorienting. If you think your app
will be used for long periods of time (for example, the Kindle Reader app), consider
adding an orientation lock right in the app.
08. Communications
Provide feedback: Provide instant feedback for every interaction. If you don’t, the user
will wonder if the app has frozen up, or if they missed the target they were trying to hit.
The feedback could be tactile (like the Android ‘thump’ vibration), or visual
(highlighting a tapped button, for instance). If the user has requested an action that is
going to take a long time, display a spinner or progress bar to let them know that you
received their request and are working on it.
Modal alerts: Modal alerts are extremely pushy and intrusive to the user’s flow, so you
should only use them when something is seriously wrong. Even then, try to mitigate the
intensity by keeping language reassuring and friendly. Remember not to use modal alerts
for 'FYI' type information.
Confirmations: When you have to ask a user to confirm an action, it’s acceptable to
display a modal confirmation dialog (such as 'Are you sure you want to delete this
draft?'). Confirmations are less intrusive than alerts because they are in response to a user
action and therefore in context and perhaps even expected. Be sure to make the 'safest'
choice the default button in the dialog to help avoid inadvertent destructive actions (a small sketch follows this list).
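A minimal sketch of such a confirmation, using a UIAlertView of this era with the 'safe' choice as the cancel (default) button; the button titles are illustrative:
UIAlertView *confirm = [[UIAlertView alloc]
        initWithTitle:@"Delete Draft?"
              message:@"Are you sure you want to delete this draft?"
             delegate:self
    cancelButtonTitle:@"Keep"          //the safe choice is the default
    otherButtonTitles:@"Delete", nil];
[confirm show];
[confirm release];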
09. Launching
When a user goes back into your app after having used it previously, you should resume
operations right where the user left off. This will give the illusion of speed and contribute to an
overall feel of responsiveness.
If possible, the launch screen you display when the app is first loading should be a 'content-less'
image of your app. Anything that looks interactive (such as buttons, links, icons, content) will
create frustration by inviting failed interactions.
NOTE: Resist the temptation to place branding materials on your launch screen. They make the
user feel as if they’re viewing an ad and they’ll resent you for making them sit through it. Of
course, a branded launch screen doesn’t last any longer than an empty one, but the perception of
delay exists regardless.
Your icon: Your icon has to compete for attention in a sea of other icons. That being the
case, think of it more as the business card than an art piece. Be literal – show what your
app does. Use a strong silhouette and keep text to a minimum. A polished icon suggests a
polished app, so it’s worth devoting serious time and money to doing it right.
First launch: First launch is a make or break situation. If a new user gets confused or
frustrated while trying to acquaint themselves with your app, they’ll ditch it ASAP. If
your app provides complex functionality, you might want to include a 'tips and tricks'
overlay, or perhaps a few panels of orientation screens. Note that this is not a substitute
Conclusion
Mobile computing represents a staggering opportunity for web designers and developers who
want to become productive on mobile. Yes, there is a bit of a learning curve, but much of a web
professional’s legacy experience, skills, and tools will translate nicely. Admittedly, the rate of
change in the mobile world can be a bit daunting at times – but hey… at least it’s never boring.
Introduction
In the first article in this series, I provided a quick introduction to Objective-C, and talked a bit
about memory management, working with the controls, and persisting information to files. In
this article, I want to introduce some of the graphics functionality. I will be using the iPad
emulator as the target device for this article because of the much better display surface that it
provides. But the APIs shown here will work on the iPad, iPhone, and iPod Touch. Since these
APIs were ported from Mac OS X, they will work on a Macintosh as well.
Prerequisites
To take advantage of this article, you will want to have some familiarity with Objective-C and
iPhone development. If you don't, then you will want to take a look at the first article I wrote on
iPhone development. You'll also want to be comfortable with math (algebra and some
trigonometry) as graphics and math go hand in hand. The only hardware you need for this article
is an Intel based Macintosh running Snow Leopard and the iOS SDK.
Available APIs
The iPhone supports two graphics API families: Core Graphics/Quartz 2D and OpenGL ES.
OpenGL ES is a cross platform graphics API. Quartz 2D is an Apple specific API. It is a part of
the Core Graphics framework. OpenGL ES is a slimmed down version of a much larger graphics
API: OpenGL. Keep in mind that OpenGL ES is an application programming interface; it
describes the available functions, structures, semantics on how they are used, and the behaviours
that the functions should have. How a device manufacturer chooses to implement these
behaviours and conform to this specification would be their implementation. I point this out
because I come across a lot of conversations based on a misunderstanding of the difference
between the API itself and a particular implementation of it.
Representing Colors
There are several different ways to represent a color digitally. The typical way is to express a
single color by expressing the intensities of the primary colors that when mixed together will
result in the same color. The primary colors are red, green, and blue. If you were thinking of
yellow as being a primary color and not green, then you are probably thinking of the primary
subtractive colors (relevant when using paint on paper, but not when illuminating pixels). There
are other systems of digitally representing colors supported by Quartz 2D, but I won't discuss
them here. I'll only use colors expressed in red, green, and blue. This is also called the RGB
color space. Each one of the components of these colors is expressed as a floating point number.
The lowest intensity is 0, and the highest intensity is 1.0.
In addition to those pixel intensities, there's a fourth color component usually named "Alpha".
The alpha component is used to represent a level of transparency. If a color is completely opaque
(non-transparent), this value will be 1.0. If a color is completely transparent (and thus invisible),
the value will be 0. When an RGB color also has an alpha component depending on the system
being looked at, this will either be called ARGB color space or RGBA color space (the
difference being where the alpha component is located). Within the rest of this material, RGBA
will be used to describe colors of this type. While Quartz 2D supports a number of different color
formats, OpenGL ES only supports RGBA.
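For example, a fully opaque orange expressed as RGBA components in the 0 to 1.0 range looks like this on iOS:
//Red at full intensity, green at half intensity, no blue, completely opaque
UIColor *orange = [UIColor colorWithRed:1.0 green:0.5 blue:0.0 alpha:1.0];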
Screen Coordinates
When positioning items on the screen, you'll often use points (CGPoint). It is natural to assume
that a point coordinate and a pixel coordinate are the same. But
in iOS, this isn't always the case. A point doesn't necessarily map to a pixel of the same
coordinate. The mapping is handled by the system. You get to see it come into play the most
when you are looking at how one application runs on devices with different pixel resolutions. If
you want to see the relationship between the pixels and points, you can look at the scale factor
that is exposed by the UIImage, UIScreen, or UIView classes.
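For example, the screen's scale factor can be read in one line:
//1.0 on a non-Retina device, 2.0 on a Retina device
CGFloat scale = [UIScreen mainScreen].scale;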
With Quartz 2D, you are rendering to either a view or an in-memory image. The surface on
which you draw has a color, and if you call various functions to render onto that surface while
drawing with transparent colors, the color will mix with whatever is under it as it is drawn.
In the example programs, we'll start off with drawing to UIView so that you can immediately
jump into seeing how Quartz 2D works. To do this, we will create a new view class derived from
UIView and will make calls to draw with Quartz 2D in the object's (void)drawRect:(CGRect)rect
method.
The Core Graphics APIs all act within a context. You'll need to get the context for your view and
pass it to the Quartz 2D functions to render. If you were rendering to an in memory image, then
you would pass its context instead. The context of your view can be acquired with the following
function call:
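CGContextRef context = UIGraphicsGetCurrentContext();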
Open Xcode and create a new iOS View based application named MyDrawingApp. Once the
application is created, click on the Classes folder. We are going to create a new UIView control
and perform our rendering within that view. Create a new Cocoa Touch class file by right-
clicking on the Classes folder and selecting "Add New File". Select Objective-C class and
choose UIView as the Subclass of setting. (The default is NSObject. Make sure this isn't
selected.) Click on "Next", and when you are prompted for a name for the file, enter
"MyDrawingView.m". Both a *.h and a *.m file will be created.
For this first program, the only thing I want to do is get something drawing on the screen; other
than drawing something on the screen, there's nothing more that this program will do. Open the
*.m file for the class that you just added. We'll start off with overriding the classes initialization
method. Our instances of this class are going to be created within the Interface Builder. Objects
created that way are initialized with a call to initWithCoder: instead of a call to init. So that's the
method we need to override.
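A bare override, which does nothing beyond calling the superclass, looks like this:
- (id)initWithCoder:(NSCoder *)aDecoder
{
    if ((self = [super initWithCoder:aDecoder]))
    {
        //Nothing to initialize yet; this is a placeholder for later code
    }
    return self;
}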
Right now, there's nothing that we need to do in the initialization method. But I've had you
include it here as a place holder for other code. To display this view on the phone, we are going
to set it as the class of the application's view. Within Xcode, find
MyDrawingAppViewController.xib and open it in the Interface Builder. Press command-4 to
ensure that the identity inspector is open. You'll see that currently the view is set to inherit from
UIView. We want to instead have it inherit from our class MyDrawingView. Save your changes
and close the Interface Builder. Compile and run your code to make sure that all is in order. Once
you've done this, we are ready to start drawing!
In MyDrawingView.m, there is a method named drawRect: that contains no code. That's where
we are going to place our drawing code. We'll need to get our graphics context, set our drawing
color and other properties, and then draw our shapes on the screen. For now, let's draw a simple
line.
- (void)drawRect:(CGRect)rect {
//Get the graphics context that we will draw into
CGContextRef context = UIGraphicsGetCurrentContext();
//Create the color that will be used for drawing (any color will do; red is used here)
UIColor* currentColor = [[UIColor alloc] initWithRed:1.0 green:0 blue:0 alpha:1.0];
//Set the width of the "pen" that will be used for drawing
CGContextSetLineWidth(context,4);
//Set the color of the pen to be used
CGContextSetStrokeColorWithColor(context, currentColor.CGColor);
//Move the pen to the upper left hand corner
CGContextMoveToPoint(context, 0, 0);
//and draw a line to position 100,100 (down and to the right)
CGContextAddLineToPoint(context, 100, 100);
//Apply our stroke settings to the line.
CGContextStrokePath(context);
[currentColor release];
}
While not directly related to graphics, I want to venture into a bit on touch interactions. This
program would probably be more interesting if it were interactive. We are going to change it so
that the line will be drawn between two points that you select by dragging your finger on the
screen. We are also going to change the program to persist its reference to the color instead of
grabbing a new one every time the screen is refreshed. Open the MyDrawingView.h file and
make the following additions:
#import <UIKit/UIKit.h>

@interface MyDrawingView : UIView

//The two endpoints of the line and the color used to draw it
@property (nonatomic) CGPoint fromPoint;
@property (nonatomic) CGPoint toPoint;
@property (nonatomic, retain) UIColor* currentColor;

@end
The appropriate @synthesize statements will need to be added to the top of the
MyDrawingView.m file. Add the following to that file:
#import "MyDrawingView.h"
@implementation MyDrawingView
@synthesize fromPoint;
@synthesize toPoint;
@synthesize currentColor;
I've not said anything about touch interactions up to this point. I'll talk about touch events and
other event handling in another article; for now, I'm going to take the satisfying route and speed
through the interactions of interest. There are three events that we will need to respond to in order
to add touch interactions to the program: touchesBegan:, touchesEnded:, and touchesMoved:. The code
for the needed events is as follows. Add it to your MyDrawingView.m file.
- (void) touchesBegan:(NSSet*)touches withEvent:(UIEvent*)event
{
UITouch* touchPoint = [touches anyObject];
fromPoint = [touchPoint locationInView:self];
toPoint = [touchPoint locationInView:self];
[self setNeedsDisplay];
}
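The touchesMoved: and touchesEnded: handlers follow the same pattern; a minimal version simply keeps updating the end point as the finger moves:
- (void) touchesMoved:(NSSet*)touches withEvent:(UIEvent*)event
{
    UITouch* touchPoint = [touches anyObject];
    toPoint = [touchPoint locationInView:self];
    [self setNeedsDisplay];
}
- (void) touchesEnded:(NSSet*)touches withEvent:(UIEvent*)event
{
    UITouch* touchPoint = [touches anyObject];
    toPoint = [touchPoint locationInView:self];
    [self setNeedsDisplay];
}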
The only things left are to change our drawing code so that instead of drawing between two fixed
points, it will draw between the points that we touched, and remove the declaration and release
of currentColor within our drawing code (since we are now using a member property to store the
color).
- (void)drawRect:(CGRect)rect {
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextSetLineWidth(context,4);
CGContextSetStrokeColorWithColor(context, currentColor.CGColor);
CGContextMoveToPoint(context,fromPoint.x , fromPoint.y);
CGContextAddLineToPoint(context, toPoint.x, toPoint.y);
CGContextStrokePath(context);
}
Run the program and try dragging your finger (or mouse) on the screen in various points. You'll
see the line draw between points that you touch.
There are two image types available on the iPhone. These are CGImage and UIImage. CGImage is a
struct that contains image data that can be passed around to various Core Graphics functions.
UIImage is an Objective-C class. By far, the UIImage class is much easier to use, so let's start
with using it to draw an image in our program. Find an image on your computer that's under
500x500 pixels. The image can be a PNG or JPEG file. Within your project in Xcode, you will
see a folder called Resources. Click-and-drag your image to the Resources folder in Xcode, and
when prompted, select the option to Copy items into destination group's folder (if needed). I'm
using a file named office.jpg, and will refer to my image file by this name. Remember to replace
this with the name of your image.
Within the MyDrawingView.h file, declare a new UIImage* variable named backgroundImage.
In the MyDrawingView.m implementation file, add a @synthesize backgroundImage; statement.
We need to load the image from the resources when the view is initialized. Within the
-(id)initWithCoder: method, add backgroundImage = [UIImage imageNamed:@"office.jpg"];.
Remember to replace @"office.jpg" with the name of your image file. This line will load the
image from the resources. At the top of the -(void)drawRect: method, add the following two
lines:
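The original listing is not reproduced here, but two lines along the following pattern will draw the image behind everything else that is rendered:
CGRect imageRect = CGRectMake(0, 0, self.bounds.size.width, self.bounds.size.height);
[backgroundImage drawInRect:imageRect];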
If you run the program now, it will have a background image rendered behind the lines that you
draw.
There's a conceptual layer of separation between the physical resolution of the screen of an iOS
device and the coordinates that you use for drawing. In many graphical environments, the terms
point and pixel could be used interchangeably. On iOS devices, the Operating System will map
points to pixels. Drawing something at position (10,25) may or may not cause an object to appear
at 10 pixels from the left and 25 pixels from the top. The relationship between points and the
actual pixels can be queried though a scale factor that can be read from UIScreen, UIView, or
UIImage. You can see the result of this separation in logical vs. physical coordinates when
looking at the same program running on an iPhone 3Gs and iPhone 4. Assuming the developer
hasn't done anything to take advantage of the higher resolution of the iPhone 4's screen, when the
code draws a line or an image on the device's screen, it will take up the same amount of
proportional space on the device's screen.
Vector based operations such as drawing rectangles, lines, and other geometric shapes will work
on standard and higher resolution devices just fine without any need to adjust the code. For
bitmaps, there's a bit of additional work that you'll need to do. You will need to have a standard
and high resolution version of your image to get the best possible results. The name for your
resources should conform to a specific pattern. There's a pattern for standard resolution devices
and another for high resolution devices.
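For a PNG image, the two patterns look like this:
ImageName[DeviceModifier].png (standard resolution)
ImageName@2x[DeviceModifier].png (high resolution)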
The [DeviceModifier] part of the resource name is optional. It can be the string ~iphone or
~ipad. The main difference between the names of the low and high resolution versions of the
image is the '@2x' in the name. The width and height of the high resolution image should be twice
the width and height of the standard resolution image. (To any one familiar with MIP maps, this
will be familiar.)
Paths
A path describes a shape. Paths can be made of lines, rectangles, ellipses, and other shapes.
Coordinates within a drawing space are specified using points. It's easy to think of points as
pixels, but they are not the same thing (more on that in the Points vs. Pixels section). In general,
you'll be communicating points by passing a pair of floating point numbers or by using a CGPoint
structure.
Curved lines (more specifically, Bezier curves) can be generated with the function
CGContextAddCurveToPoint. The curved line will start at the point where the last drawing
operation ended (remember that you can change this point using CGContextMoveToPoint), and
will end at the point specified in the function call, and its curve will be affected by two control
points that are also passed in the function call. If you've never worked with Bezier curves before,
there's a good article on them at Wikipedia.org.
If you need to create a complex path (a path composed of many paths), you'd start off by calling
CGContextBeginPath, and then set the starting point of your path with a call to
CGContextMoveToPoint. Then make calls to add shapes to the path. When you are done, close
the path with CGContextClosePath. Creating a path doesn't render it to the screen. It's not
rendered until you paint it. Once it has been painted, the path is removed from the graphics
context and you can begin rendering a new path (or some other operation).
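As a small sketch, the following builds a triangular path from individual segments and then strokes it:
CGContextBeginPath(context);
CGContextMoveToPoint(context, 50, 50);
CGContextAddLineToPoint(context, 150, 50);
CGContextAddLineToPoint(context, 100, 150);
CGContextClosePath(context);
//The path only becomes visible once it is painted
CGContextStrokePath(context);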
The filling rules for simple geometries is straightforward, and doesn't need much explanation;
the area inside the lines is filled. When creating your own custom paths with borders that
overlap, the rules for the area that gets filled are a little more complex. According to the Apple
documentation, the rule used is called the nonzero winding number rule (found here). The
procedure described for deciding whether a certain point is within the area to be filled or not is a
little abstract. Choose the point you want to test, and draw a line from it to beyond the borders of
the drawing, counting the number of path segments that it intersects. Starting with a count of
zero, add to your count every time the line intersects a path segment going from left to right, and
subtract every time it crosses a path segment going from right to left. If the result is zero, then the
point should not be filled; otherwise, it should be filled. An alternative rule
is to simply count the number of times the line drawn in the above procedure crosses a path
segment irrespective of the direction of the segment. If the result is even, then don't fill the point.
Otherwise the point is to be filled. This is called the even-odd rule.
Clipping
CGContextClip will apply the current path against the current clipping area.
CGContextClipToRect will apply a rectangle to the clipping area. CGContextClipToRects will
apply multiple rectangles to the clipping area.
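For example, the following restricts all subsequent drawing to the upper-left quarter of the view:
CGContextClipToRect(context,
    CGRectMake(0, 0, rect.size.width/2, rect.size.height/2));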
Gradients
A gradient is an area that gradually changes color. Quartz 2D offers two types of gradients: a
linear (or axial) gradient and a radial gradient. The changes in your gradient colors can also
include changes in the alpha value. There are two objects available for creating gradients:
CGShadingRef and CGGradient.
The CGGradient type is the easier of the two methods to use for creating a gradient. It takes a list
of locations and colors, and from that list, the color for each point in the gradient is calculated for
you. I only use RGB color space in my code examples so that's what I will be using for the color
space option for the gradients. Some of Apple's documentation will refer you to
CGColorSpaceCreateWithName(kCGColorSpaceGenericRGB); to do this, but ignore that. That
function is deprecated. Instead, use CGColorSpaceCreateDeviceRGB();. If you add the
following code to the beginning of the -(void)drawRect function and rerun the program, you'll
see a linear gradient rendered in the background.
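The original listing is not shown here; a sketch along the following lines produces a simple two-color linear gradient running from the top to the bottom of the view:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGFloat colors[] = { 0.0, 0.0, 1.0, 1.0,     //start color (blue, opaque)
                     1.0, 1.0, 1.0, 1.0 };   //end color (white, opaque)
CGFloat locations[] = { 0.0, 1.0 };
CGGradientRef gradient =
    CGGradientCreateWithColorComponents(colorSpace, colors, locations, 2);
CGContextDrawLinearGradient(context, gradient,
    CGPointMake(0, 0),
    CGPointMake(0, self.bounds.size.height), 0);
CGGradientRelease(gradient);
CGColorSpaceRelease(colorSpace);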
If you wanted to do a radial gradient instead of a linear gradient, then instead of calling
CGContextDrawLinearGradient, you would need to call CGContextDrawRadialGradient().
The second circle of this radial gradient is centered at the center of the screen, so the gradient
stops with the circle. Optionally, the gradient could be set to continue beyond the circle or extend
before the beginning of the first circle. To do this, the last parameter should contain the option
kCGGradientDrawsAfterEndLocation to extend the gradient past the end point, or the option
kCGGradientDrawsBeforeStartLocation to have the gradient stretched to the area before the start
point. The result of using this option with the linear and radial gradients can be seen below.
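Using the same CGGradientRef from the sketch above (before it is released), the radial form with the extend option looks roughly like this:
CGPoint center = CGPointMake(self.bounds.size.width/2, self.bounds.size.height/2);
CGContextDrawRadialGradient(context, gradient, center, 0, center, 100,
    kCGGradientDrawsAfterEndLocation);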
Using CGShadingRef
CGShadingRef takes a CGFunction that you create which contains a function that is used for
calculating the colors in a gradient. The CGShading object also contains information on what
type of gradient is being generated (linear or radial), and the starting and ending points for the
gradient. Once the CGShading object is created and populated, the gradient is rendered with a
call to the function CGContextDrawShading.
When you create your shading function, there are three parameters that you'll need to define. The
function's return type is void.
void *info - Pointer to the data that you decide to pass to your function.
const float *inValue - The input values for your function. You define the input range for
this parameter.
float* outValues - An array to the output values for your function. You must have one
output value for each component of your color space, plus the alpha component. The
range for each component is between 0 and 1.
static void myCGFunction ( void * info, const float *in, float * outValue)
{
int componentCount = (int)info;
float phaseDelta = 2*3.1415926/(componentCount-1);
outValue[componentCount-1] = 1; //Set the alpha value to 1
for(int n=0;n<componentCount-1;++n)
{
outValue[n] = sin(phaseDelta*n+3.0*(*in));
}
}
Once this function is defined, you'll need to package it into a CGFunctionref structure. You can
use the CGFunctionCreate function to do this. In the following code, I initialize some variables
to use as the parameters for CGFunctionCreate and pass a pointer to my function. The end result
is stored in myFunctionRef.
//The callback's single input ranges from 0 to 1, and each of the four RGBA output
//components also ranges from 0 to 1 (values assumed here to match the callback above)
size_t colorComponentCount = 4;
CGFloat inputRange[2] = {0, 1};
CGFloat outputRange[8] = {0, 1, 0, 1, 0, 1, 0, 1};
CGFunctionCallbacks callback = {0, &myCGFunction, NULL};
CGFunctionRef myFunctionRef =
//I'm passing the number of components as the option value
CGFunctionCreate((void*)colorComponentCount,
1, //The callback function takes one value for its input
inputRange, //The range of the input values
colorComponentCount, //Number of output components (including alpha)
outputRange, //The range of each output component
&callback
);
With the CGFunctionRef object, you create the appropriate CGShadingRef object using either
CGShadingCreateAxial or CGShadingCreateRadial. You then render your gradient using
CGContextDrawShading.
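A sketch of those final two steps, reusing the myFunctionRef created above:
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGShadingRef shading = CGShadingCreateAxial(colorSpace,
    CGPointMake(0, 0),                          //start point of the gradient
    CGPointMake(self.bounds.size.width, 0),     //end point of the gradient
    myFunctionRef,
    false, false);                              //don't extend past the end points
CGContextDrawShading(context, shading);
CGShadingRelease(shading);
CGColorSpaceRelease(colorSpace);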
Patterns
A pattern is a set of graphics operations that are repeated over and over again over a surface.
Quartz 2D will divide an area into subsections of cells, and will use a callback function defined
in your program to render each cell. The cells will be of uniform size, and there may be some
spacing between each row and each column in the cell (it's up to you how much spacing is
present). There are two types of patterns: color patterns and stencil patterns. Stencil patterns are
like masks; they don't have a color in and of themselves, but can be applied against a color.
Think of them as being like a rubber stamp; you could apply ink of any color against a stamp and
the stamp itself doesn't have an inherent color. Once you have a pattern defined, it is used in
much the same way that you would use a solid color.
To start off, you'll need to define a function that renders your pattern. Much like the Shading
function (discussed in the gradient section), the first parameter will be data that you defined. The
next parameter is the context on which your pattern is to render. The function's prototype is
defined as follows:
typedef void (*CGPatternDrawPatternCallback) (
void *info,
CGContextRef context
);
When using a pattern, the color space must be set. This is done through the
CGContextSetFillColorSpace function. In addition to the context, this function also takes a
CGColorSpaceRef object. You can create this using CGColorSpaceCreatePattern, passing
NULL as its only parameter. After the colorspace has been set, it can be released with a call to
CGColorSpaceRelease.
The function for creating a pattern takes a lot of parameters. Let's take a look at the function's
prototype and then work through what each one of the parameters means:
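CGPatternRef CGPatternCreate(void *info, CGRect bounds, CGAffineTransform matrix,
    CGFloat xStep, CGFloat yStep, CGPatternTiling tiling,
    bool isColored, const CGPatternCallbacks *callbacks);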
As per usual, the info parameter contains data that you want to pass to your callback. The bounds
parameter contains the size of one cell within your pattern. The matrix parameter contains a
transformation matrix to be applied to the pattern. This could be used for operations such as
scaling or rotating the pattern. The xStep and yStep parameters contain the amount of horizontal
and vertical spacing to put between the pattern cells. The tiling parameter can have one of three
values: kCGPatternTilingNoDistortion, kCGPatternTilingConstantSpacingMinimalDistortion, or
kCGPatternTilingConstantSpacing.
isColored is set to true if the pattern is a color pattern, and false if it is a stencil pattern. The last
parameter is a CGPatternCallbacks structure. This struct is defined as follows:
struct CGPatternCallbacks
{
unsigned int version;
CGPatternDrawPatternCallback drawPattern;
CGPatternReleaseInfoCallback releaseInfo;
};
The version field should be set to 0. drawPattern is a pointer to your rendering function. If you
had any cleanup that needed to be done (releasing memory) after your pattern is done rendering,
a pointer to your cleanup function would go in releaseInfo. Otherwise, this parameter should be
NULL. For my example, I'm creating a simple pattern that is composed of a circle within a
square. I'm passing the size of the pattern in the info parameter.
//Pattern cell drawing callback: a circle within a square. The function name here is
//representative; the size of one cell arrives through the info parameter.
static void drawPatternCell(void *info, CGContextRef context)
{
CGRect *patternBoundaries = (CGRect *)info;
CGContextSaveGState(context);
//Fill the square that forms the cell background
CGContextSetRGBFillColor(context, 0, 1, 1, 1);
CGContextFillRect(context, *patternBoundaries);
//Fill the circle inside the square
CGContextSetRGBFillColor(context, 1, 0, 0, 1);
CGContextFillEllipseInRect(context, *patternBoundaries);
CGContextRestoreGState(context);
}
As a final example, I am going to remake a program that I made on my Zune HD some time ago
(the program is also present here on CodeProject.com). The program is a simple bubble level. I
want the interface of this program to be pretty much the same as it was on my Zune HD.
However, unlike my Zune HD, I want to render all of the interface without using any graphics
assets. So all of the interface will be rendered with Core Graphics calls to render gradients and
patterns.
At a quick glance, you can see there's a number of things that I'll have to render. A vertical and
horizontal level, and a circular level in the center. I can render the vertical and horizontal levels
with the same code. It will need to just rotate its orientation. So in breaking up, the rendering of
this program will result in three blocks of rendering code: one for the background, one for the
vertical/horizontal levels, and one for the bubble level.
Before I get down to rendering, I want to calculate the placement of each one of the screen
elements. The layout is actually designed around a square screen, and designed to stretch
horizontally or vertically. There are no iOS devices with square screens, but in going about
things this way, the UI seems to be able to accommodate both portrait and landscape modes
pretty well (this is a habit I picked up from Windows Mobile development). For the non-existent
square screen, I want the vertical and horizontal levels to occupy a fourth of the horizontal and a
fourth of the vertical space available. The circular level will consume a square area in the center
of the space that's left. To hold these positions, I've created three member CGRect elements
named verticalLevelPosition, horizontalLevelPosition, and circularLevelPosition. My
calculations are all done in a method named -(void)updateElementPositioning.
-(void)updateElementPositioning
{
float barWidth;
float circleWidth;
CGRect viewRect = self.bounds;
//Bar thickness: a fourth of the smaller screen dimension (reconstructed from the
//layout description above; the original assignment was lost in this listing)
barWidth = MIN(viewRect.size.width, viewRect.size.height)/4;
verticalLevelPosition.size.width = barWidth;
verticalLevelPosition.size.height = viewRect.size.height;
verticalLevelPosition.origin.x = 0;
verticalLevelPosition.origin.y = 0;
horizontalLevelPosition.size.height = barWidth;
horizontalLevelPosition.size.width = viewRect.size.width;
horizontalLevelPosition.origin.x = 0;
horizontalLevelPosition.origin.y = 0;
//The circular level is a square centred in the space left over by the two bars
circleWidth = MIN(viewRect.size.width - verticalLevelPosition.size.width,
viewRect.size.height - horizontalLevelPosition.size.height);
circularLevelPosition.size.width =
circularLevelPosition.size.height = circleWidth;
circularLevelPosition.origin.x =
verticalLevelPosition.size.width+verticalLevelPosition.origin.x+
((viewRect.size.width - verticalLevelPosition.size.width-circleWidth)/2);
circularLevelPosition.origin.y =
horizontalLevelPosition.size.height+horizontalLevelPosition.origin.y+
((viewRect.size.height-horizontalLevelPosition.size.height-circleWidth)/2);
}
To ensure that my calculations are correct, I've implemented a -(void)drawRect: method that just
fills those rectangles with colors so that I can see how they are positioned. The results are as I
need them to be.
-(void)drawRect:(CGRect)rect
{
CGContextRef context = UIGraphicsGetCurrentContext();
//CGContextSetFillColor interprets the component arrays in the current fill
//color space, so set it to device RGB first
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextSetFillColorSpace(context, colorSpace);
CGColorSpaceRelease(colorSpace);
float verticalRectColor[] = {1,0,0,1};
float horizontalRectColor[] = {0,1,0,1};
float circularRectColor[] = {0,0,1,1};
CGContextSetFillColor(context, verticalRectColor);
CGContextFillRect(context, verticalLevelPosition);
CGContextSetFillColor(context, horizontalRectColor);
CGContextFillRect(context, horizontalLevelPosition);
CGContextSetFillColor(context, circularRectColor);
CGContextFillRect(context, circularLevelPosition);
}
If you are an Apple purist and believe that all of your Apple development should be done with
Apple software, then you will probably disagree with the next steps, because I am going
to use Microsoft software on a Windows system. For these steps, you can use any vector editing
software you have, as this is only to conceptualize what I need to do. The files generated
in the next step are not going to be consumed by anything.
I've started up Microsoft Expressions Design so that I can use it to sketch out the interface that
I'm assembling. For many of the actions that you perform in a vector editing program, you'll find
that it is easy to translate most actions to a few API calls. After playing around for a bit, I came
up with the following design. It is composed of five concentric circles: three with linear gradients,
one with a radial gradient, and one with a solid fill. The outermost circle has another circle of a
slightly lesser diameter inside it. The three remaining circles have the same
diameter (which is slightly less than that of the second circle).
I'm creating a new method to render the circular level. The function needs the context on which it
is to render, the rectangle by which the level is bound, and a margin to place around the circular
level. For now, I want to just make sure that I calculate the binding rectangles correctly.
innerCircle = outerCircle;
innerCircle.origin.x+=(innerCircle.size.width*(1-innerCircleFactor)/2);
innerCircle.origin.y+=(innerCircle.size.height*(1-innerCircleFactor)/2);
innerCircle.size.width*=innerCircleFactor;
innerCircle.size.height*=innerCircleFactor;
CGContextSetFillColor(context, outerCircleColor);
CGContextFillEllipseInRect(context, outerCircle);
CGContextSetFillColor(context, middleCircleColor);
CGContextFillEllipseInRect(context, middleCircle);
CGContextSetFillColor(context, innerCircleColor);
CGContextFillEllipseInRect(context, innerCircle);
}
The placement is good. So now I need to create my gradients. Microsoft Expressions Design
expresses colors in the format AARRGGBB, where each pair of those letters is a hex number
between 00 and FF that expresses the intensity of the color component. iOS accepts the color
components in floating point values. So to convert each one of these colors to a floating point
value, I must divide it by 255. The first gradient I use has 4 points of color on it.
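For example, taking a hypothetical colour of FF3A6EA5 (AARRGGBB):
CGFloat a = 0xFF/255.0;   //1.000
CGFloat r = 0x3A/255.0;   //0.227
CGFloat g = 0x6E/255.0;   //0.431
CGFloat b = 0xA5/255.0;   //0.647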
After creating the gradients and applying them to the rendered circles, I have something that
looks almost identical to what I had in Expressions Design.
I like the results that I got with the circular level, and proceeded with making the vertical and
horizontal levels. I wanted the ends of the levels to be slightly darker than the middle portion. To
accomplish this, I set a clipping area around the horizontal and vertical levels and rendered a
gradient circle on each end.
CGContextSetFillColor(context, levelBackgroundColor);
CGContextFillRect(context, targetRect);
CGContextSetFillColor(context, levelReflectionColor);
CGContextFillRect(context, reflectionRect);
CGContextDrawRadialGradient(context, shadingGradient,
gradientCenter1, 0, gradientCenter1, shadingRadius, 0);
CGContextDrawRadialGradient(context, shadingGradient,
gradientCenter2, 0, gradientCenter2, shadingRadius, 0);
-(void)accelerometer:(UIAccelerometer *)accelerometer
didAccelerate:(UIAcceleration *)acceleration
{
tiltDirection = atan2(acceleration.y, acceleration.x);
tiltMagnitude = MIN(1, sqrt( acceleration.x*acceleration.x+
acceleration.y*acceleration.y));
levelPosition.y = sin(tiltDirection)*tiltMagnitude;
levelPosition.x = -cos(tiltDirection)*tiltMagnitude;
[self setNeedsDisplay];
}
The bubbles themselves are just ellipses filled with gradients. The finished level looks like the
following:
Since this program uses an accelerometer, you'll need to deploy it to a real device to see it work.
But when you run the program, while you get visual results, something is noticeably wrong. The
Zune HD version of this program runs smoothly (see the video here). But this version of the
program doesn't run as smoothly. How can we fix this? I'm saving that for the next addition to
this article on using the Core Animation functionality.
I have been slowly migrating one of my apps to Swift. This post contains a few Swift
extensions that help simplify working with Core Data's localization API. The Swift code, for the
most part, is a direct port from existing Objective-C Categories I have used for many years.
On OS X v10.4, localizationDictionary may return nil until Core Data lazily loads the dictionary
for its own purposes (for example, reporting a localized error).
I don't recall ever seeing a managed object model return a nil localization dictionary. Therefore,
the following code uses Swift's ! operator to force unwrap the returned optional.
Every Core Data model file can have a localized strings file. The strings file name is based on
the name of the model file. Here is the pattern:
{ModelFileName}Model.strings
HighRail.xcdatamodel
HighRailModel.strings
Given this convention you should not include Model in your model file name. Otherwise a
strings file for HighRailModel.xcdatamodel is HighRailModelModel.strings.
Let's look at an example strings file to see how to localize an entity name.
"Entity/HRLLayout" = "Layout";
"Entity/HRLEngine" = "Engine";
"Entity/SensorTrackModule" = "SensorTrack™";
extension NSManagedObjectModel {
Here is an example:
let localizedName =
    managedObjectModel.localizedEntityNameForEntityWithName(HRLEngine.entityName())
// Engine
Also, the method returns the given entity name if a localized string is not found in the strings file.
You may want to Fail Fast.
That works well. However, it's a little cumbersome to use if, for example, you need to ask an
instance of a managed object for its localized entity name.
extension NSEntityDescription {
extension NSManagedObject {
The managed object model localization dictionary may also contain localized property strings
and error strings. More on that in a future post.
Wrap Up
I am not sure if porting existing Objective-C Category methods to Swift extensions is the "right
way" to do things in Swift.
Be sure to read Apple's Localizing a Managed Object Model for additional information.
Multitasking
Multitasking for iOS was first released in June 2010 along with the release of iOS 4.0. Only
certain devices—iPhone 4, iPhone 3GS, and iPod Touch 3rd generation—were able to use
multitasking. The iPad did not get multitasking until the release of iOS 4.2.1 in November 2010.
Currently, multitasking is supported on iPhone 3GS or newer, iPod Touch 3rd generation or
newer, and all iPad models.
Implementation of multitasking in iOS has been criticized for its approach, which restricts
applications in the background to a limited set of functions and requires application
developers to add explicit support for it.
Before iOS 4, multitasking was limited to a selection of the applications Apple included on the
device. Users could, however, "jailbreak" their device in order to unofficially multitask. Starting
with iOS 4, on third-generation and newer iOS devices, multitasking is supported through seven
background APIs:
1. Newsstand – application can download content in the background to be ready for the user
2. External Accessory – application communicates with an external accessory and shares
data at regular intervals
3. Bluetooth Accessory – application communicates with a bluetooth accessory and shares
data at regular intervals
In iOS 7, Apple introduced a new multitasking feature, providing all apps with the ability to
perform background updates. This feature prefers to update the user's most frequently used apps
and prefers to use Wi-Fi networks over a cellular network, without markedly reducing the
device's battery life.
Web services (sometimes called application services) are services (usually including some
combination of programming and data, but possibly including human resources as well) that are
made available from a business's Web server for Web users or other Web-connected programs.
Providers of Web services are generally known as application service providers. Web services
range from such major services as storage management and customer relationship management
(CRM) down to much more limited services such as the furnishing of a stock quote and the
checking of bids for an auction item. The accelerating creation and availability of these services
is a major Web trend.
Users can access some Web services through a peer-to-peer arrangement rather than by going to
a central server. Some services can communicate with other services and this exchange of
procedures and data is generally enabled by a class of software known as middleware. Services
previously possible only with the older standardized service known as Electronic Data
Interchange (EDI) increasingly are likely to become Web services. Besides the standardization
and wide availability to users and businesses of the Internet itself, Web services are also
increasingly enabled by the use of the Extensible Markup Language (XML) as a means of
standardizing data formats and exchanging data. XML is the foundation for the Web Services
Description Language (WSDL).
Java reviews
Java for Mobile Devices is a set of technologies that let developers deliver applications and
services to all types of mobile handsets, ranging from price efficient feature-phones to the latest
smartphones. Java is currently running on over 3 billion phones worldwide, and growing. It
offers unrivaled potential for the distribution and monetization of mobile applications.
At the core of the Java Mobile Platform is Java Platform, Micro Edition (Java ME). Java ME
provides a robust, flexible environment for applications running on mobile and other embedded
devices: mobile phones, TV set-top boxes, e-readers, Blu-Ray readers, printers and more. For
over a decade, Oracle has been working along with leading mobile and embedded companies to
develop the Java ME Platform through the Java Community Process (JCP). A key achievement
has been the definition of the Mobile Services Architecture (MSA), setting a baseline of mobile
APIs that developers can target within their applications. In 2011, Oracle and partners will be
working within the JCP to drive Java ME.next, a proposal for the modernization of Java ME.
In addition to its role within JCP, Oracle is also a provider of high performance Java ME
implementations and developer technologies being used to deploy tens of thousands of
applications worldwide in the mobile and embedded markets, including:
Oracle Java Wireless Client: a multitasking Java ME runtime optimized for the leading
mobile phone platforms.
Java ME SDK: a state-of-the-art toolbox for developing and testing mobile applications.
Light Weight UI Toolkit (LWUIT): a compact library for the creation of rich user
interfaces.
Oracle Java ME Embedded: designed and optimized to meet the unique requirements of
small, low power devices.
Android APK
Android application package (APK) is the package file format used by the Android operating
system for distribution and installation of application software and middleware.
APK files are a type of archive file, specifically in zip format packages based on the JAR file
format, with .apk as the filename extension. The MIME type associated with APK files is
application/vnd.android.package-archive.
APK files can be installed on Android-powered devices just as software is installed on a PC. To
secure the device, there is an "Unknown Sources" setting in the Settings menu, which is disabled by
default. It must be enabled before installing an application from an APK file. Enabling this setting
is not required when installing apps via Google Play.
Package contents
An APK file is an archive that usually contains the following files and directories:
META-INF directory:
o MANIFEST.MF: the Manifest file
o CERT.RSA: The certificate of the application.
o CERT.SF: The list of resources and SHA-1 digest of the corresponding lines in the
MANIFEST.MF file; for example:
Signature-Version: 1.0
Created-By: 1.0 (Android)
SHA1-Digest-Manifest: wxqnEAI0UA5nO5QJ8CGMwjkGGWE=
...
Name: res/layout/exchange_component_back_bottom.xml
SHA1-Digest: eACjMjESj7Zkf0cBFTZ0nqWrt7w=
...
Name: res/drawable-hdpi/icon.png
SHA1-Digest: DGEqylP8W0n0iV/ZzBx3MW0WGCA=
lib: the directory containing the compiled native code specific to a processor architecture;
it is split into subdirectories:
o armeabi: compiled code for all ARM based processors only
o armeabi-v7a: compiled code for all ARMv7 and above based processors only
o arm64-v8a: compiled code for all ARMv8 arm64 and above based processors
only
o x86: compiled code for x86 processors only
Android software development is the process by which new applications are created for the
Android operating system. Applications are usually developed in Java programming language
using the Android software development kit (SDK), but other development environments are
also available.
As of July 2013, more than one million applications have been developed for Android, with over
25 billion downloads. A June 2011 survey indicated that over 67% of mobile developers used
the platform at the time of publication. In Q2 2012, around 105 million Android smartphones
were shipped, accounting for a 68% share of all smartphone sales up to that quarter.
Android SDK
The Android software development kit (SDK) includes a comprehensive set of development
tools. These include a debugger, libraries, a handset emulator based on QEMU, documentation,
sample code, and tutorials. Currently supported development platforms include computers
running Linux (any modern desktop Linux distribution), Mac OS X 10.5.8 or later, and Windows
XP or later. As of March 2015, the SDK is not available on Android itself, but the software
development is possible by using specialized Android applications. Until around the end of 2014,
the officially supported integrated development environment (IDE) was Eclipse using the
Android Development Tools (ADT) Plugin, though IntelliJ IDEA IDE (all editions) fully
supports Android development out of the box, and NetBeans IDE also supports Android
development via a plugin. As of 2015, Android Studio, made by Google and powered by IntelliJ,
is the official IDE; however, developers are free to use others. Additionally, developers may use
any text editor to edit Java and XML files, then use command line tools (Java Development Kit
and Apache Ant are required) to create, build and debug Android applications as well as control
attached Android devices (e.g., triggering a reboot, installing software package(s)
remotely). Enhancements to Android's SDK go hand in hand with the overall Android platform
development. The SDK also supports older versions of the Android platform in case developers
wish to target their applications at older devices. Development tools are downloadable
components, so after one has downloaded the latest version and platform, older platforms and
tools can also be downloaded for compatibility testing.
Android applications are packaged in .apk format and stored under /data/app folder on the
Android OS (the folder is accessible only to the root user for security reasons). APK package
contains .dex files (compiled byte code files called Dalvik executables), resource files, etc.
The Android Debug Bridge (ADB) is a toolkit included in the Android SDK package. It consists
of both client and server-side programs that communicate with one another. The ADB is
typically accessed through the command-line interface, although numerous graphical user
interfaces exist to control ADB.
In a security issue reported in March 2011, ADB was targeted as a vector to attempt to install a
rootkit on connected phones using a "resource exhaustion attack".
Fastboot
Fastboot is a diagnostic protocol included with the SDK package used primarily to modify the
flashfilesystem via a USB connection from host computer. It requires that the device be started in
a boot loader or Secondary Program Loader mode, in which only the most basic hardware
initialization is performed. After enabling the protocol on the device itself, it will accept a
specific set of commands sent to it via USB using a command line. Some of the most commonly
used fastboot commands include:
flash – rewrites a partition with a binary image stored on the host computer
erase – erases a specific partition
reboot – reboots the device into either the main operating system, the system recovery
partition or back into its boot loader
Android NDK
Libraries written in C, C++ and other languages can be compiled to ARM, MIPS or x86 native
code and installed using the Android Native Development Kit (NDK). Native classes can be
called from Java code running under the Dalvik VM using the System.loadLibrary call, which
is part of the standard Android Java classes. Complete applications can be compiled and installed
using traditional development tools. However, according to the Android documentation, NDK
should not be used solely for developing applications only because the developer prefers to
program in C/C++, as using NDK increases complexity while most applications would not
benefit from using it.
The ADB debugger gives a root shell under the Android Emulator which allows ARM, MIPS or
x86 native code to be uploaded and executed. Native code can be compiled using GCC or the
Intel C++ Compiler on a standard PC. Running native code is complicated by Android's use of a
non-standard C library (libc, known as Bionic). The graphics library that Android uses to
arbitrate and control access to this device is called the Skia Graphics Library (SGL), and it has
been released under an open source licence. Skia has backends for both Win32 and Unix,
allowing the development of cross-platform applications, and it is the graphics engine underlying
the Google Chrome web browser.
Unlike Java application development based on an IDE such as Eclipse, the NDK is based on
command-line tools and requires invoking them manually to build, deploy and debug the apps.
Several third-party tools allow integrating the NDK into Eclipse and Visual Studio.
The Android 3.1 platform (also backported to Android 2.3.4) introduces Android Open
Accessory support, which allows external USB hardware (an Android USB accessory) to interact
with an Android-powered device in a special "accessory" mode. When an Android-powered
device is in accessory mode, the connected accessory acts as the USB host (powers the bus and
enumerates devices) and the Android-powered device acts as the USB device. Android USB
accessories are specifically designed to attach to Android-powered devices and adhere to a
simple protocol (Android accessory protocol) that allows them to detect Android-powered
devices that support accessory mode.
Since version 1.4 of the Go programming language, writing applications for Android is
supported without requiring any Java code, although with a restricted set of Android APIs.
On July 12, 2010, Google announced the availability of App Inventor for Android, a Web-based
visual development environment for novice programmers, based on MIT's Open Blocks Java
library and providing access to Android devices' GPS, accelerometer and orientation data, phone
functions, text messaging, speech-to-text conversion, contact data, persistent storage, and Web
services, initially including Amazon and Twitter. "We could only have done this because
Android’s architecture is so open," said the project director, MIT's Hal Abelson. Under
development for over a year, the block-editing tool has been taught to non-majors in computer
science at Harvard, MIT, Wellesley, Trinity College (Hartford) and the University of San
Francisco, where Professor David Wolber developed an introductory computer science course
and tutorial book for non-computer science students based on App Inventor for Android. In the
second half of 2011, Google released the source code, terminated its Web service, and provided
funding for the creation of The MIT Center for Mobile Learning, led by the App Inventor creator
Hal Abelson and fellow MIT professors Eric Klopfer and Mitchel Resnick. The latest version
created as a result of the Google–MIT collaboration was released in February 2012, while the first
version created solely by MIT was launched in March 2012 and upgraded to App Inventor 2 in
December 2013. As of 2014, App Inventor is maintained by MIT.
Basic4android
Corona SDK
Corona SDK is a software development kit (SDK) created by Walter Luh, founder of Corona
Labs Inc. Corona SDK allows software programmers to build mobile applications for iPhone,
iPad and Android devices.
Delphi
Delphi can also be used for creating Android application in the Object Pascal language. The
latest release is Delphi XE8, developed by Embarcadero.
Kivy
Kivy is an open source Python library for developing multitouch application software with a
natural user interface (NUI) for a wide selection of devices. Kivy provides the possibility of
maintaining a single application for numerous operating systems ("code once, run everywhere").
Kivy has a custom-built deployment tool for deploying mobile applications called Buildozer,
which is available only for Linux. Buildozer is currently alpha software, but is far less
cumbersome than older Kivy deployment methods. Applications programmed with Kivy can be
submitted to any Android mobile application distribution platform.
Lazarus
The Lazarus IDE may be used to develop Android applications using Object Pascal (and other
Pascal dialects), based on the Free Pascal compiler starting from version 2.7.1.
Processing
The Processing environment, which also uses the Java language, has supported an Android mode
since version 1.5; integration with device camera and sensors is possible using the Ketai library.
Qt
Qt for Android enables Qt 5 applications to run on devices with Android v2.3.3 (API level 10) or
later. Qt is a cross-platform application framework which can target platforms such as
Android, Linux, iOS, Sailfish OS and Windows. Qt application development is done in standard
C++ and QML, requiring both the Android NDK and SDK. Qt Creator is the integrated
development environment provided with the Qt Framework for multi-platform application
development.
RubyMotion
RubyMotion is a toolchain to write native mobile apps in Ruby. As of version 3.0, RubyMotion
supports Android. RubyMotion Android apps can call into the entire set of Java Android APIs
from Ruby, can use 3rd-party Java libraries, and are statically compiled into machine code.
SDL
The SDL library also offers a development option besides Java, allowing development in C and
simple porting of existing SDL and native C applications. By injecting a small Java shim and
using JNI, native SDL code can be used, which has enabled Android ports of games such as
Jagged Alliance 2.
Visual Studio
Visual Studio 2015 supports cross-platform development, letting C++ developers create projects
from templates for Android native-activity applications, or create high-performance shared
libraries to include in other solutions. Its features include platform-specific IntelliSense,
breakpoints, device deployment and emulation.
Xamarin
With a C# shared codebase, developers can use Xamarin to write native iOS, Android, and
Windows apps with native user interfaces and share code across multiple platforms. Xamarin has
over 505,000 developers in more than 120 countries around the world as of February 2014.
The Android Developer Challenge was a competition to find the most innovative application for
Android. Google offered prizes totaling 10 million US dollars, distributed between ADC I and
ADC II. ADC I accepted submissions from January 2 to April 14, 2008. The 50 most promising
entries, announced on May 12, 2008, each received a $25,000 award to further development.
ADC II was announced on May 27, 2009. The first round of the ADC II closed on October 6,
2009. The first-round winners of ADC II comprising the top 200 applications were announced on
November 5, 2009. Voting for the second round also opened on the same day and ended on
November 25. Google announced the top winners of ADC II on November 30, with
SweetDreams, What the Doodle!? and WaveSecure being nominated the overall winners of the
challenge.
Community-based firmware
There is a community of open-source enthusiasts that build and share Android-based firmware
with a number of customizations and additional features, such as FLAC lossless audio support
and the ability to store downloaded applications on the microSD card. This usually involves
rooting the device. Rooting allows users root access to the operating system, enabling full control
of the phone. Rooting has several disadvantages as well, including increased risk of hacking,
high chances of bricking, losing warranty, increased virus attack risks, etc. However, rooting
allows custom firmwares to be installed, although the device's boot loader must also be
unlocked. Modified firmwares allow users of older phones to use applications available only on
newer releases. Those firmware packages are updated frequently, incorporate elements of
Android functionality that haven't yet been officially released within a carrier-sanctioned
firmware, and tend to have fewer limitations. CyanogenMod and OMFGB are examples of such
firmware.
On September 24, 2009, Google issued a cease and desist letter to the modder Cyanogen, citing
issues with the re-distribution of Google's closed-source applications within the custom
firmware. Even though most of the Android OS is open source, phones come packaged with closed-
source Google applications for functionality such as Google Play and GPS navigation.
Google has asserted that these applications can only be provided through approved distribution
channels by licensed distributors. Cyanogen has complied with Google's wishes and is
continuing to distribute this mod without the proprietary software. It has provided a method to
back up licensed Google applications during the mod's install process and restore them when the
process is complete.
Java standards
Obstacles to development include the fact that Android does not use established Java standards,
that is, Java SE and ME. This prevents compatibility between Java applications written for those
platforms and those written for the Android platform. Android only reuses the Java language
syntax and semantics, but it does not provide the full class libraries and APIs bundled with Java
SE or ME.
The Resources
Resources are everything in the application except the Java code. The Android platform supports
many kinds of resources, such as text, colors, layouts, dimensions, etc. Android provides a
specific folder for each resource type: the root folder for resources is res/, and it contains a
sub-folder for each resource type, as illustrated by the example after the list below.
Animation Resources
Define pre-determined animations.
Tween animations are saved in res/anim/ and accessed from the R.anim class.
Frame animations are saved in res/drawable/ and accessed from the R.drawable class.
Color State List Resource
Define color resources that change based on the View state.
Saved in res/color/ and accessed from the R.color class.
Drawable Resources
Define various graphics with bitmaps or XML.
Saved in res/drawable/ and accessed from the R.drawable class.
Layout Resource
Define the layout for your application UI.
Saved in res/layout/ and accessed from the R.layout class.
Menu Resource
Define the contents of your application menus.
Saved in res/menu/ and accessed from the R.menu class.
String Resources
Define strings, string arrays, and plurals (and include string formatting and styling).
Saved in res/values/ and accessed from the R.string, R.array, and R.plurals
classes.
Style Resource
Define the look and format for UI elements.
Saved in res/values/ and accessed from the R.style class.
More Resource Types
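As a minimal sketch of how these resources are referenced from code, assuming a layout file
res/layout/activity_main.xml and a string resource named app_name exist in the project, an
activity might do the following:
// Inside an Activity subclass; R is the class generated from the res/ folder.
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);       // layout resource from res/layout/
    String title = getString(R.string.app_name);  // string resource from res/values/strings.xml
    setTitle(title);
}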
android.view
Provides classes that expose basic user interface classes that handle screen layout and interaction
with the user.
Annotations
ViewDebug.CapturedViewProperty This annotation can be used to mark fields and methods to be
dumped when the view is captured.
ViewDebug.ExportedProperty This annotation can be used to mark fields and methods to be
dumped by the view server.
ViewDebug.FlagToString Defines a mapping from a flag to a String.
ViewDebug.IntToString Defines a mapping from an int value to a String.
Interfaces
ActionMode.Callback Callback interface for action modes.
ActionProvider.VisibilityListener Listens to changes in visibility as reported by
refreshVisibility().
Choreographer.FrameCallback Implement this interface to receive a callback
when a new display frame is being rendered.
CollapsibleActionView When a View implements this interface it will receive callbacks
when expanded or collapsed as an action view, alongside the
optional, app-specified callbacks to
MenuItem.OnActionExpandListener.
ContextMenu Extension of Menu for context menus providing
functionality to modify the header of the context
menu.
ContextMenu.ContextMenuInfo Additional information regarding the creation of
the context menu.
GestureDetector.OnContextClickListener The listener that is used to notify when a context
click occurs.
GestureDetector.OnDoubleTapListener The listener that is used to notify when a double-
tap or a confirmed single-tap occurs.
GestureDetector.OnGestureListener The listener that is used to notify when gestures
occur.
Classes
AbsSavedState A Parcelable implementation that should be
used by inheritance hierarchies to ensure the state
of all classes along the chain is saved.
ActionMode Represents a contextual mode of the user
interface.
ActionMode.Callback2 Extension of ActionMode.Callback to
provide content rect information.
ActionProvider An ActionProvider defines rich menu interaction
in a single component.
Choreographer Coordinates the timing of animations, input and
drawing.
ContextThemeWrapper A ContextWrapper that allows you to modify the
theme from what is in the wrapped context.
Display Provides information about the size and density
of a logical display.
Display.Mode A mode supported by a given display.
DragEvent Represents an event that is sent out by the system
at various times during a drag and drop
operation.
FocusFinder The algorithm used for finding the next focusable
view in a given direction from a view that
currently has focus.
FrameStats This is the base class for frame statistics.
GestureDetector Detects various gestures and events using the
supplied MotionEvents.
GestureDetector.SimpleOnGestureListener A convenience class to extend when you only
want to listen for a subset of all the gestures.
Enums
ViewDebug.HierarchyTraceType This enum was deprecated in API level 16. This enum is now unused
ViewDebug.RecyclerTraceType This enum was deprecated in API level 16. This enum is now unused
Exceptions
InflateException This exception is thrown by an inflater on error
conditions.
KeyCharacterMap.UnavailableException Thrown by load(int) when a key character map could
not be loaded.
Surface.OutOfResourcesException Exception thrown when a Canvas couldn't be locked
with lockCanvas(Rect), or when a SurfaceTexture
could not successfully be allocated.
Intents are asynchronous messages which allow application components to request functionality
from other Android components. Intents allow you to interact with components from the same
applications as well as with components contributed by other applications. For example, an
activity can start an external activity for taking a picture.
Intents are objects of the android.content.Intent type. Your code can send them to the
Android system defining the components you are targeting. For example, via the
startActivity() method you can define that the intent should be used to start an activity.
An intent can contain data via a Bundle. This data can be used by the receiving component.
To start an activity, use the method startActivity(intent). This method is defined on the
Context object which Activity extends.
1.3. Sub-activities
Activities which are started by other Android activities are called sub-activities. This wording
makes it easier to describe which activity is meant.
You can also start services via intents. Use the startService(Intent) method call for that.
2. Intents types
2.1. Different types of intents
An application can define the target component directly in the intent (explicit intent) or ask the
Android system to evaluate registered components based on the intent data (implicit intents).
Explicit intents explicitly define the component which should be called by the Android system,
by using the Java class as identifier.
The following shows how to create an explicit intent and send it to the Android system. If the
class specified in the intent represents an activity, the Android system starts it.
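A minimal sketch of such an explicit intent, assuming a second activity class called ActivityTwo
exists in the application and is declared in the manifest:
// Inside an Activity: explicitly name the class to start.
Intent i = new Intent(this, ActivityTwo.class);          // ActivityTwo is an assumed class
i.putExtra("Value1", "This value is for ActivityTwo");   // optional extras
startActivity(i);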
Explicit intents are typically used within one application, as the classes in an application are
controlled by the application developer.
Implicit intents specify the action which should be performed and optionally data which provides
content for the action.
For example, the following tells the Android system to view a webpage. All installed web
browsers should be registered to the corresponding intent data via an intent filter.
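A possible implicit intent for this (the URL is only an illustrative example):
// Inside an Activity: ask the system to display a webpage in any registered browser.
Intent browserIntent = new Intent(Intent.ACTION_VIEW, Uri.parse("http://www.example.com"));
startActivity(browserIntent);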
If an implicit intent is sent to the Android system, it searches for all components which are
registered for the specific action and the fitting data type.
If only one component is found, Android starts this component directly. If several components
are identified by the Android system, the user will get a selection dialog and can decide which
component should be used for the intent.
A component can register itself for actions. See Section 4.1, “Intent filter” for details.
An intent contains certain header data, e.g., the desired action, the type, etc. Optionally an intent
can also contain additional data based on an instance of the Bundle class which can be retrieved
from the intent via the getExtras() method.
You can also add data directly to the Bundle via the overloaded putExtra() methods of the
Intent objects. Extras are key/value pairs. The key is always of type String. As value you can
use the primitive data types (int, float, ...) plus objects of type String, Bundle, Parcelable
and Serializable.
The receiving component can access this information via the getAction() and getData()
methods on the Intent object. This Intent object can be retrieved via the getIntent()
method.
The component which receives the intent can use the getIntent().getExtras() method call to
get the extra data. That is demonstrated in the following code snippet.
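A sketch, assuming the sending component added a String extra under the key "Value1" as in the
explicit intent example above:
// Inside the receiving Activity (e.g., in onCreate()): read the delivered extras.
Bundle extras = getIntent().getExtras();
if (extras != null) {
    String value = extras.getString("Value1");   // key must match the sending side
    if (value != null) {
        Toast.makeText(this, value, Toast.LENGTH_SHORT).show();
    }
}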
Lots of Android applications allow you to share some data with other people, e.g., the Facebook,
G+, Gmail and Twitter applications. You can send data to one of these components. The
following code snippet demonstrates the usage of such an intent within your application.
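A minimal sketch of such a share intent (the shared text and chooser title are illustrative);
createChooser() forces the app picker so the user can select the target application:
// Inside an Activity: offer plain text to any application registered for ACTION_SEND.
Intent shareIntent = new Intent(Intent.ACTION_SEND);
shareIntent.setType("text/plain");
shareIntent.putExtra(Intent.EXTRA_TEXT, "Have a look at this study text.");
startActivity(Intent.createChooser(shareIntent, "Share via"));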
An activity can be closed via the back button on the phone. In this case the finish() method is
performed. If the activity was started with the startActivity(Intent) method call, the caller
requires no result or feedback from the activity which now is closed.
If you start the activity with the startActivityForResult() method call, you expect feedback
from the sub-activity. Once the sub-activity ends, the onActivityResult() method of the calling
activity is called and you can perform actions based on the result.
In the startActivityForResult() method call you can specify a request code to identify which
activity you started. This request code is returned to you in onActivityResult(). The started
activity can also set a result code which the caller can use to determine if the activity was
canceled or not.
The following example code demonstrates how to trigger an intent with the
startActivityForResult() method.
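A sketch under the assumption that the sub-activity is a class called RequestActivity and that
REQUEST_CODE is a constant defined in the calling activity (the same constant is checked in the
onActivityResult() method shown further below):
// In the calling Activity: REQUEST_CODE identifies this request when the result comes back.
private static final int REQUEST_CODE = 1;   // assumed constant, reused in onActivityResult()

public void startRequest(View view) {        // e.g., wired to a button's android:onClick
    Intent i = new Intent(this, RequestActivity.class);   // RequestActivity is assumed
    startActivityForResult(i, REQUEST_CODE);
}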
If the sub-activity is finished, it can send data back to its caller via an Intent. This is done in the
finish() method.
@Override
public void finish() {
    // Prepare data intent
    Intent data = new Intent();
    data.putExtra("returnKey1", "Swinging on a star. ");
    data.putExtra("returnKey2", "You could be better then you are. ");
    // Activity finished ok, return the data
    setResult(RESULT_OK, data);
    super.finish();
}
Once the sub-activity finishes, the onActivityResult() method in the calling activity is called.
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    if (resultCode == RESULT_OK && requestCode == REQUEST_CODE) {
        if (data.hasExtra("returnKey1")) {
            Toast.makeText(this, data.getExtras().getString("returnKey1"),
                    Toast.LENGTH_SHORT).show();
        }
    }
}
Intents are used to signal to the Android system that a certain event has occurred. Intents often
describe the action which should be performed and provide data upon which such an action
should be done. For example, your application can start a browser component for a certain URL
via an intent, as in the ACTION_VIEW example shown earlier.
But how does the Android system identify the components which can react to a certain intent?
If an intent is sent to the Android system, the Android platform determines which registered
components can handle it, based on the data included in the intent. If several components have registered for the same intent
filter, the user can decide which component should be started.
You can register your Android components via intent filters for certain events. If a component
does not define one, it can only be called by explicit intents. This chapter gives an example for
registering a component for an intent. The key for this registration is that your component
registers for the correct action, mime-type and specifies the correct meta-data.
If you send such an intent to your system, the Android system determines all registered Android
components for this intent. If several components have registered for this intent, the user can
select which one should be used.
The following code will register an Activity for the Intent which is triggered when someone
wants to open a webpage.
<activity android:name=".BrowserActivity"
          android:label="@string/app_name">
    <intent-filter>
        <action android:name="android.intent.action.VIEW" />
        <category android:name="android.intent.category.DEFAULT" />
        <data android:scheme="http" />
    </intent-filter>
</activity>
The following example registers an activity for the ACTION_SEND intent. It declares itself only
relevant for the text/plain mime type.
<!-- The activity name ".ShareActivity" is illustrative -->
<activity android:name=".ShareActivity"
          android:label="@string/app_name">
    <intent-filter>
        <action android:name="android.intent.action.SEND" />
        <category android:name="android.intent.category.DEFAULT" />
        <data android:mimeType="text/plain" />
    </intent-filter>
</activity>
If a component does not define an intent filter, it can only be called by explicit intents.
Intents can be used to send broadcast messages into the Android system. A broadcast receiver
can register to an event and is notified if such an event is sent.
Your application can register to system events, e.g., a new email has arrived, system boot is
complete or a phone call is received and react accordingly.
Sometimes you want to determine if a component has registered for an intent. For example, you
want to check if a certain intent receiver is available and in case a component is available, you
enable a functionality in your application.
The following example code checks if a component has registered for a certain intent. Construct
your intent as you would to trigger it and pass it to a method like the one shown below.
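One possible helper for this check, assuming the intent is meant to be handled by an activity (for
services or broadcast receivers the corresponding query methods of PackageManager would be used
instead):
// Returns true if at least one activity is registered for the given intent.
public static boolean isIntentAvailable(Context context, Intent intent) {
    PackageManager packageManager = context.getPackageManager();
    List<ResolveInfo> list = packageManager.queryIntentActivities(intent,
            PackageManager.MATCH_DEFAULT_ONLY);
    return list.size() > 0;
}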
Based on the result you can adjust your application. For example, you could disable or hide
certain menu items.
An Intent is a messaging object you can use to request an action from another app component.
Although intents facilitate communication between components in several ways, there are three
fundamental use-cases:
To start an activity:
An Activity represents a single screen in an app. You can start a new instance of an
Activity by passing an Intent to startActivity(). The Intent describes the activity
to start and carries any necessary data.
If you want to receive a result from the activity when it finishes, call
startActivityForResult(). Your activity receives the result as a separate Intent
object in your activity's onActivityResult() callback. For more information, see the
Activities guide.
To start a service:
You can start a service to perform a one-time operation (such as downloading a file) by passing
an Intent to startService(). The Intent describes the service to start and carries any necessary
data. If the service is designed with a client-server interface, you can bind to the service from
another component by passing an Intent to bindService(). For more information, see the
Services guide.
To deliver a broadcast:
A broadcast is a message that any app can receive. The system delivers various
broadcasts for system events, such as when the system boots up or the device starts
charging. You can deliver a broadcast to other apps by passing an Intent to
sendBroadcast(), sendOrderedBroadcast(), or sendStickyBroadcast().
Intent Types
Explicit intents specify the component to start by name (the fully-qualified class name).
You'll typically use an explicit intent to start a component in your own app, because you know
the class name of the activity or service you want to start. Implicit intents do not name a specific
component, but instead declare a general action to perform, which allows a component from
another app to handle it.
Figure 1. Illustration of how an implicit intent is delivered through the system to start another
activity: [1] Activity A creates an Intent with an action description and passes it to
startActivity(). [2] The Android System searches all apps for an intent filter that matches the
intent. When a match is found, [3] the system starts the matching activity (Activity B) by
invoking its onCreate() method and passing it the Intent.
When you create an implicit intent, the Android system finds the appropriate component to start
by comparing the contents of the intent to the intent filters declared in the manifest file of other
apps on the device. If the intent matches an intent filter, the system starts that component and
delivers it the Intent object. If multiple intent filters are compatible, the system displays a
dialog so the user can pick which app to use.
An intent filter is an expression in an app's manifest file that specifies the type of intents that the
component would like to receive. For instance, by declaring an intent filter for an activity, you
make it possible for other apps to directly start your activity with a certain kind of intent.
Likewise, if you do not declare any intent filters for an activity, then it can be started only with
an explicit intent.
Building an Intent
An Intent object carries information that the Android system uses to determine which
component to start (such as the exact component name or component category that should
receive the intent), plus information that the recipient component uses in order to properly
perform the action (such as the action to take and the data to act upon).
Component name
The name of the component to start.
This is optional, but it's the critical piece of information that makes an intent explicit, meaning
that the intent should be delivered only to the app component defined by the component name.
Without a component name, the intent is implicit and the system decides which component
should receive the intent based on the other intent information (such as the action, data, and
category—described below). So if you need to start a specific component in your app, you
should specify the component name.
Note: When starting a Service, you should always specify the component name. Otherwise,
you cannot be certain what service will respond to the intent, and the user cannot see which
service starts.
This field of the Intent is a ComponentName object, which you can specify using a fully
qualified class name of the target component, including the package name of the app. For
example, com.example.ExampleActivity. You can set the component name with
setComponent(), setClass(), setClassName(), or with the Intent constructor.
Action
A string that specifies the generic action to perform (such as view or pick).
In the case of a broadcast intent, this is the action that took place and is being reported. The
action largely determines how the rest of the intent is structured—particularly what is contained
in the data and extras.
ACTION_VIEW
Use this action in an intent with startActivity() when you have some information that an
activity can show to the user, such as a photo to view in a gallery app, or an address to view in a
map app.
ACTION_SEND
Also known as the "share" intent, you should use this in an intent with startActivity() when
you have some data that the user can share through another app, such as an email app or social
sharing app.
See the Intent class reference for more constants that define generic actions. Other actions are
defined elsewhere in the Android framework, such as in Settings for actions that open specific
screens in the system's Settings app.
You can specify the action for an intent with setAction() or with an Intent constructor.
If you define your own actions, be sure to include your app's package name as a prefix. For
example:
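A possible constant definition (the package name com.example.myapp and the action name are
illustrative):
// Custom actions should be prefixed with the application's package name.
static final String ACTION_SHOW_REPORT = "com.example.myapp.action.SHOW_REPORT";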
When creating an intent, it's often important to specify the type of data (its MIME type) in
addition to its URI. For example, an activity that's able to display images probably won't be able
to play an audio file, even though the URI formats could be similar. So specifying the MIME
type of your data helps the Android system find the best component to receive your intent.
However, the MIME type can sometimes be inferred from the URI—particularly when the data
is a content: URI, which indicates the data is located on the device and controlled by a
ContentProvider, which makes the data MIME type visible to the system.
To set only the data URI, call setData(). To set only the MIME type, call setType(). If
necessary, you can set both explicitly with setDataAndType().
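A small sketch (the content URI is illustrative); setDataAndType() is used because calling
setData() and setType() separately would each clear the other value:
// Inside an Activity: view an image identified by a content: URI with an explicit MIME type.
Intent viewImage = new Intent(Intent.ACTION_VIEW);
viewImage.setDataAndType(Uri.parse("content://media/external/images/media/1"), "image/*");
startActivity(viewImage);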
Category
A string containing additional information about the kind of component that should handle the
intent. Any number of category descriptions can be placed in an intent, but most intents do not
require a category. Here are some common categories:
CATEGORY_BROWSABLE
The target activity allows itself to be started by a web browser to display data referenced by a
link—such as an image or an e-mail message.
CATEGORY_LAUNCHER
The activity is the initial activity of a task and is listed in the system's application launcher.
See the Intent class description for the full list of categories.
These properties listed above (component name, action, data, and category) represent the
defining characteristics of an intent. By reading these properties, the Android system is able to
resolve which app component it should start.
However, an intent can carry additional information that does not affect how it is resolved to an
app component. An intent can also supply:
Extras
Key-value pairs that carry additional information required to accomplish the requested action.
Just as some actions use particular kinds of data URIs, some actions also use particular extras.
You can add extra data with various putExtra() methods, each accepting two parameters: the
key name and the value. You can also create a Bundle object with all the extra data, then insert
the Bundle in the Intent with putExtras().
For example, when creating an intent to send an email with ACTION_SEND, you can specify the
"to" recipient with the EXTRA_EMAIL key, and specify the "subject" with the EXTRA_SUBJECT key.
The Intent class specifies many EXTRA_* constants for standardized data types. If you need to
declare your own extra keys (for intents that your app receives), be sure to include your app's
package name as a prefix. For example:
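A possible key definition (again with the illustrative package name com.example.myapp):
// Custom extra keys should be prefixed with the application's package name.
static final String EXTRA_REPORT_ID = "com.example.myapp.EXTRA_REPORT_ID";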
An explicit intent is one that you use to launch a specific app component, such as a particular
activity or service in your app. To create an explicit intent, define the component name for the
Intent object—all other intent properties are optional.
For example, if you built a service in your app, named DownloadService, designed to download
a file from the web, you can start it with the following code:
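A sketch under the assumption that DownloadService is declared in the manifest (the URL is
illustrative):
// Inside an Activity: explicitly start DownloadService and hand it the file location.
Intent downloadIntent = new Intent(this, DownloadService.class);
downloadIntent.setData(Uri.parse("http://www.example.com/file.zip"));
startService(downloadIntent);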
The Intent(Context, Class) constructor supplies the app Context and the component a
Class object. As such, this intent explicitly starts the DownloadService class in the app.
For more information about building and starting a service, see the Services guide.
An implicit intent specifies an action that can invoke any app on the device able to perform the
action. Using an implicit intent is useful when your app cannot perform the action, but other apps
probably can and you'd like the user to pick which app to use.
For example, if you have content you want the user to share with other people, create an intent
with the ACTION_SEND action and add extras that specify the content to share. When you call
startActivity() with that intent, the user can pick an app through which to share the content.
Caution: It's possible that a user won't have any apps that handle the implicit intent you send to
startActivity(). If that happens, the call will fail and your app will crash. To verify that an
activity will receive the intent, call resolveActivity() on your Intent object. If the result is
non-null, then there is at least one app that can handle the intent and it's safe to call
startActivity(). If the result is null, you should not use the intent and, if possible, you should
disable the feature that issues the intent.
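A sketch of such a guarded implicit intent (the shared text is illustrative):
// Inside an Activity: send text implicitly, but only if some app can handle the intent.
Intent sendIntent = new Intent(Intent.ACTION_SEND);
sendIntent.putExtra(Intent.EXTRA_TEXT, "Text to share");
sendIntent.setType("text/plain");
if (sendIntent.resolveActivity(getPackageManager()) != null) {
    startActivity(sendIntent);
}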
Note: In this case, a URI is not used, but the intent's data type is declared to specify the content
carried by the extras.
When startActivity() is called, the system examines all of the installed apps to determine
which ones can handle this kind of intent (an intent with the ACTION_SEND action and that carries
"text/plain" data). If there's only one app that can handle it, that app opens immediately and is
given the intent. If multiple activities accept the intent, the system displays a dialog so the user
can pick which app to use.
Introduction
Android provides several options for you to save persistent application data. The solution you
choose depends on your specific needs, such as whether the data should be private to your
application or accessible to other applications (and the user) and how much space your data
requires. In this example we are using Intent, Internal Storage and External Storage.
Intent: An Intent provides a facility for performing late runtime binding between the code in
different applications. Its most significant use is in the launching of activities, where it can be
thought of as the glue between activities. It is basically a passive data structure holding an
abstract description of an action to be performed.
You can save files directly on the device's internal storage. By default, files saved to the internal
storage are private to your application and other applications cannot access them (nor can the
user). When the user uninstalls your application, these files are removed.
To Write File: Call openFileOutput() with the name of the file and the operating mode. This
returns a FileOutputStream. Write to the file with write(). Close the stream with close().
MODE_PRIVATE will create the file (or replace a file of the same name) and make it private to
your application. Other modes available are: MODE_APPEND, MODE_WORLD_READABLE, and
MODE_WORLD_WRITEABLE.
Every Android-compatible device supports a shared "external storage" that you can use to
save files. This can be a removable storage media (such as an SD card) or an internal (non-
removable) storage. Files saved to the external storage are world-readable and can be modified
by the user when they enable USB mass storage to transfer files to a computer. Writing to this
path requires the WRITE_EXTERNAL_STORAGE permission.
Before you do any work with the external storage, you should always call
getExternalStorageState() to check whether the storage is available. This example checks
whether the external storage is available to read and write. The getExternalStorageState()
method returns other states that you might want to check, such as whether the media is being
shared (connected to a computer), is missing entirely, has been removed improperly, etc. You can use
these to notify the user with more information when your application needs to access the media.
private boolean isSdReadable() {
    boolean mExternalStorageAvailable = false;
    try {
        String state = Environment.getExternalStorageState();
        if (Environment.MEDIA_MOUNTED.equals(state)) {
            // We can read and write the media
            mExternalStorageAvailable = true;
            Log.i("isSdReadable", "External storage card is readable.");
        } else if (Environment.MEDIA_MOUNTED_READ_ONLY.equals(state)) {
            // We can only read the media
            Log.i("isSdReadable", "External storage card is readable.");
            mExternalStorageAvailable = true;
        } else {
            // Something else is wrong. It may be one of many other
            // states, but all we need to know is we can neither read nor write
            mExternalStorageAvailable = false;
        }
    } catch (Exception ex) {
        ex.printStackTrace();
    }
    return mExternalStorageAvailable;
}
Code Part
The function readSDCardFileOption() reads a file from SD card memory. In this function the
file location is used directly, which is "file:///sdcard/my.html"; an Intent is then used to open
the browser and load the HTML file from SD card memory.
The next function, readInternalStorageOption(), reads a file from internal memory. In this
function the file location is obtained by calling
getApplication().getFilesDir().getAbsolutePath() and then appending the file separator
and the file name. An Intent is then used to open the browser and load the HTML file from
internal memory.
try {
    String sfilename = "my.html";
    FileOutputStream fos = this.openFileOutput(sfilename,
            Context.MODE_PRIVATE | Context.MODE_WORLD_READABLE);
    fos.write(html.getBytes());
    fos.flush();
    fos.close();
    Toast.makeText(getBaseContext(),
            "Write file in internal memory 'my.html'",
            Toast.LENGTH_SHORT).show();
} catch (Exception e) {
    e.printStackTrace();
}
Finally, the function writeFileOnSDCard() writes a file to external memory (the SD card). To
find the memory status of the Android phone the method isSdReadable() is used. Method
isSdReadable() returns a boolean value: if external storage is available it returns true,
otherwise it returns false.
try {
    if (isSdReadable()) {
        String fullPath = Environment.getExternalStorageDirectory()
                .getAbsolutePath();
        File myFile = new File(fullPath + File.separator + "my.html");
        FileOutputStream fos = new FileOutputStream(myFile);
        fos.write(html.getBytes());
        fos.close();
    }
} catch (Exception e) {
    e.printStackTrace();
}
A Thread is a concurrent unit of execution. It has its own call stack for methods being invoked,
their arguments and local variables. Each application has at least one thread running when it is
started, the main thread, in the main ThreadGroup. The runtime keeps its own threads in the
system thread group.
There are two ways to execute code in a new thread. You can either subclass Thread and
override its run() method, or construct a new Thread and pass a Runnable to the constructor.
In either case, the start() method must be called to actually execute the new Thread.
Each Thread has an integer priority that affects how the thread is scheduled by the OS. A new
thread inherits the priority of its parent. A thread's priority can be set using the
setPriority(int) method.
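A brief sketch of both approaches (doWork() stands for whatever background work the application
needs and is assumed to be defined elsewhere):
// Option 1: pass a Runnable to a new Thread.
Thread viaRunnable = new Thread(new Runnable() {
    @Override
    public void run() {
        doWork();   // assumed background task
    }
});
viaRunnable.setPriority(Thread.MIN_PRIORITY);   // optionally adjust the scheduling priority
viaRunnable.start();

// Option 2: subclass Thread (here as an anonymous subclass) and override run().
Thread viaSubclass = new Thread() {
    @Override
    public void run() {
        doWork();
    }
};
viaSubclass.start();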
Mobile application testing is a process by which application software developed for hand held
mobile devices is tested for its functionality, usability and consistency. Mobile application
testing can be automated or manual type of testing. Mobile applications either come pre-installed
or can be installed from mobile software distribution platforms. Mobile devices have witnessed a
phenomenal growth in the past few years. A study conducted by the Yankee Group predicts the
generation of $4.2 billion in revenue by 2013 through 7 billion U.S. smartphone app downloads.
MORE NOTES:
With the rampant use of mobiles, it has become mandatory for developers to come up with various
kinds of software and, needless to say, they cannot be released without checking their
functionality first, which is the entire purpose of testing. The software is tested for all possible
problems and in all kinds of environments so that flaws can be found and fixed. At one point
applications were checked manually, but over time testing has shifted from manual methods to
using software and tools. Manual evaluation has many drawbacks: it is cumbersome, time
consuming and pricey. Evaluation has therefore evolved towards automated testing, making
things easier, faster and clearer for everybody involved. Some of the advantages of this method
are listed here.
Saving time- For most people this is the biggest advantage. These days everyone is racing
towards the finish line, with many things to do and little time, which is why automated testing
has been a blessing for many teams. This is especially true for regression testing, which involves
retesting an application whenever new features are introduced or existing features are changed.
Such changes may result from fixing defects, refactoring, etc. Since regression evaluation aims to
make sure that the application still functions as expected, all the test scripts must be run. Time
constraints often limit the number of tests that can be run manually, but with automation the
time issue can be easily resolved, letting you focus your attention on other relevant areas.
Repeatability- You can re-run exactly the same tests in exactly the same manner, which
eliminates the risk generally associated with human error. Errors in manual testing may mean
that defects go unidentified. This method also eliminates the risk of reporting invalid bugs,
which wastes the time of both testers and developers.
Speed- Tests run by tools are much faster than those conducted manually by humans, which
again adds to the time-saving factor.
Increasing coverage- Test suites or software created for the evaluation are often created in a
manner that each and every feature in the application is covered, making them comprehensive.
Better quality- More tests are run with fewer resources and in less time, greatly increasing the
quality of the application.
Better understanding- This approach leads testers to get an intimate and rich understanding of
content, structure, logic, data and flow of the application. This is because information is
presented visually which makes it very easy for the human mind to interpret and understand the
details.
Cost reduction- Manual evaluation often involves a lot of costly resources; you can save money
as well as precious resources by employing this kind of evaluation.
There are some disadvantages associated with the process as well. Creating automated tests
requires patience and proficiency, and the test scripts have to be debugged properly to ensure
that they work in the desired manner. However, the advantages are far too many and too
powerful to ignore this form of testing. The process leads testers to craft tests which are
consistent, thorough, efficient and accurate. In turn the app's quality is improved, delighting the
users. Web-based software testing teams can help you in testing your application within
allocated budgets and time schedules.
1. Variety of Mobile Devices- Mobile devices differ in screen sizes, input methods
(QWERTY, touch, normal) with different hardware capabilities.
2. Diversity in Mobile Platforms/OS- There are different mobile operating systems in the
market. The major ones are Android, iOS, BREW, Brew MP, Symbian, Windows Phone,
and BlackBerry (RIM). Each operating system has its own limitations. Testing a single
application across multiple devices running on the same platform and every platform poses a
unique challenge for testers.
1 Functional Testing - Functional testing ensures that the application is working as per the
requirements. Most of the tests conducted for this are driven by the user interface and call
flows.
2 Laboratory Testing - Laboratory testing, usually carried out by network carriers, is done
by simulating the complete wireless network. This test is performed to find out any
glitches when a mobile application uses voice and/or data connection to perform some
functions.
3 Performance Testing - This testing process is undertaken to check the performance and
behavior of the application under certain conditions such as low battery, bad network
coverage, low available memory, simultaneous access to application’s server by several
users and other conditions. Performance of an application can be affected from two sides:
Application’s server side and client’s side. Performance testing is carried out to check
both.
4 Memory Leakage Testing - Memory leakage happens when a computer program or
application is unable to manage the memory it is allocated resulting in poor performance
of the application and the overall slowdown of the system. As mobile devices have
significant constraints of available memory, memory leakage testing is crucial for the
proper functioning of an application.
5 Interrupt Testing - An application while functioning may face several interruptions like
incoming calls or network coverage outage and recovery. The different types of
interruptions are:
Incoming and Outgoing SMS and MMS
Incoming and Outgoing calls
Incoming Notifications
Battery Removal
Cable Insertion and Removal for data transfer
Network outage and recovery
Media Player on/off
Device Power cycle
Testing tools
Some tools that are being used to test code quality in general for mobile applications are as
follows:
For Android
1 Android Lint - This is integrated with the Eclipse IDE for Android. It points out potential
bugs and performance problems.
2 Find Bugs - This is an open source library for static analysis in Java code.
3 Maveryx - Maveryx for Android is an automated testing tool for functional, regression,
GUI, and data-driven testing of Android mobile applications.
For iPhone
1. Clang Static Analyzer - An open source tool for running static analysis for iPhone code.
2. Analyze code from XCode - done during compile time.
A quick look at new malware threats discovered in the wild shows that mobile operating systems
such as iOS and (especially) Android are increasingly becoming targets for malware, just as
Windows, MacOS, and Linux have been for years. Anybody who wants to use a mobile device
to access the Internet should install and update antimalware software for his or her smartphone or
tablet. This goes double for anyone who wants to use such a device for work.
Most experts recommend that all mobile device communications be encrypted as a matter of
course, simply because wireless communications are so easy to intercept and snoop on. Those
same experts go one step further to recommend that any communications between a mobile
device and a company or cloud-based system or service require use of a VPN for access to be
allowed to occur. VPNs not only include strong encryption, they also provide opportunities for
logging, management and strong authentication of users who wish to use a mobile device to
access applications, services or remote desktops or systems.
Many modern mobile devices include local security options such as built-in biometrics —
fingerprint scanners, facial recognition, voiceprint recognition and so forth — but even older
devices will work with small, portable security tokens (or one-time passwords issued through a
variety of means such as email and automated phone systems). Beyond a simple account and
password, mobile devices should be used with multiple forms of authentication to make sure that
possession of a mobile device doesn't automatically grant access to important information and
systems.
Likewise, users should be instructed to enable and use passwords to access their mobile devices.
Companies or organizations should consider whether the danger of loss and exposure means that
some number of failed login attempts should cause the device to wipe its internal storage clean.
(Most modern systems include an ability to remotely wipe a smartphone or tablet, but mobile
device management systems can bring that capability to older devices as well.)
Companies or organizations that issue mobile devices to employees should establish policies to
limit or block the use of third-party software. This is the best way to prevent possible
compromise and security breaches resulting from intentional or drive-by installation of rogue
software, replete with backdoors and "black gateways" to siphon information into the wrong
hands.
For BYOD management, the safest course is to require such users to log into a remote virtual
work environment. Then, the only information that goes to the mobile device is the screen output
from work applications and systems; data therefore doesn't persist once the remote session ends.
Since remote access invariably occurs through VPN connections, communications are secure as
well — and companies can (and should) implement security policies that prevent download of
files to mobile devices.
It's important to understand what kinds of uses, systems and applications mobile users really
need to access. Directing mobile traffic through special gateways with customized firewalls and
security controls in place — such as protocol and content filtering and data loss prevention tools
— keeps mobile workers focused on what they can and should be doing away from the office.
This also adds protection to other, more valuable assets they don't need to access on a mobile
device anyway.
6. Choose (or Require) Secure Mobile Devices, Help Users Lock Them Down
Mobile devices should be configured to avoid unsecured wireless networks, and Bluetooth
should be hidden from discovery. In fact, when not in active use for headsets and headphones,
Bluetooth should be disabled altogether. Prepare a recommended configuration for personal
mobile devices used for work — and implement such configurations before the intended users
get to work on their devices.
At least once a year, companies and organizations should hire a reputable security testing firm to
audit their mobile security and conduct penetration testing on the mobile devices they use. Such
firms can also help with remediation and mitigation of any issues they discover, as will
sometimes be the case. Hire the pros to do unto your mobile devices what the bad guys will
sooner or later try to do unto you, and you'll be able to protect yourself from the kinds of threats
they can present.
While mobile security may have its own special issues and challenges, it's all part of the security
infrastructure you must put in place to protect your employees, your assets and, ultimately, your
reputation and business mission. By taking appropriate steps to safeguard against loss and
mitigate risks, your employees and contractors will be able to take advantage of the incredible
benefits that mobile devices can bring to the workplace.
Just remember the old adage about an ounce of prevention. That way, you're not saddled with
costs or slapped with legal liabilities or penalties for failing to exercise proper prudence,
compliance and best practices.
Risky business:
When a mobile device is lost or stolen, any business data it contains is jeopardized. Laws, such
as California SB1386 (and similar laws introduced in 35 states last year), require companies to
notify individuals whose private information may have been compromised. And businesses that
violate industry mandates like HIPAA and GLBA face hefty fines or even jail time. But many
companies cannot even enumerate the data carried by lost or stolen mobile devices.
A growing number of workers are using PDAs and smartphones to access business networks and
applications. In the Nokia study, commonly-used mobile applications included e-mail, instant
messaging, corporate database access, sales force automation, field service, CRM and
ERP/supply chain applications. Companies without mobile-specific applications may still face
mobile exposure through traditional applications. For example, many employees synchronize
company e-mail onto PDAs or forward messages to smartphones. Therefore, if lost or stolen,
these devices can be used to gain unauthorized access to an otherwise private network and
applications therein.
Additionally, many mobile devices now support multiple wireless interfaces, creating new attack
vectors. Mobile phones with Bluetooth can be "BlueBugged" (used by an attacker to place calls)
or "BlueSnarfed" (accessed to retrieve contacts and calendars). Cradled PDAs can become Wi-Fi
bridges into corporate networks. When used correctly, wireless interfaces can aid productivity,
but safeguards are needed to prevent misuse or attack.
To manage these risks, companies need to define which mobile devices are allowed and under
what conditions. They should place limits on network and application access, and on business
data storage and transfer. Security measures and practices should be required, and processes
defined to monitor and enforce compliance.
These decisions should be documented in a mobile device security policy -- a formal statement
of the rules by which mobile devices must abide when accessing business systems and data. Such
policies may include the following sections:
1. Objective: Identify the company, organizational unit and business purpose of the policy.
For example, the intent of the policy may be to prevent disclosure of company-
confidential data when transferred to or stored on PDAs and mobile phones, no matter
who owns those devices.
2. Ownership and authority: Identify those responsible for policy creation and
maintenance (development team), those responsible for policy monitoring and
enforcement (compliance team), and those responsible for policy approval and
management oversight (the policy's owners).
3. Scope: Identify the users/groups and devices that must adhere to this policy when
accessing business networks, services and data. Enumerate the mobile device models and
minimum OS versions allowed to access or store business data. Identify the
organizational units that are (or are not) permitted to do so. For example, you may forbid
business data storage on unapproved devices, or you may require users to register
personal devices before using them for business.
4. Risk assessment: Identify the business data and communication covered by this policy --
your company assets that may be placed at risk by mobile devices. For each asset,
identify threats and business impacts, taking into consideration both probability and cost.
For example, when a mobile device is lost, hardware replacement is probably just a small
fraction of the impact. If your risk assessment determines that data carried by a mobile
device is more valuable than the device itself, this may lead you to focus on data backup
and confidentiality as your top priority.
5. Security measures: Identify recommended and required mobile security measures and
practices, including:
o Power-on authentication to control lost/stolen device use
o File/folder encryption to prevent unauthorized data disclosure
o Backup and restore to protect against business data loss or corruption
o Secure communication to stop eavesdropping and backdoor network access
o Mobile firewalls to inhibit wireless-borne attacks against devices
o Mobile antivirus and IDS to detect and prevent device compromise
For example, your policy may mandate authentication, specifying the minimum length
and complexity for passwords and any applications that are excluded from authentication
(e.g., accepting incoming phone calls without entering a password). Your policy may also
define a process for mobile password reset that is convenient yet safe for users who
cannot easily return to the office.
6. Acceptable usage: Define what users must do to comply with this policy, including
procedures required for device registration, security software download and installation,
and policy configuration and update. Enumerate best practices that users are required to
follow, including banned activities. If users understand what they can and cannot do and
why, they will be less frustrated and more likely to comply with stated policy.
For example, you may implement a mobile security system that automatically detects any
PDA cradled to a corporate desktop. That system may prompt the user for self-
registration and then push security software and policy onto the PDA. Your policy might
explain this procedure and require that users cradle any purchased PDA to their office
desktop before using it to store business data. It might also describe unauthorized use that
will be blocked, like beaming business data over Bluetooth or copying data to removable
storage.
7. Deployment process: Define how you plan to implement and verify your mobile security
policy. It is a good idea to begin with a trial, taking both your mobile security software
and defined procedures out for a test drive with a small group of users. Many security
policies fail because they prove impractical to deploy or use. Working out these kinks
before requiring everyone to follow your policy will increase voluntary compliance and
overall effectiveness. Don't forget to include training for administrators and users in your
deployment process.
8. Auditing and enforcement: Voluntary compliance is nice, but insufficient for truly
managing business risk. Effective policies ensure compliance through monitoring and
enforcement. For example, you may adopt a mobile security system that checks for a
correctly-configured security agent whenever a PDA or phone is synchronized over-the-
air or cradled. Be sure to consider all points of network entry (e.g., e-mail server, VPN
gateway, Wi-Fi AP, desktop PC cradle), and define a business process to deal with non-
compliance and intrusion. Some mobile security systems can hard-reset devices that have
been stolen or appear to be under attack, but your policy should clearly define the
conditions under which this potentially destructive step will be invoked.
Submit: Applications are auto-submitted using APIs or interactively via a simple web interface
to our cloud-based platform.
Analyze: Dozens of analyses are performed, both statically, to identify how the application
works, and dynamically, as the application runs in a sandbox, to identify hundreds of code
vulnerabilities and risky app behaviors.
Quantify: Advanced machine learning technology generates a risk rating for each application by
comparing its behavioral profile to millions of data points from known applications, both
malicious and safe.
Inform: Our static and behavioral intelligence informs your policy development process, an
important step for mobile application security programs. Our policy engine provides
administrators with the ability to design and test rules before they are deployed for business
units, geographies or workgroups.
Enforce: Integrate intelligence from our cloud-based platform with leading MDM solutions such
as IBM/Fiberlink, MobileIron and VMware/AirWatch, or with custom in-house solutions via
APIs, to enforce policies on end-user devices and enterprise app stores.
It’s when mobile computing processing is done in cloud; data is stored in cloud, and the mobile
device used as an output device.
Security strategies
To mitigate those risks and avoid those losses, pay close attention to mobile security best
practices. Here are the Top 5 building blocks of every effective mobile security strategy:
Collaborate
First things first: get everyone in a room. And I do mean everyone. A comprehensive security
strategy relies on the input, coordination and participation of all departments, not just IT.
Similarly, vulnerabilities impact every department. If employee data is on the Web, that is not
just an IT problem, that’s an HR problem. If secure patents are exposed through insecure file
sharing, legal needs to know about it. Sensitive financial information being emailed? That could
impact anyone and everyone. Every effective strategy is also a collective strategy, and
developing that strategy starts by getting HR, legal, financial, operations, IT and executive
leadership together to start talking about where your data is and where it is going.
Evaluate
Building an effective strategy starts with conducting an assessment. When people start to
appreciate just how many places their data is today, they are generally appalled. Audits and risk
assessments can help determine not just how easy it is to access that data, but also where the data
is, and what it’s being used for: key prerequisites to developing an effective security strategy.
Revamp
Remediate the fixable issues and plug the obvious technical and procedural leaks. This is
actually the easy part. As a general rule, consult with a trained security professional and fix the
easy stuff first. Most damaging data theft/losses are not the result of next-generation hackers, but
of careless mobile use, broken business practice or avoidable user error.
While the technical details are a big piece of the mobile security puzzle, policies and procedures
are equally – if not more – important. Things like document limitations, mobile device
“hygiene”, mobile ID access limits and responsible password practices are critical to an effective
strategy. Employee training and education, and subsequent ongoing management, monitoring
and review of those practices, are not only the best way to structure a sustainable mobile security
strategy, but also help minimize your exposure and reduce your liability to litigation.
Anticipate
No mobile security plan is complete without a worst-case scenario response plan. A disaster
recovery/response plan is not only wise; it is necessary, because even the most robust security
protocols cannot guarantee 100% security. Not all mobile security is related to data loss; it could
be as simple as mobile misuse, such as an employee tweeting something inappropriate, sensitive
or profane. An effective response should include media materials and clear strategies for
communications, messaging and response that encompass a range of potential scenarios.
Impermeable mobile security is a pipe dream. But if you design a responsive and responsible
strategy based on the above priorities, you can dramatically improve your level of mobile
protection and greatly decrease the chances that you will suffer a truly damaging loss of sensitive
data.
Mobile applications can have complicated threat models, so security testing needs to examine a
number of different aspects of these systems. There are three major types of security testing tools
to look into for mobile app security testing: static, dynamic and forensic. Comprehensive testing
programs should use a combination of these vendor-provided and third-party tools.
Static
Static testing tools look at the application while at rest -- either the source code or the application
binary. These can be good for identifying certain types of vulnerabilities in how the code will run
on the device, usually associated with dataflow and buffer handling. Some commercial static
security analysis tools and services have the capability to test mobile application code. It is
important to work with the vendor to get a clear understanding of exactly what types of
vulnerabilities can and cannot be identified, because most security static analysis tools were
originally optimized for testing Web-based applications.
In Android environments, tools exist that can both extract DEX assembly code and recover
Java source code from Android applications. Examples of these tools include DeDexer, which
generates DEX assembly code from an Android DEX application binary, and dex2jar, which
converts DEX application binaries to standard Java JAR files. Standard Java analysis tools such
as FindBugs can then be used to analyze these JARs. In addition, the Java bytecode can be
converted back into Java source code with Java decompilers such as JD-GUI. This sets the stage
for manual security analysis of an Android app.
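As a rough illustration of how these preparation steps can be chained together, the following Python sketch (not an official tool; it assumes the d2j-dex2jar.sh launcher and the FindBugs command-line tool are installed and on the PATH, and uses a hypothetical APK filename) converts an Android package to a JAR and runs a standard Java analyzer over the result:

import subprocess
from pathlib import Path

APK = Path("target-app.apk")      # hypothetical recovered application package
JAR = APK.with_suffix(".jar")     # output of the DEX-to-JAR conversion

# 1. Convert the DEX bytecode inside the APK to a standard Java JAR (dex2jar).
subprocess.run(["d2j-dex2jar.sh", "-o", str(JAR), str(APK)], check=True)

# 2. Run a standard Java static analyzer (FindBugs) against the JAR and write
#    an XML report that an analyst can review or post-process.
subprocess.run(["findbugs", "-textui", "-xml:withMessages",
                "-output", "findbugs-report.xml", str(JAR)], check=True)

The decompilation step (for example with JD-GUI) is interactive and is typically run separately on the generated JAR.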
You'll find a set of scripts that automate many static security testing preparation tasks for both
iOS and Android at www.smartphonesdumbapps.com and the associated Google Code
repository.
Dynamic
Dynamic testing tools allow security analysts to observe the behavior of running systems in order
to identify potential issues. The most common dynamic analysis tools used in mobile app
security testing are proxies that allow security analysts to observe -- and potentially change --
communications between mobile application clients and supporting Web services. One example
of such a proxy tool is the OWASP Zed Attack Proxy. With proxy tools, security analysts can
reverse engineer communication protocols and craft potentially malicious messages that would
never be sent by legitimate mobile clients. These crafted messages can then be used to attack the
server-side resources that are a critical component of any nontrivial mobile application system.
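For illustration only, the sketch below shows one simple way an analyst might drive test traffic through a local intercepting proxy such as OWASP Zed Attack Proxy (assumed here to be listening on its default 127.0.0.1:8080 address; the API URL is a hypothetical example) so that requests can be recorded, replayed and tampered with:

import urllib.request

# Route all requests from this script through the intercepting proxy.
proxy = urllib.request.ProxyHandler({
    "http":  "http://127.0.0.1:8080",
    "https": "http://127.0.0.1:8080",
})
opener = urllib.request.build_opener(proxy)

# Replay a request the mobile client would normally make; the proxy records it
# for inspection, fuzzing or manual modification before it reaches the server.
with opener.open("http://api.example.com/v1/account", timeout=10) as resp:
    print(resp.status, resp.read()[:200])

Note that intercepting HTTPS traffic additionally requires the test client (or device) to trust the proxy's root certificate.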
Forensic
Forensic tools allow security analysts to examine artifacts that are left behind by an application
after it has been run. Common things analysts might look for include hard-coded passwords or
other credentials stored in configuration files, sensitive data stored in application databases and
unexpected data stored in Web browser component caches. Analysts can also use forensic tools
to look at how components of mobile applications are stored on the device to determine if
available operating system access control facilities have been properly used.
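As a minimal sketch of the access-control check described above (assuming the application's data directory has been copied off a test device with its permissions preserved; the path below is a hypothetical example), an analyst might flag any files left readable or writable by other users:

import os
import stat

RECOVERED_DIR = "recovered/data/data/com.example.app"   # hypothetical path

for root, _dirs, files in os.walk(RECOVERED_DIR):
    for name in files:
        path = os.path.join(root, name)
        mode = os.stat(path).st_mode
        # Flag anything world-readable or world-writable.
        if mode & (stat.S_IROTH | stat.S_IWOTH):
            print("World-accessible file:", path, stat.filemode(mode))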
The SQLite database engine is available natively on both iOS and Android systems and is a
common way for app developers to store data in a familiar relational database-like environment.
Utilities such as the SQLite Database Browser can be used to examine SQLite database files
once they have been recovered from a target system.
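In the same spirit, the following short Python sketch (using the standard sqlite3 module; the database filename and the keyword list are illustrative assumptions) lists each table and its columns in a recovered SQLite file and flags column names that suggest stored credentials or other sensitive data:

import sqlite3

DB_FILE = "recovered/app_data.db"    # hypothetical recovered database file
SUSPICIOUS = ("password", "passwd", "token", "secret", "card", "ssn")

con = sqlite3.connect(DB_FILE)
for (table,) in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"):
    columns = [row[1] for row in con.execute(f"PRAGMA table_info({table})")]
    flagged = [c for c in columns if any(s in c.lower() for s in SUSPICIOUS)]
    print(table, columns, "<-- review" if flagged else "")
con.close()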
The following is an extensive library of security articles and guides meant to serve as helpful,
informative resources on a range of topics, from web application security to information and
network security to mobile and internet security.
Application Testing Tools: Application testing is an important part of securing your enterprise.
By identifying vulnerabilities in software before it is deployed or purchased, web application
testing tools help ward off threats and the negative impact they can have on competitiveness and
profits.
Code Review Tools: Code review is an examination of computer source code. A code review
tool finds and fixes mistakes introduced into an application in the development phase, improving
both the overall quality of software and the developers' skills.
Penetration Testing: Penetration testing tools are used as part of a penetration test to automate
certain tasks, improve testing efficiency and discover issues that might be difficult to find using
manual analysis techniques alone.
Security Review Software: The goal of a software security review is to identify and understand
the vulnerabilities that can be exploited in the code your organization leverages. Your business
may leverage software and code from a variety of sources, including internally developed
code, outsourced development and purchased third-party software.
Software Testing Tools: As the enterprise network has become more secure, attackers have
turned their attention to the application layer, which, according to Gartner, now contains 90
percent of all vulnerabilities. To protect the enterprise, security administrators must perform
detailed software testing and code analysis when developing or buying software.
Increased mobile capacity and speed using new modulation techniques (e.g. GSM)
Introduction of close-proximity (short-range) radio communication
Introduction of 4G
GPS (Global Positioning System), commonly used in navigation
Access to WiMAX (Worldwide Interoperability for Microwave Access) infrastructure