Unit 2-2


UNIT-2

GENERIC UI DEVELOPMENT:

The Generic User Interface (Generic UI, GUI) framework allows you to create UI screens using
Java and XML. XML is optional but it provides a declarative approach to the screen layout and
reduces the amount of code which is required for building the user interface.

The application screens consist of the following parts:


● Descriptors – XML files for declarative definition of the screen layout and data
components.

● Controllers – Java classes for handling events generated by the screen and its UI controls
and for programmatic manipulation with the screen components.
The code of application screens interacts with visual component interfaces (VCL Interfaces).
These interfaces are implemented using the Vaadin framework components.
Visual Components Library (VCL) contains a large set of ready-to-use components.
Data components provide a unified interface for binding visual components to entities and for
working with entities in screen controllers.
Infrastructure includes the main application window and other common client mechanisms.

BUILDING GENERIC USER INTERFACES


As mentioned, the reason for building a generic user interface for mobile systems is the wide
variety of devices and user interfaces that an application might need to support. The idea here
is to layer the different parts of the user interface, build a generic user interface, and then
specialize it to a given device or type of user interface using a suitable mechanism such as
XSLT. Extensible Stylesheet Language Transformations (XSLT), better known as XSL
transformations, is a language for transforming Extensible Markup Language (XML)
documents into other structured documents. This is done by using a style sheet defining
template rules for transforming a given input XML document into an appropriate output
document with the help of an XSL processor. The following presents the applications that can
benefit from a layered user interface approach:
1. Applications that change frequently: Many applications change very frequently. Such constant
changing of state may be caused by the business model that the application serves or a variety
of other reasons.
2. Applications that support a wide variety of devices: as the developer should be well aware,
mobile applications need to support a variety of device types.
3. Applications that must have many loosely coupled parts: One of the advantages of building a
generic user interface to a system is that it enables loose coupling between the user interface
components themselves and among the look-and-feel components, interaction logic, and
application flow logic.
4. Applications that offer multiple user interfaces with a range of complexity: A good reason to
justify building systems with generic user interfaces is the requirement of supporting multiple
user interfaces, each with some difference in the required feature sets.

So, there is no single solution to all problems, or even to all problems of the same type, as is the
case with the mobile application development problem. Assess the needs of the application user,
the available budget, and all other considerations before choosing an architectural solution to
implement. It is quite possible that your system may not require a generic user interface: unusual
performance requirements, the static nature of the application, or the need to support only a
restricted set of devices may make the layered approach unnecessary.

MULTIMODAL UI

Multimodal interfaces support user input and processing of two or more modalities, such as
speech, pen, touch and multi-touch, gestures, gaze, and virtual keyboard. These input
modalities may coexist on an interface, but be used either simultaneously or alternately.
The input may involve recognition-based technologies (e.g., speech, gesture), simpler discrete
input (e.g., keyboard, touch), or sensor-based information. Some of these modalities may be
capable of expressing semantically rich information and creating new content (e.g., speech,
writing, keyboard), while others are limited to making discrete selections and controlling the
system display (e.g., touching a URL to open it, pinching to shrink a visual display). As will be
discussed in this chapter, there are numerous types of multimodal interfaces with different
characteristics that have proliferated during the past decade. This general trend toward
multimodal interfaces aims to support more natural, flexible, and expressively powerful user
input to computers, compared with past keyboard-and-mouse interfaces that are limited to
discrete input.
TREE STRUCTURE OF MULTIMODAL UI

Definition and types of multimodal interfaces:

Multimodal Interfaces
support input and processing of two or more modalities, such as speech, pen, touch and multi-
touch, gestures, gaze, and virtual keyboard, which may be used simultaneously or alternately.
User input modes can involve recognition-based technologies (e.g., speech) or discrete input (e.g.,
keyboard, touch). Some modes may express semantically rich information (e.g., pen, speech,
keyboard), while others are limited to simple selection and manipulation actions that control the
system display (e.g., gestures, touch, sensors).

Fusion-based Multimodal Interfaces


co-process information from two or more input modalities. They aim to recognize naturally
occurring forms of human language and behavior, and incorporate one or more recognition-
based technologies (e.g., speech, pen, vision). More advanced ones process meaning from two
modes involving recognition-based technologies, such as speech and gestures, to produce an
overall meaning interpretation (see chapter 8). Simpler ones jointly process information from
one mode or sensor (e.g., pointing/touching an object, location while mobile) to constrain the
interpretation of another recognition-based mode (e.g., speech).

Alternative-Mode Multimodal Interfaces


provide an interface with two or more input options, but users enter information with one
modality at a time and the system processes each input individually.

Multimodal Interfaces for Content Creation


incorporate high-bandwidth or semantically rich modes, which can be used to create, modify,
and interact with system application content (e.g., drawing a lake on a map). They typically
involve a recognition-based technology.
Multimodal Interfaces for Controlling the System Display
incorporate input modes limited to controlling the system or its display, such as zooming or
turning the system on, rather than creating or interacting with application content. Examples
include touch and multi-touch (e.g., for selection), gestures (e.g., flicking to paginate), and
sensor-based controls (e.g., tilt for angle of screen display).

Active Input Modes


are ones that are deployed by the user intentionally as explicit input to a computer system (e.g.,
speaking, writing, typing, gesturing, pointing).

Passive Input Modes


refer to naturally occurring user behavior or actions that are recognized and processed by the
system (e.g., facial expressions, gaze, physiological or brain wave patterns, sensor input such
as location). They involve user or contextual input that is unobtrusively and passively monitored
without requiring any explicit user command to a computer.

Temporally Cascaded Multimodal Interfaces


are ones that process two or more user modalities that tend to be sequenced in a particular
temporal order (e.g., gaze, gesture, speech), such that partial information supplied by
recognition of an earlier mode (e.g., gaze) constrains interpretation of a later one (e.g., manual
selection), which then may jointly constrain interpretation of a third mode (e.g., speech). Such
interfaces may combine active input modes, passive ones, or both types of input.

Multimodal-Multisensor Interfaces
combine one or more user input modalities with sensor information that involves passive input
from contextual cues (e.g., location, acceleration, proximity, tilt) that a user does not need to
consciously engage. They aim to incorporate passively tracked sensor input to transparently
facilitate user interaction, which may be combined with an active input mode (e.g., speech) or a
passive one (e.g., facial expression). The type and number of sensors incorporated into
multimodal interfaces have been expanding rapidly on cell phones, in cars, robots, and other
applications, resulting in explosive growth of multimodal-multisensor interfaces.

Visemes
refer to the classification of visible lip movements that correspond with audible phonemes
during continuous articulated speech. Many audio-visual speech recognition systems co-process
these two sources of information multimodally to enhance the robustness of recognition.
Multichannel User Interfaces

Applicability parameters allow a model to use multiple UIs, each targeted to a different channel
of use.

Applicability helps you present the UI that's most appropriate to the context.

• You may need to configure the same model in multiple host applications, each having different
UI requirements.

• Host application A is used by self-service customers with elementary knowledge of your
product line. You might need to present a simplified UI for Product X that guides the user
through each step of the configuration, and hides some product details that might be confusing.

• Host application B is used by internal sales fulfillment staff who are very familiar with your
product line. You might need to present a full-featured UI for Product X that exposes every
option, in a layout that enables users to reach those options most efficiently.

• You may need to present the same product to the same type of audience, but in different
countries. Consequently, you need to present the UI in multiple languages.

To provide for such multiple requirements, you can set the applicability parameters for a UI.

Setting Applicability Parameters


On the Overview tab for the UI, you can choose the applications and languages for which your
user interface is applicable.

1. Edit your configurator model and navigate to the Overview subtab of the User Interfaces tab.

2. Under Applicability, select a parameter:

• Applications sets the applications that the UI will be used for. For example, if you select
Order Management, then the UI will be presented when Configurator is invoked by Oracle
Fusion Order Management.

• Languages sets the languages that the UI will be used for. For example, if you select Korean
and American English, then the UI will be presented when Configurator is invoked by
applications using one of those languages.

3. The default setting for each parameter is All, meaning that the UI is available at run time to all
channels.

4. Select the Selected setting. The Select button becomes enabled.

By default, the currently selected parameter is None. If you leave the setting as None, then the
UI will not be available at run time to any of that parameter's options. If no UIs are available,
then the default UI is used.

5. Click the Select button. The selection dialog box for the parameter presents a list of available
options, from which you select one or more to determine the applicability of the UI.

6. If more than one UI has the same applicability parameter settings, then the sequence of UIs
in the table on the User Interfaces tab determines which UI will be used at run time.
To change the sequence in the table of UIs, select a UI and then select one of the Move commands
on the Actions menu.

GESTURE BASED UI:

Gesture-based UI refers to using specific physical gestures in order to operate an interface. Take
your smartphone for instance. You can already interact with your phone without using the
keypad by swiping, tapping, pinching, and scrolling. The latest smart devices also allow for
“touchless” gestures where users can scroll or click without ever touching the screen.
With some embedded GUIs, you can simply tilt or shake the device to engage with it using built-
in accelerometers, gyroscopes, or magnetic sensors. Some products on the market today also
support advanced camera and sensor technology that pick up facial expressions and eye
movements to scroll, click, and interact.

Understanding Gestures for UI Design


Tapping, swiping, dragging, long-pressing – these are but a few of the gestures that have come to
dominate our digital experiences. Touch screen iPhones mainstreamed mobile gestures years
ago, and we haven’t looked back since.

Gestures affect how we interact with interfaces, including phones, laptops and iPads. But we
don’t have to look far to find a gestural interface beyond our work and entertainment devices.
It’s no longer uncommon to use gestures when interacting with car screens or bathroom sinks.
Natural User Interfaces (NUIs) are so natural to users that the interface feels, and sometimes is,
invisible, like a touch screen interface. Some NUIs even use gesture control, allowing users to
interact with the interface without direct physical contact. BMW recently released a gesture
control feature that gives users touchless control over car volume, calls and more.
Gestures are growing more common in user interface design and play increasingly complex roles
in our everyday lives.
As technology advances, UX and UI designers and businesses will need to adapt. You don’t have
to know all the technological intricacies or have an in-depth knowledge of computer intelligence.
Still, you should have a basic understanding of the capabilities, functions and best design
practices for gesture technology.

What Makes a Good Gesture?


Gestures are a way of communicating. We’ve long used hand gestures and head nods to help
convey meaning, and now, gestures play a role in communicating with user interfaces.
Good gestures provide effective, efficient communication that aligns with our way of thinking.
Our thoughts and knowledge influence how we speak, and they influence our use of gestures,
especially in UI design. Consider how much easier it is for younger generations who grow up
around modern technology to pick up on gestures – or how the act of swiping mimics pushing or
wiping something away. It’s why understanding your users is essential, even in gesture design.
Gestures cross the barrier between the physical and digital realms, allowing us to interact with
digital media with our bodies. In some ways, it makes using digital applications more fun, but
this isn’t enough to make a gesture a good one.

Benefits of Gesture Technology


The wide use of gestural interfaces is due to the many benefits that come with them. Three of the
most significant benefits of gestures are cleaner interfaces, ease of use and improved task
completion.

1. Cleaner Interfaces
Humans consume more content than ever before, businesses use more data and technology
continues to provide more services. With this increase in content, it’s easy for interfaces and
displays to appear cluttered. Designers can use gestures to reduce the number of visual elements,
like buttons, that take up space.
2. Ease of Use
As discussed above, interactions become more natural with a gesture-based interface. The ease
of simple hand gestures allows us to use technology with minimal effort at maximum speed.

3. Improved Task Completion


Task completion rates and conversion rates increase when there’s less a user has to do to
complete a task. You’re more likely to finish a task when it takes less effort. A gesture-based
user interface capitalizes on this by making tasks simple and quick. They can even reduce the
number of steps it takes to complete a task.

Types of Gestures in UI Design


Design for touch has led to the development of many types of gestures, the most common of
which are tapping and swiping. There are three categories of gesture:

1. Navigational gestures (to navigate)


2. Action gestures (to take action)
3. Transform gestures (to manipulate content)

The following are some of the most common gestures across interfaces that all (or almost all)
users are familiar with, even if not consciously. We mention screens, but you can substitute the
screen for a touchpad or any other gesture interface.

Tap
A tap gesture is when you tap on the screen with one finger to open or select something, like an
app or page. Here’s a tip: Design clickable interface elements so that the entire box or row is
clickable – not just the text. Giving users more space increases usability.

Double-Tap
Double-tapping is when you tap the screen twice in a row in close succession. Many applications
use this gesture to zoom in, but on Instagram, users can double-tap a photo to like it.

Swipe
Swiping involves moving your finger across the screen in one direction, touching down on one
side and lifting your finger on the other. Swipe gestures are often used for scrolling or switching
between pages. Tinder uses swiping right to match with a profile and swiping left to pass over
one.

Multiple-Finger Swipe
You can also conduct a swipe gesture with two or three fingers. This is a common feature on
laptop touchpads that use two- and three-finger swipes for different actions.

Drag
Dragging uses the same general motion as a swipe, only you move your finger slower and don’t
lift it until you’ve pulled the object to where you want it to be. You use dragging to move an
item to a new location, like when re-organizing your phone apps.

Fling
Like swiping, a fling gesture is when you move your finger across the screen at a high speed.
Unlike a drag, your finger doesn’t remain in contact with an element. Flings are often used to
remove something from view.

Long Press
A long press is when you tap the screen but hold your finger down for longer than usual. Long
presses open up menu options, like when you hold text to copy it or hold down an app to delete
it.

Pinch
One of many two-finger gestures, a pinch is when you hold two fingers apart on the screen and
then drag them towards each other in a pinching motion. Pinch gestures are often used to zoom
back out after zooming in. Sometimes they present a view of all your open screens for navigation
purposes.

Pinch-Open or Spread
A pinch-open or spread gesture is the opposite of a pinch. You hold your two fingers down close
together and then spread them apart. Spreading, like double-tapping, is generally used to zoom
in.

Rotation
To do a rotation, press on the screen with two fingers and rotate them in a circular motion. The
best example of rotation is when you turn the map on Google Maps to see what’s around you.

Designing Gestures 101


Use What People Know
Gestures have been around for a while, so for most gestures, general guidelines exist.
And in most cases, there are rules you’ll want to follow when designing gestures for an interface.
When creating an app, for example, you’ll need to consider which interfaces users will use your
app on. There is the chance that users will download your app on Android and Apple phones,
both of which already use product-specific gestures. You’ll need to evaluate the gestures of your
product’s interfaces and decide how you’ll take advantage of them or if it’s worth it to add
gestures users are not familiar with.
Here are some handy gesture and motion guidelines for popular product interfaces.
● Google Gesture Guidelines
● Microsoft Gesture Guidelines
● Apple Gesture Guidelines
● Android Gesture Guidelines

When designing gesture-based user interfaces, it’s good practice to stick with what users know.
You can get creative if it’s called for, but a level of consistency among gestures and interfaces
helps keep them intuitive to users, increasing the usability of your product.
If you think a new gesture is in store, you need to test it extensively before implementing it.
You’ll conduct a series of user research methods to test the usability, effectiveness, learning
curves and user satisfaction with a gesture before releasing it to the public.
You have the option to reuse a well-known gesture for a different purpose, but again, you should
test the effectiveness of this strategy in advance. The benefit here is that users are at least
familiar with the motion.

Take, for example, Instagram’s use of the double-tap to like or “heart” a post. A double-tap is
usually used to zoom in, but it works well for Instagram’s purpose. It’s also a great study in
efficiency: Tapping the heart below a post requires one less tap but more aim. The alternative
double-tap method allows users to scroll faster since they have the whole image to aim for, and
it’s intuitive to tap the object you’re liking.

Designers have begun to develop a design language with hands, circles and arrows for
communicating gesture intent to product developers and strategists. This language is near
universal with minimal deviation.

Think Outside the Screen


Gestures exist in everyday scenarios outside of phone and laptop use. A growing number of
public restrooms have installed motion-sensitive sinks, air dryers and paper towel dispensers.
These devices also prevent the spread of germs – a nifty trait during flu season. Meanwhile, self-
driving cars are being equipped with gesture recognition technology to improve their
effectiveness and safety.
But you can still get creative with phone gestures while thinking outside of the screen. Devices
have been using rotation and shaking as methods of interaction for years now. For example,
Apple’s ‘Shake to Undo’ gives users the option to undo an action by shaking their phone. And by
now, you’re probably familiar with rotating screens horizontally to watch a video on full screen.
As long as they are tested first, creative gesture technologies can take products further and
increase usability.

Gestures and Accessibility


Gestures, like all things, should be accessible. Accessibility refers to making a product accessible
and usable to all people in all contexts, including people with disabilities. Gestures should adhere
to accessible design best practices to contribute to an equal environment, comply with the
Americans With Disabilities Act (ADA) and allow everyone who could benefit from your
product to use it.
Outside of making sure interface gestures are accessible, it’s worth considering how you can use
gestures to improve accessibility. Apple realized that the iPhone’s flat, textureless screens
presented an obstacle to blind users. So, they used their gesture-based interface to create
additional accessibility-based gestures that help the visually impaired use their products.
Where do we commonly see gesture-based UI?
The mobile device market isn’t the only industry that’s already using gesture-based interactions.
Gestural UI is also commonplace in the gaming, automotive, and medical industries. Popular
consoles, such as Xbox, use cameras and sensors to track player movements and gestures for
many of their interactive games. Automotive engineers are integrating GUI interfaces in their
driver information displays to change temperature and volume by making a touchless gesture in
front of the screen. Jaguar Land Rover, for example, teamed up with Cambridge University to
develop a gesture reading system in response to the COVID-19 pandemic:
"In the ‘new normal’ once lockdowns around the world are lifted, a greater emphasis will be
placed on safe, clean mobility where personal space and hygiene will carry premiums.”
In hospitals, gesture-based UI is being used to help doctors and surgeons view images and
records without having to step foot out of the operating room. These are just a few examples of
the many industries that are exploring all the benefits that this technology can provide for their customers.
USE OF TOUCHLESS GESTURES:
Consumers’ attitudes toward public touchscreens have quickly changed since the COVID-19
pandemic started. Now with social distancing and hygiene being top of mind, there is further
need for “touchless” devices as more consumers now fear having to physically touch screens in
public areas. In fact, experts have already noticed a sudden decline in fingerprint technology
shipments worldwide due to hygiene concerns. The demand for touchless options to pay for
items at the grocery store, access money from a banking machine, or sign for packages, has
already started to fuel the future of touchless GUIs.
The evolution of GUI technologies has also improved UX design and development processes in
general. Previous UI interfaces quickly became a huge source of frustration for consumers. Low-
quality touchscreens on smart devices created poor response times and precision, faulty
fingerprint scanning, and even damage from wear and tear. Today, consumers want higher-
quality touchscreens with better graphics, faster response times, and touchless GUI designs that
are far more hygienic and convenient to use.
Advancements in displays, cameras, and sensors over the past few years have opened the
gateway for better experiences for consumers and developers alike. For example, the recent
update to the Google Fit app allows mobile device users to measure their heart rate and
respiratory rate to track day-to-day wellness, all using the device’s camera.

The future of embedded GUIs


Experts believe that we are only at the tip of the iceberg regarding the gesture capabilities of
embedded GUIs. As the technology evolves, we can expect to see improved responsiveness,
moving towards the ability to predict what users are going to do before they even do it. The
gaming industry is already developing GUI-enhanced software that utilizes gaze-tracking and
telepathic-based science to create better virtual reality experiences. The medical industry is
moving towards highly advanced GUI devices for high-risk environments that will reduce the
transmission of germs and bacteria and speed up necessary processes.
There’s no doubt that the demand for gesture-based GUIs is exploding as well as the technology
around it. Developers and design teams that are integrating these technologies now are already
starting to get a leg up on their competition. For any embedded GUI team, now is the time to
start putting touchless gestures into your development framework.

Android supports a range of touch gestures such as tap, double-tap, pinch, swipe, scroll, long
press, drag, and fling. Drag and fling may seem similar, but drag is the type of scrolling that
occurs when a user drags their finger across the touchscreen, while a fling gesture occurs when
the user drags and then lifts their finger quickly. A MotionEvent describes the state of a touch
event via an action code. A long list of action codes is supported by Android:

● ACTION_DOWN: A touch event has started.


● ACTION_MOVE: A change that has occurred during the touch event (between
ACTION_DOWN and ACTION_UP).
● ACTION_UP: The touch event has finished. This contains the final release location.
● ACTION_CANCEL: The gesture was canceled.

Note: You should perform the same action for an ACTION_CANCEL event as for an ACTION_UP event.
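As a brief illustration (a minimal sketch, not from the original text; it assumes a custom View subclass and the usual android.content.Context, android.view.MotionEvent, and android.view.View imports), these action codes are typically read inside onTouchEvent() with getAction():

public class TouchAwareView extends View {
    private float startX, startY;

    public TouchAwareView(Context context) {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                // A touch event has started; remember where it began.
                startX = event.getX();
                startY = event.getY();
                return true;
            case MotionEvent.ACTION_MOVE:
                // A change occurred between ACTION_DOWN and ACTION_UP.
                return true;
            case MotionEvent.ACTION_UP:
            case MotionEvent.ACTION_CANCEL:
                // Per the note above, handle ACTION_CANCEL the same way as ACTION_UP.
                return true;
        }
        return super.onTouchEvent(event);
    }
}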

Important Methods

● getAction(): extracts the action the user performed from the event parameter.

Important Interfaces
● GestureDetector.OnGestureListener : notifies users when a particular touch event has
occurred.
● GestureDetector.OnDoubleTapListener : notifies users when a double-tap event has
occurred.
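A minimal sketch of wiring up these listeners inside an Activity is shown below; GestureDetector.SimpleOnGestureListener implements both interfaces with empty defaults, so only the callbacks of interest need to be overridden (the field name and callback bodies here are illustrative):

private GestureDetector detector;

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);

    detector = new GestureDetector(this, new GestureDetector.SimpleOnGestureListener() {
        @Override
        public boolean onDown(MotionEvent e) {
            return true; // returning true here lets the rest of the gesture be processed
        }

        @Override
        public boolean onDoubleTap(MotionEvent e) {
            // A double-tap event has occurred.
            return true;
        }

        @Override
        public boolean onFling(MotionEvent e1, MotionEvent e2,
                               float velocityX, float velocityY) {
            // A fling has occurred; velocities are in pixels per second.
            return true;
        }
    });
}

@Override
public boolean onTouchEvent(MotionEvent event) {
    // Forward touch events from the Activity to the detector.
    return detector.onTouchEvent(event);
}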

Elements:
Knowing the essential factors in Mobile App Development will help increase your popularity
and following. You need a strategy that will improve your customer engagement and make the
mobile app development process a success. App development executed properly can help your
company or organization build a loyal user base at an affordable cost.
User-friendly Navigation

Most users tend to show less patience to an app that has poor navigation. It pays to have a simple
app that allows users to find and use what they need with ease. You should aim to provide a
clutter-free experience. Focus on delivering a simple and easy to use app.
You do not need a complex interface and navigation system to appear modern, and sometimes
simplicity is the ultimate sophistication. Most users do not have all the time to waste trying to
explore your app; there are enough puzzle games in the app store for them to play.
App Content
In Mobile App Development, you need text to label your buttons, provide guidelines and explain
specific terminologies. The content in your app should be explicit and precise. Stuffing keywords
for SEO optimization may distort the message you are trying to communicate. Apps with
updated content and information look new to all users, potential and repeat customers. Most
experts talk about delivering accurate and specific content.
Graphic Design
Apps with quality graphic art or design are captivating to the eyes of the users. When you do not
wish to use text, it is essential to know that a picture is worth a thousand words. Therefore, you
need to use high quality and intuitive images, animations, visuals and designs to keep your users
engaged.
User Experience Design (UX)
The interaction between your app and the users should create positive emotions and attitudes. It
is more about how users feel when they are using your app. You need to design your app with
the users in mind. You need to come up with intuitive and interactive designs, making it easy
for the user to know the right thing to do.
You need to start with why users need your app, then move to what they can do with the product
and lastly take care of how they feel after or as they are using it. You might need the help of
psychology and interaction design to understand more about user behavior. This information
should help you make decisions on the type of features to use on your interface.
User Interface Design (UI)

Work on your User Interface Design (UI) and app usability, which are the subsets of user
experience design. You will have to come up with decent and useful features.
Screen Resolution
When building your mobile application, you should also consider the screen size and resolution
of various mobile devices that operate on the platforms you wish to launch your app. In terms of
resolution, you have to achieve the right pixels per inch or standard screen resolution for your
app. More about this information is available on the app store technical guidelines.
Mobile device performance

● Mobile Application Size

Size matters especially when the user has limited storage capacity and slow internet. A well-
designed and straightforward app tends to consume less space. Size matters because it keeps the
mobile device’s CPU and RAM free, making your app easy and fast to load.

● Battery and Processor Usage

Some apps will overwork the processor leading to high battery usage. Gaming apps are known to
consume more juice from a mobile device. The development of your mobile app should be in
such a way that it does not consume too much energy.
Social Sharing Option
When listening to music, reading an article or playing a game, some users may want to share the
experience with others on social media. For this reason, an app should be able to integrate with
popular social networking sites for easy and fast sharing. With a built-in viral mechanism that
promotes social sharing, you let your customer market the app for you.
Security
The security of your app is the most crucial factor to consider above all other considerations.
You need to think about all the possible ways you can make your users safe. This factor takes
into account the type of technology you use to encrypt user information and data.

Compatibility
If your mobile app is to be available for all users, then you’ll have to consider the iOS or
Android versions available. As one upgrades from one version to the other, app updates should
also be available. Creating unique features for various platform versions makes it easy for the
user to see the change and keep them engaged.
Overall, your app has to be simple and load quickly. App users are always looking for
something new and attractive to try. The success of your project relies on the ten most important
elements of mobile app development.


6 Necessary Elements For Designing A Perfect Mobile App


User Interface
It is time to discover the factors that have the biggest influence on creating an amazing user
interface with the fundamental mobile app design elements below.

Color
Here is the first one: color. Color is one of the most important elements of mobile design. When
users open your app, what is the first thing they observe? That's right, the main color. The
Internet is filled with studies and articles about the psychology behind colors in marketing, and
all these principles apply also to mobile apps, so you have to think about color while
building your mobile interface and add it to your mobile app design elements. Let's take for
example the article published in Growth Tower about this topic. We mentioned it before, but it
is worth reminding you because it can be very useful for providing the right emotions for your
users. A simple color could change everything about your mobile app UI. Take a look at the
scheme below and think about your users, their location, and their characteristics. This way you
will find the right complex of colors to match your app's style.

Font
Next, with the same thoughts in mind, consider the way your content is displayed. In case you
have a funny app or game, then a hilarious font will enhance the comic effect. On the other hand,
if you own a serious app which presents real facts from the economic world, then make sure that
the typography chosen will reveal the gravity of the news presented. You should not forget the
strength you give to your content with the font while designing the app user interface. The font you
select for your app can build or ruin your customers' interest in an instant. At the same time, you
must remember the principles listed above and stick to the style selected for your app user
interface to avoid confusion. So, font is one of the most necessary mobile app design
elements.

Icons
Necessary elements of mobile design, of course, include icons. Those small images are more
important than you can imagine while designing mobile app ui. They create a great impact for
the overall perception of users about your app. There are various types of icons but we will
enumerate only the main ones:

● App Icons – for representing an app;


● Clarifying Icons – explain certain tasks;
● Interactive Icons – used mainly for navigation;
● Decorative Icons – created for a more attractive look;

No matter what kind of icons you choose, you have to keep in mind that they need to be clear
and to express exactly the type of action you are expecting from your customers. Avoid
similarities within your app user interface that can generate confusion or hesitation.

Illustrations
Don't forget to include illustrations in your must-have mobile app design elements list! It is needless
to say that all creatives added inside your app have to follow the highest quality standards.
Besides that, they need to be handled in a smart way in order to reflect the point you are trying to
prove with your text. Just like fonts, illustrations need to follow a specific theme and to be
carefully chosen to create the wanted impact. Remember that mobile devices come in
different sizes and with various screen resolutions, and you have to provide the best experience
for all of your users, so your app UI should be adaptable to different devices. At the same time, be
careful with the licenses and make sure that you have the rights to use the illustrations in your app user
interface.

Brand Design
With a clever user interface, you can attract customers interested in the features offered by your
creation. Never forget that your app represents your brand in the eyes of smartphone users. Your
mobile ui should represent your brand perfectly. Add your logo inside the app and make sure that
users are aware of the fact that your company provides high-quality services and every time they
see this small picture they will know that they can trust your products. It is about building a long-
lasting relationship between your business and your customers.

Navigation
Besides colors, fonts, images and other visual effects, you have to make sure that users don't get
lost inside your app. Customers should be able to find what they need in your app easily, and
this happens with good navigation in the mobile interface. At every moment they have to know
where they are and the next step required for the wanted activity. Use every instrument you have
for guiding them and for describing the necessary steps they need to take to achieve their
purpose. You have to find the right balance between interactivity and simplicity, but keep in mind
that a tangled interface does not benefit any type of app, not even puzzle games.

Layouts:
A layout defines the structure for a user interface in your app, such as in an
activity. All elements in the layout are built using a hierarchy of View and
ViewGroup objects. A View usually draws something the user can see and
interact with, whereas a ViewGroup is an invisible container that defines the
layout structure for View and other ViewGroup objects, as shown in figure 1.

Figure 1. Illustration of a view hierarchy, which defines a UI layout


The View objects are usually called "widgets" and can be one of many subclasses,
such as Button or TextView. The ViewGroup objects are usually called
"layouts" and can be one of many types that provide a different layout structure, such
as LinearLayout or ConstraintLayout.
You can declare a layout in two ways:

● Declare UI elements in XML. Android provides a straightforward XML


vocabulary that corresponds to the View classes and subclasses, such as
those for widgets and layouts.

You can also use Android Studio's Layout Editor to build your XML layout
using a drag-and-drop interface.

● Instantiate layout elements at runtime. Your app can create View and
ViewGroup objects (and manipulate their properties) programmatically.

Declaring your UI in XML allows you to separate the presentation of your app
from the code that controls its behavior. Using XML files also makes it easy to
provide different layouts for different screen sizes and orientations (discussed
further in Supporting Different Screen Sizes).
The Android framework gives you the flexibility to use either or both of these
methods to build your app's UI. For example, you can declare your app's default
layouts in XML, and then modify the layout at runtime.
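As a small illustration of the runtime approach (an assumed sketch for an Activity's onCreate(), using android.widget.LinearLayout, TextView, and Button), the same kind of layout can be built entirely in Java:

LinearLayout layout = new LinearLayout(this);
layout.setOrientation(LinearLayout.VERTICAL);

TextView text = new TextView(this);
text.setText("Hello, I am a TextView");
layout.addView(text);

Button button = new Button(this);
button.setText("Hello, I am a Button");
layout.addView(button);

// Use the programmatically built view hierarchy instead of an XML resource.
setContentView(layout);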
Tip: To debug your layout at runtime, use the Layout Inspector tool.
Write the XML
Using Android's XML vocabulary, you can quickly design UI layouts and the
screen elements they contain, in the same way you create web pages in HTML —
with a series of nested elements.
Each layout file must contain exactly one root element, which must be a View or
ViewGroup object. Once you've defined the root element, you can add additional
layout objects or widgets as child elements to gradually build a View hierarchy that
defines your layout. For example, here's an XML layout that uses a vertical
LinearLayout to hold a TextView and a Button:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical" >
    <TextView android:id="@+id/text"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello, I am a TextView" />
    <Button android:id="@+id/button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello, I am a Button" />
</LinearLayout>
After you've declared your layout in XML, save the file with the .xml extension,
in your Android project's res/layout/ directory, so it will properly compile.
More information about the syntax for a layout XML file is available in the Layout
Resources document.
Load the XML Resource
When you compile your app, each XML layout file is compiled into a View
resource. You should load the layout resource from your app code, in your
Activity.onCreate() callback implementation. Do so by calling
setContentView(), passing it the reference to your layout resource in the
form of: R.layout.layout_file_name. For example, if your XML layout is
saved as main_layout.xml, you would load it for your Activity like so:

public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main_layout);
}
The onCreate() callback method in your Activity is called by the Android
framework when your Activity is launched (see the discussion about lifecycles, in
the Activities document).
Attributes
Every View and ViewGroup object supports their own variety of XML attributes.
Some attributes are specific to a View object (for example, TextView supports the
textSize attribute), but these attributes are also inherited by any View objects
that may extend this class. Some are common to all View objects, because they are
inherited from the root View class (like the id attribute). And, other attributes are
considered "layout parameters," which are attributes that describe certain layout
orientations of the View object, as defined by that object's parent ViewGroup
object.
ID
Any View object may have an integer ID associated with it, to uniquely identify
the View within the tree. When the app is compiled, this ID is referenced as an
integer, but the ID is typically assigned in the layout XML file as a string, in the
id attribute. This is an XML attribute common to all View objects (defined by the
View class) and you will use it very often. The syntax for an ID, inside an XML
tag is:
android:id="@+id/my_button"
The at-symbol (@) at the beginning of the string indicates that the XML parser
should parse and expand the rest of the ID string and identify it as an ID resource.
The plus-symbol (+) means that this is a new resource name that must be created
and added to our resources (in the R.java file). There are a number of other ID
resources that are offered by the Android framework. When referencing an
Android resource ID, you do not need the plus-symbol, but must add the
android package namespace, like so:
android:id="@android:id/empty"
With the android package namespace in place, we're now referencing an ID
from the android.R resources class, rather than the local resources class.
In order to create views and reference them from the app, a common pattern is to:

1. Define a view/widget in the layout file and assign it a unique ID:

<Button android:id="@+id/my_button"
android:layout_width="wrap_content"
android:layout_height="wrap_content"
android:text="@string/my_button_text"/>
2. Then create an instance of the view object and capture it from the layout
(typically in the onCreate() method):

Button myButton = (Button) findViewById(R.id.my_button);
Defining IDs for view objects is important when creating a RelativeLayout.
In a relative layout, sibling views can define their layout relative to another sibling
view, which is referenced by the unique ID.
An ID need not be unique throughout the entire tree, but it should be unique within
the part of the tree you are searching (which may often be the entire tree, so it's
best to be completely unique when possible).
Note: With Android Studio 3.6 and higher, the view binding feature can replace
findViewById() calls and provides compile-time type safety for code that
interacts with views. Consider using view binding instead of findViewById().
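A rough sketch of that alternative, assuming view binding is enabled in the module's build.gradle and the layout file is named activity_main.xml (the ActivityMainBinding class is generated from that file name):

ActivityMainBinding binding = ActivityMainBinding.inflate(getLayoutInflater());
setContentView(binding.getRoot());
// Views that carry an android:id become typed fields, e.g. binding.myButton,
// removing the need for findViewById() casts.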
Layout Parameters
XML layout attributes named layout_something define layout parameters for
the View that are appropriate for the ViewGroup in which it resides.
Every ViewGroup class implements a nested class that extends
ViewGroup.LayoutParams. This subclass contains property types that define
the size and position for each child view, as appropriate for the view group. As you
can see in figure 2, the parent view group defines layout parameters for each child
view (including the child view group).

Figure 2. Visualization of a view hierarchy with layout parameters associated with


each view
Note that every LayoutParams subclass has its own syntax for setting values. Each
child element must define LayoutParams that are appropriate for its parent, though
it may also define different LayoutParams for its own children.
All view groups include a width and height (layout_width and
layout_height), and each view is required to define them. Many
LayoutParams also include optional margins and borders.
You can specify width and height with exact measurements, though you probably
won't want to do this often. More often, you will use one of these constants to set
the width or height:

● wrap_content tells your view to size itself to the dimensions required by its
content.
● match_parent tells your view to become as big as its parent view group will
allow.

In general, specifying a layout width and height using absolute units such as pixels
is not recommended. Instead, using relative measurements such as density-
independent pixel units (dp), wrap_content, or match_parent, is a better approach,
because it helps ensure that your app will display properly across a variety of
device screen sizes. The accepted measurement types are defined in the Available
Resources document.
Layout Position
The geometry of a view is that of a rectangle. A view has a location, expressed as a
pair of left and top coordinates, and two dimensions, expressed as a width and a
height. The unit for location and dimensions is the pixel.
It is possible to retrieve the location of a view by invoking the methods
getLeft() and getTop(). The former returns the left, or X, coordinate of the
rectangle representing the view. The latter returns the top, or Y, coordinate of the
rectangle representing the view. These methods both return the location of the view
relative to its parent. For instance, when getLeft() returns 20, that means the
view is located 20 pixels to the right of the left edge of its direct parent.
In addition, several convenience methods are offered to avoid unnecessary
computations, namely getRight() and getBottom(). These methods return
the coordinates of the right and bottom edges of the rectangle representing the
view. For instance, calling getRight() is similar to the following computation:
getLeft() + getWidth().
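The following sketch (the view and id names are illustrative) reads these coordinates; it uses post() because the values are only meaningful after the layout pass has completed:

final Button myButton = (Button) findViewById(R.id.my_button);
myButton.post(new Runnable() {
    @Override
    public void run() {
        int left = myButton.getLeft();     // X coordinate relative to the parent, in pixels
        int top = myButton.getTop();       // Y coordinate relative to the parent, in pixels
        int right = myButton.getRight();   // equivalent to getLeft() + getWidth()
        int bottom = myButton.getBottom(); // equivalent to getTop() + getHeight()
    }
});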
Size, Padding and Margins
The size of a view is expressed with a width and a height. A view actually
possesses two pairs of width and height values.
The first pair is known as measured width and measured height. These dimensions
define how big a view wants to be within its parent. The measured dimensions can
be obtained by calling getMeasuredWidth() and
getMeasuredHeight().
The second pair is simply known as width and height, or sometimes drawing width
and drawing height. These dimensions define the actual size of the view on screen,
at drawing time and after layout. These values may, but do not have to, be different
from the measured width and height. The width and height can be obtained by
calling getWidth() and getHeight().
To measure its dimensions, a view takes into account its padding. The padding is
expressed in pixels for the left, top, right and bottom parts of the view. Padding can
be used to offset the content of the view by a specific number of pixels. For
instance, a left padding of 2 will push the view's content by 2 pixels to the right of
the left edge. Padding can be set using the setPadding(int, int, int,
int) method and queried by calling getPaddingLeft(),
getPaddingTop(), getPaddingRight() and getPaddingBottom().
Even though a view can define a padding, it does not provide any support for
margins. However, view groups provide such a support. Refer to ViewGroup and
ViewGroup.MarginLayoutParams for further information.
For more information about dimensions, see Dimension Values.
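A short sketch of the distinction, assuming child is a View that has already been added to a LinearLayout: padding is set on the view itself, while margins go through the parent's LayoutParams.

child.setPadding(16, 16, 16, 16);  // left, top, right, bottom padding, in pixels

LinearLayout.LayoutParams params =
        (LinearLayout.LayoutParams) child.getLayoutParams();
params.setMargins(8, 8, 8, 8);     // margins come from ViewGroup.MarginLayoutParams
child.setLayoutParams(params);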
Common Layouts
Each subclass of the ViewGroup class provides a unique way to display the
views you nest within it. Below are some of the more common layout types that
are built into the Android platform.
Note: Although you can nest one or more layouts within another layout to achieve
your UI design, you should strive to keep your layout hierarchy as shallow as
possible. Your layout draws faster if it has fewer nested layouts (a wide view
hierarchy is better than a deep view hierarchy).
Linear Layout
A layout that organizes its children into a single horizontal or vertical row. It
creates a scrollbar if the length of the window exceeds the length of the screen.
Relative Layout

Enables you to specify the location of child objects relative to each other (child A
to the left of child B) or to the parent (aligned to the top of the parent).
Web View

Displays web pages.


Building Layouts with an Adapter
When the content for your layout is dynamic or not pre-determined, you can use a
layout that subclasses AdapterView to populate the layout with views at
runtime. A subclass of the AdapterView class uses an Adapter to bind data to
its layout. The Adapter behaves as a middleman between the data source and the
AdapterView layout—the Adapter retrieves the data (from a source such as
an array or a database query) and converts each entry into a view that can be added
into the AdapterView layout.
Common layouts backed by an adapter include:
List View

Displays a scrolling single column list.


Grid View

Displays a scrolling grid of columns and rows.


Filling an adapter view with data
You can populate an AdapterView such as ListView or GridView by
binding the AdapterView instance to an Adapter, which retrieves data from
an external source and creates a View that represents each data entry.
Android provides several subclasses of Adapter that are useful for retrieving
different kinds of data and building views for an AdapterView. The two most
common adapters are:
ArrayAdapter
Use this adapter when your data source is an array. By default,
ArrayAdapter creates a view for each array item by calling
toString() on each item and placing the contents in a TextView.

For example, if you have an array of strings you want to display in a


ListView, initialize a new ArrayAdapter using a constructor to
specify the layout for each string and the string array:
ArrayAdapter<String> adapter = new ArrayAdapter<String>(this,
        android.R.layout.simple_list_item_1, myStringArray);
The arguments for this constructor are:

● Your app Context


● The layout that contains a TextView for each string in the array
● The string array

Then simply call setAdapter() on your ListView:


ListView listView = (ListView) findViewById(R.id.listview);
listView.setAdapter(adapter);
To customize the appearance of each item you can override the
toString() method for the objects in your array. Or, to create a view for
each item that's something other than a TextView (for example, if you
want an ImageView for each array item), extend the ArrayAdapter
class and override getView() to return the type of view you want for each
item.
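For instance, a hedged sketch of such a getView() override; the layout and id names (R.layout.image_row, R.id.row_image) are illustrative and not part of the original example:

ArrayAdapter<String> adapter = new ArrayAdapter<String>(this,
        R.layout.image_row, myStringArray) {
    @Override
    public View getView(int position, View convertView, ViewGroup parent) {
        View row = convertView;
        if (row == null) {
            // Inflate the illustrative row layout containing an ImageView.
            row = LayoutInflater.from(getContext())
                    .inflate(R.layout.image_row, parent, false);
        }
        ImageView image = (ImageView) row.findViewById(R.id.row_image);
        // Show an image for this position instead of the default toString() text.
        image.setImageResource(android.R.drawable.star_on);
        return row;
    }
};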
SimpleCursorAdapter

Use this adapter when your data comes from a Cursor. When using
SimpleCursorAdapter, you must specify a layout to use for each row
in the Cursor and which columns in the Cursor should be inserted into
which views of the layout. For example, if you want to create a list of
people's names and phone numbers, you can perform a query that returns
a Cursor containing a row for each person and columns for the names
and numbers. You then create a string array specifying which columns from
the Cursor you want in the layout for each result and an integer array
specifying the corresponding views that each column should be placed:

String[] fromColumns = {ContactsContract.Data.DISPLAY_NAME,
        ContactsContract.CommonDataKinds.Phone.NUMBER};
int[] toViews = {R.id.display_name, R.id.phone_number};
When you instantiate the SimpleCursorAdapter, pass the layout to use for
each result, the Cursor containing the results, and these two arrays:
SimpleCursorAdapter adapter = new SimpleCursorAdapter(this,
        R.layout.person_name_and_number, cursor, fromColumns, toViews, 0);
ListView listView = getListView();
listView.setAdapter(adapter);
The SimpleCursorAdapter then creates a view for each row in the
Cursor using the provided layout by inserting each fromColumns item
into the corresponding toViews view.

If, during the course of your app's life, you change the underlying data that is read
by your adapter, you should call notifyDataSetChanged(). This will notify
the attached view that the data has been changed and it should refresh itself.
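
Assuming the adapter was created over a mutable List<String> (the list name and the new
entry below are placeholders for this sketch), the usual pattern is:

// Change the data the adapter reads from, then ask attached views to redraw.
myStringList.add("New entry");
adapter.notifyDataSetChanged();
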
Handling click events
You can respond to click events on each item in an AdapterView by
implementing the AdapterView.OnItemClickListener interface. For
example:
// Create a message handling object as an anonymous class.
private OnItemClickListener messageClickedHandler = new OnItemClickListener() {
    public void onItemClick(AdapterView parent, View v, int position, long id) {
        // Do something in response to the click
    }
};

listView.setOnItemClickListener(messageClickedHandler);
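
Inside onItemClick() you typically look up the data item behind the clicked row. A small
variation of the handler above (the Toast feedback is purely illustrative):

private OnItemClickListener messageClickedHandler = new OnItemClickListener() {
    public void onItemClick(AdapterView<?> parent, View v, int position, long id) {
        // Recover the data item that the adapter placed at this row.
        Object item = parent.getItemAtPosition(position);
        // Illustrative feedback only: show the clicked item in a Toast.
        Toast.makeText(v.getContext(), "Clicked: " + item, Toast.LENGTH_SHORT).show();
    }
};
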

Android Screen UI Components


An Activity presents the application's UI, which contains widgets such as buttons and
labels and is defined in an XML file, for example:

<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent">

    <TextView
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:text="@string/hello" />

</LinearLayout>

During execution, the XML UI is loaded in the onCreate() event handler of the
Activity class, using the Activity class's setContentView() method:

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);
}

Every XML element is compiled into its corresponding Android GUI class.

Layout
A layout is a type of resource that defines what is drawn on the screen and how
elements are placed on the device's screen; layouts are stored as XML files in
the application's /res/layout resource directory. A layout can also be a type of
View class used to organize other controls.

Using Eclipse for Layout

The Android Development Plug-in for Eclipse includes a layout resource designer
for designing and previewing layout resources. It has two tab views: the Layout
view, which previews how the controls will appear on various screens, and the
XML view, which shows the XML definition.

The layout resource designer preview cannot replicate exactly how the layout
will appear to end users; hence, testing on a properly configured emulator is
still needed.

Defining an XML for Layout

Defining the layout in XML is a simple method for the UI design process: user
interface controls and their attributes are declared in the layout file.
Developers can easily use complex controls such as ListView or GridView and
manipulate the content of a screen programmatically. As an example:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:gravity="center">

    <TextView
        android:layout_width="fill_parent"
        android:id="@+id/PhotoLabel"
        android:layout_height="wrap_content"
        android:text="@string/my_text_label"
        android:gravity="center_horizontal"
        android:textSize="20dp" />

    <ImageView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:src="@drawable/matterhorn"
        android:adjustViewBounds="true"
        android:scaleType="fitXY"
        android:maxHeight="250dp"
        android:maxWidth="250dp"
        android:id="@+id/Photo" />

</LinearLayout>
The layout defines a simple screen with two controls: some text, and an image
below it, both arranged in a vertically oriented LinearLayout.
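
Because both controls declare ids (@+id/PhotoLabel and @+id/Photo), they can be retrieved
and changed at runtime. A minimal sketch inside the Activity, assuming the layout above is
saved as res/layout/photo_screen.xml (the file name is invented for this example):

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.photo_screen);   // hypothetical name for the layout file above

    // Each XML element with an id is accessible as its corresponding View class.
    TextView label = (TextView) findViewById(R.id.PhotoLabel);
    ImageView photo = (ImageView) findViewById(R.id.Photo);

    label.setText("Matterhorn");             // illustrative change to the label text
    photo.setImageResource(R.drawable.matterhorn);
}
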

VOICE XML:

VoiceXML is an Extensible Markup Language (XML) standard for storing and
processing digitized voice, recognizing input, and defining voice interaction
between humans and machines. VoiceXML uses voice as input to a machine for the
desired processing, thereby facilitating voice application development.

VoiceXML (VXML) is a digital document standard for specifying interactive media
and voice dialogs between humans and computers. It is used for developing
audio and voice response applications, such as banking systems and automated
customer service portals.

The top-level element is <vxml>, which is mainly a container for dialogs. There are
two types of dialogs: forms and menus. Forms present information and gather
input; menus offer choices of what to do next. The Hello World example shown
below has a single form, which contains a block that synthesizes and presents
"Hello World!" to the user.

VoiceXML is the HTML of the voice web, the open standard markup language for
voice applications. Where HTML assumes a graphical web browser with display,
keyboard, and mouse, VoiceXML assumes a voice browser with audio output
(recorded messages and text-to-speech, TTS, synthesis), audio input (automatic
speech recognition, ASR), and keypad input (DTMF).

The VoiceXML gateway, or voice portal, serves as the heart of any VoiceXML-
enabled network.

In a typical IDE, a new VoiceXML document is created as a file with the .vxml
extension: select the parent folder where the new file should be placed;
optionally, the Advanced section of the wizard can create a linked file, so that
the new file references a file in the file system.

VoiceXML applications are commonly used in many industries and segments of
commerce. These applications include order inquiry, package tracking, driving
directions, emergency notification, wake-up, flight tracking, voice access to email,
customer relationship management, prescription refilling, audio news magazines,
voice dialing, real-estate information and national directory assistance applications.

VoiceXML has tags that instruct the voice browser to provide speech synthesis,
automatic speech recognition, dialog management, and audio playback. The
following is an example of a VoiceXML document:

<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <block>
      <prompt>
        Hello world!
      </prompt>
    </block>
  </form>
</vxml>

When interpreted by a VoiceXML interpreter, this document outputs "Hello world!"
in synthesized speech.
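
The other dialog type, a menu, offers the caller a choice of where to go next. A sketch of
a menu dialog (the target document names sports.vxml, weather.vxml, and news.vxml are
placeholders for this example):

<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <menu>
    <prompt>Say sports, weather, or news, or press 1, 2, or 3.</prompt>
    <!-- Each choice can be matched by speech or by a DTMF key. -->
    <choice dtmf="1" next="sports.vxml">sports</choice>
    <choice dtmf="2" next="weather.vxml">weather</choice>
    <choice dtmf="3" next="news.vxml">news</choice>
    <noinput>Please say sports, weather, or news.</noinput>
  </menu>
</vxml>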

Typically, HTTP is used as the transport protocol for fetching VoiceXML pages.
Some applications may use static VoiceXML pages, while others rely on dynamic
VoiceXML page generation using an application server like Tomcat, Weblogic,
IIS, or WebSphere.
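
As a rough illustration of dynamic generation (the servlet name and its URL mapping are
invented for this sketch), a Java servlet on such a server could emit a VoiceXML page
directly:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet that builds a VoiceXML document on each request.
public class GreetingVxmlServlet extends HttpServlet {
    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        resp.setContentType("application/voicexml+xml");
        PrintWriter out = resp.getWriter();
        out.println("<?xml version=\"1.0\" encoding=\"UTF-8\"?>");
        out.println("<vxml version=\"2.0\" xmlns=\"http://www.w3.org/2001/vxml\">");
        out.println("  <form>");
        out.println("    <block><prompt>Hello from a dynamically generated page.</prompt></block>");
        out.println("  </form>");
        out.println("</vxml>");
    }
}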

Historically, VoiceXML platform vendors have implemented the standard in
different ways, and added proprietary features. But the VoiceXML 2.0 standard,
adopted as a W3C Recommendation on 16 March 2004, clarified most areas of
difference. The VoiceXML Forum, an industry group promoting the use of the
standard, provides a conformance testing process that certifies vendors'
implementations as conformant.

ADVANTAGES:

While you could certainly build voice applications without using a voice markup
language and a speech browser (for example, by writing your applications directly
to a speech API), using VoiceXML and a VoiceXML browser provides several
important capabilities:

● VoiceXML is a markup language that makes building voice applications
easier, in the same way that HTML simplifies building visual applications.
VoiceXML also reduces the amount of speech expertise that developers
need.
● VoiceXML applications can use the same existing back-end business logic as
their visual counterparts, enabling voice solutions to be introduced to new
markets quickly. Current and long-term development and maintenance
costs are minimized by leveraging the Web design skills and infrastructures
already present in the enterprise. Customers can benefit from a consistency
of experience between voice and visual applications.
● VoiceXML implements a client/server paradigm, where a Web server
provides VoiceXML documents that contain dialogs to be interpreted and
presented to a user. The user's responses are submitted to the Web server,
which responds by providing additional VoiceXML documents, as
appropriate. VoiceXML allows you to request documents and submit data
to server scripts using Universal Resource Identifiers (URIs). VoiceXML
documents can be static, or they can be dynamically generated by CGI
scripts, Java Beans, ASPs, JSPs, Java servlets, or other server-side logic.
● Unlike a proprietary Interactive Voice Response (IVR) system, VoiceXML
provides an open application development environment that generates
portable applications. This makes VoiceXML a cost-effective alternative for
providing voice access services.
● Most installed IVR systems today accept input from the telephone keypad
only. In contrast, VoiceXML is designed predominantly to accept spoken
input, but it can also accept DTMF input, if desired. As a result, VoiceXML
helps speed up customer interactions by providing a more natural interface
that replaces the traditional, hierarchical IVR menu tree with a streamlined
dialog using a flattened command structure.
● VoiceXML directly supports networked and Web-based applications,
meaning that a user at one location can access information or an
application provided by a server at another geographically or
organizationally distant location. This capitalizes on the connectivity and
commerce potential of the World Wide Web.
● Using a single VoiceXML browser to interpret streams of markup language
originating from multiple locations provides the user with a seamless
conversational experience across independent applications. For example, a
voice portal application might allow a user to temporarily suspend an
airline purchase transaction to interact with a banking application on a
different server to check an account balance.
● VoiceXML supports local processing and validation of user input.
● VoiceXML supports playback of prerecorded audio files.
● VoiceXML supports recording of user input. The resulting audio can be
played back locally or uploaded to the server for storage, processing, or
playback at a later time.
● VoiceXML defines a set of events corresponding to such activities as a user
request for help, the failure of a user to respond within a timeout period,
and an unrecognized user response. A VoiceXML application can provide
catch elements that respond appropriately to a given event for a particular
context.
● VoiceXML supports context-specific and tapered help using a system of
events and catch elements. Help can be tapered by specifying a count for
each event handler, so that different event handlers are executed
depending on the number of times that the event has occurred in the
specified context. This can be used to provide increasingly more detailed
messages each time the user asks for help (see the sketch after this list).
● VoiceXML supports subdialogs, which are roughly the equivalent of
function or method calls. Subdialogs can be used to provide a
disambiguation or confirmation dialog, and to create reusable dialog
components.
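
The sketch referred to in the tapered-help point above: a field whose help prompts become
more detailed on repeated requests, together with a nomatch catch element (the form name,
field, and prompt wording are illustrative):

<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form id="transfer">
    <field name="amount" type="currency">
      <prompt>How much would you like to transfer?</prompt>

      <!-- Tapered help: the handler chosen depends on how many times help has been requested. -->
      <help count="1">Say the amount you want to transfer.</help>
      <help count="2">Say an amount in dollars and cents, for example twenty five dollars.</help>

      <!-- Catch element for input the recognizer could not match. -->
      <nomatch>Sorry, I did not understand. <reprompt/></nomatch>
    </field>
  </form>
</vxml>

A subdialog is invoked in much the same setting, typically via a <subdialog> element whose
src attribute points at the URI of the reusable dialog.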
