Unit 2-2
GENERIC UI DEVELOPMENT:
The Generic User Interface (Generic UI, GUI) framework allows you to create UI screens using
Java and XML. XML is optional but it provides a declarative approach to the screen layout and
reduces the amount of code which is required for building the user interface.
● Controllers – Java classes that handle events generated by the screen and its UI controls, and that manipulate screen components programmatically.
● VCL Interfaces – the code of application screens interacts with visual component interfaces (VCL Interfaces), which are implemented using the Vaadin framework components.
● Visual Components Library (VCL) – contains a large set of ready-to-use components.
● Data components – provide a unified interface for binding visual components to entities and for working with entities in screen controllers.
● Infrastructure – includes the main application window and other common client mechanisms.
There is no single solution to all problems, or even to all problems of the same type, as is the case with mobile application development. Assess the needs of the application's users, the available budget, and all other considerations before choosing an architectural solution to implement. It is quite possible that your system may not require a generic user interface: unusual performance requirements, the static nature of the application, or a required implementation on a restricted set of devices may all argue against one.
MULTIMODAL UI
Multimodal interfaces support user input and processing of two or more modalities, such as speech, pen, touch and multi-touch, gestures, gaze, and virtual keyboard. These input modalities may coexist on an interface, but be used either simultaneously or alternately. The input may involve recognition-based technologies (e.g., speech, gesture), simpler discrete input (e.g., keyboard, touch), or sensor-based information. Some of these modalities are capable of expressing semantically rich information and creating new content (e.g., speech, writing, keyboard), while others are limited to making discrete selections and controlling the system display (e.g., touching a URL to open it, pinching to shrink a visual display). As will be discussed in this chapter, numerous types of multimodal interfaces with different characteristics have proliferated during the past decade. This general trend toward multimodal interfaces aims to support more natural, flexible, and expressively powerful user input to computers, compared with past keyboard-and-mouse interfaces that are limited to discrete input.
TREE STRUCTURE OF MULTIMODAL UI
Multimodal Interfaces
support input and processing of two or more modalities, such as speech, pen, touch and multi-touch, gestures, gaze, and virtual keyboard, which may be used simultaneously or alternately. User input modes can involve recognition-based technologies (e.g., speech) or discrete input (e.g., keyboard, touch). Some modes may express semantically rich information (e.g., pen, speech, keyboard), while others are limited to simple selection and manipulation actions that control the system display (e.g., gestures, touch, sensors).
Multimodal-Multisensor Interfaces
combine one or more user input modalities with sensor information that involves passive input from contextual cues (e.g., location, acceleration, proximity, tilt) that a user does not need to consciously engage. They aim to incorporate passively-tracked sensor input to transparently facilitate user interaction, which may be combined with an active input mode (e.g., speech) or a passive one (e.g., facial expression). The type and number of sensors incorporated into multimodal interfaces have been expanding rapidly on cell phones, in cars, robots, and other applications, resulting in explosive growth of multimodal-multisensor interfaces.
Visemes
refer to the classification of visible lip movements that correspond with audible phonemes
during continuous articulated speech. Many audio-visual speech recognition systems co-
process these two sources of information multimodally to enhance the robustness of recognition.
Multichannel User Interfaces
Applicability parameters allow a model to use multiple UIs, each targeted to a different channel of use.
Applicability helps you present the UI that's most appropriate to the context.
• You may need to configure the same model in multiple host applications, each having different UI requirements.
•Host application B is used by internal sales fulfillment staff who are very familiar with your
product line. You might need to present a full-featured UI for Product X that exposes every
option, in a layout that enables users to reach those options most efficiently.
• You may need to present the same product to the same type of audience, but in different countries. Consequently, you need to present the UI in multiple languages.
To provide for such multiple requirements, you can set the applicability parameters for a UI.
1. Edit your configurator model and navigate to the Overview subtab of the User Interfaces tab.
• Applications sets the applications that the UI will be used for. For example, if you select
Order Management, then the UI will be presented when Configurator is invoked by Oracle
Fusion Order Management.
• Languages sets the languages that the UI will be used for. For example, if you select Korean and American English, then the UI will be presented when Configurator is invoked by applications using one of those languages.
3. The default setting for each parameter is All, meaning that the UI is available at run time to all
channels.
By default, the currently selected parameter is None. If you leave the setting as None, then the UI will not be available at run time to any of that parameter's options. If no UIs are available, then the default UI is used.
5. Click the Select button. The selection dialog box for the parameter presents a list of available
options, from which you select one or more to determine the applicability of the UI.
6. If more than one UI has the same applicability parameter settings, then the sequence of UIs in the table on the User Interfaces tab determines which UI will be used at run time. To change the sequence in the table of UIs, select a UI, then select one of the Move commands on the Actions menu.
Gesture-based UI refers to using specific physical gestures in order to operate an interface. Take
your smartphone for instance. You can already interact with your phone without using the
keypad by swiping, tapping, pinching, and scrolling. The latest smart devices also allow for
“touchless” gestures where users can scroll or click without ever touching the screen.
With some embedded GUIs, you can simply tilt or shake the device to engage with it using built-
in accelerometers, gyroscopes, or magnetic sensors. Some products on the market today also
support advanced camera and sensor technology that pick up facial expressions and eye
movements to scroll, click, and interact.
Gestures affect how we interact with interfaces, including phones, laptops and iPads. But we
don’t have to look far to find a gestural interface beyond our work and entertainment devices.
It’s no longer uncommon to use gestures when interacting with car screens or bathroom sinks.
Natural User Interfaces (NUIs) are so natural to users that the interface feels, and sometimes is,
invisible, like a touch screen interface. Some NUIs even use gesture control, allowing users to
interact with the interface without direct physical contact. BMW recently released a gesture
control feature that gives users touchless control over car volume, calls and more.
Gestures are growing more common in user interface design and play increasingly complex roles
in our everyday lives.
As technology advances, UX and UI designers and businesses will need to adapt. You don’t have
to know all the technological intricacies or have an in-depth knowledge of computer intelligence.
Still, you should have a basic understanding of the capabilities, functions and best design
practices for gesture technology.
1. Cleaner Interfaces
Humans consume more content than ever before, businesses use more data and technology
continues to provide more services. With this increase in content, it’s easy for interfaces and
displays to appear cluttered. Designers can use gestures to reduce the number of visual elements,
like buttons, that take up space.
2. Ease of Use
As discussed above, interactions become more natural with a gesture-based interface. The ease
of simple hand gestures allows us to use technology with minimal effort at maximum speed.
The following are some of the most common gestures across interfaces, which all (or almost all) users are familiar with, even if not consciously. We mention screens, but you can substitute a touchpad or any other gesture interface for the screen.
Tap
A tap gesture is when you tap on the screen with one finger to open or select something, like an
app or page. Here’s a tip: Design clickable interface elements so that the entire box or row is
clickable – not just the text. Giving users more space increases usability.
Double-Tap
Double-tapping is when you tap the screen twice in a row in close succession. Many applications
use this gesture to zoom in, but on Instagram, users can double-tap a photo to like it.
Swipe
Swiping involves moving your finger across the screen in one direction, touching down on one
side and lifting your finger on the other. Swipe gestures are often used for scrolling or switching
between pages. Tinder uses swiping right to match with a profile and swiping left to pass over
one.
Multiple-Finger Swipe
You can also conduct a swipe gesture with two or three fingers. This is a common feature on
laptop touchpads that use two- and three-finger swipes for different actions.
Drag
Dragging uses the same general motion as a swipe, only you move your finger slower and don’t
lift it until you’ve pulled the object to where you want it to be. You use dragging to move an
item to a new location, like when re-organizing your phone apps.
Fling
Like swiping, a fling gesture is when you move your finger across the screen at a high speed.
Unlike a drag, your finger doesn’t remain in contact with an element. Flings are often used to
remove something from view.
Long Press
A long press is when you tap the screen but hold your finger down for longer than usual. Long
presses open up menu options, like when you hold text to copy it or hold down an app to delete
it.
Pinch
One of many two-finger gestures, a pinch is when you hold two fingers apart on the screen and
then drag them towards each other in a pinching motion. Pinch gestures are often used to zoom
back out after zooming in. Sometimes they present a view of all your open screens for navigation
purposes.
Pinch-Open or Spread
A pinch-open or spread gesture is the opposite of a pinch. You hold your two fingers down close
together and then spread them apart. Spreading, like double-tapping, is generally used to zoom
in.
Rotation
To do a rotation, press on the screen with two fingers and rotate them in a circular motion. The
best example of rotation is when you turn the map on Google Maps to see what’s around you.
When designing gesture-based user interfaces, it’s good practice to stick with what users know.
You can get creative if it’s called for, but a level of consistency among gestures and interfaces
helps keep them intuitive to users, increasing the usability of your product.
If you think a new gesture is called for, you need to test it extensively before implementing it.
You’ll conduct a series of user research methods to test the usability, effectiveness, learning
curves and user satisfaction with a gesture before releasing it to the public.
You have the option to reuse a well-known gesture for a different purpose, but again, you should
test the effectiveness of this strategy in advance. The benefit here is that users are at least
familiar with the motion.
Take, for example, Instagram’s use of the double-tap to like or “heart” a post. A double-tap is
usually used to zoom in, but it works well for Instagram’s purpose. It’s also a great study in
efficiency: Tapping the heart below a post requires one less tap but more aim. The alternative
double-tap method allows users to scroll faster since they have the whole image to aim for, and
it’s intuitive to tap the object you’re liking.
Designers have begun to develop a design language with hands, circles and arrows for
communicating gesture intent to product developers and strategists. This language is near
universal with minimal deviation.
Android supports a range of touch gestures such as tap, double-tap, pinch, swipe, scroll, long
press, drag, and fling. Drag and fling may seem similar but drag is the type of scrolling that
occurs when a user drags their finger across the touchscreen, while a fling gesture occurs when
the user drags and then lifts their finger quickly. A MotionEvent describes the state of a touch event via an action code; Android supports a long list of action codes, such as ACTION_DOWN, ACTION_MOVE, ACTION_UP, and ACTION_CANCEL.
Note: You should perform the same action during the ACTION_CANCEL and ACTION_UP events.
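As a brief illustration, here is a minimal sketch (the class name TouchView is illustrative, not from the original text) of a custom view reacting to raw action codes in onTouchEvent():

import android.content.Context;
import android.view.MotionEvent;
import android.view.View;

public class TouchView extends View {
    public TouchView(Context context) {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        // getAction() extracts the action code from the event.
        switch (event.getAction()) {
            case MotionEvent.ACTION_DOWN:
                // A finger touched the screen.
                return true;
            case MotionEvent.ACTION_MOVE:
                // The finger is dragging across the screen.
                return true;
            case MotionEvent.ACTION_UP:
            case MotionEvent.ACTION_CANCEL:
                // Per the note above, handle CANCEL the same way as UP.
                return true;
            default:
                return super.onTouchEvent(event);
        }
    }
}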
Important Methods
● getAction(): extracts the action the user performed from the event parameter.
Important Interfaces
● GestureDetector.OnGestureListener: notifies users when a particular touch event has occurred.
● GestureDetector.OnDoubleTapListener: notifies users when a double-tap event has occurred.
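Putting these pieces together, the following minimal sketch (the class name and handler bodies are illustrative) uses a GestureDetector with a SimpleOnGestureListener, which supplies no-op defaults for both OnGestureListener and OnDoubleTapListener:

import android.app.Activity;
import android.os.Bundle;
import android.view.GestureDetector;
import android.view.MotionEvent;

public class MyGestureActivity extends Activity {
    private GestureDetector gestureDetector;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        gestureDetector = new GestureDetector(this,
                new GestureDetector.SimpleOnGestureListener() {
            @Override
            public boolean onDoubleTap(MotionEvent e) {
                // Called when a double-tap occurs.
                return true;
            }

            @Override
            public boolean onFling(MotionEvent e1, MotionEvent e2,
                                   float velocityX, float velocityY) {
                // Called when the user drags and lifts the finger quickly.
                return true;
            }
        });
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        // Forward every touch event to the detector.
        return gestureDetector.onTouchEvent(event) || super.onTouchEvent(event);
    }
}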
Elements:
Knowing the essential factors in Mobile App Development will help increase your popularity
and following. You need a strategy that will improve your customer engagement and make the
mobile app development process a success. App development executed properly can help your
company or organization build a loyal user base at an affordable cost.
User-friendly Navigation
Most users tend to show less patience to an app that has poor navigation. It pays to have a simple
app that allows users to find and use what they need with ease. You should aim to provide a
clutter-free experience. Focus on delivering a simple and easy to use app.
You do not need a complex interface and navigation system to appear modern, and sometimes
simplicity is the ultimate sophistication. Most users do not have all the time to waste trying to
explore your app; there are enough puzzle games in the app store for them to play.
App Content
In Mobile App Development, you need text to label your buttons, provide guidelines and explain
specific terminologies. The content in your app should be explicit and precise. Stuffing keywords
for SEO optimization may distort the message you are trying to communicate. Apps with
updated content and information look new to all users, potential and repeat customers. Most
experts talk about delivering accurate and specific content.
Graphic Design
Apps with quality graphic art or design are captivating to the eyes of the users. When you do not
wish to use text, it is essential to know that a picture is worth a thousand words. Therefore, you
need to use high quality and intuitive images, animations, visuals and designs to keep your users
engaged.
User Experience Design (UX)
The interaction between your app and the users should create positive emotions and attitudes. It
is more about how users feel when they are using your app. You need to design your app with
the users in mind. You need to come up with intuitive and interactive designs, making it easy for the user to know the right thing to do.
You need to start with why users need your app, then move to what they can do with the product
and lastly take care of how they feel after or as they are using it. You might need the help of
psychology and interaction design to understand more about user behavior. This information
should help you make decisions on the type of features to use on your interface.
User Interface Design (UI)
Work on your User Interface Design (UI) and app usability, which are the subsets of user
experience design. You will have to come up with decent and useful features.
Screen Resolution
When building your mobile application, you should also consider the screen size and resolution
of various mobile devices that operate on the platforms you wish to launch your app. In terms of
resolution, you have to achieve the right pixels per inch or standard screen resolution for your
app. More about this information is available on the app store technical guidelines.
Mobile device performance
Size matters, especially when the user has limited storage capacity and a slow internet connection. A well-designed and straightforward app tends to consume less space. A smaller app also keeps the mobile device's CPU and RAM free, making your app easy and fast to load.
Some apps will overwork the processor leading to high battery usage. Gaming apps are known to
consume more juice from a mobile device. The development of your mobile app should be in
such a way that it does not consume too much energy.
Social Sharing Option
When listening to music, reading an article or playing a game, some users may want to share the
experience with others on social media. For this reason, an app should be able to integrate with
popular social networking sites for easy and fast sharing. With a built-in viral mechanism that
promotes social sharing, you let your customer market the app for you.
Security
The security of your app is the most crucial factor to consider above all other considerations.
You need to think about all the possible ways you can keep your users safe. This factor takes
into account the type of technology you use to encrypt user information and data.
Compatibility
If your mobile app is to be available for all users, then you’ll have to consider the iOS or
Android versions available. As one upgrades from one version to the other, app updates should
also be available. Creating unique features for various platform versions makes it easy for the
user to see the change and keep them engaged.
Overall, your app has to be simple and load quickly. App users are always looking for
something new and attractive to try. The success of your project relies on the ten most important
elements of mobile app development.
Color
Here is the first one: color. Color is one of the most important elements of mobile design. When users open your app, what is the first thing they observe? That's right, the main color. The Internet is filled with studies and articles about the psychology behind colors in marketing, and all these principles also apply to mobile apps, so you have to think about color while building your mobile interface and add it to your mobile app design elements. Take, for example, the article published in Growth Tower about this topic. We mentioned it before, but it is worth reminding you because it can be very useful for providing the right emotions for your users. A simple color could change everything about your mobile app UI. Take a look at the scheme below and think about your users, their location, and their characteristics. This way you will find the right combination of colors to match your app's style.
Font
Next, with the same thoughts in mind, consider the way your content is displayed. If you have a funny app or game, then a hilarious font will enhance the comic effect. On the other hand, if you own a serious app which presents real facts from the economic world, then make sure that the typography chosen will reveal the gravity of the news presented. You should not forget the strength a font gives to your content while designing an app user interface. The font you select for your app can build or ruin your customers' interest in an instant. At the same time, you must remember the principles listed above and stick to the style selected for your app user interface to avoid confusion. So, font is one of the most necessary mobile app design elements.
Icons
Necessary elements of mobile design, of course, include icons. Those small images are more important than you can imagine while designing a mobile app UI. They have a great impact on users' overall perception of your app. There are various types of icons, but we will enumerate only the main ones:
No matter what kind of icons you choose, you have to keep in mind that they need to be clear and to express exactly the type of action you are expecting from your customers. Avoid similarities within your app user interface that can generate confusion or hesitation.
Illustrations
Don’t forget to include illustrations in your list of must-have mobile app design elements! It is needless to say that all creatives added inside your app have to follow the highest quality standards. Besides that, they need to be handled in a smart way in order to reflect the point you are trying to prove with your text. Just like fonts, illustrations need to follow a specific theme and be carefully chosen to create the wanted impact. Remember that mobile devices come in different sizes and with various screen resolutions, and you have to provide the best experience for all of your users, so your app UI should be adaptable to different devices. At the same time, be careful with the licenses and make sure that you have the rights to use the illustrations in your app user interface.
Brand Design
With a clever user interface, you can attract customers interested in the features offered by your
creation. Never forget that your app represents your brand in the eyes of smartphone users. Your
mobile UI should represent your brand perfectly. Add your logo inside the app and make sure that
users are aware of the fact that your company provides high-quality services and every time they
see this small picture they will know that they can trust your products. It is about building a long-
lasting relationship between your business and your customers.
Navigation
Besides colors, fonts, images, and other visual effects, you have to make sure that users don't get lost inside your app. Customers should be able to find what they need from your app easily, and this happens with good navigation in the mobile interface. At every moment they have to know where they are and the next step required for the wanted activity. Use every instrument you have for guiding them and for describing the necessary steps they need to take to achieve their purpose. You have to find the right balance between interactivity and simplicity, but keep in mind that a tangled interface doesn't benefit any type of app, not even a puzzle game.
Layouts:
A layout defines the structure for a user interface in your app, such as in an
activity. All elements in the layout are built using a hierarchy of View and
ViewGroup objects. A View usually draws something the user can see and interact with, whereas a ViewGroup is an invisible container that defines the layout structure for View and other ViewGroup objects, as shown in figure 1.
You can declare a layout in two ways:
● Declare UI elements in XML. You can write the layout by hand, or use Android Studio's Layout Editor to build your XML layout using a drag-and-drop interface.
● Instantiate layout elements at runtime. Your app can create View and ViewGroup objects (and manipulate their properties) programmatically.
Declaring your UI in XML allows you to separate the presentation of your app
from the code that controls its behavior. Using XML files also makes it easy to
provide different layouts for different screen sizes and orientations (discussed
further in Supporting Different Screen Sizes).
The Android framework gives you the flexibility to use either or both of these
methods to build your app's UI. For example, you can declare your app's default
layouts in XML, and then modify the layout at runtime.
Tip: To debug your layout at runtime, use the Layout Inspector tool.
Write the XML
Using Android's XML vocabulary, you can quickly design UI layouts and the
screen elements they contain, in the same way you create web pages in HTML —
with a series of nested elements.
Each layout file must contain exactly one root element, which must be a View or
ViewGroup object. Once you've defined the root element, you can add additional
layout objects or widgets as child elements to gradually build a View hierarchy that
defines your layout. For example, here's an XML layout that uses a vertical
LinearLayout to hold a TextView and a Button:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:orientation="vertical" >
    <TextView android:id="@+id/text"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello, I am a TextView" />
    <Button android:id="@+id/button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Hello, I am a Button" />
</LinearLayout>
After you've declared your layout in XML, save the file with the .xml extension,
in your Android project's res/layout/ directory, so it will properly compile.
More information about the syntax for a layout XML file is available in the Layout
Resources document.
Load the XML Resource
When you compile your app, each XML layout file is compiled into a View
resource. You should load the layout resource from your app code, in your
Activity.onCreate() callback implementation. Do so by calling
setContentView(), passing it the reference to your layout resource in the
form of: R.layout.layout_file_name. For example, if your XML layout is
saved as main_layout.xml, you would load it for your Activity like so:
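Following that description, a minimal sketch of the onCreate() implementation would be:

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    // Inflate the compiled layout resource res/layout/main_layout.xml.
    setContentView(R.layout.main_layout);
}

Individual views can carry a unique ID, declared with the android:id attribute, so they can be referenced from code. For example, here is a Button that declares an ID: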
<Button android:id="@+id/my_button"
    android:layout_width="wrap_content"
    android:layout_height="wrap_content"
    android:text="@string/my_button_text" />
Then create an instance of the view object and capture it from the layout
(typically in the onCreate() method):
Button myButton = (Button) findViewById(R.id.my_button);
Defining IDs for view objects is important when creating a RelativeLayout.
In a relative layout, sibling views can define their layout relative to another sibling
view, which is referenced by the unique ID.
An ID need not be unique throughout the entire tree, but it should be unique within
the part of the tree you are searching (which may often be the entire tree, so it's
best to be completely unique when possible).
Note: With Android Studio 3.6 and higher, the view binding feature can replace
findViewById() calls and provides compile-time type safety for code that
interacts with views. Consider using view binding instead of findViewById().
Layout Parameters
XML layout attributes named layout_something define layout parameters for
the View that are appropriate for the ViewGroup in which it resides.
Every ViewGroup class implements a nested class that extends
ViewGroup.LayoutParams. This subclass contains property types that define
the size and position for each child view, as appropriate for the view group. As you
can see in figure 2, the parent view group defines layout parameters for each child
view (including the child view group).
● wrap_content tells your view to size itself to the dimensions required by its
content.
● match_parent tells your view to become as big as its parent view group will
allow.
In general, specifying a layout width and height using absolute units such as pixels
is not recommended. Instead, using relative measurements such as density-
independent pixel units (dp), wrap_content, or match_parent, is a better approach,
because it helps ensure that your app will display properly across a variety of
device screen sizes. The accepted measurement types are defined in the Available
Resources document.
Layout Position
The geometry of a view is that of a rectangle. A view has a location, expressed as a
pair of left and top coordinates, and two dimensions, expressed as a width and a
height. The unit for location and dimensions is the pixel.
It is possible to retrieve the location of a view by invoking the methods
getLeft() and getTop(). The former returns the left, or X, coordinate of the
rectangle representing the view. The latter returns the top, or Y, coordinate of the
rectangle representing the view. These methods both return the location of the view
relative to its parent. For instance, when getLeft() returns 20, that means the
view is located 20 pixels to the right of the left edge of its direct parent.
In addition, several convenience methods are offered to avoid unnecessary
computations, namely getRight() and getBottom(). These methods return
the coordinates of the right and bottom edges of the rectangle representing the
view. For instance, calling getRight() is similar to the following computation:
getLeft() + getWidth().
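For instance, for any view that has been measured and laid out (the variable name view is illustrative):

int right = view.getRight();    // same value as view.getLeft() + view.getWidth()
int bottom = view.getBottom();  // same value as view.getTop() + view.getHeight()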
Size, Padding and Margins
The size of a view is expressed with a width and a height. A view actually
possesses two pairs of width and height values.
The first pair is known as measured width and measured height. These dimensions
define how big a view wants to be within its parent. The measured dimensions can
be obtained by calling getMeasuredWidth() and
getMeasuredHeight().
The second pair is simply known as width and height, or sometimes drawing width
and drawing height. These dimensions define the actual size of the view on screen,
at drawing time and after layout. These values may, but do not have to, be different
from the measured width and height. The width and height can be obtained by
calling getWidth() and getHeight().
To measure its dimensions, a view takes into account its padding. The padding is
expressed in pixels for the left, top, right and bottom parts of the view. Padding can
be used to offset the content of the view by a specific number of pixels. For
instance, a left padding of 2 will push the view's content by 2 pixels to the right of
the left edge. Padding can be set using the setPadding(int, int, int,
int) method and queried by calling getPaddingLeft(),
getPaddingTop(), getPaddingRight() and getPaddingBottom().
Even though a view can define a padding, it does not provide any support for
margins. However, view groups provide such a support. Refer to ViewGroup and
ViewGroup.MarginLayoutParams for further information.
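As a short sketch (the view and ID are illustrative, borrowed from the photo layout shown later in this section), padding is set directly on a view, while margins go through the parent view group's layout parameters:

View photo = findViewById(R.id.Photo);
// Push the view's content 16 pixels in from its left edge.
photo.setPadding(16, 0, 0, 0);            // left, top, right, bottom
int leftPadding = photo.getPaddingLeft(); // returns 16

// Margins are not supported by View itself; they come from the
// parent ViewGroup's layout parameters.
ViewGroup.MarginLayoutParams lp =
        (ViewGroup.MarginLayoutParams) photo.getLayoutParams();
lp.setMargins(8, 8, 8, 8);                // left, top, right, bottom
photo.setLayoutParams(lp);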
For more information about dimensions, see Dimension Values.
Common Layouts
Each subclass of the ViewGroup class provides a unique way to display the
views you nest within it. Below are some of the more common layout types that
are built into the Android platform.
Note: Although you can nest one or more layouts within another layout to achieve
your UI design, you should strive to keep your layout hierarchy as shallow as
possible. Your layout draws faster if it has fewer nested layouts (a wide view
hierarchy is better than a deep view hierarchy).
Linear Layout
A layout that organizes its children into a single horizontal or vertical row. It
creates a scrollbar if the length of the window exceeds the length of the screen.
Relative Layout
Enables you to specify the location of child objects relative to each other (child A
to the left of child B) or to the parent (aligned to the top of the parent).
Web View
Displays web pages.
SimpleCursorAdapter
Use this adapter when your data comes from a Cursor. When using
SimpleCursorAdapter, you must specify a layout to use for each row
in the Cursor and which columns in the Cursor should be inserted into
which views of the layout. For example, if you want to create a list of
people's names and phone numbers, you can perform a query that returns
a Cursor containing a row for each person and columns for the names
and numbers. You then create a string array specifying which columns from the Cursor you want in the layout for each result, and an integer array specifying the corresponding views in which each column should be placed:
String[] fromColumns = {ContactsContract.Data.DISPLAY_NAME,
        ContactsContract.CommonDataKinds.Phone.NUMBER};
int[] toViews = {R.id.display_name, R.id.phone_number};
When you instantiate the SimpleCursorAdapter, pass the layout to use for
each result, the Cursor containing the results, and these two arrays:
SimpleCursorAdapter adapter = new SimpleCursorAdapter(this,
        R.layout.person_name_and_number, cursor,
        fromColumns, toViews, 0);
ListView listView = getListView();
listView.setAdapter(adapter);
The SimpleCursorAdapter then creates a view for each row in the
Cursor using the provided layout by inserting each fromColumns item
into the corresponding toViews view.
If, during the course of your app's life, you change the underlying data that is read
by your adapter, you should call notifyDataSetChanged(). This will notify
the attached view that the data has been changed and it should refresh itself.
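Continuing the SimpleCursorAdapter example above, after swapping in fresh data behind the adapter you would simply call:

// Tells the attached ListView that the data set changed, so it redraws itself.
adapter.notifyDataSetChanged();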
Handling click events
You can respond to click events on each item in an AdapterView by
implementing the AdapterView.OnItemClickListener interface. For
example:
// Create a message handling object as an anonymous class.
private OnItemClickListener messageClickedHandler = new OnItemClickListener() {
    public void onItemClick(AdapterView parent, View v, int position, long id) {
        // Do something in response to the click.
    }
};

listView.setOnItemClickListener(messageClickedHandler);
For example, consider the following simple layout defined in XML:

<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent" >
    <TextView
        android:layout_width="fill_parent"
        android:layout_height="wrap_content"
        android:text="@string/hello" />
</LinearLayout>
During execution, the XML UI is processed by the onCreate() event handler in the Activity class, using the setContentView() method of the Activity class, as follows:

@Override
public void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.main);
}
Every XML element is compiled into its corresponding Android GUI class.
Layout
It is a type of resource that defines what is drawn on the screen and how elements are placed on the device's screen; it is stored as XML files in the /res/layout resource directory for the application. It can also be a type of View class used to organize other controls.
The Android Development Plug-in for Eclipse has a layout resource designer for designing and previewing layout resources. It has two tab views: the Layout view, to preview how the controls will appear on various screens, and the XML view, which shows the XML definition.
The layout resource designer preview can't replicate exactly how the layout will appear to end users; hence, testing on a properly configured emulator is needed.
XML provides a simple method for the UI design process, defining the layout of user interface controls with control attributes. Developers can easily access complex controls like ListView or GridView and manipulate the content of a screen programmatically. As an example:
<?xml version="1.0" encoding="utf-8"?>
<LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:orientation="vertical"
    android:layout_width="fill_parent"
    android:layout_height="fill_parent"
    android:gravity="center">
    <TextView
        android:layout_width="fill_parent"
        android:id="@+id/PhotoLabel"
        android:layout_height="wrap_content"
        android:text="@string/my_text_label"
        android:gravity="center_horizontal"
        android:textSize="20dp" />
    <ImageView
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:src="@drawable/matterhorn"
        android:adjustViewBounds="true"
        android:scaleType="fitXY"
        android:maxHeight="250dp"
        android:maxWidth="250dp"
        android:id="@+id/Photo" />
</LinearLayout>
The layout is a simple screen with two controls: first some text, and then an image below it, both arranged in a vertically oriented LinearLayout.
VOICE XML:
VoiceXML is an Extensible Markup Language (XML) standard for storing and processing digitized voice, input recognition, and defining human-machine voice interaction. VoiceXML uses voice as input to a machine for the desired processing, thereby facilitating voice application development.
The top-level element is <vxml>, which is mainly a container for dialogs. There are
two types of dialogs: forms and menus. Forms present information and gather
input; menus offer choices of what to do next. This example has a single form,
which contains a block that synthesizes and presents "Hello World!" to the user.
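As an illustration of the menu dialog type, here is a minimal sketch of a VoiceXML menu (the choice URLs are placeholders, not from the original text):

<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <menu>
    <prompt>Say news or weather.</prompt>
    <!-- Each choice transitions to another dialog or document. -->
    <choice next="http://example.com/news.vxml">news</choice>
    <choice next="http://example.com/weather.vxml">weather</choice>
  </menu>
</vxml>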
VoiceXML is the HTML of the voice web, the open standard markup language for
voice applications. Where HTML assumes a graphical web browser with display,
keyboard, and mouse, VoiceXML assumes a voice browser with audio output
(recorded messages and TTS synthesis), audio input (ASR), and keypad input
(DTMF).
The VoiceXML gateway, or voice portal, serves as the heart of any VoiceXML-enabled network.
To create a new VoiceXML file, give it the .vxml extension and select the parent folder where you want the new file placed. Optional: click Advanced to reveal or hide a section of the wizard used to create a linked file. Check the "Link to file in the file system" check box if you want the new file to reference a file in the file system.
VoiceXML has tags that instruct the voice browser to provide speech synthesis,
automatic speech recognition, dialog management, and audio playback. The
following is an example of a VoiceXML document:
<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">
  <form>
    <block>
      <prompt>
        Hello world!
      </prompt>
    </block>
  </form>
</vxml>
When interpreted by a VoiceXML interpreter this will output "Hello world" with
synthesized speech.
Typically, HTTP is used as the transport protocol for fetching VoiceXML pages.
Some applications may use static VoiceXML pages, while others rely on dynamic
VoiceXML page generation using an application server like Tomcat, Weblogic,
IIS, or WebSphere.
ADVANTAGES:
While you could certainly build voice applications without using a voice markup language and a speech browser (for example, by writing your applications directly to a speech API), using VoiceXML and a VoiceXML browser provides several important capabilities: