CBT
Components are deployable units that provide a business service to their clients.
Component Characteristics
• Properties
• Operations
• Reusability
• Shareable
• Events
• Distributable
• Deployable
• Self containment
• Self description
Components are designed to be independent of:
• Hardware
• The underlying operating system
• Their Application Server
• The network protocol they use
Their runtime environment (container or application server) typically provides:
• Directory Services
• Distributed transaction management
• Security management
• Concurrent access management
• Persistence management
• Resource pooling (e.g. DB connections)
• Administration interface
• Load Balancing
• Fault tolerance
Modules:
The module will provide students with knowledge and understanding of current and
emerging component technologies. The module is focused on two major themes: Object-
Oriented Middleware (OOM) and Message-Oriented Middleware (MOM). In the first
theme we examine the evolution of object-oriented programming into (distributed)
component models such as The Common Object Request Broker Architecture (CORBA),
The Component Object Model (COM), Java Remote Method Invocation (RMI) and Java
Beans. The common underlying requirements of these systems will be studied in detail
such as naming and binding issues and marshalling and un-marshalling of data. The
second theme will explore the emerging field of Message-Oriented Middleware with an
in-depth study of current MOM technologies such as the Java Message Service (JMS).
In their work on Modular Smalltalk [5], Allen Wirfs-Brock and Brian Wilkerson describe
the essential features of modules:
Modules are program units that manage the visibility and accessibility of names...
A module typically groups a set of class definitions and objects to implement some
service or abstraction. A module will frequently be the unit of division of responsibility
within a programming team....
The definition of a normal Smalltalk class includes a reference to a superclass, the name
of the new subclass, and the names of any new instance and class variables added by the
new subclass. Class variables are shared by all the instances of a class, and are visible to
all the methods of the class and its subclasses, if any.
In addition, the new subclass can provide its methods with access to named objects that
are shared on a subscription basis. Certain names in the Smalltalk system dictionary are
bound to global pool dictionaries that contain these sharable named objects. The new
subclass can subscribe to these objects by including selected global names in its list of
pool dictionaries. For example, a File class might be defined using the following
message:
There are several new messages for defining modules and the private classes contained in
their domains. The definition of a module for managing an inventory might use the
following message:
Object moduleSubclass: #InventoryManager
instanceVariableNames: ''
classVariableNames: ''
poolDictionaries: ''!
A new private class can be added to the domain of the InventoryManager class using the
message:
In order to add a new private subclass of InventoryItem, we send the name of the private
class (#InventoryItem) as a message to the InventoryManager module:
The issues involved in this breaking of the module encapsulation will be considered
further in a later section.
Modules can be used to create nested subsystems. The following message creates a
nested module for managing the accounts in the InventoryManager module class.
Figure 1 depicts the structural relationships between the classes in the InventoryManager
module. Note that the graphic design notation of OMT [3] has been extended slightly to
show what classes are encapsulated inside a module class. The rounded rectangles
represent module domains. Note that the Smalltalk system dictionary is also considered
to be the system domain.
Encapsulating Private Behavior
Ada encourages the division of code into separate modules called packages. Each
package can contain any combination of the following items:
• a group of related subprograms along with their shared data, with the data not
visible outside the package,
• one or more data types along with subprograms for manipulating those data types,
and
• a generic package that can be instantiated under varying conditions.
Interfaces:
All system processes are placed into separate components so that all of the data and
functions inside each component are semantically related (just as with the contents of
classes). Because of this principle, it is often said that components are modular and
cohesive.
With regard to system-wide co-ordination, components communicate with each other via
interfaces. When a component offers services to the rest of the system, it adopts a
provided interface which specifies the services that can be utilized by other components
and how. This interface can be seen as a signature of the component - the client does not
need to know about the inner workings of the component (implementation) in order to
make use of it. For this reason, components are said to be encapsulated. The
UML illustrations within this article represent provided interfaces by a lollipop symbol
attached to the outer edge of the component.
Software components can serve as standardized parts, designed specifically for reuse,
whose self-contained functionality is published through their contractually specified
interfaces.
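To make the idea concrete, here is a minimal sketch (the interface and class names are
illustrative, not taken from the text) of a component exposing a provided interface in
Java: clients program against the interface and never see the implementation.

public interface SpellChecker {
    // the provided interface: the component's published signature
    boolean isCorrect(String word);
}

class DictionarySpellChecker implements SpellChecker {
    // encapsulated implementation; clients never reference this class directly
    private final java.util.Set<String> words = new java.util.HashSet<String>();

    public boolean isCorrect(String word) {
        return words.contains(word.toLowerCase());
    }
}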
Callbacks:
Modern software systems are often developed via composition of independent
components with well-defined interfaces and (formal) behavior specification of some
sort. When reliability of a software system built from components is a critical issue,
formal verification such as program model checking becomes a necessity. Since model
checking of the whole complex ("real-life") system at a time is prone to state explosion,
compositional methods have to be used. A basic idea of compositional model checking is
the checking of (local) properties of isolated components and inferring (global) properties
of the whole system from the local properties. This way, state explosion is partially
addressed, since a single isolated component typically triggers a smaller state space
compared to the whole system.
However, checking an isolated component (its implementation, e.g. its Java
code) is not directly possible, since its environment also has to be provided in the form of
a program (code). Program model checking of a primitive component is therefore associated
with the problem of the missing environment [14]. A typical solution to it in the case of JPF is to
construct an "artificial" environment (Java code) from an assumption expressed as a
behavior model, where the behavior model is based on an LTS defined either directly,
or in the formalism of behavior protocols. Then, JPF is applied to the complete program
composed of the component and environment.
Directory Services
A directory service is the software system that stores, organizes and provides access to
information in a directory. In software engineering, a directory is a map between names
and values. It allows the lookup of values given a name, similar to a
dictionary. As a word in a dictionary may have multiple definitions, in a directory, a
name may be associated with multiple, different pieces of information. Likewise, as a
word may have different parts of speech and different definitions, a name in a directory
may have many different types of data.
Directories may be very narrow in scope, supporting only a small set of node types and
data types, or they may be very broad, supporting an arbitrary or extensible set of types.
In a telephone directory, the nodes are names and the data items are telephone numbers.
In the DNS, the nodes are domain names and the data items are IP addresses (and aliases,
mail server names, etc.). In a directory used by a network operating system, the nodes
represent resources that are managed by the OS, including users, computers, printers and
other shared resources. Many different directory services have been used since the advent
of the Internet but this article focuses mainly on those that have descended from the
X.500 directory service.
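As a concrete illustration, the following sketch uses JNDI (Java's standard naming and
directory API) with the JDK's built-in DNS provider to look up the multiple typed data
items bound to one name; the domain name queried is illustrative.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.NamingException;
import javax.naming.directory.Attributes;
import javax.naming.directory.DirContext;
import javax.naming.directory.InitialDirContext;

public class DnsLookup {
    public static void main(String[] args) throws NamingException {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.sun.jndi.dns.DnsContextFactory");
        DirContext ctx = new InitialDirContext(env);
        // One node (the domain name) is associated with several typed
        // data items: here its A (address) and MX (mail server) records.
        Attributes attrs = ctx.getAttributes("example.com",
                new String[] { "A", "MX" });
        System.out.println(attrs);
        ctx.close();
    }
}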
A based-on relationship between software component versions is recorded in two places:
1. In the Basis Objects subtree of the software component version that is based
on the underlying software component version.
2. In the subtree of the underlying software component version (at the same
level as all other software component versions).
This relationship is shown in the figure. In the case illustrated, the software component
version Based-On Component 1.0 is based on the software component version Basis 1.0.
Component Architecture:
The distinction between operating system and middleware functionality is, to some
extent, arbitrary. While core kernel functionality can only be provided by the operating
system itself, some functionality previously provided by separately sold middleware is
now integrated in operating systems. A typical example is the TCP/IP stack for
telecommunications, nowadays included in virtually every operating system.
Functions of Middleware
In all of the above situations, applications use intermediate software that resides on top of
the operating systems and communication protocols to perform the following functions:
• Hiding distribution, i.e. the fact that an application is usually made up of many
interconnected parts running in distributed locations;
• Hiding the heterogeneity of the various hardware components, operating
systems and communication protocols;
In Java, an object of the Thread class can represent a thread. A thread can be implemented
in either of two ways:
The first way is to extend the Thread class: define a subclass of Thread, override its
run() method with the code the thread should execute, create an instance of the subclass,
and invoke start() on that instance.
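A minimal sketch of this first approach, consistent with the compile/run transcript below
(only the file name RunThread.java and the output line appear in the source; the class
body itself is an assumption):

public class RunThread extends Thread {
    // Override run() with the code the thread should execute.
    public void run() {
        System.out.println("Thread started....");
    }

    public static void main(String[] args) {
        RunThread t = new RunThread(); // create an instance of the subclass
        t.start();                     // start() schedules run() on a new thread
    }
}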
C:\j2se6\thread>javac RunThread.java
C:\j2se6\thread>java RunThread
Thread started....
The second way is to implement the Runnable interface:
1. A class implements the Runnable interface and overrides the run() method to define
the code executed by the thread. An object of this class is a Runnable object.
2. Create an object of the Thread class, passing the Runnable object as an argument.
3. Invoke the start() method on the instance of the Thread class.
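A minimal sketch of these three steps (the class name is illustrative):

public class RunnableTask implements Runnable {
    // Step 1: implement Runnable and override run().
    public void run() {
        System.out.println("Running in: " + Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        // Step 2: create a Thread, passing the Runnable object as an argument.
        Thread t = new Thread(new RunnableTask());
        // Step 3: invoke start() on the Thread instance.
        t.start();
    }
}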
1. If you extend the Thread class, your subclass cannot extend any other class; if you
implement the Runnable interface, it can.
2. A class implementing the Runnable interface avoids the full overhead of the Thread
class, which can be excessive.
Thread Priorities:
In Java, the thread scheduler uses thread priorities, integer values assigned to each
thread, to determine the execution schedule of threads. Threads enter the ready-to-run
state according to their priorities. The thread scheduler gives CPU time to the
highest-priority thread in the ready-to-run state. Priorities are integer values from 1
(the lowest, given by the constant Thread.MIN_PRIORITY) to 10 (the highest, given by
the constant Thread.MAX_PRIORITY). The default priority is 5 (Thread.NORM_PRIORITY).
Constant              Description
Thread.MIN_PRIORITY   The minimum priority of any thread (an int value of 1)
Thread.MAX_PRIORITY   The maximum priority of any thread (an int value of 10)
Thread.NORM_PRIORITY  The normal priority of any thread (an int value of 5)
The methods used to work with a thread's priority are:
setPriority() – sets the priority of the thread.
getPriority() – returns the priority of the thread.
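A brief sketch of these methods in use (RunnableTask is the hypothetical class from the
earlier sketch):

Thread worker = new Thread(new RunnableTask());
worker.setPriority(Thread.MAX_PRIORITY); // request the highest priority (10)
worker.start();
System.out.println(worker.getPriority()); // prints 10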
When a Java thread is created, it inherits its priority from the thread that created it. At
any given time, when multiple threads are ready to be executed, the runtime system
chooses the runnable thread with the highest priority for execution. The Java runtime
system applies a preemptive scheduling algorithm: if a thread with a higher priority than
the currently running thread becomes runnable, the runtime system chooses the new
higher-priority thread for execution. If two threads of the same priority are waiting to
be executed, a round-robin algorithm is applied, and the scheduler chooses one of them to
run for its round of the time slice.
Thread Scheduler
In its implementation, a thread scheduler usually applies one of the two following
strategies:
• Preemptive scheduling – the currently running thread leaves the running state
when a higher-priority thread enters the runnable state.
• Time-sliced (round-robin) scheduling – a running thread is allowed to execute
for a fixed time slice; when that time expires, it yields to another thread, which
then enters the running state.
States and State Transitions of Java Threads:
[Figure: thread state-transition diagram showing the states new, runnable, running,
suspended, and terminated, together with waiting-for-lock and waiting-for-notify
states entered via synchronized and wait, and the transitions start, schedule,
pre-empt, suspend, resume, notify, and stop/destroy.]
New
When a thread is in the "New Thread" state, it is merely an empty Thread object. No
system resources have been allocated for it yet. Thus when a thread is in this state, you
can only start the thread or stop it. Calling any method besides start() or stop() when a
thread is in this state makes no sense and causes an IllegalThreadStateException.
Runnable
The start() method creates the system resources necessary to run the thread, schedules the
thread to run, and calls the thread's run() method. At this point the thread is in the
"Runnable" state. This state is called "Runnable" rather than "Running" because the
thread might not actually be running when it is in this state. Many computers have a
single processor, making it impossible to run all "Runnable" threads at the same time. So,
the Java runtime system must implement a scheduling scheme that shares the processor
between all "Runnable" threads. (See Thread Priority for more information about
scheduling.) However, for most purposes you can think of the "Runnable" state as simply
"Running". When a thread is running--it's "Runnable" and is the current thread--the
instructions in its run() method are executing sequentially.
Not Runnable
A thread enters the "Not Runnable" state when one of these four events occurs:
• Someone invokes its sleep() method.
• The thread calls wait() to wait for a specific condition to be satisfied.
• The thread is blocking on I/O.
• Someone invokes its suspend() method.
Thread Synchronization
The monitor
Java uses the monitor concept as the basis of its thread synchronization. A monitor is
an object that can block and revive threads.
Java provides a way to lock the code for a thread which is currently executing it, and
making other threads that wish to use it wait until the first thread is finished. These
other threads are placed in the waiting state. Java is not as fair as the service station
because there is no queue. Any one of the waiting threads may get the monitor next,
regardless of the order they asked for it.
notifyAll() moves all the threads in the waiting state into the ready state where the
scheduler will decide when they will move to the running state. notifyAll() is much
more commonly used than notify() because the latter could conceivably leave a thread
waiting for a very long time if it were unlucky.
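A minimal sketch of monitor-based synchronization using wait() and notifyAll() (the
class is illustrative): a one-slot buffer whose methods block until the slot is free
or full.

public class Dropbox {
    private Integer item; // the shared slot, guarded by this object's monitor

    public synchronized void put(int value) throws InterruptedException {
        while (item != null) {
            wait();      // releases the monitor and waits until notified
        }
        item = value;
        notifyAll();     // wakes every waiting thread; each re-checks its condition
    }

    public synchronized int take() throws InterruptedException {
        while (item == null) {
            wait();
        }
        int value = item;
        item = null;
        notifyAll();
        return value;
    }
}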
Synchronization must be used carefully to avoid the classic concurrency hazards:
• race conditions
• starvation
• deadlock
Java Beans:
• JavaBeans components are Java classes that can be easily reused and composed
together into applications.
• JavaBeans™ is a portable, platform-independent component model written in the
Java programming language. The JavaBeans architecture was built through a
collaborative industry effort and enables developers to write reusable components
in the Java programming language.
• Using JavaBeans-compliant application builder tools, you can combine these
components into applets, applications, or composite components.
• A Java Bean is a reusable software component (actually, a Java class) that can be
manipulated visually in a builder tool.
• Examples of builder tools include the BeanBox (from Sun's Bean Development Kit)
and IDEs such as NetBeans.
JavaBean components are known as beans. Beans are dynamic in that they can be
changed or customized. Through the design mode of a builder tool, you use the property
sheet or bean customizer to customize the bean and then save (persist) your customized
beans.
JavaBeans Concepts
The JavaBeans™ architecture is based on a component model which enables
developers to create software units called components. Components are self-contained,
reusable software units that can be visually assembled into composite components,
applets, applications, and servlets using visual application builder tools. JavaBean
components are known as beans.
A set of APIs describes a component model for a particular language. The JavaBeans
API specification provides the detailed description of the JavaBeans component
architecture.
Beans are dynamic in that they can be changed or customized. Through the design
mode of a builder tool you can use the Properties window of the bean to customize the
bean and then save (persist) your beans using visual manipulation. You can select a
bean from the toolbox, drop it into a form, modify its appearance and behavior, define
its interaction with other beans, and combine it and other beans into an applet,
application, or a new bean.
• Builder tools discover a bean's features (that is, its properties, methods, and
events) by a process known as introspection. Beans support introspection in two
ways:
o By adhering to specific rules, known as design patterns, when naming bean
features. The Introspector class examines beans for these design patterns to
discover bean features. The Introspector class relies on the core reflection API.
The trail The Reflection API is an excellent place to learn about reflection.
o By explicitly providing property, method, and event information with a related
bean information class. A bean information class implements the BeanInfo
interface. A BeanInfo class explicitly lists those bean features that are to be
exposed to application builder tools.
• Properties are the appearance and behavior characteristics of a bean that can be
changed at design time. Builder tools introspect on a bean to discover its
properties and expose those properties for manipulation.
• Beans expose properties so they can be customized at design time. Customization
is supported in two ways: by using property editors, or by using more
sophisticated bean customizers.
• Beans use events to communicate with other beans. A bean that is to receive
events (a listener bean) registers with the bean that fires the event (a source bean).
Builder tools can examine a bean and determine which events that bean can fire
(send) and which it can handle (receive).
• Persistence enables beans to save and restore their state. After changing a bean's
properties, you can save the state of the bean and restore that bean at a later time
with the property changes intact. The JavaBeans architecture uses Java Object
Serialization to support persistence.
• A bean's methods are no different from Java methods, and can be called from
other beans or a scripting environment. By default all public methods are
exported.
Beans vary in functionality and purpose; you have probably encountered beans of many
kinds in your programming practice.
The source of an event is the object that originates or fires the event. The source must
define the events it will fire, as well as the methods for registering listeners of those
events. A listener is an object that indicates that it is to be notified of events of a
particular type. Listeners register for events using the methods defined by the sources
of those events.
From the Properties lesson you discovered two event listeners. The
PropertyChangeListener (in the API reference documentation) interface provides a
notification whenever a bound property value is changed, and the
VetoableChangeListener (in the API reference documentation) provides a notification
whenever a bean changes a constrained property value.
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JTextArea;
import java.awt.BorderLayout;
import javax.swing.JButton;
import javax.swing.JFrame;
import javax.swing.WindowConstants;

public class ButtonHandler implements ActionListener {
    /**
     * Component that will contain messages about events generated.
     */
    private JTextArea output;

    /**
     * Creates an ActionListener that will put a message in the
     * JTextArea every time an event is received.
     */
    public ButtonHandler( JTextArea output ) {
        this.output = output;
    }

    /**
     * When an action event notification is received, appends a
     * message to the JTextArea passed into the constructor.
     */
    public void actionPerformed( ActionEvent event ) {
        this.output.append( "Action occurred: " + event + '\n' );
    }
}

class ActionTester {
    public static void main( String args[] ) {
        JFrame frame = new JFrame( "Button Handler" );
        JTextArea area = new JTextArea( 6, 80 );
        JButton button = new JButton( "Fire Event" );
        button.addActionListener( new ButtonHandler( area ) );
        frame.add( button, BorderLayout.NORTH );
        frame.add( area, BorderLayout.CENTER );
        frame.pack();
        frame.setDefaultCloseOperation( WindowConstants.DISPOSE_ON_CLOSE );
        frame.setLocationRelativeTo( null );
        frame.setVisible( true );
    }
}
Bean Properties
A bean property is a named attribute of a bean that can affect its behavior or
appearance. Examples of bean properties include color, label, font, font size, and
display size. The JavaBeans specification defines the following types of bean
properties:
Simple – A bean property with a single value whose changes are independent of
changes in any other property. To add simple properties to a bean, add appropriate
getXXX and setXXX methods (or isXXX and setXXX methods for a boolean property).
The names of these methods follow specific rules called design patterns. By using these
design pattern-based method names, builder tools and the Introspector class can discover
a bean's properties and expose them for manipulation, as in the sketch below.
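For example, a minimal sketch of a bean with one simple property named color, following
these design patterns (the class and property names are illustrative):

public class SimpleBean implements java.io.Serializable {
    private java.awt.Color color = java.awt.Color.RED;

    public java.awt.Color getColor() {           // getter: getXXX
        return color;
    }

    public void setColor(java.awt.Color color) { // setter: setXXX
        this.color = color;
    }
}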
Indexed – A bean property that supports a range of values instead of a single value.
An indexed property is an array of properties or objects that supports a range of values
and enables the accessor to specify an element of a property to read or write. Indexed
properties are specified by the following methods:
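In outline, where PropertyName and PropertyElement stand for the property's name and
element type:

public PropertyElement[] getPropertyName()
public PropertyElement getPropertyName(int index)
public void setPropertyName(PropertyElement[] values)
public void setPropertyName(int index, PropertyElement value)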
• Note that the distinction between the get and set methods for indexed properties is
subtle. The get method either has an argument that is the array index of the
property, or returns an array. The set method either has two arguments, namely an
integer array index and the property element object that is being set, or has the
entire array as an argument.
Bound – A bean property for which a change to the property results in a notification
being sent to some other bean. Bound properties are supported by the
PropertyChangeListener interface.
Sometimes when a Bean property changes, another object might need to be notified of
the change, and react to the change. Whenever a bound property changes, notification
of the change is sent to interested listeners.
The accessor methods for a bound property are defined in the same way as those for
simple properties. However, you also need to provide the event listener registration
methods for PropertyChangeListener objects and fire a PropertyChangeEvent to the
registered PropertyChangeListener objects by calling their propertyChange methods.
In order to listen for property changes, an object must be able to add and remove itself
from the listener list on the bean containing the bound property. It must also be able to
respond to the event notification method that signals a property change.
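A minimal sketch of a bound property using the java.beans.PropertyChangeSupport
utility class (the bean and property names are illustrative):

import java.beans.PropertyChangeListener;
import java.beans.PropertyChangeSupport;

public class TitleBean implements java.io.Serializable {
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private String title = "";

    // listener registration methods required for a bound property
    public void addPropertyChangeListener(PropertyChangeListener l) {
        pcs.addPropertyChangeListener(l);
    }
    public void removePropertyChangeListener(PropertyChangeListener l) {
        pcs.removePropertyChangeListener(l);
    }

    public String getTitle() {
        return title;
    }
    public void setTitle(String title) {
        String old = this.title;
        this.title = title;
        // fires a PropertyChangeEvent to all registered listeners
        pcs.firePropertyChange("title", old, title);
    }
}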
Constrained – A bean property for which a change to the property results in validation
by another bean. The other bean may reject the change if it is not appropriate. A bean
property is constrained if the bean supports the VetoableChangeListener and
PropertyChangeEvent classes, and if the set method for this property throws a
PropertyVetoException.
Constrained properties are more complicated than bound properties because they also
support property change listeners that happen to be vetoers. The following operations
in the setXXX method for the constrained property must be implemented in this order:
1. Save the old property value.
2. Notify the registered vetoable listeners of the proposed change, giving each a
chance to veto it.
3. If no listener vetoes the change, set the property to the new value and fire a
PropertyChangeEvent to the bound-property listeners.
The accessor methods for a constrained property are defined in the same way as those
for simple properties, with the addition that the setXXX method throws a
PropertyVetoException exception. The syntax is as follows:
public void setPropertyName(PropertyType pt)
throws PropertyVetoException {code}
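A minimal sketch of a constrained property using java.beans.VetoableChangeSupport
(names are illustrative); note the order: vetoable notification first, then the actual
change, then the bound-property notification.

import java.beans.PropertyChangeSupport;
import java.beans.PropertyVetoException;
import java.beans.VetoableChangeListener;
import java.beans.VetoableChangeSupport;

public class SizeBean implements java.io.Serializable {
    private final VetoableChangeSupport vcs = new VetoableChangeSupport(this);
    private final PropertyChangeSupport pcs = new PropertyChangeSupport(this);
    private int size = 10;

    public void addVetoableChangeListener(VetoableChangeListener l) {
        vcs.addVetoableChangeListener(l);
    }

    public int getSize() {
        return size;
    }

    public void setSize(int size) throws PropertyVetoException {
        int old = this.size;
        vcs.fireVetoableChange("size", old, size); // a veto throws here, aborting the change
        this.size = size;                          // no veto: commit the new value
        pcs.firePropertyChange("size", old, size); // then notify ordinary listeners
    }
}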
Handling Vetoes
If a registered listener vetoes a proposed property change by throwing a
PropertyVetoException exception, the source bean with the constrained property is
responsible for the following actions:
1. Catching the PropertyVetoException.
2. Reverting to the old value for the property.
3. Issuing a new VetoableChangeListener.vetoableChange call to all listeners to
report the reversion.
• Standard
• Expert
• Preferred
• Hidden – A bean property that can be changed. However, these properties are not
disclosed with the BeanInfo class.
• A JavaBean is just a Java class with the following requirements: a public
no-argument constructor, properties exposed through getter and setter methods named
after the design patterns, and (to support persistence) implementation of the
java.io.Serializable interface.
Introspection is the automatic process of analyzing a bean's design patterns to reveal the
bean's properties, events, and methods. This process controls the publishing and
discovery of bean operations and properties. This lesson explains the purpose of
introspection, introduces the Introspection API, and gives an example of introspection
code.
Purpose of Introspection
Portability - Everything is done in the Java platform, so you can write components once,
reuse them everywhere. There are no extra specification files that need to be maintained
independently from your component code. There are no platform-specific issues to
contend with. Your component is not tied to one component model or one proprietary
platform. You get all the advantages of the evolving Java APIs, while maintaining the
portability of your components.
Introspection API
The JavaBeans API architecture supplies a set of classes and interfaces to provide
introspection.
The BeanInfo interface of the java.beans package defines a set of methods that allow
bean implementors to provide explicit information about their beans. By specifying
BeanInfo for a bean component, a developer can hide methods, specify an icon for the
toolbox, provide descriptive names for properties, define which properties are bound
properties, and much more.
The getBeanInfo(beanName) method of the Introspector class can be used by builder tools and
other automated environments to provide detailed information about a bean. The
getBeanInfo method relies on the naming conventions for the bean's properties, events,
and methods. A call to getBeanInfo results in the introspection process analyzing the
bean’s classes and superclasses.
The Introspector class provides descriptor classes with information about properties,
events, and methods of a bean. Methods of this class locate any descriptor information
that has been explicitly supplied by the developer through BeanInfo classes. Then the
Introspector class applies the naming conventions to determine what properties the bean
has, the events to which it can listen, and those which it can send.
Each class represented in this group describes a particular attribute of the bean. For
example, the isBound method of the PropertyDescriptor class indicates whether a
PropertyChangeEvent event is fired when the value of this property changes.
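A short sketch of introspection in action, listing the properties the Introspector
discovers on the hypothetical SimpleBean class sketched earlier:

import java.beans.BeanInfo;
import java.beans.IntrospectionException;
import java.beans.Introspector;
import java.beans.PropertyDescriptor;

public class ShowProperties {
    public static void main(String[] args) throws IntrospectionException {
        BeanInfo info = Introspector.getBeanInfo(SimpleBean.class);
        for (PropertyDescriptor pd : info.getPropertyDescriptors()) {
            System.out.println(pd.getName() + " (bound: " + pd.isBound() + ")");
        }
    }
}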
JAR Files
The Java™ Archive (JAR) file format enables you to bundle multiple files into a single
archive file. Typically a JAR file contains the class files and auxiliary resources
associated with applets and applications. The JAR file format provides many benefits:
• Security: You can digitally sign the contents of a JAR file. Users who recognize your
signature can then optionally grant your software security privileges it wouldn't
otherwise have.
• Decreased download time: If your applet is bundled in a JAR file, the applet's class
files and associated resources can be downloaded to a browser in a single HTTP
transaction without the need for opening a new connection for each file.
• Compression: The JAR format allows you to compress your files for efficient
storage.
• Packaging for extensions: The extensions framework provides a means by which
you can add functionality to the Java core platform, and the JAR file format defines
the packaging for extensions. Java 3D and JavaMail are examples of extensions
developed by Sun™. By using the JAR file format, you can turn your software into
extensions as well.
• Package Sealing: Packages stored in JAR files can be optionally sealed so that the
package can enforce version consistency. Sealing a package within a JAR file means
that all classes defined in that package must be found in the same JAR file.
• Package Versioning: A JAR file can hold data about the files it contains, such as
vendor and version information.
• Portability: The mechanism for handling JAR files is a standard part of the Java
platform's core API.
To perform basic tasks with JAR files, you use the Java Archive Tool provided as part
of the Java Development Kit. Because the Java Archive tool is invoked by using the jar
command, this tutorial refers to it as 'the Jar tool'. As a synopsis and preview of some of
the topics to be covered in this section, the following table summarizes common JAR
file operations:
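Operation                                   Command
Create a JAR file                           jar cf jar-file input-file(s)
View the contents of a JAR file             jar tf jar-file
Extract the contents of a JAR file          jar xf jar-file archived-file(s)
Update a JAR file                           jar uf jar-file input-file(s)
Run an application packaged as a JAR file   java -jar jar-file
Creating a JAR File
The basic format of the command for creating a JAR file is:
jar cf jar-file input-file(s)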
This command will generate a compressed JAR file and place it in the current directory.
The command will also generate a default manifest file for the JAR archive. The
metadata in the JAR file, such as the entry names, comments, and contents of the
manifest, must be encoded in UTF8.
Viewing the Contents of a JAR File
The basic format of the command for viewing the contents of a JAR file is:
jar tf jar-file
Let's look at the options and argument used in this command:
• The t option indicates that you want to view the table of contents of the JAR file.
• The f option indicates that the JAR file whose contents are to be viewed is specified
on the command line.
• The jar-file argument is the path and name of the JAR file whose contents you
want to view.
This command will display the JAR file's table of contents to stdout.
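Extracting the Contents of a JAR File
The basic command for extracting files from a JAR file is:
jar xf jar-file archived-file(s)
Let's look at the options and arguments used in this command: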
• The x option indicates that you want to extract files from the JAR archive.
• The f option indicates that the JAR file from which files are to be extracted is
specified on the command line, rather than through stdin.
• The jar-file argument is the filename (or path and filename) of the JAR file from
which to extract files.
• archived-file(s) is an optional argument consisting of a space-separated list of
the files to be extracted from the archive. If this argument is not present, the Jar tool
will extract all the files in the archive.
When extracting files, the Jar tool makes copies of the desired files and writes them to the
current directory, reproducing the directory structure that the files have in the archive.
The original JAR file remains unchanged.
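Updating a JAR File
The basic command for adding files to an existing JAR file is:
jar uf jar-file input-file(s)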
In this command:
• The u option indicates that you want to update an existing JAR file.
• The f option indicates that the JAR file to update is specified on the command line.
• jar-file is the existing JAR file that's to be updated.
• input-file(s) is a space-delimited list of one or more files that you want to
add to the Jar file.
Any files already in the archive having the same pathname as a file being added will be
overwritten.
When we create a JAR file, it automatically receives a default manifest file. There can be
only one manifest file in an archive, and it always has the pathname
META-INF/MANIFEST.MF
When we create a JAR file, the default manifest file simply contains the following:
Manifest-Version: 1.0
Created-By: 1.6.0 (Sun Microsystems Inc.)
These lines show that a manifest's entries take the form of "header: value" pairs. The
name of a header is separated from its value by a colon. The default manifest conforms
to version 1.0 of the manifest specification and was created by the 1.6.0 version of the
JDK.
The manifest can also contain information about the other files that are packaged in the
archive. Exactly what file information should be recorded in the manifest depends on
how you intend to use the JAR file. The default manifest makes no assumptions about
what information it should record about other files. Digest information is not included in
the default manifest.
The Jar tool automatically puts a default manifest with the pathname META-
INF/MANIFEST.MF into any JAR file we create. We can enable special JAR file
functionality, such as package sealing, by modifying the default manifest. Typically,
modifying the default manifest involves adding special-purpose headers to the manifest
that allow the JAR file to perform a particular desired function.
To modify the manifest, we must first prepare a text file containing the information we
wish to add to the manifest. We then use the Jar tool's m option to add the information in
our file to the manifest.
The text file from which you are creating the manifest must end with a new line or
carriage return. The last line will not be parsed properly if it does not end with a new
line or carriage return. The basic command has this format:
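jar cmf manifest-addition jar-file input-file(s)
Here manifest-addition is the name of the text file prepared above.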
The m and f options must be in the same order as the corresponding arguments. The
contents of the manifest must be encoded in UTF8.
The Reflection API
Uses of Reflection
Reflection is commonly used by programs which require the ability to examine or
modify the runtime behavior of applications running in the Java virtual machine. This
is a relatively advanced feature and should be used only by developers who have a
strong grasp of the fundamentals of the language. With that caveat in mind, reflection
is a powerful technique and can enable applications to perform operations which would
otherwise be impossible.
Java's Reflection API makes it possible to inspect classes, interfaces, fields and
methods at runtime, without knowing the names of the classes, methods etc. at compile
time. It is also possible to instantiate new objects, invoke methods and get/set field
values using reflection.
Java Reflection is quite powerful and can be very useful. For instance, when mapping
objects to tables in a database at runtime, like Butterfly Persistence does. Or, when
mapping the statements in a script language to method calls on real objects at runtime,
like Butterfly Container does when parsing its configuration scripts.
The ability to examine and manipulate a Java class from within itself may not sound like
very much, but in other programming languages this feature simply doesn't exist. For
example, there is no way in a Pascal, C, or C++ program to obtain information about the
functions defined within that program.
Extensibility Features
An application may make use of external, user-defined classes by creating instances of
extensibility objects using their fully-qualified names.
Class Browsers and Visual Development Environments
A class browser needs to be able to enumerate the members of classes. Visual
development environments can benefit from making use of type information available
in reflection to aid the developer in writing correct code.
Debuggers and Test Tools
Debuggers need to be able to examine private members on classes. Test harnesses can
make use of reflection to systematically call a discoverable set of APIs defined on a
class, to ensure a high level of code coverage in a test suite.
Drawbacks of Reflection
Reflection is powerful, but should not be used indiscriminately. If it is possible to
perform an operation without using reflection, then it is preferable to avoid using it. The
following concerns should be kept in mind when accessing code via reflection.
Performance Overhead
Because reflection involves types that are dynamically resolved, certain Java virtual
machine optimizations can not be performed. Consequently, reflective operations have
slower performance than their non-reflective counterparts, and should be avoided in
sections of code which are called frequently in performance-sensitive applications.
Security Restrictions
Reflection requires a runtime permission which may not be present when running under
a security manager. This is an important consideration for code which has to run in a
restricted security context, such as in an Applet.
Exposure of Internals
Since reflection allows code to perform operations that would be illegal in non-
reflective code, such as accessing private fields and methods, the use of reflection can
result in unexpected side-effects, which may render code dysfunctional and may
destroy portability. Reflective code breaks abstractions and therefore may change
behavior with upgrades of the platform.
Java Reflection Classes
Every object is either a reference or primitive type. Reference types all inherit from
java.lang.Object. Classes, enums, arrays, and interfaces are all reference types. There is a
fixed set of primitive types: boolean, byte, short, int, long, char, float, and double.
Examples of reference types include java.lang.String, all of the wrapper classes for
primitive types such as java.lang.Double, the interface java.io.Serializable, and the enum
javax.swing.SortOrder.
For every type of object, the Java virtual machine instantiates an immutable instance of
java.lang.Class which provides methods to examine the runtime properties of the object
including its members and type information. Class also provides the ability to create
new classes and objects. Most importantly, it is the entry point for all of the Reflection
APIs. This lesson covers the most commonly used reflection operations involving
classes:
Object.getClass()
If an instance of an object is available, then the simplest way to get its Class is to invoke
Object.getClass(). Of course, this only works for reference types which all inherit from
Object. Some examples follow:
Class c = "foo".getClass();
Returns the Class for String
Class c = System.console().getClass();
There is a unique console associated with the virtual machine which is returned by the
static method System.console(). The value returned by getClass() is the Class
corresponding to java.io.Console.
enum E { A, B }
Class c = A.getClass();
A is an instance of the enum E; thus getClass() returns the Class corresponding to the
enumeration type E.
byte[] bytes = new byte[1024];
Class c = bytes.getClass();
Since arrays are Objects, it is also possible to invoke getClass() on an instance of an
array. The returned Class corresponds to an array with component type byte.
import java.util.HashSet;
import java.util.Set;
Set<String> s = new HashSet<String>();
Class c = s.getClass();
In this case, java.util.Set is an interface to an object of type java.util.HashSet. The value
returned by getClass() is the class corresponding to java.util.HashSet.
Class.forName()
If the fully-qualified name of a class is available, it is possible to get the corresponding
Class using the static method Class.forName(). This cannot be used for primitive types.
The syntax for names of array classes is described by Class.getName(). This syntax is
applicable to references and primitive types.
Class c = Class.forName("com.duke.MyLocaleServiceProvider");
This statement will load and return the Class corresponding to the given fully-qualified
name.
Class cDoubleArray = Class.forName("[D");
Class cStringArray = Class.forName("[[Ljava.lang.String;");
The variable cDoubleArray will contain the Class corresponding to an array of primitive
type double (i.e. the same as double[].class). The cStringArray variable will contain the
Class corresponding to a two-dimensional array of String (i.e. identical to
String[][].class).
There are several Reflection APIs which return classes but these may only be accessed if
a Class has already been obtained either directly or indirectly.
Class.getSuperclass()
Returns the super class for the given class.
Class c = javax.swing.JButton.class.getSuperclass();
Class.getDeclaringClass()
Returns the Class in which the given member was declared. Anonymous classes will not
have a declaring class but will have an enclosing class.
import java.lang.reflect.Field;
Field f = System.class.getField("out");
Class c = f.getDeclaringClass();
Not all modifiers are allowed on all classes, for example an interface cannot be final
and an enum cannot be abstract. java.lang.reflect.Modifier contains declarations
for all possible modifiers. It also contains methods which may be used to decode the set
of modifiers returned by Class.getModifiers().
Discovering Class Members
There are two categories of methods provided in Class for accessing fields, methods, and
constructors: methods which enumerate these members and methods which search for
particular members. Also there are distinct methods for accessing members declared
directly on the class versus methods which search the superinterfaces and superclasses for
inherited members. The following table provides a summary of all the member-locating
methods and their characteristics.
Member       Class API
Field        getDeclaredField(), getField(), getDeclaredFields(), getFields()
Method       getDeclaredMethod(), getMethod(), getDeclaredMethods(), getMethods()
Constructor  getDeclaredConstructor(), getConstructor(), getDeclaredConstructors(), getConstructors()
A Simple Example
To see how reflection works, consider this simple example. The first step is to obtain
a Class object for the class you want to examine. One way is to use:
import java.lang.reflect.*;
Class c = Class.forName("java.lang.String");
to get the Class object for String. Another approach is to use:
Class c = int.class;
or
Class c = Integer.TYPE;
to obtain Class information on fundamental types. The latter approach accesses the
predefined TYPE field of the wrapper (such as Integer) for the fundamental type. The
second step is to call a method such as getDeclaredMethods, to get a list of all the
methods declared by the class. Once this information is in hand, then the third step is to
use the reflection API to manipulate the information. For example, the sequence:
Class c = Class.forName("java.lang.String");
Method m[] = c.getDeclaredMethods();
System.out.println(m[0].toString());
prints a textual representation of the first method declared in String.
Summary
Java reflection is useful because it supports dynamic retrieval of information about
classes and data structures by name, and allows for their manipulation within an
executing Java program. This feature is extremely powerful and has no equivalent in
other conventional languages such as C, C++, Fortran, or Pascal.
Object Serialization
We all know the Java platform allows us to create reusable objects in memory. However,
all of those objects exist only as long as the Java virtual machine remains running. It
would be nice if the objects we create could exist beyond the lifetime of the virtual
machine, wouldn't it? Well, with object serialization, you can flatten your objects and
reuse them in powerful ways.
Let's look at a persistent class we'll use to demonstrate the serialization mechanism:
import java.io.Serializable;
import java.util.Date;
import java.util.Calendar;
public class PersistentTime implements Serializable {
    private Date time;

    public PersistentTime() {
        time = Calendar.getInstance().getTime();
    }

    public Date getTime() {
        return time;
    }
}
As you can see, the only thing we had to do differently from creating a normal class is
implement the java.io.Serializable interface. The completely empty
Serializable is only a marker interface -- it simply allows the serialization
mechanism to verify that the class is able to be persisted. Thus, we turn to the first rule
of serialization:
Rule #1: The object to be persisted must implement the Serializable interface or
inherit that implementation from its object hierarchy.
The next step is to actually persist the object. That is done with the
java.io.ObjectOutputStream class. That class is a filter stream--it is wrapped
around a lower-level byte stream (called a node stream) to handle the serialization
protocol for us. Node streams can be used to write to file systems or even across
sockets. That means we could easily transfer a flattened object across a network wire
and have it be rebuilt on the other side!
import java.io.ObjectOutputStream;
import java.io.FileOutputStream;
import java.io.IOException;
public class FlattenTime {
    public static void main(String[] args) {
        String filename = "time.ser";
        if (args.length > 0) {
            filename = args[0];
        }
        PersistentTime time = new PersistentTime();
        FileOutputStream fos = null;
        ObjectOutputStream out = null;
        try {
            fos = new FileOutputStream(filename);
            out = new ObjectOutputStream(fos);
            out.writeObject(time);
            out.close();
        }
        catch (IOException ex) {
            ex.printStackTrace();
        }
    }
}
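The reading side is elided in the source; a minimal sketch consistent with the
description that follows might look like this:

import java.io.FileInputStream;
import java.io.ObjectInputStream;
import java.util.Date;

public class UnflattenTime {
    public static void main(String[] args) throws Exception {
        String filename = "time.ser";
        ObjectInputStream in =
                new ObjectInputStream(new FileInputStream(filename));
        // readObject() rebuilds the flattened object from the stream
        PersistentTime time = (PersistentTime) in.readObject();
        in.close();
        System.out.println("Flattened time: " + time.getTime());
        System.out.println("Current time:   " + new Date());
    }
}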
Later, we simply call the getTime() method to retrieve the time at which the original
object was flattened. The flattened time is compared to the current time to demonstrate
that the mechanism indeed worked as expected.
First, consider the situation in which an object is written to a stream and then written
again later. By default, an ObjectOutputStream maintains a reference to each object
written to it. That means that if the object's state is changed and the object is written
again, the new state will not be saved! Here is a code snippet that shows the problem
in action:
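A sketch of the problem (MyObject and its setState method are hypothetical):

ObjectOutputStream out =
        new ObjectOutputStream(new FileOutputStream("state.ser"));
MyObject obj = new MyObject();
obj.setState(1);
out.writeObject(obj); // the object and its current state are written
obj.setState(2);      // change the object's state
out.writeObject(obj); // only a back-reference is written; state 2 is NOT saved
out.close();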
Thus, an Enterprise Java Bean is a remote object with semantics specified for creation,
invocation and deletion. The EJB container is assigned the system-level tasks mentioned
above. What a web container does for Java servlets and JSPs in a web server, the EJB
container does for EJBs.
EJB Architecture
1. There should be a mechanism to create the client-side and server-side proxy objects. A
client-side proxy represents the server-side object on the client-side. As far as the client is
concerned, the client-side proxy is equivalent to the server-side object. On the other hand,
the purpose of the server-side proxy is to provide the basic infrastructure to receive client
requests and delegate these requests to the actual implementation object.
2. There should be a mechanism for clients to locate these server-side objects, i.e. to
obtain references to their client-side proxies through some naming facility.
3. There should be a way to inform the distributed component system that a specific
component is no longer in use by the client.
In order to meet these requirements, the EJB architecture specifies two kinds of interfaces
for each bean: the home interface and the remote interface. These interfaces specify the
bean contract to the clients. However, a bean developer need not provide implementation
for these interfaces. The home interface will contain methods to be used for creating
remote objects. The remote interface should include business methods that a bean is able
to serve to clients. One can consider using the home interface to specify a remote object
capable of creating objects conforming to the remote interface. That is, a home interface
is analogous to a factory of remote objects. These are regular Java interfaces extending
the javax.ejb.EJBHome and javax.ejb.EJBObject interfaces respectively.
As discussed below, the EJB architecture specifies three types of beans - session beans,
entity beans, and message-driven beans. A bean developer has to specify the home and
remote interfaces and also he has to implement one of these bean interfaces depending
upon the type of the bean. For instance, for session beans, he has to implement the
javax.ejb.SessionBean interface. The EJB architecture expects him to implement the
methods specified in the bean interface and the methods specified in the home and remote
interfaces. During the deployment time, he should specify the home and remote interfaces
and bean implementation class to define a bean. The EJB container relies on specific
method names and uses delegation for invoking methods on bean instances.
Thus, regarding the first requirement, the EJB container generates the proxy objects for
all beans. For the second, the EJB container implements, for each bean, a proxy object
for the home interface and publishes it in the JNDI implementation of the J2EE platform.
A client can use JNDI to look this object up and obtain a reference. As this object implements the home
interface only, he can use one of the creation methods of the home object to get a proxy
to the remote interface of the bean. When one invokes a creation method on the home
proxy object, the container makes sure that a bean instance is created on the EJB
container runtime and its proxy is returned to the client. Once the client gets hold of the
proxy for the remote interface, it can directly access the services of the bean.
Finally, once the client decides to stop accessing the services of the bean, it can inform
the EJB container by calling a remote method on the bean. This signals the EJB container
to disassociate the bean instance from the proxy and that bean instance is ready to service
any other clients.
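A hedged sketch of this client-side sequence (the JNDI name and the CartHome and Cart
interfaces are illustrative, not from the text):

import javax.naming.Context;
import javax.naming.InitialContext;
import javax.rmi.PortableRemoteObject;

public class CartClient {
    public static void main(String[] args) throws Exception {
        Context ctx = new InitialContext();
        Object ref = ctx.lookup("java:comp/env/ejb/Cart");  // find the home proxy
        CartHome home = (CartHome)
                PortableRemoteObject.narrow(ref, CartHome.class);
        Cart cart = home.create();  // container creates a bean instance, returns a proxy
        cart.addItem("widget");     // business method on the remote interface
        cart.remove();              // tell the container we are done with the bean
    }
}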
Types of EJBs
The EJB architecture is based on the concept that in an enterprise computing system,
database persistence-related logic should be independent of the business logic that relies
on the data. This happens to be a very useful technique for separating business logic
concerns from database concerns. This means that business logic can deal with the
business data without worrying about how the data is stored in a relational database.
Basically entity beans model business concepts that can be expressed as nouns. For
example, an entity bean might represent a customer, a piece of equipment, or an item in
inventory. Thus entity beans model real-world objects. These objects are usually
persistent records in some kind of database.
Session beans are for managing processes or tasks. A session bean is mainly for
coordinating particular kinds of activities. That is, session beans are plain remote objects
meant for abstracting business logic. The activity that a session bean represents is
fundamentally transient. A session bean does not represent anything in a database, but it
can access the database.
Thus an entity bean has persistent state whereas a session bean models interactions but
does not have persistent state.
Unlike session beans, entity beans have a client-independent identity. This is because an
entity bean encapsulates persistent data. The EJB architecture lets a developer register
a primary key class to encapsulate the minimal set of attributes required to represent the
identity of an entity bean. Clients can use these primary key objects to accomplish the
database operations, such as create, locate, or delete entity beans. Since entity beans
represent persistent state, entity beans can be shared across different clients. Similar to
session beans, entity beans are also transactional, except for the fact that bean instances
are not allowed to programmatically control transactions.
These two types of beans are meant for synchronous invocation. That is, when a client
invokes a method on one of the above types, the client thread will be blocked till the EJB
container completes executing the method on the bean instance. Also, these beans are
unable to service messages that arrive asynchronously over a messaging service
such as JMS. To overcome this deficiency, the EJB architecture has introduced a third
type of bean called message-driven bean. A message-driven bean is a bean instance that
can listen to messages from the JMS.
Unlike other types of beans, a message-driven bean is a local object without home and
remote interfaces. In a J2EE platform, message-driven beans are registered against JMS
destinations. When a JMS message arrives at a destination, the EJB container invokes the
associated message-driven bean. Thus message-driven beans do not require home and
remote interfaces as instances of these beans are created based on receipt of JMS
messages. This is an asynchronous activity and does not involve clients directly. The
main purpose of message-driven beans is to implement business logic in response to JMS
messages. For instance, take a B2B e-commerce application receiving a purchase order
via a JMS message as an XML document. On receipt of such a message, in order to
persist the data and perform any business logic, one can implement a message-driven
bean and associate it with the corresponding JMS destination. Also, these beans are
completely decoupled from the clients that send messages.
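A sketch of an EJB 2.x-style message-driven bean for such a scenario (the class name
and business logic are illustrative):

import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

public class OrderMDB implements MessageDrivenBean, MessageListener {
    private MessageDrivenContext ctx;

    public void setMessageDrivenContext(MessageDrivenContext ctx) {
        this.ctx = ctx;
    }
    public void ejbCreate() { }  // life-cycle callback required by convention
    public void ejbRemove() { }

    // invoked by the container when a JMS message arrives at the destination
    public void onMessage(Message msg) {
        try {
            if (msg instanceof TextMessage) {
                String orderXml = ((TextMessage) msg).getText();
                // parse the purchase order and run the business logic here
            }
        } catch (JMSException e) {
            ctx.setRollbackOnly(); // ask the container to roll back the transaction
        }
    }
}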
Session beans can be either stateful or stateless. Stateful session beans maintain
conversational state when used by a client. Conversational state is not written to a
database but can store some state in private variables during one method call and a
subsequent method call can rely on this state. Maintaining a conversational state allows a
client to carry on a conversation with a bean. As each method on the bean is invoked, the
state of the session bean may change and that change can affect subsequent method calls.
Stateless session beans do not maintain any conversational state. Each method is
completely independent and uses only data passed in its parameters. One can specify
whether a bean is stateful or not in the bean's deployment descriptor.
Beans using bean-managed persistence do all this work explicitly: the bean developer has
to write the code to manipulate the database. The EJB container tells the bean instance
when it is safe to insert, update, and delete its data from the database, but it provides no
other help. The bean instance has to do the persistence work itself.
EJB Container: The environment that surrounds the beans on the EJB server is often
referred to as the container. The container acts as an intermediary between the bean class
and the EJB server. The container manages the EJB objects and EJB homes for a
particular type of bean and helps these constructs to manage bean resources and apply the
primary services relevant to distributed systems to bean instances at run time. An EJB
server can have more than one container and each container in turn can accommodate
more than one enterprise bean. A container is pre-developed software; EJBs cannot
survive outside a container.
Remote Interface: This interface for an enterprise bean defines the enterprise bean's
business methods that clients for this bean can access. The remote interface extends
javax.ejb.EJBObject, which in turn extends java.rmi.Remote.
Home interface: This interface defines the bean's life cycle methods such as creation of
new beans, removal of beans, and locating beans. The home interface extends
javax.ejb.EJBHome, which in turn extends java.rmi.Remote.
Bean Class: This class has to implement the bean's business methods in the remote
interface apart from some other callback methods. An entity bean must implement
javax.ejb.EntityBean and a session bean must implement javax.ejb.SessionBean. Both
EntityBean and SessionBean extend javax.ejb.EnterpriseBean.
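A hedged sketch tying these pieces together for a session bean (all names are
illustrative):

import java.rmi.RemoteException;
import javax.ejb.CreateException;
import javax.ejb.EJBHome;
import javax.ejb.EJBObject;
import javax.ejb.SessionBean;
import javax.ejb.SessionContext;

// Home interface: life-cycle methods such as creation
interface CartHome extends EJBHome {
    Cart create() throws CreateException, RemoteException;
}

// Remote interface: the business methods exposed to clients
interface Cart extends EJBObject {
    void addItem(String item) throws RemoteException;
}

// Bean class: implements the business methods plus container callbacks;
// note it does not implement Cart directly, the container delegates by name
public class CartBean implements SessionBean {
    public void addItem(String item) { /* business logic */ }

    public void ejbCreate() { }  // matches home.create()
    public void setSessionContext(SessionContext ctx) { }
    public void ejbActivate() { }
    public void ejbPassivate() { }
    public void ejbRemove() { }
}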
Primary Key: This is a very simple class that provides a reference into the database.
This class has to implement java.io.Serializable. Only entity beans need a primary key.
Deployment Descriptors: Much of the information about how beans are managed at
runtime is not supplied in the interfaces and classes mentioned above. There are some
common primary services related to distributed systems, as well as specific services
such as security, transactions, and naming, that are handled automatically by the EJB
server. But the EJB server still needs to know beforehand how to apply these primary
services to each bean class at runtime. Deployment descriptors perform exactly this
all-important task.
JAR Files: Jar files are ZIP files that are used specifically for packaging Java classes that
are ready to be used in some type of application. A Jar file containing one or more
enterprise beans includes the bean classes, remote interfaces, home interfaces, and
primary keys for each bean. It also contains one deployment descriptor.
Deployment is the process of reading the bean's JAR file, changing or adding properties
to the deployment descriptor, mapping the bean to the database, defining access control
in the security domain, and generating vendor-specific classes needed to support the bean
in the EJB environment. Every EJB server product comes with its own deployment tools
containing a graphical user interface and a set of command-line programs.
For clients, whether another enterprise bean, a Java RMI client, or a CORBA client, to
locate enterprise beans on the network, the EJB specification requires clients to use the
Java Naming and Directory Interface (JNDI). JNDI is a standard Java extension that
provides a uniform Application Programming Interface (API) for accessing a wide range
of naming and directory services. The communication protocol may be Java RMI-IIOP or
CORBA's IIOP.
There are some special integrated application development tools such as Inprise's
JBuilder, Sun's Forte and IBM's VisualAge, for designing EJBs in the market.
Enterprise Java Beans or EJB for short is the server-side component architecture for
the Java 2 Platform, Enterprise Edition (J2EE) platform. EJB technology enables rapid
and simplified development of distributed, transactional, secure and portable applications
based on Java technology.
Sun Microsystems in the beginning put forward Java Remote Method Invocation
(RMI) API as a distributed object computing technology. RMI specifies how to write
objects so that they can talk to each other no matter where on the network they are found.
At its core, however, RMI is nothing more than an API to which our distributed objects
must conform. RMI says nothing about other characteristics normally required of
an enterprise-class distributed environment. For example, it does not say anything about
how a client might perform a search for RMI objects matching some criteria. It also does
not specify how those distributed objects work together to construct a single transaction.
Thus there is a realization for a need of a distributed component model.
A component model is a standard that defines how components are written, so that systems can be built from components by different developers with little or no customization. Java already has a component model called JavaBeans. It defines how we write user interface components so that they may be plugged into third-party applications. The magic of JavaBeans is that there is very little API behind the specification; we neither implement nor extend any special classes, and we need not call any special methods.
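For instance, a trivial JavaBean needs only a no-argument constructor and get/set methods that follow the property naming convention. The TemperatureBean below is a hedged, illustrative sketch:
import java.io.Serializable;
public class TemperatureBean implements Serializable {
    private double celsius;
    // A no-argument constructor and get/set pairs following the
    // property naming convention are all the model asks for.
    public TemperatureBean() {}
    public double getCelsius() { return celsius; }
    public void setCelsius(double celsius) { this.celsius = celsius; }
}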
Enterprise JavaBeans is a more complex extension of this concept. While there are API elements behind Enterprise JavaBeans, it is more than an API. It is a standard way of writing distributed components so that the components we write can be used alongside components in someone else's system. RMI alone does not support this ability, for the reasons listed below.
1. Security - RMI does not worry about security. RMI alone basically leaves our system wide open: anyone who has access to our RMI interfaces can forge access to the underlying objects. Unless we write extra code imposing security restrictions to authenticate clients and verify access, we have no security at all. Our components are therefore unlikely to interoperate with others' components unless we all agree on some sort of security model.
2. Searching - RMI provides the ability to do a lookup only for a specific, registry-bound object. It specifies nothing about how we find unbound objects or perform searches for a group of objects meeting certain requirements. For example, in a banking application we might want to support the ability to find all accounts with negative balances. To do this in an RMI environment, we would have to write our own search methods in bound objects. Our custom approach to handling searches would not work with someone else's custom approach without forcing clients to deal with both search models.
3. Persistence - RMI does not specify how RMI objects persist across time. There is a persistence utility that supports saving RMI objects to a database using JDBC, but it is very difficult to integrate with RMI objects designed to use some other persistence model, because that model may have different persistence requirements.
Enterprise JavaBeans addresses all of these points, so that we can literally pick and choose the best-designed business components from different vendors and make them work and play well with one another in the same environment. EJB is now the standard component model for capturing distributed business components. It hides from us the details we would otherwise have to worry about if we were writing a raw RMI application.
Application Servers
1. iPlanet
2. Oracle
3. Orion Server
4. WebSphere
5. NetDynamics
6. JRun Server
7. Tomcat
8. JOnAS
EJB Overview
EJB simplifies distributed development. Clients locate beans through JNDI, which can sit on top of naming and directory services such as:
• DNS
• NDS
• LDAP
• X.500
EJB Specification
The EJB specification defines the interfaces between the parties that play the following roles:
EJB Roles
• Service & Tool Provider provides Server, Container and integrates with
distributed facilities
• EJB Provider creates EJB components
• Application Assembler assembles apps from pre-built EJB components
• Deployment Specialist deploys apps and understands architecture issues
Java Remote Method Invocation (Java RMI) enables you to create distributed Java
technology-based applications that can communicate with other such applications.
Methods of remote Java objects can be invoked from other Java virtual machines,
possibly on different hosts.
With RMI we can write distributed programs in the Java programming language. RMI is easy to use: you don't need to learn a separate interface definition language (IDL), and you get Java's inherent "write once, run anywhere" benefit. Clients, remote interfaces,
and servers are written entirely in Java. RMI uses the Java Remote Method Protocol
(JRMP) for remote Java object communication. RMI lacks interoperability with other
languages because it does not use CORBA-IIOP as the communication protocol.
RMI uses object serialization to marshal and unmarshal parameters and does not truncate types, supporting object-oriented polymorphism. The RMI registry is a simple naming service in which servers register remote objects under names that clients can look up. The RMI architecture consists of three layers:
1. The Stub and Skeleton layer, which intercepts method calls made by the client to
the interface reference variable and redirects these calls to a remote RMI service.
2. The Remote Reference layer understands how to interpret and manage references
made from clients to the remote service objects.
3. The bottom layer is the Transport layer, which is based on TCP/IP connections
between machines in a network. It provides basic connectivity, as well as some
firewall penetration strategies.
On top of the TCP/IP layer, RMI uses a wire-level protocol called Java Remote Method
Protocol (JRMP), which works like this:
1. Objects that require remote behavior should extend the RemoteObject class,
typically through the UnicastRemoteObject subclass.
a. The UnicastRemoteObject subclass exports the remote object to make it
available for servicing incoming RMI calls.
b. Exporting the remote object creates a new server socket, which is bound to
a port number.
c. A thread is also created that listens for connections on that socket. The
server is registered with a registry.
d. A client obtains details of connecting to the server from the registry.
e. Using the information from the registry, which includes the hostname and
the port details of the server's listening socket, the client connects to the
server.
2. When the client issues a remote method invocation to the server, it creates a
TCPConnection object, which opens a socket to the server on the port specified
and sends the RMI header information and the marshalled arguments through this
connection using the StreamRemoteCall class.
3. On the server side:
a. When a client connects to the server socket, a new thread is assigned to
deal with the incoming call. The original thread can continue listening to
the original socket so that additional calls from other clients can be made.
b. The server reads the header information and creates a RemoteCall object
of its own to deal with unmarshalling the RMI arguments from the socket.
c. The serviceCall() method of the Transport class services the incoming call
by dispatching it
d. The dispatch() method calls the appropriate method on the object and
pushes the result back down the wire.
e. If the server object throws an exception, the server catches it and marshals
it down the wire instead of the return value.
4. Back on the client side:
a. The return value of the RMI is unmarshalled and returned from the stub
back to the client code itself.
b. If an exception is thrown from the server, that is unmarshalled and thrown
from the stub.
The RMI subsystem implements reference counting based distributed garbage collection
(DGC) to provide automatic memory management facilities for remote server objects.
When the client creates (unmarshalls) a remote reference, it calls dirty() on the server
side Distributed Garbage Collector. After it has finished with the remote reference, it
calls the corresponding clean() method.
A reference to a remote object is leased for a period of time by the client holding the
reference. The lease period starts when the dirty call is received. The client has to renew
the leases, by making additional dirty calls, on the remote references it holds before such
leases expire. If the client does not renew the lease before it expires, the distributed
garbage collector assumes that the remote object is no longer referenced by that client.
DGCClient implements the client side of the RMI distributed garbage collection system.
The external interface to DGCClient is the registerRefs() method. When a LiveRef to a
remote object enters the JVM, it must be registered with the DGCClient to participate in
distributed garbage collection. When the first LiveRef to a particular remote object is
registered, a dirty call is made to the server-side distributed garbage collector for the
remote object, which returns a lease guaranteeing that the server-side DGC will not
collect the remote object for a certain period of time. While LiveRef instances to remote
objects on a particular server exist, the DGCClient periodically sends more dirty calls to
renew its lease. The DGCClient tracks the local availability of registered LiveRef
instances using phantom references. When the LiveRef instance for a particular remote
object is garbage collected locally, a clean() call is made to the server-side distributed
garbage collector, indicating that the server no longer needs to keep the remote object
alive for this client. The RenewCleanThread handles the asynchronous client-side DGC activity by renewing the leases and making clean calls. This thread waits until the next lease renewal is due, or until a phantom reference is enqueued, and then generates clean requests as necessary.
ReceiveMessageInterface.java
This part defines the RMI interface. The method (receiveMessage) of the server class, which implements this interface, is called from the remote client. In the client program, the remote server object is typed by this interface.
RmiServer.java
This is the server program (class). It defines the method receiveMessage, which is called from the remote client; the class is the implementation of the RMI interface.
RmiClient.java
This is the client program (class). The remote method is called from this class.
Execution outline
RmiServer creates the “registry”, a kind of dictionary: each key is a name (the ID of a remote object) and each value is an object. A remote program looks an object up by its name, reaching the registry through the server's IP address (or host name) and port number. RmiServer then binds itself (the RmiServer object) under the name “rmiServer” in the registry.
[Figure: RmiClient looks up the name “rmiServer” in the server's registry, obtains a reference to the RmiServer object, and invokes rmiServer.receiveMessage(…), which runs receiveMessage(…) in the server.]
Compile
javac RmiServer.java
rmic RmiServer
javac RmiClient.java
Execution
(at one host,) java RmiServer
(at another host) java RmiClient <server’s address> 3232 <message text>
The source code
ReceiveMessageInterface.java
import java.rmi.*;
public interface ReceiveMessageInterface extends Remote
{
void receiveMessage(String x) throws RemoteException;
}
RmiServer.java
import java.rmi.*;
import java.rmi.registry.*;
import java.rmi.server.*;
import java.net.*;
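// The class body below is a hedged sketch of what this listing implies;
// the original shows only the imports. Port 3232 matches the execution
// example above.
public class RmiServer extends UnicastRemoteObject
implements ReceiveMessageInterface {
public RmiServer() throws RemoteException {
super();
}
// Called remotely by the client; prints the received message.
public void receiveMessage(String x) throws RemoteException {
System.out.println(x);
}
public static void main(String[] args) {
try {
// Create the registry on port 3232 and bind this server
// under the name "rmiServer".
RmiServer server = new RmiServer();
Registry registry = LocateRegistry.createRegistry(3232);
registry.rebind("rmiServer", server);
} catch (Exception e) {
e.printStackTrace();
}
}
}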
RmiClient.java
import java.rmi.*;
import java.rmi.registry.*;
import java.net.*;
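// The class body below is a hedged sketch of what this listing implies;
// the original shows only the imports. Arguments: server address,
// port (3232 in the execution example), and the message text.
public class RmiClient {
public static void main(String[] args) {
try {
// Locate the remote registry, look up the server by name,
// and invoke the remote method with the message text.
Registry registry =
LocateRegistry.getRegistry(args[0], Integer.parseInt(args[1]));
ReceiveMessageInterface rmiServer =
(ReceiveMessageInterface) registry.lookup("rmiServer");
rmiServer.receiveMessage(args[2]);
} catch (Exception e) {
e.printStackTrace();
}
}
}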
RMI-IIOP
RMI-IIOP is for Java programmers who want to program to the RMI interfaces, but use
IIOP as the underlying transport. RMI-IIOP provides interoperability with other
CORBA objects implemented in various languages - but only if all the remote interfaces
are originally defined as Java RMI interfaces. It is of particular interest to programmers
using Enterprise JavaBeans (EJB), since the remote object model for EJBs is RMI-
based.
The Java RMI system allows an object running in one Java Virtual Machine
(VM) to invoke methods on an object running in another Java VM. RMI
provides for remote communication between programs written in the Java
programming language via the Java Remote Method Protocol (JRMP).
The distributed Hello World example uses a client application to make a remote method
call via IIOP to a server running on the host from which the client was downloaded.
When the client runs, "Hello from MARS!" is displayed.
1. Define the functions of the remote class as an interface written in the Java
programming language
2. Write the implementation class
3. Write the server class
4. Write a client program that uses the remote service
• The remote interface must be declared public. Otherwise, a client will get an error
when attempting to load a remote object that implements the remote interface,
unless that client is in the same package as the remote interface.
• The remote interface extends the java.rmi.Remote interface.
• Each method must declare java.rmi.RemoteException (or a superclass of
RemoteException) in its throws clause, in addition to any application-specific
exceptions.
• The data type of any remote object that is passed as an argument or return value
(either directly or embedded within a local object) must be declared as the remote
interface type (for example, HelloInterface) not the implementation class
(HelloImpl).
Create the file HelloInterface.java. The following code is the interface definition
for the remote interface, HelloInterface, which contains just one method, sayHello:
//HelloInterface.java
import java.rmi.Remote;
public interface HelloInterface extends java.rmi.Remote {
public void sayHello( String from ) throws java.rmi.RemoteException;
}
Because remote method invocations can fail in different ways from local method
invocations (due to network-related communication problems and server problems),
remote methods will report communication failures by throwing a
java.rmi.RemoteException.
As a convenience, the implementation class can extend a remote class, which in this
example is javax.rmi.PortableRemoteObject. By extending PortableRemoteObject, the
HelloImpl class can be used to create a remote object that uses IIOP-based transport for
communication.
The constructor for a remote class provides the same functionality as the constructor for a
non-remote class: it initializes the variables of each newly created instance of the class,
and returns an instance of the class to the program which called the constructor.
In addition, the remote object instance will need to be "exported". Exporting a remote
object makes it available to accept incoming remote method requests, by listening for
incoming calls to the remote object on an anonymous port. When you extend
javax.rmi.PortableRemoteObject, your class will be exported automatically upon
creation.
The implementation class for a remote object contains the code that implements each of
the remote methods specified in the remote interface. For example, here is the
implementation for the sayHello() method, which prints a greeting such as "Hello from MARS!!" on the server:
public void sayHello( String from ) throws java.rmi.RemoteException {
System.out.println( "Hello from " + from + "!!");
System.out.flush(); }
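For completeness, a hedged sketch of the whole HelloImpl class follows; the text above shows only the method body, and the constructor simply follows the PortableRemoteObject pattern already described:
//HelloImpl.java
import javax.rmi.PortableRemoteObject;
public class HelloImpl extends PortableRemoteObject implements HelloInterface {
// The superclass constructor exports the remote object automatically.
public HelloImpl() throws java.rmi.RemoteException {
super();
}
public void sayHello( String from ) throws java.rmi.RemoteException {
System.out.println( "Hello from " + from + "!!");
System.out.flush();
}
}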
Arguments to, or return values from, remote methods can be any data type for the Java
platform, including objects, as long as those objects implement the interface
java.io.Serializable. Most of the core classes in java.lang and java.util implement the
Serializable interface. In RMI:
• By default, local objects are passed by copy, which means that all data members
(or fields) of an object are copied, except those marked as static or transient.
Please refer to the Java Object Serialization Specification for information on how
to alter the default serialization behavior.
• Remote objects are passed by reference. A reference to a remote object is actually
a reference to a stub, which is a client-side proxy for the remote object. Stubs are
described fully in the Java Remote Method Invocation Specification. We'll create
them later in this tutorial in the section: Use rmic to generate stubs and skeletons.
//HelloServer.java
import javax.naming.InitialContext;
import javax.naming.Context;
public class HelloServer {
public static void main(String[] args) {
try {
// Step 1: Instantiate the Hello servant
HelloImpl helloRef = new HelloImpl();
// Step 2: Publish the reference in the Naming Service using JNDI API
Context initialNamingContext = new InitialContext();
initialNamingContext.rebind("HelloService", helloRef);
System.out.println("Hello Server: Ready...");
} catch (Exception e) {
System.out.println("Trouble: " + e);
e.printStackTrace();
}
}
}
The client application in this example remotely invokes the sayHello method in order
to get the string "Hello from MARS!!" to display when the client application runs.
Create the file HelloClient.java. Here is the source code for the client application:
//HelloClient.java
import java.rmi.RemoteException;
import java.net.MalformedURLException;
import java.rmi.NotBoundException;
import javax.rmi.*;
import java.util.Vector;
import javax.naming.NamingException;
import javax.naming.InitialContext;
import javax.naming.Context;
public class HelloClient {
public static void main( String args[] ) {
Context ic;
Object objref;
HelloInterface hi;
try {
ic = new InitialContext();
// STEP 1: Get the object reference bound under "HelloService"
objref = ic.lookup("HelloService");
System.out.println("Client: Obtained a ref. to Hello server.");
// STEP 2: Narrow the reference to the remote interface type
// and invoke the remote method
hi = (HelloInterface) PortableRemoteObject.narrow(
objref, HelloInterface.class);
hi.sayHello( "MARS" );
} catch( Exception e ) {
System.err.println( "Exception " + e + " caught" );
e.printStackTrace( );
return;
}
}
}
First, the client application gets a reference to the remote object implementation (advertised as "HelloService") from the Naming Service using Java Naming and Directory Interface (JNDI) calls. Like the Context.rebind call in the server, the Context.lookup method takes a java.lang.String value representing the name of the object to look up. You supply lookup() the name of the object you want, and it returns the object bound to that name: the stub for the remote implementation of the Hello interface.
The client application invokes the remote sayHello() method on the server's remote
object, causing the string "Hello from MARS!!" to be displayed on the command line.
In this section, you compile the remote object implementation file, HelloImpl.java, in
order to create the .class files needed to run rmic. You then run the rmic compiler to
create stubs and skeletons. A stub is a client-side proxy for a remote object which
forwards RMI-IIOP calls to the server-side dispatcher, which in turn forwards the call to
the actual remote object implementation. The last task is to compile the remaining .java source files to create .class files.
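For example, the compile steps might look like this (a hedged sketch, assuming the four source files shown in this tutorial):
javac HelloInterface.java HelloImpl.java
rmic -iiop HelloImpl
javac HelloServer.java HelloClient.java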
You must stop and restart the server any time you modify a remote interface or use
modified/additional remote interfaces in a remote object implementation. Otherwise,
the type of the object reference bound in the Naming Service will not match the
modified class.
To start the Hello server, run:
java -classpath .
-Djava.naming.factory.initial=com.sun.jndi.cosnaming.CNCtxFactory
-Djava.naming.provider.url=iiop://localhost:1050
HelloServer
The output should look like this:
Hello Server: Ready ...
ORBD and the Hello server will continue to run until they are explicitly stopped. On
Solaris, you can stop these processes using the pkill orbd and pkill HelloServer
commands from a terminal window. On Windows, you can type Ctrl+C in a prompt
window to kill the process.
CORBA
There are also Object Service specifications for lifecycle management, security,
transactions, and event notification, as well as many others.
• Common Facilities -- Like Object Service interfaces, these interfaces are also
horizontally-oriented, but unlike Object Services they are oriented towards end-
user applications. An example of such a facility is the Distributed Document
Component Facility (DDCF), a compound document Common Facility based on
OpenDoc. DDCF allows for the presentation and interchange of objects based on
a document model, for example, facilitating the linking of a spreadsheet object
into a report document.
• Domain Interfaces -- These interfaces fill roles similar to Object Services and
Common Facilities but are oriented towards specific application domains. For
example, one of the first OMG RFPs issued for Domain Interfaces is for Product
Data Management (PDM) Enablers for the manufacturing domain. Other OMG
RFPs will soon be issued in the telecommunications, medical, and financial
domains.
• Application Interfaces - These are interfaces developed specifically for a given
application. Because they are application-specific, and because the OMG does not
develop applications (only specifications), these interfaces are not standardized.
However, if over time it appears that certain broadly useful services emerge out of
a particular application domain, they might become candidates for future OMG
standardization.
Developers use the Interface Definition Language (IDL) to describe the interface to a
CORBA object. An IDL schema can then be used to generate Java code for the client and
server that will use the object. The same IDL schema could be used to generate either a
client or server in C++, Ada, or any other language that supports CORBA. You don't
write your implementation of a CORBA service in IDL - so you can continue to write in
pure Java code if you so wish.
The figure below graphically depicts a request. A client holds an object reference to a
distributed object. The object reference is typed by an interface. In the figure below the
object reference is typed by the Rabbit interface. The Object Request Broker, or ORB,
delivers the request to the object and returns any results to the client. In the figure, a jump
request returns an object reference typed by the AnotherObject interface.
The Common Object Request Broker Architecture (CORBA) from the Object
Management Group (OMG) provides a platform-independent, language-independent
architecture for writing distributed, object-oriented applications.
CORBA objects can reside in the same process, on the same machine, down the hall, or
across the planet. The Java language is an excellent language for writing CORBA
programs. Some of the features that account for this popularity include the clear mapping
from OMG IDL to the Java programming language, and the Java runtime environment's
built-in garbage collection.
The ORB
The ORB is the distributed service that implements the request to the remote object. It
locates the remote object on the network, communicates the request to the object, waits
for the results and when available communicates those results back to the client.
The ORB implements location transparency. Exactly the same request mechanism is used
by the client and the CORBA object regardless of where the object is located. It might be
in the same process with the client, down the hall or across the planet. The client cannot
tell the difference.
The ORB implements programming language independence for the request. The client
issuing the request can be written in a different programming language from the
implementation of the CORBA object. The ORB does the necessary translation between
programming languages. Language bindings are defined for all popular programming
languages.
// IDL
module BankSimple {
interface Bank {
...
};
interface Account {
...
};
};
The interfaces Bank and Account are scoped within the module BankSimple. IDL
definitions are available directly within the scope in which you define them. In other
naming scopes, you must use the scoping operator (::) to access these definitions. For
example, the fully scoped names of interfaces Bank and Account are BankSimple::Bank and BankSimple::Account, respectively.
IDL modules can be reopened. For example, a module declaration can appear several
times in a single IDL specification if each declaration contains different data types. In
most IDL specifications, this feature of modules is not required.
//IDL
module BankSimple {
typedef float CashAmount;
...
interface Account {
// The account owner and balance.
readonly attribute string name;
readonly attribute CashAmount balance;
...
};
};
For example, the Account interface defines the attributes name and balance. These
attributes represent information about the account which the object implementation can
set, but which client applications can only read.
//IDL
module BankSimple {
typedef float CashAmount;
...
interface Account {
// Operations available on the account.
void deposit(in CashAmount amount);
void withdraw(in CashAmount amount);
...
};
};
Each operation takes a parameter and has a void return type. Each parameter definition
must specify the direction in which the parameter value is passed. The possible parameter
passing modes are as follows:
• in: The parameter is passed from the caller of the operation to the object.
• out: The parameter is passed from the object to the caller.
• inout: The parameter is passed in both directions.
Parameter passing modes clarify operation definitions and allow an IDL compiler to map
operations accurately to a target programming language.
Implicitly, all IDL operations can raise any of the CORBA system exceptions. No
reference to system exceptions appears in an IDL specification. To specify that an
operation can raise a user-defined exception, first define the exception structure and then
add an IDL raises clause to the operation definition. For example, the operation
withdraw() in interface Account could raise an exception to indicate that the withdrawal
has failed, as follows:
// IDL
module BankExceptions {
typedef float CashAmount;
...
interface Account {
exception InsufficientFunds {
string reason;
};
void withdraw(in CashAmount amount)
raises(InsufficientFunds);
...
};
};
The raises clause follows the definition of operation withdraw() to indicate that this operation can raise exception InsufficientFunds. If an operation can raise more than one type of user-defined exception, include each exception identifier in the raises clause and separate the identifiers using commas.
If you precede an operation definition with the keyword oneway, a client that calls the
operation will not block while the object processes the call. For example, you could add a
oneway operation to interface Account that sends a notice to an Account object, as
follows:
module BankSimple {
...
interface Account {
oneway void notice(in string text);
...
};
};
Orbix does not guarantee that a oneway operation call will succeed; if a oneway operation fails, the client may never know. There is only one circumstance in which Orbix indicates failure of a oneway operation: if a oneway operation call fails before Orbix transmits the call from the client address space, then Orbix raises a system exception. A oneway operation cannot have any out or inout parameters and cannot return a value. In addition, a oneway operation cannot have an associated raises clause.
Consider the example of an Account object, where each client maintains a set of
identifiers, such as sys_time and sys_location that map to information that the
operation deposit() logs for each deposit received. To ensure that this information is
passed with every operation call, extend the definition of deposit() as follows:
// IDL
module BankSimple {
typedef float CashAmount;
...
interface Account {
void deposit(in CashAmount amount)
context("sys_time", "sys_location");
...
};
};
A context clause includes the identifiers for which the operation expects to receive
mappings.
// IDL
module BankSimple {
interface Account {
...
};
interface CheckingAccount : Account {
...
};
interface SavingsAccount : Account {
...
};
interface PremiumAccount :
CheckingAccount, SavingsAccount {
...
};
};
Interface PremiumAccount is an example of multiple inheritance in IDL. Figure 3.1
illustrates the inheritance hierarchy for this interface. If you define an interface that
inherits from two interfaces which contain a constant, type, or exception definition of the
same name, you must fully scope that name when using that constant, type, or exception.
An interface cannot inherit from two interfaces that include operations or attributes that have the same name.
Figure 3.1: Multiple Inheritance of IDL Interfaces
// IDL
module BankSimple {
// Forward declaration of Account.
interface Account;
interface Bank {
Account create_account (in string name);
Account find_account (in string name);
};
// Full definition of Account.
interface Account {
...
};
};
The syntax for a forward declaration is the keyword interface followed by the interface
identifier.
Enum
An enum data type lets you define a list of named values; for example:
// IDL
module BankSimple {
enum Currency {pound, dollar, yen, franc};
interface Account {
readonly attribute CashAmount balance;
readonly attribute Currency balanceCurrency;
...
};
};
In this example, attribute balanceCurrency in interface Account can take any one of the
values pound, dollar, yen, or franc.
Struct
A struct data type allows you to package a set of named members of various types, for
example:
// IDL
module BankSimple{
struct CustomerDetails {
string name;
short age;
};
interface Bank {
CustomerDetails getCustomerDetails
(in string name);
...
};
};
In this example, the struct CustomerDetails has two members. The operation
getCustomerDetails() returns a struct of type CustomerDetails that includes values
for the customer name and age.
Union
A union data type allows you to define a structure that can contain only one of several
alternative members at any given time. A union saves space in memory, as the amount of
storage required for a union is the amount necessary to store its largest member. All IDL
unions are discriminated. A discriminated union associates a label value with each
member. The value of the label indicates which member of the union currently stores a
value. For example, consider the following IDL union definition:
// IDL
struct DateStructure {
short Day;
short Month;
short Year;
};
union Date switch (short) {
case 1: string stringFormat;
case 2: long digitalFormat;
default: DateStructure structFormat;
};
String
An IDL string represents a character string, where each character can take any value of
the char basic type. If the maximum length of an IDL string is specified in the string
declaration, then the string is bounded. Otherwise the string is unbounded. The following
example shows how to declare bounded and unbounded strings:
// IDL
module BankSimple {
interface Account {
// A bounded string with maximum length 10.
attribute string<10> sortCode;
// An unbounded string.
readonly attribute string name;
...
};
};
Sequence
In IDL, you can declare a sequence of any IDL data type. An IDL sequence is similar to a
one-dimensional array of elements. An IDL sequence does not have a fixed length. If the
sequence has a fixed maximum length, then the sequence is bounded. Otherwise, the
sequence is unbounded. For example, the following code shows how to declare bounded
and unbounded sequences as members of an IDL struct:
// IDL
module BankSimple {
interface Account {
...
};
struct LimitedAccounts {
string<10> bankSortCode;
// Maximum length of sequence is 50.
sequence<Account, 50> accounts;
};
struct UnlimitedAccounts {
string<10> bankSortCode;
// No maximum length of sequence.
sequence<Account> accounts;
};
};
A sequence must be named by an IDL typedef declaration before it can be used as the
type of an IDL attribute or operation parameter. The following code illustrates this:
// IDL
module BankSimple {
typedef sequence<string> CustomerSeq;
interface Account {
void getCustomerList(out CustomerSeq names);
...
};
};
Arrays
In IDL, you can declare an array of any IDL data type. IDL arrays can be multi-
dimensional and always have a fixed size. For example, you can define an IDL struct
with an array member as follows:
// IDL
module BankSimple {
...
interface Account {
...
};
struct CustomerAccountInfo {
string name;
Account accounts[3];
};
interface Bank {
void getCustomerAccountInfo (in string name,
out CustomerAccountInfo accounts);
...
};
};
In this example, struct CustomerAccountInfo provides access to an array of Account
objects for a bank customer, where each customer can have a maximum of three
accounts. An array must be named by an IDL typedef declaration before it can be used
as the type of an IDL attribute or operation parameter. The IDL typedef declaration allows you to define an alias for a data type. The following code illustrates this:
// IDL
module BankSimple {
interface Account {
...
};
typedef Account AccountArray[100];
interface Bank {
readonly attribute AccountArray accounts;
...
};
};
Note that an array is a less flexible data type than an IDL sequence, because an array
always has a fixed length. An IDL sequence always has a variable length, although it may
have an associated maximum length value.
Fixed
The fixed data type allows you to represent a number in two parts: digits and scale. The digits value represents the length of the number, and the scale is a non-negative integer that represents the position of the decimal point in the number, relative to the rightmost digit.
module BankSimple {
typedef fixed<10,4> ExchangeRate;
struct Rates {
ExchangeRate USRate;
ExchangeRate UKRate;
ExchangeRate IRRate;
};
};
In this case, the ExchangeRate type has a digits value of 10 and a scale of 4. This means that it can represent numbers up to (+/-)999999.9999. The maximum value for the digits
is 31, and scale cannot be greater than digits. The maximum value that a fixed type can
hold is equal to the maximum value of a double. Scale can also be a negative number.
This means that the decimal point is moved scale digits in a rightward direction, causing
trailing zeros to be added to the value of the fixed. For example, fixed <3,-4> with a
numeric value of 123 actually represents the number 1230000. This provides a
mechanism for storing numbers with trailing zeros in an efficient manner. Note: Fixed
<3, -4> can also be represented as fixed <7, 0>. Constant fixed types can also be
declared in IDL. The digits and scale are automatically calculated from the constant
value. For example:
module Circle {
const fixed pi = 3.142857;
};
This yields a fixed type with a digits value of 7, and a scale value of 6.
Typedef
An IDL typedef declaration defines an alias for an existing data type; for example:
// IDL
module BankSimple {
interface Account {
...
};
typedef Account StandardAccount;
};
The identifier StandardAccount can act as an alias for type Account in subsequent IDL
definitions. Note that CORBA does not specify whether the identifiers Account and
StandardAccount represent distinct IDL data types in this example.
Constants
IDL allows you to specify constant data values using one of several basic data types. To
declare a constant, use the IDL keyword const, for example:
// IDL
module BankSimple {
interface Bank {
const long MaxAccounts = 10000;
const float Factor = (10.0 - 6.5) * 3.91;
...
};
};
The value of an IDL constant cannot change. You can define a constant at any level of
scope in your IDL specification.
SOM
The System Object Model (SOM) provides:
• the ability to create class libraries in one language that can be accessed and used
by other languages
• the ability to subclass from binaries even if they were written in a different
language
• the ability to add new methods and relocate existing methods without re-
compilation of the application
• the ability to insert new classes into the inheritance hierarchy without recompiling
the application.
Objects
SOM objects are derived from a root object which defines the essential behavior common
to all SOM objects. Factory methods are used to create SOM objects at run time. These
factory methods are invoked on a class object, in the SOM run-time.
Operations
The interface to a SOM object is defined by specifying operation signatures, each of which consists of an operation name and the argument and result types. Operations are implemented by methods, which realize an object's behavior.
Requests
Client requests are invoked on objects by specifying the name of an object and the name of the operation to invoke on it, along with any parameters. An object can support multiple operations.
Messages
Methods are invoked on SOM objects. Methods can be relocated upward in the class
hierarchy without requiring the client to be re-compiled. SOM supports three different method dispatching mechanisms: offset resolution, name resolution, and dispatch function resolution.
The "offset resolution" mechanism implies a static scheme for typing objects and is
roughly equivalent to the C++ virtual function concept. It offers the best performance
characteristics for SOM method resolution at a cost of some loss in flexibility.
This form of method resolution supports polymorphism that is based on the derivation of
the object's class. Name resolution supports access to objects whose class is not known at
compile time, and permits polymorphism based on the protocols that an object supports,
rather than its derivation.
The "dispatch function" resolution is a feature of SOM that permits method resolution to
be based on arbitrary rules known only in the domain of the receiving object. Dispatch
function resolution is a completely dynamic mechanism that permits run time type
checking, and open-ended forms of polymorphism. A distinguishing feature of SOM is that all three forms of method resolution are complementary and can be intermixed within client programs.
State
The state of SOM objects is accessed through published interfaces to an object. Invoking
operations on objects may cause state changes.
Object lifetime
SOM objects are created by invoking a create operation on a factory object in the SOM
run time. Once created, the object will exist until explicitly deleted or until the process
that created it no longer exists. A SOM object would need to make use of a persistence
mechanism in order to exist beyond the life of the process that created it. A persistence
mechanism is beyond the scope of this object model discussion, however, SOM could be
and has been used as the basis for building a variety of persistence frameworks.
Behavior/state grouping
SOM uses the classical object model in that a target object must be specified for each
operation.
Communication model
Since SOM is a basic mechanism, its run-time model is one where an operation occurs on
a single thread within a single process. However, SOM code permits concurrent
execution by multiple threads on systems where SOM supports the underlying threads
model, therefore, multi-threaded programs can use mutual exclusion mechanisms to
serialize updates to SOM objects with confidence that critical sections in SOM are thread
safe.
Complex object interactions that need to span process boundaries can be constructed on
SOM using standard inter-process communication facilities provided by the underlying
system. No serialization code is necessary if programming in a single thread, single
process model. A class library based on SOM is used to provide SOM with distributed
access to remote SOM objects.
Binding
SOM provides support for both early and late binding. These binding choices are made on a per-method basis.
Polymorphism
The polymorphism provided by SOM depends on the method dispatching scheme
selected. If "offset resolution" is used then a static scheme for typing objects is used and
polymorphism is based strictly on class inheritance. If the "name resolution" method
dispatching is used then methods are located dynamically and polymorphism is based on
the actual protocols that objects honor.
Encapsulation
Access to the state of SOM objects is through the operations that make up the object's interface. Invoking operations on SOM objects can have side effects. SOM objects do have private data that is not accessible by invoking external operations. In addition, it is possible to define class attributes on SOM objects. These attributes are accessed via set and get functions implemented for each attribute, and are invoked in the same way as methods.
Step 3: Create the POA
A POA is created by calling create_POA on the root POA. The arguments are:
• Name of the POA - The POA is given a name that must be unique with respect to
all other POAs with the same parent. In the following example, the POA is named
childPOA.
• POA Manager - Specify the POA Manager to be associated with the new POA.
If, as is shown in the following example, null is passed for this parameter, a new
POA Manager will be created. The user can also choose to pass the POA Manager
of another POA.
• Policy List - Specify the policy list to be associated with the POA to control its
behavior. In the following example, a persistent lifespan policy has already been
defined for this POA.
The following code snippet shows how the POA is created :
// Create a POA by passing the Persistent Policy
POA persistentPOA = rootPOA.create_POA("childPOA", null,
persistentPolicy );
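The persistent lifespan policy referred to above might be created as follows; a hedged sketch using the standard org.omg.PortableServer API, assuming rootPOA has already been resolved:
// Create the policy list containing a single PERSISTENT lifespan policy.
org.omg.CORBA.Policy[] persistentPolicy = new org.omg.CORBA.Policy[] {
rootPOA.create_lifespan_policy(
org.omg.PortableServer.LifespanPolicyValue.PERSISTENT )
};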
Step 4: Activate the POAManager
Each POA object has an associated POAManager object that controls the processing state
of the POAs with which it is associated, such as whether requests to the POA are queued
or discarded. The POAManager can also deactivate the POA. A POA Manager may be
associated with one or more POA objects. The POAManager can have the following states:
• Holding - In this state, associated POAs will queue incoming requests.
• Active - In this state, associated POAs will start processing requests.
• Discarding - In this state, associated POAs will discard incoming requests.
• Inactive - In this state, associated POAs will reject requests that have not
begun executing, as well as any new requests.
POA Managers are not automatically activated when they are created. The following code snippet shows how the POAManager is activated. If the POA Manager is not activated in this way, all calls to the Servant will hang because, by default, the POA Manager is in the HOLD state.
// Activate PersistentPOA's POAManager. Without this step,
// all calls to Persistent Server will hang because POAManager
// will be in the 'HOLD' state.
persistentPOA.the_POAManager().activate( );
Step 5: Activate the servants
At any point in time, a CORBA object may or may not be associated with an active
servant. If the POA has the RETAIN policy, the servant and its associated Object Id are
entered into the Active Object Map of the appropriate POA. This type of activation can
be accomplished in one of the following ways.
• The server application itself explicitly activates individual objects (via the
activate_object or activate_object_with_id operations).
• The server application instructs the POA to activate objects on demand by having the
POA invoke a user-supplied servant manager. The server application registers this
servant manager with set_servant_manager.
• Under some circumstances (when the IMPLICIT_ACTIVATION policy is also in effect
and the language binding allows such an operation), the POA may implicitly activate
an object when the server application attempts to obtain a reference for a servant that
is not already active (that is, not associated with an Object Id).
If the USE_DEFAULT_SERVANT policy is also in effect, the server application instructs the
POA to activate unknown objects by having the POA invoke a single servant no matter
what the Object Id is. The server application registers this servant with set_servant.
If the POA has the NON_RETAIN policy, for every request, the POA may use either a
default servant or a servant manager to locate an active servant. From the POA's point of
view, the servant is active only for the duration of that one request. The POA does not
enter the servant-object association into the Active Object Map.
When using RMI-IIOP technology, your implementations use delegation (known as the
Tie model) to associate your implementation with the interface. When you create an
instance of your implementation, you also need to create a Tie object to associate it with
a CORBA interface. The following code snippet shows how to activate the Tie, if the
POA policy is USE_ACTIVE_OBJECT_MAP_ONLY. This sample code is from the RMI-IIOP
with POA example.
_HelloImpl_Tie tie = (_HelloImpl_Tie)Util.getTie( helloImpl );
String helloId = "hello";
byte[] id = helloId.getBytes();
tPOA.activate_object_with_id( id, tie );
The servant can then be bound in the naming service:
NameComponent[] nc = rootContext.to_name(
"PersistentServerTutorial" );
rootContext.rebind( nc, persistentPOA.servant_to_reference(
servant ) );
• Server application directly creates a reference.
The following code, from the RMI-IIOP with POA example, directly creates a reference. Doing so brings the abstract object into existence, but does not associate it with an active servant.
// Publish the object reference using the same object id
// used to activate the Tie object.
Context initialNamingContext = new InitialContext();
initialNamingContext.rebind("HelloService",
tPOA.create_reference_with_id(id,
tie._all_interfaces(tPOA,id)[0]) );
The ORB
The IBM ORB ships with the JVM and is used by the IBM WebSphere Application
Server. It is one of the enterprise features of the Java 2 Standard Edition. The ORB is a
tool and runtime component that provides distributed computing through the OMG-
defined CORBA IIOP communication protocol. The ORB runtime consists of a Java
implementation of a CORBA ORB. The ORB toolkit provides APIs and tools for both
the RMI programming model and the IDL programming model.
This separation of interface from implementation, enabled by OMG IDL, is the essence
of CORBA - how it enables interoperability, with all of the transparencies we've claimed.
The interface to each object is defined very strictly. In contrast, the implementation of an
object - its running code, and its data - is hidden from the rest of the system (that is,
encapsulated) behind a boundary that the client may not cross. Clients access objects
only through their advertised interface, invoking only those operations that the object exposes through its IDL interface, with only those parameters (input and output) that are included in the invocation.
Figure 1 shows how everything fits together, at least within a single process: You
compile your IDL into client stubs and object skeletons, and write your object (shown on
the right) and a client for it (on the left). Stubs and skeletons serve as proxies for clients
and servers, respectively. Because IDL defines interfaces so strictly, the stub on the
client side has no trouble meshing perfectly with the skeleton on the server side, even if
the two are compiled into different programming languages, or even running on different
ORBs from different vendors.
In CORBA, every object instance has its own unique object reference, an identifying
electronic token. Clients use the object references to direct their invocations, identifying
to the ORB the exact instance they want to invoke (Ensuring, for example, that the books
you select go into your own shopping cart, and not into your neighbor's.) The client acts
as if it's invoking an operation on the object instance, but it's actually invoking on the
IDL stub which acts as a proxy. Passing through the stub on the client side, the invocation
continues through the ORB (Object Request Broker), and the skeleton on the
implementation side, to get to the object where it is executed. The CORBAservices
provide standard ways of passing object references around your network of CORBA
objects. Location Transparency keeps your applications flexible.
OMG has standardized this process at two key levels: First, the client knows the type of
object it's invoking (that it's a shopping cart object, for instance), and the client stub and
object skeleton are generated from the same IDL. This means that the client knows
exactly which operations it may invoke, what the input parameters are, and where they
have to go in the invocation; when the invocation reaches the target, everything is there
and in the right place. We've already seen how OMG IDL accomplishes this. Second, the
client's ORB and object's ORB must agree on a common protocol - that is, a
representation to specify the target object, operation, all parameters (input and output) of
every type that they may use, and how all of this is represented over the wire. OMG has
defined this also - it's the standard protocol IIOP. (ORBs may use other protocols besides
IIOP, and many do for various reasons. But virtually all speak the standard protocol IIOP
for reasons of interoperability, and because it's required by OMG for compliance.)
Although the ORB can tell from the object reference that the target object is remote, the client cannot. (The user may know this too, from other knowledge - for instance, that all accounting objects run on the mainframe at the main office in Tulsa.) There is nothing in the object reference token that the client holds and uses at invocation time that identifies the location of the target object. This ensures location transparency - the CORBA principle that simplifies the design of distributed object computing applications.
Using the ORB
To use the ORB, you need to understand the properties that the ORB contains. These
properties change the behavior of the ORB.
How the ORB works
This description first explains the client side, describing what the ORB does under the covers, transparently to the client. Then the important role of the ORB on the server side is explained. This section describes a basic, typical RMI-IIOP session in which a
client accesses a remote object on a server implementing an interface named Sample. The
client then invokes a simple method called message(). The method returns a "Hello
World" string.
The client side
The client side operation of the ORB is described.
1. Stub creation:
In a simple distributed application, the client needs to know (in almost all the
cases) what object it is going to contact, and which method of this object it needs
to call. Because the ORB is a general framework, you must give it general
information about the method that you want to call.
2. ORB initialization:
In a stand-alone Java application, the client has to create an instance of the ORB.
3. Obtaining the remote object
Several methods exist by which the client can get a reference for the remote
object.
4. Remote method invocation:
The client holds a reference to the remote object that is an instance of the stub
class. The next step is to call the method on that reference. The stub implements
the Sample interface and therefore contains the message() method that the client
has called.
Stub creation
In a simple distributed application, the client needs to know (in almost all the cases) what
object it is going to contact, and which method of this object it needs to call. Because the
ORB is a general framework, you must give it general information about the method that
you want to call. For this reason, you implement a Java interface, Sample, which contains
the signatures of the methods that can be called in the remote object.
The client relies on the existence of a server that contains an object implementing the Sample interface. You must, therefore, create a proxy. This proxy is an object, called a stub, that acts as an interface between the client application and the ORB. To create the stub, run the RMIC compiler on the Java interface:
rmic -iiop Sample
This command generates a file and object named _Sample_Stub.class.
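The Sample interface itself is not shown in this description. A hedged sketch, consistent with the message() method described above, might be:
import java.rmi.Remote;
import java.rmi.RemoteException;
public interface Sample extends Remote {
// The single remote method the client will invoke.
String message() throws RemoteException;
}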
The presence of a stub is not always mandatory for a client application to operate. When
you use particular CORBA features such as the DII (Dynamic Invocation Interface), you
do not require a stub. The reason is that the proxy code is implemented directly by the
client application. You can also download a stub from the server to which you are trying to connect. See the CORBA specification for further details.
ORB initialization
In a stand-alone Java application, the client has to create an instance of the ORB. This
instance is created by calling the static method init(...); for example:
ORB orb = ORB.init(args,props);
The parameters that are passed to the method are:
• A string array that contains property-value pairs
• A Java Properties object
For an applet, a similar method is used in which a Java Applet is passed instead of the
string array. The first step of the ORB initialization is the processing of the ORB
properties. The properties are processed in the following sequence:
1. Check in the applet parameter or application string array
2. Check in the properties parameter (if the parameter exists)
3. Check in the system properties
4. Check in the orb.properties file that is in the <user-home> directory (if the file
exists)
5. Check in the orb.properties file that is in the <java-home>/lib directory (if the file
exists)
6. Fall back on a hardcoded default behavior
The two properties ORBClass and ORBSingletonClass determine which ORB class has
to be instantiated. After this, the ORB starts and initializes the TCP transport layer. If the
ListenerPort property was set, the ORB also opens a server socket that is listening for
incoming requests, as a server-side ORB usually does. At the end of the init() method, the
ORB is fully functional and ready to support the client application.
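As an illustration, the properties argument can point the ORB at an initial naming host and port. This is a hedged sketch using the standard ORBInitialHost and ORBInitialPort property names:
import java.util.Properties;
import org.omg.CORBA.ORB;
Properties props = new Properties();
props.put("org.omg.CORBA.ORBInitialHost", "localhost");
props.put("org.omg.CORBA.ORBInitialPort", "1050");
ORB orb = ORB.init(args, props);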
Obtaining the remote object
Several methods exist by which the client can get a reference for the remote object.
Typically, this reference is a string, called an IOR (Interoperable Object Reference). For
example:
IOR:000000000000001d524d493a5......
This reference contains all the information needed to find the remote object. It also
contains some details of the settings of the server to which the object belongs. The client
ORB is not supposed to understand the details of the IOR, but to use it as a key. In other
words, the IOR is a reference to the remote object. However, when client and server are
both using an IBM ORB, extra features are coded in the IOR. For example, the IBM
ORB adds a proprietary field into the IOR, called IBM_PARTNER_VERSION. This
field holds a value like the following example:
49424d0a 00000008 00000000 1400 0005
where:
• The three initial bytes (from left to right) are the ASCII code for IBM, followed
by 0x0A, which specifies that the following bytes handle the partner version.
• The next 4 bytes encode the length of the remaining data (in this case 8 bytes)
• The next 4 null bytes are for future use.
• The 2 bytes for the Partner Version Major field (0x1400 in this example) define
the release of the ORB that is being used (1.4.0 in this case).
• The Minor field (0x0005) distinguishes in the same release, service refreshes that
contain changes that have affected the compatibility with earlier versions.
Because the IOR is not visible to application-level ORB programmers and the client ORB
does not know where to look for it, there is one more step, called the bootstrap process.
Basically, the client application needs to tell the ORB where the remote object reference
is located.
A typical example of bootstrapping is if you use a naming service: the client calls the
ORB method resolve_initial_references("NameService"), which returns (after narrowing)
a reference to the name server in the form of a NamingContext object. The ORB looks
for a name server in the local machine at the port 2809 (as default). If no name server
exists, or the name server is listening on another port, the ORB returns an exception. The
client application can specify a different host, port, or both by using the -ORBInitRef
and -ORBInitPort options.
Using the NamingContext and the name with which the Remote Object has been bound
in the name service, the client can retrieve a reference to the remote object. The reference
to the remote object that the client holds is always an instance of a Stub object; for
example _Sample_Stub.
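Putting the bootstrap steps together, the lookup might read as follows. This is a hedged sketch: it assumes the remote object is bound under the name "Sample" and that the generated Sample stub type is on the classpath:
import javax.rmi.PortableRemoteObject;
import org.omg.CORBA.ORB;
import org.omg.CosNaming.NameComponent;
import org.omg.CosNaming.NamingContext;
import org.omg.CosNaming.NamingContextHelper;
Sample lookupSample(ORB orb) throws Exception {
// Bootstrap: ask the ORB for the naming service, then narrow it.
org.omg.CORBA.Object objRef =
orb.resolve_initial_references("NameService");
NamingContext ncRef = NamingContextHelper.narrow(objRef);
// Resolve the bound name and narrow the result to the stub type.
NameComponent[] path = { new NameComponent("Sample", "") };
return (Sample) PortableRemoteObject.narrow(
ncRef.resolve(path), Sample.class);
}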
ORB.resolve_initial_references() causes much system activity. The ORB starts by
creating a remote communication with the name server. This communication might
include several requests and replies. Typically, the client ORB first checks whether a
name server is listening, then asks for the specified remote reference. In an application
where performance is considered important, caching the remote reference is a better
alternative to repetitive use of the naming service. However, because the naming service
implementation is a transient type, the validity of the cached reference is tied to the time
in which the naming service is running. The IBM ORB implements an Interoperable
Naming Service as described in the CORBA 2.3 specification. This service includes a
new string format that can be passed as a parameter to the ORB methods
string_to_object() and resolve_initial_references(). When either method is called with a string parameter in corbaloc (or corbaname) format, for example:
corbaloc:iiop:1.0@aserver.aworld.aorg:1050/AService
the client ORB uses GIOP 1.0 to send a request with a simple object key of AService to port 1050 at host aserver.aworld.aorg. There, the client ORB expects to find a server for the requested AService, which returns a reference to itself. You can then use this
reference to look for the remote object. This naming service is transient. It means that the
validity of the contained references expires when the name service or the server for the
remote object is stopped.
Remote method invocation
The client holds a reference to the remote object that is an instance of the stub class. The
next step is to call the method on that reference. The stub implements the Sample
interface and therefore contains the message() method that the client has called.
First, the stub code determines whether the implementation of the remote object is located in the same ORB instance. If so, the object can be accessed directly, without any network communication.
If the implementation of the remote object is located on the same ORB instance, the
performance improvement can be significant because a direct call to the object
implementation is done. If no local servant can be found, the stub first asks the ORB to
create a request by calling the _request() method, specifying the name of the method to
call and whether a reply is expected or not.
The CORBA specification imposes an extra layer of indirection between the ORB code
and the stub. This layer is commonly known as delegation. CORBA imposes the layer
using an interface named Delegate. This interface specifies a portable API for ORB-
vendor-specific implementation of the org.omg.CORBA.Object methods. Each stub
contains a delegate object, to which all org.omg.CORBA.Object method invocations are
forwarded. The delegate object allows a stub that is generated by the ORB from one
vendor to work with the delegate from the ORB of another vendor.
When creating a request, the ORB first checks whether the enableLocateRequest property
is set to true, in which case, a LocateRequest is created. The steps of creating this request
are like the full Request case.
The ORB obtains the IOR of the remote object (the one that was retrieved by a naming
service, for example) and passes the information that is contained in the IOR (Profile
object) to the transport layer.
The transport layer uses the information that is in the IOR (IP address, port number, and
object key) to create a connection if one does not already exist. The ORB TCP/IP transport
implements a table of cached connections to improve performance, because
the creation of a new connection is a time-consuming process. The connection is not an
open communication channel to the server host. It is only an object that has the potential
to create and deliver a TCP/IP message to a location on the Internet. Typically, that
involves the creation of a Java™ socket and a reader thread that is ready to intercept the
server reply. The ORB.connect() method is called as part of this process.
When the ORB has the connection, it proceeds to create the Request message. The
message contains the header and the body of the request. The CORBA 2.3 specification
specifies the exact format. The header contains these items, for example:
• Local IP address
• Local port
• Remote IP address
• Remote port
• Message size
• Version of the CORBA stream format
• Byte sequence convention
• Request types
• Request identifiers
The body of the request contains several service contexts and the name and parameters of
the method invocation. Parameters are typically serialized. A service context is some
extra information that the ORB includes in the request or reply, to add several other
functions. CORBA defines a few service contexts, such as the codebase and the codeset
service contexts. The first is used for the callback feature that is described in the
CORBA specification; the second is used to specify the encoding of strings.
In the next step, the stub calls _invoke(). Again, it is the delegate invoke() method that is
executed. The ORB in this chain of events calls the send() method on the connection that
writes the request to the socket buffer and then flushes it away. The delegate invoke()
method waits for a reply to arrive. The reader thread that was spawned during connection
creation gets the reply message, demarshals it, and returns the correct object.
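For orientation, a generated stub method typically follows the pattern sketched below.
This is a hedged sketch only: exact code varies by ORB and IDL compiler, and the
method name follows the Sample example used earlier.
public String message() {
    org.omg.CORBA.portable.InputStream in = null;
    try {
        // Ask the ORB, via the delegate, to create a request for "message";
        // the second argument indicates that a reply is expected.
        org.omg.CORBA.portable.OutputStream out = _request("message", true);
        // _invoke() forwards to the delegate, writes the request to the
        // connection, and blocks until the reader thread delivers the reply.
        in = _invoke(out);
        return in.read_string();
    } catch (org.omg.CORBA.portable.RemarshalException rex) {
        return message(); // retry, e.g. after a location-forward reply
    } catch (org.omg.CORBA.portable.ApplicationException aex) {
        throw new org.omg.CORBA.UNKNOWN(aex.getId());
    } finally {
        _releaseReply(in);
    }
}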
Processing a request
The server ORB uses a single listener thread, and a reader thread for each connection or
client, to process an incoming message. During the ORB initialization, a listener thread
was created. The listener thread is listening on a default port (the next available port at
the time the thread was created). You can specify the listener port by using the
com.ibm.CORBA.ListenerPort property. When a request comes in through that port, the
listener thread first creates a connection with the client side. In this case, it is the TCP
transport layer that takes care of the details of the connection. As seen for the client side,
the ORB caches all the connections that it creates.
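For example, the listener port mentioned above can be fixed at ORB initialization
time; a sketch fragment (the port number is illustrative, and args are the server's
command-line arguments):
java.util.Properties props = new java.util.Properties();
props.setProperty("com.ibm.CORBA.ListenerPort", "8000"); // listen on port 8000
org.omg.CORBA.ORB orb = org.omg.CORBA.ORB.init(args, props);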
By using the connection, the listener thread spawns a reader thread to process the
incoming message. When dealing with multiple clients, the server ORB has a single
listener thread and one reader thread for each connection or client.
The reader thread does not fully read the request message, but instead creates an input
stream for the message to be piped into. Then, the reader thread picks up one of the
worker threads in the implemented pool (or creates one if none is present), and delegates
the reading of the message. The worker threads read all the fields in the message and
dispatch them to the tie, which unmarshals any parameters and calls the remote method.
The service contexts are then created and written to the response output stream with the
return value. The reply is sent back using a mechanism similar to the one described for
the client side. After that, the connection is removed from the reader thread, which
eventually stops.
POA Manager
The POA Manager is an object that encapsulates the processing state of one or more
POAs. You can control and change the state of all POAs by using operations on the POA
manager. The adapter activator is an object that an application developer uses to activate
child POAs.
Fragmentation
The CORBA specification introduced the concept of fragmentation to handle the growing
complexity and size of marshalled objects in GIOP messages. Graphs of objects are
linearized and serialized inside a GIOP message under the IDL specification of
valuetypes. Fragmentation specifies the way a message can be split into several smaller
messages (fragments) and sent over the net.
The system administrator can set the ORB properties FragmentSize and
FragmentTimeout to obtain the best performance for the prevailing network traffic. As a
general rule, the default value of 1024 bytes for the fragment size is a good trade-off in
almost all conditions. The fragment timeout must not be set too low, or time-outs might
occur unnecessarily.
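These properties are set in the same way as the listener port shown earlier; a sketch
fragment assuming the com.ibm.CORBA.* property prefix used elsewhere in this
document (exact property names and units may vary by ORB release):
java.util.Properties props = new java.util.Properties();
props.setProperty("com.ibm.CORBA.FragmentSize", "2048");     // bytes per fragment
props.setProperty("com.ibm.CORBA.FragmentTimeout", "30000"); // milliseconds
org.omg.CORBA.ORB orb = org.omg.CORBA.ORB.init(args, props);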
Portable interceptors
For some time, CORBA implementations have had non-standard mechanisms that allow
users to insert their own code into the ORB's flow of execution. This code, known as
interceptors, is called at particular stages during the processing of requests. It can directly
inspect and even manipulate requests. Because this message filtering mechanism is
extremely flexible and powerful, the OMG standardized interceptors in the CORBA 2.4.2
specification under the name "portable interceptors".
The idea of a portable interceptor is to define a standard interface to register and execute
application-independent code that, among other things, takes care of passing service
contexts. These interfaces are defined in the org.omg.PortableInterceptor package. The
implementation classes are in the com.ibm.rmi.pi package of the IBM® ORB. All
interceptors implement the Interceptor interface.
Two classes of interceptors are defined: request interceptors and IOR (Interoperable
Object Reference) interceptors. Request interceptors are called during request mediation.
IOR interceptors are called when new object references are created so that service-
specific data can be added to the newly-created IOR in the form of tagged components.
The ORB calls request interceptors on the client and the server side to manipulate service
context information. Interceptors must register with the ORB for those interceptor points
that are to be executed.
Five interception points are on the client side:
• send_request (sending request)
• send_poll (sending request)
• receive_reply (receiving reply)
• receive_exception (receiving reply)
• receive_other (receiving reply)
Five interception points are on the server side:
• receive_request_service_contexts (receiving request)
• receive_request (receiving request)
• send_reply (sending reply)
• send_exception (sending reply)
• send_other (sending reply)
The only interceptor point for IOR interceptors is establish_component. The ORB calls
this interceptor point on all its registered IOR interceptors when it is assembling the set of
components that is to be included in the IOP profiles for a new object reference.
Registration of interceptors is done using the interface ORBInitializer.
Example:
public class MyInterceptor extends org.omg.CORBA.LocalObject
        implements ClientRequestInterceptor, ServerRequestInterceptor
{
    public String name() {
        return "MyInterceptor";
    }
    // destroy() and the ten interception-point methods listed above
    // (send_request, receive_reply, receive_request, send_reply, and so on)
    // must also be implemented.
}
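Registration itself happens in an ORBInitializer, which is named to the ORB through a
property of the form org.omg.PortableInterceptor.ORBInitializerClass.<classname>. A
sketch, assuming MyInterceptor has been completed as noted above:
import org.omg.PortableInterceptor.ORBInitializer;
import org.omg.PortableInterceptor.ORBInitInfo;
import org.omg.PortableInterceptor.ORBInitInfoPackage.DuplicateName;

public class MyInterceptorInitializer extends org.omg.CORBA.LocalObject
        implements ORBInitializer {
    public void pre_init(ORBInitInfo info) {
        try {
            MyInterceptor interceptor = new MyInterceptor();
            // Register for both client-side and server-side interception points.
            info.add_client_request_interceptor(interceptor);
            info.add_server_request_interceptor(interceptor);
        } catch (DuplicateName e) {
            // Interceptor names must be unique within an ORB.
        }
    }
    public void post_init(ORBInitInfo info) { }
}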
The CORBA Naming Service
The Naming Service allows you to associate abstract names with CORBA objects and
allows clients to find those objects by looking up the corresponding names. This service
is both very simple and very useful. A server that holds a CORBA object binds a name
to the object by contacting the Naming Service. To obtain a reference to the object, a
client requests the Naming Service to look up the object associated with a specified
name. This is known as resolving the object name. The Naming Service provides
interfaces defined in IDL that allow servers to bind names to objects and clients to
resolve those names.
Most CORBA applications make some use of the Naming Service. Locating a particular
object is a common requirement in distributed systems and the Naming Service provides
a simple, standard way to do this.
The Interface to the Naming Service
The Naming Service maintains a database of names and the objects associated with them.
An association between a name and an object is called a binding. The IDL interfaces to
the Naming Service provide operations to access the database of bindings. For example,
you can create new bindings, resolve names, and delete existing bindings. A name
component is defined in IDL as follows:
struct NameComponent {
Istring id;
Istring kind;
};
A name is a sequence of these structures:
typedef sequence<NameComponent> Name;
The id member of a NameComponent is a simple identifier for the object; the kind
member is a secondary way to differentiate objects and is intended to be used by the
application layer. For example, you could use the kind member to distinguish the type of
the object being referred to. The semantics you choose for this member are not
interpreted by OrbixNames.
Both the id and kind members of a NameComponent are used in name resolution. Two
names that differ only in the kind member of one NameComponent are considered to be
different names.
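For example (an illustrative fragment; the names are invented), two bindings whose
components differ only in the kind field are distinct and resolve independently:
NameComponent[] deviceName = { new NameComponent("Printer", "device") };
NameComponent[] queueName  = { new NameComponent("Printer", "queue") };
// naming.resolve(deviceName) and naming.resolve(queueName) refer to two
// different bindings, because both id and kind take part in name resolution.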
Notification-style event channels offer the following features:
• Event filtering and subscription: filters allow consumers to receive only the events
they are interested in, and to tell suppliers which events are in demand.
• Event publication: suppliers can inform an event channel which events they can
supply, so consumers can subscribe to new event types as they become available.
Event Communication
CORBA specifies two approaches to initiating the transfer of events between suppliers
and consumers:
• push model: Suppliers initiate transfer of events by sending those events to the
channel. The channel then forwards them to any consumers connected to it.
• pull model: Consumers initiate the transfer of events by requesting them from the
channel. The channel requests events from the suppliers connected to it.
Push Model
In the push model, suppliers generate events and actively pass them to an event channel.
In this model, consumers wait for events to arrive from the channel. Figure 3 illustrates a
push model architecture in which push suppliers communicate with push consumers
through the event channel.
Pull Model
Figure 4: Pull Model Suppliers and Consumers Communicating through an Event Channel
Event Propagation
In this architecture, the event channel invokes an IDL operation on an object in each
supplier to collect events. When a consumer invokes a similar operation on the event
channel, the channel forwards the events to the consumer that initiated the transfer.
Because suppliers and consumers are completely decoupled by the event channel, push
and pull models can be mixed in a single system. For example, suppliers can connect to
an event channel using the push model, while consumers connect using the pull model, as
shown in Figure 5.
Figure 5: Push Suppliers and Pull Consumers Communicating through an Event Channel
Event Propagation
In this case, both suppliers and consumers participate in initiating event transfer. A
supplier invokes an operation on an object in the event channel to transfer an event to the
channel. A consumer then invokes another operation on an event channel object to
transfer the event data from the channel.
In the case where push consumers and pull suppliers are mixed, the event channel
actively propagates events by invoking IDL operations in objects in both suppliers and
consumers. The pull supplier would wait for the channel to invoke an event transfer
before sending events. Similarly, the push consumer would wait for the event channel to
invoke event transfer before receiving events.
Event Filtering
Filter objects screen events as they pass through the channel, and process those that meet
the filter constraints. The notification service defines two types of filters (a usage sketch
follows this list):
• Forwarding filters are set in a channel by clients that wish to restrict event delivery to
those events that meet certain constraints. These filters implement interface
CosNotifyFilter::Filter.
• Mapping filters are set by consumers to adjust the priority or lifetime settings of those
messages that meet filter constraints. These filters implement interface
CosNotifyFilter::MappingFilter.
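A hedged sketch of installing a forwarding filter, assuming a CosNotifyChannelAdmin
consumer admin and the default EXTENDED_TCL constraint grammar (the
Finance/StockAlert event type and the $.price field are invented for illustration):
import org.omg.CosNotification.EventType;
import org.omg.CosNotifyChannelAdmin.ConsumerAdmin;
import org.omg.CosNotifyFilter.ConstraintExp;
import org.omg.CosNotifyFilter.Filter;
import org.omg.CosNotifyFilter.FilterFactory;

public static void addStockFilter(FilterFactory factory, ConsumerAdmin admin)
        throws Exception {
    // Create a forwarding filter using the default constraint grammar.
    Filter filter = factory.create_filter("EXTENDED_TCL");
    // Forward only StockAlert events whose price field exceeds 100.
    ConstraintExp[] constraints = {
        new ConstraintExp(
            new EventType[] { new EventType("Finance", "StockAlert") },
            "$.price > 100")
    };
    filter.add_constraints(constraints);
    admin.add_filter(filter);
}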
The Object Transaction Service (OTS)
The CORBA specification does not state how an OTS client connects to the transaction
factory in the TM, so the mechanism varies from one CORBA product to another.
However, the connection is likely to be made by calling resolve_initial_references(). The OTS
client calls TransactionFactory::create() to begin a transaction. This operation returns a
reference to a Control object.
The client must somehow communicate the Control object reference when invoking an
operation on an OTS-aware object. This could be achieved by explicitly passing the
Control reference as a parameter to the operation. However, it is more commonly
achieved by embedding the Control reference (along with other information) in a service
context that is transmitted with the request. The OTS specification defines a service
context structure for this purpose.
When the client wants to terminate a transaction, it calls Control::get_terminator() to
obtain a reference to the Terminator object and then calls commit() or rollback() on this.
If an OTS server accesses an XA-compliant database then the server invokes an OTS
operation that puts a Resource wrapper around the database. If the server uses a non-XA-
compliant database then the server developer must implement the Resource interface so
that its database can take part in two-phase commit transactions.
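A hedged sketch of such a Resource implementation skeleton, assuming the standard
Java mapping of the CosTransactions module (the method bodies are elided):
import org.omg.CosTransactions.ResourcePOA;
import org.omg.CosTransactions.Vote;

public class MyResource extends ResourcePOA {
    public Vote prepare() {
        // Persist the provisional changes, then vote on the outcome:
        // VoteCommit, VoteRollback, or VoteReadOnly.
        return Vote.VoteCommit;
    }
    public void commit()           { /* make the prepared changes durable */ }
    public void rollback()         { /* undo the provisional changes */ }
    public void commit_one_phase() { /* optimization when this is the only resource */ }
    public void forget()           { /* discard any heuristic-outcome record */ }
}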
In the original OTS specification, an object indicated that it could take part in OTS
transactions by implementing an IDL interface that inherited from
CosTransactions::TransactionalObject. The _is_a() operation (which is provided by the
base Object type) was used by a client application to determine whether or not an object
reference was for a transactionally aware object. However, the OMG decided that this
approach was undesirable. In particular, it can result in a dramatic increase in the number
of IDL interface definitions. Eventually, the OMG decided that it would be better if
whether or not an object was transactionally aware could be expressed as a quality of
service. In modern versions of the OTS, this goal is achieved by defining a new POA
Policy type that, if used, indicates that objects in that POA are transactionally aware. An
IOR interceptor detects the presence of this POA policy and embeds an OTS
TaggedComponent into IORs that originate from that POA. A client application can
check for the presence of this TaggedComponent to determine if an object is
transactionally aware.
When an operation in an OTS-enabled server receives a Control object, it can call
get_coordinator() to gain access to the transaction’s Coordinator object. The Coordinator
interface is a “wrapper” around the coordination logic that implements the two-phase
commit protocol. Its purpose is to interact with the Resource objects in OTS servers. The
server calls register_resource() on the Coordinator to register its resource (this
registration occurs only once per transaction). This informs the TM that the server’s
Resource is taking part in the transaction and so should be included in the two-phase
commit protocol when the transaction commits. This operation returns a reference to a
RecoveryCoordinator object for the transaction. The server stores this object reference in
a persistent storage area so that if the server crashes during the two-phase commit
protocol and is restarted then the server can contact the RecoveryCoordinator to
determine if the transaction should commit or roll back.
During the two-phase commit, the TM invokes the prepare() operation on all Resource
objects that have taken part in the transaction. The return value of this operation is a Vote
that determines if the transaction will be committed or rolled back.
How OTS Builds on Top of Other Parts of CORBA
This section briefly discusses a simple subset of the API provided by OTS. This simple
subset is used by most OTS developers. The focus of this discussion is not to act as a
tutorial for developers, but rather to show how other aspects of CORBA (such as current
objects, portable interceptors and service contexts) are used as building blocks for more
powerful capabilities, such as OTS.
OTS defines a Current object. This object is accessed by calling
resolve_initial_references("TransactionCurrent"). The OTS Current object lets threads in
both client and server applications know with which transaction they are currently
associated.
An OTS client uses the begin(), commit() and rollback() operations on the Current object
to control the lifetime of a transaction. Internally, the Current object delegates to the
corresponding operations defined on the interfaces in the transaction manager. When a
client invokes an operation on an object, a portable request interceptor provided by OTS
embeds transactional context information obtained from the Current object in a service
context that is then transmitted with the request to the target object. A corresponding
portable request interceptor in the server extracts this transactional context information
from the service context and initializes the server’s Current object before dispatching to
the target operation. This means that the body of the operation executes within the
context of a transaction. Because of this, the operation does not need to begin-and-
commit or resume-and-suspend a transaction. Instead, these details are taken care of by
the portable interceptor and so the body of the operation can focus on using, say,
embedded SQL or JDBC to query/update the database.
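A hedged sketch of this simple client-side pattern (the Account interface and its
debit() operation are invented for illustration):
import org.omg.CORBA.ORB;
import org.omg.CosTransactions.Current;
import org.omg.CosTransactions.CurrentHelper;

public static void debitAtomically(ORB orb, Account account) throws Exception {
    Current current = CurrentHelper.narrow(
            orb.resolve_initial_references("TransactionCurrent"));
    current.begin();            // this thread is now associated with a transaction
    try {
        account.debit(100);     // the OTS interceptor propagates the transaction context
        current.commit(false);  // false: do not ask for heuristic reporting
    } catch (Exception e) {
        current.rollback();
        throw e;
    }
}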
The mechanism discussed above provides a simple API for developers and it is powerful
enough for the majority of applications. However, developers can, if they so choose,
avoid using the Current object and its associated portable interceptor, and instead
manually execute their own OTS-infrastructure code. Although this is more complex, it
provides a way for developers to integrate a non-XA-compliant database with OTS.
Figure 1: The licensing service components (ProducerClient, LicenseServiceManager,
ProducerSpecificLicenseService, and LicenseManager) and their numbered interactions
Figure 1 shows the interactions among the system components. The ProducerClient
component is some piece of software whose use we want to control; in this paper, we
also refer to it as the client application. The LicenseManager component is a black box
embodying specific licensing policies, and is not part of the licensing service specification.
The client application connects to the LicenseServiceManager and requests an instance
of the ProducerSpecificLicenseService (steps 1 and 2), which it gets through steps 3
and 4. The exact means for getting that instance are left to the service designers. After
steps 5 and 6, the client application becomes effectively controlled. Before granting
authorization, the ProducerSpecificLicenseService communicates with the LicenseManager,
which checks the available licenses and applicable rules and policies.
CORBA LifeCycle service
Life Cycle Service defines services and conventions for creating, deleting, copying and
moving objects. Because CORBA-based environments support distributed objects, the
Life Cycle Service defines conventions that allow clients to perform life cycle operations
on objects in different locations.
A client is any piece of code that initiates a life cycle operation for some object. A client
has a simple view of the life cycle operations. The client’s model of creation is defined in
terms of factory objects. A factory is an object that creates another object. Factories are
not special objects. As with any object, factories have well-defined IDL interfaces and
implementations in some programming language.
As the figure above shows, to create an object “over there” a client must possess an object
reference to a factory over there. The client simply issues a request on the factory. There is no
standard interface for a factory. Factories provide the client with specialized operations to
create and initialize new instances in a natural way for the implementation. The following
illustrates a factory for a document.
interface DocFactory {
Document create();
Document create_with_title(in string title);
Document create_for(in natural_language nl);
};
Factories are object implementation dependent. A different implementation of the
document could define a different factory interface. A generic factory is a creation
service. It provides a generic operation for creation.
Instead of invoking an object-specific operation on a factory with statically defined
parameters, the client invokes a standard operation whose parameters can include
information about resource filters, state initialization, policy preferences, etc. To create
an object, a client must possess an object reference for a factory, which may be either a
generic factory or an object-specific factory, and issue an appropriate
request on the factory. As a result, a new object is created and typically an object
reference is returned. There is nothing special about this interaction. A factory assembles
the resources necessary for the existence of an object it creates. Therefore, the factory
represents a scope of resource allocation, which is the set of resources available to the
factory. A factory may support an interface that enables its clients to constrain the scope.
Clients find factory objects in the same fashion they find any object; two common
scenarios are looking a factory up in the Naming Service and obtaining one through a
factory finder (described below).
A client that wishes to delete an object issues a remove request on an object supporting
the LifeCycleObject interface. The object receiving the request is called the target. To
delete an object, a client must possess an object reference supporting the LifeCycleObject
interface and issue a remove request on the object.
A client that wishes to move or copy an object issues a move or copy request on an
object supporting the LifeCycleObject interface. The object receiving the request is
called the target. The move and copy operations expect an object reference supporting the
FactoryFinder interface. The implementations of move and copy can use the factory
finder to find appropriate factories “over there.” This is invisible to the client. Client code
would simply issue a copy request on the document and pass it an object supporting the
FactoryFinder interface as an argument. When a client issues a copy request on a
target, it is assumed that the target, the factory finder, and the newly created object can all
communicate via the ORB. With externalization/internalization there is no such
assumption. In the presence of a future externalization service, the externalized form of
the object can exist outside of the ORB for arbitrary amounts of time, be transported by
means outside of the ORB and can be internalized in a different, disconnected ORB.
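From the client's point of view, copy and remove reduce to the following sketch,
assuming the standard Java mapping of the CosLifeCycle module and already-resolved
doc and finder references (no creation criteria are passed):
import org.omg.CosLifeCycle.FactoryFinder;
import org.omg.CosLifeCycle.LifeCycleObject;
import org.omg.CosLifeCycle.NVP;

public static void copyThenRemove(LifeCycleObject doc, FactoryFinder finder)
        throws Exception {
    NVP[] criteria = new NVP[0];                        // no creation constraints
    LifeCycleObject copy = doc.copy(finder, criteria);  // copy "over there"
    doc.remove();                                       // delete the original target
}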
Factory Finders
Factory finders support an operation that returns one or more factories. Clients pass
factory finders to the move and copy operations, which typically invoke this operation to
find a factory to interact with. The new copy or the migrated object will then be within
the scope of the factory finder. Some examples of locations that a factory finder might
represent are:
• somewhere on a work group’s local area network
• storage device A on machine X
• Susan’s notebook computer
Design Principles
Several principles have driven the design of the Life Cycle Service:
1. A factory object registered at a factory finder represents an implementation at that
location. Thus, a factory finder allows clients to query a location for an implementation.
2. Object implementations can embody knowledge of finding a factory, relative to a
location. Object implementations usually do not embody knowledge of location.
3. The desired result for life cycle operations such as copy and move depends on
relationships between the target object and other objects.
4. The Life Cycle Service is not dependent on any particular model of persistence and is
suitable for distributed, heterogeneous environments.
5. The design does not include an object equivalence service nor rely on global object
identifiers.
Lock Modes
The Concurrency Control Service defines several lock modes, which correspond to
different categories of access. Having a variety of lock modes allows more flexible
conflict resolution.
Lock Granularity
The Concurrency Control Service does not define the granularity of the resources that are
locked. It defines a lock set, which is a collection of locks associated with a single
resource. It is up to clients of the Concurrency Control Service to associate a lock set
with each resource. Typically, if an object is a resource, the object would internally create
and retain a lock set. However, the mapping between objects and resources (and lock
sets) is up to the object implementation; the mapping could be one to one, but it could
also be one to many, many to many, or many to one.
Lock Modes
Read, Write, and Upgrade Locks
The Concurrency Control service defines read (R) and write (W) lock modes that support
the conventional multiple readers, one writer policy. Read locks conflict with write locks,
and write locks conflict with other write locks. In addition, the Concurrency Control
service defines an upgrade (U) mode. An upgrade mode lock is a read lock that conflicts
with itself. It is useful for avoiding a common form of deadlock that occurs when two or
more clients attempt to read and then update the same resource. If more than one client
holds a read lock on the resource, a deadlock will occur as soon as one of the clients
requests a write lock on the resource. If each client requests a single upgrade lock
followed by a write lock, this deadlock will not occur.
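A hedged sketch of that pattern, assuming the standard Java mapping of the
CosConcurrencyControl module (obtaining the resource's LockSet is elided):
import org.omg.CosConcurrencyControl.LockSet;
import org.omg.CosConcurrencyControl.lock_mode;

public static void readThenUpdate(LockSet locks) throws Exception {
    locks.lock(lock_mode.upgrade);  // upgrade conflicts with upgrade: only one
                                    // client can be in the read-then-write path
    // ... read the resource ...
    locks.lock(lock_mode.write);    // safe to escalate: no peer holds a read lock
                                    // that it intends to upgrade
    // ... update the resource ...
    locks.unlock(lock_mode.write);
    locks.unlock(lock_mode.upgrade);
}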
Collections
Collections support the grouping of objects and operations for manipulating the objects
as a group. Common collection types are queues, sets, bags, maps, and so on. Collection
types differ in the “nature of grouping” exposed to the user, which is reflected in the
operations supported for manipulating objects as members of a group. Some collections
are ordered, and thus support access to an element at position “i”, while other collections
support associative access to elements via a key. Some collections guarantee the
uniqueness of elements, while others allow multiple occurrences of elements. A user
chooses a collection type whose manipulation capabilities match the application
requirements. Collections are foundation classes used in a broad range of applications;
therefore, they have to meet the general requirement of being able to collect elements of
arbitrary type. On the other hand, a collection instance is usually homogeneous in the
sense that all elements collected are of the same type, or support the same single interface.
Bag, SortedBag
A Bag is an unordered collection of zero or more elements with no key. Multiple
elements are supported. As element equality is supported, operations which require the
capability “test of element equality” (e.g., test on containment) can be offered. Example:
The implementation of a text file compression algorithm. The algorithm finds the most
frequently occurring words in sample files. During compression, the words with a high
frequency are replaced by a code (for example, an escape character followed by a one
character code). During re-installation of files, codes are replaced by the respective
words. Several types of collections may be used in this context. A Bag can be used during
the analysis of the sample text files to collect isolated words. After the analysis phase you
may ask for the number of occurrences for each word to construct a structure with the
255 words with the highest word counts. A Bag offers an operation for this, so you do not
have to “count by hand,” which would be less efficient. To find the 255 words with the highest
word count, a SortedRelation is the appropriate structure.
A SortedBag (as compared to a Bag) exposes and maintains a sorted order of the
elements based on a user-defined element comparison. Maintaining elements in a sorted
order makes sense when printing or displaying the collection content in sorted order.
EqualitySequence
An EqualitySequence is an ordered collection of elements with no key. There is a first
and a last element. Each element, except the last one, has a next element and each
element, except the first one, has a previous element. As element equality is supported,
all operations that rely on the capability “test on element equality” can be offered, for
example, locating an element or test for containment. Example: An application that
arranges wagons to a train. The order of the wagons is important. The trailcar has to be
the first wagon, the first class wagons are arranged right behind the trailcar, the restaurant
has to be arranged right after the first class and before the second class wagons, and so
on.
Heap
A Heap is an unordered collection of zero or more elements without a key. Multiple
elements are supported. No element equality is supported. Example: a “trash can” on a
desktop, which remembers all objects moved to it as long as it is not emptied.
Whenever you move an object to the trash can, it is added to the heap. Sometimes you
move an object to the trash can accidentally. In that case, you iterate in some order
through the trash can to find the object (not using a test on element equality), and when
you find it, you remove it from the trash can. Sometimes you empty the trash can,
removing all objects from it.
KeyBag, KeySortedBag
A KeyBag is an unordered collection of zero or more elements that have a key. Multiple
keys are supported. As no element equality is assumed, operations such as “test on
collection equality” or “set theoretical operation” are not offered. A KeySortedBag is
sorted by key. In addition to the operations supported for a KeyBag, all operations related
to ordering are offered.
KeySet, KeySortedSet
A KeySet is an unordered collection of zero or more elements that have a key. Keys must
be unique. Defined element equality is not assumed; therefore, operations and semantics
which require the capability “test on element equality” are not offered. A KeySortedSet is
sorted by key. In addition to the operations supported for a KeySet, all operations related
to ordering are offered. For example, operations exploiting the ordering, such as
“set_to_previous / set_to_next” and “access via position” are supported.
Map, SortedMap
A Map is an unordered collection of zero or more elements that have a key. Keys must be
unique. As defined element equality is assumed, access via the element value and all
operations which need to test on element equality (such as a test on containment for an
element, test for equality, and set theoretical operations) can be offered for maps. A
SortedMap is sorted by key. In addition to the operations supported for a Map, all
operations related to ordering are offered. For example, operations exploiting the
ordering like “set_to_previous / set_to_next” and “access via position” are supported.
Relation, SortedRelation
A Relation is an unordered collection of zero or more elements with a key. Multiple keys
are supported. As defined element equality is assumed, test for equality of two collections
is offered as well as the set theoretical operations. A SortedRelation is sorted by key. In
addition to the operations supported for a Relation, all operations related to ordering are
offered. For example, operations that exploit ordering such as “set_to_previous /
set_to_next” and “access via position” are supported.
Set, SortedSet
A Set is an unordered collection of zero or more elements without a key. Element equality
is supported; therefore, operations that require the capability “test on element equality”
such as intersection or union can be offered. A SortedSet is sorted with respect to a user-
defined element comparison. In addition to the operations supported for a Set, all
operations related to ordering are offered.
Sequence
A Sequence is an ordered collection of elements without a key. There is a first and a last
element. Each element (except the last one) has a next element and each element (except
the first one) has a previous element. No element equality is supported; therefore,
multiples may occur and access to elements via the element value is not possible. Access
to elements is possible via position/index.
Deque
A double ended queue may be considered as a sequence with restricted access. It is an
ordered collection of elements without a key and no element equality. As there is no
element equality, an element value may occur multiple times. There is a first and a last
element. You can only add an element as first or last element and only remove the first or
the last element from the Deque.
PriorityQueue
A PriorityQueue may be considered as a KeySortedBag with restricted access. It is an
ordered collection with zero or more elements. Multiple key values are supported. As no
element equality is defined, multiple element values may occur. Access to elements is via
key only and sorting is maintained by key. Accessing a PriorityQueue is restricted. You
can add an element relative to the ordering relation defined for keys and remove only the
first element (e.g., the one with highest priority).
Queue
A queue may be considered as a sequence with restricted access. It is an ordered
collection of elements with no key and no element equality. There is a first and a last
element. You can only add (enqueue) an element as the last element and only remove
(dequeue) the first element from the Queue. That is, a queue exposes FIFO behavior.
Stack
A Stack may be considered as a sequence with restricted access. It is an ordered
collection of elements with no key and no element equality. There is a first and a last
element. You can only add (push) an element as the last element (at the top) and only
remove (pop) the last element from the Stack (from the top). That is, a Stack exposes
LIFO behavior.
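As a plain-Java illustration of these restricted-access types (java.util, not the CORBA
collection interfaces), a double-ended queue exhibits both the FIFO and the LIFO
behavior described above:
import java.util.ArrayDeque;
import java.util.Deque;

Deque<String> d = new ArrayDeque<>();
d.addLast("a");
d.addLast("b");                 // enqueue at the tail (Queue behavior)
String fifo = d.removeFirst();  // "a": dequeue from the head (FIFO)
d.addLast("c");                 // push onto the top (Stack behavior)
String lifo = d.removeLast();   // "c": pop from the top (LIFO)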
The CORBA Component Model (CCM)
Listing 1. An example IDL interface
interface VideoPlayer {
VideoStream play();
long fastForward(in Speed spd); // VideoStream and Speed are assumed defined elsewhere
long rewind(in Speed spd);
};
A component will have a set of interfaces that will let it "behave nicely" in its designated
container, and it will have to have one or more interfaces that clients will call to perform
business logic. You could say the former would be a management interface and the latter
a service interface. OMG IDL allowed you to express a set of interfaces as a new
interface that inherited all the interfaces. This combined interface was usually long and
complicated. An alternative is to define an entry point interface that is used to navigate to
the other interfaces of the object.
The OMG has long known about the multiple-interface gap. In fact, it issued a Request
for Proposals (RFP) on this topic back in 1996. That RFP never directly produced a
specification document, but the CCM can be looked at as fulfilling the requirements of
that RFP. Component types specified by CIDL are allowed a combination of different
interfaces that are not related by inheritance.
A component type name is defined with the component keyword. The component has its
own interface, termed its "equivalent" interface, that is implicitly defined in the
component definition. The equivalent interface for the components VCR and DVD is
VideoPlayer; the relationship is declared through the supports keyword.
The component facets provide the business logic to expose to clients. These "facets" of
our component come to life through the provides keyword. Provided interfaces are the
"ports" that clients or other components can connect to. This is important because another
keyword introduced in CIDL is the uses keyword. A uses declaration defines a dependency
of a component on an interface provided by another component or CORBA object. Clients
can then navigate to a specified interface at runtime using the navigation operations
generated by the provides declarations.
CCM Containers
These act as the interface between a CORBA component and the outside world. A CCM
client never accesses a CORBA component directly. Any component access is done
through container-generated methods, which in turn invoke the component's methods.
There are two basic types of containers: transient containers, which contain transient,
non-persistent components whose states are not saved at all, and persistent containers,
which contain persistent components whose states are saved between invocations.
Depending upon the types of components that they can execute, CCM containers may be
divided into:
• Service containers
• Session containers,
• Entity containers, and
• Other containers
Container model
The container manages component instances depending on the component category. It
offers all its services through simplified APIs: it becomes the component's window to the
outside world. The container offers a series of local interfaces (internal interfaces) for
establishing the component's context and allows the component to implement a series of
callback methods.
Container Architecture
As shown in the figure above, the container manages a component. It creates and uses a
POA with the features requested for the specific component category. A client can use the
external interfaces of a component to interact with the component, and the home
interfaces to manage the life cycle of the component.
External API Types
The external API types of a component are the contract between the component
developer and the component client. They consist of two kinds of interfaces: the home
interface and the application interfaces. The home interface allows a client to obtain a
reference to one of the application interfaces the component implements. From the
client's perspective, two design patterns are supported, namely factory patterns for
creating new components and finder patterns for locating existing components. These
patterns are distinguished by the presence of a primary key in the home declaration.
Container API Type
The container API defines an API framework, that is, the contract between a specific
component and its container. Two types of container APIs are defined, namely session
APIs and entity APIs, depending on the component category explained below.
CORBA Usage Model
A CORBA usage model specifies the required interaction pattern between the container,
the POA, and the CORBA services. Three usage models are defined, distinguished by
reference persistence and the servant-to-ObjectId mapping (a POA sketch for the durable
model follows this list):
• Stateless - uses transient object references in conjunction with a POA servant
that can support any ObjectId.
• Conversational - uses transient references with a POA servant that is dedicated
to a specific ObjectId.
• Durable - uses persistent references with a POA servant that is dedicated to
a specific ObjectId.
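The POA side of the durable model might be set up as in the following sketch (the POA
name is illustrative; the policies are the standard PortableServer ones):
import org.omg.CORBA.Policy;
import org.omg.PortableServer.IdAssignmentPolicyValue;
import org.omg.PortableServer.LifespanPolicyValue;
import org.omg.PortableServer.POA;

public static POA createDurablePOA(POA rootPOA) throws Exception {
    Policy[] policies = {
        // References survive server restarts.
        rootPOA.create_lifespan_policy(LifespanPolicyValue.PERSISTENT),
        // The application assigns fixed ObjectIds to its servants.
        rootPOA.create_id_assignment_policy(IdAssignmentPolicyValue.USER_ID)
    };
    return rootPOA.create_POA("DurablePOA", rootPOA.the_POAManager(), policies);
}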
Application Servers
Application servers, which have become very popular in the last few years, provide the
platforms for the execution of transactional, server-side applications in the online world.
They are the modern cousins of traditional transaction processing monitors (TPMs) such
as CICS, and they play a central role in enabling electronic commerce in the web context.
They are built on the basis of more standardized protocols and APIs. One of the most
important features of application servers is their ability to integrate modern application
environments with legacy data sources such as IMS, CICS, and VSAM. They provide a
number of connectors for this purpose, typically using asynchronous transactional
messaging technologies such as MQSeries and JMS.
Traditional TPM-style requirements for industrial-strength features such as scalability,
availability, reliability, and high performance are equally important for application
servers. Security and authentication issues are additional important requirements in the
web context. Application servers use DBMSs not only as storage engines for user data
but also as repositories for tracking their own state. Several caching technologies have
been developed to improve the performance of application servers.
An Application server is a software framework dedicated to the efficient execution of
procedures (programs, routines, scripts) for supporting the construction of applications.
Advantages of application servers
Data and code integrity
By centralizing business logic on an individual server or on a small number of
server machines, updates and upgrades to the application for all users can be
guaranteed. There is no risk of old versions of the application accessing or
manipulating data in an older, incompatible manner.
Centralized configuration
Changes to the application configuration, such as a move of database server, or
system settings, can take place centrally.
Security
A central point through which service-providers can manage access to data and
portions of the application itself counts as a security benefit, devolving
responsibility for authentication away from the potentially insecure client layer
without exposing the database layer.
Performance
By limiting the network traffic to performance-tier traffic, the client–server model
improves the performance of large applications in heavy-usage environments.
Total Cost of Ownership (TCO)
In combination, the benefits above may result in cost savings to an organization
developing enterprise applications. In practice, however, the technical challenges
of writing software that conforms to that paradigm, combined with the need for
software distribution to distribute client code, somewhat negate these benefits.
Transaction Support
A transaction represents a unit of activity in which many updates to resources (on
the same or distributed data sources) can be made atomic (as an indivisible unit of
work). End-users can benefit from a system-wide standard behaviour, from
reduced time to develop, and from reduced costs. As the server does a lot of the
tedious code-generation, developers can focus on business logic.
Orbix E2A
Because Orbix E2A is neutral with respect to software component models and
programming languages, it allows developers to use their preferred tools to develop
business logic, and to expose business functions in their preferred deployment
environment. Further, Orbix E2A includes XMLBus technology, which not only
simplifies the creation of XML-based Web service applications, but also provides support
for XML communication to new and existing applications. Orbix E2A provides
integration tools for connecting to specific systems and technologies. For example, the
Orbix E2A Application Server Platform provides adapters for MQSeries, CICS, IMS, and
RDBMSs. Further, the Orbix E2A Application Server Platform can easily be integrated
with the Orbix E2A Web Services Integration Platform, to provide adapters for specific
ERP software products, including SAP, Siebel, PeopleSoft, Baan, and so on. Thus, Orbix
E2A supports point-to-point application integration; for example, for a project that needs
to use SAP R/3 as a data source. Orbix E2A makes it easy to use a combination of J2EE,
CORBA, Mainframe integration, and Web services to provide true peer-to-peer service
oriented architectures for enterprise-scale applications and environments. The Orbix E2A
Application Server Platform provides support for enterprise applications by way of three
broad classes of technology: CORBA, J2EE, and the Web services-related technologies
such as XML, SOAP, WSDL, UDDI, and ebXML.
OMG Model Driven Architecture
The simple PIM in Figure 3 represents a Customer and Account. At this level of
abstraction, the model describes important characteristics of the domain in terms of
classes and their attributes, but does not describe any platform-specific choices about
which technologies will be used to represent them. Figure 3 illustrates three specific
mappings, or transformations, defined to create the PSMs, together with the standards
used to express these mappings. For example, one approach is to export the PSM
expressed in UML into XMI format, using standard definitions expressed as either XML
Schema Definitions (XSD) or Document Type Definitions (DTD). This can then be used
as input to a code generation tool that produces interface definitions in Java for each of
the classes defined in the UML.
Usually, a set of rules is built into the code generation tool to perform the transformation.
However, the code generation tool often allows those rules to be specifically defined as
templates in a scripting language.
COM
Component Object Model (COM) is a binary-interface standard for software
componentry introduced by Microsoft in 1993. It is used to enable interprocess
communication and dynamic object creation in a large range of programming languages.
The term COM is often used in the Microsoft software development industry as an
umbrella term that encompasses the OLE, OLE Automation, ActiveX, COM+ and
DCOM technologies.
The essence of COM is a language-neutral way of implementing objects that can be used
in environments different from the one in which they were created, even across machine
boundaries. For well-authored components, COM allows reuse of objects with no
knowledge of their internal implementation, as it forces component implementers to
provide well-defined interfaces that are separate from the implementation. The different
allocation semantics of languages are accommodated by making objects responsible for
their own creation and destruction through reference-counting. Casting between different
interfaces of an object is achieved through the QueryInterface() function. The
preferred method of inheritance within COM is the creation of sub-objects to which
method calls are delegated.
Although the interface standard has been implemented on several platforms, COM is
COM is primarily used with Microsoft Windows. For some applications, COM has been
replaced at least to some extent by the Microsoft .NET framework, and support for Web
Services through the Windows Communication Foundation (WCF). However, COM
objects can be used with all .NET languages through .NET COM Interop.
Networked DCOM uses binary proprietary formats, while WCF encourages the use of
XML-based SOAP messaging. COM is very similar to other component software
interface technologies, such as CORBA and Java Beans, although each has its own
strengths and weaknesses. The characteristics of COM make it most suitable for the
development and deployment of desktop applications, for which it was originally
designed.
COM Interfaces
All COM components must (at the very least) implement the standard IUnknown
interface, and thus all COM interfaces are derived from IUnknown. The IUnknown
interface consists of three methods: AddRef() and Release(), which implement
reference counting and control the lifetime of interfaces; and QueryInterface(), which
by specifying an IID allows a caller to retrieve references to the different interfaces the
component implements. The effect of QueryInterface() is similar to dynamic_cast<>
in C++ or casts in Java and C#.
A COM component's interfaces are required to exhibit the reflexive, symmetric, and
transitive properties. The reflexive property refers to the ability for the
QueryInterface() call on a given interface with the interface's ID to return the same
instance of the interface. The symmetric property requires that when interface B is
retrieved from interface A via QueryInterface(), interface A is retrievable from
interface B as well. The transitive property requires that if interface B is obtainable from
interface A and interface C is obtainable from interface B, then interface C should be
retrievable from interface A.
An interface consists of a pointer to a virtual function table that contains a list of pointers
to the functions that implement the functions declared in the interface, in the same order
that they are declared in the interface. This technique of passing structures of function
pointers is very similar to the one used by OLE 1.0 to communicate with its system
libraries.
COM specifies many other standard interfaces used to allow inter-component
communication. For example, one such interface is IStream, which is exposed by
components that have data stream semantics (e.g. a FileStream component used to read
or write files). It has the expected Read and Write methods to perform stream reads and
writes. Another standard interface is IOleObject, which is exposed by components that
expect to be linked or embedded into a container. IOleObject contains methods that
allow callers to determine the size of the component's bounding rectangle, whether the
component supports operations like 'Open', 'Save' and so on.
COM Classes
Consider a COM class (coclass) named MyObject: once a coclass definition for it has
been formalized in IDL, and a Type Library compiled from it, the onus is on the
individual language compiler to read and appropriately interpret this Type Library and
then produce whatever code (in the specific compiler's language) is necessary for a
developer to implement and ultimately produce the binary executable code which can be
deemed by COM to be of coclass MyObject.
Once an implementation of a COM coclass is built and is available in the system, next
comes the question of how to instantiate it. In languages like C++, we can use the
CoCreateInstance() API in which we specify the CLSID (CLSID_MyObject) of the
coclass as well as the interface (specified by the IID IID_IMyObject) from that coclass
that we want to use to interact with that coclass. Calling CoCreateInstance() like this:
CoCreateInstance(CLSID_MyObject,        // CLSID of the coclass to instantiate
                 NULL,                  // not part of an aggregate
                 CLSCTX_INPROC_SERVER,  // load the in-process (DLL) server
                 IID_IMyObject,         // IID of the interface we want back
                 (void**)&m_pIMyObject); // receives the interface pointer
COM Registry
In Windows, COM classes, interfaces and type libraries are listed by GUIDs in the
registry, under HKEY_CLASSES_ROOT\CLSID for classes and
HKEY_CLASSES_ROOT\Interface for interfaces. The COM libraries use the registry to
locate either the correct local libraries for each COM object or the network location for a
remote service.
Type libraries are registered under HKEY_CLASSES_ROOT\TypeLib. One or more
CLSIDs can be grouped into a type library, which contains the parameters needed for
linking in COM. The rest of the information in the COM parts of the registry serves to
map an application or object to its CLSID.
Reference counting
The most fundamental COM interface of all, IUnknown (from which all COM interfaces
must be derived), supports two main concepts: feature exploration through the
QueryInterface method, and object lifetime management by including AddRef() and
Release(). Reference counts and feature exploration apply to objects (not to each
interface on an object) and thus must have a centralized implementation.
The COM specifications require a technique called reference counting to ensure that
individual objects remain alive as long as there are clients which have acquired access to
one or more of its interfaces and, conversely, that the same object is properly disposed of
when all code that used the object has finished with it and no longer requires it. A COM
object is responsible for freeing its own memory once its reference count drops to zero.
For its implementation, a COM Object usually maintains an integer value that is used for
reference counting. When AddRef() is called via any of the object's interfaces, this integer
value is incremented. When Release() is called, this integer is decremented. AddRef()
and Release() are the only means by which a client of a COM object is able to influence
its lifetime. The internal integer value remains a private member of the COM object and
will never be directly accessible.
The purpose of AddRef() is to indicate to the COM object that an additional reference to
itself has been created, and hence that it must remain alive as long as this reference is
still valid. Conversely, the purpose of Release() is to indicate to the COM object that a
client (or a part of the client's code) has no further need for it and hence if this reference
count has dropped to zero, it may be time to destroy itself.
Certain languages (e.g. Visual Basic) provide automatic reference counting, so that COM
object developers need not explicitly maintain an internal reference counter in their
source code. When using COM in C, explicit reference counting is needed. In C++, a
coder may write the reference counting code or use a smart pointer that manages all the
reference counting.
The following are general guidelines for calling AddRef() and Release() to facilitate
proper reference counting in COM objects:
• Functions (whether object methods or global functions) that return interface
references (via return value or via "out" parameter) should increment the
reference count of the underlying object before returning. Hence internally within
the function or method, AddRef() is called on the interface reference (to be
returned). An example of this is the QueryInterface() method of the IUnknown
interface. Hence it is imperative that developers be aware that the returned
interface reference has already been reference count incremented and not call
AddRef() on the returned interface reference yet another time.
• Release() must be called on an interface reference before that interface's pointer is
overwritten or goes out of scope.
• If a copy is made on an interface reference pointer, AddRef() should be called on
that pointer. After all, in this case, we are actually creating another reference on
the underlying object.
• AddRef() and Release() must be called on the specific interface which is being
referenced since an object may implement per-interface reference counts in order
to allocate internal resources only for the interfaces which are being referenced.
• Extra calls to these functions are not sent out to remote objects over the wire; a
proxy keeps only one reference on the remote object and maintains its own local
reference count.
COM Instantiation
COM standardizes the instantiation (i.e. creation) process of COM objects by requiring
the use of Class Factories. In order for a COM object to be created, two associated items
must exist:
• A Class ID.
• A Class Factory.
Each COM Class or CoClass must be associated with a unique Class ID (a GUID). It
must also be associated with its own Class Factory (that is achieved by using a
centralized registry). A Class Factory is itself a COM object. It is an object that must
expose the IClassFactory or IClassFactory2 (the latter with licensing support) interface.
The responsibility of such an object is to create other objects.
A class factory object is usually contained within the same executable code (i.e. the
server code) as the COM object itself. When a class factory is called upon to create a
target object, this target object's class id must be provided. This is how the class factory
knows which class of object to instantiate.
A single class factory object may create objects of more than one class. That is, two
objects of different class ids may be created by the same class factory object. However,
this is transparent to the COM system.
By delegating the responsibility of object creation into a separate object, a greater level of
abstraction is promoted, and the developer is given greater flexibility. For example,
implementation of the Singleton and other creation patterns is facilitated. Also, the
calling application is shielded from the COM object's memory allocation semantics by
the factory object.
In order for client applications to be able to acquire class factory objects, COM servers
must properly expose them. A class factory is exposed differently, depending on the
nature of the server code. A server which is DLL-based must export a
DllGetClassObject() global function. A server which is EXE-based registers the class
factory at runtime via the CoRegisterClassObject() Windows API function.
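As a sketch of the DLL case, the exported entry point might look as follows. The
DllGetClassObject() signature is the standard one; the coclass and the factory-creation
helper are hypothetical names used only for illustration:

#include <unknwn.h>

extern const CLSID CLSID_SomeObject;                        // the one coclass this server implements
extern HRESULT CreateSomeFactory(REFIID riid, void** ppv);  // hypothetical helper

// COM calls this exported function to obtain the class factory.
STDAPI DllGetClassObject(REFCLSID rclsid, REFIID riid, void** ppv)
{
    if (!IsEqualCLSID(rclsid, CLSID_SomeObject))
        return CLASS_E_CLASSNOTAVAILABLE;    // we don't serve this class
    return CreateSomeFactory(riid, ppv);     // hand out our factory object
}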
The following is a general outline of the sequence of object creation via its class factory:
The object's class factory is obtained via the CoGetClassObject() API (a standard
Windows API). As part of the call to CoGetClassObject(), the Class ID of the object (to
be created) must be supplied. The following C++ code demonstrates this:
IClassFactory* pIClassFactory = NULL;
CoGetClassObject(CLSID_SomeObject, CLSCTX_ALL, NULL, IID_IClassFactory,
                 (LPVOID*)&pIClassFactory);
The above code indicates that the Class Factory object of a COM object, which is
identified by the class id CLSID_SomeObject, is required. This class factory object is
returned by way of its IClassFactory interface. The returned class factory object is then
requested to create an instance of the originally intended COM object. The following
C++ code demonstrates this:
ISomeObject* pISomeObject = NULL;
if (pIClassFactory)
{
    pIClassFactory->CreateInstance(NULL, IID_ISomeObject,
                                   (LPVOID*)&pISomeObject);
    pIClassFactory->Release();
    pIClassFactory = NULL;
}
The above code indicates the use of the Class Factory object's CreateInstance() method
to create an object which exposes an interface identified by the IID_ISomeObject GUID.
A pointer to the ISomeObject interface of this object is returned. Also note that because
the class factory object is itself a COM object, it needs to be released when it is no longer
required (i.e. its Release() method must be called).
The above demonstrates, at the most basic level, the use of a class factory to instantiate
an object. Higher level constructs are also available, some of which do not even involve
direct use of the Windows APIs.
For example, the CoCreateInstance() API can be used by an application to directly
create a COM object without acquiring the object's class factory. However, internally, the
CoCreateInstance() API itself will invoke the CoGetClassObject() API to obtain the
object's class factory and then use the class factory's CreateInstance() method to create
the COM object.
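In other words, the two-step sequence shown above collapses into a single call. Reusing
the hypothetical CLSID_SomeObject and ISomeObject names from earlier:

ISomeObject* pISomeObject = NULL;
HRESULT hr = CoCreateInstance(CLSID_SomeObject,      // which coclass to create
                              NULL,                  // no aggregation
                              CLSCTX_ALL,            // any server context
                              IID_ISomeObject,       // interface requested
                              (LPVOID*)&pISomeObject);
if (SUCCEEDED(hr))
{
    // ... use the object ...
    pISomeObject->Release();   // the returned pointer carries one reference
}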
VBScript supplies the New keyword as well as the CreateObject() global function for
object instantiation. These language constructs encapsulate the acquisition of the class
factory object of the target object (via the CoGetClassObject() API) followed by the
invocation of the IClassFactory::CreateInstance() method. Other languages, e.g.
PowerBuilder's PowerScript may also provide their own high-level object creation
constructs. However, CoGetClassObject() and the IClassFactory interface remain the
most fundamental object creation technique.
COM Reflection
At the time of COM's inception, the only way for a client to find out what features an
object offered was to actually create an instance and call its QueryInterface method
(part of the required IUnknown interface). This style of exploration is awkward for many
purposes, such as selecting appropriate components for a given task or building tools
that help a developer understand how to use the methods an object provides.
As a result, COM Type Libraries were introduced, through which components can
describe themselves. A type library contains information such as the CLSID of a
component, the IIDs of the interfaces the component implements, and descriptions of
each of the methods of those interfaces. Type libraries are typically used by Rapid
Application Development (RAD) environments such as Visual Basic or Visual Studio to
assist developers of client applications.
Sample Code
Following are two examples that illustrate the COM concepts covered in the article. The
code is also contained in the article's sample project.
Using a COM object with a single interface
The first example shows how to use a COM object that exposes a single interface. This is
the simplest case you'll ever encounter. The code uses the Active Desktop coclass
contained in the shell to retrieve the filename of the current wallpaper. You will need to
have the Active Desktop installed for this code to work.
The steps involved are:
1. Initialize the COM library.
2. Create a COM object used to interact with the Active Desktop, and get an
IActiveDesktop interface.
3. Call the GetWallpaper() method of the COM object.
4. If GetWallpaper() succeeds, print the filename of the wallpaper.
5. Release the interface.
6. Uninitialize the COM library.
WCHAR wszWallpaper [MAX_PATH];
CString strPath;
HRESULT hr;
IActiveDesktop* pIAD;
// 1. Initialize the COM library (make Windows load the DLLs).
//    Normally you would call this in your InitInstance() or other
//    startup code. In MFC apps, use AfxOleInit() instead.
CoInitialize ( NULL );
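The sample breaks off here; a sketch of the remaining steps 2 to 6 might look as follows
(CLSID_ActiveDesktop, IID_IActiveDesktop, and the IActiveDesktop interface come from the
shell headers; error handling is abbreviated):

// 2. Create a COM object to interact with the Active Desktop and
//    ask for its IActiveDesktop interface in the same call.
hr = CoCreateInstance(CLSID_ActiveDesktop, NULL, CLSCTX_INPROC_SERVER,
                      IID_IActiveDesktop, (void**)&pIAD);
if (SUCCEEDED(hr))
{
    // 3. Call the GetWallpaper() method of the COM object.
    hr = pIAD->GetWallpaper(wszWallpaper, MAX_PATH, 0);

    // 4. If GetWallpaper() succeeded, keep the filename of the wallpaper.
    if (SUCCEEDED(hr))
        strPath = wszWallpaper;

    // 5. Release the interface.
    pIAD->Release();
}

// 6. Uninitialize the COM library.
CoUninitialize();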
Containment
Everything in COM revolves around interfaces, and containment too is implemented at the
interface level. COM containment works like C++ containment: the outer component is a
client of an inner component and holds pointers to interfaces on the inner component.
The inner component is not exposed directly to the client; only the IUnknown of the outer
component is exposed. In containment, the outer component forwards (delegates) calls to
the inner component.
Containment can be implemented in two scenarios. In the first, the outer component
implements its own interfaces and uses the interfaces of the inner component internally.
In the second, the outer component reimplements an interface supported by the inner
component and forwards each call to the inner component's interface.
In containment, the outer component acts as a client using the interface of the inner
component. The inner component and the client are unaware that they are taking part in a
containment arrangement; only the outer component has to be modified to support it.
Sample code
This article explores the first scenario to explain the containment technique. In the
sample code, the outer component uses functionality provided by an inner component. The
outer component needs some modification to accommodate the inner component as a
contained object; the client and the inner component are unaffected and remain unaware
that they are taking part in containment. The sample demonstrates that the client has no
idea the outer component is using the services of an inner component.
The outer component, i.e. CMath, has a new member variable m_pISquare, which is a
pointer to the ISquare interface on the inner component.
if (pMath == NULL) {
    return E_OUTOFMEMORY;
}

// Here the outer component initializes the inner component.
// CoCreateInstance is called by the outer component during its
// creation; it queries for the ISquare interface on the inner
// component and, if the call succeeds, stores the pointer in
// its member variable m_pISquare.
cout << "Call to Create the Inner Component" << endl;
hResult = CoCreateInstance(CLSID_CSquare, NULL,
                           CLSCTX_INPROC_SERVER,
                           IID_ISquare, (void**)&pMath->m_pISquare);
cout << "CoCreateInstance for CSquare has been called" << endl;

hResult = pMath->QueryInterface(iid, ppv);
if (SUCCEEDED(hResult)) {
    pMath->Release();
}
}
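The forwarding itself is then a one-liner per method. A sketch, assuming the ISquare
interface exposes a hypothetical Square() method with the signature shown:

// The outer component's implementation simply delegates to the
// contained inner component (hypothetical method signature).
STDMETHODIMP CMath::Square(long value, long* pResult)
{
    return m_pISquare->Square(value, pResult);
}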
Interfaces and Versioning
A good versioning mechanism allows one system component to be updated without
requiring updates to all the other components in the system. Versioning in COM is
implemented using interfaces and IUnknown::QueryInterface. The COM design
completely eliminates the need for things like version repositories or central management
of component versions.
When a software module is updated, it is generally to add new functionality or to improve
existing functionality. In COM, you add new functionality to your component object by
adding support for new interfaces. Since the existing interfaces don't change, other
components that rely on those interfaces continue to work. Newer components that know
about the new interfaces can use those newly exposed interfaces. Because
QueryInterface calls are made at run time without any expensive call to some
"capabilities database" (as used in some other system object models), the current
capabilities of a component object can be efficiently evaluated each time the component
is used; when new features become available, applications that know how to use them
will begin to do so immediately.
Improving existing functionality is even easier. Because the syntax and semantics of an
interface remain constant, you are free to change the implementation of an interface
without breaking other developers' components that rely on that interface. For example,
say you have a component that supports the (hypothetical) IStack interface, which would
include methods like Push and Pop. You've currently implemented the interface as an
array, but you decide that a linked list would be more appropriate. Since the methods and
parameters do not change, you can freely replace the old implementation with a new one,
and applications that use your component will get the improved linked list functionality
"for free."
Windows and OLE use this technique to provide improved system support. For example,
in OLE today, structured storage is implemented as a set of interfaces which currently use
the C run-time file input/output functions internally. In Windows 2000 (the next version
of Windows NT), those same interfaces will write directly to the file system. The syntax
and semantics of the interfaces remain constant; only the implementation changes.
Existing applications will be able to use the new implementation without any changes;
they get the improved functionality "for free."
The combination of the use of interfaces (immutable, well-defined "functionality sets"
that are extruded by components) and QueryInterface (the ability to cheaply determine
at run time the capabilities of a specific component object) enable COM to provide an
architecture in which components can be dynamically updated, without requiring updates
to other reliant components. This is a fundamental strength of COM over other proposed
object models. COM solves the versioning/evolution problem where the functionality of
objects can change independently of clients of that object without rendering existing
clients incompatible. In other words, COM defines a system in which components
continue to support the interfaces through which they provided services to older clients,
as well as support new and better interfaces through which they can provide services to
newer clients. At run time old and new clients can safely coexist with a given component
object. Errors can only occur at easily handled times: bind time or during a
QueryInterface call. There is no chance for a random crash such as those that occur
when an expected method on an object simply does not exist or its parameters have
changed.
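The pattern on the client side is simple. As a sketch, assume a newer version of the
stack component adds a hypothetical IStack2 interface with a Peek() method, and pUnknown
is any interface pointer the client already holds on the object:

int top = 0;
IStack2* pIStack2 = NULL;
if (SUCCEEDED(pUnknown->QueryInterface(IID_IStack2, (void**)&pIStack2)))
{
    pIStack2->Peek(&top);      // new component: use the new capability
    pIStack2->Release();
}
else
{
    // Old component: IStack2 is not supported, so fall back to the
    // original IStack behaviour. No crash, no capabilities database.
}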
Language Independence
Components can be implemented in a number of different programming languages and
used from clients that are written using completely different programming languages.
Again, this is because COM, unlike an object-oriented programming language, represents
a binary object standard, not a source code standard. This is a fundamental benefit of a
component software architecture over object-oriented programming (OOP) languages.
Objects defined in an OOP language typically interact only with other objects defined in
the same language. This necessarily limits their reuse. At the same time, an OOP
language can be used in building COM components, so the two technologies are actually
quite complementary. COM can be used to "package" and further encapsulate OOP
objects into components for widespread reuse, even within very different programming
languages.
.NET Components
• It is a platform-neutral framework.
• .NET provides a common set of class libraries, which can be accessed from any
.NET-based programming language. There is no separate set of classes and
libraries for each language: if you know any one .NET language, you can write
code in any .NET language!
Assemblies were introduced mainly to solve the problems of versioning and DLL conflicts
and to simplify the process of deployment. Most end users have encountered versioning or
deployment problems when installing a new application or a new version of an existing
one. In many situations, installing a new application breaks an existing one, and the
system cannot recover from that. Many developers have spent a lot of time trying to keep
registry entries consistent in order to activate a COM class. All this frustration stems
from the versioning problems that occur with component-based applications.
Versioning Problems
Two versioning problems arise with Win32 applications. The first is that versioning
rules are enforced by the operating system rather than between the pieces of an
application. The current approach to versioning is backward compatibility between new
code and old, which is hard to maintain in most applications; besides that, only a
single version of an application may be present and executing on a computer at any given
time. The second problem is that there is no way to preserve consistency between the
group of components that were built together and the group that is present at run time.
DLL Conflicts
As a result of these two versioning problems, DLL conflicts occur: installing a new
application may break an existing one because the new application installed a new
version of a component or DLL that is not fully backward compatible with the previous
one.
The Solution
To solve these problems, Microsoft took a new approach in its Windows 2000 platform.
Windows 2000 lets you place the DLL files used by your application in the same directory
as the application's .exe file, so that the application uses exactly the versions it was
designed for. Besides that, Windows 2000 locks the files in the System32 directory to
prevent their replacement when new applications are installed; this keeps the DLLs used
by existing applications from being replaced, and so prevents those applications from
crashing.
The .NET Framework introduces assemblies as an evolution toward a complete solution to
versioning problems and DLL conflicts. Assemblies, by their core design, give developers
the ability to specify versioning rules between components, provide the infrastructure
required to enforce those rules, and allow multiple versions of a component to run side
by side at the same time.
How Does It Work?
You may recall that an assembly manifest contains the versioning requirements of the
current assembly. The version of the assembly and the versions of the required
assemblies and/or components are recorded in the manifest. So, when you run an
application, the .NET runtime checks the assembly manifest of your application and
executes the version of assemblies or components that are recorded in the manifest. To
gain the advantages of versioning you must give your assembly a strong name (will be
explained later).
Assembly Version
An assembly can have two types of versions. The first one which we call "Version
Number" consists of a four-part string with the following format:
<Major Version>.<Minor Version>.<Build Number>.<Revision Number>
For example a version number of 3.5.20.1 indicates 3 as the major version, 5 as the minor
version, 20 as the build number, and 1 as the revision number. Note that the version
number is stored in the assembly's manifest. The second type of version is called the
"Informational Version": a string that contains the version number plus additional
information such as packaging, marketing literature, or the product name. This type of
version is used for informational purposes only and plays no part in versioning
decisions at runtime.
Assembly Locations
An assembly can be placed into one of the following three locations:
1. Under the application directory or subdirectories. This is the most common
location for placing an assembly. If your assembly uses a culture other than the
default one which is "en-US", you have to put it under a subdirectory with this
culture name.
2. In the global assembly cache, a machine-wide code cache installed wherever the
common language runtime is installed. You deploy your assembly to the global
assembly cache when you want to share it with multiple applications.
3. On an FTP server.
The location of an assembly determines whether the common language runtime can
locate it when it is referenced and whether this assembly can be shared with other
applications or not.
The Manifest
How does .NET make sure that the right version of every assembly is used?
In the bad old days of COM, shared components were usually dumped into that
unorganized Windows warehouse, the System directory, and then "registered" in
Windows so other programs could locate them. This is a big reason why the registry is
such a critical and sensitive part of Windows. Without it, one piece of software can't
locate another one. (The registry is still used in .NET, by the way. But .NET doesn't use it
to identify components anymore.) In .NET, every assembly starts with something called a
manifest that takes the place of the information that was formerly placed in the registry.
The manifest contains metadata (data about data) telling the CLR what it needs to know
to execute the assembly instructions. One of the things the CLR needs to know is the
version number of the components used by the assembly. To illustrate the point, let's
compare the manifests of a very simple .NET 1.1 component and its 2.0 counterpart (DLL
files) using the ILDASM utility, a standard tool installed with the .NET Framework that
lets you look inside the IL code in an assembly.
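As a rough illustration (the assembly name SomeComponent is hypothetical and the output
is abbreviated), the kind of manifest ILDASM displays for the 1.1 build records both the
component's own version and the versions of the assemblies it references:

.assembly extern mscorlib
{
  .ver 1:0:5000:0          // the 1.1 build binds to the 1.1 class library
}
.assembly SomeComponent
{
  .ver 1:1:0:0             // <Major>:<Minor>:<Build>:<Revision>
}

The 2.0 build of the same component instead records .ver 2:0:0:0 for mscorlib. The CLR
reads exactly this metadata when deciding which versions to load.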
.NET Application Domains
Overview
Before the .NET Framework, the only way to isolate applications running on the same
machine was by means of process boundaries. Each application ran within a process, and
each process had its own boundaries and memory addresses; this is how isolation from
other processes was achieved.
The .NET Framework introduces an additional boundary called the application domain. Each
application runs within its main process boundaries and within its application domain
boundaries, so you can think of the application domain as an extra shell that isolates
the application and makes it more secure and robust.
That, however, is not the main advantage of application domains. The main advantage is
the ability to run several application domains in a single process, while maintaining
the same level and quality of isolation that would exist in separate processes, without
the need for cross-process calls or for switching between processes.
Advantages
Application domains offer the following advantages.
• In terms of isolation, code running in one application domain can not access code
or resources running in another application domain.
• In terms of security, you can run more than one set of web controls in a single
browser process. Each set of them is running in a separate application domain so
each one can not access the data or resources of the other sets. You can control
the permissions granted to a given piece of code by controlling the application
domain inside which the code is running.
• In terms of robustness, a fault in code running in one application domain cannot
affect other applications, even though they are all running inside the same
process. An individual application domain can be stopped without stopping the
entire process; you simply unload the code running in that application domain.
From these advantages you can see that application domains let you build rather robust
.NET applications; they increase the isolation, stability, and security of your
application.
Relation Between Application Domains and Assemblies
Most development and runtime environments have a notion of the building blocks of an
application. Assemblies are the building blocks of .NET Framework applications; they are
the fundamental unit of deployment. An assembly consists of types and resources working
together to form a logical unit of your application's functionality. You can divide your
.NET application into assemblies, and an assembly file can have an .EXE or a .DLL
extension.
As mentioned previously, you can run more than one application domain within your
application. Each application domain runs a given piece of code, and an assembly is
exactly that piece of code: each application domain can run an assembly within the
overall application. This is the relation between application domains and assemblies.
Whenever we start an application, we are actually starting a Win32 process and running
our application inside it. These processes use resources such as memory, kernel objects,
and so on. Each Win32 process contains at least one thread (and we often end up running
multiple threads), and if we run other tasks or open other applications from our
application, those tasks belong to our particular Win32 process, running on a collection
of threads.
One characteristic of a Win32 process is that it acts much like a virtual boundary:
communicating within a process is easy, while communicating beyond it is restricted to a
certain degree. Interacting with other Win32 processes requires special mechanisms,
because there are security contexts to take into account, along with the need to
restrict what a Win32 process can and should do on a particular system. So who takes
care of running a process, and what is involved in running one successfully? The
execution of a process, and of our code within it, is the domain and prerogative of the
operating system, which has to handle many complex situations and issues while
maintaining an active process.
.NET Context
All applications run within an application domain, and an application domain can contain
numerous contexts. An application executing within a given application domain is not
necessarily tied to one specific context; it is free to switch contexts (meaning it is
context-agile). Getting a reference to such an object therefore means obtaining a direct
reference, which makes it impossible to hook into and process the messages the object
receives (method calls, exceptions, etc.). By deriving an object from ContextBoundObject
you force the runtime to isolate the object in a single context, where it remains for
its entire lifetime. Any access to this object from other contexts then goes through a
runtime-generated proxy rather than a direct reference, and because it is a proxy, it
becomes possible to write your own sinks that hook into the message chain and perform
any type of processing you'd like. This is comparable to remoting in .NET (and indeed
most of the objects needed for this live in the System.Runtime.Remoting namespace),
albeit on a smaller scale.
So how does this relate to real-world programming? Certain services need to be applied
to every layer of the application and are not domain-specific. In the typical three-tier
application, with a data layer, a business layer, and a presentation layer, all layers
need common services such as logging and exception management. The most common solution
is to write a separate utility library and reference it from each project, but this can
create unneeded dependencies and, in the case of exceptions, means lots of try/catch
blocks with logging code in each catch block (in a web application this can be
centralized in Application_OnError, which is great in theory but is usually not used out
in the real world). This can get quite messy in larger applications. An alternative is
to centralize exception logging by using a ContextAttribute together with a
ContextBoundObject, which allows the attribute to hook into the message chain.
A channel is also needed inside an application domain when calling objects across
contexts. If you've previously written COM+ components, you already know about COM+
contexts; contexts in .NET are very similar. A context is a boundary containing a
collection of objects. As with a COM+ context, the objects in such a collection are
subject to the same usage rules, which are defined by the context attributes.
As you already know, a single process can have multiple application domains; an
application domain is something like a subprocess with security boundaries. An
application domain can have different contexts. A context is used to group objects with similar
execution requirements. Contexts are composed from a set of properties and are used for
interception: when a context-bound object is accessed by a different context, an
interceptor can do some work before the call reaches the object. This can be used for
thread synchronization, transactions, and security management, for example.
A class derived from MarshalByRefObject is bound to the application domain. Outside
the application domain a proxy is needed to access the object. A class derived from
ContextBoundObject, which itself derives from MarshalByRefObject, is bound to a
context. Outside the context, a proxy is needed to access the object.
Context-bound objects can have context attributes. A context-bound object without
context attributes is created in the context of the creator. A context-bound object with
context attributes is created in a new context or in the creator’s context if the attributes
are compatible.
To further understand contexts, you must be familiar with these terms:
• Creating an application domain creates the default context in this application domain.
If a new object is instantiated that needs different context properties, a new context is
created.
• Context attributes can be assigned to classes derived from ContextBoundObject. You
can create a custom attribute class by implementing the interface IContextAttribute.
The .NET Framework has one context attribute class in the namespace
System.Runtime.Remoting.Contexts: SynchronizationAttribute.
• Context attributes define context properties that are needed for an object. A context
property class implements the interface IContextProperty. Active properties
contribute message sinks to the call chain. The class ContextAttribute implements
both IContextProperty and IContextAttribute, and can be used as a base class for
custom attributes.
• A message sink is an interceptor for a method call. With a message sink, method calls
can be intercepted. Properties can contribute to message sinks.
Activation
A new context is created if an instance of a class that’s created needs a context different
from the calling context. The attribute classes that are associated with the target class are
asked if all the properties of the current context are acceptable. If any of these properties
are unacceptable, the runtime asks for all property classes associated with the attribute
class and creates a new context. The runtime then asks the property classes for the sinks
they want to install. A property class can implement one of the IContributeXXXSink
interfaces to contribute sink objects. Several of these interfaces are available to go with
the variety of sinks.
Communication Between Contexts
How does the communication between contexts happen? The client uses a proxy instead
of the real object. The proxy creates a message that is transferred to a channel, and sinks
can intercept. Does this sound familiar? It ought to. The same mechanism is used for
communication across different application domains or different systems. A TCP or
HTTP channel is not required for the communication across contexts, but a channel is
used here too. CrossContextChannel can use the same virtual memory in both the client
and server sides of the channel, and formatters are not required for crossing contexts.
.NET Remoting
Remoting enables software components to interact across application domains. The
components interacting with each other can be in different processes and on different
systems, which enables us to create n-tier distributed applications. Here is a
simplified illustration of the .NET Remoting architecture.
The client object is the object or component that needs to communicate with (call) a
remote object.
The server (remote) object receives the request from the client object and responds.
Proxy object: when the client object needs to call a method of the remote object, it
uses a proxy object to do so. Every public method defined in the remote object class can
be made available in the proxy and can thus be called from clients. The proxy object
acts as a representative of the remote object and ensures that all calls made on the
proxy are forwarded to the correct remote object instance. There are two types of proxy:
the transparent proxy and the real proxy.
The TransparentProxy holds a list of all the classes and interface methods of the remote
object. It checks whether a call made by the client object is a valid method of the
remote object, and whether an instance of the remote object resides in the same
application domain as the proxy. If so, a simple method call is routed to the remote
object. If the object is in a different application domain, the call parameters on the
stack are packaged into an IMessage object and forwarded to a RealProxy instance by
calling its Invoke method. The RealProxy class is then responsible for forwarding
messages to the remote object.
Formatter
Formatting can be done in three ways:
a) Binary,
b) SOAP, or
c) Custom.
The remoting framework comes with two formatters: the binary and SOAP formatters. The
binary formatter is extremely fast and encodes method calls in a proprietary binary
format, while the SOAP formatter is slower. Developers can also write their own custom
formatter and use that instead.
Channels
Channels are used to transport messages to and from remote objects. You can choose a
TcpChannel or an HttpChannel, or extend one of these to suit your requirements.
HTTP channel: The HTTP channel transports messages to and from remote objects using the
SOAP protocol. All messages are passed through the SOAP formatter, where the message is
converted into XML and serialized, and the required SOAP headers are added to the
stream. The resulting data stream is then transported to the target URI using the HTTP
protocol.
TCP channel: The TCP channel uses a binary formatter to serialize all messages to a
binary stream and transports the stream to the target URI using the TCP protocol. It is
also possible to configure the TCP channel to use the SOAP formatter.
Connectors
Connector components
The connector for SAP is written in Java and consists of two parts: the vision connector
framework and connector modules (the connector's application-specific component, the
connector framework, and business object handlers). The vision connector framework
provides a metadata-driven layer of abstraction to the connector framework used by all
WebSphere business integration system adapters. The vision connector framework
extends the methods in the Adapter Framework. The connector modules extend the
methods in the vision connector framework and communicate with an SAP application.
Figure illustrates the architecture of the connector and the relationship of the system-wide
and vision connector frameworks. The visionConnectorAgent class can implement any
number of connector modules.
Figure Architecture of the connector for SAP
An actor is an active architectural component of a software system.
Actors interact with their environment via ports. A dynamic actor is created and
destroyed by the containing actor.
Ports provide an interface between actors using protocols that define how information
should be accessed and changed.
ROOM Port Type
• Relay port - Shares the interface between a contained class and the container
class.
• Conjugated port - Handles both the outgoing and incoming messages of its
protocol. Conjugated ports are usually white colored.
• External end port - Communicates with the actor's state machine or behavior.
• Internal end port - Connects a component actor to the container actor's behavior.
It is illustrated using the same notation as an external end port, but the port is
placed inside the container border rather than on it.
Transition points
• Initial transition point - Indicates the first transition within the state.
• Choice point - Indicates a choice between transition path segments. One path will
be the default.
• Non-extending transition point - Marks the end of a transition that does not extend
outside of the state context.
• Non-extending transition point - Illustrates an incoming transition.
Contextual Framework
Contextual Composition
The final form of composition in our little list is contextual composition. The typical
examples here are EJB containers and .NET enterprise services contexts. Unlike
connection-oriented composition, contextual composition is implicit and asymmetric. A
component instance inside a container instance benefits from container-supplied services
and from a container-maintained abstraction (simplification) of the world outside the
container. For example, a container can add transactional processing and limit what a
contained instance can and cannot see.
How is a container different from a platform? It depends. Some platforms actually have
container properties. For example, operating systems that strongly isolate processes form
process containers. As a result, two applications running in separate processes of such an
operating system can communicate with each other only through the mechanisms
provided and controlled by the operating system. Other platforms don't form containers:
They don't intercept all incoming and all outgoing communications and thus can be
bypassed.
How is a container different from a set of connections? A container completely encloses
its contained instances, while the same can be established only by convention for peer-to-
peer connections. Also, a container is implicitly connected to all instances inside—all
enclosed instances uniformly benefit from their single shared container instance's
services and all are uniformly constrained by that container's policies.
EJB containers
An Enterprise JavaBeans (EJB) container provides a run-time environment for enterprise
beans within the application server. The container handles all aspects of an enterprise
bean's operation within the application server and acts as an intermediary between the
user-written business logic within the bean and the rest of the application server
environment.
One or more EJB modules, each containing one or more enterprise beans, can be installed
in a single container. The EJB container provides many services to the enterprise bean,
including the following:
• Beginning, committing, and rolling back transactions as necessary.
• Maintaining pools of enterprise bean instances ready for incoming requests and
moving these instances between the inactive pools and an active state, ensuring
that threading conditions within the bean are satisfied.
• Most importantly, automatically synchronizing data in an entity bean's instance
variables with corresponding data items stored in persistent storage.
EJB Roles
In RMI there are two fundamental roles in the RMI environment: the client of the remote
object, and the object itself, which acts as a kind of server or service provider. These two
roles exist in the EJB environment as well, but EJB adds a third role, called the container
provider. The container provider is responsible for implementing all the extra services for
an EJB object: transaction processing, security, object persistence, and resource pooling.
If you're familiar with CORBA, you can think of the EJB container as being roughly
equivalent to the ORB in CORBA, with a few of the CORBA services thrown in as well.
In EJB, however, the container is strictly a server-side entity. The client doesn't need its
own container to use EJB objects, but an EJB object needs to have a container in order to
be exported for remote use. Figure shows a conceptual diagram of how the three EJB
roles interact with each other.
Cross-Development Environment
Developers new to embedded development often struggle with the concepts and
differences between native and cross-development environments. Indeed, there are often
three compilers and three (or more) versions of standard header files such as stdlib.h.
Debugging an application on your target embedded system can be difficult without the
right tools and host-based utilities. You must manage and separate the files and utilities
designed to run on your host system from those you intend to use on your target.
When we use the term host in this context, we are referring to the development
workstation that is sitting on your desktop and running your favorite Linux desktop
distribution. Conversely, when we use the term target we are referring to your embedded
hardware platform. Therefore, native development denotes the compilation and building
of applications on and for your host system. Cross-development denotes the compilation
and building of applications on the host system that will be run on the embedded system.
Keeping these definitions in mind will help you stay on track through this chapter.
Figure shows the layout of a typical cross-development environment. A host PC is
connected to a target board via one or more physical connections. It is most convenient if
both serial and Ethernet ports are available on the target. Later when we discuss kernel
debugging, you will realize that a second serial port can be a very valuable asset.
Figure Cross-development setup
In the most common scenario, the developer has a serial terminal on the host connected to
the RS-232 serial port, possibly one or more Telnet terminal sessions to the target board,
and potentially one or more debug sessions using Ethernet as the connection medium.
This cross-development setup provides a great deal of flexibility. The basic idea is that
the host system provides the horsepower to run the compilers, debuggers, editors, and
other utilities, while the target executes only the applications designed for it. Yes, you
can certainly run compilers and debuggers on the target system, but we assume that your
host system contains more resources, including RAM, disk storage, and Internet
connectivity. In fact, it is not uncommon for a target embedded board to have no human-
input devices or output displays.
Cross compiler
A cross compiler is a compiler capable of creating executable code for a platform other
than the one on which the compiler runs. Cross compiler tools are used to generate
executables for embedded systems or for multiple platforms. They are used to compile for
a platform on which compiling is not feasible, such as microcontrollers that don't
support an operating system. Cross compilers have also become more common in
paravirtualization, where a system may have one or more platforms in use.
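For example, the same trivial source file can be built natively or for an ARM target
simply by invoking a different compiler. The arm-linux-gnueabi- toolchain prefix used
here is one common choice; prefixes vary between toolchains:

// hello.cpp - identical source for host and target.
//
//   native build (runs on the host):   g++ -o hello hello.cpp
//   cross build (runs on the target):  arm-linux-gnueabi-g++ -o hello hello.cpp
#include <iostream>

int main()
{
    std::cout << "Hello from the target board" << std::endl;
    return 0;
}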
Uses of cross compilers
The fundamental use of a cross compiler is to separate the build environment from the
target environment. This is useful in a number of situations:
• Embedded computers where a device has extremely limited resources. For
example, a microwave oven will have an extremely small computer to read its
touchpad and door sensor, provide output to a digital display and speaker, and to
control the machinery for cooking food. This computer will not be powerful
enough to run a compiler, a file system, or a development environment. Since
debugging and testing may also require more resources than are available on an
embedded system, cross-compilation can be less involved and less prone to errors
than native compilation.
• Compiling for multiple machines. For example, a company may wish to support
several different versions of an operating system or to support several different
operating systems. By using a cross compiler, a single build environment can be
set up to compile for each of these targets.
• Compiling on a server farm. Similar to compiling for multiple machines, a
complicated build that involves many compile operations can be executed across
any machine that is free regardless of its brand or current version of an operating
system.
• Bootstrapping to a new platform. When developing software for a new platform,
or the emulator of a future platform, one uses a cross compiler to compile
necessary tools such as the operating system and a native compiler.
• Compiling native code for emulators of older, now-obsolete platforms like the
Commodore 64 or Apple II by enthusiasts who use cross compilers that run on a
current platform (such as Aztec C's MS-DOS 6502 cross compilers running under
Windows XP).
Use of virtual machines (such as Java's JVM) resolves some of the reasons for which
cross compilers were developed. The virtual machine paradigm allows the same compiler
output to be used across multiple target systems.
Typically the hardware architecture differs (e.g. compiling a program destined for the
MIPS architecture on an x86 computer) but cross-compilation is also applicable when
only the operating system environment differs, as when compiling a FreeBSD program
under Linux, or even just the system library, as when compiling programs with uClibc on
a glibc host.
Component-Oriented Programming
"Component Oriented Programming offers a unique programming-centered approach to
component-based software development that delivers the well-developed training and
practices you need to successfully apply this cost-effective method. There will be a
unified component infrastructure for building component software using JavaBeans, EJB,
OSGi, CORBA, CCM, .NET, and Web services. Component-oriented programming
supports constructing software systems by composing independent components into a
software architecture. However, existing approaches decouple implementation code from
architecture, allowing inconsistencies, causing confusion, violating architectural
properties, and inhibiting software evolution.
Principles of Component-Oriented Programming
Systems that support component-oriented programming and the programmers that use
them adhere to a set of core principles that continues to evolve. The most important
of these include:
• Separation of interface and implementation
• Binary compatibility
• Language independence
• Location transparency
• Concurrency management
• Version control
• Component-based security
Separation of Interface from Implementation
The fundamental principle of component-oriented programming is that the basic unit in
an application is a binary-compatible interface. The interface provides an abstract service
definition between a client and the object. This principle contrasts with the object-
oriented view of the world that places the object rather than its interface at the center. An
interface is a logical grouping of method definitions that acts as the contract between the
client and the service provider. Each provider is free to provide its own interpretation of
the interface—that is, its own implementation. The interface is implemented by a black-
box binary component that completely encapsulates its interior. This principle is known
as separation of interface from implementation.
To use a component, the client needs to know only the interface definition (the service
contract) and be able to access a binary component that implements that interface. This
extra level of indirection between the client and the object allows one implementation of
an interface to be replaced by another without affecting client code. The client doesn’t
need to be recompiled to use a new version. Sometimes the client doesn’t even need to be
shut down to do the upgrade. Provided the interface is immutable, objects implementing
the interface are free to evolve, and new versions can be introduced. To implement the
functionality promised by an interface inside a component, you use traditional object-
oriented methodologies, but the resulting class hierarchies are usually simpler and easier
to manage. Another effect of using interfaces is that they enable reuse. In
object-oriented programming, the basic unit of reuse is the object.
In theory, different clients should be able to use the same object. Each reuse instance
saves the reusing party the amount of time and effort spent implementing the object.
Reuse initiatives have the potential for significant cost reduction and reduced product-
development cycle time. One reason why the industry adopted object-oriented
programming so avidly was its desire to reap the benefits of reuse.
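As a minimal sketch (the ILogger interface and both implementations are hypothetical),
the client below compiles against the contract alone and never learns which provider it
is actually using:

// The contract: a logical grouping of method definitions.
struct ILogger
{
    virtual void Log(const char* message) = 0;
    virtual ~ILogger() {}
};

// One provider's interpretation of the contract...
class FileLogger : public ILogger
{
public:
    void Log(const char* message) override { /* append to a log file */ }
};

// ...and another, fully interchangeable one.
class ConsoleLogger : public ILogger
{
public:
    void Log(const char* message) override { /* write to the console */ }
};

void Client(ILogger& logger)    // the client sees only the interface
{
    logger.Log("work done");
}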
Binary Compatibility Between Client and Server
Component-oriented programming revolves around packaging code into components, i.e.,
binary building blocks. Changes to the component code are contained in the binary unit
hosting it; you don’t need to recompile and redeploy the clients. However, the ability to
replace and plug in new binary versions of the server implies binary compatibility
between the client and the server: the client's code must find at runtime exactly the
in-memory binary layout of the component entry points that it expects. This binary
compatibility is the basis for the contract between the component and
the client.
Language Independence
In component-oriented programming, the server is developed independently of the client.
Because the client interacts with the server only at runtime, the only thing that binds the
two is binary compatibility. A corollary is that the programming languages that
implement the client and server should not affect their ability to interact at runtime.
Language independence means exactly that: when you develop and deploy components
your choice of programming language should be irrelevant. Language independence
promotes the interchangeability of components, and their adoption and reuse.
Location Transparency
A component-based application contains multiple binary components. These components
can all exist in the same process, in different processes on the same machine, or on
different machines on a network. Recently, with the advent of web services, components
can also be distributed across the Internet. The underlying component technology is
required to provide a client with location transparency, which allows the client code to
be independent of the actual location of the object it uses. Location transparency means
there is nothing in the client’s code pertaining to where the object executes.
The same client code must be able to handle all cases of object location although the
client should be able to insist on a specific location as well. Note that in the figure, the
object can be in the same process (e.g., Process 1 on Machine A), in different processes
on the same machine (e.g., Process 1 and Process 2 on Machine A), on different
machines in the same local network, or even across the Internet (e.g., Machines B and C).
Location transparency is crucial to component-oriented programming for a number of
reasons. First, it lets you develop the client and components locally (which leads to easier
and more productive debugging), yet deploy the same code base in distributed scenarios.
Second, the choice of using the same process for all components, or multiple processes
for multiple machines, has a significant impact on performance and ease of management
versus scalability, availability, robustness, throughput, and security. Organizations have
different priorities and preferences for these tradeoffs, yet the same set of components
from a particular vendor or team should be able to handle all scenarios. Third, the
location of components tends to change as the application’s requirements evolve over
time.
Concurrency Management
A component developer can’t possibly know in advance all the possible ways in which a
component will be used and particularly whether it will be accessed concurrently by
multiple threads. The safest course is for you to assume that the component will be used
in concurrent situations and to provide some mechanism inside the component for
synchronizing access. However, this approach has two flaws. First, it may lead to
deadlocks; if every component in the application has its own synchronization lock, a
deadlock can occur if two components on different threads try to access each other.
Second, it’s an inefficient use of system resources for all components in the application to
be accessed by the same thread.
The underlying component technology must provide a concurrency management service: a
way for components to participate in some application-wide synchronization mechanism,
even when the components are developed separately. In addition, the underlying
component technology should allow components and clients to provide their own
synchronization solutions for fine-grained control and optimized performance.
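As a sketch of the component-supplied approach described above (plain standard C++,
hypothetical names), a component can protect its own state with an internal lock so that
concurrent callers are safe by default:

#include <mutex>

class Counter
{
    std::mutex m_lock;     // per-component lock guarding internal state
    long       m_value = 0;
public:
    void Increment()
    {
        std::lock_guard<std::mutex> guard(m_lock);  // serialize concurrent callers
        ++m_value;
    }
};

// The caveat from the text applies: if every component holds its own
// lock and two components on different threads call into each other,
// the application can deadlock.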
Versioning Support
Component-oriented programming must allow clients and components to evolve
separately. Component developers should be able to deploy new versions (or just fixes)
of existing components without affecting existing client applications. Client developers
should be able to deploy new versions of the client application and expect it to work with
older versions of components. The underlying component technology should support
versioning, which allows a component to evolve along different paths, and for different
versions of the same component to be deployed on the same machine, or side by side. The
component technology should also detect incompatibility as soon as possible and alert the
client.
Component-Based Security
In component-oriented programming, components are developed separately from the
client applications that use them. Component developers have no way of knowing how a
client application or end user will try to use their work. A benign component could be
used maliciously to corrupt data or transfer funds between accounts without proper
authorization or authentication. Similarly, a client application has no way to know
whether it’s interacting with a malicious component that will abuse the credentials the
client provides. In addition, even if both the client and the component have no ill intent,
the end application user can still try to hack into the system or do some other damage
(even by mistake).