
2018 22nd International Conference on Information Visualisation

GraphVR: A Virtual Reality Tool for the Exploration of Graphs with HTC Vive
System

Nicola Capece, Ugo Erra, Jari Grippa


Dipartimento di Matematica, Informatica ed Economia
Università degli Studi della Basilicata
Potenza, Italy
Email: [email protected], [email protected], [email protected]

Abstract—This work examines the use of virtual reality to visualize and interact with three-dimensional graphs. We developed the design to be as natural and intuitive as possible, through analysis, study, and use of several layout algorithms, which allow the nodes of a graph to be positioned so as to reduce entropy levels. We chose this schematic visualization of graphs because it can synthetically describe the graphical data and the links between the data while presenting the data in a visual format. The application was developed entirely using Unreal Engine version 4, and the visualization was performed using the HTC Vive head-mounted display.

Keywords—Virtual Reality; Graph Exploration; Human-Computer Interaction

I. INTRODUCTION

Recent interest in social networks, software architectures, planning, and scheduling has led to the application of graph visualization and exploration to assist analysts with relevant visual cues to understand the intrinsic structure of data. The amount of such structured data is constantly growing, and graph visualizations can help their comprehension by showing hidden structures and other interesting topological features in graphical format. Numerous effective approaches are used to visualize a graph in two-dimensional (2D) space. Graph drawing uses an algorithm to address the problem of constructing an automatic visualization of graphs, the goal being to derive an aesthetically pleasing picture that follows the layout convention of a given application domain. However, humans have an inherent ability to understand representations of objects in three-dimensional (3D) space rather than 2D space [1].

3D graph visualization is a relatively new field. Having one extra dimension enables the visualization of complex systems in which navigation techniques, graph structures, and interface solutions play major roles. In particular, with real-time 3D exploration and interaction, users can navigate a graph and observe it from different points of view, move one or more nodes, or group unimportant nodes into clusters to reduce information overload. In this way, because the user can navigate more effectively, 3D graph visualization is intuitively understandable and provides further information about a graph's hierarchical structure. Whether 3D space can be beneficial for information visualization tasks in general is still an open question [2], [3], [4], but 3D graph visualizations in particular offer great benefits [5], [6].

Today, the representation of 3D space is possible using virtual reality (VR). Although this technology was first developed in the 1970s, it has only recently become widely available through low-cost devices such as the Oculus Rift [7] and HTC Vive [8]. In a virtual environment, the user's location is the focal point of the scene, and there is freedom in the user's viewing direction because the entire sphere of directionality around that point is available. Visibility of the scene from the perspective of the user's location is vital. However, in most VR applications, user interaction is based on an input device such as a keyboard, mouse, or joystick. Interaction in 3D space within virtual environments presents various challenges. Because users of VR devices lose sight of their hands, traditional mouse and keyboard interaction is limited, and so actions take place in the virtual environment without using a toolbar or a menu.

Although several works have explored stereoscopic graph visualization [5], [6], [9], very few empirical studies have investigated the applicability of VR in the field of graph visualization and interaction [10]. One important aspect of graph interaction is that it enables users to make changes to the graph drawing based on their input. This is particularly useful when the graph is very large because, on the one hand, it reduces the difficulty and the time it takes the user to understand the information represented by the graph, while on the other hand, it increases the amount of information that can be interpreted and understood by the user. In 2D space interaction, completing a generic action requires at least two steps: click a specific button located outside the graph region, and then move the mouse cursor back inside the graph region to execute the desired action. This two-step paradigm is disadvantageous because the user is forced to move in and out of the graph region repeatedly, thus limiting interactivity within the graph.

In this paper, we present a virtual reality system to visualize and interact with three-dimensional graphs. In particular, we developed an application that uses the Unreal Engine and the HTC Vive head-mounted display and that enables the user

978-1-5386-7202-0/18/$31.00 ©2018 IEEE 448


DOI 10.1109/iV.2018.00084
Authorized licensed use limited to: Universite de Technologie de Troyes. Downloaded on March 04,2024 at 09:30:50 UTC from IEEE Xplore. Restrictions apply.
to interact with the 3D graph by using the HTC controllers. Our aim was to address specific interaction challenges by taking advantage of the tracking capabilities of the head-mounted display and the controllers of the VR system: to determine what the user is looking at, use this information to identify focal points, aid the user in making selections, and provide instantaneous details of the selected data.¹

II. RELATED WORKS

Visualization of and interaction with complex structures such as graphs have already undergone development in software systems able to transmit to users a greater perception and comprehension of graphical information. Computer graphics have become important in this context, and among several open-source software systems, one of the most popular is Gephi [11], a tool developed in the Java object-oriented language that uses a 3D rendering engine to display and explore very large graphs in real time. Gephi allows users to customize graphs and to configure in detail the multiple layout algorithms it supplies. Cytoscape [12], an open-source software system based on Java, was originally developed for biomolecular integration through graphical layout and query tools. It is used to visualize large amounts of data such as protein–protein and protein–DNA interactions and other structures relating to biological organisms. Currently, Cytoscape allows users to visualize and analyze a graph in any application domain. Another widely used system is Graphviz [13], whose core implements various types of graph layouts, collected in a suite of tools mainly implemented in the C language. Layouts can be used through a library, a graphical interface, or a web browser. Graphviz allows users to visualize graphs in numerous domains such as software engineering, networking, and bioinformatics.

Although these and other similar tools can show users a large portion of the data, the 2D representation that they employ, which allows users to maximize the information by highlighting meaningful relationships among the data, often hides some of the data that would be more easily visible in a 3D representation. In this context, Erra et al. [14] propose an approach based on real-time 3D graphics to visualize and interact with graphs using VR and a natural user interface. Their idea is based on the development of a Gephi plugin that integrates with Gephi's existing functionality and exploits its features. Called 3D Graph Explorer, it exploits Gephi's layouts and provides additional features such as visualization, exploration, and handling of graphs in 3D space using mouse and keyboard, a natural user interface and a Leap Motion sensor, or VR through the Oculus Rift Development Kit 1 to provide an immersive experience. Data visualization is currently a hot topic in the fields of computer science and VR, and Donalek et al. [15] have reported on the practical and optimal use of VR as a platform for visualization and exploration of scientific data. For example, they have developed scripts for high-dimensional dataset visualization in a virtual environment using the Linden Scripting Language, which allows interaction with the Second Life virtual world. As test data, they used catalogs of objects detected through large sky surveys, with properties such as brightness of sources, morphology of light in the sky, and general brightness, which can be represented as vectors in a space comprising tens to hundreds of dimensions. Data are represented using different approaches such as spatial coordinates, colors, and dimensions. Betella et al. [16] propose a system that improves users' comprehension of big series of data through embodied navigation and natural gestures. Their system is an immersive virtual environment called the eXperience Induction Machine, developed using C# and the Unity game engine. As a case study, they used the human brain connectome, which is a map of the network of nodes and connections providing an anatomic description of brain connectivity at different scales, such as single neurons, neural populations, and neural regions. Their system has three main components: the graphML parser, which creates the data structure that allocates all of the graph's elements (e.g., nodes and arcs); the atlas, which reads the metadata associated with the graph's elements (spatial information) through an Extensible Markup Language (XML) file; and the geometry provider, which uses 3D graphics to visualize the combined information obtained via these components. Real-time interaction is realized using a Microsoft Kinect and a sensory glove to track users' gestures and positions and map them into the virtual environment: the Kinect maps the positions of the users' hands and torso, while the glove detects the movements of individual fingers.

Our system aims to provide a general-purpose 3D virtual environment that allows users to visualize, understand, and interact with graphs. We also aim to exploit users' natural movements through the interface provided by the HTC Vive and its controllers. Our proposed system can visualize any type of graph in any domain. Its main advantage is that it can be totally decoupled from the case study. Furthermore, it allows users to move freely around in a virtual environment unconstrained by real-world obstacles and to obtain a correct analysis of the graphical data.

III. BACKGROUND

We developed our system, called GraphVR, using Unreal Engine version 4 and the C++ language [17] together with the Blueprints Visual Scripting system [18]. Interaction and visualization through the virtual environment are rendered via the HTC Vive head-mounted display. Its controllers follow the user's movements in the room-scale virtual environment created through the two base stations with which the Vive is equipped. A key element in the development of Unreal Engine 4 is the Actor concept, which represents

¹A video of the proposed system is available from http://193.204.19.174:8080/share.cgi?ssid=0fk3Wbf

the base class for an object that can be positioned inside the scene. The base scene of the system is composed of a Plane Actor and a Volume Cube; the cube contains the objects that simulate the virtual room. Lighting inside the scene is performed through the actors called BPSkySphere and SkyLight, which adjust the general properties of the surrounding environment. Inside the room are two of the most important actors: (i) PlayerStart, which indicates the position in the scene where the Character will be created; and (ii) PrincipalBP, a Blueprints script associated with a C++ class called Principal, which represents the core of the software system. Although Unreal Engine 4 allows a single class per level to spawn actors, this task is performed by the Principal class, which is an actor without a mesh or other graphical components and which creates all the objects related to the programming and visualization sections.

A graph is built by parsing a Graph Exchange XML Format (GEXF) file, which contains details about the visual components (nodes and edges) and their spatial information; GEXF keeps the graph compatible with the visualization realized by Gephi. Unreal Engine 4 can use several plugins to provide different types of features, and the GEXF file parsing is performed using a plugin called XmlParser. This plugin is instanced through the Principal class and can be used to check the validity of the GEXF file. After reading the file, the system iterates over the lists of nodes and edges inside it, subsequently creating the Model that contains the visual and spatial components, built through two classes called ModelNode and ModelEdge. Each element of the model is then associated with the corresponding actor to visualize it inside the scene. Actors called Node and Edge (C++ classes) are associated with the Blueprints classes NodeBP and EdgeBP, respectively. The structure of the NodeBP script is a tree of components whose root, called Root Component, is attached to a Static Mesh represented by a spherical mesh, a customizable material, and a component (Floating Pawn Movement) able to interpret the movements applied to the actor and transmit them to its own mesh. To the Static Mesh is also linked a Billboard, implemented using Blueprints with a Text component, which keeps the node's identity continually visible to the user by rotating the text component based on the user's viewing angle. Inside the scene, nodes do not suffer the effect of gravity, and collisions are disabled to allow the Character to pass through them without being blocked. EdgeBP is an actor composed using a native Unreal Engine 4 plugin; it is structured as a Static Mesh applied to a Spline component, and it uses two properties that allow it to connect two actors (nodes). Finally, two other important classes make up our software system: the Avatar class and the Layout class. The first is composed of Character, a special actor able to directly receive the user's input and encode it to interact with the virtual environment. The Layout class is responsible for computing the graph node positions using the layout algorithm selected by the user at runtime.
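The parse-then-build flow just described (validate the GEXF file, iterate its node and edge lists, then create the model elements) can be sketched outside Unreal Engine as follows. This is a minimal Python illustration, not the paper's C++ implementation: the `ModelNode`/`ModelEdge` dataclasses and the inline sample file are hypothetical stand-ins for the classes named above.

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass

# Hypothetical counterparts of the paper's ModelNode/ModelEdge classes.
@dataclass
class ModelNode:
    id: str
    label: str
    pos: tuple  # (x, y, z) spatial information from the GEXF viz extension

@dataclass
class ModelEdge:
    source: str
    target: str

NS = {"g": "http://www.gexf.net/1.2draft",
      "viz": "http://www.gexf.net/1.2draft/viz"}

def parse_gexf(xml_text):
    """Iterate the node and edge lists of a GEXF document and build the model."""
    root = ET.fromstring(xml_text)
    nodes, edges = [], []
    for n in root.iterfind(".//g:node", NS):
        p = n.find("viz:position", NS)
        pos = ((float(p.get("x")), float(p.get("y")), float(p.get("z", 0.0)))
               if p is not None else (0.0, 0.0, 0.0))
        nodes.append(ModelNode(n.get("id"), n.get("label", n.get("id")), pos))
    for e in root.iterfind(".//g:edge", NS):
        edges.append(ModelEdge(e.get("source"), e.get("target")))
    return nodes, edges

GEXF = """<gexf xmlns="http://www.gexf.net/1.2draft"
               xmlns:viz="http://www.gexf.net/1.2draft/viz" version="1.2">
  <graph defaultedgetype="undirected">
    <nodes>
      <node id="0" label="a"><viz:position x="1.0" y="2.0" z="3.0"/></node>
      <node id="1" label="b"/>
    </nodes>
    <edges><edge id="e0" source="0" target="1"/></edges>
  </graph>
</gexf>"""

nodes, edges = parse_gexf(GEXF)  # two nodes, one edge; node "0" at (1.0, 2.0, 3.0)
```

Nodes without a `viz:position` fall back to the origin here; in GraphVR the Layout class recomputes positions anyway, so the stored coordinates only serve as a starting state compatible with Gephi.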
IV. INTERACTIVE DESIGN

The GraphVR tool is built from scratch using Unreal Engine version 4 and the C++ language. To visualize the graph in 3D space, we use a common graph layout algorithm based on a force-directed approach [19]. The purpose of this algorithm is to position the nodes of the graph in 3D space so that all the edges are of more or less equal length and there are as few edge crossings as possible.
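The force-directed scheme [19] can be summarized as a Fruchterman-Reingold-style iteration: every pair of nodes repels, every edge attracts, and each node is nudged along its net force until edge lengths even out. The following Python sketch illustrates that idea on plain arrays; the constants `k` and `temp` and the iteration count are arbitrary illustration values, not parameters from the paper.

```python
import math, random

def force_directed_3d(nodes, edges, iters=200, k=1.0, temp=0.1, seed=42):
    """Fruchterman-Reingold-style layout in 3D: every pair of nodes repels
    with k^2/d, every edge attracts with d^2/k, and each update is clamped
    to `temp` (a crude temperature) to keep the iteration stable."""
    rng = random.Random(seed)
    pos = {v: [rng.uniform(-1.0, 1.0) for _ in range(3)] for v in nodes}
    for _ in range(iters):
        disp = {v: [0.0, 0.0, 0.0] for v in nodes}
        # repulsion between every pair of nodes
        for i, u in enumerate(nodes):
            for v in nodes[i + 1:]:
                d = [pos[u][c] - pos[v][c] for c in range(3)]
                dist = max(math.hypot(*d), 1e-9)
                f = k * k / dist
                for c in range(3):
                    disp[u][c] += d[c] / dist * f
                    disp[v][c] -= d[c] / dist * f
        # attraction along every edge
        for u, v in edges:
            d = [pos[u][c] - pos[v][c] for c in range(3)]
            dist = max(math.hypot(*d), 1e-9)
            f = dist * dist / k
            for c in range(3):
                disp[u][c] -= d[c] / dist * f
                disp[v][c] += d[c] / dist * f
        # move each node along its net force, clamped to the temperature
        for v in nodes:
            mag = max(math.hypot(*disp[v]), 1e-9)
            step = min(mag, temp)
            for c in range(3):
                pos[v][c] += disp[v][c] / mag * step
    return pos

pos = force_directed_3d(["a", "b", "c", "d"],
                        [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")])
edge_lengths = [math.dist(pos[u], pos[v])
                for u, v in [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]]
```

On this 4-cycle, the four edge lengths settle to roughly the same value, which is exactly the "more or less equal length" property the layout aims for.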
During the rendering of the 3D graph, the tool allows a user to freely explore and interact with the 3D visualization. In particular, the graph is visualized in real time, and the user can navigate around it using a free-fly 3D camera; the user can move inside the graph and also pass through the nodes. The interaction method for graph exploration in 3D space is based on computer-game interaction, where control is mostly reduced to operating standard input devices such as the mouse and keyboard. In our case, we use the HTC Vive controllers to interact with the 3D graph and to move the camera. The rationale is that, over the course of the genre's development, this interaction method has been refined and is now fairly standardized. In addition, in the design phase we identified fast interaction, hand-eye coordination, and reaction speed, the primary model in action games [20], as requirements. Such an approach enables 3D human-computer interaction in which the user's tasks are performed directly in the 3D spatial context [21]. For instance, moving, resizing, and grouping nodes are performed directly in the 3D environment, bypassing the classic graphical control elements through which interactions between humans and machines usually occur.

V. GRAPHVR

We designed GraphVR with the aim of offering users a more natural and intuitive level of interaction. Interaction with the components in a scene is made possible through the HTC Vive controllers. To query the scene, we use a technique called raycasting [22]. One ray starts from each controller along the direction of its forward vector toward a previously set end location. The ray is constantly shown through a Cable Component, and a viewfinder is used to target and interact with the objects in the scene. Moving the controllers in the scene also moves the rays attached to them. The scene is constantly queried, and when a ray hits an Actor, the system checks whether it is an element with which it is possible to interact and eventually activates further features.
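GraphVR performs these ray queries through the engine, but the geometry of hitting a node can be illustrated with a standard ray-sphere intersection test. The sketch below is a minimal Python illustration only; the node radius of 0.5 and the example coordinates are invented values.

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Distance along a ray to the nearest hit on a sphere (a node mesh),
    or None on a miss. `direction` is assumed to be a unit vector."""
    oc = [origin[i] - center[i] for i in range(3)]
    b = 2.0 * sum(direction[i] * oc[i] for i in range(3))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t >= 0 else None  # hits behind the controller do not count

def pick_node(origin, direction, nodes, radius=0.5):
    """Query the scene: return the id of the closest node hit by the ray."""
    best = None
    for node_id, center in nodes.items():
        t = ray_hits_sphere(origin, direction, center, radius)
        if t is not None and (best is None or t < best[1]):
            best = (node_id, t)
    return best[0] if best else None

# Controller at the origin, forward vector pointing down +z.
nodes = {"n1": (0.0, 0.0, 5.0), "n2": (0.0, 0.0, 9.0), "n3": (3.0, 0.0, 5.0)}
hit = pick_node((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), nodes)  # "n1": nearest along the ray
```

Keeping only the closest positive hit mirrors what the viewfinder does: among all nodes in the ray's direction, the one nearest to the controller is targeted.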
When a controller's grip button is pressed, the Root Component of each node is attached to one Actor. When the controller is moved, the distance between its initial and final position is computed as a vector just before the rendering of

Figure 1. 1. Menu button (shoulder); 2. Up, down, right, and left movement (left trackpad); 3. Unused (system button); 4. Details of the node (trigger); 5. Move or scale the graph (grip button); 6. Rotate the graph (right thumbstick).

each frame. This vector is applied to the Actor's location, which then starts to follow the controller's movements and moves all the actors attached to it. When the grip buttons of both controllers are pressed, the graph is scaled up or down, showing a zoom effect as the distance between the two controllers increases or decreases.
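Both grip interactions reduce to simple vector arithmetic: a per-frame controller displacement added to the attached actors, and a scale factor given by the ratio of the current to the previous distance between the two controllers. A minimal Python sketch under those assumptions follows; the choice of pivot for the scaling is ours, since the paper does not state one.

```python
import math

def move_graph(prev_ctrl, ctrl, actors):
    """Single grip: add the controller's frame-to-frame displacement to every
    attached actor, so the whole graph follows the hand."""
    delta = [ctrl[i] - prev_ctrl[i] for i in range(3)]
    return [[p[i] + delta[i] for i in range(3)] for p in actors]

def scale_graph(prev_dist, left, right, actors, pivot):
    """Both grips: scale node positions about a pivot by the ratio of the
    current to the previous distance between the two controllers."""
    dist = math.dist(left, right)
    s = dist / prev_dist
    scaled = [[pivot[i] + s * (p[i] - pivot[i]) for i in range(3)] for p in actors]
    return scaled, dist  # the new distance is kept for the next frame

actors = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]]
# Controller moved +0.5 in x: every node shifts by the same delta.
moved = move_graph([0.0, 0.0, 0.0], [0.5, 0.0, 0.0], actors)
# Controllers moved from 1.0 to 2.0 apart: a 2x zoom about the origin.
zoomed, _ = scale_graph(1.0, [0.0, 0.0, 0.0], [2.0, 0.0, 0.0], actors, [0.0, 0.0, 0.0])
```

Applying the translation to a single attached Actor that parents all the nodes, as the paper describes, gives the same result as updating each node position directly.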
Figure 2 shows the character movements associated with the left controller's trackpad along the four possible directions of movement (up, down, right, and left). Although a user could move around a scene simply by walking [23], it is impractical to sustain this action over long distances because of the size difference between the virtual environment and the real environment. Through this feature, it is possible to move without taking a step.

Figure 2. Trackpad usage.

Another interesting feature allows users to visualize details about a node by using the left controller's trigger button. Data are shown through a popup on the right controller, which can be closed by releasing the button; on the button's release, the node returns to its original position. For a better user experience, we have added a menu that groups some of the features. This menu can be retrieved by pressing the menu button above the trackpad of the left controller (Figure 3). It is composed of six labels, placed all around the left controller, one for every feature; a label can be selected by pointing the right controller at it.

Figure 3. Features menu.

By using the grip button of each controller, the whole graph can be moved. It is also possible to rotate the graph along two axes: the up and down buttons of the right controller's trackpad rotate the graph along the x axis, and the left and right buttons apply a rotation along the y axis. When the ray hits a node in its direction, the node's color changes from red to green, and the ray's length is modified so that it does not cross the selected node. By using the trigger button, the name of the node can be visualized through a label positioned on the right controller. This feature also allows the user to move the node by placing it in the desired position; when the button is released, the node remains in its new position, and the label is reset. The same button on the left controller allows the user to select or deselect nodes so as to move more than one at a time. When a node is selected, it changes to black, returning to its original color when it is deselected. By selecting multiple nodes, it is possible to create a cluster, which is represented by one big node. The nodes of the cluster and their attributes are stored in the data structure; in this way, when declustering is performed, the cluster is removed and the nodes are returned to their original positions. The application allows the user to make more clusters, select individual clusters for removal, or choose to remove them altogether.
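The cluster bookkeeping described above (store the members and their original positions, show one big node, restore everything on declustering) fits in a small data structure. This Python sketch is an illustrative reconstruction; placing the cluster node at the members' centroid is our own assumption, not a detail stated in the paper.

```python
class ClusterManager:
    """Stores clustered nodes and their original positions so that
    declustering can restore the graph exactly."""
    def __init__(self, positions):
        self.positions = dict(positions)   # node id -> (x, y, z)
        self.clusters = {}                 # cluster id -> {node id: original position}

    def cluster(self, cluster_id, node_ids):
        # Replace the selected nodes with one big node at their centroid
        # (the centroid placement is an assumption of this sketch).
        saved = {n: self.positions.pop(n) for n in node_ids}
        self.clusters[cluster_id] = saved
        centroid = tuple(sum(p[i] for p in saved.values()) / len(saved)
                         for i in range(3))
        self.positions[cluster_id] = centroid

    def decluster(self, cluster_id):
        # Remove the cluster node and return members to their original positions.
        del self.positions[cluster_id]
        self.positions.update(self.clusters.pop(cluster_id))

cm = ClusterManager({"a": (0, 0, 0), "b": (2, 0, 0), "c": (5, 5, 5)})
cm.cluster("C1", ["a", "b"])   # one big node "C1" replaces a and b
cm.decluster("C1")             # a and b reappear at their stored positions
```

Because each cluster keeps its own saved positions, several clusters can coexist and be removed individually or all together, as the application allows.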

We have added a feature that allows the avatar to ignore gravity and simulate flight. This allows the user to examine the graph from above or from another preferred point of view. When gravity is disabled, the user can move around the scene simply by looking in the desired direction and pressing the directional buttons. We implemented two types of layout, which can be selected by the user: the first, called Base Layout [24] (Figure 4), is similar to Gephi's 2D layout; the second, called Force-Directed [19] (Figure 5), is implemented in 3D.

Figure 4. Base layout.

Figure 5. Force-Directed.

VI. CONCLUSION AND FUTURE WORKS

GraphVR supports graphs of small to medium dimensions only, typically with a maximum of around 300 nodes. This limit is due to the management of the number of Actors, one for each node; it is also necessary to consider all the actors that represent the arcs, lights, avatar, billboards, and the rest of the scene. Because the computing complexity is very high, it was necessary to find a compromise between rendering quality and system fluidity by reducing the level of detail [25] of the actors. In the future, we intend to continue developing this application with different types of layout algorithms. More importantly, we are going to include real empirical studies to investigate how the proposed work contributes to the research and development communities that use VR. In particular, we are planning an extensive and representative experimental study involving a large sample of people with more diversified technological skills, as well as people with no knowledge of the whole setting. The aim is to study how non-gamers, as well as users who are not familiar with the considered domain, react to the GraphVR tool.

REFERENCES

[1] U. Erra, G. Scanniello, and N. Capece, "Visualizing the Evolution of Software Systems Using the Forest Metaphor," in 2012 16th International Conference on Information Visualisation, July 2012, pp. 87–92.

[2] R. Brath, "3D InfoVis is here to stay: Deal with it," in 2014 IEEE VIS International Workshop on 3DVis (3DVis), Nov. 2014, pp. 25–31.

[3] J. P. McIntire and K. K. Liggett, "The (possible) utility of stereoscopic 3D displays for information visualization: The good, the bad, and the ugly," in 2014 IEEE VIS International Workshop on 3DVis (3DVis), Nov. 2014, pp. 1–9.

[4] U. Erra and G. Scanniello, "Towards the Visualization of Software Systems As 3D Forests: The CodeTrees Environment," in Proceedings of the 27th Annual ACM Symposium on Applied Computing, ser. SAC '12. New York, NY, USA: ACM, 2012, pp. 981–988.

[5] C. Ware and P. Mitchell, "Reevaluating Stereo and Motion Cues for Visualizing Graphs in Three Dimensions," in Proceedings of the 2nd Symposium on Applied Perception in Graphics and Visualization, ser. APGV '05. New York, NY, USA: ACM, 2005, pp. 51–58.

[6] U. Erra, D. Malandrino, and L. Pepe, "Virtual Reality Interfaces for Interacting with Three-Dimensional Graphs," International Journal of Human-Computer Interaction, 2018, pp. 1–14.

[7] P. Luckey. (2012) Oculus Rift. https://www.oculus.com/.

[8] HTC and Valve Corporation. (2012) HTC Vive. https://www.vive.com/eu/.

[9] B. Alper, T. Hollerer, J. Kuchera-Morin, and A. Forbes, "Stereoscopic Highlighting: 2D Graph Visualization on Stereo Displays," IEEE Transactions on Visualization and Computer Graphics, vol. 17, no. 12, pp. 2325–2333, Dec. 2011.

[10] O. H. Kwon, C. Muelder, K. Lee, and K. L. Ma, "A Study of Layout, Rendering, and Interaction Methods for Immersive Graph Visualization," IEEE Transactions on Visualization and Computer Graphics, vol. 22, no. 7, pp. 1802–1815, July 2016.

[11] M. Bastian, S. Heymann, and M. Jacomy, "Gephi: An Open Source Software for Exploring and Manipulating Networks," 2009. [Online]. Available: http://www.aaai.org/ocs/index.php/ICWSM/09/paper/view/154

[12] P. Shannon, A. Markiel, O. Ozier, N. S. Baliga, J. T. Wang, D. Ramage, N. Amin, B. Schwikowski, and T. Ideker, "Cytoscape: a software environment for integrated models of biomolecular interaction networks," Genome Research, vol. 13, no. 11, pp. 2498–2504, 2003.

[13] J. Ellson, E. R. Gansner, E. Koutsofios, S. C. North, and G. Woodhull, Graphviz and Dynagraph — Static and Dynamic Graph Drawing Tools. Berlin, Heidelberg: Springer Berlin Heidelberg, 2004, pp. 127–148.

[14] U. Erra, D. Malandrino, and L. Pepe, "A methodological evaluation of natural user interfaces for immersive 3D Graph explorations," Journal of Visual Languages & Computing, vol. 44, pp. 13–27, 2018.

[15] C. Donalek, S. G. Djorgovski, A. Cioc, A. Wang, J. Zhang, E. Lawler, S. Yeh, A. Mahabal, M. Graham, A. Drake, S. Davidoff, J. S. Norris, and G. Longo, "Immersive and collaborative data visualization using virtual reality platforms," in 2014 IEEE International Conference on Big Data (Big Data), Oct. 2014, pp. 609–614.

[16] A. Betella, E. M. Bueno, W. Kongsantad, R. Zucca, X. D. Arsiwalla, P. Omedas, and P. F. M. J. Verschure, "Understanding large network datasets through embodied interaction in virtual reality," in Proceedings of the 2014 Virtual Reality International Conference, ser. VRIC '14. New York, NY, USA: ACM, 2014, pp. 23:1–23:7.

[17] W. Sherif, Learning C++ by Creating Games with UE4. Packt Publishing Ltd, 2015.

[18] B. Sewell, Blueprints Visual Scripting for Unreal Engine. Packt Publishing Ltd, 2015.

[19] T. M. J. Fruchterman and E. M. Reingold, "Graph drawing by force-directed placement," Softw. Pract. Exper., vol. 21, no. 11, pp. 1129–1164, Nov. 1991.

[20] A. Rollings and E. Adams, Andrew Rollings and Ernest Adams on Game Design. New Riders Games, 2003.

[21] D. A. Bowman, S. Coquillart, B. Froehlich, M. Hirose, Y. Kitamura, K. Kiyokawa, and W. Stuerzlinger, "3D user interfaces: New directions and perspectives," IEEE Comput. Graph. Appl., vol. 28, no. 6, pp. 20–36, Nov. 2008.

[22] S. Woop, J. Schmittler, and P. Slusallek, "RPU: A Programmable Ray Processing Unit for Realtime Ray Tracing," ACM Trans. Graph., vol. 24, no. 3, pp. 434–444, Jul. 2005.

[23] G. Caggianese, L. Gallo, and P. Neroni, Design and Preliminary Evaluation of Free-Hand Travel Techniques for Wearable Immersive Virtual Reality Systems with Egocentric Sensing. Cham: Springer International Publishing, 2015, pp. 399–408.

[24] G. D. Battista, P. Eades, R. Tamassia, and I. G. Tollis, "Algorithms for drawing graphs: an annotated bibliography," Computational Geometry, vol. 4, no. 5, pp. 235–282, 1994.

[25] D. P. Luebke, "A developer's survey of polygonal simplification algorithms," IEEE Comput. Graph. Appl., vol. 21, no. 3, pp. 24–35, May 2001. [Online]. Available: http://dx.doi.org/10.1109/38.920624
