Efficient and Effective Volume Visualization With Enhanced Isosurface Rendering
Abstract
Compared with full volume rendering, isosurface rendering has several well-recognized advantages in efficiency and accuracy. However, standard isosurface rendering has some limitations in effectiveness. First, it uses a monotone colored approach and can only visualize the geometry features of an isosurface. The lack of the capability to illustrate the material property and the internal structures behind an isosurface has been a big limitation of this method in applications. Another limitation of isosurface rendering is the difficulty to reveal physically meaningful structures, which
Figure 2: The overview of the contributions of our approach. The proposed enhanced isosurface rendering technique is accomplished by the color enhancement method and the explicit scene exploration scheme. Explicit scene exploration consists of surface peeling, voxel selection, isosurface segmentation, and multi-surface-structure visualization.
In this way, Equation (1) can be written as $\alpha \approx \alpha_T^{1/(n \cdot speed)}$.

From the above formulations, we know that:

• A value $speed$ can be easily calculated from the scalar changing rate at the intersection point, $v'_l(l_{iso})$, with $densityFactor$ as a global tunable parameter.

• Given the value $speed$, the alpha values used for accumulation can be approximated.

4.2 Speed-color Map

During rendering, we use $v'_l(l_{iso})$ to characterize the local material. To efficiently calculate the material color from $v'_l(l_{iso})$, a map from $speed$ to color $\vec{C}$ is built as follows.

Given a local transfer-function, for each $speed$, $\alpha_{speed,i} = \alpha_{T_i}^{1/(n \cdot speed)}$. The accumulated color $\vec{C}(speed)$ can be pre-calculated by alpha blending:

$$\vec{C}(speed) = \sum_{i=1}^{n} \alpha_{speed,i}\,\vec{c}_i \prod_{j=1}^{i-1} \left(1 - \alpha_{speed,j}\right)$$

However, since $speed$ has an infinite value range, we cannot simply build a speed-color map linearly. In our implementation, we use a logarithm-sampled speed-color map of size $m$, $\vec{C}_j = \vec{C}\left(-\ln(1 - j/m)\right)$, $j = 1, 2, 3, \cdots, m$, to store the pre-calculated material colors.
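To make the construction above concrete, the following C++ sketch pre-computes a logarithm-sampled speed-color map. The RGBA sample layout of the local transfer-function, the clamping guard near $j = m$, and the function name are illustrative assumptions, not the paper's actual (GPU) implementation.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct RGBA { float r, g, b, a; };   // one sample of the local transfer-function

// Pre-compute the logarithm-sampled speed-color map described above.
// localTF holds the n samples of the local transfer-function encountered
// along the ray; m is the size of the map.
std::vector<RGBA> buildSpeedColorMap(const std::vector<RGBA>& localTF, int m)
{
    const int n = static_cast<int>(localTF.size());
    std::vector<RGBA> speedColorMap(m);
    for (int j = 1; j <= m; ++j) {
        // Logarithmic sampling: speed_j = -ln(1 - j/m); clamp to avoid log(0) at j = m.
        double t = std::min(static_cast<double>(j) / m, 1.0 - 1e-6);
        double speed = -std::log(1.0 - t);
        // Front-to-back alpha blending with alpha_{speed,i} = alpha_{T_i}^{1/(n*speed)}.
        RGBA acc{0.0f, 0.0f, 0.0f, 0.0f};
        double transmittance = 1.0;   // running product of (1 - alpha)
        for (int i = 0; i < n; ++i) {
            double alpha = std::pow(static_cast<double>(localTF[i].a), 1.0 / (n * speed));
            acc.r += static_cast<float>(transmittance * alpha * localTF[i].r);
            acc.g += static_cast<float>(transmittance * alpha * localTF[i].g);
            acc.b += static_cast<float>(transmittance * alpha * localTF[i].b);
            acc.a += static_cast<float>(transmittance * alpha);
            transmittance *= 1.0 - alpha;
        }
        speedColorMap[j - 1] = acc;   // entry C_j of the map
    }
    return speedColorMap;
}
```

At render time, the entry for a measured speed can be found by inverting the sampling formula, i.e., $j \approx m\,(1 - e^{-speed})$.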
first perform an iterative search along the viewing ray to find a $\Delta l$ where the scalar value reaches $isovalue + \Delta v$. Then the scalar changing rate is estimated as $\Delta v / \Delta l$. In this way, we find that many of the internal structures can be revealed without noticeable performance impact. Figure 5 (c) is an example, and more will be presented in the results part.

5 Explicit Scene Exploration

An innate advantage of isosurface rendering is that, for each ray, a definite intersection point can be generated. When voxel based ray-traversal is used, the exact voxel can also be identified. This feature can be very useful in interaction design. For example, it enables the direct picking of voxels from the volume by mouse clicking and dragging within the 2D image plane. By exploiting this advantage, we develop an explicit scene exploration framework to quickly identify the structures of interest, pick them out, and recombine them into a single scene.

To guarantee the quality of the interaction, such as the voxel selecting operation, it is crucial to make sure that the ray-isosurface intersection test is accurate for each ray. First, voxel based ray traversal such as the 3DDDA algorithm should be used. Second, the intersection finding within the voxel should be accurately performed. When trilinear interpolation is assumed, the value along the ray segment within each voxel can be expressed as a cubic function of the ray parameter. Once the coefficients of the cubic function are decided, the intersection test can be transformed into solving a cubic equation, which can be efficiently solved using the method proposed by Marmitt et al. [Marmitt et al. 2004]. There are different ways to decide the coefficients of the cubic function. We choose to use four values along the ray in each voxel to estimate the coefficients of the cubic polynomial, as is shown in Figure 6.
Figure 6: Use four values along the ray in each voxel to estimate the coefficients of the cubic polynomial.

Figure 7: Surface Peeling. (a) Illustration of surface peeling. (b) An example of surface peeling.

Figure 8: Voxel Selection. (a) Illustration of the process of voxel selection. (b) Illustration of a voxel ID buffer.

Figure 9: Isosurface Segmentation. (a) 2D illustration of isosurface segmentation. (b) Use the length of the intersection of isosurface and voxel boundary as the weight.
The viewing ray enters the voxel from p0 and exits from p3. A value is fetched at each of p0, p1, p2, and p3, where p1 and p2 are the trisection points of the ray-voxel intersection.
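For illustration, the cubic coefficients can be recovered in closed form from the four samples at t = 0, 1/3, 2/3, 1, as in the sketch below. The simple bracketing-and-bisection root finder is a stand-in for the analytic scheme of Marmitt et al. [Marmitt et al. 2004], and the function names and iteration count are hypothetical.

```cpp
// Fit f(t) = a*t^3 + b*t^2 + c*t + d through samples f0..f3 taken at
// t = 0, 1/3, 2/3, 1 along the ray segment inside the voxel.
struct Cubic { double a, b, c, d; };

Cubic fitCubicFromTrisection(double f0, double f1, double f2, double f3)
{
    Cubic q;
    q.d = f0;
    q.c = -5.5 * f0 +  9.0 * f1 -  4.5 * f2 +       f3;
    q.b =  9.0 * f0 - 22.5 * f1 + 18.0 * f2 - 4.5 * f3;
    q.a = -4.5 * f0 + 13.5 * f1 - 13.5 * f2 + 4.5 * f3;
    return q;
}

// Simplified intersection test: return true and the parametric hit position
// tHit in [0,1] if f(t) crosses the isovalue. Plain bisection on the first
// sign change between sample points is used instead of the analytic method.
bool intersectIsovalue(const Cubic& q, double iso, double& tHit)
{
    auto f = [&](double t) { return ((q.a * t + q.b) * t + q.c) * t + q.d - iso; };
    for (int s = 0; s < 3; ++s) {                 // bracket between sample points
        double lo = s / 3.0, hi = (s + 1) / 3.0;
        if (f(lo) == 0.0) { tHit = lo; return true; }
        if (f(lo) * f(hi) > 0.0) continue;        // no sign change in this bracket
        for (int it = 0; it < 40; ++it) {         // bisection refinement
            double mid = 0.5 * (lo + hi);
            if (f(lo) * f(mid) <= 0.0) hi = mid; else lo = mid;
        }
        tHit = 0.5 * (lo + hi);
        return true;
    }
    return false;
}
```

The coefficient fit is independent of which root-finding scheme is applied afterwards.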
5.2 Surface Peeling

When provided with a volume dataset of which we have little knowledge, the first task of visualization should be to explore the dataset and identify the structures of interest. This is not a simple issue for either full volume rendering or monotone isosurface rendering, due to the fact that many of the structures overlap each other.

For isosurface rendering, we can use cropping and surface peeling for this brief exploration. Cropping is a simple technique which just truncates the volume with a bounding box during rendering to localize the rendering region of the volume. Surface peeling is performed by jumping over the first few intersection points within some selected image plane areas, so the structures behind can be exposed. As is shown in Figure 7 (a), a peeling window is defined on the image plane. Within that window, the first ray-isosurface intersection is skipped over, so the second layer of the isosurface is rendered. In our implementation, we use an integer array as the "peeling buffer". The values are initialized to 0. Within each peeling window, the peeling value increases by 1. During rendering, the number of intersections is counted; if the count is no bigger than the peeling value, the intersection will be skipped over. Figure 7 (b) gives an example of surface peeling, which contains two peeling windows.
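A minimal sketch of this peeling-buffer bookkeeping, assuming a per-pixel integer buffer and axis-aligned peeling windows; the rectangle type and function names are illustrative.

```cpp
#include <cstddef>
#include <vector>

struct Rect { int x0, y0, x1, y1; };             // a peeling window in pixels

// One integer per pixel, initialized to 0 and incremented inside each window.
std::vector<int> buildPeelingBuffer(int width, int height,
                                    const std::vector<Rect>& windows)
{
    std::vector<int> peel(static_cast<std::size_t>(width) * height, 0);
    for (const Rect& w : windows)
        for (int y = w.y0; y <= w.y1; ++y)
            for (int x = w.x0; x <= w.x1; ++x)
                peel[static_cast<std::size_t>(y) * width + x] += 1;
    return peel;
}

// During ray traversal: the current intersection is kept only if the number
// of intersections seen so far exceeds the pixel's peeling value.
inline bool keepIntersection(int intersectionCount, int peelValue)
{
    return intersectionCount > peelValue;        // otherwise skip it
}
```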
5.3 Voxel Selection

As mentioned above, when ray traversal is done in a voxel-by-voxel manner and the intersections are accurately calculated, the exact intersecting voxel can be identified for each ray intersecting the isosurface. As is shown in Figure 8 (a), a unique ID is assigned to each voxel in a sequential manner. Then, a voxel ID buffer is used to store the intersecting voxel ID of each ray, which can be used for voxel selection after the rendering. When the user clicks or drags the mouse on the image plane, the corresponding voxel IDs can be collected and organized. Figure 8 (b) illustrates a voxel ID buffer, where each small color patch corresponds to one voxel. The color patches are neatly connected together due to the accurate ray-isosurface intersection test.

Voxel selection in a 3D view provides an intuitive and effective interaction tool, which we use for seed selection in isosurface segmentation.
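One possible realization of the sequential voxel IDs and the per-pixel voxel ID buffer is sketched below; the linear ID scheme, the square brush used to gather IDs under the mouse, and the sentinel for pixels with no hit are assumptions made for illustration.

```cpp
#include <cstddef>
#include <cstdint>
#include <set>
#include <vector>

// Sequential voxel ID for the voxel with minimum corner (x, y, z).
inline std::uint32_t voxelID(int x, int y, int z, int dimX, int dimY)
{
    return static_cast<std::uint32_t>(x + dimX * (y + dimY * z));
}

// Per-pixel voxel ID buffer written during rendering; kNoHit marks "no hit".
constexpr std::uint32_t kNoHit = 0xFFFFFFFFu;

// Collect the IDs under a mouse click/drag (a square brush here) into a seed set.
void collectSeeds(const std::vector<std::uint32_t>& idBuffer, int width, int height,
                  int mouseX, int mouseY, int brushRadius,
                  std::set<std::uint32_t>& seeds)
{
    for (int dy = -brushRadius; dy <= brushRadius; ++dy)
        for (int dx = -brushRadius; dx <= brushRadius; ++dx) {
            int px = mouseX + dx, py = mouseY + dy;
            if (px < 0 || py < 0 || px >= width || py >= height) continue;
            std::uint32_t id = idBuffer[static_cast<std::size_t>(py) * width + px];
            if (id != kNoHit) seeds.insert(id);   // sorted, duplicate-free
        }
}
```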
5.4 Isosurface Segmentation

For some complicated medical datasets, segmentation is necessary for volume visualization. Traditionally, volume segmentation methods are used as a preprocessing step, through which a label volume is generated. Then, rendering is done by using different transfer-functions for different volume components. These methods usually have three drawbacks. First, the segmentation is usually performed in slice views or the intensity domain, which is not intuitive. Second, the label volume, in which the segmentation result is stored, has very limited precision; staircase artifacts can frequently be seen in rendering. Third, volume segmentation itself can be very complicated and expensive in computation.

In this part, we propose an isosurface domain segmentation method, in which we only consider the voxels containing the specific isosurface. The idea is that these voxels can be considered as forming an inter-connected network, using the 6-neighborhood, as is shown in Figure 9 (a), where it is the 4-neighborhood in 2D. So the isosurface segmentation can be considered as a graph segmentation problem.

With the voxel selection method described above, the seed points for the foreground and the background can be directly picked from the rendering view. In our implementation, the voxel IDs of the seed points are stored as two "set" structures, which are sorted and optimized for searching.
The graph is then constructed by a breadth-first search from both seed sets. In the graph, the segmentation problem can be defined as finding an optimal cut that separates the two seed sets, which can be solved with the min-cut algorithm after the weight of each link is decided. For the decision of the weights, it is intuitive to consider the geometrically shortest cut as the optimal solution, since it introduces the smallest damage to the original structure. So the weight should be defined as the length of the intersection of the isosurface and the voxel boundary, as is shown in Figure 9 (b), the A-B segment.

If there is only one segment of intersection of the isosurface and the voxel boundary, the length can be simply calculated by a numerical integration. However, with trilinear interpolation in volume space, there can be as many as two segments at most, although such a case is rare. In that case, usually only one of the two segments is of interest during segmentation, but there is no information about which segment it is. So we treat this case by dividing the total length of the two intersection segments by 2.
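As one concrete way to carry out this numerical integration, the sketch below estimates the length of the iso-contour on a single voxel face (the restriction of a trilinear field to an axis-aligned face is bilinear) with a simple marching-squares subdivision. The subdivision resolution, the naive pairing of saddle sub-cells, and the assumption that the weight is accumulated per shared face are illustrative choices; the halving rule for the rare two-segment case would be applied on top of this.

```cpp
#include <array>
#include <cmath>

// Approximate the length of the iso-contour of a bilinearly interpolated face.
// c00, c10, c01, c11 are the scalar values at the four face corners; the face
// is treated as the unit square. N is the marching-squares resolution.
double faceContourLength(double c00, double c10, double c01, double c11,
                         double iso, int N = 32)
{
    auto value = [&](double u, double v) {
        return c00 * (1 - u) * (1 - v) + c10 * u * (1 - v)
             + c01 * (1 - u) * v       + c11 * u * v;
    };
    double length = 0.0;
    for (int i = 0; i < N; ++i)
        for (int j = 0; j < N; ++j) {
            double u0 = double(i) / N, u1 = double(i + 1) / N;
            double v0 = double(j) / N, v1 = double(j + 1) / N;
            // Corner values and corner positions of this sub-cell.
            double f[4]  = { value(u0, v0), value(u1, v0), value(u1, v1), value(u0, v1) };
            double px[4] = { u0, u1, u1, u0 }, py[4] = { v0, v0, v1, v1 };
            std::array<double, 4> cx{}, cy{};
            int nCross = 0;
            for (int e = 0; e < 4; ++e) {
                double a = f[e] - iso, b = f[(e + 1) % 4] - iso;
                if ((a < 0) == (b < 0)) continue;        // no crossing on this edge
                double t = a / (a - b);                   // linear interpolation
                cx[nCross] = px[e] + t * (px[(e + 1) % 4] - px[e]);
                cy[nCross] = py[e] + t * (py[(e + 1) % 4] - py[e]);
                ++nCross;
            }
            // Generic case: one contour segment per sub-cell. Saddle sub-cells
            // (four crossings) are paired in perimeter order as an approximation.
            for (int k = 0; k + 1 < nCross; k += 2)
                length += std::hypot(cx[k + 1] - cx[k], cy[k + 1] - cy[k]);
        }
    return length;   // used as the graph edge weight between the two voxels
}
```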
During seed selection, the current selection status is previewed with a selection based coloring method. Since the seed sets are not likely to be large, they are directly transferred to the graphics memory as sequential lists. During rendering, for each intersection point, according to the voxel ID information, it can be judged whether the voxel is contained in the two lists by binary searches, and the color can be decided accordingly.
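A small host-side sketch of this membership test; in the actual system the same check would run per intersection point on the GPU, and the enum and function names here are hypothetical.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

enum class SeedColor { None, Foreground, Background };

// fg and bg are the sorted sequential seed lists uploaded to graphics memory.
inline SeedColor previewColor(std::uint32_t voxelId,
                              const std::vector<std::uint32_t>& fg,
                              const std::vector<std::uint32_t>& bg)
{
    if (std::binary_search(fg.begin(), fg.end(), voxelId)) return SeedColor::Foreground;
    if (std::binary_search(bg.begin(), bg.end(), voxelId)) return SeedColor::Background;
    return SeedColor::None;   // fall back to the normal isosurface color
}
```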
446 dering the sample dataset with full volume rendering (FVR) and
402 With isosurface segmentation, the IDs of two sets of voxels are ex- 447 color enhance isosurface rendering (CEIR). Table 1 gives the per-
403 tracted, each containing part of the same isosurface, which we call 448 formance comparison of FVR and CEIR. Using color enhanced iso-
404 surface structures. Such a surface structure is not guaranteed to 449 surface rendering, while the essential structures of interest behind
405 be closed, so it cannot substitute volume segmentation. However, 450 an isosurface can be depicted in a similar way as full volume ren-
406 we will show that they can be recombined and rendered in a sin- 451 dering, the computational cost remains at a very low level.
407 gle scene with different coloring settings, which provides valuable
408 visualization results. 452 6.2 Explicit Scene Exploration
409 5.5 Multi-surface-structure Visualization 453 With the explicit scene exploration, a volume dataset can be ex-
454 plored in an intuitive and efficient way. It includes several tech-
455 niques. Within these techniques, surface peeling enables us to ex-
410 To combine the surface structures into a single scene, a label vol- 456 plore structures deep inside the dataset. It can be applied to any iso-
411 ume of 8-bit integer values, ranging from 0 to 255 is used. While 457 surface rendering view. For example, it can be directly combined
412 the value 0 is used to mark the empty voxels, values from 1 to 255 458 with isosurface color enhancement such as the example given in
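The per-structure bookkeeping could be organized as in the following sketch; the sentinel meaning of label 0 and the indexing of the speed-color rows follow the description above, but the struct layout, types, and names are assumptions.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct RGBA { float r, g, b, a; };

struct SurfaceStructure {
    float isovalue;                      // isovalue used when the structure was segmented
    std::vector<RGBA> speedColorRow;     // one line of the 2D speed-color look-up-table
};

// Label 0 marks an empty voxel; labels 1..255 index the surface structures.
struct MultiSurfaceScene {
    std::vector<std::uint8_t> labelVolume;      // dimX * dimY * dimZ entries
    std::vector<SurfaceStructure> structures;   // structures[label - 1]
};

// During ray traversal, the first voxel with a non-zero label selects the
// isovalue and speed-color map used for the intersection test and shading.
inline const SurfaceStructure* structureAt(const MultiSurfaceScene& scene,
                                           std::size_t voxelIndex)
{
    std::uint8_t label = scene.labelVolume[voxelIndex];
    return label == 0 ? nullptr : &scene.structures[label - 1];
}
```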
6 Results

We conducted several experiments on the proposed techniques on a PC equipped with an Intel i7 950 CPU, 4 GB RAM, and an NVIDIA GeForce GTX 480 graphics card. The rendering computation is implemented on the GPU with CUDA.

6.1 Isosurface Color Enhancement

As mentioned before, isosurface color enhancement can be used in two ways, due to the different methods of scalar changing rate estimation. Using the shallow neighborhood approximation, enhanced isosurface rendering can generate a rendering effect very similar to that of full volume rendering with a transfer-function that has a very narrow transitional section. As shown in Figure 11, the visual effect of color enhanced isosurface rendering is very close to that of full volume rendering, while it only takes about 2.6 more milliseconds compared with monotone isosurface rendering.

Figure 11: Isosurface Color Enhancement Example. (a) Ordinary monotone isosurface rendering. (b) Full volume rendering. (c) Color enhanced isosurface rendering. The size of the images is 735 × 556. Rendering kernel execution times per frame: (a) 6.0834 ms, (b) 21.8971 ms, (c) 8.69162 ms.

The capability to reveal internal structures can also be compared with full volume rendering. Figure 12 shows three examples of rendering the sample datasets with full volume rendering (FVR) and color enhanced isosurface rendering (CEIR). Table 1 gives the performance comparison of FVR and CEIR. Using color enhanced isosurface rendering, the essential structures of interest behind an isosurface can be depicted in a similar way as with full volume rendering, while the computational cost remains at a very low level.

Table 1: Kernel execution time per frame for full volume rendering (FVR) and color enhanced isosurface rendering (CEIR).

Dataset | Volume Size | Image Size | Kernel Execution Time per Frame
Mouse | 512 × 512 × 512 | 765 × 592 | FVR: 60.6232 ms; CEIR: 4.66121 ms
Knee | 379 × 229 × 305 | 589 × 488 | FVR: 46.4839 ms; CEIR: 4.21262 ms
Head | 208 × 256 × 225 | 713 × 594 | FVR: 34.7136 ms; CEIR: 4.06191 ms

6.2 Explicit Scene Exploration

With explicit scene exploration, a volume dataset can be explored in an intuitive and efficient way. It includes several techniques. Among these techniques, surface peeling enables us to explore structures deep inside the dataset. It can be applied to any isosurface rendering view; for example, it can be directly combined with isosurface color enhancement, as in the example given in Figure 7 (b).

The other techniques mainly aim at the segmentation and recombination of the structures of interest. Figure 13 gives an example of seed picking and isosurface segmentation, which demonstrates how a tumor is segmented from a brain MRI image. Since the segmentation only works on the voxels containing the isosurface, the segmentation can be done very efficiently. In this example, the
Figure 15: Visualization accuracy comparison of volume segmentation and isosurface segmentation. (a) Rendering a segmented volume with linear boundary filtering. (b) Rendering a surface structure produced by isosurface segmentation.

As shown in Figure 15 (a), if volume segmentation is used, some staircase artifacts are very likely to be introduced even if linear boundary filtering is applied. These artifacts are not seen in Figure 15 (b), where the isosurface segmentation is used.

In some applications like surgical planning, it is an essential task to extract structures of interest and to analyze the spatial relationships between them. Our explicit scene exploration provides a powerful tool to deal with these requirements. As shown in Figure 16, multiple organs are segmented and represented as surface structures, then they are recombined into a single scene for rendering.

7 Discussion

Among all volume visualization techniques, isosurface rendering may be the most intuitive and efficient method. It is consistent with the most widely applied optical model used in b-rep graphics, which considers objects to have a solid silhouette, with the visual features mostly lying on the surface. This model faithfully depicts objects in reality, and the rendering result has a realistic appearance. However, for the same reason, people can argue that isosurface rendering might not have sufficient functionality for visualizing complex volume datasets. Our work shows that such a point of view is very likely to be biased. First, its intuitiveness and simplicity are advantages for better computational performance, which is critical in some interactive applications. Second, although it has some limitations, much can be done to improve them and make isosurface rendering more effective. In this work, we presented a series of techniques to enhance the effectiveness of isosurface rendering and to make it a more powerful tool suitable for more application scenarios.

To bring more information into isosurface visualization, we propose the color enhanced isosurface rendering. Using this method, the color of the isosurface is no longer monotone. It can be considered that the isosurface is painted with a texture, which is generated according to the neighborhood density, in order to reveal the internal structures of the volume data. As such, the representative power of isosurface rendering is very much extended. At the same time, its theoretical groundings can be traced to full volume rendering.

The explicit scene exploration further extends isosurface rendering by allowing physically meaningful structures to be extracted from different isosurfaces. In contrast to direct volume segmentation, the isosurface segmentation method can be more seamlessly integrated into an isosurface rendering system. However, since the extracted surface is not guaranteed to be closed, it cannot be used to replace volume segmentation. Despite this, we hope to extend the segmentation method to volume segmentation by closing the cuts generated in the segmentation. In this way, the applications of the technique may be extended to requirements other than visualization, such as automated structure analysis.

Acknowledgements

References
Amanatides, J., and Woo, A. 1987. A fast voxel traversal algorithm for ray tracing. In Eurographics '87, 3–10.

Ament, M., Weiskopf, D., and Carr, H. 2010. Direct interval volume visualization. Visualization and Computer Graphics, IEEE Transactions on 16, 6, 1505–1514.

Boykov, Y., and Kolmogorov, V. 2004. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. Pattern Analysis and Machine Intelligence, IEEE Transactions on 26, 9, 1124–1137.

Caban, J., and Rheingans, P. 2008. Texture-based transfer functions for direct volume rendering. Visualization and Computer Graphics, IEEE Transactions on 14, 6, 1364–1371.

Correa, C., and Ma, K. 2008. Size-based transfer functions: A new volume exploration technique. Visualization and Computer Graphics, IEEE Transactions on 14, 6, 1380–1387.

Engel, K., Kraus, M., and Ertl, T. 2001. High-quality pre-integrated volume rendering using hardware-accelerated pixel shading. In Proceedings of the ACM SIGGRAPH/EUROGRAPHICS Workshop on Graphics Hardware, ACM, New York, NY, USA, 9–16.

Hadwiger, M., Berger, C., and Hauser, H. 2003. High-quality two-level volume rendering of segmented data sets on consumer graphics hardware. In Visualization, 2003. VIS 2003. IEEE, IEEE, 301–308.

Kniss, J., Kindlmann, G., and Hansen, C. 2002. Multidimensional transfer functions for interactive volume rendering. Visualization and Computer Graphics, IEEE Transactions on 8, 3, 270–285.

Lum, E., Wilson, B., and Ma, K. 2004. High-quality lighting and efficient pre-integration for volume rendering. In Proceedings of the Joint Eurographics-IEEE TVCG Symposium on Visualization 2004 (VisSym04), 25–34.

Marmitt, G., Kleer, A., Wald, I., and Friedrich, H. 2004. Fast and accurate ray-voxel intersection techniques for iso-surface ray tracing. In Proceedings of Vision, Modeling, and Visualization (VMV), 429–435.

Max, N. 1995. Optical models for direct volume rendering. IEEE Transactions on Visualization and Computer Graphics 1, 2 (June), 99–108.

Neubauer, A., Mroz, L., Hauser, H., and Wegenkittl, R. 2002. Cell-based first-hit ray casting. In Proceedings of the Symposium on Data Visualisation 2002, Eurographics Association, Aire-la-Ville, Switzerland, VISSYM '02, 77–ff.

Parker, S., Shirley, P., Livnat, Y., Hansen, C., and Sloan, P.-P. 1998. Interactive ray tracing for isosurface rendering. Visualization Conference, IEEE 0, 233.

Pickhardt, P. 2004. Translucency rendering in 3D endoluminal CT colonography: a useful tool for increasing polyp specificity and decreasing interpretation time. American Journal of Roentgenology 183, 2, 429–436.

Scharsach, H. 2005. Advanced GPU raycasting. In the 9th Central European Seminar on Computer Graphics.

Stegmaier, S., Strengert, M., Klein, T., and Ertl, T. 2005. A simple and flexible volume rendering framework for graphics-hardware-based raycasting. International Workshop on Volume Graphics 0, 187–241.

Xiang, D., Tian, J., Yang, F., Yang, Q., Zhang, X., Li, Q., and Liu, X. 2010. Skeleton cuts-an efficient segmentation method for volume rendering. Visualization and Computer Graphics, IEEE Transactions on, 99, 1–1.