Framework For Interactive Visualization of Ocean Scalar Data
State Key Laboratory of Marine Geology, Tongji University, Shanghai 200092, China; [email protected] (J.Y.);
[email protected] (Z.X.)
* Correspondence: [email protected]
visualization approaches [2–6], as more advanced techniques, can address these limitations
by intuitively representing the geospatial characteristics of large-volume ocean spatiotem-
poral datasets, contributing to an in-depth understanding of ocean water masses and
dynamic changes within ocean environmental phenomena. Therefore, this study focuses on
developing a visualization framework for large-scale three-dimensional ocean scalar data
on both regular and irregular grids, providing efficient data representation and interactive
visual analysis capabilities for oceanographers and stakeholders.
Ocean data visualization approaches are commonly categorized into two-dimensional
and three-dimensional techniques [7]. Traditional two-dimensional data visualization
techniques, typically based on Geographic Information Systems (GISs) in the form of 2D
maps, have been widely adopted to depict and analyze ocean variables [8–10]. However,
using 2D thematic maps to visualize complex data with three-dimensional spatial char-
acteristics presents limitations. In contrast to two-dimensional visualization techniques,
three-dimensional visualization techniques can better convey the overall structure of
water masses and display scattered features as a whole in three-dimensional space,
enabling researchers to explore and comprehend water masses from multiple perspectives
efficiently, thus supporting research in the field of ocean data analysis [11,12]. Driven
by the fast-paced advancements in computer graphics, the scope and sophistication of
ocean data visualization research are continuously evolving, with an increasing focus on
three-dimensional visualization techniques due to their ability to reveal spatiotemporal
variability and complex characteristics of water masses in recent years.
Three-dimensional data visualization encompasses various techniques, among which
volume rendering is regarded as one of the most valuable. The primary techniques of
volume rendering include indirect and direct volume rendering [13]. Indirect volume
rendering, also known as iso-surfacing [14], includes algorithms like marching cubes and
marching tetrahedra, which typically display three-dimensional data as constant-value con-
tour surfaces. These algorithms can effectively capture and represent contour information
of multidimensional volume data while providing advantages in computational efficiency
and resource consumption. However, volume visualization based on a conventional iso-
surface has limitations in demonstrating the complete internal structure of the ocean scalar
data used to characterize water masses. By comparison, direct volume rendering algorithms
can visualize the data as a continuous three-dimensional volume, clearly displaying the
potential features within the volume data. Over the past few decades, volume rendering
has become a key area of research in three-dimensional data visualization, with diverse
applications in many fields, such as medicine [15], geology [16], and meteorology [17].
Most three-dimensional ocean data visualization tools have been developed based
on a sophisticated Client/Server (C/S) architecture, such as those desktop-based three-
dimensional frameworks for an interactive visualization of large-scale ocean data developed
by Lv et al. [18] and Tian et al. [19]. As web technologies continue to advance rapidly, the
Browser/Server (B/S) architecture has increasingly been adopted in many visualization
studies over the traditional C/S architecture. The B/S architecture centralizes the core
part on the server side, offering greater openness, integrability, and shareability while
simplifying the system’s development, maintenance, and usage processes. It has thus
become one of the leading technical solutions for building data visualization frameworks
and systems across various fields [20–23]. WebGL, a JavaScript graphics rendering
Application Programming Interface (API) built on the OpenGL ES standard, is currently
the preferred technology for developing web-based 3D platforms, and numerous studies
have been conducted on three-dimensional interactive ocean data visualization utilizing
WebGL. For instance, Lu et al. [24] used 3D tiles and WebGIS technology to achieve a real-
time online three-dimensional visualization of weather radar data, allowing for visualizing
data from different horizontal planes by setting the vertical interval. Li et al. [25] designed
an adaptive ray casting spherical volume rendering method based on the open-source web
API Cesium.js to express ocean scalar fields. Liu et al. [26] developed a three-dimensional
interactive visualization framework for ocean eddies in web browsers that supports eddy
extraction, tracking, and querying, facilitating cross-platform scientific collaboration. In our
previous work [27], Plotly.js (v1.47.4), an open-source graphical JavaScript library based on
WebGL, was adopted to develop a web-based three-dimensional interactive visualization
framework for time-varying and large-volume ocean forecasting data.
Nonetheless, due to WebGL’s lack of explicit support for parallel processing and
inability to fully utilize modern graphics hardware [28], WebGL-based visualization frame-
works sometimes face difficulties in meeting the gradually growing performance demands.
Challenges remain in visualizing large-scale data efficiently on web platforms [29], par-
ticularly in interactive and real-time visualization scenarios. Therefore, there is a need to
introduce advanced high-performance rendering technologies for smooth and high-quality
three-dimensional visualization and dynamic interactions in a web environment.
As the latest WebGPU technology emerges, these limitations are expected to be over-
come. WebGPU is a JavaScript API that is based on graphics APIs, including Vulkan,
Metal, and Direct3D 12, and it uses WebGPU Shading Language (WGSL) as its shading
language. It enables more direct access to modern multicore GPUs, providing efficient and
flexible graphics programming interfaces for web application developers. Compared to
WebGL, a mature and widely adopted web graphics technology, WebGPU is considered the
next-generation trend for web graphics rendering, offering improvements in performance,
functionality, and design [30], presenting significant advantages in the following aspects:
First, WebGPU effectively reduces the overhead of frequent data transfers between the CPU
and GPU, thereby improving the performance of web-based 3D applications. Additionally,
it provides powerful parallel computing capabilities, along with general-purpose
computation functionality beyond rendering, allowing for the rapid handling of large-scale
computational tasks. Moreover, WebGPU's low-level nature allows developers to
create highly realistic and sophisticated 3D graphics by exploiting more of the GPU's power.
Given these high-performance processing capabilities, WebGPU has been suggested
to become one of the key technologies for developing web-based 3D
platforms in the near future [31]. However, research and applications of WebGPU are still
in their infancy, particularly in the field of ocean data visualization.
As a further improvement of our previous work [27], we propose a three-dimensional
volume rendering framework to implement the efficient and high-quality visualization
of massive ocean scalar data by utilizing WebGPU technology, with the goal of explor-
ing the feasibility and potential applications of WebGPU-based volume rendering in the
oceanographic scientific visualization field. In this work, modeling outputs generated with
various types of grids and different spatial resolutions are first preprocessed with the aim of
enhancing the suitability of the original dataset for subsequent visualization. Then, the ray
casting algorithm, optimized with early ray termination and adaptive sampling methods,
is adopted as the core volume rendering algorithm, with Babylon.js as the rendering engine,
to visualize both large-scale regular and irregular gridded datasets in a web environment.
Moreover, interactive visual analysis tools are also incorporated to enable an in-depth
exploration of the volume data. Finally, several experiments were performed to evaluate
the visual effects and efficiency of the proposed WebGPU-based framework using the
ocean numerical modeling datasets describing the ocean temperature field (temperature
data), flow field (flow velocity magnitude data), and acoustic field (acoustic propagation
loss data).
The remainder of the paper is organized as follows: Section 2 provides a detailed description of the adopted methods and the implementation of the WebGPU-based volume rendering. Section 3 presents and discusses the visual effects of the three-dimensional visualization, as well as conducts comparative analyses of the proposed framework's efficiency. Finally, Section 4 summarizes the conclusion.
2. Methods
2.1. Preprocessing of Regular and Irregular Gridded Ocean Scalar Data
The two datasets used in this study include the simulation results of the ocean environmental parameters and ocean acoustic field, respectively, provided by the South China Sea Institute of Oceanology (SCSIO). The first dataset contains scalar data such as salinity, temperature, and flow velocity magnitude data, using the World Geodetic System 1984 (WGS84) coordinate system. In the horizontal direction, the dataset is structured in regular grids and uniformly distributed with a spatial resolution of 1/60° × 1/60°. In contrast, the vertical grid spacing is uneven, with a resolution of 5 m from 0 m to −500 m and 100 m from −500 m to −2000 m, respectively, for a total of 116 vertical layers. The temporal resolution of the dataset is 1 h, consisting of 121 time steps in total. Figure 1a illustrates the spatial distribution structure of the dataset.
Figure 1. Schematic showing the general structure of the datasets: (a) the ocean environmental parameters forecast dataset; (b) the ocean acoustic field forecast dataset.
The second dataset containing acoustic propagation loss data is structured in irregular grids horizontally. This three-dimensional dataset features a cylindrical grid structure, which is centered around a certain point and consists of multiple circular surfaces. For each horizontal plane, the data points are distributed on concentric circles rather than arranged in typical latitude−longitude grids, with one data point located every 3° along each circle. The maximum radius of the concentric circles is approximately 50 km, with a radial step size of 90 m between adjacent circles. Vertically, the maximum depth of the dataset extends to approximately −3700 m, with a vertical step size of 6 m. Its spatial distribution structure is illustrated in Figure 1b. In particular, to provide a uniform spatial reference for data visualization, the Universal Transverse Mercator (UTM) coordinates of each data point at the $i$-th depth layer, where $i$ ranges from 1 to 630, are first calculated as detailed in Equation (1). Specifically, $\theta$ and $r$ represent the angle and the distance between the data point and the central point, respectively, with $x_{center}$ and $y_{center}$ denoting the UTM coordinates of the central point in the horizontal plane. Then, the calculated UTM coordinates $(x, y, z)$ need to be converted into the WGS84 coordinates (longitude, latitude, altitude) to maintain consistency with the coordinate system of the first dataset.
$$
\begin{cases}
x = \cos\left(\dfrac{\theta\pi}{180}\right) \cdot r + x_{center}\\[4pt]
y = \sin\left(\dfrac{\theta\pi}{180}\right) \cdot r + y_{center}\\[4pt]
z = -6i
\end{cases} \quad (1)
$$
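To make Equation (1) and the subsequent coordinate conversion concrete, the following is a minimal Python sketch using NumPy and PyProj, the libraries used for preprocessing in this work; the UTM zone (EPSG:32650) and the central point are illustrative assumptions rather than values taken from the dataset.

```python
import numpy as np
from pyproj import Transformer

# Illustrative central point in UTM zone 50N (EPSG:32650); the actual zone
# and center of the acoustic dataset are assumptions for this sketch.
x_center, y_center = 350_000.0, 1_050_000.0

theta = np.arange(0.0, 360.0, 3.0)          # one data point every 3 degrees
r = np.arange(90.0, 50_040.0, 90.0)         # 90 m radial steps out to ~50 km
theta_grid, r_grid = np.meshgrid(theta, r)

i = 10                                      # example depth layer index (1..630)
x = np.cos(theta_grid * np.pi / 180.0) * r_grid + x_center   # Equation (1)
y = np.sin(theta_grid * np.pi / 180.0) * r_grid + y_center
z = -6.0 * i                                # 6 m vertical step size

# Convert the UTM coordinates to WGS84 for a uniform spatial reference.
to_wgs84 = Transformer.from_crs("EPSG:32650", "EPSG:4326", always_xy=True)
lon, lat = to_wgs84.transform(x, y)
```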
The two datasets are typically stored in the self-describing binary data format, NetCDF
(Network Common Data Form). Although NetCDF files are ideal for storing scientific
data [32], there are challenges present in efficiently loading large-volume NetCDF files
and rendering them in web browsers. Therefore, in the context of fast and smooth visu-
alization in web-based environments, a more lightweight and compact format is needed.
JSON (JavaScript Object Notation), a readable and web-friendly format with a simple
and text-based syntax rule, is commonly used in web application development for data
preprocessing and exchange tasks due to its low-overhead data storage and transmission
capabilities. To improve the efficiency of further processing, the original NetCDF files are
converted into a series of JSON files according to variables and time dimensions. Moreover,
additional preprocessing steps involving data interpolation and volume texture generation
are carried out to address the technical challenges for volume rendering posed by the
uneven structure of the data grids, making them suitable for shader programming.
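As a sketch of the format conversion step, the snippet below splits one NetCDF file into per-variable, per-time-step JSON files; the variable name `temperature`, the (time, depth, lat, lon) dimension ordering, and the file names are assumptions for illustration, since the schema of the original files is not specified here.

```python
import json
import numpy as np
from netCDF4 import Dataset

# A minimal sketch: split a NetCDF file into per-time-step JSON files.
with Dataset("forecast.nc") as ds:
    var = ds.variables["temperature"]
    for t in range(var.shape[0]):
        layer = np.ma.filled(var[t], np.nan)   # masked land/seabed points -> NaN
        payload = {
            "variable": "temperature",
            "time_step": int(t),
            "shape": list(layer.shape),
            # NaN is not valid JSON, so null-value grid points become null.
            "values": [None if np.isnan(v) else float(v) for v in layer.ravel()],
        }
        with open(f"temperature_t{t:03d}.json", "w") as f:
            json.dump(payload, f)
```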
1. Data interpolation: Concerning the first dataset, interpolation in the vertical direction
is necessary due to its nonuniform depth intervals. We utilize the spline interpolation
method to interpolate the data in the depth range from 0 m to −2000 m with a 5 m
interval, resulting in a total of 401 vertical layers. Similarly, horizontal interpolation is
required for the second dataset. The cubic spline interpolation method is employed
to generate regular latitude and longitude grids, and the horizontal grid size of each
depth layer is 300 × 300 after interpolation, containing a total of 90,000 data points.
2. Volume texture generation: Volume textures, also known as 3D textures, are generally
used to store volume data, enabling color mapping during volume rendering. Volume
textures are a logical extension of traditional 2D textures. Whereas a 2D texture is
typically a picture that provides surface color information for a 3D model, a volume
texture is composed of multiple 2D textures grouped together to describe three-
dimensional spatial data. Volume textures are particularly well suited for shader
programming as they can be efficiently accessed and manipulated in shaders, and
their sizes are compact enough to conserve memory resources consumed during the
visualization process. Therefore, in this study, we ultimately process the data into
volume textures. To generate the volume texture, the data in each JSON file are first
organized into a three-dimensional array with a shape of $(N_{longitude}, N_{latitude}, N_{depth})$,
where N represents the number of data layers in each corresponding dimension. After
mapping the data values to the range of 0 to 255, the data are subsequently converted
into a RAW file since the RAW image format is one of the most commonly used
volume texture file formats. The data mapping process is illustrated in Equation (2),
where v represents the original data value, v′ represents the mapped data value, and
vmin and vmax denote the minimum and maximum data values, respectively.
$$
v' = \frac{(v - v_{min}) \cdot 255}{v_{max} - v_{min}} \quad (2)
$$
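A minimal Python sketch of these two preprocessing steps follows; the placeholder arrays stand in for real JSON contents, and the simple NaN handling (null values mapped to 0) is an assumption about how land points are encoded.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# --- Data interpolation (vertical, first dataset) ---
# Original uneven depths: 5 m spacing from 0 to -500 m, then 100 m spacing
# from -600 to -2000 m, i.e. 116 layers in total.
depths = np.concatenate([np.arange(0.0, -505.0, -5.0),
                         np.arange(-600.0, -2100.0, -100.0)])
target_depths = np.arange(0.0, -2005.0, -5.0)      # 401 uniform 5 m layers

profile = np.random.rand(depths.size)              # placeholder vertical profile
spline = CubicSpline(depths[::-1], profile[::-1])  # CubicSpline needs increasing x
resampled = spline(target_depths[::-1])[::-1]

# --- Volume texture generation (Equation (2)) ---
volume = np.random.rand(300, 300, 401)   # (N_longitude, N_latitude, N_depth)
v_min, v_max = np.nanmin(volume), np.nanmax(volume)
mapped = (volume - v_min) * 255.0 / (v_max - v_min)
mapped = np.nan_to_num(mapped, nan=0.0).astype(np.uint8)
mapped.tofile("volume_texture.raw")      # headerless bytes, read as a 3D texture
```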
2.2. Ray Casting Algorithm Optimized with Early Ray Termination and Adaptive
Sampling Methods
Technically, there are many well-established direct volume rendering algorithms that
have seen extensive applications in visualization research, including ray casting [33,34],
shear-warp [35], texture slicing [36], and splatting [37]. Among these, ray casting, which is
based on image sequences [38], is one of the most widely used volume rendering algorithms
due to its intuitive nature, straightforward implementation, and capability to produce high-
quality visual effects. Thus, we employ it as the core algorithm to visualize the ocean scalar
fields. The fundamental principle of the ray casting algorithm applied in this study is
as follows:
First, for each pixel on the image plane, a ray is emitted from the viewpoint in the direc-
tion towards the pixel and traverses the three-dimensional volume data. Next, equidistant
sampling is performed along the ray with a fixed step size ∆t, as described in Equation (3),
where $p_i$ represents the texture coordinates of the $i$-th sampling point, and $p_{start}$ indicates
the texture coordinates of the entry point where the ray intersects the volume. Additionally,
$\vec{d}$ denotes the direction of the ray, and $t_i$ represents the distance traversed by the ray within
the volume. Subsequently, each sampling point’s voxel value can be obtained by interpo-
lation based on the calculated texture coordinates and the volume texture, which stores
the volume data. The color and opacity values of the sampling point are then determined
according to the voxel value using a specific color mapping method.
$$
\begin{cases}
p_i = p_{start} + t_i\,\vec{d}\\
t_i = i\,\Delta t
\end{cases} \quad (3)
$$
Finally, each pixel’s color and opacity values are composited based on the color and
opacity values of the sampling points in a front-to-back order, as described in Equation (4).
Specifically, $C_i$ represents the color value of the $i$-th sampling point, and $A_i$ denotes its
opacity value. The accumulated color and opacity values at the $i$-th sampling point are
represented as $C_i^{\Delta}$ and $A_i^{\Delta}$, respectively, whereas $C_{i-1}^{\Delta}$ and $A_{i-1}^{\Delta}$ represent the accumulated
values at the previous sampling point. For each pixel, the sampling and calculation
processes are completed once the ray finishes traversing the entire volume data,
generating the color and opacity of the current pixel and ultimately resulting in the final
rendered image.
$$
\begin{cases}
C_i^{\Delta} = \left(1 - A_{i-1}^{\Delta}\right) \cdot A_i C_i + C_{i-1}^{\Delta}\\
A_i^{\Delta} = \left(1 - A_{i-1}^{\Delta}\right) \cdot A_i + A_{i-1}^{\Delta}
\end{cases} \quad (4)
$$
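Equations (3) and (4) translate directly into a marching loop. The reference sketch below is written in Python for readability; in the framework itself this logic runs per pixel inside the WGSL fragment shader, and the nearest-neighbor lookup here stands in for the hardware-interpolated texture fetch.

```python
import numpy as np

def march_ray(volume, p_start, direction, n_steps, dt, color_map):
    """Front-to-back compositing along one ray (Equations (3) and (4)).

    volume: 3D array of voxel values; p_start, direction: texture-space
    entry point and normalized ray direction; color_map: value -> (rgb, alpha).
    """
    shape = np.array(volume.shape)
    acc_color, acc_alpha = np.zeros(3), 0.0
    for i in range(n_steps):
        p_i = p_start + (i * dt) * direction                    # Equation (3)
        idx = tuple(np.clip((p_i * shape).astype(int), 0, shape - 1))
        c_i, a_i = color_map(volume[idx])
        acc_color = (1.0 - acc_alpha) * a_i * c_i + acc_color   # Equation (4)
        acc_alpha = (1.0 - acc_alpha) * a_i + acc_alpha
    return acc_color, acc_alpha
```

A WGSL implementation follows the same accumulation order, with the volume texture sampled via `textureSampleLevel` instead of the array lookup used above.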
Although the ray casting algorithm offers many advantages, it has limitations in terms
of performance. On the one hand, interpolation involving a large number of numerical
operations is employed during rendering, significantly increasing the complexity of compu-
tations. Moreover, whenever the viewpoint changes, such as zooming in, zooming out, or
panning in the volume rendering viewer, the sampling and calculation processes need to be
re-executed to generate the updated volume rendering result, which requires a substantial
computational overhead. As a result, the efficiency of real-time rendering may be affected,
particularly in interactive applications.
Therefore, researchers have proposed various optimization methods to reduce the
computation and memory overhead, such as early ray termination [39], adaptive sam-
pling [40], and octree [41,42]. To achieve a smoother and more real-time visualization, we
employ several methods to improve the efficiency of the ray casting algorithm in this study.
The early ray termination method is first utilized as it is an efficient, straightforward, and
easy-to-integrate optimization strategy. The core principle of early ray termination is that,
as the ray traverses the volume data, the contribution of subsequent sampling to the pixel’s
Appl. Sci. 2025, 15, 2782 7 of 17
final color can be ignored if the accumulated opacity value at a particular sampling point is
already sufficiently large. Once the accumulated opacity value exceeds a specified thresh-
old, the sampling process terminates immediately, avoiding unnecessary computations
and reducing the overall computational load. Furthermore, since seabed topography and
islands typically exist within the ocean modeling computational domains, a large number
of grid points are assigned null values, generating continuous null-value regions where
the opacity of the data points is zero. Given this characteristic of the ocean scalar data,
this study also employs an adaptive sampling method to leap through the null-value space quickly.
As shown in Equation (5), if the absolute value of the difference between $A_i^{\Delta}$ and $A_{i-1}^{\Delta}$ is
less than the sufficiently small pre-set threshold $\epsilon$ ($\epsilon > 0$), the ray is considered to have
entered the null-value region, and the sampling step size is appropriately increased. This
optimization can efficiently accelerate the sampling process while preserving the original
high-quality visual effects.
$$
\left| A_i^{\Delta} - A_{i-1}^{\Delta} \right| < \epsilon \quad (5)
$$
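Both optimizations amount to a few extra lines in the marching loop sketched above. The variant below uses an opacity threshold of 0.98 and a 1.5× step enlargement, the values reported in the implementation description later in this section; the value of $\epsilon$ is a placeholder.

```python
import numpy as np

def march_ray_optimized(volume, p_start, direction, t_exit, dt, color_map,
                        epsilon=1e-4, alpha_threshold=0.98):
    """Ray marching with early ray termination and adaptive sampling."""
    shape = np.array(volume.shape)
    acc_color, acc_alpha, prev_alpha = np.zeros(3), 0.0, 0.0
    t, step = 0.0, dt
    while t < t_exit:
        p_i = p_start + t * direction
        idx = tuple(np.clip((p_i * shape).astype(int), 0, shape - 1))
        c_i, a_i = color_map(volume[idx])
        acc_color = (1.0 - acc_alpha) * a_i * c_i + acc_color
        acc_alpha = (1.0 - acc_alpha) * a_i + acc_alpha
        if acc_alpha > alpha_threshold:        # early ray termination
            break
        # Equation (5): a negligible opacity change signals a null-value
        # region, so the step is enlarged to leap through empty space.
        step = 1.5 * dt if abs(acc_alpha - prev_alpha) < epsilon else dt
        prev_alpha = acc_alpha
        t += step
    return acc_color, acc_alpha
```

In the WGSL shader, the same two checks are expressed with a `break` and a step multiplier inside the marching loop.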
filtering out the non-relevant data and displaying the relevant portion, enabling a
more targeted analysis of ocean variables.
6. Time-series animation playback: This feature is especially useful for representing the
dynamic processes and changes in ocean environmental phenomena over time. Users
can specify a time range of interest and visualize the corresponding time-varying
volume data in an animation format.
Figure 2. Main workflow of the WebGPU-based volume rendering framework for interactive visualization of large-scale ocean scalar data.
1. Data preprocessing and loading: The data preprocessing tasks are conducted using Python scripts, which run on the server side and are automatically triggered when new datasets are available. The conversion from NetCDF files to JSON files is first performed using the netCDF4 library. Then, after the JSON files are initially read using the NumPy library, the PyProj library is employed to transform the coordinates of the ocean acoustic field forecast dataset from the UTM coordinate system to the WGS84 coordinate system. Data interpolation for both datasets is then carried out using the SciPy library. Finally, using the NumPy library, all datasets are reorganized into RAW files, which serve as volume textures for subsequent rendering. Specifically, the processed files are dynamically retrieved on demand according to front-end requests without the need to load all data from the server at once (a minimal server-side sketch is given after this list).
2. Initialization of the volume rendering viewer: First, a blank canvas is established as the carrier for displaying the rendering results, and the WebGPU engine is initialized. A 3D scene is then constructed to serve as the container for all graphic objects, with the engine handling the task of rendering them. To facilitate a flexible exploration of the volume rendering results from different viewpoints, a camera is created in the scene and configured to support the rotation, zooming, and panning of the results through interactive actions such as mouse drags and keyboard inputs. Additionally, detailed legend information, including elements such as the title, colorbar, unit, and numerical labels, is also attached to the viewer, providing contextual information for the data visualization.
3. Building vertex and fragment shaders: WGSL is employed to build vertex and fragment shaders for volume rendering, and a corresponding material is generated based on these shaders. In the vertex shader, coordinates are transformed from the local system to the global system, and the direction vector between the viewpoint and each position is calculated and passed to the fragment shader. In the fragment shader, for each pixel, a ray originating from the viewpoint, with its direction determined by the input direction vector, traverses the volume data. Next, the entry and exit points where the ray intersects the volume are calculated, and adaptive sampling is performed from the entry point to the exit point along the ray. The basic sampling step size is initialized by users but is increased to 1.5 times the user-defined value when the ray enters the null-value region. The values of the sampling points are retrieved from the current volume texture and mapped to corresponding colors based on a color texture. Each pixel's color and opacity values are continuously calculated until the accumulated opacity value exceeds the specified threshold of 0.98 or the ray reaches the exit point. Finally, a box mesh is created in the scene, with its material set to the customized material, generating the final volume rendering result.
4. Integration of interactive visual analysis tools: The developed framework integrates
a set of interactive visual analysis tools for an in-depth exploration of the multidi-
mensional volume datasets, enabling a dynamic adjustment of multiple rendering
parameters. When users interact with the tools, the framework maps the user inputs
via sliders, dropdown menus, or checkboxes to corresponding rendering parameters
and automatically updates the fragment shader with the new parameters, triggering
real-time re-rendering. These tools encompass a range of functionalities, including
data source selection and switching, a basic rendering parameter setting, bounding
box and axes overlaying, volume cutting, spatial data filtering based on value, and
time-series animation playback.
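As a sketch of the on-demand retrieval described in step 1 above, a server-side endpoint might look as follows; the paper does not name a web framework, so Flask, the URL scheme, and the file layout are assumptions for illustration.

```python
from pathlib import Path
from flask import Flask, abort, send_file

app = Flask(__name__)
DATA_DIR = Path("/data/volume_textures")   # illustrative location of the RAW files

@app.route("/volumes/<variable>/<int:time_step>")
def get_volume(variable: str, time_step: int):
    """Serve one preprocessed volume texture on demand, so the front end
    never downloads the full multi-variable, multi-time-step dataset at once."""
    raw_file = DATA_DIR / f"{variable}_t{time_step:03d}.raw"
    if not raw_file.is_file():
        abort(404)
    return send_file(raw_file, mimetype="application/octet-stream")
```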
Figure 3. Volume rendering results of regular gridded data: (a) the visualization of the complete temperature data with a transparency value of 0 and a basic sampling step size of 0.2; (b) the visualization of the complete flow velocity magnitude data with a transparency value of 0.1 and a basic sampling step size of 0.4.
Figure 4. Volume rendering results of irregular gridded data: (a) the visualization of the complete acoustic propagation loss data with a transparency value of 0 and a basic sampling step size of 0.1; (b) the visualization of the same data with a transparency value of 0.3 and a basic sampling step size of 0.3.
As demonstrated in this experiment, the framework can effectively present the complex three-dimensional structures of various volume datasets and enable an insightful visual exploration of large-scale ocean scalar data. By adjusting and optimizing the rendering parameters, the framework can achieve ideal visual effects, accurately revealing the internal features of ocean scalar fields. Figure 3a illustrates the gradual decrease in temperature of the water mass with increasing depth, indicating that surface seawater has a higher temperature than the water at deeper levels in the modeling computational domain. Figure 3b shows the spatial arrangement of regions with different flow velocity magnitudes, depicting the flow characteristics of the water mass at different locations. Meanwhile, as illustrated in Figure 4, the propagation attenuation pattern of the acoustic field in a complex marine environment is effectively demonstrated, and a clear representation of the spatial distribution of acoustic wave energy is provided. By comparing Figures 3a and 4a with Figures 3b and 4b, it is evident that the internal spatiotemporal distribution characteristics of the water masses can be more clearly observed by changing the transparency parameter, which is inversely related to the opacity of each data point, in combination with an appropriate colorbar, while the display of details can be controlled by modifying the sampling step size. The experimental results indicate that the proposed framework can provide both a clear and accurate data representation while delivering high-quality visual effects, offering a comprehensive web-based visualization solution.
Additionally, we conducted an experiment on the developed interactive visual analysis tools to demonstrate their capacities. Figure 5 illustrates the results of the interactive visual analysis of the ocean scalar data.
Figure 5. Interactive visualization results: (a) the flow velocity magnitude data cut along the latitude of 9.127° N; (b) the flow velocity magnitude data cut at a vertical depth of −175 m; (c) the acoustic propagation loss data cut along the longitude of 115.589° E and the latitude of 9.749° N; (d) the ROI of the acoustic propagation loss data filtered with the data values ranging from 175.901 dB to 313.694 dB. Each red box in the subfigures indicates the key corresponding interactive visual analysis tool.
Figure 5a,b show the volume rendering results of the flow velocity magnitude data, cut along a single direction with vertical and horizontal cutting planes using the volume cutting tool, respectively. Figure 5c further demonstrates the multidirectional cutting capability of this tool, enabling a multifaceted spatial exploration of the acoustic propagation loss data. Unlike traditional slicing methods, the proposed functionality allows for the observation of both the profile and the preserved data block, enabling users to intuitively identify and analyze valuable information on the cutting plane as well as within the data block. Figure 5d presents the volume data within the ROI, filtered with a user-specified value range using the value-based spatial data filtering tool, highlighting the acoustic propagation loss data that users aim to visualize. This tool can effectively isolate the desired feature, enabling a better understanding of the ocean environmental phenomena of interest. The experimental results indicate that these user-friendly interactive visual analysis tools can allow users to gain scientific insights into ocean numerical modeling outputs through straightforward interactive operations, providing a more customizable and flexible data visualization experience.
3.2. Performance Analysis of Volume Rendering
Several experiments were conducted in this section to evaluate the performance of the proposed WebGPU-based volume rendering framework. During the experiments, the maximum memory usage of our framework was approximately 700 MB, which is within an acceptable range for most modern hardware configurations. As this work is an enhancement of our previous research [27], the first experiment aims to compare the performance of the visualization solution proposed in this study with the one proposed in our previous work, which was based on the Plotly.js graphic library, with the texture slicing algorithm applied to render the volume data as translucent image stacks. It is important to emphasize that the number of temperature data points used in the visualization solution proposed in this study is approximately three times that of the one proposed in our previous work. Similarly, for the acoustic propagation loss data, the number of data points is approximately 1.3 times that of the previous visualization solution. Figure 6 shows the rendering time for the volume data using different visualization solutions, respectively.
Figure 6. Rendering time comparison of volume rendering of the temperature and acoustic propagation loss data between the visualization solution proposed in this study and the one proposed in our previous work.
As observed during the experiment, there is a noticeable difference in the rendering time of both visualization solutions. As illustrated in Figure 6, the proposed visualization solution renders the volume data significantly faster than the one in our previous work. The visualization solution developed in our previous work presents limitations in processing large datasets and may suffer from slow responses during interactive operations. In contrast, the visualization solution proposed in this study significantly improves the loading speed of volume rendering, reducing the rendering time by a factor of approximately 11.7 for the temperature data and 5.7 for the acoustic propagation loss data. The experiment demonstrates that the visualization solution based on Babylon.js and WebGPU proposed in this study can enable efficient three-dimensional visualization and responsive real-time interactions, and it is a feasible solution for visualizing and analyzing large-scale gridded data in a web environment.
To further evaluate the performance advantages of WebGPU, a WebGL-based volume rendering framework is developed with Babylon.js and the same ray casting algorithm, offering the same functionalities as the WebGPU version for comparative analysis. In the second experiment, we compare both versions' frame rate and GPU utilization using the same rendering parameters and data, as shown in Figure 7.
Figure 7. Performance comparison of volume rendering of the temperature and acoustic propagation loss data between the WebGPU and WebGL versions based on Babylon.js: (a) frame rate comparison; (b) GPU utilization comparison.
Figure 7a shows an increase in the frame rate of approximately 40 frames per second (fps) when rendering the temperature data (5,487,685 data points) with the WebGPU version. For rendering the acoustic propagation loss data (56,700,000 data points), the improvement is approximately 25 fps. The results suggest that the WebGPU version achieves a higher frame rate than the WebGL version, indicating that the WebGPU-based visualization solution provides a smoother visualization experience than the WebGL-based one. In Figure 7b, it is evident that the GPU utilization of the WebGPU version is higher than that of the WebGL version during the rendering process, suggesting that WebGPU has the ability to utilize the power of the GPU more effectively. Thus, the WebGPU-based volume rendering framework can ensure immediate feedback in a web environment. The experimental results further confirm the performance advantages and excellent potential of WebGPU in large-scale three-dimensional data visualization, especially in visualization tasks requiring high real-time performance and interactivity. We can conclude that visualization solutions based on WebGPU are expected to play an important role in the field of three-dimensional visualization.
4. Conclusions
The challenges in the real-time interactive visualization of large-volume heterogeneous scalar datasets in a web environment have motivated our design and implementation of the WebGPU-based volume rendering framework for an interactive visualization of large-scale ocean scalar data. The framework proposed in this study first applies the ray casting algorithm, optimized with early ray termination and adaptive sampling methods, as the core algorithm for volume rendering. Next, by completing the initialization of the volume rendering viewer and building vertex and fragment shaders with the Babylon.js 3D rendering engine and the latest WebGPU technology, the framework successfully achieves efficient three-dimensional volume rendering of massive gridded data, which are preprocessed through steps including format conversion, data interpolation, and volume texture generation from complex regular and irregular gridded ocean numerical modeling datasets. It also integrates interactive visual analysis functionalities, including volume cutting, value-based spatial data filtering, and time-series animation playback, supporting visualization with different configurations of parameters. Finally, the WebGPU-based
Author Contributions: Conceptualization, J.Y. and R.Q.; methodology, J.Y.; software, J.Y.; validation,
J.Y.; data curation, Z.X.; writing—original draft preparation, J.Y.; writing—review and editing, J.Y.
and R.Q.; supervision, R.Q. and Z.X.; project administration, R.Q.; funding acquisition, R.Q. All
authors have read and agreed to the published version of the manuscript.
Funding: This research was funded by the National Key Research and Development Program of
China (2021YFC2800500).
Data Availability Statement: The data are not publicly available due to confidentiality agreements
related to the project. Access to the data should be requested by contacting the corresponding author,
who will require a detailed explanation of the intended use of the data.
Acknowledgments: The authors are grateful to Rufu Qin for his support in the theoretical aspects
of this study. The authors thank Zhounan Xu for his help in data curation and preprocessing. The
authors thank Rufu Qin for his help in reviewing and editing this paper. We also thank the reviewers
and editors for their suggestions to improve the quality of this paper.
References
1. Huang, D.; Zhao, D.; Wei, L.; Wang, Z.; Du, Y. Modeling and Analysis in Marine Big Data: Advances and Challenges. Math. Probl.
Eng. 2015, 2015, 384742. [CrossRef]
2. Wang, Y.; Li, F.; Zhang, B.; Li, X. Development of a component-based interactive visualization system for the analysis of ocean
data. Big Earth Data 2022, 6, 219–235. [CrossRef]
3. Buck, V.; Stäbler, F.; González, E.; Greinert, J. Digital Earth Viewer: A 4D Visualisation Platform for Geoscience Datasets. In
Proceedings of the 9th Workshop on Visualisation in Environmental Sciences (EnvirVis), Virtual Event, Switzerland, 14 June 2021;
The Eurographics Association: Eindhoven, The Netherlands, 2021; pp. 33–37.
4. Liu, S.; Chen, G.; Yao, S.; Tian, F.; Liu, W. A framework for interactive visual analysis of heterogeneous marine data in an
integrated problem solving environment. Comput. Geosci. 2017, 104, 20–28. [CrossRef]
5. Zheng, H.; Shao, Q.; Chen, J.; Shan, Y.; Qin, X.; Ma, J.; Xu, X. LIC color texture enhancement algorithm for ocean vector field data
based on HSV color mapping and cumulative distribution function. Acta Oceanol. Sin. 2022, 41, 171–180. [CrossRef]
6. Shi, Q.; Ai, B.; Wen, Y.; Feng, W.; Yang, C.; Zhu, H. Particle System-Based Multi-Hierarchy Dynamic Visualization of Ocean
Current Data. ISPRS Int. J. Geo-Inf. 2021, 10, 667. [CrossRef]
7. Xie, C.; Li, M.; Wang, H.; Dong, J. A survey on visual analysis of ocean data. Vis. Inform. 2019, 3, 113–128. [CrossRef]
8. Fang, G.; Wang, D.; Huang, H.; Chen, J. A WebGIS system on the base of satellite data processing system for marine application.
In Proceedings of the Remote Sensing for Environmental Monitoring, GIS Applications, and Geology VII, Florence, Italy, 17–20
September 2007; pp. 562–570.
9. Spondylidis, S.; Topouzelis, K.; Kavroudakis, D.; Vaitis, M. Mesoscale Ocean Feature Identification in the North Aegean Sea with
the Use of Sentinel-3 Data. J. Mar. Sci. Eng. 2020, 8, 740. [CrossRef]
10. Zhang, Y.; Li, G.; Yue, R.; Liu, J.; Shan, G. PEViz: An in situ progressive visual analytics system for ocean ensemble data. J. Vis.
2023, 26, 423–440. [CrossRef]
11. Gao, X.; Zhang, T. Three dimensional visualization analysis for marine field data based on 3D-GIS. In Proceedings of the 6th
International Symposium on Digital Earth: Models, Algorithms, and Virtual Reality, Beijing, China, 9–12 September 2009;
pp. 331–337.
12. Su, T.; Cao, Z.; Lv, Z.; Liu, C.; Li, X. Multi-Dimensional visualization of large-scale marine hydrological environmental data. Adv.
Eng. Softw. 2016, 95, 7–15. [CrossRef]
13. Xu, C.; Sun, G.; Liang, R. A survey of volume visualization techniques for feature enhancement. Vis. Inform. 2021, 5, 70–81.
[CrossRef]
14. El Seoud, M.S.A.; Mady, A.S. A comprehensive review on volume rendering techniques. In Proceedings of the 8th International
Conference on Software and Information Engineering, Cairo, Egypt, 9–12 April 2019; pp. 126–131.
15. Zhang, Q.; Eagleson, R.; Peters, T.M. Volume Visualization: A Technical Overview with a Focus on Medical Applications. J. Digit.
Imaging 2011, 24, 640–664. [CrossRef] [PubMed]
16. Liu, R.; Guo, H.; Yuan, X. Seismic structure extraction based on multi-scale sensitivity analysis. J. Vis. 2014, 17, 157–166. [CrossRef]
17. Song, Y.; Ye, J.; Svakhine, N.; Lasher-Trapp, S.; Baldwin, M.; Ebert, D. An Atmospheric Visual Analysis and Exploration System.
IEEE Trans. Vis. Comput. Graph. 2006, 12, 1157–1164. [CrossRef]
18. Lv, T.; Fu, J.; Li, B. Design and Application of Multi-Dimensional Visualization System for Large-Scale Ocean Data. ISPRS Int. J.
Geo-Inf. 2022, 11, 491. [CrossRef]
19. Tian, F.; Mao, Q.; Zhang, Y.; Chen, G. i4Ocean: Transfer function-based interactive visualization of ocean temperature and salinity
volume data. Int. J. Digit. Earth 2021, 14, 766–788. [CrossRef]
20. Ates, O.; Appukuttan, S.; Fragnaud, H.; Fragnaud, C.; Davison, A.P. NeoViewer: Facilitating reuse of electrophysiology data
through browser-based interactive visualization. SoftwareX 2024, 26, 101710. [CrossRef]
21. Diblen, F.; Hendriks, L.; Stienen, B.; Caron, S.; Bakhshi, R.; Attema, J. Interactive Web-Based Visualization of Multidimensional
Physical and Astronomical Data. Front. Big Data 2021, 4, 626998. [CrossRef]
22. Chen, T.-T.; Sun, Y.-C.; Chu, W.-C.; Lien, C.-Y. BlueLight: An Open Source DICOM Viewer Using Low-Cost Computation
Algorithm Implemented with JavaScript Using Advanced Medical Imaging Visualization. J. Digit. Imaging 2023, 36, 753–763.
[CrossRef]
23. Fan, D.; Liang, T.; He, H.; Guo, M.; Wang, M. Large-Scale Oceanic Dynamic Field Visualization Based on WebGL. IEEE Access
2023, 11, 82816–82829. [CrossRef]
24. Lu, M.; Wang, X.; Liu, X.; Chen, M.; Bi, S.; Zhang, Y.; Lao, T. Web-Based real-time visualization of large-scale weather radar data
using 3D tiles. Trans. GIS 2021, 25, 25–43. [CrossRef]
25. Li, W.; Liang, C.; Yang, F.; Ai, B.; Shi, Q.; Lv, G. A Spherical Volume-Rendering Method of Ocean Scalar Data Based on Adaptive
Ray Casting. ISPRS Int. J. Geo-Inf. 2023, 12, 153. [CrossRef]
26. Liu, L.; Silver, D.; Bemis, K. Visualizing Three-Dimensional Ocean Eddies in Web Browsers. IEEE Access 2019, 7, 44734–44747.
[CrossRef]
27. Qin, R.; Feng, B.; Xu, Z.; Zhou, Y.; Liu, L.; Li, Y. Web-based 3D visualization framework for time-varying and large-volume
oceanic forecasting data using open-source technologies. Environ. Model. Softw. 2021, 135, 104908. [CrossRef]
28. Usta, Z. Webgpu: A New Graphic Api for 3D Webgis Applications. In Proceedings of the 8th International Conference on
GeoInformation Advances, Istanbul, Turkey, 11–12 January 2024; pp. 377–382.
29. Wang, Z.; Yang, L. Performance optimization methods for large scene in WebGL. In Proceedings of the 6th International
Conference on Computer Information Science and Application Technology (CISAT 2023), Hangzhou, China, 26–28 May 2023;
pp. 1360–1365.
30. Chickerur, S.; Balannavar, S.; Hongekar, P.; Prerna, A.; Jituri, S. WebGL vs. WebGPU: A Performance Analysis for Web 3.0. Procedia
Comput. Sci. 2024, 233, 919–928. [CrossRef]
31. Yu, G.; Liu, C.; Fang, T.; Jia, J.; Lin, E.; He, Y.; Fu, S.; Wang, L.; Wei, L.; Huang, Q. A survey of real-time rendering on Web3D
application. Virtual Real. Intell. Hardw. 2023, 5, 379–394. [CrossRef]
32. Rew, R.; Davis, G. NetCDF: An interface for scientific data access. IEEE Comput. Graph. Appl. 1990, 10, 76–82. [CrossRef]
33. Feng, C.; Qin, T.; Ai, B.; Ding, J.; Wu, T.; Yuan, M. Dynamic typhoon visualization based on the integration of vector and scalar
fields. Front. Mar. Sci. 2024, 11, 1367702. [CrossRef]
34. Jia, Z.; Chen, D.; Wang, B. Research on Improved Ray Casting Algorithm and Its Application in Three-Dimensional Reconstruction.
Shock Vib. 2021, 2021, 8718523. [CrossRef]
35. Qu, N.; Yan, Y.; Cheng, T.; Li, T.; Wang, Y. Construction of Underground 3D Visualization Model of Mining Engineering Based on
Shear-Warp Volume Computer Rendering Technology. Mob. Inf. Syst. 2022, 2022, 8472472. [CrossRef]
36. Zhang, X.; Yue, P.; Chen, Y.; Hu, L. An efficient dynamic volume rendering for large-scale meteorological data in a virtual globe.
Comput. Geosci. 2019, 126, 1–8. [CrossRef]
37. Ess, E.; Sun, Y. Visualizing 3D vector fields with splatted streamlines. In Proceedings of the Visualization and Data Analysis 2006,
San Jose, CA, USA, 15–19 January 2006.
38. Levoy, M. Display of surfaces from volume data. Comput. Des. 1988, 20, 29–37. [CrossRef]
39. Weiler, M.; Kraus, M.; Merz, M.; Ertl, T. Hardware-Based ray casting for tetrahedral meshes. In Proceedings of the IEEE Conference
on Visualization, Seattle, WA, USA, 19–24 October 2003; pp. 333–340.
40. Wang, H.; Xu, G.; Pan, X.; Liu, Z.; Lan, R.; Luo, X.; Zhang, Y. A Novel Ray-Casting Algorithm Using Dynamic Adaptive Sampling.
Wirel. Commun. Mob. Comput. 2020, 2020, 8822624. [CrossRef]
41. Wang, J.; Bi, C.; Deng, L.; Wang, F.; Liu, Y.; Wang, Y. A composition-free parallel volume rendering method. J. Vis. 2021, 24,
531–544. [CrossRef]
42. Jing, G.; Song, W. An octree ray casting algorithm based on multi-core cpus. In Proceedings of the International Symposium on
Computer Science and Computational Technology, ISCSCT, Shanghai, China, 20–22 December 2008; pp. 783–787.
43. Keil, J.; Edler, D.; Schmitt, T.; Dickmann, F. Creating Immersive Virtual Environments Based on Open Geospatial Data and Game
Engines. KN J. Cartogr. Geogr. Inf. 2021, 71, 53–65. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.