Article
Efficacy of Virtual Preparation Simulators Compared to
Traditional Preparations on Phantom Heads
Lea Stoilov 1,†, Fabian Stephan 1,†, Helmut Stark 1, Norbert Enkling 2, Dominik Kraus 1 and Milan Stoilov 1,*
1 Department of Prosthodontics, Preclinical Education and Dental Materials Science, University of Bonn,
53111 Bonn, Germany
2 Department of Reconstructive Dentistry and Gerodontology, School of Dental Medicine, University of Bern,
3012 Bern, Switzerland
* Correspondence: [email protected]; Tel.: +49-228-287-22436
† These authors contributed equally to this work.
Abstract: Background: Virtual simulators are increasingly being introduced in dental education. This
study investigates whether virtual simulators offer comparable or superior educational efficacy when
compared to traditional phantom simulators. Materials and Methods: Participants were randomly
allocated into groups: Virtual Preparation (SIM; n = 30) and Traditional Preparation (FRA; n = 30).
Students were tasked with preparing tooth 36 for a full-cast crown during free practice for four days.
Faculty staff provided feedback to both groups. Examinations were administered and graded by
three examiners (preclinical and clinical consultants and a dental surgery consultant). Additionally, a
survey was conducted to assess each training concept. Results: The FRA group achieved significantly
better grades in the preparation exam evaluations by all three examiners, compared to the SIM group.
Interrater reliability showed only moderate agreement, with the clinical examiner giving better grades
than the other two. The questionnaire results indicate that while participants managed with the
virtual system, they preferred the analog system for exams and patient preparation. Conclusion:
Virtual simulators appear to be less effective for practicing for a preparation exam or for clinical
preparation, especially for inexperienced students. However, they still appear to be useful as
an additional tool for introducing students to the topic of tooth preparation.
Teacher™ (KaVo© Dental, Biberach, Germany), have been implemented. These systems are
seen as software extensions within the corresponding scanning system, assisting students
in learning preparation techniques and, additionally, practicing digital impressions. In the
context of Computer-Assisted Learning (CAL), these systems can highlight initial gross
errors without requiring contact with faculty staff [1,2]. Students can thus learn autonomously
through self-directed learning and engage with faculty staff only for further inquiries once
they reach a higher skill level. Park et al. [3] described them as “useful” for bridging the gap between
immediate, objective, and, particularly, visual feedback on their deficiencies [4,5]. Rosen-
berg et al. [6] found that computer-assisted training is even superior, or at least equivalent,
to traditional methods. In a previous study by the authors, it was also demonstrated
that these systems appear to be equivalent, with the limitation that they cannot provide
feedback on how to approach tooth preparation, problem-solving, and ergonomics [7].
Although traditional phantom heads still represent the gold standard in training and
have been supplemented by modern self-evaluation systems, they cannot precisely simulate
the patient cases encountered in clinical practice. In addition, anatomical structures such
as a moving tongue can only be inadequately mimicked. Furthermore, dental education
standards expect the teaching of increasingly complex procedures or competencies that
can no longer be adequately taught using phantom heads, and might require the use of
modern digital systems [8].
Virtual systems (Haptic Virtual Reality Simulation, HVRS), such as the “Dente” (SIM-
toCARE, Vreeland, The Netherlands), can now virtually simulate most dental treatments.
This includes practicing simple measures as well as operative or prosthetic treatment
steps. Additionally, implant placement or even local anesthesia can now be practiced.
Furthermore, patient-specific cases can be incorporated after digitalization and practiced.
This is particularly useful for simulating treatment procedures in advance, before they are
performed on real patients. Undergraduate students initially learn operative exercises,
such as preparing cavities and, especially, teeth for receiving full crowns. Students could
gain their first experience in handling the handpiece and dental mirror. All of this occurs in
a more playful environment, one similar to a video game. They are gradually introduced to
dental practice through the virtual systems, starting with simple, less complex exercises for
hand–eye coordination. This approach could offer potential advantages for beginners, as
initial attempts to adequately prepare a tooth are often challenging for both students and
supervising faculty. The load on the faculty staff could be reduced at this level, as students
autonomously perform appropriate exercises and receive virtual feedback. Some studies
support virtual simulators as supplements in the preclinical environment, as students
seem to perform well with a virtual simulator. However, these systems cannot yet replace
phantom heads [9–11].
In universities and dental education, traditional phantom heads with plastic models
and teeth continue to be regarded as the gold standard. They are used both in practical
courses and in corresponding exams or state examinations. However, practice time is often
very limited and restricted to class times, and support is provided by the faculty staff.
Students often criticize this situation and demand the opportunity for free practice time [3].
Both points require additional staffing, which in turn increases costs. The possibility of
practicing autonomously in a dental simulation laboratory that is accessible around the
clock could provide a solution. Students would log in with a personalized identifier,
allowing tracking of which student worked at what time and for how long. HVRS systems
could be used for this purpose and enable free practice. The aspect of sustainability should
also be considered. HVRS systems generate less plastic waste, require no compressed air
and no suction, and do not waste water that subsequently needs to be treated.
This study aims to examine whether HVRS systems can be considered to be equivalent
to phantom heads. Specifically, we investigated their effectiveness in teaching crown
preparation and preparing students in an auto-didactic environment (free practice time) for
a crown preparation exam.
Figure 2. “Dente” simulator by SIMtoCARE (Vreeland, The Netherlands) with tablet, phantom head, and handpiece.
During free practice, students worked under realistic conditions with neighboring
teeth. To ensure equal opportunities, the FRA group had unlimited access to consumables
such as practice teeth, neighboring teeth, and gloves, allowing for an unlimited number of
repetitions. Throughout the practice period, all participants could seek constructive feed-
back and suggestions for improvement from the preclinical course staff as often as needed.
Additionally, participants could always view a plastic model of the master preparation of
tooth 36 to spatially visualize the improvement suggestions.
Table 1. Criteria assessed for evaluation of teeth prepared during the exam. Examiners had the option
to rate with “+” (yes), “−” (no) or “0” (neutral). A total grade from 35–0 points, based on responses
rated 1–5, could be awarded.
Table 2. Questions handed out to the participants at the end of the study. Participants had the
opportunity to choose between “1” (strongly agree) and “5” (strongly disagree).
A: “Operating the Dente or Phantom simulator was simple”
B: “Operating the Dente or the Phantom simulator was complex”
C: “Practicing on the Dente or the Phantom simulator improved my skills”
D: “In the phantom course, I would like to continue training with the system I used in this study”
E: “The training system I used has prepared me well for the preparation exam”
F: “With the training system and sufficient practice, I feel capable of preparing a tooth on a patient”
G: “Please evaluate the training system you used”
H: “What would you like to improve?”
3. Results
3.1. Group FRA versus Group SIM
All 60 participants completed the preliminary exercise, the free practice sessions, the
final exam, and the evaluation form. No drop-outs were documented. Table 3 shows the
scores given by the three evaluators and the score distribution within each group (mean,
SD, and median). The preclinical evaluator (Prosthodontics, VK) graded the participants
in the Frasaco group (FRA) with a median score of 2 (“good”; Mean: 2.93 ± 1.26), which
was better than those in the SIMtoCARE group (SIM), who received a median score of
4.5 (“sufficient”; Mean: 4.30 ± 0.85). Similarly, the clinical evaluator (Prosthodontics, KL)
gave comparable grades, with a median score of 2 (“good”; Mean: 2.20 ± 1.24) for the
FRA group and 4 (“sufficient”; Mean: 3.60 ± 1.43) for the SIM group. The oral surgery
evaluator (Surgery, OC) rated both groups one grade lower on average than the other two
raters. He assigned a median score of 3 (“satisfactory”; Mean: 3.0 ± 1.23) for the FRA group
and 5 (“poor”; Mean: 4.2 ± 1.06) for the SIM group. No normal distribution of the grades was
assumed for the statistical analysis. The evaluation from each examiner showed significantly better
scores in the Frasaco group (VK: p < 0.0001; KL: p = 0.0003; OC: p = 0.0002) when applying
the Mann–Whitney U test (Table 4 and Figure 3).
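To illustrate the type of rank-based comparison reported in Table 4, the following Python sketch applies a Mann–Whitney U test to two hypothetical grade lists and derives the effect size r after Cohen from the normal approximation of U; the grade values, group sizes, and the omission of a tie correction are illustrative assumptions and do not reproduce the study data or the authors' analysis.

import numpy as np
from scipy import stats

# Placeholder grades (1 = best, 5 = worst); not the study data.
fra_grades = np.array([2, 1, 3, 2, 4, 2, 1, 3, 2, 5])
sim_grades = np.array([4, 5, 3, 4, 5, 4, 3, 5, 4, 4])

u_stat, p_value = stats.mannwhitneyu(fra_grades, sim_grades, alternative="two-sided")

# Normal approximation of U (no tie correction in this sketch) to obtain z,
# then Cohen's effect size r = |z| / sqrt(N).
n1, n2 = len(fra_grades), len(sim_grades)
mu_u = n1 * n2 / 2
sigma_u = np.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u_stat - mu_u) / sigma_u
r = abs(z) / np.sqrt(n1 + n2)

print(f"U = {u_stat:.1f}, p = {p_value:.4f}, r = {r:.2f}")

With the group sizes of this study (n = 30 per group), the reported U values (e.g., U = 187.5 for examiner VK) yield r of roughly 0.5 by this formula, in line with the effect sizes listed in Table 4.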
Table 3. Tabular representation of the absolute grade frequency, median, mean, and standard
deviation (SD) of the performance evaluations by the three examiners (VK = Preclinical, KL = Clinical,
and OC = Oral Surgery) for the respective participant groups FRA and SIM.
Table 4. A tabular representation of the rank statistics (mean rank and rank sum) for the grading of
group FRA and group SIM by all three examiners (VK = Preclinical, KL = Clinical, and OC = Oral
Surgery), including the calculated U-value and the associated p-value from the Mann–Whitney U-test
to assess significant differences in participant grading. The effect size (r) according to Cohen is
also indicated.

Group     n    Mean Rank (MRang)   Rank Sum   U       Exact p-Value   r
VK FRA    30   21.75               652.5      187.5   <0.0001         0.52
VK SIM    30   39.25               1178
KL FRA    30   22.7                681        216     0.0003          0.46
KL SIM    30   38.3                1149
OC FRA    30   22.55               676.5      211.5   0.0002          0.47
OC SIM    30   38.45               1154
Figure 3. Box plot diagram of the grades from the preparation exam evaluated by the three examiners
(VK = Preclinical Prosthetics, KL = Clinical Prosthetics, and OC = Oral Surgery) for participants in
groups FRA and SIM. The “+” symbol indicates the arithmetic mean of the grades. The horizontal red
line marks the passing threshold. Statistically significant differences between the respective groups
are visually represented by asterisks “*”, with significance levels indicated as p < 0.001 (“***”) and
p < 0.0001 (“****”).
3.2. Interrater Reliability
Each examiner independently and blindly evaluated a total of 60 preparations (FRA
group, n = 30; SIM group, n = 30). The preclinical evaluator (VK) gave a median score
of 4 (“sufficient”; Mean: 3.62 ± 1.26), while the clinical evaluator (KL) gave a median
score of 3 (“satisfactory”; Mean: 2.90 ± 1.50). The oral surgery evaluator assigned a
median score of 4 (“sufficient”; Mean: 3.60 ± 1.26), which was equivalent to that of the
VK evaluator (Table 5). For the given grade distribution, no normal distribution was
assumed. Therefore, the Friedman test was used to check for differences in grading. With
an assumed significance level of α = 5%, a significant difference (Friedman test statistic:
32.27; p < 0.0001) in grading among the three evaluators (VK mean rank = 2.25; KL mean
rank = 1.54; OC mean rank = 2.21) was found (Figure 4). The calculated Fleiss’ Kappa, a
measure of inter-rater agreement, showed a moderate agreement, with κ = 0.3 according to
Altman (Table 6). The subsequent post-hoc analysis (Dunn’s Multiple Comparison Test)
allowed targeted examination for statistically significant differences between the ratings
of two evaluators. A significantly better rating by the clinical prosthodontics evaluator
(KL) (mean rank = 1.54) compared to the preclinical prosthodontics evaluator (mean
rank = 2.25) and the oral surgery evaluator (mean rank = 2.21) was identified (z = 3.88;
p = 0.0003/z = −3.65; p = 0.0009). This represented a medium effect size, according to
Cohen, with r = 0.35 and r = 0.33, respectively. However, there were no statistically
significant differences between the grades given by the preclinical evaluator (VK) and the
oral surgery evaluator (OC) (z = 0.23; p > 0.9999) (Table 7).
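As a rough illustration of the interrater analysis described above, the following sketch runs a Friedman test across three raters who graded the same preparations and computes Fleiss' kappa as an agreement measure; the randomly generated grades are placeholders, and details of the authors' computation (e.g., tie correction and the Dunn post-hoc procedure) are not reproduced.

import numpy as np
from scipy import stats
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

rng = np.random.default_rng(0)
vk = rng.integers(1, 6, size=60)   # placeholder grades 1-5 from examiner VK
kl = rng.integers(1, 6, size=60)   # placeholder grades from examiner KL
oc = rng.integers(1, 6, size=60)   # placeholder grades from examiner OC

# Friedman test: the same 60 preparations rated under three conditions (examiners).
chi2, p_value = stats.friedmanchisquare(vk, kl, oc)
print(f"Friedman chi-square = {chi2:.2f}, p = {p_value:.4f}")

# Fleiss' kappa: rows = preparations, columns = raters; aggregate_raters converts
# this layout into per-category counts per preparation.
ratings = np.column_stack([vk, kl, oc])
counts, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa = {fleiss_kappa(counts):.2f}")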
Table 5. Presentation of the absolute grade distribution, median, mean, and standard deviation (SD)
of the performance assessments by the three examiners for all participants.
Figure 4. Box plot diagram illustrating the grade distributions for the preparation exam as evaluated
by the three examiners. The “+” symbol denotes the arithmetic mean of the grades for each examiner.
The horizontal red line represents the passing threshold. Statistically significant differences between
groups are indicated by asterisks “*”. The number of asterisks corresponds to the p-value significance
level: p < 0.001 (“***”).
Table 6. Presentation of the rank statistics (mean rank and rank sum) of the grades assigned by the
three examiners, along with the calculated test statistic corrected for ties and the associated p-value
from the Friedman test to assess significant differences in the grading of participants across all groups.
Fleiss’ Kappa (κ) is also included.

Examiner            n    Mean Rank (MRang)   Rank Sum
Preclinic (VK)      60   2.25                135
Clinic (KL)         60   1.54                92.5
Oral surgery (OC)   60   2.21                132.5
Friedman test (statistic corrected for ties): 32.27; asymptotic p-value: 0.0002; Fleiss’ Kappa (κ): 0.3
Table 7. Presentation of the test statistics and z-values calculated by Dunn’s Multiple Comparison Test
to evaluate significant differences in grading between the three examiners. Additionally, uncorrected
p-values and Bonferroni-corrected p-values, along with the effect size (r) according to Cohen for
significant differences, are provided.
3.3. Assessment
All study participants completed an evaluation form associated with their training
system. The analysis of the questionnaires showed statistically significant differences for
statements A-G. The corresponding evaluation results and score distribution, as well as
the statistical analysis, can be found in Table 8 and Figure 5. No normal distribution was
assumed for the given grade distributions, and the Mann–Whitney U test was used (α = 5%)
to check for statistical significance. The answers to statement “H” focused particularly
on technical improvements within the software; this open question was, logically, posed
exclusively to the SIM group participants. The responses and their frequencies are
summarized in Table 9.
Table 8. Tabular summary of the frequency distribution, median, mean, and standard deviation (SD)
for each statement (A to G) on the evaluation form, reported by participants in both the FRA and SIM
groups.
Figure 5. The horizontal red dashed line visualizes a “neutral” (grade 3) statement according to
the Likert-type scale. The “+” symbol denotes the arithmetic mean of the grades for each examiner.
Mean ± SEM were calculated and the Mann–Whitney U test was performed. “*” indicates significant
statistical differences between the groups. The number of asterisks corresponds to the p-value
significance level: p < 0.05 (“*”) and p < 0.0001 (“****”).
Table 9. Tabular summary of criticisms and suggestions for improvement from the SIM group,
including their absolute frequencies.
Participants in the Frasaco group rated the operation of the analog simulation unit
and the phantom head as more intuitive compared to the SIMtoCARE system (Statement
“A”; “agree” versus “neutral”; p = 0.0209) (Table 8 and Figure 5). At the same time, both
groups found the operation of their respective systems to be not complicated (Statement
“B”; “neutral”; p = 0.0214). Overall, participants from the FRA group more strongly felt
that their skills improved with their system than was the case in the SIM group (Statement
“C”) (p < 0.0001), with the FRA group giving a median score of 1 (“fully agree”). The SIM
group, however, gave a slightly lower score of “2” (“agree”). Interestingly, this contrasts
somewhat with the ratings for statement “D”. Participants in the FRA group agreed that
they wanted to continue practicing with the analog phantom head, while participants in
the SIM group rejected further training with the virtual simulator (“fully agree” versus
“disagree”; p < 0.0001). This opinion was reflected in the responses to statement “E”. Here,
FRA group participants felt better prepared for the final preparation test by using the analog
phantom head (“strongly agree”), compared to the SIM group participants who used the
SIMtoCARE system (“neutral”) (p < 0.0001). Additionally, the confidence to prepare on a
real patient was higher in the FRA group than in the SIM group (Statement “F”) (“agree”
versus “strongly disagree”; p < 0.0001). Overall, both training systems were positively rated
by the participants (Statement “G”). The analog phantom head received a median score of
“2” (“good”), while the SIMtoCARE system received a median score of “3” (“satisfactory”;
p < 0.0001).
4. Discussion
The education of dental students is a complex process that requires not only the acqui-
sition of theoretical knowledge but also fine motor skills and a high level of hand–foot–eye
coordination. Before invasive clinical procedures can be performed on patients, it is impor-
tant to achieve sufficient competence through intensive preclinical practice [12,13].
Traditionally, these skills are trained and learned using phantom heads and plastic models.
Feedback is provided by the faculty, which is often described as a very subjective assess-
ment of performance [14], leading to dissatisfaction among students [15]. To address this
issue, digital evaluation systems (e.g., PrepCheck, Dental Teacher, and Romexis Compare)
have been introduced in the context of digitalizing dental education. These systems overlay
and compare students’ preparations with instructors’ master preparations, providing objec-
tive feedback [16]. It is even recommended to use these systems for calibrating instructors
and as an adjunct in assessing preparation exams [7]. In a previous study by the authors,
the effectiveness of these systems was investigated [7]. The results showed no significant
difference in the performance of students who received faculty feedback versus feedback
from the preparation assistants. However, students generally positively perceived the use
of these digital systems. The primary function of these systems lies in generating free
practice time, during which no feedback from a tutor or instructor is necessary, allowing
students to learn and improve their skills autonomously [3]. This reduces the workload
and creates more practice time in a phase where it is urgently needed [2].
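The overlay-and-compare principle behind such evaluation software can be illustrated with a simple surface-deviation computation: for every point of the scanned student preparation, the distance to the nearest point of the master preparation is measured and summarized. The sketch below assumes two already-registered point clouds as NumPy arrays; it is a generic illustration, not the algorithm used by PrepCheck, Dental Teacher, or Romexis Compare.

import numpy as np
from scipy.spatial import cKDTree

# Placeholder point clouds in mm; a real workflow would first register the scans.
rng = np.random.default_rng(42)
master_points = rng.random((5000, 3))
student_points = master_points + rng.normal(0, 0.05, size=master_points.shape)

# For each point of the student preparation, find the closest master-preparation point.
tree = cKDTree(master_points)
distances, _ = tree.query(student_points)

print(f"Mean deviation: {distances.mean():.3f} mm, "
      f"95th percentile: {np.percentile(distances, 95):.3f} mm")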
As the next step in the digitalization of preclinical dental education, haptic virtual
reality simulators (HVRS) are increasingly being adopted [17]. Advances in haptic tech-
nology have changed the perception of virtual reality (VR) and particularly of HVRS. The
haptic experience in the virtual environment (VE) represents a bidirectional flow of infor-
mation between the user and virtual reality, transmitting forces, vibrations, or movements
to the user through a haptic interface and thus simulating tactile and kinesthetic sensa-
tions [18,19]. HVRS fundamentally change the interaction with virtual objects by providing
a realistic sense of touch and feel [20], which is crucial for learning fine dental skills. The
simulation experience is almost entirely transferred to the virtual world, eliminating the
need for plastic teeth or real handpieces. Research in this field has intensified in recent
years, demonstrating the supportive benefits of these systems in training for dental restora-
tive procedures [21–24]. Consequently, more dental schools are implementing these HVR
systems [25,26].
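How a haptic interface can turn virtual contact into a perceivable force can be sketched with a simple penalty-based model, in which the device pushes back with a force proportional to how deeply the virtual bur penetrates the tooth surface. The stiffness value and the spherical tooth geometry below are arbitrary illustrative assumptions and do not describe the force rendering of any particular simulator.

import numpy as np

STIFFNESS_N_PER_MM = 0.5                     # assumed virtual surface stiffness
TOOTH_CENTER = np.array([0.0, 0.0, 0.0])
TOOTH_RADIUS_MM = 5.0                        # tooth approximated as a sphere

def feedback_force(tool_tip_mm: np.ndarray) -> np.ndarray:
    """Force vector (N) pushing the tool back out of the virtual surface."""
    offset = tool_tip_mm - TOOTH_CENTER
    distance = np.linalg.norm(offset)
    penetration = TOOTH_RADIUS_MM - distance
    if penetration <= 0:                     # tool outside the tooth: no contact force
        return np.zeros(3)
    normal = offset / distance               # outward surface normal at the contact point
    return STIFFNESS_N_PER_MM * penetration * normal

# Example: bur tip 1 mm below the surface along the x-axis -> about 0.5 N outward.
print(feedback_force(np.array([4.0, 0.0, 0.0])))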
The present study aimed to investigate the effectiveness of virtual simulators (HVRS)
for learning dental preparation (Prosthodontics) during the preclinical phase of dental
education, in comparison to traditional learning using analog phantom heads. For this
purpose, the preparation of tooth 36 for a full-cast crown was performed at an analog
phantom workstation and on the Dente (SIMtoCARE) during a free practice period. A
preparation exam was then conducted to determine the students’ progress.
The performance of students in the FRA group (analog simulation workstation)
was significantly better than that of the SIM group (Dente, SIMtoCARE) (Figure 3). All
three examiners rated the preparations of the FRA group two grades higher on average
(VK: 2 vs. 4.5; KL: 2 vs. 4; OC: 3 vs. 5). Looking at the absolute grade distribution (Table 3),
50% and 53% of the participants in the SIM group received lower grades from examiners
VK and OC, respectively, compared to the analog FRA group, and would have even failed
the preparation exam. The rating from examiner KL was slightly better, but still showed
a high failure rate of 37%. In contrast, the analog FRA group received significantly better
ratings. The grade distribution reversed in this group, with the examiners awarding a grade
of “2” to 50% (VK), 40% (KL), and 37% (OC), attesting to good performance. Ultimately,
33% even achieved a grade of “very good” (“1”) from examiner KL. Based on this data, the
“Dente” (SIMtoCARE) alone does not seem suitable for preparing inexperienced students
for a preparation exam on the phantom.
In a similar study, Arora et al. [27] also concluded that practice on an analog phantom
head leads to better results than preparation exercises on HVRS. That study used the
Virteasy Simulator (Virteasy Dental© , Changé, France) in order to determine its equivalence
to the phantom head. However, the participants did not have to take a final exam on the
phantom; instead, four preparations were performed and evaluated sequentially. Only in-
experienced second-year students were included, who prepared either only on the Virteasy
or only on the phantom head. Although the initial preparations showed a slight difference
favoring conventional preparation, it was not statistically significant. Interestingly, this
changed significantly for the third and fourth preparations. Participants in the virtual
preparation group, by contrast, scored approximately 50/100 points, while the analog
group scored around 80/100 points. This is likely due to the last two preparations being
performed on more complex premolars and molars. Although these results suggest that
HVRS may result in poorer performance in crown preparation training, Arora et al. [27]
confirm the higher effectiveness of virtual simulators. They base this conclusion on the
improved scores observed over time with the use of the simulator. This is also confirmed in
a systematic review by Moussa et al. [24], who report improvement in crown preparation
over time after using HVRS with or without instructor feedback. It appears that HVRS
is more effective for experienced users. This is further evidenced in the study by Wang
et al. [28], which investigated the use of HVRS for metal–ceramic crown preparation. Their
study used the Simodont dental trainer (MOOG© , Nieuw-Vennep, The Netherlands). Both
novices and residents performed preparations, which were then evaluated. Additionally,
the required time and errors in the preparations (i.e., occlusal reduction, undercuts, and
damage to neighboring teeth) were recorded. The results showed that novices took signif-
icantly more time, received poorer evaluations, and made more errors. The detection of
these differences demonstrates the validity of the system. Both groups found working with
the simulator very positive and even preferred virtual training to preparing on plastic teeth.
Murbay et al. [29] also examined the Simodont dental trainer (MOOG© , Nieuw-Vennep,
The Netherlands) and showed that students who entered the study with the Simodont
were able to perform far more satisfactory preparations (in multiple areas), compared to
the group that did not use the simulator, despite having the same level of competence.
Philipp et al. [22] showed opposing results in their study, which had a similar design
to ours and also used the Dente from SIMtoCARE (Vreeland, The Netherlands). They found
no significant difference between the analog (control group) and virtual (test group) groups.
However, the participants were tasked with performing an access cavity for a pulpotomy,
which, although an invasive procedure, is performed on a much smaller area. Therefore,
the complexity of the exercise is lower than the complete preparation of a molar with
neighboring teeth. Furthermore, the participants had already gained preparation experience
on the analog simulator, so unlike in the present study (completely inexperienced students),
the handling of the phantom head for the final exam was familiar and simplified. The
small number of participants (n = 14) also limits its statistical power. Despite the positive
results, Philipp et al. [22] see HVRS more as a bridge between preclinical training on the
phantom and real clinical patients. The authors of the present study share this view and
currently see HVRS as a tool complementary to conventional training. Both approaches
have advantages and disadvantages and can complement each other in their capabilities.
The literature review by Imran et al. [30] and the systematic reviews by Moussa et al. [24]
and Koolivand et al. [31] confirm the positive effect of HVRS and the good conveyance
of psychomotor skills in managing clinical situations. Moussa et al. [24] emphasize the
importance of force-feedback-based 3D simulators compared to 2D-based systems without
force-feedback. They also recommend a combination of instructor feedback and device
feedback. Imran et al. [30] highlight the reduced cost factor, as no technical requirements
for the building (e.g., water supply, drainage, and suction) are necessary, and there is
no need for regular maintenance or buying plastic teeth. However, it should be noted
that HVRS represents sensitive, highly developed technology that can also malfunction.
Corresponding repairs could be costly and result in the system being unavailable. These
costs could be reduced in the future if HVRS finds wider application.
However, even in the future, the analog phantom head will remain the gold standard
for preparation exams. This is because the simulation of hard tissues, the ergonomics of
working with a simulated patient, the use of a real handpiece with variable speed and actual
water cooling, and the operation of a real treatment unit cannot currently be adequately
represented virtually. Additionally, if a shift to purely virtual training were to occur in the
future, there would still need to be a transition from purely virtual treatment training to real
patient care. The student must then be able to perform a sufficient preparation that does not
endanger the patient, regardless of how well they performed previously in the virtual space
or with virtual patients. For this reason, only a combination of analog and virtual simulation
can currently be recommended, as the student must familiarize themselves with and adapt
to a real treatment environment (the tangible analog phantom patient) at some point, at
the latest, just before treating real patients. In this context, additional aspects become
apparent that are missing in virtual simulation (Dente, SIMtoCARE). Excessive pressure,
insufficient water cooling, and blunt or overly fine instruments (diamonds) ultimately lead
to overheating of the prepared tooth and pose a risk to its vitality. An analog plastic tooth
reacts with charring and the corresponding smell, providing the student with feedback on
this fatal error.
The strength of virtual simulation, on the other hand, lies fundamentally in its ability
to simulate a wide variety of dental procedures with a single system. Considering the
standard portfolio of the Dente (SIMtoCARE), for example, it offers manual dexterity
exercises, restorative procedures such as caries excavation, access cavity preparation, and
pocket depth measurement. Additionally, prosthetic preparation exercises, implantations,
and the administration of block anesthesia are available. Since new cases can theoreti-
cally be added as needed, the possibilities are endless. Particularly in the area of caries
excavation, the practice cases can be designed as interactive exercises in which students
receive direct feedback from the software as to whether all the caries has been removed or if
residual caries remains. It even indicates how much healthy tooth substance was damaged
during the treatment. Compared to the analog phantom, such a situation is generally
difficult to simulate, and if attempted (e.g., using a model with extracted teeth), it requires
supervision by a supervising dentist. This highlights the self-learning aspect of virtual
simulation [32–34]. Jasinevicius et al. [35] demonstrated that students who completed their
preclinical training with a virtual simulator required only 20% of the time that an instructor
needed for personal instruction. It is worth mentioning that virtual systems can import real
individual patient cases following an intraoral scan by, for example, importing the data in
PLY format, and then voxelizing them. This allows for the repeated practice of this individ-
ual situation until a safe and successful approach for future treatment can be implemented.
This capability presents an intriguing option, especially in clinical dental education.
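A minimal sketch of this import step, assuming the open-source trimesh library and a hypothetical file name, shows how a PLY export of an intraoral scan could be loaded and converted into a voxel grid; the actual import pipeline of the Dente (SIMtoCARE) is not documented here and will differ in detail.

import trimesh

mesh = trimesh.load("patient_scan.ply")    # hypothetical PLY export of an intraoral scan
voxels = mesh.voxelized(pitch=0.25)        # voxel edge length in the mesh's units (e.g., mm)
occupancy = voxels.matrix                  # dense boolean occupancy grid

print(f"Voxel grid shape: {occupancy.shape}, filled voxels: {int(occupancy.sum())}")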
The authors believe that a practice scenario needs to be created in the sense of a
“guided preparation” that shows students a path to safely perform a preparation. This
principle is applied in other fields, such as aviation, particularly with respect to landing.
Pilots receive a predetermined “glideslope” through the Instrument Landing System (ILS),
which ensures that the runway is safely reached and the landing is successful. They receive
constant feedback through the “localizer” about height and distance to the runway and
whether the aircraft is, for instance, too far to the right or left of the runway. Analogous
to this principle, a tooth to be prepared could consist of multiple colored layers. When
the student begins to prepare, they would first expose the tolerance range. Then, they
would see the color of the ideal preparation and can orient themselves to it until they
expose the color of the master preparation. If they grind too deeply, an alert color (e.g.,
red) might indicate excessive substance removal. This way, direct feedback is given during
the preparation process, no human supervision is required, and the whole exercise has a
playful aspect that may enhance the user’s ambition.
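To make the proposed color-layer feedback concrete, the following sketch maps the local cutting depth to a feedback color, with thresholds for the tolerance range, the master-preparation depth, and excessive removal; all thresholds, color names, and the function itself are hypothetical illustrations of this proposal, not features of an existing simulator.

def guidance_color(depth_mm: float,
                   tolerance_depth: float = 0.8,
                   master_depth: float = 1.2,
                   max_depth: float = 1.5) -> str:
    """Return a feedback color for the current cutting depth at one point."""
    if depth_mm < tolerance_depth:
        return "white"    # above the tolerance range: keep reducing
    if depth_mm < master_depth:
        return "yellow"   # inside the tolerance range: approaching the target
    if depth_mm <= max_depth:
        return "green"    # master-preparation depth exposed: ideal reduction
    return "red"          # excessive substance removal

# Example: feedback while a student gradually deepens the preparation at one point.
for depth in (0.4, 1.0, 1.3, 1.8):
    print(f"{depth:.1f} mm -> {guidance_color(depth)}")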
The evaluation of the feedback forms highlights the differences between the two
systems under investigation. Participants’ subjective perceptions of their training methods
were captured using seven statements to be rated (A to G). Additionally, suggestions for
improvement could be provided in an optional free-text format (Statement H).
Initially, participants working with the Dente (SIMtoCARE) found their system to be less
intuitive compared to those in the Frasaco group (Statement A). This is further
supported by the results for statement B, “The operation of the Dente or phantom head
was complicated,” which showed a significantly better rating from participants in the FRA
group compared to those in the SIM group. Despite the difference in ratings, the grade
distribution indicates that the perceived complexity of the Dente is not clear-cut. While the
Dente might initially appear less intuitive and more complicated compared to the phantom
head, participants did not perceive it as opaque or particularly complex.
Additionally, participants in the FRA group reported a greater improvement in their
practical skills, compared to responses from those in the SIM group (Statement C). However,
a closer look at the ratings reveals that despite these differences, there is an overall positive
development (60% positive, 27% neutral, 13% negative) in the practical skills of the SIM
group participants as perceived subjectively.
Nonetheless, 76% of the SIM group participants felt that even after adequate practice
with the system, they were not capable of safely preparing a tooth on a patient (Statement
F). In contrast, 70% of the FRA group participants felt confident they could successfully
perform this task in the future. The starkly negative response to Statement D, “I would
like to continue practicing with the system used here in the phantom course”, by the
SIM group (56% rated it 4 or 5, “disagree/strongly disagree”) compared to the positive
response by the FRA group (97% rated it 1 or 2, “strongly agree/agree”) underscores the
differing perspectives and attitudes of the two groups regarding the skills acquired and to
be acquired, as well as their confidence in using the system. This is further emphasized by
the final rating of each system used (Statement G), in which the Dente was rated lower on
average (score “3”) compared to the phantom head (score “2”), albeit not as decisively.
Zafar et al. [11] reached similar conclusions in their study. They investigated the
subjective perceptions of students who performed pulpotomies and crown preparations
using both an HVRS system (Simodont, Nissin Dental© , Kyoto, Japan) and a conventional
dental model. In total, 56% of participants agreed that the HVRS simulator improved their
understanding of pulpotomies and crown preparations. However, participants generally
felt more comfortable using the phantom head with a dental model, compared to the HVRS
simulator. This could be attributed to the participants’ prior familiarity with the analog
system, leading to a bias against the HVRS.
The fundamentally poorer evaluation of the SIMtoCARE system by the students in
this study can be attributed to various factors (Statement H). Primarily, participants felt
that the system did not yet meet the gold standard of the phantom head, which remains
the benchmark for examination situations. The virtual dental models could be rotated
360°, allowing users perspectives that did not correspond to the real situation. The lack
of representation of the opposing jaw and the cheeks during preparation, as well as the
ability to zoom in excessively, contributed to the perception that the range of motion of the
handpiece and the visibility of details were less realistic.
Interestingly, Arora et al. [27] view these aspects as advantages of the virtual system,
suggesting that they even contribute to improving students’ skills.
Additionally, some students found the digital lighting and shadow effects unrealistic,
conditions which could distort a realistic working view. Moreover, participants noted
several issues with the SIMtoCARE system’s application. Initially, the rotation speed of
the digital handpiece was fixed at 60,000 RPM, which made finishing the virtual teeth
impossible. A finished surface and precise preparation margin are crucial for a seamless
transition from tooth to crown. Additionally, during system use, there were technical
complications: either the screen froze, or there were deficits in the integrity (“holes”) of the
virtual teeth. In both cases, a system restart was required without saving progress.
5. Conclusions
Considering the limitations of this study, it can be concluded that haptic virtual reality
simulators (HVRS) cannot currently train inexperienced students for a preparation exam as
effectively as the traditional phantom head. Training on the virtual
simulator resulted in significantly lower scores on the preparation exam. Inexperienced
students prefer working with the analog phantom head, even though they initially find the
use of the HVRS manageable and subjectively perceive an improvement in their perfor-
mance. From the authors’ perspective, a combination of both systems is necessary for future
dental education to encompass both virtual and real-world training. For advanced students
who wish to refine their skills in a more intensively simulated environment or engage in
individual practice with patient cases, HVRS offers an efficient and cost-effective option.
Author Contributions: Conceptualization, M.S. and D.K.; methodology, M.S., D.K. and L.S.; software,
F.S. and L.S.; validation, H.S., N.E. and D.K.; formal analysis, M.S.; investigation, L.S. and F.S.;
resources, H.S.; data curation, N.E. and D.K.; writing—original draft preparation, M.S. and L.S.;
writing—review and editing, M.S. and D.K.; visualization, F.S.; supervision, M.S.; project administra-
tion, M.S. and D.K. All authors have read and agreed to the published version of the manuscript.
Funding: This research received no external funding.
Institutional Review Board Statement: Ethical approval was obtained from the Ethics Committee of
the medical faculty of the University of Bonn (number 159/20).
Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.
Data Availability Statement: The original contributions presented in the study are included in
the article.
Conflicts of Interest: The authors declare no conflicts of interest.
References
1. Cardoso, J.A.; Barbosa, C.; Fernandes, S.; Silva, C.L.; Pinho, A. Reducing subjectivity in the evaluation of pre-clinical dental
preparations for fixed prosthodontics using the Kavo PrepAssistant. Eur. J. Dent. Educ. 2006, 10, 149–156. [CrossRef]
2. Graham, R.; Zubiaurre Bitzer, L.A.; Anderson, O.R. Reliability and predictive validity of a comprehensive preclinical OSCE in
dental education. J. Dent. Educ. 2013, 77, 161–167. [CrossRef]
3. Park, C.F.; Sheinbaum, J.M.; Tamada, Y.; Chandiramani, R.; Lian, L.; Lee, C.; Da Silva, J.; Ishikawa-Nagai, S. Dental Students’
Perceptions of Digital Assessment Software for Preclinical Tooth Preparation Exercises. J. Dent. Educ. 2017, 81, 597–603. [CrossRef]
4. Hamil, L.M.; Mennito, A.S.; Renne, W.G.; Vuthiganon, J. Dental students’ opinions of preparation assessment with E4D compare
software versus traditional methods. J. Dent. Educ. 2014, 78, 1424–1431. [CrossRef] [PubMed]
5. Nagy, Z.A.; Simon, B.; Toth, Z.; Vag, J. Evaluating the efficiency of the Dental Teacher system as a digital preclinical teaching tool.
Eur. J. Dent. Educ. 2018, 22, e619–e623. [CrossRef]
6. Rosenberg, H.; Grad, H.A.; Matear, D.W. The effectiveness of computer-aided, self-instructional programs in dental education: A
systematic review of the literature. J. Dent. Educ. 2003, 67, 524–532. [CrossRef]
7. Stoilov, M.; Trebess, L.; Klemmer, M.; Stark, H.; Enkling, N.; Kraus, D. Comparison of Digital Self-Assessment Systems and
Faculty Feedback for Tooth Preparation in a Preclinical Simulation. Int. J. Environ. Res. Public Health 2021, 18, 13218. [CrossRef]
8. San Diego, J.P.; Newton, T.J.; Sagoo, A.K.; Aston, T.A.; Banerjee, A.; Quinn, B.F.A.; Cox, M.J. Learning Clinical Skills Using Haptic
vs. Phantom Head Dental Chair Simulators in Removal of Artificial Caries: Cluster-Randomized Trials with Two Cohorts’ Cavity
Preparation. Dent. J. 2022, 10, 198. [CrossRef]
9. Plessas, A. Computerized Virtual Reality Simulation in Preclinical Dentistry: Can a Computerized Simulator Replace the
Conventional Phantom Heads and Human Instruction? Simul. Healthc. 2017, 12, 332–338. [CrossRef]
10. Roy, E.; Bakr, M.M.; George, R. The need for virtual reality simulators in dental education: A review. Saudi Dent. J. 2017, 29, 41–47.
[CrossRef]
11. Zafar, S.; Zachar, J.J. Evaluation of HoloHuman augmented reality application as a novel educational tool in dentistry. Eur. J.
Dent. Educ. 2020, 24, 259–265. [CrossRef]
12. Daniel, C.; Kevin, B.; Wyatt, F. Simulation with Preclinical Operative Dentistry courses—3-year retrospective results. J. Dent.
Educ. 2000, 64, 224.
13. Suvinen, T.I.; Messer, L.B.; Franco, E. Clinical simulation in teaching preclinical dentistry. Eur. J. Dent. Educ. 1998, 2, 25–32.
[CrossRef]
14. Wolgin, M.; Grabowski, S.; Elhadad, S.; Frank, W.; Kielbassa, A.M. Comparison of a prepCheck-supported self-assessment
concept with conventional faculty supervision in a pre-clinical simulation environment. Eur. J. Dent. Educ. 2018, 22, e522–e529.
[CrossRef] [PubMed]
15. Sharaf, A.A.; AbdelAziz, A.M.; El Meligy, O.A. Intra- and inter-examiner variability in evaluating preclinical pediatric dentistry
operative procedures. J. Dent. Educ. 2007, 71, 540–544. [CrossRef] [PubMed]
16. Zou, H.; Jin, S.; Sun, J.; Dai, Y. A Cavity Preparation Evaluation System in the Skill Assessment of Dental Students. J. Dent. Educ.
2016, 80, 930–937. [CrossRef]
17. Dutã, M.; Amariei, C.I.; Bogdan, C.M.; Popovici, D.M.; Ionescu, N.; Nuca, C.I. An overview of virtual and augmented reality in
dental education. Oral. Health Dent. Manag. 2011, 11, 42–49.
18. Mihelj, M.; Podobnik, J. Haptics for Virtual Reality and Teleoperation. In Intelligent Systems, Control and Automation: Science and
Engineering; Springer: Dordrecht, The Netherlands, 2012; pp. 1–39.
19. Robles-De-La-Torre, G. Virtual Reality: Touch/haptics. In Encyclopedia of Perception; Goldstein, B., Ed.; Sage Publications:
Thousand Oaks, CA, USA, 2009; pp. 1036–1038.
20. Gottlieb, R.; Vervoorn, J.M.; Buchanan, J. Simulation in Dentistry and Oral Health. In The Comprehensive Textbook of Healthcare
Simulation; Levine, A.I., DeMaria, S., Schwartz, A.D., Sim, A.J., Eds.; Springer: New York, NY, USA, 2013; pp. 329–340.
21. Farag, A.; Hashem, D. Impact of the Haptic Virtual Reality Simulator on Dental Students’ Psychomotor Skills in Preclinical
Operative Dentistry. Clin. Pract. 2021, 12, 17–26. [CrossRef]
22. Philip, N.; Ali, K.; Duggal, M.; Daas, H.; Nazzal, H. Effectiveness and Student Perceptions of Haptic Virtual Reality Simulation
Training as an Instructional Tool in Pre-Clinical Paediatric Dentistry: A Pilot Pedagogical Study. Int. J. Environ. Res. Public Health
2023, 20, 4226. [CrossRef]
23. Daud, A.; Matoug-Elwerfelli, M.; Daas, H.; Zahra, D.; Ali, K. Enhancing learning experiences in pre-clinical restorative dentistry:
The impact of virtual reality haptic simulators. BMC Med. Educ. 2023, 23, 948. [CrossRef]
24. Moussa, R.; Alghazaly, A.; Althagafi, N.; Eshky, R.; Borzangy, S. Effectiveness of Virtual Reality and Interactive Simulators on
Dental Education Outcomes: Systematic Review. Eur. J. Dent. 2022, 16, 14–31. [CrossRef]
25. Eaton, K.A.; Reynolds, P.A.; Grayden, S.K.; Wilson, N.H. A vision of dental education in the third millennium. Br. Dent. J. 2008,
205, 261–271. [CrossRef] [PubMed]
26. Vervoorn, J.W.; Wesselink, P.R.; Cox, M.J.; Quinn, B.; Shahriari-Rad, A. Evidence from the dental education communities: Review
of the field. In Proceedings of the International Association for Dental Research Conference, Boston, MA, USA, 9–11 March 2015;
pp. 1–11.
27. Arora, O.; Sivaswamy, V.; Ahmed, N.; Ganapathy, D. Effectiveness of digital visualization in teaching crown preparation to
predoctoral dental students—A Pilot study. J. Popul. Ther. Clin. Pharmacol. 2023, 30, 168–173. [CrossRef]
28. Wang, F.; Liu, Y.; Tian, M.; Zhang, Y.; Zhang, S.; Chen, J. Application of a 3D Haptic Virtual Reality Simulation System for Dental
Crown Preparation Training. In Proceedings of the 8th International Conference on Information Technology in Medicine and
Education (ITME), Fuzhou, China, 23–25 December 2016; pp. 424–427.
29. Murbay, S.; Neelakantan, P.; Chang, J.W.W.; Yeung, S. Evaluation of the introduction of a dental virtual simulator on the
performance of undergraduate dental students in the pre-clinical operative dentistry course. Eur. J. Dent. Educ. 2020, 24, 5–16.
[CrossRef]
30. Imran, E.; Adanir, N.; Khurshid, Z. Significance of Haptic and Virtual Reality Simulation (VRS) in the Dental Education: A Review
of Literature. Appl. Sci. 2021, 11, 10196. [CrossRef]
31. Koolivand, H.; Shooreshi, M.M.; Safari-Faramani, R.; Borji, M.; Mansoory, M.S.; Moradpoor, H.; Bahrami, M.; Azizi, S.M.
Comparison of the effectiveness of virtual reality-based education and conventional teaching methods in dental education: A
systematic review. BMC Med. Educ. 2024, 24, 8. [CrossRef]
32. Clark, G.T.; Suri, A.; Enciso, R. Autonomous virtual patients in dentistry: System accuracy and expert versus novice comparison.
J. Dent. Educ. 2012, 76, 1365–1370. [CrossRef] [PubMed]
33. Seifert, L.B.; Socolan, O.; Sader, R.; Russeler, M.; Sterz, J. Virtual patients versus small-group teaching in the training of oral and
maxillofacial surgery: A randomized controlled trial. BMC Med. Educ. 2019, 19, 454. [CrossRef]
34. Mardani, M.; Cheraghian, S.; Naeeni, S.K.; Zarifsanaiey, N. Effectiveness of virtual patients in teaching clinical decision-making
skills to dental students. J. Dent. Educ. 2020, 84, 615–623. [CrossRef]
35. Jasinevicius, T.R.; Landers, M.; Nelson, S.; Urbankova, A. An evaluation of two dental simulation systems: Virtual reality versus
contemporary non-computer-assisted. J. Dent. Educ. 2004, 68, 1151–1162. [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual
author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to
people or property resulting from any ideas, methods, instructions or products referred to in the content.