Exploring Impairment Simulation in Virtual Reality: A Study on Cataract in the
Non-Dominant Eye

ANONYMOUS AUTHOR(S)
Visual impairments, such as cataracts, pose a significant global challenge, with many cases preventable or unaddressed due to limited eye care understanding. In this work, we investigate cataracts by creating a virtual reality simulation, exploring its impact on a visual search task, and examining the correlations between eye movement features and the severity of the simulation. Unlike previous work, we focus on simulating cataract progression over time, eye tracking, and movement analysis, leveraging realistic scenes to preserve contextual cueing. We found no effect on search time or eye movements while simulating the progression of one year in the non-dominant eye, and 60% of our participants did not perceive the progression during the experiment. Our findings indicate that the dominant eye compensates for one year of progression. For future work, exclusively simulating in the non-dominant eye may not be sufficient for evaluating accessibility systems, increasing empathy, or related applications.
CCS Concepts: • Human-centered computing → Human computer interaction (HCI).

Additional Key Words and Phrases: human computer interaction

ACM Reference Format:
Anonymous Author(s). 2018. Exploring Impairment Simulation in Virtual Reality: A Study on Cataract in the Non-Dominant Eye. In . ACM, New York, NY, USA, 17 pages. https://doi.org/XXXXXXX.XXXXXXX
1 INTRODUCTION
Visual impairments become more prominent amongst the human population with every passing year. As most visual impairments are age-related and average life expectancy keeps increasing, the number of individuals with visual impairments continues to grow: an estimated 2.2 billion people worldwide were affected by a visual impairment in 2019 [37], of which 1 billion cases could have been prevented or have yet to be addressed [4]. Cataracts are one of these visual impairments, with 65.2 million affected in 2019 [37]. Cataracts are the leading cause of blindness and the second most common visual impairment after refractive errors. In contrast to refractive errors, cataracts require surgery for treatment once diagnosed. However, as the progression of cataracts happens slowly, individuals might only realize they experience the effects of cataracts at a far progressed stage.
In previous work, it has been well established that simulations can be used to investigate the effect of visual impairments on task performance [14, 34, 36] and eye movements [10, 11]. Some of the work with simulated visual impairments uses these simulations to evaluate assistive functionalities, e.g., Sipatchin et al. [30]. Krösl et al. [19] investigated how to accurately represent cataracts in augmented reality (AR), and Krösl et al. [18] investigated the ability of a virtual reality (VR) simulation of cataracts to create understanding and nurture empathy towards those who live with cataracts. Simulating visual impairments is therefore crucial, not only to facilitate accessibility research but also to investigate the impact these visual impairments have on day-to-day activities.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
© 2018 ACM.
Manuscript submitted to ACM
Conference acronym ’XX, June 03–05, 2018, Woodstock, NY Anon.
In this work, we implemented a detailed visual impairment pipeline encompassing five parameters that can individually be regulated: visual acuity, contrast, color shift, peripheral vision loss, and sensitivity to light. We then fine-tuned these parameters to be representative of cataracts by interviewing individuals who have cataracts or recently had cataract surgery. This allowed us to accurately simulate different severities of cataracts and recreate the progression over time. Published work to date has mostly focused on creating understanding for those who live with the visual impairment, not only by enabling those without the visual impairment to experience what it is like to live with it, but also by replicating everyday scenarios to investigate their accessibility (e.g., the emergency sign [18]). In this work, we investigate the effect of simulated VR cataracts on eye movements during a visual search task for household items in realistic scenes that preserve contextual cueing.
We built a cataract simulation in virtual reality that accurately represents cataract progression and allows for precise manipulation, validating the simulation through interviews with those who experience visual impairments. Following this, we found that when applying one year of simulated progression in a half-hour user study in realistic virtual rooms, participants were mostly unaware of the visual impairment. This finding was supported by the eye tracking data, which provided strong evidence that their eye movements were the same as those of participants who did not experience the simulated visual impairment. These findings articulate the importance of using realistic virtual rooms for contextual cueing, and that when investigating visual impairments through simulations, exclusively simulating the visual impairment in the non-dominant eye might not be sufficient.
2 RELATED WORK
First, we provide a short overview of visual impairments and cataracts and how these visual impairments affect everyday activities. Next, we provide insight into different simulations of visual impairments.
2.1 Visual Impairment
The WHO estimates that about 2.2 billion people worldwide are affected by vision impairments. These impairments include presbyopia (1.8 billion) and other refractive errors such as myopia or hypermetropia (123.7 million), cataracts (65.2 million), age-related macular degeneration (AMD) (10.4 million), glaucoma (6.9 million), corneal opacities (4.2 million), diabetic retinopathy (3 million), trachoma (2 million), and other eye diseases [37]. Some of these impairments, such as refractive errors, are encountered almost every day and are in most cases easy to correct with glasses or contact lenses. Other eye diseases, such as AMD, permanently impact visual function and can lead to central vision loss¹. While highly treatable, cataracts represent one of the leading causes of visual impairments (33%) after refractive errors (43%) and are, with 51%, the leading cause of blindness [27].
2.2 Cataract
Cataracts manifest as opacities in the eye's lens, which occlude part of the visual field and can lead to vision loss when left untreated. Depending on their characteristics and the affected region of the lens, cataracts are categorized as nuclear, cortical, or subcapsular cataracts. These all manifest differently; for example, nuclear cataracts manifest as a ubiquitous yellow clouding in one's vision with increased straylight ("glare") [25]. Cortical cataracts appear as dot-like opacities, peripheral radial opacities, or spoked opacities near the periphery [25]. Subcapsular cataracts are caused by defective fiber production in the lens and result in opacities forming at the posterior pole of the lens, resulting in dark shadows in the center of one's vision [25]. Although these types are categorized differently, they can develop to different extents within the same eye.

¹https://www.nei.nih.gov/learn-about-eye-health/eye-conditions-and-diseases, accessed 2024-04-12.
The effect of lens opacities on vision depends on their location and one's pupil size. Only opacities within the pupillary zone are likely to affect vision when the pupil diameter is small in daylight. When the ambient light is further reduced and the pupil diameter becomes larger, vision is further affected as an increasing amount of straylight (light that is scattered by parts of the eye due to optical imperfections like particles in a clouded lens [33]) falls on the retina. Intraocular straylight correlates better with cataract severity than visual acuity and contrast sensitivity and is worst in mixed cataracts [26].
2.3 Vision Impairment Simulations
Much work has been done to simulate visual impairments across multiple modalities. Krösl et al. [18] investigated the maximum recognition distance threshold using a cataract simulation; their pipeline forms the basis of our simulation of cataracts.
Wood et al. [35] and Almutleb and Hassan [1] simulated typical effects of visual impairments such as cataracts or glaucoma using a set of modified goggles or contacts. In terms of VR, Lewis et al. [22] and Lewis and Shires [23] presented a series of predefined simulations for certain visual impairments where the severity of the simulation was fixed, and Lang et al. [21] showed a gaze-contingent simulation of central field loss. Their goal was not to conduct a user study or evaluate designs but to increase the population's understanding of the effects of vision impairments. Sipatchin et al. [30] provided a more complex simulation of vision impairments by using a scotoma texture. Kwon et al. [20] combined different image-processing effects to mimic particular eye disease patterns on 2D images; however, since this was not in virtual reality and used no eye tracking, it was not gaze-contingent.
Vision impairment simulations have also been used for accessibility inspection: Ates et al. [2] simulated vision impairments in AR based on photos from the National Eye Institute (NEI). Using Unity3D and standard effects provided by the game engine, David et al. [7] targeted their research towards giving designers an idea of the challenges those with vision impairments often face. Recently, Jones et al. [15] showed that these types of simulations can serve as a tool for accessibility evaluations. However, none of this research considers progression over time while having complete control over the simulation, relying instead on engine-provided effects or lacking gaze contingency.
3 CATARACTS SIMULATION
In the following, we describe the implementation of our effects pipeline using five parameters: reduced visual acuity (VA), contrast loss, color shift, loss of peripheral vision, and sensitivity to light. Our effects pipeline follows the principles of the pipeline used by Krösl et al. [19]. However, we implemented the effects to fit our study.
3.1 Reduced Visual Acuity
Blurred vision caused by reduced visual acuity is a common symptom of many eye diseases and vision impairments. Krösl et al. [17, 18] already simulated reduced VA by applying a Gaussian blur to an image and a VR environment, respectively. For our implementation, we followed the findings of Hogervorst and Damme [13]: we used a sigma parameter for the Gaussian blur related to the level of VA to be simulated. Additionally, we calculated the grid size of the Gaussian blur by multiplying sigma by three and rounding to the closest odd number. This grid size indicates how many neighboring pixels are considered for the blur. For the Gaussian blur itself, we used the following equation:

f(x) = 2 × (1/√(2πσ²)) × exp(−x²/(2 × σ²)) − (1/√(2πσ²)) × exp(−(√2 × x)²/(2 × σ²))    (1)

following the findings of Hogervorst and Damme [13], with σ being the blur strength. The Gaussian blur is visualized in Figure 1b.
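As a concrete illustration of the relationship between σ and the kernel, the following minimal sketch builds a normalized 1D Gaussian kernel and applies a separable blur along one image row. This is a CPU-side, single-channel illustration (the actual effect runs as a shader), and the exact odd-rounding of the grid size is our reading of the rule above:

```python
import math

def gaussian_kernel(sigma):
    # Grid size: 3 * sigma, rounded to the closest odd number (assumption:
    # even results are bumped up by one to stay odd).
    size = round(3 * sigma)
    if size % 2 == 0:
        size += 1
    half = size // 2
    weights = [math.exp(-(i * i) / (2 * sigma * sigma)) for i in range(-half, half + 1)]
    total = sum(weights)
    return [w / total for w in weights]  # normalized so the blur preserves brightness

def blur_row(pixels, sigma):
    # Separable Gaussian blur applied along one row; edges are clamped.
    kernel = gaussian_kernel(sigma)
    half = len(kernel) // 2
    out = []
    for i in range(len(pixels)):
        acc = 0.0
        for k, w in enumerate(kernel):
            j = min(max(i + k - half, 0), len(pixels) - 1)
            acc += w * pixels[j]
        out.append(acc)
    return out
```

Because the kernel is normalized, a uniform input row passes through unchanged, which is a quick sanity check for the weights.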
3.2 Contrast Loss
We found two approaches for implementing a contrast reduction: a basic interpolation of the current image with a neutral gray, or the compression of the luminance value in every frame. The latter allows for more control in simulating the mechanisms of the human eye since it is done in the CIELAB color space (also referred to as L*a*b*); linear changes of the three RGB channels do not yield linear contrast changes [18]. As such, we implemented both approaches to evaluate them.
To achieve contrast reduction using gray interpolation, the following formula is applied to every pixel for every color channel: G = G₁ × g + 0.5 × (1 − g), where G is the resulting color, G₁ is the initial color, and g ∈ [0, 1], g ∈ ℝ, is the strength of the contrast reduction. In Figure 2a, we visualize this approach.
To reduce the contrast by compressing the luminance, the RGB value of each pixel first needs to be transformed into the CIELAB color space. After this, we can apply the following formula: L = L₁ × l + 50 × (1 − l), where L represents the new luminance value, L₁ the initial luminance, and l ∈ [0, 1], l ∈ ℝ, a scaling factor. To preserve the average lightness, 50 × (1 − l) is added [18]. We visualize this result in Figure 2b.
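Both contrast formulas are simple per-pixel operations; the sketch below transcribes them directly, assuming RGB channels in [0, 1] and an L* value in [0, 100]. The CIELAB conversion itself (e.g., via a color library) is outside this sketch:

```python
def contrast_gray(rgb, g):
    # G = G1*g + 0.5*(1-g): interpolate each channel toward neutral gray (0.5);
    # g = 1 leaves the image unchanged, g = 0 yields uniform gray.
    return tuple(c * g + 0.5 * (1 - g) for c in rgb)

def compress_luminance(L, l):
    # L = L1*l + 50*(1-l) on the CIELAB L* channel: compressing toward the
    # mid lightness of 50 preserves the average lightness [18]. The RGB <-> Lab
    # round trip around this step is assumed to happen elsewhere.
    return L * l + 50 * (1 - l)
```

For example, `contrast_gray((1.0, 0.0, 0.5), 0.5)` pulls a saturated pixel halfway toward gray, while mid-gray itself is a fixed point of the operation.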
3.3 Color Shift
Again, we found two ways to implement a color shift for simulating cataracts: first, the interpolation between the camera image and a set color, and second, the simulation of a color filter. The first also reduces the contrast, which would not allow us to manipulate the contrast reduction from Section 3.2 independently of the color shift. Nonetheless, this first option seems to be used more often. Hence, we implemented both options and later evaluated which is more representative.
(a) Reference Image (b) Gaussian Blur
Fig. 1. (a) A low-poly landscape without any visualizations applied. (b) A low-poly landscape with the loss of visual acuity filter applied.
(a) Interpolation with Gray (b) Luminance Reduction
Fig. 2. (a) A low-poly landscape where the interpolation with gray is applied. (b) A low-poly landscape where the luminance reduction is visualized.
We realized the color interpolation using the formula S = S₁ × s + C_target × (1 − s), where S is the resulting color, S₁ the initial color, s ∈ [0, 1], s ∈ ℝ, the strength of the color shift, and C_target = {1.0, 0.718461, 0.177084} the target color [18]. Figure 3a shows an image resulting from this application.
The simulated color filter uses the formula F = F₁ × C_target × f + F₁ × (1 − f) from [19]. Similarly to the color interpolation, F represents the resulting color, F₁ the initial color, f ∈ [0, 1], f ∈ ℝ, the strength of the color shift, and C_target the target color (same values as above). The results of this color filter are visualized in Figure 3b.
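Both color-shift variants can be transcribed directly from the formulas. In this sketch, channel values are assumed to be in [0, 1]; note that the two strength parameters run in opposite directions (s = 1 keeps the original image, while f = 1 applies the full filter):

```python
C_TARGET = (1.0, 0.718461, 0.177084)  # yellowish target color from [18]

def color_interpolation(rgb, s):
    # S = S1*s + C_target*(1-s): blends toward the target color; as a side
    # effect this also flattens contrast, as discussed above.
    return tuple(c * s + t * (1 - s) for c, t in zip(rgb, C_TARGET))

def color_filter(rgb, f):
    # F = F1*C_target*f + F1*(1-f): multiplicative filter [19]; tints the image
    # without pulling every pixel toward one fixed color.
    return tuple(c * t * f + c * (1 - f) for c, t in zip(rgb, C_TARGET))
```

The multiplicative filter keeps dark pixels dark (a black pixel stays black), which is why it can be varied independently of the contrast reduction.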
3.4 Sensitivity to Light
We simulate sensitivity to light using a post-processing effect from Unity called "Bloom". This effect allows us to manipulate the light's intensity, threshold, and diffusion. With these, we simulate images becoming blurred and bright light becoming especially problematic as it creates intense glare [18]. In Figure 4, we visualize this part of our effects pipeline.
(a) Color Interpolation (b) Simulated Color Filter
Fig. 3. (a) A low-poly landscape where the color interpolation is applied. (b) A low-poly landscape where the simulated color filter is applied.
(a) Sensitivity to Light (b) Effects Combined
Fig. 4. (a) A low-poly landscape with the sensitivity to light simulated. (b) A low-poly landscape where all effects are combined.
3.5 Loss of Peripheral Vision
Lastly, cataracts lead to a clouding of the eye lens. While for nuclear cataracts this clouding is uniformly distributed over the whole lens, subcapsular cataracts create a dark shadow in the center of the lens, and cortical cataracts produce dark shadows in the periphery of the lens. We simulate these shadows using a gaze-contingent stencil shader implementation that hides items with a specific material, so that such items outside the preserved area (i.e., in the periphery) are hidden. We chose this approach because, during pilot testing, participants always perceived the dark shadow simulations used in other work (e.g., Krösl et al. [18]), even when the changes were minimal: current hardware cannot keep up with human perception regarding refresh rate and latency, making those simulations unsuitable for our study. Figure 5 visualizes the loss of peripheral vision from our pipeline.
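The gaze-contingent hiding can be illustrated by the underlying visibility test: an item tagged with the special material is hidden once its angular eccentricity from the current gaze direction exceeds the preserved central field. The sketch below illustrates that logic only, not the stencil shader itself, and the threshold name is ours:

```python
import math

def angular_eccentricity(gaze_dir, item_dir):
    # Angle in degrees between two normalized 3D direction vectors.
    dot = sum(g * i for g, i in zip(gaze_dir, item_dir))
    dot = max(-1.0, min(1.0, dot))  # guard against floating-point overshoot
    return math.degrees(math.acos(dot))

def item_visible(gaze_dir, item_dir, preserved_fov_deg):
    # Tagged items are shown only inside the preserved central field;
    # anything beyond it is culled, simulating peripheral vision loss.
    return angular_eccentricity(gaze_dir, item_dir) <= preserved_fov_deg
```

Because the test is re-evaluated against the live gaze sample every frame, the hidden region follows the eye rather than the head, which is what makes the effect gaze-contingent.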
(a) Normal Vision (b) Peripheral Vision Loss
Fig. 5. (a) A low-poly landscape without any of the cataract parameters applied. (b) A low-poly landscape with the peripheral loss visualization applied. In this image, the red balls have a special invisible material that hides these items in the periphery. This simulation is gaze-contingent.

4 SIMULATION VALIDATION
To validate our implementation of the different cataract parameters and the progression of cataracts, we held semi-structured interviews with individuals who either currently have cataracts in one eye or recently had cataract surgery. The interviews consisted of two parts: generic questions, followed by an iterative parameter adjustment. All interviews were conducted over video call or in person on a normal screen.
4.1 Procedure
We prepared a sample scene with a low-poly landscape for the iterative parameter adjustment. In this scene, we implemented parameters for visual acuity reduction, contrast loss, color shift, and sensitivity to light. When two approaches were available, we implemented both, all according to Section 3. Before the interview, we sent out the consent forms and a copy of the procedure so the participants had ample time to read through them before participating in the interview. One interview took place in person; the remaining were over video call. At the start of the interview, we asked if the participant had any questions about the consent form or procedure. After this, we started off with a series of demographic questions relevant to the study at hand. After the demographic questions, we came to the iterative parameter adjustment, during which we inquired which of the different implementations for contrast loss and color shift was more capable of recreating the participants' visual perception. We did not represent the loss of peripheral or central vision using the stencil shader approach; instead, we applied a vignette or inverted vignette effect, as the stencil shader effect is not "visible" and participants would therefore have been unable to indicate its size accurately. During the iterative parameter adjustment, we used the sliders to adjust the parameters to recreate the participants' visual impression of their cataracts, first one by one, after which all parameters were combined, as some affect each other. All participants reported that the accuracy of the representation was higher with the interpolation with gray for the contrast loss and the color interpolation for the color shift.
4.2 Participants
In total, we interviewed six individuals who either currently have cataracts in one eye or recently had cataract surgery. Two participants were excluded: one reported far progressed age-related macular degeneration, which affects their vision more strongly, and one reported the progression of cataracts to happen within the span of one month, which contradicts existing literature [24]. Of the remaining four participants, P1 (female, 82 years old) reported −2 diopters in the right eye, with the last cataract surgery in 2014. P2 (male, 74 years old) was interviewed after recent cataract surgery; at the time of the interview, there was no reported loss of vision. P3 (female, 56 years old) reported < 5% vision remaining in her right eye, with cataract surgery last performed in 2008; however, her vision was affected by several other vision impairments. P4 (male, 68 years old), interviewed after recent cataract surgery, reported +2.5 diopters in both eyes.
4.3 Results
During the interviews, all participants reported that the interpolation with gray (see Figure 2a) is more representative of the loss of contrast and that the color interpolation (see Figure 3a) is perceived as more representative of the color shift. All participants went through the iterative parameter adjustments until they found that the result represented their vision (either in the cataract-affected eye or while it was affected).
We visualize their reported cataract progression over time in Figure 6. For each parameter, we do not plot participants who did not report that parameter. Furthermore, we visualize each parameter's average progression over time using the dashed lines. By doing so, we assume a linear growth of each parameter, as we could not collect data about progression over the span of multiple years². None of our participants reported any loss of peripheral vision.

(a) Progression of VA reduction (b) Progression of contrast reduction
(c) Progression of color shift (d) Progression of light sensitivity
Fig. 6. The progression of the different parameters over the years. Each line represents the progression for one interviewee, where the colors consistently represent the same participant throughout the graphs. Where interviewees did not report a progression of one parameter, this is not represented in the data. The dashed line represents the average linear progression over 10 years.
5 USER STUDY
In the following section, we discuss the user study performed with 33 participants, the virtual environment and hardware used for this user study, and the analysis.
5.1 Apparatus
We selected a visual search task to assess the impact of the cataract simulation on participants' behavior and performance. Visual search tasks have been employed widely to investigate visual impairments, both with patients who have visual impairments [6, 28] and in simulations of visual impairments [7, 15, 20]. Pollmann et al. [28] showed that participants with age-related macular degeneration show significantly smaller search deficits compared to healthy participants in realistic scenes (such as rooms in a home) due to the possibility of contextual cueing. This shows that displaying realistic scenes is necessary to adequately assess the deficiencies of a visual impairment in search tasks.

²We acknowledge that this might oversimplify the progression of cataracts.
Therefore, we opted to show realistic environments of different rooms in the virtual environment. Similar to David et al. [7], we built eight virtual environments to represent different rooms in a house; see Figure 8. For the search items, we used items that could be found in a household, namely an apple, calculator, rubber duck, first aid kit, flashlight, glass, hairdryer, kettle, medicine bottle, phone, screwdriver, soap bar, or stapler. All rooms and items were chosen to preserve contextual cueing.
We built the virtual environments in Unity3D and deployed them to a desktop with a Ryzen 9 7900X3D, 64 GB RAM, an NVIDIA RTX 4080 Super graphics card, and a connected HTC Vive Pro Eye with a built-in Tobii eye tracker. The HMD has two AMOLED screens with a total resolution of 2880 × 1600 pixels (1440 × 1600 pixels per eye, resulting in a pixel density of 615 pixels per inch (PPI)), a refresh rate of 90 Hz, and a field of view of 110°. The built-in Tobii eye tracker was accessed using the Vive SRanipal SDK at a sampling frequency of 120 Hz.
We collected the head position and directional 3D vectors from Unity. We recorded eye tracking data using the built-in eye tracker in the HTC Vive Pro Eye (120 Hz) through the SRanipal SDK, giving us eye-directional 3D vectors relative to both the world and the head. To facilitate our analysis, we converted the head-directional and eye-directional 3D vectors into 2D Fick angles using the Fick-gimbal method [12]. This conversion involves two rotations, one over the vertical axis and another over the nested horizontal axis, effectively characterizing the position of each vector. The resulting 2D Fick angles for eye and head directions were the basis for our following analysis. For the analysis, we focused on the eye tracking data of each trial. Given the relatively short duration of our trials, averaging approximately 5.3 ± 8.89 seconds across all trials, we conducted our eye tracking analysis at the trial level rather than examining individual behaviors within each trial.
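The Fick-angle conversion described above can be sketched as follows; the axis convention (x right, y up, z forward, as in Unity) is an assumption of this illustration:

```python
import math

def fick_angles(direction):
    # Convert a normalized 3D direction vector into 2D Fick angles:
    # first a rotation about the vertical axis (horizontal angle), then a
    # rotation about the nested horizontal axis (vertical angle).
    x, y, z = direction
    horizontal = math.degrees(math.atan2(x, z))
    vertical = math.degrees(math.atan2(y, math.hypot(x, z)))
    return horizontal, vertical
```

A straight-ahead vector maps to (0°, 0°), and a vector pointing fully to the right maps to a 90° horizontal angle, which matches the gimbal intuition.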
We calculated fixations and saccades using pymovements [16], an open-source Python package for analyzing eye tracking data. We leveraged pymovements' implementation of the I-DT algorithm [29] with fixation thresholds set at a minimum fixation duration of 83 ms and a maximum dispersion of 1.8°, in line with prior work [3, 32]. This allows us to calculate the following fixation-based metrics: total fixation duration, average fixation duration, fixation count, and the time from the start of the trial to the last fixation in the trial. For saccades, we again leverage pymovements, this time for its implementation of the microsaccade algorithm [9], which allows us to calculate the saccadic amplitude and saccade frequency. The saccadic amplitude was calculated as the angle in degrees between the saccade onset and offset, while the saccade frequency was determined by dividing the number of saccades within a trial by the trial duration in minutes. To calculate the average pupil dilation for each trial, we computed the mean of the left and right pupil sizes and subsequently calculated the standard deviation to facilitate our analysis at the trial level. Finally, for computing the Index of Pupillary Activity (IPA), we employed the implementation by Duchowski et al. [8]: we utilized a two-level discrete wavelet transform to analyze the pupil diameter signal, normalized the wavelet coefficients to ensure a uniform analysis, identified key peaks in the signal marking significant changes in pupil diameter, and applied a universal threshold to filter out noise.
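Once the events are detected, the trial-level metrics reduce to simple aggregations. The sketch below assumes fixations are given as (onset, offset) pairs in seconds, which is a simplification of the event frames pymovements produces:

```python
def fixation_metrics(fixations, trial_start_s):
    # fixations: list of (onset_s, offset_s) pairs from an event detector.
    durations = [off - on for on, off in fixations]
    return {
        "total_fixation_duration": sum(durations),
        "average_fixation_duration": sum(durations) / len(durations),
        "fixation_count": len(fixations),
        # time from trial start to the onset of the last fixation (one
        # plausible reading of "time to the last fixation")
        "time_to_last_fixation": fixations[-1][0] - trial_start_s,
    }

def saccade_frequency(saccade_count, trial_duration_s):
    # Number of saccades divided by the trial duration in minutes.
    return saccade_count / (trial_duration_s / 60.0)
```

Keeping the aggregation separate from the detection makes it easy to swap in different event detectors or thresholds without touching the metric code.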
5.1.1 Room Luminance. To ensure that the lighting in the different rooms was not a confound in our work, we maintained a similar luminance level in each room. We measured the luminance from the headset by positioning a LUX meter (MT-912 light meter) inside the HTC Vive Pro Eye HMD. We placed five measurement points in each room, one in each corner and one in the center, to ensure the lighting was similar. For the four corner points, we measured the lighting conditions in three steps of 45°, always pointing toward the wall. For the central point, we divided the circle into eight steps of 45°, covering a full turn. Furthermore, the camera was pitched down by 25° to represent what participants would perceive more accurately. We automated this process and adjusted the lighting conditions in each room. An overview of the five measuring points with their respective rotations can be seen in Figure 7.

Fig. 7. Lux calibration points and directions
5.2 Procedure
The study coordinator first welcomed the participant and, after the signing of the consent form, asked them to answer a few demographic questions. The coordinator then introduced and explained the study procedure: participants were tasked with searching for household items in eight rooms. Additionally, the study coordinator told the participant that we would manipulate something throughout the study, and that they should report it if they noticed. Afterward, the coordinator and participant determined the dominant eye using the Dolman method. Finally, before entering VR, the study coordinator explained the hardware and controls relevant to the study.
After the participant put on the HTC Vive Pro Eye, the study coordinator navigated them to the integrated eye tracking calibration, after which the experiment started. For each trial, we presented the item the participant was tasked with finding on a gray background. Once the participant confirmed, the item spawned at a random location in the room. These spawning locations were randomly assigned throughout the room, and potential collisions with other items were checked; if there was a collision, a different spawning location was selected. This was repeated for 15 items in each room. For participants in the linear increase condition, we increased the severity of the simulation linearly after each trial, except when the participant moved from one room to the next. For the control condition, there was no change. In the linear increase condition, participants all started without any cataract simulation and ended with a simulation severity of one year of progression, as determined in the interviews (see Section 4.3). As the participants were made aware that we would manipulate something throughout the study, they were asked to report anything they noticed that deviated from how they expected it to be. If they did not actively report to the study coordinator before the end of the study, the study coordinator asked them specifically whether they noticed something odd.
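As a heavily simplified illustration of this schedule, the sketch below ramps a set of placeholder parameters linearly from zero to hypothetical one-year values over the 8 × 15 trials. The actual end values come from the interviews, the parameter names here are ours, and the hold during room transitions is omitted:

```python
# Placeholder one-year severities on a 0..1 scale (NOT the interview values).
ONE_YEAR = {"va": 1.0, "contrast": 1.0, "color": 1.0, "light": 1.0}

def severity_at(trial_index, total_trials=8 * 15):
    # Linear ramp: 0.0 on the first trial, 1.0 (one year) on the last trial.
    t = trial_index / (total_trials - 1)
    return {name: end * t for name, end in ONE_YEAR.items()}
```

In the control condition, the same interface would simply return all-zero severities for every trial, keeping the two conditions structurally identical.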
516
We applied a drift correction after each room, following Sipatchin et al. [31]. This drift correction was designed to
517
518
compensate for eye tracking data quality decay in VR due to the participant’s movement, which is known to induce drift
519 into the precision of the eye tracker [5]. These drifts can influence the gaze-contingent simulation of the peripheral loss.
A Study on Cataract in the Non-Dominant Eye Conference acronym ’XX, June 03–05, 2018, Woodstock, NY
[Figure 8 panels: (a) Bathroom, (b) Bedroom, (c) Dining Room, (d) Garage, (e) Kitchen, (f) Living Room, (g) Meeting Room, (h) Office]

Fig. 8. Our realistic virtual environments used for the user study. On the left, no vision impairment was simulated; on the right, one year of cataract progression was simulated.
Fig. 9. The results of our different measurements.
5.3 Participants
We conducted our study with 33 participants, aged 20 to 71 (mean = 27.6, SD = 10); 14 identified as male, 19 as female, and one as non-conforming. Six participants wore contact lenses during the experiment; the remaining participants had normal vision. As determined by the Dolman method, 19 participants were right-eye dominant and 15 were left-eye dominant. 20 participants were assigned to the linear increase condition, and the remaining participants to the control condition. One participant in the linear increase condition was excluded because their eye-tracking data was incomplete.
6 RESULTS
In this section, we first present the results of our room luminance evaluation. Then, we present our findings on how the cataract simulation affects eye movements and related measures (see Figure 9). We employ Bayesian independent samples t-tests to compare the slopes of linear regressions fitted over the severity.
6.1 Room Luminance
To rule out an effect of luminance across the different rooms, we verified that there was no statistical difference between the luminance measurements of the rooms. Across all rooms, the mean illuminance was 30.1 lux (SD = 5.61). Applying two-tailed t-tests over the collected lux measurements yielded no values of p < 0.05 (see Table 2). Thus, luminance should not have a significant effect on our data.
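The luminance check corresponds to a pairwise two-tailed independent t-test over the per-room lux readings (cf. Table 2). A sketch with illustrative placeholder readings, not the measured data:

```python
from itertools import combinations

import numpy as np
from scipy.stats import ttest_ind

# Illustrative lux readings per room; the real study used measured values.
rng = np.random.default_rng(0)
rooms = {name: rng.normal(30.1, 5.61, size=20)
         for name in ["Office", "Kitchen", "Bathroom", "Bedroom"]}

# ttest_ind is two-tailed by default; collect one p-value per room pair.
p_values = {}
for a, b in combinations(rooms, 2):
    _, p = ttest_ind(rooms[a], rooms[b])
    p_values[(a, b)] = p

# Any pair with p < 0.05 would indicate a luminance difference to investigate.
significant = {pair: p for pair, p in p_values.items() if p < 0.05}
```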
6.2 Bayesian Results
We fit a linear regression for each participant over each of the extracted eye-tracking characteristics to compare the two conditions (linear increase of simulated cataract severity vs. no change). As we expected a learning effect over the course of the user study and within each room, we compared the slopes of these regressions using Bayesian independent samples t-tests conducted in JASP. Priors for all tests were set to a Cauchy distribution (location = 0, scale = 0.707), with the alternative hypothesis that the slopes of the linear change and no change conditions are not equal. Including all extracted eye-tracking characteristics, we found BF10 = 0.103 (error % = 0.166), indicating strong evidence that the slopes of the linear change and no change conditions are equal. We further explored the data by looking at individual eye-tracking characteristics. Our results show anecdotal to moderate evidence that these slopes are equal (H0). However, when exclusively looking at the fixation slope and fixation rate, we find anecdotal evidence for the slopes being different (H1). We report
the results of all our tests in Table 1. We also checked whether normalizing the participants' variance changed the slope comparisons reported above; however, running the same tests with z-score normalization increased the evidence for the data coming from the same population.
Table 1. Bayesian Independent Samples T-Test

                                 |                           Fixation                                  | Saccades
         Time   IPA    PD       | Count  Disp. Area  Avg. Dur  Std. Dur  Tot. Dur  Var. Dur  Slope  Rate  | Amplitude  Freq
BF10     0.345  0.350  0.420    | 0.407  0.671       0.400     0.375     0.368     0.552     1.423  1.014 | 0.346      0.338
error %  0.009  0.003  0.010    | 0.011  <.001       0.012     0.013     0.013     0.017     <.001  <.001 | 0.009      0.007
Out of the 19 participants included in the final analysis of the linear increase condition, 12 (60%) were unaware of the simulated visual impairment at the end of the user study. That is, they did not perceive, within 30 minutes, a progression that would normally unfold over a year. Of those who did notice the simulated cataracts, six noticed it in the second half of the study, and all of them described the loss of visual acuity as noticeable. None of the participants reported having seen any of the other effects, even when explicitly asked at the end of the user study.
7 DISCUSSION
In this work, we first built a cataract simulation in Unity3D that allows precise manipulation of five parameters. We validated the accuracy of our simulation in interviews with individuals who either currently have cataracts or recently had cataract surgery. In these interviews, we asked the interviewees to manipulate the parameters to reproduce their perceived vision while affected by cataracts, which allowed us to derive a progression curve over time. We then used this progression curve in a user study, simulating cataract progression over one year in the non-dominant eye of participants while they searched for everyday items in realistic scenes. In the following, we discuss the most critical implications of our work: the representation of visual impairment simulations, and the implications for future work that considers the effect of visual impairment simulations on eye movements or evaluates assistive technologies.
7.1 Quality of Cataract Simulation
The results of our interviews highlight that our simulation can visualize different severities of cataract progression and reproduce the vision perceived by those who either have cataracts or recently had surgery. This is achieved through a combination of effects whose parameters can be manipulated to accurately represent the different aspects of cataracts. For future work on visual impairment simulations, validating the simulation by interviewing specialists and/or patients is important. For cataract simulations specifically, we have made our simulation scripts available to be used and extended in future work; see Section 9.
7.2 Change Blindness
We found that most participants were unaware of the simulated cataracts. Given how we developed the virtual environments, this corroborates the established finding that contextual cueing influences the ability of those with simulated visual impairment to find objects [28], and it underlines the importance of implementing realistic scenes and items when investigating simulated visual impairments. Furthermore, we found that 60% of our participants were
completely unaware of the simulation representing one year of cataract progression within a 30-minute study. Since participants did not notice this compressed progression, and real progression happens much more slowly, we expect that individuals who develop cataracts are unlikely to notice them on their own, even after several years of progression. This is illustrated by one of our interviewees, who mentioned that they had been unaware of their cataracts even though their doctor diagnosed less than 5% residual vision. This underlines the need for detection methods for visual impairments, as by the time an individual notices a visual impairment, it might already be at an advanced stage.
When analyzing the eye-tracking characteristics, we found no differences between those who experienced a linear increase in severity and those with no change; more precisely, we found strong evidence that the two are equal. As such, future work that investigates differences in eye movements should not simulate the visual impairment only in the non-dominant eye, but in the dominant eye or in both. Furthermore, it might be necessary to increase the severity of the simulation to observe differences in eye-tracking characteristics and trial time.
7.3 Cataracts Beyond 1 Year
While we manipulated the severity of the cataract simulation over one year, we did not investigate further progression. Future work should investigate the impact of further progression, as well as the impact of other visual impairments, in a similar way. By doing so, the research community could collectively work towards early detection systems for visual impairments, as individuals often do not notice changes in their vision due to the gradual nature of visual impairment progression.
8 CONCLUSION
We present a systematic investigation of how to implement a cataract simulation and validate it with individuals who currently have cataracts or recently underwent cataract surgery. In a subsequent user study, we used this simulation to simulate the progression of cataracts in a realistic virtual reality environment in which participants were tasked with searching for household items. We found strong evidence that eye movement characteristics are the same for those with linearly increasing severity and those with no change. Furthermore, we noted that most of the participants in this user study were unaware of the simulation. Our findings lay the groundwork for future simulation studies, underlining the necessity of realistic virtual environments and of validating the simulation with patients.
9 OPEN SCIENCE
We encourage readers to reproduce and extend our results and analysis methods. Therefore, our simulation scripts, collected data, and analysis scripts are available at removed-for-anonymous.review.
REFERENCES
[1] Essam S. Almutleb and Shirin E. Hassan. 2020. The Effect of Simulated Central Field Loss on Street-crossing Decision-Making in Young Adult Pedestrians. Optometry and Vision Science 97, 4 (2020). https://journals.lww.com/optvissci/fulltext/2020/04000/the_effect_of_simulated_central_field_loss_on.2.aspx
[2] Halim Cagri Ates, Alexander Fiannaca, and Eelke Folmer. 2015. Immersive Simulation of Visual Impairments Using a Wearable See-through Display. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (Stanford, California, USA) (TEI '15). Association for Computing Machinery, New York, NY, USA, 225–228. https://doi.org/10.1145/2677199.2680551
[3] Jaap A Beintema, Editha M van Loon, and Albert V van den Berg. 2005. Manipulating saccadic decision-rate distributions in visual search. Journal of Vision 5, 3 (2005), 1–1. https://doi.org/10.1167/5.3.1
[4] Rupert Bourne, Jaimie D Steinmetz, Seth Flaxman, Paul Svitil Briant, Hugh R Taylor, Serge Resnikoff, Robert James Casson, Amir Abdoli, Eman Abu-Gharbieh, Ashkan Afshin, Hamid Ahmadieh, Yonas Akalu, Alehegn Aderaw Alamneh, Wondu Alemayehu, Ahmed Samir Alfaar, Vahid Alipour, Etsay Woldu Anbesu, Sofia Androudi, Jalal Arabloo, Aries Arditi, Malke Asaad, Eleni Bagli, Atif Amin Baig, Till Winfried Bärnighausen, Maurizio Battaglia Parodi, Akshaya Srikanth Bhagavathula, Nikha Bhardwaj, Pankaj Bhardwaj, Krittika Bhattacharyya, Ali Bijani, Mukharram Bikbov, Michele Bottone, Tasanee Braithwaite, Alain M Bron, Zahid A Butt, Ching-Yu Cheng, Dinh-Toi Chu, Maria Vittoria Cicinelli, João M Coelho, Baye Dagnew, Xiaochen Dai, Reza Dana, Lalit Dandona, Rakhi Dandona, Monte A Del Monte, Jenny P Deva, Daniel Diaz, Shirin Djalalinia, Laura E Dreer, Joshua R Ehrlich, Leon B Ellwein, Mohammad Hassan Emamian, Arthur G Fernandes, Florian Fischer, David S Friedman, João M Furtado, Abhay Motiramji Gaidhane, Shilpa Gaidhane, Gus Gazzard, Berhe Gebremichael, Ronnie George, Ahmad Ghashghaee, Mahaveer Golechha, Samer Hamidi, Billy Randall Hammond, Mary Elizabeth R Hartnett, Risky Kusuma Hartono, Simon I Hay, Golnaz Heidari, Hung Chak Ho, Chi Linh Hoang, Mowafa Househ, Segun Emmanuel Ibitoye, Irena M Ilic, Milena D Ilic, April D Ingram, Seyed Sina Naghibi Irvani, Ravi Prakash Jha, Rim Kahloun, Himal Kandel, Ayele Semachew Kasa, John H Kempen, Maryam Keramati, Moncef Khairallah, Ejaz Ahmad Khan, Rohit C Khanna, Mahalaqua Nazli Khatib, Judy E Kim, Yun Jin Kim, Sezer Kisa, Adnan Kisa, Ai Koyanagi, Om P Kurmi, Van Charles Lansingh, Janet L Leasher, Nicolas Leveziel, Hans Limburg, Marek Majdan, Navid Manafi, Kaweh Mansouri, Colm McAlinden, Seyed Farzad Mohammadi, Abdollah Mohammadian-Hafshejani, Reza Mohammadpourhodki, Ali H Mokdad, Delaram Moosavi, Alan R Morse, Mehdi Naderi, Kovin S Naidoo, Vinay Nangia, Cuong Tat Nguyen, Huong Lan Thi Nguyen, Kolawole Ogundimu, Andrew T Olagunju, Samuel M Ostroff, Songhomitra Panda-Jonas, Konrad Pesudovs, Tunde Peto, Zahiruddin Quazi Syed, Mohammad Hifz Ur Rahman, Pradeep Y Ramulu, Salman Rawaf, David Laith Rawaf, Nickolas Reinig, Alan L Robin, Luca Rossetti, Sare Safi, Amirhossein Sahebkar, Abdallah M Samy, Deepak Saxena, Janet B Serle, Masood Ali Shaikh, Tueng T Shen, Kenji Shibuya, Jae Il Shin, Juan Carlos Silva, Alexander Silvester, Jasvinder A Singh, Deepika Singhal, Rita S Sitorus, Eirini Skiadaresi, Vegard Skirbekk, Amin Soheili, Raúl A R C Sousa, Emma Elizabeth Spurlock, Dwight Stambolian, Biruk Wogayehu Taddele, Eyayou Girma Tadesse, Nina Tahhan, Md Ismail Tareque, Fotis Topouzis, Bach Xuan Tran, Ravensara S Travillian, Miltiadis K Tsilimbaris, Rohit Varma, Gianni Virgili, Ya Xing Wang, Ningli Wang, Sheila K West, Tien Y Wong, Zoubida Zaidi, Kaleab Alemayehu Zewdie, Jost B Jonas, and Theo Vos. 2021. Trends in prevalence of blindness and distance and near vision impairment over 30 years: an analysis for the Global Burden of Disease Study. The Lancet Global Health 9, 2 (Feb. 2021), e130–e143. https://doi.org/10.1016/s2214-109x(20)30425-3
[5] Viviane Clay, Peter König, and Sabine U. König. 2019. Eye tracking in virtual reality. Journal of Eye Movement Research 12, 1 (April 2019). https://doi.org/10.16910/jemr.12.1.3
[6] Louise E. Culham, Anthony Chabra, and Gary S. Rubin. 2004. Clinical performance of electronic, head-mounted, low-vision devices. Ophthalmic and Physiological Optics 24, 4 (2004), 281–290. https://doi.org/10.1111/j.1475-1313.2004.00193.x
[7] Erwan David, Julia Beitner, and Melissa Le-Hoa Võ. 2020. Effects of Transient Loss of Vision on Head and Eye Movements during Visual Search in a Virtual Environment. Brain Sciences 10, 11 (2020). https://doi.org/10.3390/brainsci10110841
[8] Andrew T Duchowski, Krzysztof Krejtz, Izabela Krejtz, Cezary Biele, Anna Niedzielska, Peter Kiefer, Martin Raubal, and Ioannis Giannopoulos. 2018. The index of pupillary activity: Measuring cognitive load vis-à-vis task difficulty with pupil oscillation. In Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems. 1–13.
[9] Ralf Engbert and Reinhold Kliegl. 2003. Microsaccades uncover the orientation of covert attention. Vision Research 43, 9 (2003), 1035–1045. https://doi.org/10.1016/S0042-6989(03)00084-1
[10] Elisabeth M Fine and Gary S Rubin. 1999. The effects of simulated cataract on reading with normal vision and simulated central scotoma. Vision Research 39, 25 (Dec. 1999), 4274–4285. https://doi.org/10.1016/s0042-6989(99)00132-7
[11] Jesse Grootjen, Alexandra Sipatchin, Siegfried Wahl, Tonja-Katrin Machulla, Lewis Chuang, and Thomas Kosch. 2023. Assessing Eye Tracking for Continuous Central Field Loss Monitoring. In Proceedings of the 22nd International Conference on Mobile and Ubiquitous Multimedia (Vienna, Austria) (MUM '23). Association for Computing Machinery, New York, NY, USA, 54–64. https://doi.org/10.1145/3626705.3627776
[12] Thomas Haslwanter. 1995. Mathematics of three-dimensional eye rotations. Vision Research 35, 12 (1995), 1727–1739. https://doi.org/10.1016/0042-6989(94)00257-M
[13] M.A. Hogervorst and W.J.M. Van Damme. 2006. Visualizing visual impairments. Gerontechnology 5, 4 (Oct. 2006), 208–221. https://doi.org/10.4017/gt.2006.05.04.003.00
[14] Alex D. Hwang and Eli Peli. 2014. An Augmented-Reality Edge Enhancement Application for Google Glass. Optometry and Vision Science 91, 8 (Aug. 2014), 1021–1030. https://doi.org/10.1097/opx.0000000000000326
[15] Pete R. Jones, Tamás Somoskeöy, Hugo Chow-Wing-Bom, and David P. Crabb. 2020. Seeing other perspectives: evaluating the use of virtual and augmented reality to simulate visual impairments (OpenVisSim). npj Digital Medicine 3, 1 (10 Mar 2020), 32. https://doi.org/10.1038/s41746-020-0242-6
[16] Daniel G. Krakowczyk, David R. Reich, Jakob Chwastek, Deborah N. Jakobi, Paul Prasse, Assunta Süss, Oleksii Turuta, Paweł Kasprowski, and Lena A. Jäger. 2023. Pymovements: A Python Package for Eye Movement Data Processing (ETRA '23). ACM, New York, NY, USA, Article 53, 2 pages. https://doi.org/10.1145/3588015.3590134
[17] Katharina Krösl, Dominik Bauer, Michael Schwärzler, Henry Fuchs, Georg Suter, and Michael Wimmer. 2018. A VR-based user study on the effects of vision impairments on recognition distances of escape-route signs in buildings. The Visual Computer 34, 6-8 (June 2018), 911–923. https://doi.org/10.1007/s00371-018-1517-7
[18] Katharina Krösl, Carmine Elvezio, Matthias Hürbe, Sonja Karst, Michael Wimmer, and Steven Feiner. 2019. ICthroughVR: Illuminating Cataracts through Virtual Reality. In 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR). IEEE, New York, NY, USA, 655–663. https://doi.org/10.1109/VR.2019.8798239
[19] Katharina Krösl, Carmine Elvezio, Laura R. Luidolt, Matthias Hürbe, Sonja Karst, Steven Feiner, and Michael Wimmer. 2020. CatARact: Simulating Cataracts in Augmented Reality. In 2020 IEEE International Symposium on Mixed and Augmented Reality (ISMAR). IEEE, New York, NY, USA, 682–693. https://doi.org/10.1109/ISMAR50242.2020.00098
[20] Miyoung Kwon, Chaithanya Ramachandra, Premnandhini Satgunam, Bartlett W Mel, Eli Peli, and Bosco S Tjan. 2012. Contour enhancement benefits older adults with simulated central field loss. Optometry and Vision Science 89, 9 (Sept. 2012), 1374–1384. https://doi.org/10.1097/OPX.0b013e3182678e52
[21] Florian Lang, Jesse W. Grootjen, Lewis L. Chuang, and Tonja Machulla. 2022. IDeA: A Demonstration of a Mixed Reality System to Support Living with Central Field Loss. In Proceedings of Mensch Und Computer 2022 (Darmstadt, Germany) (MuC '22). Association for Computing Machinery, New York, NY, USA, 611–614. https://doi.org/10.1145/3543758.3547521
[22] James Lewis, David Brown, Wayne Cranton, and Robert Mason. 2011. Simulating visual impairments using the Unreal Engine 3 game engine. In 2011 IEEE 1st International Conference on Serious Games and Applications for Health (SeGAH). 1–8. https://doi.org/10.1109/SeGAH.2011.6165430
[23] J. Lewis and Luke Shires. 2012. Development of a visual impairment simulator using the Microsoft XNA Framework. https://api.semanticscholar.org/CorpusID:53479353
[24] Benjamin V. Magno, Manuel B. Datiles, and Maria Susan M. Lasa. 1995. Progression of lens opacities in cataract patients after one year. Acta Ophthalmologica Scandinavica 73, 1 (Feb. 1995), 45–49. https://doi.org/10.1111/j.1600-0420.1995.tb00012.x
[25] R. Michael and A. J. Bron. 2011. The ageing lens and cataract: a model of normal and pathological ageing. Philosophical Transactions of the Royal Society B: Biological Sciences 366, 1568 (April 2011), 1278–1292. https://doi.org/10.1098/rstb.2010.0300
[26] Ralph Michael, Laurentius J. Van Rijn, Thomas J. T. P. Van Den Berg, Rafael I. Barraquer, Günther Grabner, Helmut Wilhelm, Tanja Coeckelbergh, Martin Emesz, Patrik Marvan, and Christian Nischler. 2009. Association of lens opacities, intraocular straylight, contrast sensitivity and visual acuity in European drivers. Acta Ophthalmologica 87, 6 (Sept. 2009), 666–671. https://doi.org/10.1111/j.1755-3768.2008.01326.x
[27] Donatella Pascolini and Silvio Paolo Mariotti. 2012. Global estimates of visual impairment: 2010. British Journal of Ophthalmology 96, 5 (May 2012), 614–618. https://doi.org/10.1136/bjophthalmol-2011-300539
[28] Stefan Pollmann, Lisa Rosenblum, Stefanie Linnhoff, Eleonora Porracin, Franziska Geringswald, Anne Herbik, Katja Renner, and Michael B. Hoffmann. 2020. Preserved Contextual Cueing in Realistic Scenes in Patients with Age-Related Macular Degeneration. Brain Sciences 10, 12 (2020). https://doi.org/10.3390/brainsci10120941
[29] Dario D Salvucci and Joseph H Goldberg. 2000. Identifying fixations and saccades in eye-tracking protocols. In Proceedings of the 2000 Symposium on Eye Tracking Research & Applications. 71–78. https://doi.org/10.1145/355017.355028
[30] Alexandra Sipatchin, Miguel García García, and Siegfried Wahl. 2022. Assistance for macular degeneration (MD): Different strategies for different augmentations. Investigative Ophthalmology & Visual Science 63, 7 (2022), 714–F0442.
[31] Alexandra Sipatchin, Miguel García García, and Siegfried Wahl. 2021. Target Maintenance in Gaming via Saliency Augmentation: An Early-Stage Scotoma Simulation Study Using Virtual Reality (VR). Applied Sciences 11, 15 (Aug. 2021), 7164. https://doi.org/10.3390/app11157164
[32] AV Van den Berg and EM Van Loon. 2005. An invariant for timing of saccades during visual search. Vision Research 45, 12 (2005), 1543–1555. https://doi.org/10.1016/j.visres.2004.12.018
[33] T. J. T. P. Van Den Berg. 1986. Importance of pathological intraocular light scatter for visual disability. Documenta Ophthalmologica 61, 3-4 (Jan. 1986), 327–333. https://doi.org/10.1007/BF00142360
[34] Jani Väyrynen, Ashley Colley, and Jonna Häkkilä. 2016. Head mounted display design tool for simulating visual disabilities. In Proceedings of the 15th International Conference on Mobile and Ubiquitous Multimedia. Association for Computing Machinery, New York, NY, USA, 69–73. https://doi.org/10.1145/3012709.3012714
[35] Joanne Wood, Alex Chaparro, Trent Carberry, and Byoung Sun Chu. 2010. Effect of simulated visual impairment on nighttime driving performance. Optometry and Vision Science 87, 6 (June 2010), 379–386. https://doi.org/10.1097/OPX.0b013e3181d95b0d
[36] Joanne M. Wood and Rod Troutbeck. 1994. Effect of Visual Impairment on Driving. Human Factors: The Journal of the Human Factors and Ergonomics Society 36, 3 (Sept. 1994), 476–487. https://doi.org/10.1177/001872089403600305
[37] World Health Organization. 2019. World report on vision. https://www.who.int/publications/i/item/9789241516570 Licence: CC BY-NC-SA 3.0 IGO.
A APPENDIX
Table 2. To ensure there was no difference in lighting between the different rooms, we ran a two-tailed t-test between the recorded lux readings of each pair of rooms. In this table, we report the p-values found when comparing the lux values of each room. (LR: Living Room, O: Office, K: Kitchen, Bath: Bathroom, Bed: Bedroom, DR: Dining Room, G: Garage, MR: Meeting Room)

       O         K         Bath      Bed       DR        G         MR
LR     0.760976  0.782200  0.577032  0.416958  0.644065  0.893342  0.178628
O      −         0.952046  0.969207  0.907145  0.864451  0.702120  0.867369
K                −         0.963648  0.968000  0.862396  0.689943  0.941031
Bath                       −         0.880365  0.740621  0.575524  0.798492
Bed                                  −         0.537122  0.614115  0.924110
DR                                             −         0.796136  0.433321
G                                                        −         0.501159
MR                                                                 −