Search Results (710)

Search Parameters:
Keywords = haptics

12 pages, 1196 KiB  
Article
Predictive Factors of Canine Malignant Hepatic Diseases with Multifocal Hepatic Lesions Using Clinicopathology, Ultrasonography, and Hepatobiliary Ultrasound Scores
by Aphinan Phosri, Pinkarn Chantawong, Niyada Thitaram, Kidsadagon Pringproa and Atigan Thongtharb
Animals 2024, 14(19), 2910; https://doi.org/10.3390/ani14192910 - 9 Oct 2024
Abstract
Multifocal hepatic lesions in dogs arise from various benign and malignant liver diseases. Diagnosing these lesions is challenging because clinical signs, hematological data, and serum biochemistry are not definitive indicators. Ultrasound is utilized as a diagnostic imaging tool to evaluate liver parenchyma and detect hepatic lesions. This study aims to investigate the predictive factors that differentiate between benign and malignant multifocal hepatic lesions by examining ultrasound characteristics, blood tests, and serum biochemistry. In total, 43 dogs with multifocal hepatic lesions were included in this study. All dogs were classified into benign hepatic diseases (n = 32) and malignant hepatic diseases (n = 11). For all dogs, liver characteristics, lesion characteristics, and the hepatobiliary ultrasound score were evaluated by ultrasound, and individual clinicopathological data were collected for analysis. The findings of the univariate analysis revealed significant differences in four hematological and blood chemical parameters (hematocrit, white blood cell count, aspartate transaminase (AST), and alkaline phosphatase (ALP)) and six ultrasonographic parameters (liver parenchymal echogenicity, lesion homogeneity, lesion echogenicity, maximum lesion dimension, average lesion dimension, and hepatobiliary ultrasound score). Using multivariate analysis, only two parameters, hepatobiliary ultrasound score and lesion homogeneity, showed significant differences (p-value < 0.001 and p-value = 0.011, respectively). Additionally, these parameters demonstrated high accuracy in predicting malignant multifocal liver lesions, with accuracy rates of 97.67% and 93.02%, respectively. Therefore, the hepatobiliary ultrasound score and lesion homogeneity are considered effective parameters for screening malignant multifocal liver lesions in dogs. Full article
(This article belongs to the Section Veterinary Clinical Studies)
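The reported accuracies can be sanity-checked against the cohort size: with 43 dogs, 97.67% and 93.02% correspond to 42 and 40 correctly classified animals. A minimal consistency check (an editorial illustration, not study code, assuming the accuracies are computed over all 43 dogs):

```python
# Classification accuracy = correctly classified cases / total cases.
def accuracy(correct: int, total: int) -> float:
    return correct / total

n_dogs = 32 + 11  # benign + malignant cases reported in the abstract

# 42/43 and 40/43 reproduce the reported rates to two decimal places.
print(round(accuracy(42, n_dogs) * 100, 2))  # 97.67 (hepatobiliary ultrasound score)
print(round(accuracy(40, n_dogs) * 100, 2))  # 93.02 (lesion homogeneity)
```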

13 pages, 485 KiB  
Review
Beyond Presence: Exploring Empathy within the Metaverse
by Anjitha Divakaran, Hyung-Jeong Yang, Seung-won Kim, Ji-eun Shin and Soo-Hyung Kim
Appl. Sci. 2024, 14(19), 8958; https://doi.org/10.3390/app14198958 - 4 Oct 2024
Viewed by 257
Abstract
As the metaverse evolves, characterized by its immersive and interactive landscapes, it presents novel opportunities for empathy research. This study aims to systematically review how empathy manifests in metaverse environments, focusing on two distinct forms: specific empathy (context-based) and universal empathy (generalized). Our analysis reveals a predominant focus on specific empathy, driven by the immersive nature of virtual settings, such as virtual reality (VR) and augmented reality (AR). However, we argue that such immersive scenarios alone are insufficient for a comprehensive exploration of empathy. To deepen empathetic engagement, we propose the integration of advanced sensory feedback mechanisms, such as haptic feedback and biometric sensing. This paper examines the current state of empathy in virtual environments, contrasts it with the potential for enriched empathetic connections through technological enhancements, and proposes future research directions. By fostering both specific and universal empathy, we envision a metaverse that not only bridges gaps but also cultivates meaningful, empathetic connections across its diverse user base. Full article

17 pages, 6147 KiB  
Article
Tactile Simultaneous Localization and Mapping Using Low-Cost, Wearable LiDAR
by John LaRocco, Qudsia Tahmina, John Simonis, Taylor Liang and Yiyao Zhang
Hardware 2024, 2(4), 256-272; https://doi.org/10.3390/hardware2040012 - 29 Sep 2024
Viewed by 348
Abstract
Tactile maps are widely recognized as useful tools for mobility training and the rehabilitation of visually impaired individuals. However, current tactile maps lack real-time versatility and are limited because of high manufacturing and design costs. In this study, we introduce ClaySight, a device and model for wearable systems that enhances automatic tactile map generation using low-cost laser imaging, detection, and ranging (LiDAR) to improve the immediate spatial knowledge of visually impaired individuals. Our system uses LiDAR sensors to (1) produce affordable, low-latency tactile maps, (2) function as a day-to-day wayfinding aid, and (3) provide interactivity using a wearable device. The system comprises a dynamic mapping and scanning algorithm and an interactive handheld 3D-printed device that houses the hardware. Our algorithm accommodates user specifications to dynamically interact with objects in the surrounding area and create map models that can be represented with haptic feedback or alternative tactile systems. Using economical components and open-source software, the ClaySight system has significant potential to enhance independence and quality of life for the visually impaired. Full article
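The heart of any LiDAR-to-tactile pipeline is rasterizing range returns into cells that a tactile display can raise. A minimal sketch of that step; this is not ClaySight's published algorithm, and the grid size, cell resolution, and (angle, range) scan format are illustrative assumptions:

```python
import math

def scan_to_grid(scan, grid_size=8, cell_m=0.5):
    """Rasterize (angle_rad, range_m) LiDAR returns into a coarse occupancy
    grid centred on the sensor; each occupied cell could be rendered as a
    raised dot on a tactile display."""
    grid = [[0] * grid_size for _ in range(grid_size)]
    half = grid_size // 2
    for angle, rng in scan:
        x = int(rng * math.cos(angle) / cell_m) + half
        y = int(rng * math.sin(angle) / cell_m) + half
        if 0 <= x < grid_size and 0 <= y < grid_size:
            grid[y][x] = 1
    return grid

# An obstacle 1 m straight ahead (angle 0) lands two cells right of centre.
grid = scan_to_grid([(0.0, 1.0)])
print(grid[4][6])  # 1
```

A wearable device would refresh such a grid continuously and render occupied cells as raised dots or vibration cues.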

14 pages, 1401 KiB  
Article
Development and Usability Assessment of Virtual Reality- and Haptic Technology-Based Educational Content for Perioperative Nursing Education
by Hyeon-Young Kim
Healthcare 2024, 12(19), 1947; https://doi.org/10.3390/healthcare12191947 - 29 Sep 2024
Viewed by 550
Abstract
Background/Objectives: In perioperative nursing practice, nursing students can engage in direct, in-person clinical experiences in perioperative environments; however, they face limitations due to infection and contamination risks. This study aimed to develop and evaluate educational content for perioperative clinical practice for nursing students using virtual reality (VR) and haptic technology. Methods: The program, based on the Unity Engine, was created through programming and followed the system development lifecycle (SDLC) phases of analysis, design, implementation, and evaluation. This program allows nursing students to engage in perioperative practice using VR and haptic technology, overcoming previous environmental limitations and enhancing practical and immersive experiences through multi-sensory stimuli. Results: Expert evaluations indicated that the developed content was deemed suitable for educational use. Additionally, a usability assessment with 29 nursing students revealed high levels of presence, usability, and satisfaction among the participants. Conclusions: This program can serve as a foundation for future research on VR-based perioperative nursing education. Full article

18 pages, 1052 KiB  
Review
Adoption of the Robotic Platform across Thoracic Surgeries
by Kaity H. Tung, Sai Yendamuri and Kenneth P. Seastedt
J. Clin. Med. 2024, 13(19), 5764; https://doi.org/10.3390/jcm13195764 - 27 Sep 2024
Viewed by 433
Abstract
With the paradigm shift in minimally invasive surgery from the video-assisted thoracoscopic platform to the robotic platform, thoracic surgeons are applying the new technology through various commonly practiced thoracic surgeries, striving to improve patient outcomes and reduce morbidity and mortality. This review will discuss the updates in lung resections, lung transplantation, mediastinal surgeries with a focus on thymic resection, rib resection, tracheal resection, tracheobronchoplasty, diaphragm plication, esophagectomy, and paraesophageal hernia repair. The transition from open surgery to video-assisted thoracoscopic surgery (VATS) to now robotic video-assisted thoracic surgery (RVATS) allows complex surgeries to be completed through smaller and smaller incisions with better visualization through high-definition images and finer mobilization, accomplishing what might be unresectable before, permitting shorter hospital stay, minimizing healing time, and encompassing broader surgical candidacy. Moreover, better patient outcomes are not only achieved through what the lead surgeon could carry out during surgeries but also through the training of the next generation via accessible live video feedback and recordings. Though larger volume randomized controlled studies are pending to compare the outcomes of VATS to RVATS surgeries, published studies show non-inferiority data from RVATS performances. With progressive enhancement, such as overcoming the lack of haptic feedback, and future incorporation of artificial intelligence (AI), the robotic platform will likely be a cost-effective route once surgeons overcome the initial learning curve. Full article
(This article belongs to the Section General Surgery)

14 pages, 15754 KiB  
Article
Development of Second Prototype of Twin-Driven Magnetorheological Fluid Actuator for Haptic Device
by Takehito Kikuchi, Asaka Ikeda, Rino Matsushita and Isao Abe
Micromachines 2024, 15(10), 1184; https://doi.org/10.3390/mi15101184 - 25 Sep 2024
Viewed by 372
Abstract
Magnetorheological fluids (MRFs) are functional fluids that exhibit rapid and reproducible rheological responses to external magnetic fields. An MRF has been utilized to develop a haptic device with precise haptic feedback for teleoperative surgical systems. To achieve this, we developed several types of compact MRF clutches for haptics (H-MRCs) and integrated them into a twin-driven MRF actuator (TD-MRA). The first TD-MRA prototype was successfully used to generate fine haptic feedback for operators. However, undesirable torque ripples were observed due to shaft misalignment and the low rigidity of the structure. Additionally, the detailed torque control performance was not evaluated from both static and dynamic current inputs. The objective of this study is to develop a second prototype to reduce torque ripple by improving the structure and evaluating its static and dynamic torque performance. Torque performance was measured using both constant and stepwise current inputs. The coefficient of variance of the torque was successfully reduced by half due to the structural redesign. Although the time constants of the H-MRC were less than 10 ms, those of the TD-MRA were less than 20 ms under all conditions. To address the slower downward output response, we implemented an improved input method, which successfully halved the response time. Full article
(This article belongs to the Special Issue Magnetorheological Materials and Application Systems)
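The ripple metric used here, the coefficient of variance of the torque (standard deviation over mean), is simple to compute. A sketch with invented torque samples; only the metric itself comes from the abstract:

```python
import statistics

def torque_cv(samples_nm):
    """Coefficient of variance of a torque trace: population standard
    deviation divided by the mean."""
    return statistics.pstdev(samples_nm) / statistics.mean(samples_nm)

trace = [1.00, 1.02, 0.98, 1.01, 0.99]  # invented torque samples, N·m
print(round(torque_cv(trace), 3))  # 0.014
```

Halving this value, as reported for the second prototype, means the torque trace deviates half as much, relative to its mean, as before the structural redesign.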

18 pages, 17808 KiB  
Article
Virtual Hand Deformation-Based Pseudo-Haptic Feedback for Enhanced Force Perception and Task Performance in Physically Constrained Teleoperation
by Kento Yamamoto, Yaonan Zhu, Tadayoshi Aoyama and Yasuhisa Hasegawa
Robotics 2024, 13(10), 143; https://doi.org/10.3390/robotics13100143 - 24 Sep 2024
Viewed by 673
Abstract
Force-feedback devices enhance task performance in most robot teleoperations. However, their increased size with additional degrees of freedom can limit the robot’s applicability. To address this, an interface that visually presents force feedback is proposed, eliminating the need for bulky physical devices. Our telepresence system renders robotic hands transparent in the camera image while displaying virtual hands. The forces applied to the robot deform these virtual hands. The deformation creates an illusion that the operator’s hands are deforming, thus providing pseudo-haptic feedback. We conducted a weight comparison experiment in a virtual reality environment to evaluate force sensitivity. In addition, we conducted an object touch experiment to assess the speed of contact detection in a robot teleoperation setting. The results demonstrate that our method significantly surpasses conventional pseudo-haptic feedback in conveying force differences. Operators detected object touch 24.7% faster using virtual hand deformation compared to conditions without feedback. This matches the response times of physical force-feedback devices. This interface not only increases the operator’s force sensitivity but also matches the performance of conventional force-feedback devices without physically constraining the operator. Therefore, the interface enhances both task performance and the experience of teleoperation. Full article
(This article belongs to the Special Issue Extended Reality and AI Empowered Robots)
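The mechanism described above amounts to turning the force measured at the robot into a visual displacement of the virtual hand, saturated so that large forces stay visually plausible. A hedged sketch; the gain and clamp values are illustrative, not the authors' calibrated parameters:

```python
def hand_deformation(force_n, gain_px_per_n=12.0, max_px=40.0):
    """Map a contact force at the robot (newtons) to a visual displacement
    of the virtual hand (pixels), clamped to keep the illusion plausible."""
    return max(-max_px, min(max_px, gain_px_per_n * force_n))

print(hand_deformation(2.0))   # 24.0 px for a light touch
print(hand_deformation(10.0))  # 40.0 px: saturated at the clamp
```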

15 pages, 3502 KiB  
Article
Evaluation of Haptic Textures for Tangible Interfaces for the Tactile Internet
by Nikolaos Tzimos, George Voutsakelis, Sotirios Kontogiannis and Georgios Kokkonis
Electronics 2024, 13(18), 3775; https://doi.org/10.3390/electronics13183775 - 23 Sep 2024
Viewed by 478
Abstract
Every texture in the real world provides us with the essential information to identify the physical characteristics of real objects. In addition to sight, humans use the sense of touch to explore their environment. Through haptic interaction we obtain unique and distinct information about the texture and the shape of objects. In this paper, we enhance X3D 3D graphics files with haptic features to create 3D objects with haptic feedback. We propose haptic attributes such as static and dynamic friction, stiffness, and maximum altitude that provide the optimal user experience in a virtual haptic environment. After numerous optimization attempts on the haptic textures, we propose various haptic geometrical textures for creating a virtual 3D haptic environment for the tactile Internet. These tangible geometrical textures can be attached to any geometric shape, enhancing the haptic sense. We conducted a study of user interaction with a virtual environment consisting of 3D objects enhanced with haptic textures to evaluate performance and user experience. The goal is to evaluate the realism and recognition accuracy of each generated texture. The findings of the study aid visually impaired individuals to better understand their physical environment, using haptic devices in conjunction with the enhanced haptic textures. Full article
(This article belongs to the Section Computer Science & Engineering)
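The haptic attributes enumerated above (static and dynamic friction, stiffness, maximum altitude) amount to a small surface-property record attached to each 3D shape. A hedged sketch of such a record; the field names follow the abstract, while the class, units, and sample values are illustrative assumptions rather than the paper's X3D encoding:

```python
from dataclasses import dataclass

@dataclass
class HapticSurface:
    """Haptic rendering attributes of the kind the authors attach to X3D
    shapes (names from the abstract; values are illustrative)."""
    static_friction: float   # resistance before the haptic proxy starts sliding
    dynamic_friction: float  # resistance while the proxy is sliding
    stiffness: float         # contact spring constant, fraction of device maximum
    max_altitude: float      # peak height of the displacement texture, mm

brick = HapticSurface(0.6, 0.4, 0.8, 1.5)
print(brick.stiffness)  # 0.8
```

In the paper these properties live in X3D files; a host application would translate them into parameters for the haptic rendering loop.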

18 pages, 3659 KiB  
Article
Enabling Pandemic-Resilient Healthcare: Edge-Computing-Assisted Real-Time Elderly Caring Monitoring System
by Muhammad Zubair Islam, A. S. M. Sharifuzzaman Sagar and Hyung Seok Kim
Appl. Sci. 2024, 14(18), 8486; https://doi.org/10.3390/app14188486 - 20 Sep 2024
Viewed by 581
Abstract
Over the past few years, life expectancy has increased significantly. However, elderly individuals living independently often require assistance due to mobility issues, symptoms of dementia, or other health-related challenges. In these situations, high-quality elderly care systems for the aging population require innovative approaches to guarantee Quality of Service (QoS) and Quality of Experience (QoE). Traditional remote elderly care methods face several challenges, including high latency and poor service quality, which affect their transparency and stability. This paper proposes an Edge Computational Intelligence (ECI)-based haptic-driven ECI-TeleCaring system for the remote caring and monitoring of elderly people. It utilizes a Software-Defined Network (SDN) and Mobile Edge Computing (MEC) to reduce latency and enhance responsiveness. Dual Long Short-Term Memory (LSTM) models are deployed at the edge to enable real-time location-aware activity prediction to ensure QoS and QoE. Simulation results demonstrate that the proposed system manages real-time data transmission with a communication latency under 2.5 ms (more than 60%) without the activity recognition and location-aware model, and of 11–12 ms (60–95%) with it, for 10 to 1000 data packets. The results also show that the proposed system ensures a trade-off between the transparency and stability of the system from the QoS and QoE perspectives. Moreover, the proposed system serves as a testbed for implementing, investigating, and managing elderly telecaring services for QoS/QoE provisioning. It facilitates real-time monitoring of the deployed technological parameters along with network delay and packet loss, and it oversees data exchange between the master domain (human operator) and slave domain (telerobot). Full article
(This article belongs to the Special Issue Advances in Intelligent Communication System)
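A real-time budget such as the 2.5 ms figure above is typically verified per packet. A minimal sketch of that kind of QoS compliance check; the latency samples are invented, and only the 2.5 ms budget is taken from the abstract:

```python
def qos_compliance(latencies_ms, budget_ms=2.5):
    """Fraction of packets whose end-to-end latency meets the real-time
    budget (2.5 ms figure from the abstract; samples are illustrative)."""
    met = sum(1 for t in latencies_ms if t <= budget_ms)
    return met / len(latencies_ms)

print(qos_compliance([1.8, 2.1, 2.4, 2.6, 1.9]))  # 0.8
```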

12 pages, 1080 KiB  
Article
Development and Validation of a Tool for VBOI (Virtual Body Ownership Illusion) Level Assessment
by Gayoung Yoo and Kyungdoh Kim
Appl. Sci. 2024, 14(18), 8432; https://doi.org/10.3390/app14188432 - 19 Sep 2024
Viewed by 370
Abstract
Virtual Body Ownership Illusion (Virtual BOI) refers to the perceptual, cognitive, and behavioral changes that occur due to the illusion that a virtual body is one’s own actual body. Recent research has focused on inducing the Virtual BOI using various physical conditions of VR environments, such as haptic feedback and 360-degree immersion. The level of Virtual BOI has been recognized as an important factor in VR-based clinical therapy programs where patient immersion is crucial. However, a common issue is the lack of standardized evaluation tools for Virtual BOI, with most experiments relying on ad hoc tools based on experimental conditions or lacking consideration for the physical design elements of VR. To address this, we developed a measurement tool designed to consider the characteristics of recent VR devices, such as haptics and hand tracking, in the design of experiments and questionnaires. The tool is composed of sub-attributes related to VR technology, including Embodiment, Presence, Visuo-tactile, Visuo-proprioceptive, and Visuo-Motor. Based on a review of the existing literature, we hypothesized that Virtual BOI scores would vary depending on manipulation methods, viewpoints, and haptic conditions. An experiment was conducted with 39 participants, who performed the same task under four different conditions using a virtual hand. Virtual BOI scores were assessed using the evaluation tool developed for this study. The questionnaire underwent confirmatory factor analysis (CFA), and three items with factor loadings below 0.5 were removed, resulting in a total of 14 items. Each subscale demonstrated high reliability, with Cronbach’s alpha values greater than 0.60. When developing experiments, clinical programs, or VR content related to Virtual BOI, the evaluation tool presented in this study can be used to assess the level of Virtual BOI. Additionally, by considering technological elements such as haptics and hand tracking, VR environments can be designed to enhance the level of Virtual BOI. Full article
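Cronbach's alpha, the reliability statistic cited above, can be computed directly from per-item score columns. A self-contained sketch with synthetic data (not the study's responses):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for questionnaire internal consistency.
    `items` holds one list of respondent scores per questionnaire item."""
    k = len(items)

    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sums
    return k / (k - 1) * (1 - sum(pvar(col) for col in items) / pvar(totals))

# Three perfectly consistent items give alpha = 1.0; the 0.60 threshold
# used in the study accepts far noisier subscales.
print(round(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]), 3))  # 1.0
```

With real data, each inner list would be one item's scores across all 39 participants.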

14 pages, 6274 KiB  
Article
Evaluation of a Three-Dimensional Printed Interventional Simulator for Cardiac Ablation Therapy Training
by Carlo Saija, Sachin Sabu, Lisa Leung, Ellie Lowe, Noor Al-Bahrani, Marco Antonio Coutinho Pinto, Mark Herridge, Nadia M. Chowdhury, Gregory Gibson, Calum Byrne, Adharvan Gabbeta, Ewen Marion, Rashi Chavan, Jonathan Behar, Antonia Agapi Pontiki, Pierre Berthet-Rayne, Richard James Housden and Kawal Rhode
Appl. Sci. 2024, 14(18), 8423; https://doi.org/10.3390/app14188423 - 19 Sep 2024
Viewed by 599
Abstract
Cardiac ablation (CA) is an interventional electrophysiological procedure used to disrupt arrhythmic substrates in the myocardium by inducing localized scarring. Current CA training relies on the master–apprentice model. In different fields of medicine including CA, virtual and physical simulators have proven to enhance, and even outperform, conventional training modalities while providing a risk-free learning environment. Despite the benefits, high costs and operational difficulties limit the widespread use of interventional simulators. Our previous research introduced a low-cost CA simulator using a 3D-printed biatrial cardiac model, successfully recording ten ablation lesions on the phantom myocardium. In this work, we present and evaluate an enhanced version: compared to the previous version, the cardiac phantom’s electrical behavior and ablation settings were optimized to produce consistent lesions, while 3D-printed components improved the haptic and radiographic properties of the simulator. Seven cardiologists compared the experimental simulator’s performance to the leading commercial system from Heartroid in a 24-question survey on a 5-point Likert scale. The four following areas of fidelity were considered: catheter entry, anatomical correctness, radiographic appearance, and mapping and ablation. The experimental simulator significantly outperformed the commercial system (p < 0.01), particularly in radiographic appearance (p < 0.01). The results show the potential for the experimental simulator in routine CA training. Full article
(This article belongs to the Section Additive Manufacturing Technologies)

19 pages, 6078 KiB  
Article
Using a Guidance Virtual Fixture on a Soft Robot to Improve Ureteroscopy Procedures in a Phantom
by Chun-Feng Lai, Elena De Momi, Giancarlo Ferrigno and Jenny Dankelman
Robotics 2024, 13(9), 140; https://doi.org/10.3390/robotics13090140 - 18 Sep 2024
Viewed by 530
Abstract
Manipulating a flexible ureteroscope is difficult, due to its bendable body and hand–eye coordination problems, especially when exploring the lower pole of the kidney. Though robotic interventions have been adopted in various clinical scenarios, they are rarely used in ureteroscopy. This study proposes a teleoperation system consisting of a soft robotic endoscope together with a Guidance Virtual Fixture (GVF) to help users explore the kidney’s lower pole. The soft robotic arm was a cable-driven, 3D-printed design with a helicoid structure. The GVF was dynamically constructed using video streams from an endoscopic camera. With a haptic controller, the GVF can provide haptic feedback to guide users in following a trajectory. In the user study, participants were asked to follow trajectories while the soft robotic arm was in a retroflex posture. The results suggest that the GVF can reduce errors in trajectory tracking tasks when users receive proper training and gain more experience. Based on the NASA Task Load Index questionnaires, most participants preferred having the GVF when manipulating the robotic arm. In conclusion, the results demonstrate the benefits and potential of using a robotic arm with a GVF. More research is needed to investigate the effectiveness of the GVF and the robotic endoscope in ureteroscopic procedures. Full article
(This article belongs to the Section Soft Robotics)
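A guidance virtual fixture of this kind can be sketched as a restoring force toward the nearest point of the reference trajectory. A minimal 2D illustration; the stiffness value and the point-list trajectory representation are assumptions, not the paper's implementation:

```python
def gvf_force(tool_pos, path_points, stiffness=50.0):
    """Restoring force pulling the tool toward the nearest point of a
    reference trajectory (2D sketch; the 50 N/m stiffness is illustrative)."""
    nearest = min(
        path_points,
        key=lambda p: (p[0] - tool_pos[0]) ** 2 + (p[1] - tool_pos[1]) ** 2,
    )
    return (stiffness * (nearest[0] - tool_pos[0]),
            stiffness * (nearest[1] - tool_pos[1]))

# Tool slightly off a straight segment: the force points back toward the path.
path = [(0.0, 0.0), (0.01, 0.0), (0.02, 0.0)]  # metres
print(gvf_force((0.01, 0.002), path))
```

Rendered through a haptic controller, this force is what the operator feels nudging them back onto the planned trajectory.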

10 pages, 2093 KiB  
Article
The Surgical Outcomes of Modified Intraocular Lens Suturing with Forceps-Assisted Haptics Extraction: A Clinical and Basic Evaluation
by Yasuyuki Sotani, Hisanori Imai, Maya Kishi, Hiroko Yamada, Wataru Matsumiya, Akiko Miki, Sentaro Kusuhara and Makoto Nakamura
J. Clin. Med. 2024, 13(18), 5522; https://doi.org/10.3390/jcm13185522 - 18 Sep 2024
Viewed by 491
Abstract
Background/Objectives: Postoperative intraocular lens (IOL) tilt is a risk associated with IOL scleral fixation. However, the cause of IOL tilt during IOL suturing remains unclear. Therefore, this study aimed to evaluate the surgical outcomes of a modified IOL suturing technique and investigate the factors contributing to postoperative IOL tilt and decentration. Methods: We included 25 eyes of 22 patients who underwent IOL suturing between April 2018 and February 2020. A modified IOL suturing technique that decreased the need for intraocular suture manipulation was used. Factors contributing to IOL tilt and decentration were investigated using an intraoperative optical coherence tomography (iOCT) system. Results: The mean postoperative best-corrected visual acuity improved from 0.15 ± 0.45 to −0.02 ± 0.19 (p = 0.02). The mean IOL tilt angle at the last visit after surgery was 1.84 ± 1.28 degrees. The present study reveals that the distance of the scleral puncture site from the corneal limbus had a stronger effect on IOL tilt; meanwhile, the suture position of the haptics had a greater effect on IOL decentration. Conclusions: The modified IOL suturing technique, which avoids intraocular suture handling, had favorable surgical outcomes with improved postoperative visual acuity and controlled IOL tilt and decentration. Accurate surgical techniques and careful measurement of distances during surgery are crucial for preventing postoperative IOL tilt and decentration. Full article
(This article belongs to the Section Ophthalmology)

24 pages, 1447 KiB  
Review
Effects of Haptic Feedback Interventions in Post-Stroke Gait and Balance Disorders: A Systematic Review and Meta-Analysis
by Maria Gomez-Risquet, Rocío Cáceres-Matos, Eleonora Magni and Carlos Luque-Moreno
J. Pers. Med. 2024, 14(9), 974; https://doi.org/10.3390/jpm14090974 - 14 Sep 2024
Viewed by 507
Abstract
Background: Haptic feedback is an established method to provide sensory information (tactile or kinesthetic) about the performance of an activity that an individual cannot consciously detect. After a stroke, hemiparesis usually leads to gait and balance disorders, where haptic feedback can be a promising approach to promote recovery. The aim of the present study is to understand its potential effects on gait and balance impairments, both after interventions and in terms of immediate effects. Methods: This research was carried out using the following scientific databases: Embase, Scopus, Web of Science, and Medline/PubMed, from inception to May 2024. The Checklist for Measuring Quality, the PEDro scale, and the Cochrane collaboration tool were used to assess the methodological quality and risk of bias of the studies. Results: Thirteen articles were chosen for qualitative analysis, with four providing data for the meta-analysis. The findings did not yield definitive evidence on the effectiveness of haptic feedback for treating balance and gait disorders following a stroke. Conclusions: Further research is necessary in order to determine the effectiveness of haptic feedback mechanisms, with larger sample sizes and more robust methodologies. Longer interventions and pre–post designs in gait training with haptic feedback are necessary. Full article

19 pages, 27719 KiB  
Article
Assistive Control through a Hapto-Visual Digital Twin for a Master Device Used for Didactic Telesurgery
by Daniel Pacheco Quiñones, Daniela Maffiodo and Med Amine Laribi
Robotics 2024, 13(9), 138; https://doi.org/10.3390/robotics13090138 - 11 Sep 2024
Viewed by 432
Abstract
This article explores the integration of a hapto-visual digital twin on a master device used for bilateral teleoperation. The device, known as a quasi-spherical parallel manipulator, is currently employed for remote center of motion control in teleoperated mini-invasive surgery. After providing detailed insights into the device’s kinematics, including its geometric configuration, Jacobian, and reachable workspace, the paper illustrates the overall control system, encompassing both hardware and software components. The article describes how a digital twin, which implements a haptic assistive control and a visually enhanced representation of the device, was integrated into the system. The digital twin was then tested with the device: in the experiments, one “student” end-user must follow a predefined “teacher” trajectory. Preliminary results demonstrate that the overall system can serve as a good starting point for didactic telesurgery operations. The control action, yet to be optimized and tested on different subjects, seems to provide satisfactory performance and accuracy. Full article
(This article belongs to the Special Issue Digital Twin-Based Human–Robot Collaborative Systems)
