Learning Dynamic Spatial Relations: The Case of a Knowledge-based Endoscopic Camera Guidance Robot

This PhD thesis by Andreas Bihlmaier focuses on the development of a knowledge-based endoscopic camera guidance robot, addressing the challenges of minimally-invasive surgery and medical robotics. It details the system architecture, learning models for spatial relations, and intraoperative robot-based camera assistance, along with evaluation studies. The work is part of the Sonderforschungsbereich/Transregio 125 project, emphasizing the importance of collaboration in advancing surgical robotics technology.

Andreas Bihlmaier
Karlsruhe, Germany

PhD Thesis, Karlsruhe Institute of Technology (KIT), 2016

ISBN 978-3-658-14913-0 ISBN 978-3-658-14914-7 (eBook)


DOI 10.1007/978-3-658-14914-7

Library of Congress Control Number: 2016946311

Springer Vieweg
© Springer Fachmedien Wiesbaden 2016
This work is subject to copyright. All rights are reserved by the Publisher, whether the whole or part
of the material is concerned, specifically the rights of translation, reprinting, reuse of illustrations,
recitation, broadcasting, reproduction on microfilms or in any other physical way, and transmission
or information storage and retrieval, electronic adaptation, computer software, or by similar or
dissimilar methodology now known or hereafter developed.
The use of general descriptive names, registered names, trademarks, service marks, etc. in this
publication does not imply, even in the absence of a specific statement, that such names are exempt
from the relevant protective laws and regulations and therefore free for general use.
The publisher, the authors and the editors are safe to assume that the advice and information in this
book are believed to be true and accurate at the date of publication. Neither the publisher nor the
authors or the editors give a warranty, express or implied, with respect to the material contained
herein or for any errors or omissions that may have been made.

Printed on acid-free paper

This Springer Vieweg imprint is published by Springer Nature


The registered company is Springer Fachmedien Wiesbaden GmbH
For puiu
Acknowledgement
A number of people beyond the author of this PhD thesis have been
essential to achieving the results presented here. My first “Thank
You” is directed to Prof. Heinz Wörn and Prof. Beat Müller, not
only for their guidance and advice, but also because without them
there would not even have been a research project to start my work
on. In this context, I also want to acknowledge everyone involved in
writing the Sonderforschungsbereich/Transregio 125 “Cognition-Guided
Surgery”1 project proposal, in particular Oliver Weede, who set the
goal within the project of researching autonomous endoscope guidance.
Thanks to everybody involved in the SFB/Transregio 125 project.
A special acknowledgement is due to Hannes Kenngott, Martin
Wagner and Patrick Mietkowski: without your help nothing would
have been possible. No interdisciplinary papers on surgical robotics
would have been written; no prizes would have been won.
The other important context was the Institute for Anthropomatics
and Robotics – Intelligent Process Control and Robotics (IAR-IPR),
which has a great culture of informal interactions, sincere criticism
and mutual support; thanks to everybody there, not least the
secretaries, without whom there would be no time left to do research.
Furthermore, a big thanks to all undergraduates for their contribu-
tions.
Finally, I am grateful to my wife, my family and all dear friends for
their support and enduring a PhD student’s work-life balance that
often has too much weight in the work pan.
Karlsruhe Andreas Bihlmaier
1 Funded by the Deutsche Forschungsgemeinschaft (DFG) and undertaken as
a cooperation by the University Hospital Heidelberg, the Karlsruhe Institute of
Technology (KIT) and the German Cancer Research Center (DKFZ).
Contents

Glossary XIII

1 Introduction 1
1.1 Minimally-Invasive Surgery . . . . . . . . . . . . . . . 2
1.1.1 A new kind of surgery . . . . . . . . . . . . . . 2
1.1.2 Ergonomic Challenges . . . . . . . . . . . . . . 4
1.2 Medical Robotics . . . . . . . . . . . . . . . . . . . . . 10
1.3 Knowledge-based Cognitive Systems . . . . . . . . . . 17
1.4 Overview of Thesis . . . . . . . . . . . . . . . . . . . . 19

2 Endoscope Robots and Automated Camera Guidance 23


2.1 A Survey of Motorized Endoscope Holders . . . . . . . 23
2.1.1 Endex . . . . . . . . . . . . . . . . . . . . . . . 25
2.1.2 AESOP . . . . . . . . . . . . . . . . . . . . . . 26
2.1.3 Begin and Hurteau et al. . . . . . . . . . . . . 30
2.1.4 LARS / PLRCM . . . . . . . . . . . . . . . . . 31
2.1.5 HISAR . . . . . . . . . . . . . . . . . . . . . . 33
2.1.6 Laparobot, EndoSista, EndoAssist . . . . . . . 35
2.1.7 FIPS Endoarm . . . . . . . . . . . . . . . . . . 38
2.1.8 Munoz, ERM . . . . . . . . . . . . . . . . . . . 40
2.1.9 LER, ViKY . . . . . . . . . . . . . . . . . . . . 42
2.1.10 Naviot . . . . . . . . . . . . . . . . . . . . . . . 46
2.1.11 SOLOASSIST . . . . . . . . . . . . . . . . . . . 48
2.1.12 LapMan . . . . . . . . . . . . . . . . . . . . . . 52
2.1.13 SWARM . . . . . . . . . . . . . . . . . . . . . . 54
2.1.14 COVER . . . . . . . . . . . . . . . . . . . . . . 55
2.1.15 SMART / P-arm . . . . . . . . . . . . . . . . . 56
2.1.16 Tonatiuh II . . . . . . . . . . . . . . . . . . . . 58

2.1.17 PMASS . . . . . . . . . . . . . . . . . . . . . . 59
2.1.18 FreeHand . . . . . . . . . . . . . . . . . . . . . 60
2.1.19 EVOLAP . . . . . . . . . . . . . . . . . . . . . 62
2.1.20 RoboLens . . . . . . . . . . . . . . . . . . . . . 64
2.1.21 Tadano et al. . . . . . . . . . . . . . . . . . . . 65
2.1.22 Further systems . . . . . . . . . . . . . . . . . 67
2.1.23 Telemanipulation Systems . . . . . . . . . . . . 68
2.1.24 Summary . . . . . . . . . . . . . . . . . . . . . 71
2.2 Approaches for Automated Camera Guidance . . . . . 75
2.2.1 Related Problems . . . . . . . . . . . . . . . . . 76
2.2.2 Classification of Approaches . . . . . . . . . . . 80
2.2.3 Survey of Approaches . . . . . . . . . . . . . . 82

3 System Architecture and Conceptual Overview 103


3.1 Conceptual System Architecture . . . . . . . . . . . . 103
3.1.1 Perception . . . . . . . . . . . . . . . . . . . . . 104
3.1.2 Interpretation . . . . . . . . . . . . . . . . . . . 106
3.1.3 Knowledge Base . . . . . . . . . . . . . . . . . 108
3.1.4 Action . . . . . . . . . . . . . . . . . . . . . . . 112
3.2 Camera Guidance as a Knowledge-based Cognitive
System . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
3.2.1 Perception: Surgeon and Instruments . . . . . 115
3.2.2 Interpretation: Optimal Endoscopic View . . . 119
3.2.3 Knowledge Base: Camera Quality Classifier . . 120
3.2.4 Action: Smooth and Pivot-constrained Robot
Motion . . . . . . . . . . . . . . . . . . . . . . 124

4 Modular Research Platform for Robot-Assisted Minimally-Invasive Surgery 127
4.1 The Robot Operating System (ROS) . . . . . . . . . . 127
4.2 A Modular Platform . . . . . . . . . . . . . . . . . . . 130
4.2.1 On Modularity . . . . . . . . . . . . . . . . . . 132
4.2.2 Sensors . . . . . . . . . . . . . . . . . . . . . . 134
4.2.3 Distributed Processing . . . . . . . . . . . . . . 135
4.2.4 Actuators . . . . . . . . . . . . . . . . . . . . . 136

4.2.5 Model Management . . . . . . . . . . . . . . . 139


4.3 Simulation Environment . . . . . . . . . . . . . . . . . 140
4.3.1 Robotics Simulators . . . . . . . . . . . . . . . 142
4.3.2 Laparoscopic Surgical Simulators . . . . . . . . 146
4.3.3 Robot Unit Testing . . . . . . . . . . . . . . . . 148
4.4 Distributed Monitoring, Reliability and System
Diagnosis . . . . . . . . . . . . . . . . . . . . . . . . . 152

5 Learning of Surgical Know-How by Models of Spatial Relations 157
5.1 Perception . . . . . . . . . . . . . . . . . . . . . . . . . 160
5.2 Interpretation . . . . . . . . . . . . . . . . . . . . . . . 160
5.2.1 Tracking of Instrument Tips . . . . . . . . . . . 161
5.2.2 Online Phase Recognition . . . . . . . . . . . . 171
5.3 Learning a Camera Guidance Quality Classifier . . . . 172
5.3.1 Learning Spatial Relations: A Static 1D Example . 172
5.3.2 Reduction of Parameter Space . . . . . . . . . 176
5.3.3 Deriving Synthetic Learning Examples . . . . . 178
5.3.4 Meta-Parameter Optimization . . . . . . . . . 182
5.3.5 Classifier Evaluation: Point in Time and Period
of Time . . . . . . . . . . . . . . . . . . . . . . 183

6 Intraoperative Robot-Based Camera Assistance 185


6.1 Preliminaries . . . . . . . . . . . . . . . . . . . . . . . 187
6.1.1 Calibration and Registration . . . . . . . . . . 187
6.1.2 Trocar and Robot Placement . . . . . . . . . . 190
6.1.3 Collision Avoidance . . . . . . . . . . . . . . . 192
6.1.4 Basic Robot Performance . . . . . . . . . . . . 193
6.1.5 Mapping to Spherical Coordinates . . . . . . . 197
6.1.6 Modeling and Execution of Surgical Workflow . 197
6.2 Intraoperative Action . . . . . . . . . . . . . . . . . . 198
6.2.1 Inverting the Forward Model by Adaptive
Sampling . . . . . . . . . . . . . . . . . . . . . 199
6.2.2 From Current Pose to Next Good Pose . . . . . 202
6.2.3 Multimodal Human-Robot-Interaction . . . . . 203

6.3 Optional Action Components . . . . . . . . . . . . . . 204


6.3.1 Extended Field of View through Live Stitching 205
6.3.2 Optimization of Redundant Degree of Freedom 207

7 Evaluation Studies 209


7.1 Metrics for Objective Assessment of Surgical Tasks . . 210
7.2 Experimental Setup . . . . . . . . . . . . . . . . . . . 211
7.2.1 OpenHELP . . . . . . . . . . . . . . . . . . . . 211
7.2.2 Laparoscopic Rectal Resection with Total
Mesorectal Resection . . . . . . . . . . . . . . . 212
7.3 Experimental Results . . . . . . . . . . . . . . . . . . . 214

8 Conclusion 219
8.1 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . 219
8.2 Future Work . . . . . . . . . . . . . . . . . . . . . . . 221
Glossary
API Application Programming Interface.

CQC Camera Quality Classifier.

CT Computed Tomography.

Degree of freedom is used to describe the number of independent
motions of a system. An unconnected rigid body has six degrees of
freedom (DoF): three translations and three rotations. In a serial
kinematic chain, the number of joints corresponds to the number of
DoF. These can be fewer than the six DoF required to freely position a
body in space (kinematic deficiency) or more (kinematic redundancy).
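
As a toy illustration of this definition (our sketch, not from the
thesis; it assumes one DoF per joint), a serial chain can be classified
by comparing its joint count against the six DoF of a free rigid body:

def classify_serial_chain(num_joints: int) -> str:
    """Classify a serial kinematic chain, assuming one DoF per joint."""
    # A free rigid body has 6 DoF: 3 translations + 3 rotations.
    if num_joints < 6:
        return "kinematically deficient"
    if num_joints == 6:
        return "fully determined"
    return "kinematically redundant"

# A 7-joint arm has one more joint than needed to pose a body in space:
print(classify_serial_chain(7))  # -> kinematically redundant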

DoF Degree of freedom.

FLS Fundamentals of Laparoscopic Surgery.

FRI Fast Research Interface.

Fundamentals of Laparoscopic Surgery is an educational program
comprising the fundamental knowledge and technical skills for
laparoscopic surgery. The manual tasks are often utilized for various
benchmarks.

GUI Graphical User Interface.

HMI Human-Machine Interface.

HMM Hidden Markov Model.



HRI Human-Robot Interaction.

IDL Interface Description Language.

LWR Light Weight Robot.

MIS Minimally-Invasive Surgery.

ML Machine Learning.

MRI Magnetic Resonance Imaging.

OR Operating Room.

PTP Point-To-Point.

RCM Remote Center-of-Motion.

Real-Time is used in different contexts in computer science. Often
it merely refers to the capability to process (streaming) data “fast
enough”, e.g. at sensor rate or for interactive use; this is the case
for “real-time rendering”. However, in the context of robotics, real-
time usually refers to real-time computing/systems. In this context,
a real-time system must not only guarantee the logical correctness
of its results, but also that responses happen at a specified time
(temporal correctness).
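
As a minimal sketch of temporal correctness (our illustration, not from
the thesis; Python serves only for readability, since a hard real-time
controller would run on a real-time OS), consider a periodic control
loop that explicitly checks whether each cycle met its deadline:

import time

PERIOD = 0.02  # illustrative 50 Hz control period in seconds

def control_step():
    pass  # compute and send the next robot command here

next_deadline = time.monotonic() + PERIOD
for _ in range(1000):  # bounded number of cycles for the example
    control_step()
    now = time.monotonic()
    if now > next_deadline:  # temporal correctness violated
        print(f"deadline missed by {(now - next_deadline) * 1e3:.1f} ms")
    else:
        time.sleep(next_deadline - now)
    next_deadline += PERIOD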

RGB Red Green Blue.

RGB-D Red Green Blue & Depth.

ROS Robot Operating System.

RPC Remote Procedure Call.

RT Real-Time.

SDF Simulation Description Format.


Glossary XV

TLX Task Load Index.

URDF Unified Robot Description Format.

XML Extensible Markup Language.


1 Introduction
The aim of this thesis is to describe novel methods that facilitate
autonomous robotic endoscope assistance derived from actual
surgical know-how. In minimally-invasive surgery (MIS), the surgeon
has to rely on an assistant to guide the endoscope camera for him in
order to get a view of the surgical site. Given the many issues that arise
from this situation, motorized endoscope holders appeared early in the
short history of MIS. Nevertheless, even today’s assistance systems are
still based on the paradigm of manual control by the surgeon. Even
research systems looking to increase the autonomy of the endoscope
robots have largely focused on simple control rules such as directly
following the instruments. The system detailed in this thesis combines
modelling of spatial relations with machine learning techniques in
order to directly acquire the complex and situation-dependent relation
between endoscope position and surgical action. A generic model of
endoscope motions is predefined. However, the specific endoscope
positions best suited to the surgical task are learned from concrete
actions of a camera assistant utilizing his surgical know-how.
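
To make the approach concrete, the following minimal sketch (all names,
feature choices and the random-forest classifier are illustrative
assumptions, not the thesis implementation) trains a view-quality
classifier on spatial features recorded during human camera guidance
and uses it to score candidate endoscope positions:

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder training data: each row holds spatial relations between
# instrument tips and endoscope (6 illustrative features); the label
# encodes whether the human assistant's view was good (1) or poor (0).
rng = np.random.default_rng(0)
X_train = rng.random((200, 6))
y_train = rng.integers(0, 2, 200)

quality_classifier = RandomForestClassifier(n_estimators=100, random_state=0)
quality_classifier.fit(X_train, y_train)

def best_endoscope_position(candidates, instrument_state):
    """Rate candidate endoscope positions (N x 3) for the current
    instrument state (3 features) and return the best-rated one."""
    feats = np.hstack([candidates,
                       np.tile(instrument_state, (len(candidates), 1))])
    scores = quality_classifier.predict_proba(feats)[:, 1]  # P(good view)
    return candidates[np.argmax(scores)]

next_position = best_endoscope_position(rng.random((50, 3)), rng.random(3))

In this spirit, the generic motion model is fixed in advance, while the
learned classifier captures the situation-dependent know-how.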
Due to the interdisciplinary nature of this task, situated between
computer science and surgery, contributions from both fields are
essential. Although it is impossible to clearly separate one field
from the other, the work at hand provides the technical
perspective of the system from a robotics and computer science point
of view.
The major contributions to the field of medical robotics, which
represent the core of this thesis, are:
• The first knowledge-based endoscopic camera guidance system with
a performance on the level of a human assistant (interdisciplinary
contribution).


• A unified model and algorithm pipeline for representing, learning
and moving a robot according to dynamic spatial relationships
(disciplinary contribution).

Since it is important to understand the overall scenario in which
endoscopic camera guidance plays an important role, some brief
background information on minimally-invasive surgery will be provided
in the following section. This will be followed by a short primer on
surgical robotics and knowledge-based cognitive systems. At the end
of this chapter an overview of the overall thesis is provided.

1.1 Minimally-Invasive Surgery


1.1.1 A new kind of surgery
Minimally-invasive Surgery (MIS) in the abdomen1 refers to surgical
procedures which are performed through very small incisions with long
tubular instruments (Fig. 1.1). A trocar is inserted into each incision
and fixed in place with a few stitches. The trocar allows an instrument
to be inserted through it while at the same time creating a tight seal
with the abdominal wall. The patient’s abdomen is then inflated with
carbon dioxide (CO2) gas to create an artificial pneumoperitoneum
that separates the abdominal wall from the organs inside (Fig. 1.2). In
order to get a view from the inside of the patient’s abdominal cavity,
an endoscope is inserted through one of the trocars. The endoscope
(Fig. 1.3a) consists of rod lenses as optics that transport an image to a
camera located outside of the patient. Fiber optics on the perimeter of
the endoscope transmit light, which illuminates the abdominal cavity.
The most common MIS instruments (Fig. 1.3b) are grasper, scissors,
electrocautery and stapler.
MIS in its modern form has only been around since 1985 when
the first laparoscopic cholecystectomy2 was performed [1]. Yet, the
laparoscopic procedure quickly gained acceptance in the surgical
community [2].

1 Visceral surgery.
2 Removal of the gallbladder by surgery.

Figure 1.1: Outline of operating room, medical staff, patient and
monitor in minimally-invasive surgery.

This was mainly driven by early results [3] showing
fewer complications and a reduced mortality rate, albeit with an
increase in the rate of common bile duct injury. Soft factors also
play an important role, such as a shortened hospital stay and better
cosmetic results due to less scarring. From a technical point of view, the
availability of sufficiently small and cheap video cameras with good
enough light sensitivity and resolution paved the way for MIS. Only
through advanced technologies was it possible to bring the laparoscopic
view onto a screen, which allowed tolerable working conditions. More
information on the early history of laparoscopic surgery can be found
in the overview by Lau et al. [4].

Figure 1.2: Schematic drawing of the inflated abdominal cavity with
trocars, instruments and endoscope.

1.1.2 Ergonomic Challenges


However, the new method also had some significant drawbacks,
especially for the surgeon. Compared to open surgery, hand-eye
coordination becomes much more difficult [5][6]. MIS as a surgical
technique therefore requires additional motor skills training [7].
Instead of directly looking at his own hands, the surgeon has to look
away from the operating site towards the endoscope monitor (Fig. 1.1).
This additional level of indirection compared to open surgery can
be expressed through interaction models; Figure 1.4 illustrates the
differences in a block diagram following Stassen et al. [8]. Furthermore,
the so-called fulcrum effect, i.e. the mirroring of motion together with
a motion scaling that depends on instrument insertion depth, is known
to increase the difficulty of MIS [9]; a rough sketch of the underlying
geometry follows below.
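
As an illustration of this lever geometry (our notation, not taken from
the thesis): for small motions of a rigid instrument pivoting in the
trocar, let l_in denote the instrument length inside the abdominal wall
and l_out the length outside. A hand displacement then maps to a tip
displacement as

\[ \Delta x_{\mathrm{tip}} \approx -\frac{l_{\mathrm{in}}}{l_{\mathrm{out}}}\, \Delta x_{\mathrm{hand}}, \]

where the minus sign expresses the mirroring and the ratio
l_in/l_out the insertion-depth-dependent motion scaling.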
Figure 1.3: Minimally-invasive instruments for laparoscopy. (a) An
endoscope with camera and light source attached. (b) A grasper,
scissors and stapler for MIS with rigid bodies and markers for optical
tracking.

Zheng et al. [10] evaluated the mental workload of 12 novice and
9 expert surgeons during complex tasks such as laparoscopic suturing.
The Fundamentals of Laparoscopic Surgery (FLS) scoring system [11]
was used to assess the sutures. In parallel to performing the suturing
task, the participants had to attend to a visual detection task; error
rates of this task were used as further scores. As expected, metrics
for the suturing task strongly correlated with the laparoscopic
experience of the surgeon. The expert surgeons
were also able to better attend to the visual detection in parallel. As
pointed out by the authors, mental resources are finite; through
practice more resources are freed, e.g. through motor skill
automaticity, and can be allocated to secondary tasks: “This explains
why experienced surgeons are able to notice abnormal events in the
operating room (OR), respond to events faster, and initiate preemptive
maneuvers better than novice surgeons. The spare mental resource
provides the foundation for situation awareness and perhaps would
lead to better outcomes in the complex environment of a surgery.”


For the topic of this thesis, two further conclusions can be drawn:

• Reducing the mental workload of the surgeon should result in better
outcomes in case complications occur.

• Manual control of a motorized endoscope holder by the surgeon is
more viable for expert surgeons than for novices. Yet, manual control
still reduces the surgeon’s mental resources, whose availability
might benefit the patient.

Based on the daily experience of surgeons who perform many MIS
interventions, ergonomics in laparoscopic surgery has received
attention in clinical research. Experimental results on an optimal operating
room (OR) setup for MIS and guidelines to improve ergonomics have
been published [12]. Yet, due to the confined space around the OR
table and the many devices in a modern OR, the monitor often cannot
be optimally positioned for both surgeon and camera assistant [13].
Since the operating field in MIS is only visible on-screen, surgeon,
assistant and sometimes further OR staff must view the monitor.
Thus, in combination with the required arm positions, the OR staff
has to adopt unfavorable postures (Fig. 1.5).
These postures have to be maintained for an extended period of
time over the course of the intervention [14]. This is problematic
because the constant head rotation, for example, can induce fatigue
and cause neck pain or headaches. Surveys report [15] that as many
as 84% of surgeons consider the working posture uncomfortable or
even painful. This ergonomic risk [16] does not only apply to the
surgeon, but also to the camera assistant. It is quite common that
either the camera assistant or the surgeon could stand in an adequate
posture individually, but not both at the same time because of spatial
constraints at the OR table. Moran [17] even states that “the most
frustrating aspect of these types of [laparoscopic] surgeries comes from
difficulties in interacting with the camera operator.”
Beyond the negative effects on the OR staff, a good view of the
operating field has an impact on the surgical outcome. Studies by
