
PUBLICATIONS OF THE UNIVERSITY OF EASTERN FINLAND
Dissertations in Forestry and Natural Sciences, No 493

ISBN 978-952-61-4742-0
ISSN 1798-5668

JANI KOSKINEN

Computational Evaluation of Microsurgical Skills

Microsurgical skills are traditionally gained from mentoring by expert surgeons. However, the expert surgeon's feedback can be subjective, and reliance on it limits training opportunities. I investigate computational methods for evaluating microsurgical skills. The methods use video- and eye-tracking data that can be unobtrusively recorded during surgical tasks. Analysis of the surgeon's gaze and tool movements can allow evaluating performance more objectively and automatically in microsurgery.


Computational Evaluation of Microsurgical Skills


Jani Koskinen

Computational Evaluation of Microsurgical Skills

Publications of the University of Eastern Finland
Dissertations in Forestry and Natural Sciences
No 493

University of Eastern Finland
Joensuu
2022

Academic dissertation

To be presented by permission of the Faculty of Science and Forestry for public examination in Auditorium AU100 in the Aurora Building at the University of Eastern Finland, Joensuu, on December 22, 2022, at 15 o'clock.


PunaMusta, Joensuu, 2022

Editors: Pertti Pasanen, Nina Hakulinen, Raine Kortet, Matti Tedre and Jukka Tuomela

Distribution: Itä-Suomen yliopiston kirjasto
julkaisumyynti@uef.fi

ISBN: 978-952-61-4742-0 (print)
ISBN: 978-952-61-4743-7 (PDF)
ISSN-L: 1798-5668
ISSN: 1798-5668 (print)
ISSN: 1798-5676 (PDF)

Author's address: Jani Koskinen
University of Eastern Finland, School of Computing
P.O. Box 111
80101 JOENSUU, FINLAND
email: jani.koskinen@uef.fi

Supervisors: Associate Professor Roman Bednarik
University of Eastern Finland, School of Computing
P.O. Box 111
80101 JOENSUU, FINLAND
email: roman.bednarik@uef.fi

Antti Huotarinen, MD, PhD
Kuopio University Hospital, Department of Neurosurgery
P.O. Box 100
70211 KUOPIO, FINLAND
email: antti.huotarinen@kuh.fi

Reviewers: Associate Professor George Mylonas
Imperial College London, Department of Surgery & Cancer
20 South Wharf Road
W2 1PF LONDON, UNITED KINGDOM
email: george.mylonas@imperial.ac.uk

Professor Paolo Fiorini
University of Verona, Department of Computer Science
Strada le Grazie 15
37134 VERONA, ITALY
email: paolo.fiorini@univr.it

Opponent: Professor Marco Zenati
Harvard University, Harvard Medical School, Department of Surgery
1400 VFW Pkwy, West Roxbury, MA 02132
BOSTON, USA
email: Marco_Zenati@hms.harvard.edu

Koskinen, Jani
Computational Evaluation of Microsurgical Skills
Joensuu: University of Eastern Finland, 2022
Publications of the University of Eastern Finland, Dissertations in Forestry and Natural Sciences; No 493
ISBN: 978-952-61-4742-0 (print)
ISBN: 978-952-61-4743-7 (PDF)
ISSN-L: 1798-5668
ISSN: 1798-5668 (print)
ISSN: 1798-5676 (PDF)

ABSTRACT

Surgical skills are traditionally gained during mentoring sessions with expert surgeons. While this approach has been widespread and successful in many aspects, the expert surgeon's central role limits the availability of independent training opportunities for new surgeons. Prior research has also revealed that surgical skill evaluations can be highly subjective. To solve these problems, researchers have turned to investigating computational methods for evaluating surgical skills objectively and automatically. These methods rely on data that can be measured during surgical procedures by using sensors like video cameras and eye trackers.

Thus far, the research has mostly focused on laparoscopy, general surgery, or robotic surgery. In this thesis, we apply computational methods to the evaluation of microsurgical skills. Microsurgical procedures confront the surgeon with several technical challenges that are not present in most surgical techniques. These same challenges also make it harder to evaluate surgical skills, both subjectively and computationally. Consequently, research into the computational evaluation of microsurgical skills has been scarcer than for other surgical techniques.

We focus on metrics that can be extracted from the surgeons' eye and tool-usage behavior. By applying video analysis and eye tracking, we illuminate various features that can distinguish microsurgical skills. The methods presented do not require special hardware that may interfere with the surgeon's performance or ergonomics. Using these methods, we investigate how eye-tracking features can classify microsurgical skills; how video analysis of a surgeon's basic movements can reveal information about their skills and performance; and how computer vision-based surgical instrument detection and microscope eye tracking can be used together to monitor the surgeon's eye-hand coordination.

Our results suggest that pupil-based features extracted through eye trackers can classify expert surgeons and novices who are in the early stages of learning a novel microsurgical task. Analysis of basic tool movements revealed significant limitations in the novices' ability to use their tools efficiently, both bi- and unimanually. The tool movement analysis was also capable of distinguishing skill and performance differences at a subtask level, which is an important prerequisite for introducing real-time evaluation of microsurgical skills. The joint analysis of the surgeon's gaze and tool movements, using eye-tracking and computer vision methods, introduces a new method for automated eye-hand coordination monitoring in microsurgery.

Universal Decimal Classification: 004.85, 331.102.322, 612.846, 617

Library of Congress Subject Headings: Microsurgery; Motor ability; Kinematics; Eye-hand coordination; Eye tracking; Gaze; Computer vision; Video recording; Deep learning (Machine learning); Classification; Expertise; Performance

Yleinen suomalainen ontologia: mikrokirurgia; kädentaidot; ammattitaito; osaaminen; silmänliikkeet; katseseuranta; koordinaatio (motoriikka); motoriset taidot; konenäkö; videokuvaus; syväoppiminen; luokitus (toiminta)


Acknowledgements

First, I would like to thank the School of Computing at the University of Eastern Finland and the Microsurgery Center at the Kuopio University Hospital for providing a marvelous, supporting environment for conducting research. This work was made possible by the doctoral programme in Science, Technology and Computing (SCITECO). During the work, I received a travel grant from the Saastamoinen Foundation that allowed me to conduct a research visit at the University of Alberta in Edmonton, Canada.

I am deeply grateful to my supervisors, associate professor Roman Bednarik and Antti Huotarinen, MD, PhD, for their guidance over the last four years. This work would not have been possible without their support.

When I started this work, I did not know much about anything. While at first it felt like stepping into a pandemonium, the mentorship I received from Roman and Antti quickly made me feel confident that I would be able to get the job done. In addition to giving invaluable advice on research, they have supported me in countless other ways that made the work enjoyable.

I am grateful to the examiners of the thesis, Associate Professor George Mylonas from Imperial College London and Professor Paolo Fiorini from the University of Verona, for taking the time to review my thesis carefully. Their comments and suggestions improved this thesis significantly. I was honored when Professor Marco Zenati from Harvard Medical School agreed to be the opponent for my dissertation defense.

My colleagues in Joensuu and in Kuopio played an important role in making this work possible. I am deeply thankful to Hana Vrzáková, Antti-Pekka Elomaa, Sami Andberg, Mastaneh Torkamani-Azar, Ahreum Lee, Ahmed Hussein, Matti Itkonen, and Niko Lappalainen for all the funny, interesting and constructive discussions and collaborations. I am also thankful to Professor Bin Zheng, Dr. Wenjing He and all the researchers at the Surgical Simulation Research Lab whom I had the pleasure of working with during my two-month visit at the University of Alberta.


Finally, my greatest gratitude goes to my family and friends who have supported me enormously during this work. My parents have always held it self-evident that with enough effort I can achieve whatever I want. My wife has always been there when I needed support the most. The best feeling was always coming home from work.

Joensuu, December 2022 Jani Koskinen


LIST OF ORIGINAL PUBLICATIONS

This thesis is based on data presented in the following articles, referred to by the Roman numerals I–IV.

I Koskinen J, Bednarik R, Vrzáková H, Elomaa A.-P., Combined Gaze Metrics as Stress-Sensitive Indicators of Microsurgical Proficiency, Surgical Innovation, 27(6), 614-622, (2020).

II Koskinen J, Huotarinen A, Elomaa A.-P., Zheng B, Bednarik R, Movement-level process modeling of microsurgical bimanual and unimanual tasks, International Journal of Computer Assisted Radiology and Surgery, 17(2), 305-314 (2022).

III Koskinen J, He W, Elomaa A.-P., Kaipainen A, Hussein A, Zheng B, Huotarinen A, Bednarik R, Utilizing Grasp Monitoring to Predict Microsurgical Expertise, Journal of Surgical Research, 282, 101-108 (2023).

IV Koskinen J, Torkamani-Azar M, Hussein A, Huotarinen A, Bednarik R, Automated tool detection with deep learning for monitoring kinematics and eye-hand coordination in microsurgery, Computers in Biology and Medicine, 141 (Feb. 2022), 105121 (2022).

The above publications have been included at the end of this thesis with their copyright holders’ permission.


AUTHOR’S CONTRIBUTION

I) This publication is an extension of work that was initiated earlier in two conference publications. The machine learning approach was developed by the author in discussions with Dr. Roman Bednarik and Dr. Hana Vrzáková (University of Eastern Finland) and applied to experimental data recorded by Dr. Antti-Pekka Elomaa (Kuopio University Hospital). The author trained and evaluated the machine learning models for surgical skill classification, wrote the first draft of the manuscript, and edited it with guidance in all parts from Dr. Roman Bednarik, Dr. Hana Vrzáková, and Dr. Antti-Pekka Elomaa.

II) The surgical process model used in the publication was developed by the author based on the same dataset used in publication I, after initial discussions with Dr. Bin Zheng (University of Alberta). The data annotation was done by the author, who also pre-processed the data and did the statistical analysis. The first draft and manuscript edits were completed by the author in cooperation with Dr. Antti Huotarinen (Kuopio University Hospital), Dr. Roman Bednarik, and Dr. Bin Zheng.

III) The initial research idea was conceived in discussions with Dr. Roman Bednarik and Dr. Wenjing He (then at the University of Alberta) and was based on the dataset used in publication I. Further refinements of the research idea were made by the author after discussions with Dr. Bin Zheng. The author performed the manual annotation of the grasps, as well as the statistical analysis and the initial drafting and editing of the manuscript. In the evaluation of the results, the author was guided by Ahmed Hussein, MD (Kuopio University Hospital) and Dr. Antti Huotarinen, who also provided the main expert evaluation of participants' performance.

IV) The research idea was developed in joint discussions with the authors of the study. The author performed the training and evaluation of the surgical instrument detection model. The author also participated in manual annotations and wrote scripts for processing and analyzing the data from the case study that was conceived by Ahmed Hussein, MD (Kuopio University Hospital). The editing process was done jointly by the author together with Dr. Mastaneh Torkamani-Azar (University of Eastern Finland), Dr. Roman Bednarik, and with subject matter guidance from Dr. Antti Huotarinen and Ahmed Hussein, MD.


TABLE OF CONTENTS

ABSTRACT ... 7

Acknowledgements ... 9

1 Introduction ... 25

1.1 What do microsurgical procedures look like? ... 25

1.2 Microsurgical training and evaluation currently ... 28

1.3 Scope of the work ... 29

1.4 Relevance of the work ... 30

1.5 Research questions and methods ... 32

2 Background ... 35

2.1 Requirements for computational evaluation ... 35

2.2 Surgical skill and its evaluation: a short history ... 36

2.3 Motor skill, learning and eye movements in surgery ... 40

2.4 How surgical data is collected ... 44

2.5 Metrics Used to Evaluate Surgical Skill ... 49

2.5.1 Tool use-based metrics ... 49

2.5.2 Eye-tracking metrics ... 50

2.5.3 Other metrics ... 51

2.6 Surgical process models ... 53

2.6.1 Different SPM component levels and definitions ... 56

2.7 Computational Methods in Microsurgery ... 57

3 Computational Evaluation of Microsurgical Skills: A Summary of Contributions ... 61

3.1 Overview of Experiments and Data Collection ... 61

3.1.1 Dataset 1: Microsurgical Suturing Simulation ... 61

3.1.2 Dataset 2: Training and evaluating tool detection ... 64

3.2 Overview of analysis methods ... 67

3.2.1 Eye tracking data ... 67

3.2.2 Microsurgical skill classification from eye metrics ... 67

3.2.3 Manual annotations ... 68

3.2.4 Performance evaluation by expert assessment ... 69


3.2.5 Microsurgical tool detection ... 70

3.3 PI: Combined gaze metrics for microsurgical skill evaluation ... 73

3.3.1 Methods ... 73

3.3.2 Results ... 75

3.3.3 Conclusions ... 75

3.4 PII: Movement-level Modeling of Microsurgical Suturing ... 76

3.4.1 Methods ... 77

3.4.2 Results ... 80

3.4.3 Conclusions ... 84

3.5 PIII: Grasping as a Measure of Microsurgical skill ... 85

3.5.1 Methods ... 85

3.5.2 Results ... 87

3.5.3 Conclusions ... 89

3.6 PIV: Microsurgical tool detection and eye-hand coordination analysis .. 90

3.6.1 Methods ... 91

3.6.2 Results ... 94

3.6.3 Conclusions ... 97

4 Discussion ... 99

4.1 Answering the research questions ... 99

4.2 Feedback, objectivity, automation and all that ... 106

4.2.1 Value of the feedback ... 108

4.3 Limitations and extensions ... 110

5 Conclusions ... 113

References ... 115

ARTICLES ... 137

LIST OF FIGURES

Figure 1. Some key components in microsurgery. (A) Surgical microscope, showing its oculars. (B) Examples of microsurgical instruments. (C) Two surgeons operating with a surgical microscope that has oculars for two surgeons. (D) Surgeon's view through the microscope. Figures C and D adapted from Wikimedia Commons. ... 27

Figure 2. Scope of the work, with the sub-categories investigated in the publications highlighted in blue. Our aim is to evaluate technical skills from data recorded with participants completing simulated microsurgical training tasks. Within technical skills, we focus on surgeons' tool use, such as their ability to maneuver with both hands simultaneously, and on evaluating eye-hand coordination and cognitive workload, both of which can be analyzed with eye tracking. Drawing of the surgeon adapted from Wikimedia Commons. ... 30

Figure 3. Computational microsurgical skill evaluation framework. In the first stage, we form a definition or requirements of skill that guide the approach to computational skill evaluation. In the second stage, we record data, from which we then extract metrics and build surgical process models that can be analyzed to distinguish microsurgical skill. Finally, the results of the analysis are used to evaluate the surgeon's skill. The focus of this thesis is on the second stage, highlighted in blue, where we investigate computational methods for skill evaluation. Drawing of the surgeon adapted from Wikimedia Commons; see footnote 3 to Figure 2. ... 35

Figure 4. Number of search results from SCOPUS on objective surgical skill evaluation with the keyword ( ( objective AND surg* AND ( evaluat* OR assess* ) AND ( performance OR skill* ) ) ) in the title, abstract and keywords. The number of results per year has increased more steeply since the early 2000s. ... 37

Figure 5. Examples of simple simulated training tasks. Low-fidelity models like these can be used to practice basic microsurgical techniques. (A) Surgeon practicing dissection and suturing on a grape. The surgeon is holding the needle with the left-hand tool. (B) Surgeon practicing needle piercing on a synthetic surface. Second row: (C) Needle passing task, where the surgeon has to use surgical instruments to pass a microsurgical needle through the holes in the needles inserted into the surface. (D) Vessel suturing task, where the surgeon has to attach the two ends of the tube by suturing. ... 39

Figure 6. Surgeon completing a microsurgical training task using the surgical microscope. Surgeons' gaze and tool behavior can be recorded and analyzed to get insights into their cognitive workload, skill, performance and error recovery. Additional information may be gained by recording EMG, EEG, heart rate variability, and other signals. Figure partially adapted from Wikimedia Commons. ... 44

Figure 7. Surgical tool use analysis based on computer vision and video recorded through the surgical microscope. A deep learning model is used to detect the tips of the surgical instruments in the video. This information can be used to monitor the presence of the tools in the task and to evaluate metrics such as movement smoothness. ... 45

Figure 8. Sequence of frames taken from a video-based microscope eye tracker. Eye trackers can be used to investigate the surgeon's gaze behavior and eye-hand coordination. Monitoring changes in pupil sizes and blink rate can reveal increases in cognitive workload and differences in skill. ... 46

Figure 9. How surgical procedures can be decomposed and analyzed. The top row shows snapshots from a surgical training task. Tasks are divided into finer levels like segments, which in turn can be divided into actions and events, and this decomposition can be supplemented with sensor data such as recordings of surgical tool movements. The elements of the surgical process model at different levels need not be sequential. ... 53

Figure 10. Example of a surgical process model decomposition of a microsurgical knot tying training task. In this model, the surgeon's left- and right-hand tool movements are split into pairs of actions and targets at the lowest level. Each action-target pair could occur in any order. Above this level, the entire task was divided into longer segments that had a fixed order. Adapted with minor changes from publication PII. ... 55

Figure 11. Microsurgical tools and gaze tracking from a video recording of a microsurgical training task. The video was recorded through the surgical microscope and shows the surgeon's point of view. Monitoring of gaze and tools can be used to evaluate metrics such as tool movement smoothness, gaze fixations and saccades, and gaze-tool distances. Adapted from the supplementary material of PIV. ... 57

Figure 12. Eye tracker microscope. The eye tracker is attached in front of the surgical microscope's ocular (left). While the user performs a surgical task (center), their eye movements are recorded for analysis (right). ... 59

Figure 13. One slot from the training board used in dataset 1; the figure shows a participant suturing. The full training board included six of these slots. The number above the slot indicates its order, and the arrow on the upper left side of the slot indicates the direction of the incision in the latex skin (45 degrees in this example). ... 62

Figure 14. Left: participant completing a suture, as recorded through the microscope. Right: frame from the eye tracker attached to the microscope. The blue circle around the pupil comes from pupil detection with YOLOv5. ... 63

Figure 15. Example frames from the videos used to train and evaluate surgical tool detection in publication PIV. Clockwise from upper left: simulated microsurgical cutting, simulated dissection, porcine drilling (surface), porcine drilling (inside), simulated interrupted suturing, simulated continuous suturing. Figure adapted from PIV. ... 65

Figure 16. Surgical tool detection results using YOLOv5. The model was trained to detect the tips of the surgical instruments, here microforceps (left) and needleholder (right). Smaller instruments like the needle (center) were detected with a bounding box that covered the whole object. ... 71

Figure 17. Example of the surgical process model description of the first two segments in terms of their <action;target> pairs. The expert moves the microforceps to the incision for support while using the needle holder to pierce the latex surface with the needle at the incision. In contrast, the novice makes many movements without a clear target, leading to inefficient use of the tools. Figure adapted from PII. ... 79

Figure 18. Similarity at suture level, calculated for the needleholder (NH, y-axis) and microforceps (MF, x-axis). More negative values indicate closer similarity to novices. Figure adapted from PII. ... 81

Figure 19. Similarity at segment level. Microforceps (MF) and needleholder (NH). More negative values indicate closer similarity with other novices. Figure adapted from PII. ... 81

Figure 20. Suturing efficiency at suture level, indicating the share of the total suturing duration spent on productive movements. Microforceps (MF), needleholders (NH). Figure adapted from PII. ... 82

Figure 21. Suturing efficiency at segment level. Needleholder (NH), microforceps (MF). Figure adapted from PII. ... 82

Figure 22. Bimanual efficiency at suture level, indicating the ratio of suture duration where the participant completed productive movements simultaneously with both tools. Figure adapted from PII. ... 83

Figure 23. Bimanual efficiency at segment level. Figure adapted from PII. ... 83

Figure 24. Example of two consecutive grasps using a needle holder. The first row shows a failed grasp attempt and the second row a successful grasp. A grasp attempt was determined when the jaws of the instrument closed at the thread or the needle. If the grasp failed, the thread (needle) did not move when the instrument was moved. This almost always led to more grasp attempts at the same location. Figure adapted from PIII. ... 86

Figure 25. Suturing steps and major phases. The steps were 1) grasping the needle with the needle holder, 2) piercing the incision on both sides, 3) grasping the needle with the micro-forceps or the needle holder, 4) extracting the needle until the right amount of thread is left on the insertion side, 5) completing three surgical knots with 2 or 3 loops, and 6) cutting the ends of the thread with micro-scissors. Figure adapted from PIII. ... 86

Figure 26. Number of grasps and skill for the whole suture (A) and at each segment or suture phase (B). NH = needle handling. The darker values indicate the proportion of failed grasps. Figure adapted from PIII. ... 88

Figure 27. Total number of grasps in a suture and the corresponding UWOMSA grades given by the expert evaluator. In Efficiency and Handling, we can see that a lower number of grasps was associated with higher scores. Figure adapted from PIII. ... 88

Figure 28. Suturing duration and number of grasps needed to complete a suture. For novices, the number of grasps affected the suturing duration more than for experts. Figure adapted from PIII. ... 89

Figure 29. Pupil size before and after a grasp for novices and experts. The pupil size is measured as percentage change from the first frame in the 4-second window surrounding the grasp. Figure adapted from PIII. ... 89

Figure 30. Example microsurgical tool detection results (yellow and green lines) and gaze tracking (magenta circle). Adapted from the supplementary material of PIV. ... 92

LIST OF TABLES

Table 1. Research questions and the publications addressing them. ... 33

Table 2. Summary of characteristics related to expertise and the effects on performance in motor control tasks. ... 43

Table 3. Examples of how different sensors have been used to evaluate surgical performance. ... 48

Table 4. Overview of what certain data types and metrics extracted from the data can tell us about surgical skill, performance and task difficulty. ... 52

Table 5. Demographics of the participants in Dataset 1. Table adapted from PIII. ... 62

Table 6. Overview of the microsurgical tasks present in Dataset 2. These videos were used to train and evaluate the object detection model for surgical tool detection. Table adapted from PIV. ... 65

Table 7. Description of the suture segmentation used in publication PI. The segmentation was modified for publications PII and PIII. For segments 1–9, the event marking the start is described in the right column. Each segment lasts until the next segment in the list. The cutting segment starts after the last knot is tied and ends when the thread ends are cut. Adapted from PI. ... 68

Table 8. Overview of Dataset 1 and the Dataset 2 case study. ... 72

Table 9. Vocabulary used for defining the surgical process model. Table adapted from PII. ... 77

Table 10. Description of the actions annotated and analyzed in the case study. Table adapted from PIV. ... 93

Table 11. Overall performance of the tool detection for all 17 microsurgical tools in the three evaluation experiments. Detailed results for each tool are found in Table 3 in publication PIV. The test results represent performance in unseen videos, while in the validation results, the data came from the same videos used to train the model. ... 95

Table 12. Kinematic metrics from the case study where the differences between dissection and enhancement of the visual scene were statistically significant in the tool and gaze metrics. A result was deemed statistically significant when the p value was less than 0.001. Non-significant results have been omitted here; see publication PIV for the full results. ... 96

Table 13. Assessment of how different advantages of computational methods are realized in the publications (PI–PIV). ... 106


1 Introduction

The safety of surgical procedures rests on proper training and evaluation of surgical skills [1–4]. Surgeons traditionally gain these skills under the mentorship of more experienced surgeons [3,5]. This method, however, consumes time and human resources, and the evaluation of surgical skills is subjective [6]. Together with the introduction of work-hour regulations in hospitals, these issues have raised the demand for better approaches to training and evaluating surgical skills [6–8].

At the same time, advances in computational methods and sensors have raised the question of how they could be utilized in surgery [9,10]. From this question has emerged the concept of surgical data science, which aims to address the problems in surgical training by developing computational methods for evaluating and training surgeons [9–11]. These computational methods take recordings of, e.g., the surgeon's tool and eye movements and translate them into an evaluation of skill and performance [11].

Research in the computational evaluation of surgical skills has previously focused on surgical techniques such as laparoscopy, endoscopy, and robotic surgery [11–13]. Likewise, many studies on machine learning applications in surgery have focused on these techniques [14]. This is likely attributable to the ease of recording data, such as the surgeon's tool and eye movements, with these surgical techniques. In contrast, microsurgical procedures impose special technical challenges for collecting the data required for computational analysis.

1.1 What do microsurgical procedures look like?

Microsurgery is a surgical technique where surgeons complete procedures using a magnifying device, such as the surgical microscope (see Figure 1(A)), surgical robots [15], loupes [16], or an exoscope, where the magnified view is produced with digital cameras and high-definition displays [17,18]. In this thesis we focus on tasks performed with the surgical microscope.

The microscope allows operations on some of the most delicate structures of the human body, such as blood vessels smaller than 2 mm in diameter [19]. To complete the procedures, the surgeons wield a set of special micro-instruments (Figure 1(B)) [20–22]. These instruments have features, such as small size, long handles, and specialized shapes, that allow the surgeon to operate at depth through small openings while maintaining a clear view of the operating area (Figure 1(C)–(D)) [22]. Techniques like microvascular anastomosis – connecting two blood vessels by suturing – may be performed using a thread with a diameter of 0.02 mm attached to a curved needle with a diameter smaller than 0.1 mm [21].

Microsurgical techniques are used in situations where a clear and highly magnified view of the surgical site is crucial, including neurosurgery and spine surgery, ENT (Ear, Nose, Throat), and ophthalmic (eye) surgeries [23].

Typical applications of microsurgery are reconstructive procedures, such as transferring tissue and reattaching amputated parts of the human body [20,24]. Restoring the reattached part’s normal function requires reconnecting a large number of small blood vessels and nerves [19,25].

The small size and sensitivity of the operated structures, along with physiological limitations like hand tremors [26], ergonomic issues from handling the microscope [27], working in constrained postures [28] and with limited visual contact with the environment [29] make microsurgery especially challenging. The restricted visual contact hampers teamwork [27,29] and interruptions are often more detrimental to the surgeon’s performance than in other surgical techniques [27]. Many of the same challenges faced by the surgeon also complicate microsurgical performance evaluation.

Figure 1. Some key components in microsurgery. (A) A surgical microscope, showing its oculars. (B) Examples of microsurgical instruments. (C) Two surgeons operating with a surgical microscope that has oculars for two surgeons. (D) Surgeon's view through the microscope. Figures C and D adapted from Wikimedia Commons (footnotes 1 and 2).

1. https://commons.wikimedia.org/wiki/File:US_Navy_080607-N-9689V-008_Cmdr._Kenneth_Kubis_and_U.S._Air_Force_Capt._Tighe_Richardson_use_an_operating_microscope_while_performing_cataract_eye_surgery_to_return_sight_to_Marylin_Kansi,_a_12-year-old_girl_from_Cotabato.jpg, public domain.

2. https://commons.wikimedia.org/wiki/File:Trigeminal_nerve_neurovascular_conflict.JPG, by Luigi Berra, used under CC BY-SA 4.0.


1.2 Microsurgical training and evaluation currently

Basic microsurgical training occurs in dedicated courses that typically include at least 40 hours of training over five days [30]. In these courses, the participants complete training tasks with models that simulate actual surgical procedures and techniques [31,32]. The training tasks allow the participants to practice basic techniques that involve using micro-instruments, handling the microscope and different tissues, and performing specific microsurgical tasks such as anastomosis [33].

The simulation models progress in fidelity as the participants gain experience, starting with basic synthetic models that can be made from materials like surgical gloves, followed by non-living models such as chicken arteries or human cadavers, and finally live animal models [32]. The living models – typically anesthetized rats [34] – have desirable properties like blood clotting that cannot be easily replicated with non-living or synthetic materials [30]. However, live models are used less frequently nowadays, and the surgical techniques practiced on them are mainly anastomoses and tissue transfer [30,32,35].

While the participants complete the tasks, expert surgeons guide them and evaluate their performance, either in real time or based on video recordings [30,32,36]. To help assess performance systematically and make feedback clearer to the participants, expert surgeons can use structured evaluation instruments. Typically, these consist of checklists and grading scales, with a set of criteria for different aspects of performance, for example the handling of the tools and the efficiency of their movements [36–38]. These instruments help ensure that each participant is evaluated by similar criteria. The participants can also use some of these instruments for their self-assessment [39].

As an example, one of the most common evaluation instruments is the Objective Structured Assessment of Technical Skills (OSATS) [40], which has also been adapted to microsurgery [41]. OSATS consists of two parts. The first part is a checklist that scores performance based on 14 binary criteria, for example "Select appropriate suture" (yes/no). The second part is a rating scale where a 5-point Likert scale is used to measure factors like "Respect for tissue" (a score of 5 means that tissues were handled with minimal damage). Typically, an experienced surgeon watches a video recording of a participant performing a microsurgical training task and evaluates the performance based on the items in the OSATS checklist and rating scale.

OSATS has been validated in several studies, especially for distinguishing experts from intermediate and novice surgeons [42].
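To make the two-part structure described above concrete, the sketch below models an OSATS-style assessment as data: a binary checklist plus 5-point Likert ratings that are summed into scores. It is only an illustration of the structure; the item names are placeholders, not the official OSATS wording.

```python
# Illustrative sketch of an OSATS-style assessment record (two-part structure:
# binary checklist + 5-point Likert rating scale). Item names are placeholders.
from dataclasses import dataclass, field

@dataclass
class OsatsStyleAssessment:
    checklist: dict = field(default_factory=dict)  # item -> True/False (14 items in OSATS)
    ratings: dict = field(default_factory=dict)    # item -> score on the 1..5 scale

    def checklist_score(self) -> int:
        # One point per fulfilled yes/no criterion
        return sum(bool(v) for v in self.checklist.values())

    def rating_score(self) -> int:
        # Sum of the Likert items; each must lie on the 1..5 scale
        if not all(1 <= v <= 5 for v in self.ratings.values()):
            raise ValueError("Likert items must be between 1 and 5")
        return sum(self.ratings.values())

assessment = OsatsStyleAssessment(
    checklist={"selects_appropriate_suture": True, "plans_next_step": False},
    ratings={"respect_for_tissue": 4, "economy_of_movement": 3},
)
print(assessment.checklist_score(), assessment.rating_score())  # -> 1 7
```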

While these training methods have been widespread and successful, the central role of expert guidance also limits the availability of training opportunities. Most microsurgical practice takes place independently during general clinical training sessions, where the trainees' skills cannot be readily evaluated. Previous research has found problems with the efficiency of learning and retaining skills and has highlighted the need for methods that give trainees more chances to practice their skills independently [43,44].

1.3 Scope of the work

We investigate computational methods for microsurgical skill assessment, using data that can be recorded during microsurgical training sessions (Figure 2). This data includes measurements of the surgeons' tool and eye movements. We focus solely on technical surgical skills [8], including measures directly related to the technical handling of the microsurgical instruments, as well as measures indirectly related to technical skills, such as physiological changes that occur during surgical performance. In other words, we investigate measures like tool movements and the associated cognitive workload, but leave out non-technical skills like communication and decision-making [45]. Moreover, even though parts of robotic surgery and the use of exoscopes fall within the area of microsurgery, we limit ourselves to training procedures performed using a surgical microscope.


Figure 2. Scope of the work, with the sub-categories investigated in the publications highlighted in blue. Our aim is to evaluate technical skills from data recorded with participants completing simulated microsurgical training tasks. Within technical skills, we focus on surgeons' tool use, such as their ability to maneuver with both hands simultaneously, and on evaluating eye-hand coordination and cognitive workload, both of which can be analyzed with eye tracking. Drawing of the surgeon adapted from Wikimedia Commons (footnote 3).

1.4 Relevance of the work

Several factors have increased the demand for more objective and automated assessment of surgical skills [8]. First, the traditional methods for assessing surgical skills and training new surgeons suffer from subjectivity and consume considerable time from expert surgeons [13,36]. Second, insurance companies and governments have become interested in having clear metrics for evaluating the quality of healthcare. This interest has been compounded by the introduction of working-hour regulations and financial pressures that limit the time and resources available for surgical training [6,46]. Third, techniques such as robotic and minimally invasive surgery have introduced new technical demands that in turn require new methods for training and evaluating technical skills [2,12]. This last factor in particular motivates developing methods catered to microsurgery.

3. Derivative work; original: https://commons.wikimedia.org/wiki/File:12_Operating_Room_Aravind.jpg, used under CC BY-SA 4.0.

Operations performed with a surgical microscope demand technical skills that are unlike those of other surgical techniques [23,33,35,47]. Research in expertise has shown that in order to develop and maintain superior performance in a certain skill, it is essential to engage in repeated, deliberate practice of that particular skill with the correct type of feedback, and that a lack of continuous practice leads to the degradation of those skills [3,48]. Studies have also shown that skills learned in one surgical technique do not necessarily translate into other techniques [49]. Therefore, it is important to develop specific methods for microsurgery, allowing surgeons to learn microsurgical techniques while receiving meaningful and objective feedback that enables repeated, deliberate practice.

We propose that computational methods for microsurgical skill evaluation would offer several advantages:

● Transparency: If a trainee’s performance is deemed to be flawed, the data can reveal why.

● Objectivity: Feedback does not vary based on the person giving or receiving feedback.

● Automation: Trainees can practice by themselves, which saves resources, increases opportunities for training, and makes training more efficient.

● Real-time feedback: The timing of feedback is known to affect learning of new skills.

● Transferability: The possibility of using the same methods to monitor the surgeon’s performance during actual surgical procedures.

Optimally, the methods for computational evaluation of microsurgical skills would decrease the burden on expert surgeons, create new methods for unsupervised training and assessment of surgical skills in real and simulated surgical procedures, and make surgical training more efficient with additional training opportunities.

1.5 Research questions and methods

The primary goal of this work was to develop methods for assessing microsurgical skills through the surgeon's eye and tool movements. We started the work with a study aimed at classifying surgical skills based on eye-tracking metrics that have been linked to increased cognitive workload (Publication I, or PI). After this first study, we turned to surgeons' tool usage and conducted two video analyses of surgeons' tool movements during a microsurgical training task (PII and PIII). In PIV, we took the first steps toward applying automated methods for conducting an eye-hand coordination analysis. The specific research questions are introduced in Table 1.

In these studies, we have focused on analyzing surgeons' gaze and tool-usage behavior, both jointly and independently. The majority of the data was recorded in microsurgical training centers with novice and expert participants completing various microsurgical training tasks. In PIV, we also made use of publicly available video repositories of neurosurgical procedures. The gaze data was recorded with eye trackers that could be attached to the surgical microscope, and the analysis of that data relied on prior findings on the use of pupillometry as a proxy for cognitive workload and on the development of eye-hand coordination. For the analysis of tool-usage behavior, we used both manual video analysis and modern deep learning-based computer vision methods. The quantitative metrics gathered with these methods were correlated with rating scales and evaluations by experienced surgeons.

Table 1. Research questions and the publications addressing them.

RQ1: How can task-evoked pupil dilation distinguish microsurgical expertise? Is the classification improved by adding blinks? (PI)

RQ2: How can we evaluate surgical skills based on the surgeon's activities, (a) with surgical process models built on only a few basic actions and targets, and (b) by monitoring individual actions such as the number of grasps? (PII, PIII)

RQ3: How can deep learning-based object detection methods be applied to microsurgery in order to (a) monitor tool usage and (b) monitor eye-hand coordination, and what are the technical challenges? (PIV)


2 Background

2.1 Requirements for computational evaluation

To begin the computational evaluation of surgical skills, we first need to define what we are evaluating. That is, we need some definition of the skill or performance that allows us to develop and test hypotheses. Then, we need data. The data can be of a single type, such as the time needed to complete the task, or it can be multimodal, such as simultaneous recordings of eye and tool movements. From the data, we can extract metrics that reveal information about the surgeon's performance. For example, we may find that novice surgeons tend to make unnecessary movements with the tools; in that case, the "number of movements" would be a metric that tells us something about their performance. Finally, we can define a model of the surgical procedure that formally describes its distinct phases and actions, allowing us to compare surgical performances and put the extracted metrics into context (Figure 3).

Figure 3. Computational microsurgical skill evaluation framework. In the first stage, we form a definition or requirements of skill that guide the approach to computational skill evaluation. In the second stage, we record data, from which we then extract metrics and build surgical process models that can be analyzed to distinguish microsurgical skill. Finally, the results of the analysis are used to evaluate the surgeon's skill. The focus of this thesis is on the second stage, highlighted in blue, where we investigate computational methods for skill evaluation. Drawing of the surgeon adapted from Wikimedia Commons; see footnote 3 to Figure 2.
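To make the metric-extraction stage concrete, the sketch below derives two simple tool-use metrics, total path length and the number of distinct movements, from a tool-tip trajectory. This is a minimal illustration under assumed values (the sampling rate and velocity threshold are arbitrary), not the exact formulas used in the publications.

```python
# Minimal sketch: extracting simple kinematic metrics from a tool-tip
# trajectory. The sampling rate and velocity threshold are assumptions.
import numpy as np

def movement_metrics(xy, fps=25.0, vel_threshold=5.0):
    """xy: (N, 2) array of tool-tip positions in pixels, sampled at `fps` frames/s."""
    step = np.linalg.norm(np.diff(xy, axis=0), axis=1)  # per-frame displacement
    velocity = step * fps                               # pixels per second
    moving = velocity > vel_threshold                   # frames with movement
    # Count distinct movements as rising edges of the `moving` indicator;
    # unnecessary extra movements would inflate this count for novices.
    starts = np.count_nonzero(moving[1:] & ~moving[:-1]) + int(moving[0])
    return {
        "path_length_px": float(step.sum()),
        "mean_velocity_px_per_s": float(velocity.mean()),
        "n_movements": int(starts),
    }

# Example with a synthetic random-walk trajectory
rng = np.random.default_rng(0)
trajectory = np.cumsum(rng.normal(0.0, 2.0, size=(250, 2)), axis=0)
print(movement_metrics(trajectory))
```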


When we combine the different elements, we can answer questions such as:

● How did the novice and expert surgeons’ tool movements differ during some phases of the procedure?

● Did some phases or actions increase their cognitive workload? Was there a difference between novices and experts?

● Did some (adverse) event increase their cognitive workload or affect their subsequent tool-handling performance? How long did it take them to recover?

The analysis can be expanded by evaluating multiple simultaneously recorded data sources. Such multimodal data analysis allows even finer insights into surgical performance.
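As one hypothetical example of such a multimodal metric, the sketch below combines two simultaneously recorded streams, gaze position and a detected tool-tip position, into a per-frame gaze-to-tool distance. It assumes both signals have already been synchronized and mapped to the same video coordinate frame; the data here are synthetic.

```python
# Sketch of a multimodal metric: per-frame distance between gaze and tool tip.
# Assumes both (N, 2) position arrays are synchronized and share coordinates.
import numpy as np

def gaze_tool_distance(gaze_xy, tool_xy):
    """Euclidean gaze-to-tool distance (pixels) for each frame."""
    return np.linalg.norm(np.asarray(gaze_xy) - np.asarray(tool_xy), axis=1)

# Example: gaze staying ~20 px ahead of a slowly moving tool tip
t = np.linspace(0.0, 1.0, 100)
tool = np.stack([400 + 50 * t, 300 + 30 * t], axis=1)
gaze = tool + np.array([20.0, 0.0])
print(gaze_tool_distance(gaze, tool).mean())  # ~20.0
```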

2.2 Surgical skill and its evaluation: a short history

Being a skilled surgeon entails having expertise in several areas, such as anatomy and tissue and instrument handling [50], along with communication, teamwork, decision-making, and situational awareness [45]. These skills have usually been divided into technical and non-technical skills: the former refers to skills like proper handling of the instruments, and the latter to skills like decision-making and communication. Attaining and maintaining these skills requires repeated and continuous training [3,48]. However, the traditional apprenticeship-style training of surgical skills has lacked standardized evaluation methods, increasing interest in developing more objective approaches [8,11] (Figure 4).

Figure 4. Number of search results from SCOPUS on objective surgical skill evaluation with the keyword ( ( objective AND surg* AND ( evaluat* OR assess* ) AND ( performance OR skill* ) ) ) in the title, abstract and keywords. The number of results per year has increased more steeply since the early 2000s.

One of the first steps taken towards a more objective evaluation was the introduction of checklists and grading scales [42]. Perhaps the most widely applied of these is the Objective Structured Assessment of Technical Skill (OSATS) [40,51], which we gave as an example in the Introduction. Several other similar instruments have been developed, with some specializing in certain surgical techniques, such as the University of Western Ontario Microsurgical Skills Acquisition/Assessment instrument [52] that was developed for assessing microsurgical knot tying and anastomosis skills.

Another instrument designed specifically for microsurgery is the Structured Assessment of Microsurgery Skills (SAMS), which, like OSATS, includes both global rating scales and a checklist for errors [53].

Some studies, however, have shown that these instruments cannot fully fix issues with subjectivity and validity, particularly in distinguishing the finer differences in skill levels [38,43,54,55]. Since the evaluation with these instruments is often done after the task is completed, they also cannot provide real-time feedback, which further underlines the need for more automated, computational methods.


One of the earliest computational methods for assessing surgical skill was to record the time taken to complete a task. Novices' and expert surgeons' completion times differ strongly, and the metric is often reported even when a study focuses on other metrics [12,13,56]. Completion time, while effective at separating experts from novices, is incapable of distinguishing finer differences in tool usage, nor can it supply concrete feedback to the user. Another example of an early computational method applied tensiometers to test the breaking and tightening forces of surgical knots [57]. This approach requires special instruments, and like task duration, it cannot provide detailed feedback at the sub-task level or in real time.

To address such issues, some of the research turned to the direct monitoring of surgeons’ tool usage [58]. This is accomplished with sensor- based recording of the tool movements, early examples of which are the Imperial College Surgical Assessment Device (ICSAD) [59], the BlueDRAGON system [60] and electromagnetic motion tracking [61]. These methods were later expanded with computer vision methods capable of tracking the instruments from video recordings [62].

Apart from the computer vision methods, surgical tool tracking in the operating room is hindered by the need to keep the surgical site sterile [11]. This requirement can be overcome by recording the data in surgical simulators.

Surgical simulators refer to training environments designed to resemble actual surgical procedures or techniques. Simulators can be created with various synthetic or organic materials (see Figure 5). A simple simulation of human tissues can be made from a rubber glove on which surgical trainees can practice their knot-tying skills [32]. Advanced simulators can include more sophisticated platforms, such as the box trainers used in laparoscopy [63], or can use cadavers to make the interaction with tissues more realistic [32]. Closely related technologies are Augmented and Virtual Reality (AR and VR, respectively), whose use is inspired by their success in aviation training [64–66]. The first VR surgical training studies were conducted in the 1990s [67], and their application has since been investigated in several surgical tasks [12]. Simulated environments like these have enabled recording data both in reproducible settings and in larger quantities than would be possible in the operating room [11,68].

Figure 5. Examples of simple simulated training tasks. Low-fidelity models like these can be used to practice basic microsurgical techniques. (A) Surgeon practicing dissection and suturing on a grape. The surgeon is holding the needle with the left-hand tool. (B) Surgeon practicing needle piercing on a synthetic surface. Second row: (C) Needle passing task, where the surgeon has to use surgical instruments to pass a microsurgical needle through the holes in the needles inserted into the surface. (D) Vessel suturing task, where the surgeon has to attach the two ends of the tube by suturing.

As the number of studies on computational surgical skill evaluation has increased (see Figure 4), the data sources and analysis methods have also become more varied. Recordings of the surgeon's physiological data, such as electroencephalography (EEG) [69,70], heart rate [71], electrodermal activity [72], functional near-infrared spectroscopy [73], and eye tracking [70,74–81], have been used to evaluate the surgeon's attention, cognitive workload and eye-hand coordination.

In order to analyze the data and perform tasks like surgical skill classification, researchers have applied classical machine learning methods, including k-nearest neighbors, support vector machines, hidden Markov models, and linear discriminant analysis [82–86]. Artificial neural networks have also been used to classify surgeons' skills from eye-tracking data [87].
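The sketch below shows what such a classical pipeline could look like for skill classification from eye-tracking features, using a support vector machine with cross-validation. The feature matrix and labels are synthetic placeholders; it illustrates the general approach of the cited studies, not any specific one.

```python
# Sketch of classical skill classification from eye-tracking features using an
# SVM. The feature matrix and labels here are synthetic placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
# Rows: task trials; columns: e.g. mean pupil dilation, blink rate, fixation count
X = rng.normal(size=(60, 3))
y = rng.integers(0, 2, size=60)  # 0 = novice, 1 = expert

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)  # cross-validated accuracy
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```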

Early computer vision methods were based on hand-crafted features and algorithms, and they were used to track surgical instruments, segment task phases, and to detect objects in the surgical site [62,88].

In the early 2010s, the development and introduction of powerful computing technologies, large datasets, and improved neural network algorithms led to a massive increase in the development of so-called deep learning methods, including for surgical applications [89]. Deep learning methods have tremendously improved performance in many classification, segmentation, and object detection tasks [89]. This has been especially true for computer vision in surgery, where deep learning has been applied to object detection, tracking, segmentation, and surgical skill evaluation [90–93]. Deep learning methods have also been applied to the classification and segmentation of kinematic data [94,95].

However, many studies still report more traditional measurements, such as the task completion time [96–98]. To the best of our knowledge, checklists and grading scales are currently the only semi-objective assessment methods that have been more widely used in actual surgical training [42].

2.3 Motor skill, learning and eye movements in surgery

The manner in which surgical performance and skill should be defined is not well established [99]. Research in the computational evaluation of surgical skills has typically used operative definitions for skills: skillful surgeons have more experience (by the number of operations or time spent as a professional surgeon), and skillful performance is characterized by how these experienced surgeons execute the task, or how their physiological signals (such as eye movements) differ from novices'. More fundamental research into motor skills, learning, and the concurrent changes in physiological signals has been conducted in laboratory settings that did not focus on surgery [100–102]. To understand the motivation behind the investigations conducted in this work, we will briefly describe some general results from prior research into expertise and motor learning.

Experts have superior knowledge, performance, and decision-making processes, and they can adjust better to dynamic situations – and can demonstrate their expertise when required [3,4]. Experts are able to display superior performance consistently in tasks that are representative of their domain [103]. This performance can, however, be extremely domain-specific, meaning that the superior performance in one task does not necessarily generalize to other, even seemingly similar tasks [103,104].

Achieving expert-level performance requires repeated and deliberate practice [3,4]. In a review of surgical skill learning studies, Issenberg et al. reported that repeated practice and appropriate feedback were the most influential factors for learning surgical skills [105]. Surgeons who perform high volumes of certain types of procedures display consistently superior performance [4], with metrics such as the procedure completion time improving significantly after each procedure [106]. Pauses in practice, on the other hand, lead to decreased performance and outcomes [48].

Merely repeating a task frequently enough does not guarantee better performance, however; the practice also has to be deliberate [104]. Deliberate practice can be supported by appropriate feedback [105]. Research has shown that receiving feedback shortly after performing a task increases learning efficiency and retention [105]. Feedback given for good performance rather than poor performance can lead to more effective learning [107]. The type of feedback also matters: in a study of knot-tying skills, surgical residents who received verbal feedback were better able to retain their skills one month later than those who only received a numerical evaluation of their movement economy [108]. The feedback also needs to be accurate, so as not to reinforce bad habits [48,109].

Besides external feedback, the surgeon also learns through inherent, perceptual feedback [109]. With this feedback, surgeons learn independently by adjusting their movements in response to errors or inefficiencies [101], such as when they try to lift an object and underestimate its weight. In surgical procedures where the view of the surgical site is mediated by displays and microscopes, the distorted visual and tactile feedback creates additional challenges for motor control learning [47,110].

Another essential requirement for successful motor control is the coordination of hand movements with visual information extracted through gaze. Gaze behavior is known to change as motor skills develop [100]. For example, in early stages of motor control learning, such as when asked to point or grasp something with a novel tool, our gaze tends to closely follow the tool as we learn to map visual and motor feedback to control the movements. As we gain more experience, our eye movements will start to predict the tool movements [102,111].

Such changes in gaze patterns have also been shown to occur in surgery [112]. Learning impacts the location and frequency of gaze fixations [100,111] and the way we allocate attention, such as when experienced radiologists assess x-ray and CT images [113]. Differences in gaze behavior between novice and expert participants have been reported in various surgical techniques, including laparoscopy [114,115] and microsurgery [116,117].

With practice and feedback, we learn to move more efficiently and to minimize errors [4]. Throughout this process, motor skill development follows a learning curve, with fast increases in performance during the early and middle stages and diminishing gains as training goes on [44,96,104,118]. These stages are characterized by different patterns in hand and eye movements, task duration, movement accuracy, and the number of errors [100,109]. The improvements are mainly related to the technical performance during a given task.

In addition to the technical improvements, motor learning is also associated with psychometric changes. For example, gaining experience allows for a more efficient allocation of cognitive workload [119]. This implies that the measurement of cognitive workload can also reveal something about performance and skills [120]. Such measurements can be realized with eye trackers, via the monitoring of the spontaneous blink rate and the so-called task-evoked pupil dilation [121–123].

Task-evoked pupil dilation refers to the phenomenon where pupils dilate in response to an increased cognitive workload, e.g., when a person is memorizing a series of digits [121,122,124]. This finding has motivated pupillometry, or the measurement of pupil dilations for estimating stress, cognitive resources, and task difficulty [123,125,126]. Likewise, changes in the blink rate have been associated with changes in cognitive workload [123]. As one characteristic of expertise is the ability to allocate cognitive resources effectively, pupil dilations and blink rates have also been used to evaluate expertise [112,123].
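A minimal sketch of how task-evoked pupil dilation can be quantified is given below: pupil size is expressed as the percentage change from the first sample of a window centered on an event, similar in spirit to the grasp-centered windows used later in this thesis. The sampling rate and window length are illustrative assumptions.

```python
# Sketch: pupil size as % change from a baseline within an event-centered
# window. Sampling rate and window length are illustrative assumptions.
import numpy as np

def pupil_percent_change(pupil, event_idx, fps=30.0, window_s=4.0):
    """Return pupil diameter as % change from the first frame of a
    `window_s`-second window centered on `event_idx`."""
    half = int(window_s * fps / 2)
    window = np.asarray(pupil[event_idx - half : event_idx + half], dtype=float)
    return 100.0 * (window - window[0]) / window[0]

# Example: simulated trace that dilates after an event at sample 150
trace = np.concatenate([np.full(150, 4.0), np.linspace(4.0, 4.4, 150)])
print(pupil_percent_change(trace, event_idx=150)[:5].round(2))
```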

Table 2. Summary of characteristics related to expertise and the effects on performance in motor control tasks.

Characterized by: Superior performance and knowledge in domain-specific tasks [3,4,103]
Gained with: Repeated, continuous and deliberate practice with appropriate feedback [3,4,104,118]
Motor control effects: Tasks performed more quickly, more efficiently and with fewer errors; changes in eye-gaze behavior [100,102,123]
Indirect effects: Decreased cognitive workload [119]

These findings (summarized in Table 2) have several implications for the computational evaluation of microsurgical skills. First, the fact that motor control learning goes through phases with distinct patterns in the gaze and hand (or tool) movements means that these patterns can be measured to evaluate the learning process. Second, the importance of deliberate practice in mastering surgical skills highlights the necessity of developing automated training systems that do not require the presence of expert surgeons and thus provide more resources for surgical trainees to practice. Third, the automated feedback that such a system can provide could enhance the learning process, as it makes real-time feedback possible. Finally, physiological measurements like pupillometry could be used to evaluate not only pure technical skills, but also phenomena linked to technical skills that would otherwise be difficult to monitor, such as cognitive workload and stress.

2.4 How surgical data is collected

Numerous methods of recording surgical data have been investigated for various surgical techniques and procedures, including data from the operating room. This data has included instrument kinematics, audio, video, pressure, light, and many different physiological signals [127–129]. Here, we focus on methods that are used to record and analyze the surgeon's tool use and gaze behavior (Figure 6).

Figure 6. Surgeon completing a microsurgical training task using the surgical microscope. Surgeons' gaze and tool behavior can be recorded and analyzed to get insights into their cognitive workload, skill, performance and error recovery. Additional information may be gained by recording EMG, EEG, heart rate variability, and other signals. Figure partially adapted from Wikimedia Commons (footnote 4).

4. Derivative work; original: https://commons.wikimedia.org/wiki/File:Foot_Laser_Surgery.jpg by Michael Wynn, used under CC BY-SA 3.0.

The traditional method for tracking surgical tool movements utilizes physical motion-capturing devices, which typically involve attaching markers to the tools and an external sensor that records the marker movements [59,130,131]. These approaches are usually impractical in realistic microsurgical environments due to limitations in the available space and the small scale of the movements [132]. Tool tracking without external sensors can be realized from video recordings of surgical procedures. Most modern surgical microscopes, for example, have integrated video recording capabilities that can directly capture the surgeon's view [23] (Figure 7).

Figure 7. Surgical tool use analysis based on computer vision and video recorded through the surgical microscope. A deep learning model is used to detect the tips of the surgical instruments in the video. This information can be used to monitor the presence of the tools in the task and to evaluate metrics such as movement smoothness.
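The sketch below outlines what the kind of detection pipeline shown in Figure 7 can look like in practice, assuming a YOLOv5 model fine-tuned for microsurgical instruments (the weights file and video name are placeholders). The per-frame tool positions it collects could then feed kinematic metrics such as those sketched in Section 2.1.

```python
# Sketch of video-based tool detection with a fine-tuned YOLOv5 model.
# 'tools.pt' and 'suturing_task.mp4' are placeholder file names.
import cv2    # OpenCV, for reading the microscope video
import torch

model = torch.hub.load("ultralytics/yolov5", "custom", path="tools.pt")

cap = cv2.VideoCapture("suturing_task.mp4")
detections = []  # (frame index, class index, center x, center y, confidence)
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)  # model expects RGB input
    results = model(rgb)
    # Each detection row: x1, y1, x2, y2, confidence, class index
    for x1, y1, x2, y2, conf, cls in results.xyxy[0].tolist():
        detections.append((frame_idx, int(cls), (x1 + x2) / 2, (y1 + y2) / 2, conf))
    frame_idx += 1
cap.release()
```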

As for video-based tool trackers, early attempts utilized classical computer vision methods with hand-crafted algorithms based on features like color and texture [62]. A characteristic property of these classical algorithms is that they require extensive manual design yet are prone to poor performance in new environments [89]. For most computer vision tasks, the classical methods have been surpassed by methods based on deep artificial neural networks [89,133]. These methods have been applied
