
Automatic assessment of the myoclonus severity from videos recorded according to standardized Unified Myoclonus Rating Scale protocol and using human pose and body movement analysis


UEF eRepository (DSpace), University of Eastern Finland, Faculty of Health Sciences, 2020. Self-archived (parallel published) peer-reviewed journal article. Author: Hyppönen, J. Publisher: Elsevier BV. © British Epilepsy Association. Licensed under CC BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0/). DOI: http://dx.doi.org/10.1016/j.seizure.2020.01.014. Permanent link: https://erepo.uef.fi/handle/123456789/8076. Downloaded from University of Eastern Finland's eRepository.

Journal Pre-proof

Automatic assessment of the myoclonus severity from videos recorded according to standardized Unified Myoclonus Rating Scale protocol and using human pose and body movement analysis

Jelena Hyppönen, Anna Hakala, Kaapo Annala, Honglei Zhang, Jukka Peltola, Esa Mervaala, Reetta Kälviäinen

PII: S1059-1311(20)30025-X
DOI: https://doi.org/10.1016/j.seizure.2020.01.014
Reference: YSEIZ 3645
To appear in: Seizure: European Journal of Epilepsy
Received Date: 6 November 2019
Revised Date: 23 December 2019
Accepted Date: 20 January 2020

Please cite this article as: Hyppönen J, Hakala A, Annala K, Zhang H, Peltola J, Mervaala E, Kälviäinen R, Automatic assessment of the myoclonus severity from videos recorded according to standardized Unified Myoclonus Rating Scale protocol and using human pose and body movement analysis, Seizure: European Journal of Epilepsy (2020), doi: https://doi.org/10.1016/j.seizure.2020.01.014.

This is a PDF file of an article that has undergone enhancements after acceptance, such as the addition of a cover page and metadata, and formatting for readability, but it is not yet the definitive version of record. This version will undergo additional copyediting, typesetting and review before it is published in its final form, but we are providing this version to give early visibility of the article. Please note that, during the production process, errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain. © 2019 Published by Elsevier.

Automatic assessment of the myoclonus severity from videos recorded according to standardized Unified Myoclonus Rating Scale protocol and using human pose and body movement analysis

Jelena Hyppönen1, Anna Hakala2, Kaapo Annala2, Honglei Zhang2, Jukka Peltola3, Esa Mervaala1,4, Reetta Kälviäinen4,5

Affiliations:
1. Kuopio Epilepsy Center, Department of Clinical Neurophysiology, Kuopio University Hospital, Member of ERN EpiCARE, Kuopio, Finland
2. Neuro Event Labs Oy (2712284-1), Tampere, Finland
3. Department of Neurology, Tampere University Hospital and Faculty of Medicine and Health Technology, Tampere University
4. Kuopio Epilepsy Center, Neurocenter, Kuopio University Hospital, Member of ERN EpiCARE, Kuopio, Finland
5. Institute of Clinical Medicine, School of Medicine, Faculty of Health Sciences, University of Eastern Finland, Kuopio, Finland

Corresponding author: Jelena Hyppönen, M.D., Ph.D., Kuopio University Hospital, Department of Clinical Neurophysiology, POB 100, Kuopio, Finland; jelena.hypponen@kuh.fi

Highlights
• Automatic detection of myoclonic jerks from video footage is feasible
• Algorithms use human pose and body movement for myoclonus detection
• Myoclonic jerks and movement smoothness can be automatically quantified
• Automatic analysis is in high agreement with clinical myoclonus rating

Abstract

Purpose: Myoclonus in progressive myoclonus epilepsy type 1 (EPM1) patients shows marked variability, which presents a substantial challenge in devising treatment and conducting clinical trials. Consequently, fast and objective myoclonus quantification methods are needed.

Methods: Video-recorded Unified Myoclonus Rating Scale (UMRS) myoclonus with action test panels from ten EPM1 patients were selected for the development and testing of the automatic myoclonus quantification method. Human pose and body movement analyses of the videos were used to identify body keypoints and further analyze movement smoothness and speed. The automatic myoclonus rating scale (AMRS) was developed; it included the jerk count during movement score and the log dimensionless jerk (LDLJ) score to evaluate changes in the smoothness of movement.

Results: The scores obtained with the automatic analyses showed moderate to strong significant correlation with the UMRS myoclonus with action scores. The jerk count of the primary keypoints and the LDLJ scores were effective in the evaluation of the myoclonic jerks during hand movements. They also correlated moderately to strongly with the total UMRS test panel scores (rs = 0.77, P = 0.009 for the jerk count score and rs = 0.88, P = 0.001 for the LDLJ score). The automatic analysis was weaker in the quantification of neck, trunk, and leg myoclonus.

Conclusion: Automatic quantification of myoclonic jerks using human pose and body movement analysis of patients' videos is feasible and was found to be quite consistent with the accepted clinical gold standard quantification method. Based on the results of this study, the automatic analytical method should be further developed and validated to improve myoclonus severity follow-up for EPM1 patients.

Keywords: EPM1; myoclonus; Unified Myoclonus Rating Scale; pose estimation analysis

Introduction

Many neurological diseases such as epilepsy cause abnormal body posture or abnormal movement of body parts. These abnormal involuntary movements, such as muscle twitches, myoclonic jerks or other motor manifestations during epileptic seizures, provide important information that can be used to diagnose and assess disease severity and progression.

A variety of clinical rating scales has been developed for physicians to evaluate body pose and/or body movements (1, 2). However, this type of evaluation and analysis is not a trivial task and requires expert knowledge. Some of the metrics are imprecise and often subjective. Thus, it is difficult to compare the measurements made by different observers (e.g., physicians and specialists) and even measurements made by the same observer at different times. Very often, patients require continuous monitoring, and body pose and movement must be repeatedly evaluated and analyzed to track disease progression or the effectiveness of a particular treatment. This makes current rating scales and follow-up approaches subjective, time-consuming, and difficult to apply in a consistent manner. Therefore, physicians, researchers, and the medical industry have been seeking automatic tools that can efficiently and reliably analyze human body pose and body part movements with some degree of consistency.

Computer vision is a scientific field that develops methods for acquiring, processing, analyzing and understanding digital images. Over the past few years, it has achieved remarkable success due to the rapid development of certain deep learning techniques (3). Deep learning, which relies on deep neural network (DNN) architecture, is based on a cascade of multiple layers of convolutional

neural networks that can effectively analyze audio signals, images, videos and other types of sequential data. The technology has become tremendously effective in natural language processing, speech recognition, image and video recognition, and object detection. Deep learning techniques have been used to improve the accuracy of human pose estimation and the detection of keypoints (i.e., detection of regions of high importance, such as joints) (4-6). For example, the method was adapted, fine-tuned, and tested for pose and movement recognition of motor seizures in an epilepsy monitoring unit (7). Machine learning has also been applied to develop automated evaluation methods for quantifying movement disorders associated with Parkinson's disease (8). It was used for the automatic sub-task segmentation of timed up and go test videos of patients with Parkinson's disease (9), and automatic pose estimation from videos has also been successfully applied in the evaluation of the general movements of infants (10).

Progressive myoclonus epilepsy type 1 (EPM1) belongs to a group of progressive myoclonus epilepsies (PMEs) characterized by myoclonus, epilepsy, and progressive neurological deterioration (11). The disease manifests with generalized tonic-clonic seizures and myoclonus. The tonic-clonic seizures can usually be effectively controlled using antiepileptic drugs (AEDs), though myoclonic jerks tend to be particularly progressive and disabling. It was previously reported that the disease symptoms progress within the first 5-10 years from their onset and later reach a plateau (12). Therefore, it is crucial to find the most suitable medical treatment for this disease, especially for the myoclonus.

Several clinical scales are employed to quantitate myoclonus, including the myoclonus rating scale, the unified myoclonus rating scale (UMRS), and modified versions of the UMRS (1, 13, 14). The UMRS is widely considered to be the gold standard for evaluating myoclonus. The UMRS test panel includes an evaluation of both the extent of the myoclonic jerk and the myoclonic jerk frequency over a 10 s observation period. The inter-rater agreement associated with the UMRS test panel has been shown to be relatively high: for the myoclonus with action portion, the reported Cronbach's α was 0.85 for the frequency score and 0.86 for the amplitude score (1, 2). However, this evaluation largely depends on the experience of the scoring physician. Moreover, previous drug trials have shown that it fails to account for the global complexity of myoclonus in patients with myoclonic epilepsy (15). Myoclonus in EPM1 patients is known to be stimulus sensitive, and stress, sleep deprivation, anxiety, time of day, and many other factors can influence the severity of myoclonic jerks. Therefore, objective serial quantification of myoclonus in EPM1 patients using the UMRS remains suboptimal due to the subjectivity of the evaluation and the individual fluctuations of myoclonus.

In this study, an automatic tool was developed to obtain a myoclonic jerk score from video recordings using human body keypoint detection and human pose estimation. EPM1 patients are exceptionally well suited for this application because the determination of their clinical myoclonus rating is so challenging. The UMRS myoclonus with action scores were compared with scores obtained from the new automatic myoclonus rating scale (AMRS) in adult EPM1 patients.
Novel terms are introduced in this paper to describe the proposed analytical method, including the jerk count and the log dimensionless jerk (LDLJ) score. The jerk count was developed for this study and was defined as the number of velocity spikes during the movement. Several measures exist for the evaluation of movement smoothness (16); the LDLJ was adapted for myoclonus evaluation as part of the new AMRS.

Materials and Methods

Subjects

EPM1 patients with different degrees of myoclonus severity were selected from our previously described EPM1 phenotype-genotype study (17). Ten EPM1 patients (eight homozygous for the expansion mutation in the CSTB gene, three men and seven women, aged 14-47 years, mean age 30.5 ± 11 years) were included in this pilot study. The principal investigator of the original EPM1 phenotyping study (RK) selected the cases. There had been no previous attempts to develop an automatic quantification method to assess myoclonic jerks; therefore, this study employed a proof-of-concept approach to demonstrate the feasibility of myoclonus quantification from patient video recordings. Patients with different degrees of myoclonus severity were selected to evaluate whether different myoclonic jerks could be detected from video footage using the automatic method described below.

Based on the previous UMRS myoclonus with action scores, three patients had mild disability (score 1-30), three patients exhibited moderate disability (score 31-59), and four patients had severe disability (score > 60). All patients were being treated with individually adjusted AEDs. Three patients were wheelchair bound, and two patients occasionally needed a wheelchair.

The original study was conducted at the Kuopio Epilepsy Center at Kuopio University Hospital jointly with the Folkhälsan Institute of Genetics and Neuroscience Center at the University of Helsinki. The Kuopio University Hospital's ethics committee approved the study protocol, and informed consent was properly obtained from all participants. For adolescent patients (age 12-14 years), the informed consent was provided by their guardians.

Ten video-recorded UMRS myoclonus with action test panels were available for automatic scoring and analysis. The developers of the AMRS were unfamiliar with the clinical history of the selected EPM1 patients and were unaware of the patients' previous UMRS scoring results. All UMRS scoring results were obtained from the previously conducted EPM1 phenotyping study (17). A clinical neurophysiologist (JH) completed all UMRS scoring for the original study; she received training and certification to administer and score the UMRS test panel for the brivaracetam drug trial (15). A re-evaluation of the original UMRS scores was not conducted for this pilot study because the previously obtained scores were deemed acceptable for the analyses.

The original videos included all tasks associated with the UMRS's stimulus sensitivity, myoclonus with action, and functional test panels. However, only six of the UMRS myoclonus with action tasks were selected for the automatic analyses, namely the neck, trunk, right arm, left arm, right leg, and left leg tasks (1), while the eyelid closure, arising, standing, and walking tasks were excluded. In the original videos, the eyelid closure task was recorded using the zoom option, which affects the reliability of keypoint recognition. The arising, standing, and walking tasks were excluded because some patients with severe myoclonus were unable to perform them, and therefore the video recordings did not contain the data required to conduct an automatic analysis for these tasks.

Automatic human pose and body movement analysis

For an image that contains one or more people, a human keypoint detector can locate the keypoints of the human body. These systems use a deep convolutional neural network (CNN)

architecture, such as VGG or ResNet (18, 19), as the backbone, and extra layers are then added to predict the locations of the keypoints. After the initial detection of keypoints, the system further associates the detected keypoints to separate patients from one another and to improve the accuracy of the estimated keypoint locations. To train the system, it was first initialized with a pretrained model, which is often a model trained for image classification on the ImageNet dataset (20). The trainable parameters were then fine-tuned for human keypoint detection using a human pose dataset such as MPII (21) or COCO (22). In our study, OpenPose (4) was used as the human keypoint detector to detect and locate 18 keypoints on the human body, as shown in Figure 1.

Figure 1. Eighteen keypoints detected by a human keypoint detector: 1. Right ear, 2. Right eye, 3. Left eye, 4. Left ear, 5. Nose, 6. Right shoulder, 7. Neck, 8. Left shoulder, 9. Right elbow, 10. Left elbow, 11. Right hip, 12. Left hip, 13. Right wrist, 14. Left wrist, 15. Right knee, 16. Left knee, 17. Right ankle, 18. Left ankle.

The primary keypoint is the body part that is directly involved in the movement. Auxiliary keypoints are body parts that may indicate body stability during the movement. For each UMRS task, the corresponding primary keypoint and auxiliary keypoints were selected for recognition and tracking by the algorithm during movement. Details of the investigated keypoints are presented in Table 1. The UMRS hand tasks include both static (keeping the finger on the nose) and moving (finger-to-nose) actions (1). They also include a static task in which both arms are extended, first with palms down and then with wrists extended. The keypoints were detected and analyzed separately for each part of the task, as shown in Table 1.

After keypoint detection, the pose of the human body, the trajectory of the keypoints during an action, and the smoothness of the movement can be analyzed.
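To make this workflow concrete, the sketch below shows how per-frame detector output could be organized into per-keypoint trajectories for one UMRS task. It is a minimal illustration only: the detector callable stands in for OpenPose (4) or any model returning the 18 keypoints of Figure 1, and the keypoint ordering and example task keypoints are assumptions, not the authors' implementation.

```python
import cv2
import numpy as np

# Assumed index layout of the 18 keypoints in Figure 1 (for illustration only).
KEYPOINT_NAMES = [
    "right_ear", "right_eye", "left_eye", "left_ear", "nose",
    "right_shoulder", "neck", "left_shoulder", "right_elbow", "left_elbow",
    "right_hip", "left_hip", "right_wrist", "left_wrist",
    "right_knee", "left_knee", "right_ankle", "left_ankle",
]

def keypoint_trajectories(video_path, detector, wanted):
    """Collect per-frame (x, y) pixel locations of selected keypoints.

    detector : callable taking a BGR frame and returning an (18, 2) array of
               pixel coordinates (hypothetical wrapper around the pose model).
    wanted   : names of the primary/auxiliary keypoints of one UMRS task (Table 1).
    """
    indices = [KEYPOINT_NAMES.index(name) for name in wanted]
    capture = cv2.VideoCapture(video_path)
    tracks = {name: [] for name in wanted}
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        points = detector(frame)                  # (18, 2) per-frame detections
        for name, idx in zip(wanted, indices):
            tracks[name].append(points[idx])
    capture.release()
    # One (n_frames, 2) array per requested keypoint.
    return {name: np.asarray(track) for name, track in tracks.items()}

# Example (right-arm finger-to-nose task of Table 1):
# tracks = keypoint_trajectories("task_D.mp4", openpose_detector,
#                                ("right_wrist", "nose", "left_shoulder"))
```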

However, the exact locations of body keypoints in an image are sometimes ambiguous (e.g., the shoulder is more like a large homogeneous area than a precisely defined point). Since each frame is processed separately, the estimated keypoint locations obtained from a video are often unstable. In addition, occlusion of the keypoints, for example by clothing, affects the precision of keypoint detection, which can be imprecise or even impossible if the selected keypoint is not clearly visible in the video. For example, the patients were sitting while performing the UMRS "heel to toe shin" tasks, so it was impossible to reliably detect auxiliary keypoints such as the hip joint. All these factors can induce noise during keypoint detection and tracking, because the detected keypoint locations may incidentally vary between subsequent images. The induced noise affects the tracking of keypoint movement, especially when the speed of the movement is estimated. Figure 2(a) shows the right wrist movement when the patient is asked to complete an action. The estimated speed based on the location of the right wrist, as shown in Figure 2(b), contains a high level of high-frequency noise due to keypoint detection instability. To address this issue, the optical flow method was used to estimate the speed of keypoints.

Optical flow can be used to estimate motion in a sequence of images by finding corresponding pixels in the images and calculating their trajectories (23). The method assumes that the flow is constant in a local area and estimates the flow by minimizing the least square error. To achieve a stable estimation when an object moves along a trajectory between two frames, the system builds a pyramid of images and applies a coarse-to-fine approach, whereby the estimation from higher layers can be used to correct the estimation in lower layers. Using the detected keypoints, the speed of the keypoint movement was estimated by analyzing the optical flow around the keypoint area. The speed estimation using optical flow, as shown in Figure 2(c), was found to be more stable than the estimation based on raw keypoint locations, as shown in Figure 2(b).

The keypoint detector and the optical flow algorithm introduce noise to the acquired signals. Thus, the raw signals were further processed with a low-pass moving average filter (window size = 7).

Figure 2. (a) Right wrist movement (in pixels) when the patient was asked to perform an action; (b) the speed of the movement estimated from raw keypoint locations; (c) the speed of the movement estimated from optical flow.
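The following sketch illustrates this step for a single keypoint: the per-frame speed is estimated with OpenCV's pyramidal Lucas-Kanade optical flow seeded at the detected location, and the resulting signal is smoothed with the window-7 moving average mentioned above. The window size, pyramid depth and fallback behaviour are illustrative assumptions rather than the authors' exact settings.

```python
import cv2
import numpy as np

def keypoint_speed_optical_flow(gray_frames, keypoint_xy, fps):
    """Estimate the speed (pixels/s) of one keypoint with pyramidal
    Lucas-Kanade optical flow, seeded each frame at the detected location.

    gray_frames : sequence of grayscale frames
    keypoint_xy : (n_frames, 2) detected keypoint locations (as in Figure 2a)
    """
    keypoint_xy = np.asarray(keypoint_xy, dtype=np.float32)
    speeds = []
    for t in range(len(gray_frames) - 1):
        prev_pt = keypoint_xy[t].reshape(1, 1, 2)
        next_pt, status, _err = cv2.calcOpticalFlowPyrLK(
            gray_frames[t], gray_frames[t + 1], prev_pt, None,
            winSize=(21, 21), maxLevel=3)          # illustrative parameters
        if status[0, 0] == 1:
            displacement = np.linalg.norm(next_pt[0, 0] - prev_pt[0, 0])
        else:                                       # flow not found; fall back to raw detections
            displacement = np.linalg.norm(keypoint_xy[t + 1] - keypoint_xy[t])
        speeds.append(displacement * fps)           # pixels per second
    return np.asarray(speeds)

def moving_average(signal, window=7):
    """Low-pass moving average filter (window size 7, as in the text)."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")
```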

Automatic Myoclonus Rating Scale (AMRS)

The UMRS score for each task is a multiplied measure of the myoclonus amplitude and frequency (1). The score for each UMRS task was first calculated separately, and the total sum of all the tasks was then used for the general evaluation. In accordance with the original UMRS setup, the automatic scores were generated separately for each task. For the arm tasks, the moving and static scores were obtained separately; these separate scores for the arm myoclonic jerks and their sum were used for the analyses. We also calculated the sum of the individual tasks' jerk counts for the primary keypoint and the sum of the LDLJ scores to match and compare the automatic scores with the total score of the UMRS myoclonus with action panel.

For this pilot study, the jerk count was used as the main evaluation metric. The concept of the jerk count was developed for this study, and it was defined as the number of velocity spikes above a predefined threshold during the movement. Since keypoint detection can introduce low-magnitude changes in speed, an empirically determined threshold was used to adequately remove these changes. The analysis first counted all the velocity spikes detected by the algorithm during the task. The results were then linearly scaled from 0 to 5 independently for each task. The score obtained from the analysis was used to evaluate the myoclonic jerks throughout the entire task or movement; a higher value indicates more prominent myoclonus.

The other important characterization of sensorimotor control is the smoothness of the movement. A number of measures have been described that can be used to evaluate movement smoothness, especially during point-to-point or goal-directed tasks. In general, smoothness measures evaluate the speed profile of the movement, which is usually bell-shaped in healthy subjects. In subjects with neurological disorders, the movement speed can be highly intermittent, including interchanging episodes of acceleration and deceleration. Smoothness measures quantify the amount of intermittency during the movement. Therefore, in addition to the jerk count, we estimated the log dimensionless jerk (LDLJ) scores of the primary keypoint (16). Since this metric evaluates the smoothness of the movement in general, it was more affected by small and high-frequency jerks, and it was also more sensitive to noise.
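A minimal sketch of the two AMRS metrics defined above is shown below for a uniformly sampled speed signal. The spike threshold and the per-task scaling bounds were determined empirically in the study and appear here only as placeholders, and the LDLJ follows the velocity-based definition of Balasubramanian et al. (16); the sign and scaling conventions behind the published AMRS totals may differ.

```python
import numpy as np
from scipy.signal import find_peaks

def jerk_count(speed, threshold):
    """Number of velocity spikes above an empirically chosen threshold
    during the task (speed in pixels/s, sampled at the video frame rate)."""
    peaks, _ = find_peaks(speed, height=threshold)
    return len(peaks)

def scale_0_to_5(raw_count, task_min, task_max):
    """Linear rescaling of a task's raw jerk count to the 0-5 range;
    the per-task bounds are placeholders here."""
    raw_count = np.clip(raw_count, task_min, task_max)
    return 5.0 * (raw_count - task_min) / (task_max - task_min)

def ldlj(speed, fps):
    """Log dimensionless jerk of a speed profile, following the
    velocity-based definition of Balasubramanian et al. (16)."""
    dt = 1.0 / fps
    duration = dt * (len(speed) - 1)
    v_peak = np.max(np.abs(speed))
    # Second derivative of the speed profile, i.e., the jerk magnitude.
    jerk = np.gradient(np.gradient(speed, dt), dt)
    dimensionless = (duration ** 3 / v_peak ** 2) * np.trapz(jerk ** 2, dx=dt)
    return -np.log(dimensionless)
```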

Statistical analysis

Statistical analyses were performed with IBM SPSS Statistics, version 22. Spearman's correlation tests were used to determine the correlation between continuous variables, and the correlation was considered strong, moderate, fair, or poor if the correlation coefficient (rs) was 0.8-0.9, 0.6-0.7, 0.3-0.5, or 0.1-0.2, respectively (24).

Results

The main characteristics of each patient are listed in Table 2. The automatically calculated jerk count scores for the listed keypoints were correlated with the UMRS myoclonus with action test panel scores. The jerk count scores for the primary keypoints showed significant moderate to strong correlation with the UMRS scores of the arms and the total scores (Table 3, Figure 3). The automatic leg movement jerk count scores and the UMRS leg test scores were moderately correlated for the right leg but did not reach the significance level for the left leg. The LDLJ scores of the primary keypoints exhibited significant moderate to strong correlation with all the UMRS subscores and the total scores (Table 3, Figure 3). Detailed correlation results for the jerk counts of the primary keypoints and the LDLJ scores of the primary keypoints are presented in Table 3.

Figure 3. (a) Correlation between the UMRS myoclonus with action total score and the total jerk count score of the primary keypoints (rs = 0.770, P = 0.009). (b) Correlation between the UMRS myoclonus with action total score and the total LDLJ score of the primary keypoints (rs = 0.879, P = 0.001).

By contrast, the jerk count scores for the first and second auxiliary keypoints had a much weaker correlation with the UMRS scores (Supplementary Table 1). Only the auxiliary arm jerk count scores were moderately correlated with the UMRS myoclonus with action scores of the arm movement (Supplementary Table 1). The auxiliary keypoints' jerk counts did not correlate with the total UMRS myoclonus with action score (rs = 0.333, P = 0.347 and rs = 0.224, P = 0.533 for the first and second auxiliary keypoints, respectively). In addition, due to the primary video setup required by the UMRS video recording instructions, the auxiliary keypoints could not be used for the automated analysis of the leg action test panel.

In accordance with the above, the automatic total jerk count of the primary keypoints and the total LDLJ score were also significantly moderately to strongly correlated with the results of the UMRS functional test panel (Supplementary Table 2).

Discussion

To the best of our knowledge, this is the first study that has attempted to use pose estimation and body movement analysis to evaluate myoclonus from individual patient videos. In this pilot study, the analysis algorithms were adapted for the scoring of myoclonus during active movement to obtain the AMRS scores. The results indicated that the system was capable of automatically quantifying myoclonic jerks in six tasks of the UMRS myoclonus with action panel. The correlations of the automatically obtained AMRS scores for the arm tasks and the total scores with the original UMRS scores were moderate to strong. There was a high level of agreement between the original clinical UMRS scores and the automatic analyses. Our results suggest that the pose estimation and keypoint

detection method used for the myoclonic jerk count developed in this study could be a valid method to quantify myoclonus severity in EPM1 patients. However, since this was a small-scale proof of concept study, the reproducibility of the automatic analyses of serial measurements was not evaluated. One scorer originally evaluated the videos used for the development of the automated method; therefore, the comparison of the method with multiple scorings performed by certified physicians was outside the scope of this pilot study. The automatic analyses could have a lower intra-observer variability, but more comparisons of the automatic analyses with the original UMRS scores performed by different observers would be needed to validate this statement.

It is important to note that the nature of myoclonic jerks could be characterized in a more subtle way by the automatic analyses. Myoclonic jerks can vary widely in severity and distribution. In EPM1 patients, myoclonic jerks range from minimal, clinically restricted jerks (e.g., single-limb myoclonus) to very large generalized myoclonic jerks that can prevent the patient from being able to walk. The automatic primary keypoint jerk count and LDLJ score analyses performed differently for some of our patients (e.g., Table 2, patients 3, 8 and 10). The LDLJ score was more likely to detect small-amplitude and high-frequency myoclonic jerks, which means that the myoclonus in these patients had a significant effect on the smoothness of movement without causing significant speed spikes. Clinical quantification of the smoothness of movement is difficult; however, it is an important characteristic of motor learning and sensorimotor control. Therefore, considering the heterogeneous nature of myoclonic jerks, using both the jerk count score and the LDLJ score could be a beneficial and more sensitive method of detecting subtle changes in patients' myoclonus count or their smoothness of movement.

In this study, strong to moderate results were obtained for the automatic analyses of the arm and total myoclonus scores, while the developed algorithm workflow performed more poorly in the quantification of the myoclonic jerks during the neck, trunk and leg tasks. As previously stated in the Methods section, a portion of the tasks was excluded from the analysis due to the limitations of the original videos. The automatic scoring also exhibited better performance in the evaluation of moving primary keypoints for the jerk count compared with the selected auxiliary "static" keypoints. The detection and tracking of the keypoints depend on the video recording quality, including the setup of the video-recorded tasks. The position of the subject during the movement or clothing might also substantially affect the reliability of the keypoint detection and reduce the signal-to-noise ratio. Therefore, obtaining a standardized video recording with a clear and unobscured view of the selected keypoints is an important component of any future development of the method. Nevertheless, the automatic analysis was reliably consistent and showed that the limited set of tasks may offer an accurate reflection of the overall myoclonus severity assessment found in UMRS videos. Future studies that employ and evaluate this method should use high-definition video footage, and the patient's entire body, including the legs, should be in view. The keypoint detection system could be reinforced with an embedded keypoint tracking system. Currently, the keypoints are detected independently in each frame, resulting in a normally distributed point cloud around the true keypoint, whereas a combined system would provide more accurate results by adapting the system to frame sequences instead of independent frames. This could significantly improve the system's speed estimation performance and, thus, the jerk count and LDLJ assessment.
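The paper does not specify how such an embedded tracker would be implemented. Purely as an illustration of damping the frame-to-frame detection jitter described above, a simple exponential smoother over the detected locations could look as follows; in practice a proper tracker operating on frame sequences (e.g., a Kalman filter) would be preferable, and the smoothing factor here is an arbitrary assumption.

```python
import numpy as np

def smooth_keypoint_track(raw_xy, alpha=0.3):
    """Illustrative exponential smoothing of per-frame keypoint detections.

    raw_xy : (n_frames, 2) independently detected locations (noisy point cloud)
    alpha  : smoothing factor; smaller values trust the running estimate more
    """
    raw_xy = np.asarray(raw_xy, dtype=float)
    smoothed = np.empty_like(raw_xy)
    smoothed[0] = raw_xy[0]
    for t in range(1, len(raw_xy)):
        smoothed[t] = alpha * raw_xy[t] + (1.0 - alpha) * smoothed[t - 1]
    return smoothed
```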
Numerous methods are used to evaluate various movement disorders, many of which require patients to wear different types of sensors (8, 25). Only a limited number of attached sensors is usually used

in the evaluation to minimize the patients' discomfort; therefore, the information collected by the wearable devices has limitations. Our automatic evaluation only requires video of the patient performing certain simple motor tasks. The patient is not required to wear any additional sensors or devices. Therefore, it is less burdensome for the patient, and results can be obtained faster during an outpatient visit. Video recordings can be used to holistically evaluate myoclonic jerks, while simultaneously isolating and evaluating different parts of the body.

On the other hand, myoclonus in EPM1 patients is known to fluctuate during the day, and many factors, such as unexpected external stimuli, stress, and sleep deprivation, may significantly aggravate myoclonus. Therefore, a limitation of the current protocol is the length of the video recording of the test panel, which provides an evaluation of myoclonus over a relatively short period of time. The method is also designed to evaluate certain simple motor tasks, and therefore it is not directly applicable for long-term follow-up. By contrast, wearable sensors are well suited for long-term home-based monitoring. Used separately, video recordings and wearable sensors can each provide valuable but limited information for the clinical evaluation, and it is unlikely that one selected method will provide a comprehensive evaluation of such a complex symptom phenotype as myoclonus in EPM1 patients. One aim of future development is to combine different types of automatic follow-up and evaluation methods. The automatic analyses of the videos described in this study may be used to plan the sensor recording, and these methods can be combined to validate the data obtained from the sensors.

It is always challenging to conduct clinical treatment trials using the best standardized and preferably objective quantitative method to evaluate changes in disease severity or clinical phenotype, and myoclonus is no exception to this rule. Our automatic myoclonus scoring system is a first step toward offering a standardized quantitative method. As previously discussed, promising results were found for quantifying the smoothness of movement by detecting smaller, high-frequency myoclonic jerks. When testing new drugs, the patients' subjective evaluation of disability and possible improvement in their symptoms are both taken into account. Some previous automated methods for analyzing symptoms have exhibited weaker correlation with patients' subjective feelings compared to clinical evaluation scales (26). Therefore, our method must also be tested by taking EPM1 patients' subjective evaluation of their symptom severity and disability into account.

The results of this pilot study strongly suggest that this method is promising for the quantification of myoclonus. However, it must still be validated in a study that includes a larger patient population, and its capacity for prospective follow-up must also be tested. To address the problem of evaluating fluctuations of myoclonus symptoms throughout the day, the automatic scoring must be modified for continuous assessment throughout the day, possibly in combination with other recording techniques. It is hoped that the automatic methods described above can be further improved and standardized for better and faster performance in the clinical setting.
For example, adjusting the original UMRS test panel to better meet the demands of the automatic pose estimation and keypoint recognition analyses would greatly enhance the method's effectiveness.

Conclusions

The results of this pilot study indicated that the automated keypoint detection and pose estimation method reliably quantified myoclonic jerks from video in EPM1 patients. The automatic

quantification of myoclonus demonstrated a high level of agreement with the previous clinical evaluation. It also effectively quantified the smoothness of movement and was sensitive enough to detect small-amplitude and high-frequency myoclonic jerks. In the future, the method should be validated in a study of a larger patient population, and its feasibility should be further tested. Finally, updating the video recording requirements, and possibly the test panel tasks, may yield the best possible results from the automated analyses.

Disclosure of Conflict of Interest

Jelena Hyppönen: Declarations of interest: none.

Anna Hakala, Kaapo Annala and Honglei Zhang: AH and KA are employees of Neuro Event Labs, the company that provided the equipment and technology used in the study. HZ works as a data science consultant for Neuro Event Labs. Neuro Event Labs has received a research grant for this study from Eisai.

Esa Mervaala: Declarations of interest: none.

Jukka Peltola: JP has participated in clinical trials for Eisai, UCB, and Bial; received research grants from Eisai, Medtronic, UCB, and Cyberonics/Livanova; received speaker honoraria from Cyberonics/Livanova, Eisai, Medtronic, Orion Pharma, and UCB; received support for travel to congresses from Cyberonics/Livanova, Eisai, Medtronic, and UCB; and participated in advisory boards for Cyberonics/Livanova, Eisai, Medtronic, UCB, and Pfizer. In addition, JP is a shareholder of Neuro Event Labs.

Reetta Kälviäinen: RK has received grants from the Academy of Finland and the Saastamoinen Foundation, speaker's honoraria from Eisai, UCB, and Orion, and honoraria for membership of advisory boards from Eisai, GW Pharmaceuticals, Marinus Pharmaceuticals, Sage Therapeutics, Takeda and UCB.

References

1. Frucht SJ, Leurgans SE, Hallett M, Fahn S. The Unified Myoclonus Rating Scale. Advances in Neurology. 2002;89:361.

2. Pietracupa S, Bruno E, Cavanna AE, Falla M, Zappia M, Colosimo C. Scales for hyperkinetic disorders: A systematic review. J Neurol Sci. 2015 November 15;358(1-2):9-21.

3. Goodfellow I, Bengio Y, Courville A. Deep Learning. MIT Press; 2016.

4. Cao Z, Simon T, Wei S, Sheikh Y. Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields. CoRR. 2016;abs/1611.08050.

5. Insafutdinov E, Pishchulin L, Andres B, Andriluka M, Schiele B. DeeperCut: A Deeper, Stronger, and Faster Multi-Person Pose Estimation Model. CoRR. 2016;abs/1605.03170.

6. Kendall A, Grimes M, Cipolla R. Convolutional networks for real-time 6-DOF camera relocalization. CoRR. 2015;abs/1505.07427.

7. Chen K, Gabriel P, Alasfour A, Gong C, Doyle WK, Devinsky O, et al. Patient-Specific Pose Estimation in Clinical Environments. IEEE J Transl Eng Health Med. 2018 October 10;6:2101111.

8. Kubota KJ, Chen JA, Little MA. Machine learning for large-scale wearable sensor data in Parkinson's disease: Concepts, promises, pitfalls, and futures. Mov Disord. 2016 September 01;31(9):1314-26.

9. Li T, Chen J, Hu C, Ma Y, Wu Z, Wan W, et al. Automatic Timed Up-and-Go Sub-Task Segmentation for Parkinson's Disease Patients Using Video-Based Activity Classification. IEEE Trans Neural Syst Rehabil Eng. 2018 November 01;26(11):2189-99.

10. Marchi V, Hakala A, Knight A, D'Acunto F, Scattoni ML, Guzzetta A, et al. Automated pose estimation captures key aspects of General Movements at eight to 17 weeks from conventional videos. Acta Paediatr. 2019 March 18.

11. Kälviäinen R, Khyuppenen J, Koskenkorva P, Eriksson K, Vanninen R, Mervaala E. Clinical picture of EPM1-Unverricht-Lundborg disease. Epilepsia. 2008 April 01;49(4):549.

12. Magaudda A, Ferlazzo E, Nguyen VH, Genton P. Unverricht-Lundborg disease, a condition with self-limited progression: long-term follow-up of 20 patients. Epilepsia. 2006 May 01;47(5):860.

13. Koskiniemi M, Van Vleymen B, Hakamies L, Lamusuo S, Taalas J. Piracetam relieves symptoms in progressive myoclonus epilepsy: a multicentre, randomised, double blind, crossover study comparing the efficacy and safety of three dosages of oral piracetam with placebo. Journal of Neurology, Neurosurgery, and Psychiatry. 1998 March 01;64(3):344.

14. Goldsmith D, Minassian BA. Efficacy and tolerability of perampanel in ten patients with Lafora disease. Epilepsy Behav. 2016 September 01;62:132-5.

15. Kalviainen R, Genton P, Andermann E, Andermann F, Magaudda A, Frucht SJ, et al. Brivaracetam in Unverricht-Lundborg disease (EPM1): Results from two randomized, double-blind, placebo-controlled studies. Epilepsia. 2016 February 01;57(2):210-21.

16. Balasubramanian S, Melendez-Calderon A, Roby-Brami A, Burdet E. On the analysis of movement smoothness. J Neuroeng Rehabil. 2015 December 09;12:112.

17. Hypponen J, Aikia M, Joensuu T, Julkunen P, Danner N, Koskenkorva P, et al. Refining the phenotype of Unverricht-Lundborg disease (EPM1): a population-wide Finnish study. Neurology. 2015 April 14;84(15):1529-36.

18. Simonyan K, Zisserman A. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv:1409.1556; 2014.

19. He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. CoRR. 2015;abs/1512.03385.

20. Deng J, Dong W, Socher R, Li L, Li K, Fei-Fei L. ImageNet: A large-scale hierarchical image database. In: CVPR; 2009.

21. Andriluka M, Pishchulin L, Gehler P, Schiele B. 2D Human Pose Estimation: New Benchmark and State of the Art Analysis. Proceedings of the 2014 IEEE Conference on Computer Vision and Pattern Recognition; Washington, DC, USA: IEEE Computer Society; 2014.

22. Lin T, Maire M, Belongie SJ, Bourdev LD, Girshick RB, Hays J, et al. Microsoft COCO: Common Objects in Context. CoRR. 2014;abs/1405.0312.

23. Fortun D, Bouthemy P, Kervrann C. Optical flow modeling and computation: a survey. Computer Vision and Image Understanding. 2015 May 01;134:21.

24. Akoglu H. User's guide to correlation coefficients. Turk J Emerg Med. 2018 August 07;18(3):91-3.

25. Rissanen SM, Ruonala V, Pekkonen E, Kankaanpaa M, Airaksinen O, Karjalainen PA. Signal features of surface electromyography in advanced Parkinson's disease during different settings of deep brain stimulation. Clin Neurophysiol. 2015 December 01;126(12):2290-8.

26. Rodriguez-Blazquez C, Forjaz MJ, Kurtis MM, Balestrino R, Martinez-Martin P. Rating Scales for Movement Disorders With Sleep Disturbances: A Narrative Review. Front Neurol. 2018 June 07;9:435.

Table 1. Description of primary and secondary keypoints used for the movement assessment during the UMRS test panel tasks. *Due to the primary video setup required by the UMRS video recording instructions, the auxiliary keypoints could not be used for the automated analysis of the leg action test panel.

UMRS task | Movement performed during the task | Primary keypoint ("moving") | Auxiliary keypoints ("not moving")
B. Neck | Head movement in flexion-extension and side-to-side rotation | Nose tip | Right shoulder, Left shoulder
C. Trunk | Trunk flexion when sitting | Neck | Right shoulder, Left shoulder
D. Right arm - moving | Finger-to-nose movement | Right wrist | Nose tip, Left shoulder
D. Right arm - static | Finger on the nose | Right wrist | Nose tip, Left shoulder
E. Left arm - moving | Finger-to-nose movement | Left wrist | Nose tip, Right shoulder
E. Left arm - static | Finger on the nose | Left wrist | Nose tip, Right shoulder
D.E. Arms forward | Both arms forward with palms down and then wrists extended | Right wrist, Left wrist | Nose tip
F. Right leg | Heel to toe shin | Right ankle | NA*
G. Left leg | Heel to toe shin | Left ankle | NA*

Table 2. Basic characteristics of the EPM1 patients included in the study.

Patient | Sex | Age | Wheelchair use | Age at EPM1 onset | Disease duration | Nr of AEDs | UMRS Myoclonus with Action: total score | UMRS Functional test: total score | AMRS primary keypoint jerk count: total score | AMRS LDLJ score of the primary keypoint: total score
1 | Male(a) | 47 | Wheelchair bound | 10 | 37 | 6 | 120 | 28 | 34.2 | 27.3
2 | Male(a) | 25 | Walks | 11 | 14 | 2 | 12 | 2 | 4.3 | 8
3 | Female(b) | 34 | Wheelchair bound | 10 | 24 | 4 | 69 | 9 | 8.8 | 36.8
4 | Male(a) | 25 | Walks | 10 | 15 | 2 | 29 | 3 | 13.5 | 13.8
5 | Female(a) | 18 | Occasionally | 11 | 7 | 3 | 43 | 14 | 18.5 | 21.5
6 | Female(a) | 41 | Walks | 10 | 31 | 4 | 8 | 1 | 6.5 | 10.8
7 | Female(a) | 23 | Wheelchair bound | 8 | 15 | 3 | 90 | 18 | 24.9 | 27
8 | Female(a) | 38 | Occasionally | 10 | 28 | 4 | 63 | 17 | 16.6 | 30
9 | Female(b) | 14 | Walks | 6 | 8 | 2 | 40 | 8 | 10.8 | 15.1
10 | Female(a) | 40 | Walks | 13 | 27 | 5 | 34 | 7 | 4.9 | 17.9

(a) Expansion mutation homozygote; (b) compound heterozygote. AED, anti-epileptic drug; UMRS, Unified Myoclonus Rating Scale; AMRS, Automatic Myoclonus Rating Scale.

Table 3. Main results of the correlation of the UMRS Myoclonus with Action scores with the AMRS jerk count scores of the primary keypoints and the LDLJ scores of the primary keypoint. Data presented as Spearman's correlation coefficient, P-value.

UMRS Myoclonus with Action score | Jerk count of the primary keypoint (corresponding task) | LDLJ score of the primary keypoint (corresponding task)
Neck (B. Neck) | -0.110, P = 0.763 | 0.664, P = 0.036
Trunk (C. Trunk) | -0.040, P = 0.912 | 0.767, P = 0.010
Right arm (D. Right arm) | 0.730, P = 0.016 | 0.763, P = 0.010
Left arm (E. Left arm) | 0.881, P = 0.001 | 0.786, P = 0.007
Right leg (F. Right leg) | 0.686, P = 0.028 | 0.743, P = 0.014
Left leg (G. Left leg) | 0.554, P = 0.097 | 0.689, P = 0.027
Total score (Total sum) | 0.770, P = 0.009 | 0.879, P = 0.001
