Project description: The virtual reality (VR) simulator has emerged as a laparoscopic surgical skill training tool that needs validation through brain-behavior analysis. Therefore, the relationship between brain networks and skilled behavior was evaluated using functional near-infrared spectroscopy (fNIRS) in seven experienced right-handed surgeons and six right-handed medical students performing the Fundamentals of Laparoscopic Surgery (FLS) pattern cutting task on a physical and a VR simulator. Multiple regression and path analysis (MRPA) found that the FLS performance score was statistically significantly related to the interregional directed functional connectivity from the right prefrontal cortex to the supplementary motor area, with F(2, 114) = 9, p < 0.001, and R2 = 0.136. Additionally, a two-way multivariate analysis of variance (MANOVA) found a statistically significant effect of the simulator technology on the interregional directed functional connectivity from the right prefrontal cortex to the left primary motor cortex (F(1, 15) = 6.002, p = 0.027; partial η2 = 0.286), which can be related to differential right-lateralized executive control of attention. MRPA then found that the coefficient of variation (CoV) of the FLS performance score was statistically significantly associated with the CoV of the interregional directed functional connectivity from the right primary motor cortex to the left primary motor cortex and from the left primary motor cortex to the left prefrontal cortex, with F(2, 22) = 3.912, p = 0.035, and R2 = 0.262. This highlights the importance of efference copy information flowing from the motor cortices to the prefrontal cortex for postulated left-lateralized perceptual decision-making that reduces behavioral variability.
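The regression component of an MRPA analysis like the one above can be illustrated with ordinary least squares. The sketch below regresses a performance score on two connectivity predictors and reports the overall F-test and R2, mirroring the F(2, 114) reporting format; the data are synthetic and the variable names (conn_rpfc_sma, conn_rpfc_lm1) are illustrative, not taken from the study.

```python
# Minimal sketch of the regression step in MRPA: FLS score regressed on two
# directed-connectivity predictors. All values are synthetic placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 117  # with 2 predictors + intercept, residual df = 114 as in the abstract

conn_rpfc_sma = rng.normal(size=n)  # RPFC -> SMA connectivity (synthetic)
conn_rpfc_lm1 = rng.normal(size=n)  # RPFC -> left M1 connectivity (synthetic)
fls_score = 0.4 * conn_rpfc_sma + rng.normal(size=n)  # synthetic outcome

X = sm.add_constant(np.column_stack([conn_rpfc_sma, conn_rpfc_lm1]))
model = sm.OLS(fls_score, X).fit()
print(f"F({int(model.df_model)}, {int(model.df_resid)}) = {model.fvalue:.2f}, "
      f"p = {model.f_pvalue:.4f}, R2 = {model.rsquared:.3f}")
```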
Project description: Background: Excessive tool-tissue interaction forces often result in tissue damage and intraoperative complications, while insufficient forces prevent completion of the task. This review sought to explore the tool-tissue interaction forces exerted by instruments during surgery across different specialities, tissues, manoeuvres and experience levels. Materials & methods: A PRISMA-guided systematic review was carried out using the Embase, Medline and Web of Science databases. Results: Of 462 articles screened, 45 studies discussing surgical tool-tissue forces were included. The studies were categorized into 9 different specialities, with the mean of average forces lowest for ophthalmology (0.04 N) and highest for orthopaedic surgery (210 N). Nervous tissue required the least force to manipulate (mean of average: 0.4 N), whilst connective tissue (including bone) required the most (mean of average: 45.8 N). Among manoeuvres, drilling recorded the highest forces (mean of average: 14 N), whilst sharp dissection recorded the lowest (mean of average: 0.03 N). When comparing differences in the mean of average forces between groups, novices exerted 22.7% more force than experts, and the presence of a feedback mechanism (e.g. audio) reduced exerted forces by 47.9%. Conclusions: The measurement of tool-tissue forces is a novel but rapidly expanding field. The range of forces applied varies according to surgical speciality, tissue, manoeuvre, operator experience and feedback provided. Knowledge of the safe range of surgical forces will improve surgical safety whilst maintaining effectiveness. Measuring forces during surgery may provide an objective metric for training and assessment. The development of smart instruments, robotics and integrated feedback systems will facilitate this.
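As a worked example of the between-group comparisons quoted above (novices exerting 22.7% more force; feedback reducing force by 47.9%), the sketch below shows the underlying percent-change arithmetic. The force values are placeholders, not data from the included studies.

```python
# Sketch of the percent-difference comparisons reported in the review.
def percent_change(reference: float, comparison: float) -> float:
    """Percent change of `comparison` relative to `reference`."""
    return 100.0 * (comparison - reference) / reference

expert_force, novice_force = 1.0, 1.227  # arbitrary units, illustrative only
print(f"Novice vs expert: {percent_change(expert_force, novice_force):+.1f}%")

no_feedback, with_feedback = 1.0, 0.521
print(f"Feedback effect: {percent_change(no_feedback, with_feedback):+.1f}%")
```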
Project description: Background: Tissue handling is one of the pivotal parts of surgical procedures. We aimed to elucidate the characteristics of experts' left-hand technique during laparoscopic tissue dissection. Methods: Participants performed tissue dissection around the porcine aorta. The grasping force and grasping point of the grasping forceps were measured using custom-made sensor forceps, and the forceps location was recorded by a motion capture (Mocap) system. Two experts scored the recorded videos according to the global operative assessment of laparoscopic skills (GOALS), and based on the mean scores, participants were divided into three groups: novice (<10), intermediate (≥10 to <20), and expert (≥20). Force-based metrics were compared among the three groups using the Kruskal-Wallis test. Principal component analysis (PCA) was then performed on the significant metrics. Results: A total of 42 training sessions were successfully recorded. The statistical tests revealed that novices regrasped tissue frequently (median total number of grasps: novices 268.0, intermediates 89.5, experts 52.0; p < 0.0001), that the traction angle against the aorta became more stable with expertise (median weighted standard deviation of traction angle: novices 30.74°, intermediates 26.80°, experts 23.75°; p = 0.0285), and that the grasping point moved away from the aorta with skill competency (median percentage of grasping force applied in the close zone, 0 to 2.0 cm from the aorta: novices 34.96%, intermediates 21.61%, experts 10.91%; p = 0.0032). PCA showed that the efficiency-related (total number of grasps) and effective tissue traction-related (weighted average grasping position along the Y-axis and distribution of grasping area) metrics contributed most to the skill difference (proportion of variance of the first principal component: 60.83%). Conclusion: The present results revealed experts' left-hand characteristics, including correct tissue grasping, sufficient tissue traction away from the aorta, and a stable traction angle. Our next challenge is to provide immediate visual feedback on site after this wet-lab training and to shorten trainees' learning curves.
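The analysis pipeline above (Kruskal-Wallis across three skill groups, then PCA on the significant metrics) can be sketched as follows. All values are synthetic, group sizes are invented, and the metric names only echo those in the abstract.

```python
# Minimal sketch: Kruskal-Wallis test on one force-based metric across skill
# groups, then PCA over several metrics. Synthetic data throughout.
import numpy as np
from scipy.stats import kruskal
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
# Total number of grasps per training session for each group (synthetic)
novice = rng.normal(268, 40, size=14)
intermediate = rng.normal(90, 25, size=14)
expert = rng.normal(52, 15, size=14)

h, p = kruskal(novice, intermediate, expert)
print(f"Kruskal-Wallis: H = {h:.2f}, p = {p:.4g}")

# PCA over several (synthetic) significant metrics for all 42 sessions
metrics = rng.normal(size=(42, 3))  # e.g. grasp count, Y-position, grasp area
pca = PCA(n_components=2).fit(StandardScaler().fit_transform(metrics))
print(f"Proportion of variance, PC1: {pca.explained_variance_ratio_[0]:.2%}")
```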
Project description: Objective: Assessment of surgical skills is crucial for improving training standards and ensuring the quality of primary care. This study aimed to develop a gradient boosting classification model (GBM) that classifies surgical expertise into inexperienced, competent, and experienced levels in robot-assisted surgery (RAS) using visual metrics. Methods: Eye gaze data were recorded from 11 participants performing four subtasks (blunt dissection, retraction, cold dissection, and hot dissection) on live pigs using the da Vinci robot, and visual metrics were extracted from these data. One expert RAS surgeon evaluated each participant's performance and expertise level using the modified Global Evaluative Assessment of Robotic Skills (GEARS) assessment tool. The extracted visual metrics were used to classify surgical skill levels and to evaluate individual GEARS metrics. Analysis of variance (ANOVA) was used to test the differences in each feature across skill levels. Results: Classification accuracies for blunt dissection, retraction, cold dissection, and hot dissection were 95%, 96%, 96%, and 96%, respectively. Time to completion differed significantly among the three skill levels only for retraction (p = 0.04). Performance differed significantly across the three skill levels for all subtasks (p-values < 0.01). The extracted visual metrics were strongly associated with GEARS metrics (R2 > 0.7 for the GEARS metric evaluation models). Conclusions: Machine learning (ML) algorithms trained on the visual metrics of RAS surgeons can classify surgical skill levels and evaluate GEARS measures. The time to complete a surgical subtask should not be considered a stand-alone factor for skill level assessment.
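A three-class gradient boosting classifier of the kind described above can be sketched with scikit-learn. The feature matrix stands in for the gaze-derived visual metrics; the dimensions and data are synthetic assumptions, not the study's.

```python
# Hedged sketch of a GBM skill classifier over visual metrics (synthetic data).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n_trials, n_visual_metrics = 44, 8  # e.g. 11 participants x 4 subtasks
X = rng.normal(size=(n_trials, n_visual_metrics))
y = rng.integers(0, 3, size=n_trials)  # 0=inexperienced, 1=competent, 2=experienced

gbm = GradientBoostingClassifier(n_estimators=100, max_depth=3, random_state=0)
scores = cross_val_score(gbm, X, y, cv=4)
print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```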
Project description: Background: Variation in surgical skill leads to differences in patient outcomes, and identifying poorly skilled surgeons and providing them with constructive feedback contribute to surgical quality improvement. The aim of this study was to develop an algorithm for evaluating surgical skills in laparoscopic cholecystectomy based on the features of elementary functional surgical gestures (Surgestures). Materials and methods: Seventy-five laparoscopic cholecystectomy videos were collected from 33 surgeons in five hospitals. The phases of hepatocystic triangle mobilization and gallbladder dissection from the liver bed in each video were annotated with 14 Surgestures. The videos were grouped into competent and incompetent based on the quantiles of the modified global operative assessment of laparoscopic skills (mGOALS). Surgeon-related information, clinical data, and intraoperative events were analyzed. Sixty-three Surgesture features were extracted to develop the surgical skill classification algorithm. The area under the receiver operating characteristic curve of the classification and the top features were evaluated. Results: Correlation analysis revealed that most perioperative factors had no significant correlation with mGOALS scores. The incompetent group had a higher probability of cholecystic vascular injury than the competent group (30.8% vs. 6.1%, P = 0.004). The competent group demonstrated fewer inefficient Surgestures, a lower shift frequency, and a larger dissection-exposure ratio of Surgestures during the procedure. The area under the receiver operating characteristic curve of the classification algorithm reached 0.866. Different Surgesture features contributed variably to overall performance and to specific skill items. Conclusion: The computer algorithm accurately classified surgeons of different skill levels using objective Surgesture features, adding insight into the design of automatic laparoscopic surgical skill assessment tools with technical feedback.
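The binary (competent vs. incompetent) classification and its AUC evaluation can be sketched as below. The abstract does not name the classifier, so a random forest stands in as an assumption; the 75 x 63 feature matrix mimics the reported dimensions but contains synthetic values.

```python
# Minimal sketch of binary skill classification from Surgesture features,
# evaluated by cross-validated AUC. Data and classifier choice are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(3)
n_videos, n_surgesture_features = 75, 63
X = rng.normal(size=(n_videos, n_surgesture_features))
y = rng.integers(0, 2, size=n_videos)  # 1 = competent (per mGOALS quantiles)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
proba = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
print(f"AUC: {roc_auc_score(y, proba):.3f}")
```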
Project description: The aim of this study was to develop machine learning classification models using electroencephalogram (EEG) and eye-gaze features to predict the level of surgical expertise in robot-assisted surgery (RAS). EEG and eye-gaze data were recorded from 11 participants who performed cystectomy, hysterectomy, and nephrectomy using the da Vinci robot. Skill level was evaluated by an expert RAS surgeon using the modified Global Evaluative Assessment of Robotic Skills (GEARS) tool, and data from three subtasks were extracted to classify skill levels using three classification models: multinomial logistic regression (MLR), random forest (RF), and gradient boosting (GB). The GB algorithm was used with a combination of EEG and eye-gaze data to classify skill levels, and differences between the models were tested using two-sample t-tests. The GB model using EEG features showed the best performance for blunt dissection (83% accuracy), retraction (85% accuracy), and burn dissection (81% accuracy). The combination of EEG and eye-gaze features using the GB algorithm improved the accuracy of skill level classification to 88% for blunt dissection, 93% for retraction, and 86% for burn dissection. The implementation of objective skill classification models in clinical settings may enhance the RAS surgical training process by providing objective feedback about performance to surgeons and their teachers.
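The three-model comparison on concatenated EEG and eye-gaze features can be sketched as follows. Feature dimensions, sample counts, and data are synthetic stand-ins; only the model lineup (MLR, RF, GB) comes from the abstract.

```python
# Sketch of the MLR/RF/GB comparison on combined EEG + eye-gaze features.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(4)
eeg = rng.normal(size=(33, 10))   # e.g. band-power features (synthetic)
gaze = rng.normal(size=(33, 6))   # e.g. fixation/pupil features (synthetic)
X = np.hstack([eeg, gaze])        # combined feature set
y = rng.integers(0, 3, size=33)   # three GEARS-derived skill levels

models = {
    "MLR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "GB": GradientBoostingClassifier(random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=3).mean()
    print(f"{name}: mean CV accuracy = {acc:.2f}")
```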
Project description: During its earliest stages, the avian embryo is approximately planar. Through a complex series of folds, this flat geometry is transformed into the intricate three-dimensional structure of the developing organism. Formation of the head fold (HF) is the first step in this cascading sequence of out-of-plane tissue folds. The HF establishes the anterior extent of the embryo and initiates heart, foregut and brain development. Here, we use a combination of computational modeling and experiments to determine the physical forces that drive HF formation. Using chick embryos cultured ex ovo, we measured: (1) changes in tissue morphology in living embryos using optical coherence tomography (OCT); (2) morphogenetic strains (deformations) through the tracking of tissue labels; and (3) regional tissue stresses using changes in the geometry of circular wounds punched through the blastoderm. To determine the physical mechanisms that generate the HF, we created a three-dimensional computational model of the early embryo, consisting of pseudoelastic plates representing the blastoderm and vitelline membrane. Based on previous experimental findings, we simulated the following morphogenetic mechanisms: (1) convergent extension in the neural plate (NP); (2) cell wedging along the anterior NP border; and (3) autonomous in-plane deformations outside the NP. Our numerical predictions agree relatively well with the observed morphology, as well as with our measured stress and strain distributions. The model also predicts the abnormal tissue geometries produced when development is mechanically perturbed. Taken together, the results suggest that the proposed morphogenetic mechanisms provide the main tissue-level forces that drive HF formation.
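The strain measurement described above (item 2) admits a compact worked example: three tracked tissue labels form a triangle whose reference and deformed edge vectors determine the deformation gradient F, from which the Green-Lagrange strain E = 0.5 (F^T F - I) follows. This is a generic small-triangle approach, not the paper's actual pipeline, and the coordinates are invented.

```python
# Hedged sketch: in-plane morphogenetic strain from three tracked labels.
import numpy as np

# Reference (undeformed) and current (deformed) label positions, in mm
X_ref = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
x_cur = np.array([[0.0, 0.0], [1.1, 0.05], [0.02, 0.9]])

# Edge vectors of the label triangle in both configurations
dX = (X_ref[1:] - X_ref[0]).T  # 2x2 matrix of reference edges
dx = (x_cur[1:] - x_cur[0]).T  # 2x2 matrix of deformed edges

F = dx @ np.linalg.inv(dX)       # deformation gradient, dx = F dX
E = 0.5 * (F.T @ F - np.eye(2))  # Green-Lagrange strain tensor
print("F =\n", F)
print("E =\n", E)
```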
Project description: Background: Videos have been used in many settings, including medical simulation. Limited information currently exists on video-based assessment in surgical training, and effective assessment tools have a substantial impact on the future of training. The objectives of this study were to evaluate the inter-rater reliability of video-based assessment of orthopedic surgery residents performing open cadaveric simulation procedures and to explore the benefits and limitations of video-based assessment. Methods: A multi-method technique was used. In the quantitative portion, four residents participated in a Surgical Objective Structured Clinical Examination in 2017 at a quaternary care training center. A single-camera bird's-eye view was used to videotape the procedures. Five orthopedic surgeons evaluated the surgical videos using the Ottawa Surgical Competency Operating Room Evaluation. The intraclass correlation coefficient (ICC) was used to calculate inter-rater reliability. In the qualitative section, semi-structured interviews were used to explore the perceived strengths and limitations of video-based assessment. Results and discussion: The scores obtained using video-based assessment demonstrated good inter-rater reliability (ICC = 0.832, p = 0.014) for assessing open orthopedic procedures on cadavers. Qualitatively, the strengths of video-based assessment in this study were its ability to assess global performance and/or specific skills, the ability to reassess points missed during live assessment, and its potential use for less common procedures. It also allows for detailed constructive feedback, flexible assessment time, anonymous assessment, and multiple assessors, and it serves as a good coaching tool. The main limitations of video-based assessment were poor audio-video quality and questionable feasibility for assessing readiness for practice. Conclusion: Video-based assessment is a potential adjunct to live assessment in open orthopedic procedures, with good inter-rater reliability. Improving audio-video quality will enhance the quality of the assessment and the effectiveness of this tool in surgical training.
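An inter-rater reliability analysis of this shape (five raters scoring four residents) can be sketched with the pingouin package, which reports the standard ICC variants. The scores below are invented placeholders, not O-SCORE data from the study.

```python
# Minimal sketch: intraclass correlation over a complete raters-by-targets design.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(5)
rows = []
for resident in range(4):       # four residents (targets)
    for rater in range(5):      # five surgeon raters
        score = 3 + 0.5 * resident + rng.normal(0, 0.2)  # placeholder rating
        rows.append({"resident": resident, "rater": rater, "score": score})
df = pd.DataFrame(rows)

icc = pg.intraclass_corr(data=df, targets="resident",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC", "pval"]])
```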
Project description: Purpose: Surgeons' skill in the operating room is a major determinant of patient outcomes. Assessment of surgeons' skill is necessary to improve patient outcomes and quality of care through surgical training and coaching. Methods for video-based assessment of surgical skill can provide objective and efficient tools for surgeons. Our work introduces a new method based on attention mechanisms and provides a comprehensive comparative analysis of state-of-the-art methods for video-based assessment of surgical skill in the operating room. Methods: Using a dataset of 99 videos of capsulorhexis, a critical step in cataract surgery, we evaluated image feature-based methods and two deep learning methods for assessing skill from RGB videos. In the first method, we detect instrument tips as keypoints and predict surgical skill using temporal convolutional neural networks. In the second method, we propose a frame-wise encoder (a 2D convolutional neural network) followed by a temporal model (a recurrent neural network), both augmented by visual attention mechanisms. We computed the area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and predictive values through fivefold cross-validation. Results: For classifying a binary skill label (expert vs. novice), AUC estimates ranged from 0.49 (95% confidence interval; CI = 0.37 to 0.60) to 0.76 (95% CI = 0.66 to 0.85) for the image feature-based methods. None of these methods achieved consistently high sensitivity and specificity. For the deep learning methods, the AUC was 0.79 (95% CI = 0.70 to 0.88) using keypoints alone, and 0.78 (95% CI = 0.69 to 0.88) and 0.75 (95% CI = 0.65 to 0.85) with and without attention mechanisms, respectively. Conclusion: Deep learning methods are necessary for video-based assessment of surgical skill in the operating room. Attention mechanisms improved the discriminative ability of the network. Our findings should be evaluated for external validity in other datasets.
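The second method above (frame-wise CNN encoder, recurrent temporal model, attention) can be sketched in PyTorch. The layer sizes, the toy encoder, and the attention-pooling formulation below are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: frame-wise CNN encoder + GRU with temporal attention pooling
# for binary (expert vs. novice) skill classification from RGB video clips.
import torch
import torch.nn as nn

class SkillNet(nn.Module):
    def __init__(self, feat_dim=32, hidden_dim=64):
        super().__init__()
        # Frame-wise encoder: a toy 2D CNN standing in for the paper's encoder
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)  # temporal attention scores
        self.head = nn.Linear(hidden_dim, 2)  # expert vs. novice logits

    def forward(self, video):                 # video: (B, T, 3, H, W)
        b, t = video.shape[:2]
        feats = self.encoder(video.flatten(0, 1)).view(b, t, -1)
        hidden, _ = self.rnn(feats)           # (B, T, hidden_dim)
        weights = torch.softmax(self.attn(hidden), dim=1)  # attention over time
        pooled = (weights * hidden).sum(dim=1)  # attention-weighted summary
        return self.head(pooled)

clip = torch.randn(2, 8, 3, 64, 64)  # 2 clips of 8 frames each (random pixels)
print(SkillNet()(clip).shape)        # torch.Size([2, 2])
```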