Objective Evaluation

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 610,983 Experts worldwide, ranked by the ideXlab platform.

Nicholas Stergiou - One of the best experts on this subject based on the ideXlab platform.

  • Objective Evaluation of expert and novice performance during robotic surgical training tasks
    2009
    Co-Authors: Timothy N Judkins, Dmitry Oleynikov, Nicholas Stergiou
    Abstract:

    Robotic laparoscopic surgery has revolutionized minimally invasive surgery for the treatment of abdominal pathologies. However, current training techniques rely on subjective Evaluation. The authors sought to identify Objective measures of robotic surgical performance by comparing novices and experts during three training tasks. Five novices (medical students) were trained in three tasks with the da Vinci Surgical System. Five experts trained in advanced laparoscopy also performed the three tasks. Time to task completion (TTC), total distance traveled (D), speed (S), curvature (κ), and relative phase (Φ) were measured. Before training, TTC, D, and κ were significantly smaller for experts than for novices (p < 0.05), whereas S was significantly larger for experts than for novices before training (p < 0.05). Novices performed significantly better after training, as shown by smaller TTC, D, and κ, and larger S. Novice performance after training approached expert performance. This study clearly demonstrated the ability of Objective kinematic measures to distinguish between novice and expert performance and training effects in the performance of robotic surgical training tasks.

Dmitry Oleynikov - One of the best experts on this subject based on the ideXlab platform.

  • Objective Evaluation of expert and novice performance during robotic surgical training tasks
    2009
    Co-Authors: Timothy N Judkins, Dmitry Oleynikov, Nicholas Stergiou
    Abstract:

    Robotic laparoscopic surgery has revolutionized minimally invasive surgery for the treatment of abdominal pathologies. However, current training techniques rely on subjective Evaluation. The authors sought to identify Objective measures of robotic surgical performance by comparing novices and experts during three training tasks. Five novices (medical students) were trained in three tasks with the da Vinci Surgical System. Five experts trained in advanced laparoscopy also performed the three tasks. Time to task completion (TTC), total distance traveled (D), speed (S), curvature (κ), and relative phase (Φ) were measured. Before training, TTC, D, and κ were significantly smaller for experts than for novices (p < 0.05), whereas S was significantly larger for experts than for novices before training (p < 0.05). Novices performed significantly better after training, as shown by smaller TTC, D, and κ, and larger S. Novice performance after training approached expert performance. This study clearly demonstrated the ability of Objective kinematic measures to distinguish between novice and expert performance and training effects in the performance of robotic surgical training tasks.

Timothy N Judkins - One of the best experts on this subject based on the ideXlab platform.

  • Objective Evaluation of expert and novice performance during robotic surgical training tasks
    2009
    Co-Authors: Timothy N Judkins, Dmitry Oleynikov, Nicholas Stergiou
    Abstract:

    Robotic laparoscopic surgery has revolutionized minimally invasive surgery for the treatment of abdominal pathologies. However, current training techniques rely on subjective Evaluation. The authors sought to identify Objective measures of robotic surgical performance by comparing novices and experts during three training tasks. Five novices (medical students) were trained in three tasks with the da Vinci Surgical System. Five experts trained in advanced laparoscopy also performed the three tasks. Time to task completion (TTC), total distance traveled (D), speed (S), curvature (κ), and relative phase (Φ) were measured. Before training, TTC, D, and κ were significantly smaller for experts than for novices (p < 0.05), whereas S was significantly larger for experts than for novices before training (p < 0.05). Novices performed significantly better after training, as shown by smaller TTC, D, and κ, and larger S. Novice performance after training approached expert performance. This study clearly demonstrated the ability of Objective kinematic measures to distinguish between novice and expert performance and training effects in the performance of robotic surgical training tasks.
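The kinematic measures named in the abstract (TTC, D, S, κ) can all be derived from a sampled instrument-tip trajectory. The sketch below is illustrative only, not the authors' implementation: the function name, the 2-D example trajectory, and the use of the standard planar curvature formula κ = |x′y″ − y′x″| / (x′² + y′²)^(3/2) are assumptions for a minimal example, and relative phase (Φ) is omitted because it requires a pair of coordinated trajectories.

```python
# Hypothetical sketch of the kinematic measures described above, computed
# from a 2-D instrument-tip trajectory sampled at a fixed interval dt.
import numpy as np

def kinematic_measures(positions, dt):
    """positions: (N, 2) array of tip coordinates sampled every dt seconds."""
    v = np.gradient(positions, dt, axis=0)   # velocity via finite differences
    a = np.gradient(v, dt, axis=0)           # acceleration
    speed = np.linalg.norm(v, axis=1)
    ttc = (len(positions) - 1) * dt          # time to task completion (TTC)
    dist = np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1))  # D
    # Planar curvature: kappa = |x'y'' - y'x''| / (x'^2 + y'^2)^(3/2)
    num = np.abs(v[:, 0] * a[:, 1] - v[:, 1] * a[:, 0])
    denom = np.maximum(speed ** 3, 1e-9)     # guard against division by zero
    kappa = num / denom
    return {"TTC": ttc, "D": dist, "S": speed.mean(), "kappa": kappa.mean()}

# Example: a straight-line reach has (numerically near-)zero curvature.
t = np.linspace(0.0, 2.0, 201)
straight = np.column_stack([t, 2.0 * t])
m = kinematic_measures(straight, dt=t[1] - t[0])
```

Under this sketch, a more direct path (smaller D and κ) at higher speed yields the expert-like profile the abstract describes.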

Martial Hebert - One of the best experts on this subject based on the ideXlab platform.

  • toward Objective Evaluation of image segmentation algorithms
    2007
    Co-Authors: Ranjith Unnikrishnan, Caroline Pantofaru, Martial Hebert
    Abstract:

Unsupervised image segmentation is an important component in many image understanding algorithms and practical vision systems. However, Evaluation of segmentation algorithms thus far has been largely subjective, leaving a system designer to judge the effectiveness of a technique based only on intuition and results in the form of a few example segmented images. This is largely due to image segmentation being an ill-defined problem: there is no unique ground-truth segmentation of an image against which the output of an algorithm may be compared. This paper demonstrates how a recently proposed measure of similarity, the Normalized Probabilistic Rand (NPR) index, can be used to perform a quantitative comparison between image segmentation algorithms using a hand-labeled set of ground-truth segmentations. We show that the measure allows principled comparisons between segmentations created by different algorithms, as well as segmentations on different images. We outline a procedure for algorithm Evaluation through an example Evaluation of some familiar algorithms: the mean-shift-based algorithm, an efficient graph-based segmentation algorithm, a hybrid algorithm that combines the strengths of both methods, and expectation maximization. Results are presented on the 300 images in the publicly available Berkeley segmentation dataset.

  • a measure for Objective Evaluation of image segmentation algorithms
    2005
    Co-Authors: Ranjith Unnikrishnan, Caroline Pantofaru, Martial Hebert
    Abstract:

Despite significant advances in image segmentation techniques, Evaluation of these techniques thus far has been largely subjective. Typically, the effectiveness of a new algorithm is demonstrated only by the presentation of a few segmented images and is otherwise left to subjective Evaluation by the reader. Little effort has been spent on the design of perceptually correct measures to compare an automatic segmentation of an image to a set of hand-segmented examples of the same image. This paper demonstrates how a modification of the Rand index, the Normalized Probabilistic Rand (NPR) index, meets the requirements of large-scale performance Evaluation of image segmentation. We show that the measure has a clear probabilistic interpretation as the maximum likelihood estimator of an underlying Gibbs model, can be correctly normalized to account for the inherent similarity in a set of ground truth images, and can be computed efficiently for large datasets. Results are presented on images from the publicly available Berkeley Segmentation dataset.
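As a concrete point of reference, the sketch below computes the plain Rand index that the NPR index builds on: the fraction of pixel pairs whose grouping (same segment vs. different segments) agrees between two labelings. This is a minimal illustration only; the NPR index described in the abstract additionally weights pairs probabilistically across a set of ground-truth segmentations and normalizes the score against expected similarity, both of which this sketch omits. The function name and toy labelings are assumptions.

```python
# Minimal sketch of the plain Rand index between two segmentations,
# given as flat arrays of per-pixel segment labels. O(N^2) in the number
# of pixels, so suitable only for small illustrative inputs.
import numpy as np
from itertools import combinations

def rand_index(seg_a, seg_b):
    a = np.asarray(seg_a).ravel()
    b = np.asarray(seg_b).ravel()
    pairs = list(combinations(range(a.size), 2))
    agree = 0
    for i, j in pairs:
        same_a = a[i] == a[j]          # same segment in labeling A?
        same_b = b[i] == b[j]          # same segment in labeling B?
        agree += same_a == same_b      # count pairs where A and B concur
    return agree / len(pairs)
```

Note that the index depends only on pair groupings, so it is invariant to permuting the label values: `rand_index([0, 0, 1, 1], [1, 1, 0, 0])` is 1.0, which is exactly the property needed when there is no canonical labeling of segments.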
