Incoming Frame

The Experts below are selected from a list of 3378 Experts worldwide, ranked by the ideXlab platform

Yebin Liu - One of the best experts on this subject based on the ideXlab platform.

  • SimulCap : Single-View Human Performance Capture with Cloth Simulation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zerong Zheng, Gerard Pons-moll, Yuan Zhong, Jianhui Zhao, Qionghai Dai, Yebin Liu
    Abstract:

    This paper proposes a new method for live free-viewpoint human performance capture with dynamic details (e.g., cloth wrinkles) using a single RGBD camera. Our main contributions are: (i) a multi-layer representation of garments and body, and (ii) a physics-based performance capture procedure. We first digitize the performer using a multi-layer surface representation that includes the undressed body surface and separate clothing meshes. For performance capture, we perform skeleton tracking, cloth simulation, and iterative depth fitting sequentially for each Incoming Frame. By incorporating cloth simulation into the performance capture pipeline, we can simulate plausible cloth dynamics and cloth-body interactions even in occluded regions, which was not possible with previous capture methods. Moreover, by formulating depth fitting as a physical process, our system produces cloth tracking results that are consistent with the depth observation while still maintaining physical constraints. Results and evaluations show the effectiveness of our method, which also enables new types of applications such as cloth retargeting, free-viewpoint video rendering, and animation.
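
    The pipeline above ends with depth fitting formulated as a physical process. As an illustration of that idea only (a minimal sketch, not the authors' implementation: the mass-spring model, function names, and constants are all simplified assumptions), the code below pulls garment vertices toward observed depth points with fitting forces while edge springs maintain physical constraints:

    ```python
    import numpy as np

    def depth_fitting_step(verts, vel, edges, rest_len, depth_targets,
                           k_fit=50.0, k_edge=500.0, damping=0.98, dt=1e-3):
        # Internal constraint forces: springs along garment mesh edges keep
        # the cloth physically plausible while it is pulled toward the data.
        forces = np.zeros_like(verts)
        for (i, j), r in zip(edges, rest_len):
            d = verts[j] - verts[i]
            length = np.linalg.norm(d) + 1e-12
            f = k_edge * (length - r) * (d / length)
            forces[i] += f
            forces[j] -= f

        # Data forces: pull each visible vertex toward its depth correspondence
        # (given directly here; a real system searches the depth map per frame).
        visible = ~np.isnan(depth_targets[:, 0])
        forces[visible] += k_fit * (depth_targets[visible] - verts[visible])

        vel = damping * (vel + dt * forces)  # unit mass per vertex
        return verts + dt * vel, vel

    # Toy usage: a two-vertex "garment" relaxes toward a shifted observation.
    verts = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
    vel = np.zeros_like(verts)
    edges, rest_len = [(0, 1)], [0.1]
    targets = verts + np.array([0.0, 0.0, 0.05])  # surface observed 5 cm deeper
    for _ in range(200):
        verts, vel = depth_fitting_step(verts, vel, edges, rest_len, targets)
    ```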

  • CVPR - SimulCap : Single-View Human Performance Capture With Cloth Simulation
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
    Co-Authors: Zerong Zheng, Gerard Pons-moll, Yuan Zhong, Jianhui Zhao, Qionghai Dai, Yebin Liu
    Abstract:

    This paper proposes a new method for live free-viewpoint human performance capture with dynamic details (e.g., cloth wrinkles) using a single RGBD camera. Our main contributions are: (i) a multi-layer representation of garments and body, and (ii) a physics-based performance capture procedure. We first digitize the performer using a multi-layer surface representation that includes the undressed body surface and separate clothing meshes. For performance capture, we perform skeleton tracking, cloth simulation, and iterative depth fitting sequentially for each Incoming Frame. By incorporating cloth simulation into the performance capture pipeline, we can simulate plausible cloth dynamics and cloth-body interactions even in occluded regions, which was not possible with previous capture methods. Moreover, by formulating depth fitting as a physical process, our system produces cloth tracking results that are consistent with the depth observation while still maintaining physical constraints. Results and evaluations show the effectiveness of our method, which also enables new types of applications such as cloth retargeting, free-viewpoint video rendering, and animation.

Zerong Zheng - One of the best experts on this subject based on the ideXlab platform.

  • SimulCap : Single-View Human Performance Capture with Cloth Simulation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zerong Zheng, Gerard Pons-moll, Yuan Zhong, Jianhui Zhao, Qionghai Dai, Yebin Liu
    Abstract:

    This paper proposes a new method for live free-viewpoint human performance capture with dynamic details (e.g., cloth wrinkles) using a single RGBD camera. Our main contributions are: (i) a multi-layer representation of garments and body, and (ii) a physics-based performance capture procedure. We first digitize the performer using a multi-layer surface representation that includes the undressed body surface and separate clothing meshes. For performance capture, we perform skeleton tracking, cloth simulation, and iterative depth fitting sequentially for each Incoming Frame. By incorporating cloth simulation into the performance capture pipeline, we can simulate plausible cloth dynamics and cloth-body interactions even in occluded regions, which was not possible with previous capture methods. Moreover, by formulating depth fitting as a physical process, our system produces cloth tracking results that are consistent with the depth observation while still maintaining physical constraints. Results and evaluations show the effectiveness of our method, which also enables new types of applications such as cloth retargeting, free-viewpoint video rendering, and animation.

  • CVPR - SimulCap : Single-View Human Performance Capture With Cloth Simulation
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
    Co-Authors: Zerong Zheng, Gerard Pons-moll, Yuan Zhong, Jianhui Zhao, Qionghai Dai, Yebin Liu
    Abstract:

    This paper proposes a new method for live free-viewpoint human performance capture with dynamic details (e.g., cloth wrinkles) using a single RGBD camera. Our main contributions are: (i) a multi-layer representation of garments and body, and (ii) a physics-based performance capture procedure. We first digitize the performer using a multi-layer surface representation that includes the undressed body surface and separate clothing meshes. For performance capture, we perform skeleton tracking, cloth simulation, and iterative depth fitting sequentially for each Incoming Frame. By incorporating cloth simulation into the performance capture pipeline, we can simulate plausible cloth dynamics and cloth-body interactions even in occluded regions, which was not possible with previous capture methods. Moreover, by formulating depth fitting as a physical process, our system produces cloth tracking results that are consistent with the depth observation while still maintaining physical constraints. Results and evaluations show the effectiveness of our method, which also enables new types of applications such as cloth retargeting, free-viewpoint video rendering, and animation.

Steve B Jiang - One of the best experts on this subject based on the ideXlab platform.

  • TH‐C‐BRC‐10: Fluoroscopic Lung Tumor Tracking Based on Real‐Time Deformable Image Registration
    Medical Physics, 2011
    Co-Authors: Xun Jia, Y Graves, Steve B Jiang
    Abstract:

    Purpose: To develop a real-time, accurate lung tumor tracking algorithm for fluoroscopic images based on deformable image registration. Methods: The new tracking algorithm is built on deformable image registration (DIR) of fluoroscopic images using our GPU-based Demons algorithm. Choosing an initial Frame image as the reference, we deform each Incoming Frame image to this reference to obtain a deformation vector field (DVF). Consequently, the physician-contoured tumor region on the initial Frame can be propagated to the Incoming Frames, and the new tumor location is obtained by computing the centroid of the propagated tumor. Results: The proposed tracking algorithm was initially tested on fluoroscopic images of four lung cancer patients. The average computation time for tumor tracking is between 0.21 and 0.26 seconds per Frame on a workstation equipped with an NVIDIA Tesla C1060. Tracking accuracy is quantified by comparing the numerically tracked tumor centroid with the physician-determined tumor centroid in each Frame. In the superior-inferior direction, the average tumor centroid localization error is 0.97 mm for the best test case and 1.29 mm for the worst, and the 95th-percentile error is smaller than 3.2 mm for all test cases. In the lateral direction, the average error ranges from 0.8 to 1.02 mm, and the 95th-percentile error is smaller than 3.5 mm for all test cases. The average 2D vector error ranges from 1.49 to 2.00 mm, and the 95th-percentile 2D error is smaller than 4.0 mm. Conclusions: These preliminary results indicate that high computational efficiency and tumor localization accuracy can be achieved with the proposed real-time DIR-based lung tumor tracking method, which provides a real-time markerless tumor tracking tool for lung cancer radiation therapy.
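
    The propagation step above amounts to sampling the DVF at the contoured tumor points and averaging. A minimal numpy sketch follows, assuming the Demons registration has already produced a per-pixel displacement field; the bilinear-sampling helper and all names are illustrative, not the authors' GPU code:

    ```python
    import numpy as np

    def propagate_tumor_centroid(contour_xy, dvf):
        """Warp initial-frame tumor contour points through a deformation
        vector field (DVF) and return the propagated contour and centroid.

        contour_xy : (N, 2) array of (x, y) landmarks on the initial frame.
        dvf        : (H, W, 2) per-pixel (dx, dy) displacements toward the
                     incoming frame, e.g. from a Demons-style registration.
        """
        h, w = dvf.shape[:2]
        x0 = np.clip(np.floor(contour_xy[:, 0]).astype(int), 0, w - 2)
        y0 = np.clip(np.floor(contour_xy[:, 1]).astype(int), 0, h - 2)
        fx = np.clip(contour_xy[:, 0] - x0, 0.0, 1.0)
        fy = np.clip(contour_xy[:, 1] - y0, 0.0, 1.0)

        # Bilinear interpolation of the displacement at each contour point.
        d = (dvf[y0, x0] * ((1 - fx) * (1 - fy))[:, None]
             + dvf[y0, x0 + 1] * (fx * (1 - fy))[:, None]
             + dvf[y0 + 1, x0] * ((1 - fx) * fy)[:, None]
             + dvf[y0 + 1, x0 + 1] * (fx * fy)[:, None])

        warped = contour_xy + d
        # The mean of the propagated landmarks approximates the tumor centroid.
        return warped, warped.mean(axis=0)
    ```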

  • TH‐C‐M100F‐04: A Deformable Lung Tumor Tracking Method in Fluoroscopic Video Using Active Shape Models
    Medical Physics, 2007
    Co-Authors: Russell J. Hamilton, Robert A. Schowengerdt, Steve B Jiang
    Abstract:

    Purpose: Elastic tumor deformation and differing intra- and inter-fractional tumor paths between inhalation and exhalation have been observed in some lung patients. For high-accuracy and potentially 100% duty-cycle dose delivery, we propose and evaluate a noninvasive method to fluoroscopically track the location and shape variations of lung tumors with different types of deformation. Method and Materials: During a fluoroscopic simulation, lung tumor contours over one complete respiratory period are manually drawn by an expert. Each contour is described by 65 landmarks. The respiratory period is divided into 9 phases, and a Point Distribution Model (PDM) statistically describing typical tumor shape variations is built for each phase. When tracking starts, the breathing phase of an Incoming Frame is first determined from the respiratory signal generated simultaneously from diaphragm motion, and the PDM for that Frame is selected. Starting from an initial estimate of the tumor contour, the Active Shape Models algorithm searches the area near each landmark and finds a better location. Based on the shifts found for these landmarks, the initial estimate is deformed within the range of typical shape variations captured by the PDM and also rigidly transformed to match the shifts. The newly generated contour iteratively updates the previous contour estimate until no significant difference appears between two consecutive iterations or a user-defined number of iterations is reached. Results: Tumors exhibiting distinct types of deformation in fluoroscopic videos were tracked well. All landmarks of the tracked objects were manually revised by an expert using a GUI tool. The average magnitude of the deviation between the tracked and revised results was within 2 mm for 95% of the landmarks and within 3 mm for all landmarks. Conclusion: This method affords precise tracking of lung tumor location and deformation and may be used for real-time tracking in DMLC radiotherapy.
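
    The core of the tracking loop above is the PDM-constrained shape update: landmark-wise local search followed by projection onto the statistical shape modes, clamped to typical variation. A minimal sketch of that update follows (names are illustrative; the image-driven landmark search is left as a caller-supplied function, and the rigid alignment step is omitted for brevity):

    ```python
    import numpy as np

    def constrain_to_pdm(shape, mean_shape, modes, stds, limit=3.0):
        """Project a candidate shape onto the PDM and clamp mode coefficients.

        shape, mean_shape : (2N,) flattened landmark vectors (65 pts -> 130)
        modes             : (2N, k) principal shape-variation modes
        stds              : (k,) per-mode standard deviations from training
        """
        b = modes.T @ (shape - mean_shape)           # shape-mode coefficients
        b = np.clip(b, -limit * stds, limit * stds)  # stay within typical shapes
        return mean_shape + modes @ b

    def asm_track_step(contour, search_best, mean_shape, modes, stds):
        # 1) Local search: an image-driven function proposes a better position
        #    near each landmark (supplied by the caller in this sketch).
        proposed = np.array([search_best(pt) for pt in contour])
        # 2) Constrain the proposal to plausible shapes from the phase's PDM.
        constrained = constrain_to_pdm(proposed.ravel(), mean_shape, modes, stds)
        return constrained.reshape(-1, 2)
    ```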

Rongchun Zhao - One of the best experts on this subject based on the ideXlab platform.

  • ICIP - QP_TR Trust Region Blob Tracking Through Scale-Space
    2006 International Conference on Image Processing, 2006
    Co-Authors: Jing-ping Jia, Qing Wang, Yanmei Chai, Rongchun Zhao
    Abstract:

    A new approach to tracking objects in image sequences is proposed, in which the continual changes in the size and orientation of the target can be precisely described. For each Incoming Frame, a probability distribution image of the target is created, in which the target's area appears as a blob. The scale of this blob can be determined from the local maxima of differential scale-space filters. We employ the QP_TR trust region algorithm to search for the local maxima of the orientational multi-scale normalized Laplacian filter of the probability distribution image, locating the target and determining its scale and orientation. Tracking results on example sequences show that the new method describes the target more accurately and thus achieves much better tracking precision.
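
    Scale selection via the normalized Laplacian can be illustrated directly: the response sigma^2 * |LoG| of a blob peaks at its characteristic scale. In the sketch below, a simple grid search over sigma stands in for the QP_TR trust-region optimizer, and the orientational filter is omitted; only `scipy.ndimage.gaussian_laplace` is a real API, the rest is illustrative:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_laplace

    def best_blob_scale_and_position(prob_image, sigmas):
        best = (-np.inf, None, None)
        for sigma in sigmas:
            # Negative LoG so bright blobs give positive responses; the
            # sigma^2 factor makes responses comparable across scales.
            resp = -sigma**2 * gaussian_laplace(prob_image, sigma)
            idx = np.unravel_index(np.argmax(resp), resp.shape)
            if resp[idx] > best[0]:
                best = (resp[idx], idx, sigma)
        return best  # (response, (row, col), characteristic scale)

    # Toy usage: a synthetic Gaussian blob of scale ~6 px is recovered.
    yy, xx = np.mgrid[0:128, 0:128]
    blob = np.exp(-((yy - 64) ** 2 + (xx - 64) ** 2) / (2 * 6.0 ** 2))
    print(best_blob_scale_and_position(blob, sigmas=np.linspace(2, 12, 21)))
    ```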

  • QP_TR Trust Region Blob Tracking Through Scale-Space with Automatic Selection of Features
    Lecture Notes in Computer Science, 2006
    Co-Authors: Jing-ping Jia, Qing Wang, Yanmei Chai, Rongchun Zhao
    Abstract:

    A new approach to tracking objects in image sequences is proposed, in which the continual changes in the size and orientation of the target can be precisely described. For each Incoming Frame, a likelihood image of the target is created according to the automatically chosen best feature, in which the target's area appears as a blob. The scale of this blob can be determined from the local maxima of differential scale-space filters. We employ the QP_TR trust region algorithm to search for the local maxima of the orientational multi-scale normalized Laplacian filter of the likelihood image, locating the target and determining its scale and orientation. Tracking results on example sequences show that the novel method describes the target more accurately and thus achieves much better tracking precision.

  • ICIAR (1) - QP_TR trust region blob tracking through scale-space with automatic selection of features
    Lecture Notes in Computer Science, 2006
    Co-Authors: Jing-ping Jia, Qing Wang, Yanmei Chai, Rongchun Zhao
    Abstract:

    A new approach to tracking objects in image sequences is proposed, in which the continual changes in the size and orientation of the target can be precisely described. For each Incoming Frame, a likelihood image of the target is created according to the automatically chosen best feature, in which the target's area appears as a blob. The scale of this blob can be determined from the local maxima of differential scale-space filters. We employ the QP_TR trust region algorithm to search for the local maxima of the orientational multi-scale normalized Laplacian filter of the likelihood image, locating the target and determining its scale and orientation. Tracking results on example sequences show that the novel method describes the target more accurately and thus achieves much better tracking precision.

  • ISVC (1) - Blob tracking with adaptive feature selection and accurate scale determination
    Advances in Visual Computing, 2006
    Co-Authors: Jing-ping Jia, Yanmei Chai, Rongchun Zhao, David Dagan Feng, Zheru Chi
    Abstract:

    We propose a novel color-based tracking framework in which the object configuration and a color feature are determined simultaneously via scale-space filtering. The tracker automatically selects the discriminative color feature that best distinguishes foreground from background. Using that feature, a likelihood image of the target is generated for each Incoming Frame, in which the target's area appears as a blob. The scale of this blob can be determined from the local maximum of differential scale-space filters. We employ the QP_TR trust region algorithm to search for the local maximum of the multi-scale normalized Laplacian filter of the likelihood image, locating the target and determining its scale. Tracking results on example sequences show that the proposed method is resilient to color and lighting changes, describes the target more accurately, and achieves much better tracking precision.
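
    The abstract does not spell out the feature-selection criterion. One common choice for ranking 1-D color features by foreground/background separability is the variance ratio of the log likelihood ratio (after Collins et al.); the sketch below uses that criterion as an assumption, with illustrative names and toy data:

    ```python
    import numpy as np

    def variance_ratio_score(fg_vals, bg_vals, bins=32, eps=1e-6):
        # Histogram the feature over foreground and background pixels.
        lo = min(fg_vals.min(), bg_vals.min())
        hi = max(fg_vals.max(), bg_vals.max())
        p, _ = np.histogram(fg_vals, bins=bins, range=(lo, hi), density=True)
        q, _ = np.histogram(bg_vals, bins=bins, range=(lo, hi), density=True)
        L = np.log((p + eps) / (q + eps))  # log likelihood ratio per bin
        var = lambda w: np.average((L - np.average(L, weights=w)) ** 2, weights=w)
        # Good features: L varies a lot overall but little within each class.
        return var((p + q) / 2) / (var(p) + var(q) + eps)

    def select_best_feature(fg_pixels, bg_pixels, features):
        """features: dict name -> function mapping (N, 3) RGB to (N,) values."""
        scores = {name: variance_ratio_score(f(fg_pixels), f(bg_pixels))
                  for name, f in features.items()}
        return max(scores, key=scores.get), scores

    # Toy usage with a few linear RGB combinations as candidate features.
    rng = np.random.default_rng(0)
    fg = rng.normal([200, 60, 60], 20, size=(500, 3))   # reddish target
    bg = rng.normal([80, 120, 160], 30, size=(500, 3))  # bluish background
    features = {"R-G": lambda x: x[:, 0] - x[:, 1],
                "G-B": lambda x: x[:, 1] - x[:, 2],
                "R+G+B": lambda x: x.sum(axis=1)}
    print(select_best_feature(fg, bg, features))
    ```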

  • Blob Tracking with Adaptive Feature Selection and Accurate Scale Determination
    Lecture Notes in Computer Science, 2006
    Co-Authors: Jing-ping Jia, Yanmei Chai, Rongchun Zhao, David Dagan Feng, Zheru Chi
    Abstract:

    We propose a novel color-based tracking framework in which the object configuration and a color feature are determined simultaneously via scale-space filtering. The tracker automatically selects the discriminative color feature that best distinguishes foreground from background. Using that feature, a likelihood image of the target is generated for each Incoming Frame, in which the target's area appears as a blob. The scale of this blob can be determined from the local maximum of differential scale-space filters. We employ the QP_TR trust region algorithm to search for the local maximum of the multi-scale normalized Laplacian filter of the likelihood image, locating the target and determining its scale. Tracking results on example sequences show that the proposed method is resilient to color and lighting changes, describes the target more accurately, and achieves much better tracking precision.

Qionghai Dai - One of the best experts on this subject based on the ideXlab platform.

  • SimulCap : Single-View Human Performance Capture with Cloth Simulation
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Zerong Zheng, Gerard Pons-moll, Yuan Zhong, Jianhui Zhao, Qionghai Dai, Yebin Liu
    Abstract:

    This paper proposes a new method for live free-viewpoint human performance capture with dynamic details (e.g., cloth wrinkles) using a single RGBD camera. Our main contributions are: (i) a multi-layer representation of garments and body, and (ii) a physics-based performance capture procedure. We first digitize the performer using a multi-layer surface representation that includes the undressed body surface and separate clothing meshes. For performance capture, we perform skeleton tracking, cloth simulation, and iterative depth fitting sequentially for each Incoming Frame. By incorporating cloth simulation into the performance capture pipeline, we can simulate plausible cloth dynamics and cloth-body interactions even in occluded regions, which was not possible with previous capture methods. Moreover, by formulating depth fitting as a physical process, our system produces cloth tracking results that are consistent with the depth observation while still maintaining physical constraints. Results and evaluations show the effectiveness of our method, which also enables new types of applications such as cloth retargeting, free-viewpoint video rendering, and animation.

  • CVPR - SimulCap : Single-View Human Performance Capture With Cloth Simulation
    2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
    Co-Authors: Zerong Zheng, Gerard Pons-moll, Yuan Zhong, Jianhui Zhao, Qionghai Dai, Yebin Liu
    Abstract:

    This paper proposes a new method for live free-viewpoint human performance capture with dynamic details (e.g., cloth wrinkles) using a single RGBD camera. Our main contributions are: (i) a multi-layer representation of garments and body, and (ii) a physics-based performance capture procedure. We first digitize the performer using a multi-layer surface representation that includes the undressed body surface and separate clothing meshes. For performance capture, we perform skeleton tracking, cloth simulation, and iterative depth fitting sequentially for each Incoming Frame. By incorporating cloth simulation into the performance capture pipeline, we can simulate plausible cloth dynamics and cloth-body interactions even in occluded regions, which was not possible with previous capture methods. Moreover, by formulating depth fitting as a physical process, our system produces cloth tracking results that are consistent with the depth observation while still maintaining physical constraints. Results and evaluations show the effectiveness of our method, which also enables new types of applications such as cloth retargeting, free-viewpoint video rendering, and animation.