Auxiliary Module - Explore the Science & Experts | ideXlab

Auxiliary Module

The Experts below are selected from a list of 168 Experts worldwide, ranked by the ideXlab platform

Shuo Li – One of the best experts on this subject based on the ideXlab platform.

  • OF-MSRN: Optical Flow-Auxiliary Multi-Task Regression Network for Direct Quantitative Measurement, Segmentation and Motion Estimation
    Proceedings of the AAAI Conference on Artificial Intelligence, 2020
    Co-Authors: Chengqian Zhao, Cheng Feng, Dengwang Li, Shuo Li

    Abstract:

    Comprehensive analysis of the carotid artery is critical to diagnosing and treating cardiovascular diseases. The objective of this work is to simultaneously achieve direct quantitative measurement and automated segmentation of the lumen diameter and intima-media thickness, as well as motion estimation of the carotid wall. No prior work has achieved such a comprehensive analysis of the carotid artery, due to three intractable challenges: 1) the tiny intima-media is difficult to measure and segment; 2) artifacts generated by radial motion restrict the accuracy of measurement and segmentation; 3) occlusions on diseased carotid walls introduce dynamic complexity and indeterminacy. In this paper, we propose a novel optical flow-Auxiliary multi-task regression network, named OF-MSRN, to overcome these challenges. We concatenate multi-scale features into a regression network to achieve measurement and segmentation simultaneously, making full use of the potential correlation between the two tasks. More importantly, we introduce an optical flow Auxiliary Module that exploits the mutual promotion of segmentation and motion estimation to overcome the restrictions imposed by radial motion. In addition, we evaluate the consistency between forward and backward optical flow to improve the accuracy of motion estimation on the diseased carotid wall. Extensive experiments on ultrasound (US) sequences from 101 patients demonstrate the superior performance of OF-MSRN on comprehensive analysis of the carotid artery, owing to the dual optimization of the optical flow Auxiliary Module.
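The forward-backward consistency check mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' code; the function names and the dict-based flow representation (pixel coordinate → displacement) are hypothetical simplifications of the usual dense-flow formulation, where a reliable motion estimate satisfies forward(p) + backward(p + forward(p)) ≈ 0.

```python
# Hypothetical sketch of forward-backward optical-flow consistency.
# Flows map integer pixel coordinates (x, y) to (dx, dy) displacements.

def consistency_error(forward, backward):
    """For each pixel p, the round trip forward(p) + backward(p + forward(p))
    should be close to zero when the motion estimate is reliable."""
    errors = {}
    for p, (fx, fy) in forward.items():
        q = (p[0] + fx, p[1] + fy)            # where p lands in the next frame
        bx, by = backward.get(q, (0.0, 0.0))  # backward flow at that landing point
        errors[p] = ((fx + bx) ** 2 + (fy + by) ** 2) ** 0.5
    return errors

def reliable_mask(forward, backward, tau=0.5):
    """Mark pixels whose round-trip error is below the threshold tau."""
    return {p: e <= tau for p, e in consistency_error(forward, backward).items()}
```

Pixels that fail the check (e.g. due to occlusion on a diseased wall) can then be down-weighted when supervising the motion-estimation branch.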

Bohan Zhuang – One of the best experts on this subject based on the ideXlab platform.

  • Training Quantized Neural Networks With a Full-Precision Auxiliary Module
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020
    Co-Authors: Bohan Zhuang, Chunhua Shen, Ian Reid

    Abstract:

    In this paper, we tackle a key challenge in training low-precision networks: the notorious difficulty of propagating gradients through a low-precision network due to the non-differentiable quantization function. We propose to train the low-precision network alongside a full-precision Auxiliary Module. Specifically, during training, we construct a mixed-precision network by augmenting the original low-precision network with the full-precision Auxiliary Module, and then jointly optimize the augmented mixed-precision network and the low-precision network. This strategy creates additional full-precision routes for updating the parameters of the low-precision model, so gradients back-propagate more easily. At inference time, we discard the Auxiliary Module without introducing any computational overhead to the low-precision network. We evaluate the proposed method on image classification and object detection across various quantization approaches and show consistent performance increases. In particular, a 4-bit detector achieves nearly lossless performance relative to its full-precision counterpart, which is of great practical value.
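The core idea, stripped to a scalar toy problem, can be sketched as follows. This is an illustrative reduction, not the paper's implementation: the quantizer, step size, and training loop are assumptions, and the non-differentiable rounding is handled with the common straight-through estimator while the full-precision auxiliary branch contributes a second, well-behaved gradient path to the same shared weight.

```python
# Toy sketch: jointly training a quantized route and a full-precision
# auxiliary route that share one weight w. All names are illustrative.

def quantize(w, step=0.25):
    # Round w to the nearest multiple of `step` (non-differentiable).
    return round(w / step) * step

def train(x, target, w=0.1, lr=0.1, steps=50):
    for _ in range(steps):
        y_q = quantize(w) * x   # low-precision route
        y_fp = w * x            # full-precision auxiliary route
        # Joint squared-error loss over both routes. For the quantized
        # term, the straight-through estimator treats d quantize(w)/dw as 1,
        # so both routes push gradient into the shared weight.
        grad = 2 * (y_q - target) * x + 2 * (y_fp - target) * x
        w -= lr * grad
    return w, quantize(w)
```

Even when the quantized route alone gets stuck between quantization levels, the full-precision term keeps supplying a smooth gradient, which is the intuition behind the auxiliary route.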

  • Auxiliary Learning for Deep Multi-task Learning
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Bohan Zhuang, Chunhua Shen, Hao Chen

    Abstract:

    Multi-task learning (MTL) solves multiple tasks simultaneously to obtain better speed and performance than handling each task in turn. Most current methods fall into one of two categories: (i) hard parameter sharing, where a subset of parameters is shared among tasks while the remaining parameters are task-specific; or (ii) soft parameter sharing, where all parameters are task-specific but jointly regularized. Both approaches have limitations: the shared hidden layers of the former are difficult to optimize due to competing objectives, while the complexity of the latter grows linearly with the number of tasks. To mitigate these drawbacks, this paper proposes an alternative: we explicitly construct an Auxiliary Module that mimics soft parameter sharing to assist the optimization of the hard-parameter-sharing layers during training. In particular, the Auxiliary Module takes the outputs of the shared hidden layers as inputs and is supervised by the auxiliary task loss. During training, the Auxiliary Module is jointly optimized with the MTL network, acting as a regularizer that introduces an inductive bias into the shared layers. In the testing phase, only the original MTL network is kept; our method thus avoids the limitations of both categories. We evaluate the proposed Auxiliary Module on pixel-wise prediction tasks, including semantic segmentation, depth estimation, and surface normal prediction, with different network structures. Extensive experiments over various settings verify the effectiveness of our method.
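Structurally, the scheme amounts to an extra head on the shared features that exists only during training. The sketch below is a hypothetical skeleton, not the paper's network: the layer functions are trivial stand-ins, and the point is only the wiring, i.e. that the auxiliary branch reads the shared features, is supervised separately, and is dropped at test time so inference cost is unchanged.

```python
# Structural sketch (names assumed) of a hard-parameter-sharing MTL
# network with a train-only auxiliary module on the shared features.

class MTLNet:
    def __init__(self):
        self.use_aux = True  # True during training, False at test time

    def shared(self, x):
        # Stand-in for the shared hidden layers.
        return [2 * v for v in x]

    def head_a(self, feats):   # task-specific head, e.g. segmentation
        return sum(feats)

    def head_b(self, feats):   # task-specific head, e.g. depth
        return max(feats)

    def aux(self, feats):      # auxiliary module, supervised by its own loss
        return sum(v * v for v in feats)

    def forward(self, x):
        feats = self.shared(x)
        out = {"a": self.head_a(feats), "b": self.head_b(feats)}
        if self.use_aux:
            out["aux"] = self.aux(feats)  # extra gradient path into shared()
        return out
```

Because the auxiliary branch only adds a training-time loss term, setting `use_aux = False` recovers exactly the original hard-parameter-sharing network.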

Ian Reid – One of the best experts on this subject based on the ideXlab platform.

  • Training Quantized Neural Networks With a Full-Precision Auxiliary Module
    2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020
    Co-Authors: Bohan Zhuang, Chunhua Shen, Ian Reid
