Task Transfer

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 71,997 experts worldwide, ranked by the ideXlab platform

Ehsan Samei - One of the best experts on this subject based on the ideXlab platform.

  • 3D task-transfer function representation of the signal transfer properties of low-contrast lesions in FBP- and iterative-reconstructed CT.
    Medical physics, 2018
    Co-Authors: Marthony Robins, Justin Solomon, Taylor Richards, Ehsan Samei
    Abstract:

    PURPOSE The purpose of this study was to investigate how accurately the task-transfer function (TTF) models the signal transfer properties of low-contrast features in a non-linear commercial CT system. METHODS A cylindrical phantom containing 24 anthropomorphic "physical" lesions was 3D printed. Lesions had two sizes (523 and 2145 mm³) and two nominal radio-densities (80 and 100 HU at 120 kV). CT images were acquired on a commercial CT system (Siemens Flash scanner) at four dose levels (CTDIvol, 32 cm phantom: 1.5, 3.0, 6.0, 22.0 mGy) and reconstructed using FBP and IR kernels (B31f, B45f, I31f-2, I44f-2). Low-contrast rod inserts (in-plane) and a slanted edge (z-direction) were used to estimate 3D TTFs. CAD versions of the lesions were blurred by the 3D TTFs, virtually superimposed into corresponding phantom images, and compared to the physical lesions in terms of (a) a 4AFC visual assessment, (b) edge gradient, (c) size, and (d) shape similarity. Assessments (b) and (c) were based on an equivalence criterion, COV̄ ≥ D̄, to determine whether the natural variability COV̄ of the physical lesions was greater than or equal to the difference D̄ between physical and simulated lesions. Shape similarity was quantified via the Sørensen-Dice coefficient (SDC). Comparisons were done for each lesion and for all imaging conditions. RESULTS The readers detected simulated lesions at a rate of 37.9 ± 3.1% (25% implies random guessing). Lesion edge-blur and volume differences D̄ were on average less than the physical lesions' natural variability COV̄. The SDC (average ± SD) was 0.80 ± 0.13 (maximum possible: 1). CONCLUSIONS The visual appearance, edge blur, size, and shape of the simulated lesions were similar to those of the physical lesions, which suggests that the 3D TTF models the low-contrast signal transfer properties of this non-linear CT system reasonably well.
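The core of the validation pipeline above is (1) blurring a CAD lesion model by the measured 3D TTF and (2) scoring shape agreement with the Sørensen-Dice coefficient. A minimal NumPy sketch of those two steps follows; this is illustrative only, not the authors' code, and the function names, array conventions (TTF sampled on the FFT frequency grid, DC term at index [0, 0, 0]), and thresholds are assumptions:

```python
import numpy as np

def apply_3d_ttf(lesion, ttf):
    """Blur a voxelized lesion model by a 3D task-transfer function,
    implemented as multiplication in the frequency domain.

    lesion : 3D array of HU contrast values (the CAD lesion model)
    ttf    : 3D array of the same shape, the TTF sampled on the
             corresponding FFT frequency grid (DC term at [0, 0, 0])
    """
    return np.real(np.fft.ifftn(np.fft.fftn(lesion) * ttf))

def dice_coefficient(mask_a, mask_b):
    """Sorensen-Dice coefficient between two boolean masks.

    Returns 1.0 for identical masks and 0.0 for disjoint masks.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```

With an all-ones TTF (no blur), `apply_3d_ttf` returns the lesion unchanged, which is a quick sanity check on the frequency-grid convention.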

  • Can a 3D task-transfer function accurately represent the signal transfer properties of low-contrast lesions in non-linear CT systems?
    Medical Imaging 2018: Physics of Medical Imaging, 2018
    Co-Authors: Marthony Robins, Justin Solomon, Ehsan Samei
    Abstract:

    The purpose of this study was to investigate how accurately the task-transfer function (TTF) models the signal transfer properties of low-contrast features in a non-linear CT system. A cylindrical phantom containing 24 anthropomorphic liver lesions (modeled from patient lesions) was designed using computer-aided design software (Rhinoceros 3D). Lesions had irregular shapes, two sizes (523 and 2145 mm³), and two contrast levels (80 and 100 HU). The phantom was printed with a state-of-the-art multi-material 3D printer (Stratasys J750). CT images were acquired on a clinical CT scanner (Siemens Flash) at four dose levels (CTDIvol, 32 cm phantom: 1.5, 3, 6, 22 mGy) and reconstructed using two FBP kernels (B31f, B45f) and two iterative kernels (SAFIRE strength 2: I31f, I44f). 3D TTFs were estimated by combining TTFs measured using low-contrast rod inserts (in-plane) and a slanted edge (z-direction) printed in-phantom. CAD versions of the lesions were blurred by the 3D TTFs and virtually superimposed into corresponding phantom images using a previously validated technique. We compared lesion morphometry (i.e., size and shape) measurements between 3D-printed “physical” and TTF-blurred “simulated” lesions across multiple acquisitions. Lesion size was quantified using a commercial segmentation software (Syngo.via). Lesion shape was quantified by measuring the Jaccard index between the segmented masks of paired physical and simulated lesions. The relative volume difference D between physical and simulated lesions was mostly less than the natural variability COV of the physical lesions. For large and small lesions, the COV…
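The two morphometry comparisons in this study, the Jaccard index between segmented masks and the relative volume difference between physical and simulated lesions, are simple to state precisely. The sketch below is an illustrative NumPy version under assumed conventions (boolean voxel masks, volume difference taken relative to the physical lesion), not the authors' implementation:

```python
import numpy as np

def jaccard_index(mask_a, mask_b):
    """Jaccard index (intersection over union) between two boolean masks.

    Returns 1.0 for identical masks and 0.0 for disjoint masks.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return intersection / union

def relative_volume_difference(v_physical, v_simulated):
    """Relative volume difference D, taken relative to the physical lesion
    (the normalization choice here is an assumption)."""
    return abs(v_physical - v_simulated) / v_physical
```

For example, masks overlapping in one of three occupied voxels give a Jaccard index of 1/3, and a 90 mm³ simulated lesion paired with a 100 mm³ physical one gives D = 0.1.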

  • CT performance as a variable function of resolution, noise, and task property for iterative reconstructions
    Proceedings of SPIE, 2012
    Co-Authors: Baiyu Chen, S Richard, O Christianson, Xiaodong Zhou, Ehsan Samei
    Abstract:

    The increasing availability of iterative reconstruction (IR) algorithms on clinical scanners is creating a demand for effectively and efficiently evaluating imaging performance and potential dose reduction. In this study, a location- and task-specific evaluation was performed using the detectability index (d′), combining a task function, the task-transfer function (TTF), and the noise power spectrum (NPS). The task function modeled a wide variety of detection tasks in terms of shape and contrast. The TTF and NPS were measured from a physical phantom as a function of contrast and dose level. Measured d′ values were compared between three IR algorithms (IRIS, SAFIRE3, and SAFIRE5) and conventional filtered back-projection (FBP) at various dose levels, showing equivalent IR performance at lower dose levels. AUC values further calculated from d′ showed that, compared to FBP, SAFIRE5 may reduce dose by up to 50-60%, and SAFIRE3 and IRIS by up to 20-30%. This study provides an initial framework for the localized and task-specific evaluation of IR algorithms in CT and a guideline for identifying the optimal operating dose point with iterative reconstructions.
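One common way to combine a task function W(f), TTF, and NPS into a detectability index is the non-prewhitening observer model, d′ = ∫W²·TTF² df / √(∫W²·TTF²·NPS df). The abstract does not state which observer model was used, so the sketch below should be read as an illustration of the general construction rather than the paper's exact computation:

```python
import numpy as np

def detectability_index(task_w, ttf, nps, df):
    """Non-prewhitening (NPW) detectability index d' from sampled 2D arrays.

    task_w : task function W(f), the Fourier transform of the detection task
    ttf    : task-transfer function, sampled on the same frequency grid
    nps    : noise power spectrum, sampled on the same frequency grid
    df     : frequency bin area (df_x * df_y), used for the discrete integrals
    """
    filtered = (task_w * ttf) ** 2          # W^2 * TTF^2 at each frequency
    signal = np.sum(filtered) * df          # numerator integral
    noise = np.sum(filtered * nps) * df     # denominator integral
    return signal / np.sqrt(noise)
```

Because d′ depends on W, evaluating it over a family of task functions (varying lesion size and contrast) gives exactly the kind of task-specific comparison between FBP and IR described above.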

Vineeth N Balasubramanian - One of the best experts on this subject based on the ideXlab platform.

  • Zero-Shot Task Transfer.
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Arghya Pal, Vineeth N Balasubramanian
    Abstract:

    In this work, we present a novel meta-learning algorithm, TTNet, that regresses model parameters for novel tasks for which no ground truth is available (zero-shot tasks). In order to adapt to novel zero-shot tasks, our meta-learner learns from the model parameters of known tasks (with ground truth) and the correlation of known tasks to zero-shot tasks. This intuition finds its foothold in cognitive science, where a subject (a human baby) can adapt to a novel concept (depth understanding) by correlating it with old concepts (hand movement or self-motion), without receiving explicit supervision. We evaluated our model on the Taskonomy dataset, with four tasks as zero-shot: surface-normal, room-layout, depth, and camera-pose estimation. These tasks were chosen based on the data-acquisition complexity and the complexity associated with the learning process using a deep network. Our proposed methodology outperforms state-of-the-art models (which use ground truth) on each of our zero-shot tasks, showing promise for zero-shot task transfer. We also conducted extensive experiments to study the various choices of our methodology, and showed how the proposed method can also be used in transfer learning. To the best of our knowledge, this is the first such effort on zero-shot learning in the task space.
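The core idea, estimating a zero-shot task's model parameters from known-task parameters weighted by task correlation, can be conveyed with a toy linear blend. To be clear, this is not TTNet (which learns the mapping with a trained meta-network); it is only a hypothetical illustration of the intuition, with made-up function and argument names:

```python
import numpy as np

def regress_zero_shot_params(known_params, correlations):
    """Toy illustration: estimate parameters of a zero-shot task as a
    correlation-weighted combination of known-task parameter vectors.
    (TTNet learns this mapping with a meta-network; this linear blend
    only conveys the intuition, not the actual method.)

    known_params : (num_tasks, num_params) array of known-task parameters
    correlations : (num_tasks,) nonnegative similarity of each known task
                   to the zero-shot task
    """
    w = np.asarray(correlations, dtype=float)
    w = w / w.sum()                      # normalize to a convex combination
    return w @ np.asarray(known_params)  # weighted average of parameters
```

Two equally correlated known tasks with parameter vectors [1, 1] and [3, 3] would yield [2, 2] for the zero-shot task under this toy scheme.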

  • CVPR - Zero-Shot Task Transfer
    2019 IEEE CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
    Co-Authors: Arghya Pal, Vineeth N Balasubramanian
    Abstract:

    In this work, we present a novel meta-learning algorithm that regresses model parameters for novel tasks for which no ground truth is available (zero-shot tasks). In order to adapt to novel zero-shot tasks, our meta-learner learns from the model parameters of known tasks (with ground truth) and the correlation of known tasks to zero-shot tasks. This intuition finds its foothold in cognitive science, where a subject (a human baby) can adapt to a novel concept (depth understanding) by correlating it with old concepts (hand movement or self-motion), without receiving explicit supervision. We evaluated our model on the Taskonomy dataset, with four tasks as zero-shot: surface-normal, room-layout, depth, and camera-pose estimation. These tasks were chosen based on the data-acquisition complexity and the complexity associated with the learning process using a deep network. Our proposed methodology outperforms state-of-the-art models (which use ground truth) on each of our zero-shot tasks, showing promise for zero-shot task transfer. We also conducted extensive experiments to study the various choices of our methodology, and showed how the proposed method can also be used in transfer learning. To the best of our knowledge, this is the first such effort on zero-shot learning in the task space.

Fangming Liu - One of the best experts on this subject based on the ideXlab platform.

  • On-Edge Multi-Task Transfer Learning: Model and Practice With Data-Driven Task Allocation
    IEEE Transactions on Parallel and Distributed Systems, 2020
    Co-Authors: Qiong Chen, Zimu Zheng, Dan Wang, Fangming Liu
    Abstract:

    On edge devices, data scarcity is a common problem, for which transfer learning serves as a widely suggested remedy. Nevertheless, transfer learning imposes a heavy computation burden on resource-constrained edge devices. Existing task-allocation works usually assume that all submitted tasks are equally important, leading to inefficient resource allocation at the task level when directly applied to Multi-Task Transfer Learning (MTL). To address these issues, we first reveal that it is crucial to measure the impact of tasks on overall decision performance improvement and to quantify task importance. We then show that Task Allocation with Task Importance for MTL (TATIM) is a variant of the NP-complete knapsack problem, where the complicated computation needed to solve it must be conducted repeatedly under varying contexts. To solve TATIM with high computational efficiency, we propose a Data-driven Cooperative Task Allocation (DCTA) approach. Finally, we evaluate the performance of DCTA through not only a trace-driven simulation but also a new comprehensive real-world AIOps case study, which bridges model and practice via a new architecture and main-component design within an AIOps system. Extensive experiments show that, when solving TATIM, our DCTA reduces processing time by a factor of 3.24 and saves 48.4 percent of the energy consumption compared with the state of the art.
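The knapsack structure of TATIM can be made concrete with the standard 0/1 knapsack dynamic program: choose the subset of tasks maximizing total importance without exceeding a device's compute capacity. This is a textbook sketch under assumed integer costs, not DCTA itself; the point of DCTA is precisely to avoid re-running such a solver from scratch under every changing context:

```python
def allocate_tasks(importance, cost, capacity):
    """0/1 knapsack DP: pick the subset of tasks that maximizes total
    importance subject to a single device's integer compute capacity.
    Returns (best_total_importance, sorted list of selected task indices).
    """
    n = len(importance)
    best = [0.0] * (capacity + 1)                  # best value per capacity
    choice = [[False] * (capacity + 1) for _ in range(n)]
    for i in range(n):
        # iterate capacity downward so each task is used at most once
        for c in range(capacity, cost[i] - 1, -1):
            candidate = best[c - cost[i]] + importance[i]
            if candidate > best[c]:
                best[c] = candidate
                choice[i][c] = True
    # backtrack to recover which tasks were selected
    selected, c = [], capacity
    for i in range(n - 1, -1, -1):
        if choice[i][c]:
            selected.append(i)
            c -= cost[i]
    return best[capacity], sorted(selected)
```

For instance, with importances [6, 5, 5], costs [5, 3, 3], and capacity 6, the solver picks the two cheaper tasks (total importance 10) over the single expensive one, exactly the kind of importance-aware trade-off uniform allocation misses.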

  • ICDCS - Data-driven Task Allocation for Multi-Task Transfer Learning on the Edge
    2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), 2019
    Co-Authors: Qiong Chen, Zimu Zheng, Dan Wang, Fangming Liu
    Abstract:

    Edge computing for machine learning has become an active research topic. On edge devices, data scarcity is a common problem, for which transfer learning serves as a widely suggested remedy. Nevertheless, one obstacle is that transfer learning imposes a heavy computation burden on resource-constrained edge devices. Motivated by the fact that only a few tasks in Multi-Task Transfer Learning (MTL) have a high potential for overall decision performance improvement, we design a novel task-allocation scheme that assigns more important tasks to more powerful edge devices to maximize the overall decision performance. In this paper, we focus on task allocation under multi-task scenarios by introducing task importance, and we make the following contributions. First, we reveal that it is important to measure the impact of tasks on overall decision performance improvement and to quantify task importance. We also observe the long-tail property of task importance, i.e., only a few tasks are important, which facilitates more efficient task allocation. Second, we show that Task Allocation with Task Importance for MTL (TATIM) is in fact a variant of the NP-complete knapsack problem, where the complicated computation needed to solve it must be conducted repeatedly under varying contexts. To solve TATIM with high computational efficiency, we propose a Data-driven Cooperative Task Allocation (DCTA) approach. Third, we evaluate the performance of our DCTA approach by applying it to a real-world industrial-operations (AIOps) scenario. Experiments show that, when solving TATIM, our DCTA approach reduces processing time by a factor of 3.24 compared with the state of the art. We offer DCTA as an effective and practical mechanism for reducing the resources required to perform MTL on edge devices.
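The allocation principle described above ("assign more important tasks to more powerful edge devices") admits a very simple greedy sketch once task importance is quantified: rank tasks by importance, rank devices by power, and pair them off. This hedged illustration is not the DCTA algorithm, merely the motivating heuristic that the long-tail property makes effective:

```python
def greedy_allocate(task_importance, device_power):
    """Greedy heuristic: assign the most important tasks to the most
    powerful edge devices, wrapping around when tasks outnumber devices.
    Returns {task_index: device_index}. (Illustrative only; DCTA itself
    is a learned, data-driven allocator.)
    """
    tasks = sorted(range(len(task_importance)),
                   key=lambda t: task_importance[t], reverse=True)
    devices = sorted(range(len(device_power)),
                     key=lambda d: device_power[d], reverse=True)
    return {t: devices[i % len(devices)] for i, t in enumerate(tasks)}
```

With a long-tailed importance profile, the few tasks that dominate the tail land on the strongest devices, which is the behavior the paper's measured task importance is designed to enable.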

Arghya Pal - One of the best experts on this subject based on the ideXlab platform.

  • Zero-Shot Task Transfer.
    arXiv: Computer Vision and Pattern Recognition, 2019
    Co-Authors: Arghya Pal, Vineeth N Balasubramanian
    Abstract:

    In this work, we present a novel meta-learning algorithm, TTNet, that regresses model parameters for novel tasks for which no ground truth is available (zero-shot tasks). In order to adapt to novel zero-shot tasks, our meta-learner learns from the model parameters of known tasks (with ground truth) and the correlation of known tasks to zero-shot tasks. This intuition finds its foothold in cognitive science, where a subject (a human baby) can adapt to a novel concept (depth understanding) by correlating it with old concepts (hand movement or self-motion), without receiving explicit supervision. We evaluated our model on the Taskonomy dataset, with four tasks as zero-shot: surface-normal, room-layout, depth, and camera-pose estimation. These tasks were chosen based on the data-acquisition complexity and the complexity associated with the learning process using a deep network. Our proposed methodology outperforms state-of-the-art models (which use ground truth) on each of our zero-shot tasks, showing promise for zero-shot task transfer. We also conducted extensive experiments to study the various choices of our methodology, and showed how the proposed method can also be used in transfer learning. To the best of our knowledge, this is the first such effort on zero-shot learning in the task space.

  • CVPR - Zero-Shot Task Transfer
    2019 IEEE CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2019
    Co-Authors: Arghya Pal, Vineeth N Balasubramanian
    Abstract:

    In this work, we present a novel meta-learning algorithm that regresses model parameters for novel tasks for which no ground truth is available (zero-shot tasks). In order to adapt to novel zero-shot tasks, our meta-learner learns from the model parameters of known tasks (with ground truth) and the correlation of known tasks to zero-shot tasks. This intuition finds its foothold in cognitive science, where a subject (a human baby) can adapt to a novel concept (depth understanding) by correlating it with old concepts (hand movement or self-motion), without receiving explicit supervision. We evaluated our model on the Taskonomy dataset, with four tasks as zero-shot: surface-normal, room-layout, depth, and camera-pose estimation. These tasks were chosen based on the data-acquisition complexity and the complexity associated with the learning process using a deep network. Our proposed methodology outperforms state-of-the-art models (which use ground truth) on each of our zero-shot tasks, showing promise for zero-shot task transfer. We also conducted extensive experiments to study the various choices of our methodology, and showed how the proposed method can also be used in transfer learning. To the best of our knowledge, this is the first such effort on zero-shot learning in the task space.

Qiong Chen - One of the best experts on this subject based on the ideXlab platform.

  • On-Edge Multi-Task Transfer Learning: Model and Practice With Data-Driven Task Allocation
    IEEE Transactions on Parallel and Distributed Systems, 2020
    Co-Authors: Qiong Chen, Zimu Zheng, Dan Wang, Fangming Liu
    Abstract:

    On edge devices, data scarcity is a common problem, for which transfer learning serves as a widely suggested remedy. Nevertheless, transfer learning imposes a heavy computation burden on resource-constrained edge devices. Existing task-allocation works usually assume that all submitted tasks are equally important, leading to inefficient resource allocation at the task level when directly applied to Multi-Task Transfer Learning (MTL). To address these issues, we first reveal that it is crucial to measure the impact of tasks on overall decision performance improvement and to quantify task importance. We then show that Task Allocation with Task Importance for MTL (TATIM) is a variant of the NP-complete knapsack problem, where the complicated computation needed to solve it must be conducted repeatedly under varying contexts. To solve TATIM with high computational efficiency, we propose a Data-driven Cooperative Task Allocation (DCTA) approach. Finally, we evaluate the performance of DCTA through not only a trace-driven simulation but also a new comprehensive real-world AIOps case study, which bridges model and practice via a new architecture and main-component design within an AIOps system. Extensive experiments show that, when solving TATIM, our DCTA reduces processing time by a factor of 3.24 and saves 48.4 percent of the energy consumption compared with the state of the art.

  • ICDCS - Data-driven Task Allocation for Multi-Task Transfer Learning on the Edge
    2019 IEEE 39th International Conference on Distributed Computing Systems (ICDCS), 2019
    Co-Authors: Qiong Chen, Zimu Zheng, Dan Wang, Fangming Liu
    Abstract:

    Edge computing for machine learning has become an active research topic. On edge devices, data scarcity is a common problem, for which transfer learning serves as a widely suggested remedy. Nevertheless, one obstacle is that transfer learning imposes a heavy computation burden on resource-constrained edge devices. Motivated by the fact that only a few tasks in Multi-Task Transfer Learning (MTL) have a high potential for overall decision performance improvement, we design a novel task-allocation scheme that assigns more important tasks to more powerful edge devices to maximize the overall decision performance. In this paper, we focus on task allocation under multi-task scenarios by introducing task importance, and we make the following contributions. First, we reveal that it is important to measure the impact of tasks on overall decision performance improvement and to quantify task importance. We also observe the long-tail property of task importance, i.e., only a few tasks are important, which facilitates more efficient task allocation. Second, we show that Task Allocation with Task Importance for MTL (TATIM) is in fact a variant of the NP-complete knapsack problem, where the complicated computation needed to solve it must be conducted repeatedly under varying contexts. To solve TATIM with high computational efficiency, we propose a Data-driven Cooperative Task Allocation (DCTA) approach. Third, we evaluate the performance of our DCTA approach by applying it to a real-world industrial-operations (AIOps) scenario. Experiments show that, when solving TATIM, our DCTA approach reduces processing time by a factor of 3.24 compared with the state of the art. We offer DCTA as an effective and practical mechanism for reducing the resources required to perform MTL on edge devices.