Recurrent

14,000,000 Leading Edge Experts on the ideXlab platform

The Experts below are selected from a list of 318 Experts worldwide, ranked by the ideXlab platform.

Huazhong Yang - One of the best experts on this subject based on the ideXlab platform.

  • IJCNN - Large scale recurrent neural network on GPU
    2014 International Joint Conference on Neural Networks (IJCNN), 2014
    Co-Authors: Boxun Li, Jiayi Duan, Erjin Zhou, Ningyi Xu, Jiaxing Zhang, Bo Huang, Huazhong Yang
    Abstract:

    Large scale artificial neural networks (ANNs) have been widely used in data processing applications. The recurrent neural network (RNN) is a special type of neural network equipped with additional recurrent connections. This unique architecture enables the recurrent neural network to remember past processed information and makes it an expressive model for nonlinear sequence processing tasks. However, the high computational complexity makes it difficult to train a recurrent neural network effectively, which has significantly limited research on recurrent neural networks over the last 20 years. In recent years, graphics processing units (GPUs) have become a significant means of speeding up the training of large scale neural networks by exploiting their massive parallelism. In this paper, we propose an efficient GPU implementation of the large scale recurrent neural network and demonstrate the power of scaling up the recurrent neural network with GPUs. We first explore the potential parallelism of the recurrent neural network and propose a fine-grained two-stage pipeline implementation. Experimental results show that the proposed GPU implementation achieves a 2x to 11x speed-up over a basic CPU implementation using the Intel Math Kernel Library. We then use the proposed GPU implementation to scale up the recurrent neural network and improve its performance. Results on the Microsoft Research Sentence Completion Challenge demonstrate that the large scale recurrent network without a class layer beats the traditional class-based modest-sized recurrent network, achieving an accuracy of 47%, the best result by a single recurrent neural network on the same dataset.
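
    The recurrence the abstract refers to can be made concrete with a minimal sketch, assuming a plain Elman-style RNN with a tanh activation; the dimensions, weights, and names below are illustrative, not taken from the paper:

        import numpy as np

        # Toy sizes; the paper's networks are far larger.
        input_dim, hidden_dim = 4, 8
        rng = np.random.default_rng(0)
        W_x = rng.normal(scale=0.1, size=(hidden_dim, input_dim))   # input weights
        W_h = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))  # recurrent weights
        b = np.zeros(hidden_dim)

        def rnn_forward(xs):
            """h_t = tanh(W_x x_t + W_h h_{t-1} + b); the recurrent term
            W_h h_{t-1} is what carries past information into the present."""
            h = np.zeros(hidden_dim)
            states = []
            for x in xs:
                h = np.tanh(W_x @ x + W_h @ h + b)
                states.append(h)
            return states

        sequence = [rng.normal(size=input_dim) for _ in range(5)]
        hidden_states = rnn_forward(sequence)

    The dependence of each h_t on h_{t-1} is also what makes the computation hard to parallelize across time steps, which is the obstacle the paper's pipeline scheme targets.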

Joe B. Putnam - One of the best experts on this subject based on the ideXlab platform.

  • Recurrent bronchogenic cyst causing recurrent laryngeal nerve palsy
    European Journal of Cardio-Thoracic Surgery, 2002
    Co-Authors: David C Rice, Joe B. Putnam
    Abstract:

    A case of a 50-year-old male who developed left recurrent laryngeal nerve palsy due to a bronchogenic cyst is presented. The bronchogenic cyst recurred following incomplete excision and multiple attempts at percutaneous aspiration. Recurrent laryngeal nerve palsy is an unusual complication of bronchogenic cysts. This case highlights the need for complete excision of these cysts and the lack of efficacy of cyst aspiration.

Richard A. Yeasting - One of the best experts on this subject based on the ideXlab platform.

  • Vulnerability of the recurrent laryngeal nerve in the anterior approach to the lower cervical spine
    Spine, 1997
    Co-Authors: Nabil A. Ebraheim, Martin Skie, Jike Lu, Bruce E Heck, Richard A. Yeasting
    Abstract:

    Study Design. To perform anatomic dissections and measurements of the recurrent laryngeal nerve between the inferior thyroid artery and the superior border of the clavicle (mid-portion) on both sides. Objectives. To determine quantitatively the differences in course and location between the recurrent laryngeal nerves on both sides and to relate this to the vulnerability of the recurrent laryngeal nerve during an anterior approach to the lower cervical spine. Summary of Background Data. The mid-portion of the recurrent laryngeal nerve is usually encountered in the anterior approach to the lower cervical spine, especially on the right side. No quantitative regional anatomy describing the course and location of the mid-portion of the recurrent laryngeal nerve is available in the literature. Methods. Fifteen adult cadavers were used for dissections of the recurrent laryngeal nerve. The length of the recurrent laryngeal nerve between the superior border of the clavicle and the inferior thyroid artery, and the angle of the recurrent laryngeal nerve with respect to the sagittal plane, were measured bilaterally. In addition, six cross-sections at C7 were obtained to determine the linear distances between the esophagotracheal groove and the recurrent laryngeal nerve. Results. The recurrent laryngeal nerve on the right runs in a superior and medial direction, at an angle of 25.0° ± 4.7° relative to the sagittal plane, compared with 4.7° ± 3.7° on the left. The length of the recurrent laryngeal nerve between the superior border of the clavicle and the inferior thyroid artery is 23.0 ± 4.4 mm on the left and 22.8 ± 4.3 mm on the right. The recurrent laryngeal nerve lies deep within the esophagotracheal groove on the left, but 6.5 ± 1.2 mm anterior and 7.3 ± 0.8 mm lateral to the esophagotracheal groove on the right. Conclusions. The recurrent laryngeal nerve on the right side is highly vulnerable to injury if ligature of the inferior thyroid vessels is not performed as laterally as possible, or if retraction of the midline structures along with the recurrent laryngeal nerve is not performed intermittently. Avoiding injury to the recurrent laryngeal nerve, especially on the right side, is a major consideration during an anterior approach to the lower cervical spine.

Boxun Li - One of the best experts on this subject based on the ideXlab platform.

  • IJCNN - Large scale recurrent neural network on GPU
    2014 International Joint Conference on Neural Networks (IJCNN), 2014
    Co-Authors: Boxun Li, Jiayi Duan, Erjin Zhou, Ningyi Xu, Jiaxing Zhang, Bo Huang, Huazhong Yang
    Abstract:

    Large scale artificial neural networks (ANNs) have been widely used in data processing applications. The recurrent neural network (RNN) is a special type of neural network equipped with additional recurrent connections. This unique architecture enables the recurrent neural network to remember past processed information and makes it an expressive model for nonlinear sequence processing tasks. However, the high computational complexity makes it difficult to train a recurrent neural network effectively, which has significantly limited research on recurrent neural networks over the last 20 years. In recent years, graphics processing units (GPUs) have become a significant means of speeding up the training of large scale neural networks by exploiting their massive parallelism. In this paper, we propose an efficient GPU implementation of the large scale recurrent neural network and demonstrate the power of scaling up the recurrent neural network with GPUs. We first explore the potential parallelism of the recurrent neural network and propose a fine-grained two-stage pipeline implementation. Experimental results show that the proposed GPU implementation achieves a 2x to 11x speed-up over a basic CPU implementation using the Intel Math Kernel Library. We then use the proposed GPU implementation to scale up the recurrent neural network and improve its performance. Results on the Microsoft Research Sentence Completion Challenge demonstrate that the large scale recurrent network without a class layer beats the traditional class-based modest-sized recurrent network, achieving an accuracy of 47%, the best result by a single recurrent neural network on the same dataset.
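
    The paper does not spell out its two pipeline stages here; one plausible reading, sketched below under that assumption, is that the output-layer projection for step t-1 can overlap with the hidden-state recurrence for step t, since the output layer never feeds back into the hidden state. The loop runs sequentially in this toy version; on a GPU the two stages could execute in separate streams:

        import numpy as np

        V, H = 50, 16                               # toy vocabulary and hidden sizes
        rng = np.random.default_rng(1)
        Wxh = rng.normal(scale=0.1, size=(H, V))    # input -> hidden
        Whh = rng.normal(scale=0.1, size=(H, H))    # hidden -> hidden (recurrent)
        Why = rng.normal(scale=0.1, size=(V, H))    # hidden -> output

        xs = [np.eye(V)[t % V] for t in range(8)]   # toy one-hot input sequence

        h = np.zeros(H)
        pending = None                              # hidden state awaiting stage 2
        outputs = []
        for x in xs:
            # Stage 2: project the previous step's hidden state to output scores.
            # Independent of stage 1 below, so the two stages could overlap.
            if pending is not None:
                outputs.append(Why @ pending)
            # Stage 1: advance the recurrence to the current time step.
            h = np.tanh(Whh @ h + Wxh @ x)
            pending = h
        outputs.append(Why @ pending)               # drain the pipeline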

David C Rice - One of the best experts on this subject based on the ideXlab platform.

  • Recurrent bronchogenic cyst causing recurrent laryngeal nerve palsy
    European Journal of Cardio-Thoracic Surgery, 2002
    Co-Authors: David C Rice, Joe B. Putnam
    Abstract:

    A case of a 50-year-old male who developed left recurrent laryngeal nerve palsy due to a bronchogenic cyst is presented. The bronchogenic cyst recurred following incomplete excision and multiple attempts at percutaneous aspiration. Recurrent laryngeal nerve palsy is an unusual complication of bronchogenic cysts. This case highlights the need for complete excision of these cysts and the lack of efficacy of cyst aspiration.
