Sequence Network

The experts below are selected from a list of 109,572 experts worldwide, ranked by the ideXlab platform.

Eduardo Bezerra - One of the best experts on this subject based on the ideXlab platform.

  • STConvS2S: Spatiotemporal Convolutional Sequence to Sequence Network for weather forecasting
    Neurocomputing, 2021
    Co-Authors: Rafaela David De Castro, Yania Molina Souto, Eduardo Ogasawara, Fabio Porto, Eduardo Bezerra
    Abstract:

    Applying machine learning models to meteorological data brings many opportunities to the geosciences, such as predicting future weather conditions more accurately. In recent years, modeling meteorological data with deep neural networks has become a relevant area of investigation. These works apply either recurrent neural networks (RNN) or some hybrid approach mixing RNN and convolutional neural networks (CNN). In this work, we propose STConvS2S (Spatiotemporal Convolutional Sequence to Sequence Network), a deep learning architecture built for learning both spatial and temporal data dependencies using only convolutional layers. Our proposed architecture resolves two limitations of convolutional networks when predicting sequences from historical data: (1) they violate the temporal order during the learning process, and (2) they require the input and output sequences to have the same length. Computational experiments using air temperature and rainfall data from South America show that our architecture captures spatiotemporal context and that it outperforms or matches state-of-the-art architectures on forecasting tasks. In particular, one variant of our proposed architecture is 23% better at predicting future sequences and five times faster to train than the RNN-based baseline.
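
To make the abstract's two limitations concrete, below is a minimal PyTorch sketch of both ideas: a convolution that pads only on the past side of the time axis so the temporal order is respected, and a transposed convolution along time that decouples the output horizon from the input length. The layer sizes and the `TinySTConvS2S` name are illustrative assumptions, not the published STConvS2S configuration.

```python
# Hedged sketch (assumed layer sizes, not the published STConvS2S config).
import torch
import torch.nn as nn

class CausalConv3d(nn.Module):
    """3D conv over (time, lat, lon) that pads only the past side of time,
    so the output at step t never depends on steps after t."""
    def __init__(self, in_ch, out_ch, kt=3, ks=3):
        super().__init__()
        self.pad_t = kt - 1
        self.conv = nn.Conv3d(in_ch, out_ch, (kt, ks, ks),
                              padding=(0, ks // 2, ks // 2))

    def forward(self, x):  # x: (batch, channels, time, lat, lon)
        x = nn.functional.pad(x, (0, 0, 0, 0, self.pad_t, 0))
        return torch.relu(self.conv(x))

class TinySTConvS2S(nn.Module):
    """Toy causal encoder plus a temporal generator stage."""
    def __init__(self, channels=1, hidden=16, horizon=15):
        super().__init__()
        self.encoder = nn.Sequential(CausalConv3d(channels, hidden),
                                     CausalConv3d(hidden, hidden))
        # Transposed conv along time stretches one step to `horizon` steps,
        # so the forecast length is independent of the input length.
        self.expand = nn.ConvTranspose3d(hidden, hidden, (horizon, 1, 1))
        self.head = nn.Conv3d(hidden, channels, 1)

    def forward(self, x):
        h = self.encoder(x)
        h = torch.relu(self.expand(h[:, :, -1:]))  # seed from last encoded step
        return self.head(h)

model = TinySTConvS2S()
past = torch.randn(2, 1, 5, 32, 32)   # 5 past 32x32 temperature grids
future = model(past)                  # -> (2, 1, 15, 32, 32): 15-step forecast
```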

Thomas S Huang - One of the best experts on this subject based on the ideXlab platform.

  • YouTube-VOS: Sequence-to-Sequence Video Object Segmentation
    European Conference on Computer Vision, 2018
    Co-Authors: Linjie Yang, Yuchen Fan, Jianchao Yang, Dingcheng Yue, Yuchen Liang, Brian Price, Scott Cohen, Thomas S Huang
    Abstract:

    Learning long-term spatial-temporal features is critical for many video analysis tasks. However, existing video segmentation methods predominantly rely on static image segmentation techniques, and methods that capture temporal dependency for segmentation have to depend on pretrained optical flow models, leading to suboptimal solutions. End-to-end sequential learning of spatial-temporal features for video segmentation is largely limited by the scale of available video segmentation datasets: even the largest such dataset contains only 90 short video clips. To solve this problem, we build a new large-scale video object segmentation dataset called YouTube Video Object Segmentation (YouTube-VOS). Our dataset contains 3,252 YouTube video clips and 78 categories covering common objects and human activities (these are the statistics at the time of submission; see updated statistics on our website). To our knowledge, this is by far the largest video object segmentation dataset, and we have released it at https://youtube-vos.org. Based on this dataset, we propose a novel sequence-to-sequence network to fully exploit long-term spatial-temporal information in videos for segmentation. We demonstrate that our method achieves the best results on our YouTube-VOS test set and results comparable to the current state-of-the-art methods on DAVIS 2016. Experiments show that the large-scale dataset is indeed a key factor in the success of our model.
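
As a rough illustration of the sequence-to-sequence idea in this abstract (not the authors' released model), the sketch below pairs a placeholder convolutional frame encoder with a ConvGRU-style recurrent cell whose hidden state is seeded from the first-frame mask and propagated through the clip; all layer names and sizes are assumptions.

```python
# Hedged sketch (placeholder encoder and ConvGRU cell, not the paper's model).
import torch
import torch.nn as nn

class ConvGRUCell(nn.Module):
    """Convolutional GRU: the recurrent core carrying spatial-temporal state
    from frame to frame."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.zr = nn.Conv2d(in_ch + hid_ch, 2 * hid_ch, k, padding=k // 2)
        self.hn = nn.Conv2d(in_ch + hid_ch, hid_ch, k, padding=k // 2)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.zr(torch.cat([x, h], 1))).chunk(2, 1)
        n = torch.tanh(self.hn(torch.cat([x, r * h], 1)))
        return (1 - z) * h + z * n

class Seq2SeqSegmenter(nn.Module):
    def __init__(self, hid=32):
        super().__init__()
        self.encode = nn.Sequential(nn.Conv2d(3, hid, 3, padding=1), nn.ReLU())
        self.cell = ConvGRUCell(hid, hid)
        self.decode = nn.Conv2d(hid, 1, 1)  # per-pixel object logit

    def forward(self, frames, first_mask):
        # frames: (batch, time, 3, H, W); first_mask: (batch, 1, H, W)
        h = self.encode(frames[:, 0]) * first_mask  # state seeded by the mask
        masks = []
        for t in range(frames.shape[1]):
            h = self.cell(self.encode(frames[:, t]), h)
            masks.append(self.decode(h))
        return torch.stack(masks, 1)  # (batch, time, 1, H, W)

net = Seq2SeqSegmenter()
out = net(torch.randn(2, 4, 3, 64, 64), torch.rand(2, 1, 64, 64))
```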

  • YouTube-VOS: Sequence-to-Sequence Video Object Segmentation
    arXiv: Computer Vision and Pattern Recognition, 2018
    Co-Authors: Ning Xu, Linjie Yang, Yuchen Fan, Jianchao Yang, Dingcheng Yue, Yuchen Liang, Brian Price, Scott Cohen, Thomas S Huang
    Abstract:

    Learning long-term spatial-temporal features is critical for many video analysis tasks. However, existing video segmentation methods predominantly rely on static image segmentation techniques, and methods that capture temporal dependency for segmentation have to depend on pretrained optical flow models, leading to suboptimal solutions. End-to-end sequential learning of spatial-temporal features for video segmentation is largely limited by the scale of available video segmentation datasets: even the largest such dataset contains only 90 short video clips. To solve this problem, we build a new large-scale video object segmentation dataset called YouTube Video Object Segmentation (YouTube-VOS). Our dataset contains 3,252 YouTube video clips and 78 categories covering common objects and human activities. To our knowledge, this is by far the largest video object segmentation dataset, and we have released it at https://youtube-vos.org. Based on this dataset, we propose a novel sequence-to-sequence network to fully exploit long-term spatial-temporal information in videos for segmentation. We demonstrate that our method achieves the best results on our YouTube-VOS test set and results comparable to the current state-of-the-art methods on DAVIS 2016. Experiments show that the large-scale dataset is indeed a key factor in the success of our model.

Rafaela David De Castro - One of the best experts on this subject based on the ideXlab platform.

  • STConvS2S: Spatiotemporal Convolutional Sequence to Sequence Network for weather forecasting
    Neurocomputing, 2021
    Co-Authors: Rafaela David De Castro, Yania Molina Souto, Eduardo Ogasawara, Fabio Porto, Eduardo Bezerra
    Abstract:

    Applying machine learning models to meteorological data brings many opportunities to the geosciences, such as predicting future weather conditions more accurately. In recent years, modeling meteorological data with deep neural networks has become a relevant area of investigation. These works apply either recurrent neural networks (RNN) or some hybrid approach mixing RNN and convolutional neural networks (CNN). In this work, we propose STConvS2S (Spatiotemporal Convolutional Sequence to Sequence Network), a deep learning architecture built for learning both spatial and temporal data dependencies using only convolutional layers. Our proposed architecture resolves two limitations of convolutional networks when predicting sequences from historical data: (1) they violate the temporal order during the learning process, and (2) they require the input and output sequences to have the same length. Computational experiments using air temperature and rainfall data from South America show that our architecture captures spatiotemporal context and that it outperforms or matches state-of-the-art architectures on forecasting tasks. In particular, one variant of our proposed architecture is 23% better at predicting future sequences and five times faster to train than the RNN-based baseline.

Ronald G. Harley - One of the best experts on this subject based on the ideXlab platform.

  • Short-Circuit Analysis of Induction Machines: Wind Power Application
    IEEE PES Transmission and Distribution Conference and Exposition, 2012
    Co-Authors: Dustin F. Howard, T Smith, Michael Starke, Ronald G. Harley
    Abstract:

    The short-circuit behavior of Type I (fixed-speed) wind turbine-generators is analyzed in this paper to aid in the protection coordination of wind plants of this type. A simple network consisting of one wind turbine-generator is analyzed for two network faults: a three-phase short circuit and a phase-A-to-ground fault. Electromagnetic transient simulations and sequence-network calculations are compared for the two fault scenarios. It is found that traditional sequence-network calculations give accurate results for the short-circuit currents in the balanced fault case, but are inaccurate for the unfaulted phases in the unbalanced fault case. The time-current behavior of the fundamental-frequency component of the short-circuit currents is described for both fault cases and is found to differ significantly between the balanced and unbalanced cases.
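
For reference, here is a minimal NumPy sketch of the traditional sequence-network calculation for the two faults the abstract compares against electromagnetic transient simulation. The impedance values are placeholders, not the paper's machine data. Note that the textbook single-line-to-ground solution predicts zero fault-point current in the unfaulted phases, which is exactly where the abstract reports the traditional method to be inaccurate.

```python
# Textbook sequence-network fault calculation; impedances are assumed values.
import numpy as np

a = np.exp(2j * np.pi / 3)                              # 120-degree operator
A = np.array([[1, 1, 1], [1, a**2, a], [1, a, a**2]])   # [I0,I1,I2] -> [Ia,Ib,Ic]

E = 1.0 + 0j                      # prefault voltage, per unit
Z1, Z2, Z0 = 0.2j, 0.2j, 0.5j     # sequence impedances (placeholders)

# Balanced three-phase fault: only the positive-sequence network acts,
# and the traditional calculation is accurate here.
print(f"3-phase fault current: {abs(E / Z1):.2f} pu")

# Phase-A-to-ground fault: the three sequence networks are in series,
# so I0 = I1 = I2 = E / (Z0 + Z1 + Z2).
I_seq = np.full(3, E / (Z0 + Z1 + Z2))
Ia, Ib, Ic = A @ I_seq
print(f"SLG fault: |Ia| = {abs(Ia):.2f} pu, |Ib| = {abs(Ib):.2f} pu, "
      f"|Ic| = {abs(Ic):.2f} pu")  # Ib, Ic come out zero in this idealization
```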

  • Improved Sequence Network Model of Wind Turbine Generators for Short-Circuit Studies
    IEEE Transactions on Energy Conversion, 2012
    Co-Authors: Dustin F. Howard, Thomas G. Habetler, Ronald G. Harley
    Abstract:

    Protective relay settings for wind turbine generators often depend on short-circuit calculations performed using sequence-network circuits in rms-based protection software. Traditional assumptions made in deriving the sequence-network representation of induction machines result in some error in short-circuit calculations, which could potentially cause incorrect relay settings. A more accurate sequence-network model of induction machines is derived in this paper to aid in short-circuit calculations in induction-generator-based wind plants. A rigorous mathematical approach to defining the stator short-circuit current equations under general faults is described and validated with transient simulations. The sequence-network model of the induction machine is derived from the closed-form mathematical solutions. An example short-circuit calculation is performed on a simple network in this paper and shows a marked improvement in accuracy over traditional sequence-network calculation methods for unbalanced faults. Resonant effects of power-factor-correction capacitors are also described.
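
For context, this sketch computes the locked-rotor impedance that traditional short-circuit studies typically assume for both the positive- and negative-sequence networks of an induction machine; the per-unit parameters are invented for illustration, and the paper's improved model instead derives the sequence-network representation from closed-form current solutions.

```python
# Traditional induction-machine sequence impedance (assumed per-unit values).
def locked_rotor_impedance(Rs, Xs, Rr, Xr, Xm):
    """Stator branch in series with the rotor and magnetizing branches in
    parallel: the impedance traditional studies use for both the positive-
    and negative-sequence networks of an induction machine."""
    Zr, Zm = Rr + 1j * Xr, 1j * Xm
    return Rs + 1j * Xs + (Zr * Zm) / (Zr + Zm)

# Placeholder equivalent-circuit parameters, not the paper's machine data.
Z = locked_rotor_impedance(Rs=0.01, Xs=0.10, Rr=0.01, Xr=0.10, Xm=3.0)
print(f"|Z1| = |Z2| = {abs(Z):.3f} pu")
```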

Linjie Yang - One of the best experts on this subject based on the ideXlab platform.

  • YouTube-VOS: Sequence-to-Sequence Video Object Segmentation
    European Conference on Computer Vision, 2018
    Co-Authors: Linjie Yang, Yuchen Fan, Jianchao Yang, Dingcheng Yue, Yuchen Liang, Brian Price, Scott Cohen, Thomas S Huang
    Abstract:

    Learning long-term spatial-temporal features is critical for many video analysis tasks. However, existing video segmentation methods predominantly rely on static image segmentation techniques, and methods that capture temporal dependency for segmentation have to depend on pretrained optical flow models, leading to suboptimal solutions. End-to-end sequential learning of spatial-temporal features for video segmentation is largely limited by the scale of available video segmentation datasets: even the largest such dataset contains only 90 short video clips. To solve this problem, we build a new large-scale video object segmentation dataset called YouTube Video Object Segmentation (YouTube-VOS). Our dataset contains 3,252 YouTube video clips and 78 categories covering common objects and human activities (these are the statistics at the time of submission; see updated statistics on our website). To our knowledge, this is by far the largest video object segmentation dataset, and we have released it at https://youtube-vos.org. Based on this dataset, we propose a novel sequence-to-sequence network to fully exploit long-term spatial-temporal information in videos for segmentation. We demonstrate that our method achieves the best results on our YouTube-VOS test set and results comparable to the current state-of-the-art methods on DAVIS 2016. Experiments show that the large-scale dataset is indeed a key factor in the success of our model.

  • YouTube-VOS: Sequence-to-Sequence Video Object Segmentation
    arXiv: Computer Vision and Pattern Recognition, 2018
    Co-Authors: Ning Xu, Linjie Yang, Yuchen Fan, Jianchao Yang, Dingcheng Yue, Yuchen Liang, Brian Price, Scott Cohen, Thomas S Huang
    Abstract:

    Learning long-term spatial-temporal features is critical for many video analysis tasks. However, existing video segmentation methods predominantly rely on static image segmentation techniques, and methods that capture temporal dependency for segmentation have to depend on pretrained optical flow models, leading to suboptimal solutions. End-to-end sequential learning of spatial-temporal features for video segmentation is largely limited by the scale of available video segmentation datasets: even the largest such dataset contains only 90 short video clips. To solve this problem, we build a new large-scale video object segmentation dataset called YouTube Video Object Segmentation (YouTube-VOS). Our dataset contains 3,252 YouTube video clips and 78 categories covering common objects and human activities. To our knowledge, this is by far the largest video object segmentation dataset, and we have released it at https://youtube-vos.org. Based on this dataset, we propose a novel sequence-to-sequence network to fully exploit long-term spatial-temporal information in videos for segmentation. We demonstrate that our method achieves the best results on our YouTube-VOS test set and results comparable to the current state-of-the-art methods on DAVIS 2016. Experiments show that the large-scale dataset is indeed a key factor in the success of our model.