Vegetation Classification

The Experts below are selected from a list of 4,206 Experts worldwide, ranked by the ideXlab platform

David Stockwell - One of the best experts on this subject based on the ideXlab platform.

  • Spatial contextual superpixel model for natural roadside Vegetation Classification
    Pattern Recognition, 2016
    Co-Authors: Ligang Zhang, Brijesh Verma, David Stockwell
    Abstract:

    In this paper, we present a novel Spatial Contextual Superpixel Model (SCSM) for Vegetation Classification in natural roadside images. The SCSM accomplishes this by transforming the Classification task from the pixel to the superpixel domain, allowing more effective use of both local and global spatial contextual information between superpixels in an image. First, the image is segmented into a set of superpixels with strong homogeneous texture, from which Pixel Patch Selective (PPS) features are extracted to train class-specific binary classifiers that produce Contextual Superpixel Probability Maps (CSPMs) for all classes, coupled with spatial constraints. A set of superpixel candidates with the highest probabilities is then determined to represent the global characteristics of a testing image. A superpixel merging strategy is further proposed to progressively merge superpixels with low probabilities into their most similar neighbors, performing a double-check on whether a superpixel and its neighbor accept each other and enforcing a global contextual constraint. We demonstrate the high performance of the proposed model on two challenging natural roadside image datasets from the Department of Transport and Main Roads and on the Stanford background benchmark dataset. Highlights: a novel Spatial Contextual Superpixel Model (SCSM) for natural Vegetation Classification; a new reverse superpixel merging strategy to progressively merge superpixels; high performance on challenging natural datasets and the Stanford background data.
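    To make the pipeline concrete, the sketch below shows a minimal superpixel-based classification loop in Python: SLIC superpixels, a simple per-superpixel feature, and class-specific binary classifiers producing per-class probability maps. This is only a hedged sketch; the paper's PPS features, spatial constraints and reverse merging step are not reproduced, and mean color is used purely as an illustrative stand-in feature.

```python
import numpy as np
from skimage.segmentation import slic

def superpixel_features(image, labels):
    """Mean RGB per superpixel (an illustrative stand-in for PPS features)."""
    return np.asarray([image[labels == sp].mean(axis=0) for sp in np.unique(labels)])

def class_probability_maps(image, classifiers, n_segments=300):
    """One probability map per class over SLIC superpixels.
    classifiers: dict of fitted scikit-learn binary classifiers, one per class."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    feats = superpixel_features(image, labels)
    maps = {}
    for name, clf in classifiers.items():
        probs = clf.predict_proba(feats)[:, 1]   # P(class | superpixel features)
        maps[name] = probs[labels]               # broadcast back to the pixel grid
    return labels, maps
```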

  • IJCNN - Aggregating pixel-level prediction and cluster-level texton occurrence within superpixel voting for roadside Vegetation Classification
    2016 International Joint Conference on Neural Networks (IJCNN), 2016
    Co-Authors: Ligang Zhang, Brijesh Verma, David Stockwell, Sujan Chowdhury
    Abstract:

    Roadside Vegetation Classification has recently attracted increasing attention due to its significance in applications such as Vegetation growth management and fire hazard identification. Existing studies primarily focus on learning visible-feature-based classifiers or invisible-feature-based thresholds, which often generalize poorly to new data. This paper proposes an approach that aggregates pixel-level supervised Classification and cluster-level texton occurrence within a voting strategy over superpixels for Vegetation Classification, taking into account both generic features in the training data and local characteristics in the testing data. Class-specific artificial neural networks are trained to predict class probabilities for all pixels, while a texton-based adaptive K-means clustering process is introduced to group pixels into clusters and obtain texton occurrence. The pixel-level class probabilities and cluster-level texton occurrence are then integrated in superpixel-level voting to assign each superpixel to a class category. The proposed approach outperforms previous approaches on a roadside image dataset collected by the Department of Transport and Main Roads, Queensland, Australia, and achieves state-of-the-art performance on low-resolution images from the Croatia roadside grass dataset.
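    As a rough illustration of the final aggregation step, the sketch below assigns each superpixel the class with the best combined score from per-pixel probabilities (e.g. produced by class-specific neural networks) and an optional cluster-level texton score. The exact weighting used in the paper is not reproduced; `texton_scores` and `alpha` are hypothetical placeholders.

```python
import numpy as np

def superpixel_vote(pixel_probs, superpixels, texton_scores=None, alpha=0.5):
    """pixel_probs: (H, W, C) per-pixel class probabilities.
    superpixels: (H, W) integer superpixel labels starting at 0.
    texton_scores: optional (n_superpixels, C) cluster-level class scores,
    e.g. similarity of each superpixel's texton histogram to class texton models."""
    H, W, C = pixel_probs.shape
    out = np.zeros((H, W), dtype=int)
    for sp in np.unique(superpixels):
        mask = superpixels == sp
        score = pixel_probs[mask].mean(axis=0)                        # pixel-level evidence
        if texton_scores is not None:
            score = alpha * score + (1 - alpha) * texton_scores[sp]   # cluster-level evidence
        out[mask] = score.argmax()                                    # winning class for this superpixel
    return out
```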

  • ICONIP (1) - Class-Semantic Color-Texture Textons for Vegetation Classification
    Neural Information Processing, 2015
    Co-Authors: Ligang Zhang, Brijesh Verma, David Stockwell
    Abstract:

    This paper proposes a new color-texture texton based approach for roadside Vegetation Classification in natural images. Two individual sets of class-semantic textons are first generated for each class, one from color features and one from filter-bank texture features. The color and texture features of testing pixels are then mapped to the nearest generated texton, resulting in two texton occurrence matrices – one for color and one for texture. Classification is achieved by aggregating color-texture texton occurrences over all pixels in each over-segmented superpixel using a majority voting strategy. Our approach outperforms previous benchmark approaches, achieving accuracies of 81% and 74.5% in classifying seven object classes on a cropped-region dataset and six object classes on an image dataset collected by the Department of Transport and Main Roads, Queensland, Australia.
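    A minimal sketch of the texton idea follows, assuming K-means textons and Euclidean nearest-texton assignment; the paper's filter bank, feature normalization and per-superpixel occurrence matrices are simplified away, so treat this as an illustration rather than the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def learn_class_textons(features_by_class, textons_per_class=20):
    """features_by_class: {class_name: (n_pixels, n_dims) array of training features}."""
    centers, owners = [], []
    for cls, feats in features_by_class.items():
        km = KMeans(n_clusters=textons_per_class, n_init=10).fit(feats)
        centers.append(km.cluster_centers_)       # class-semantic textons for this class
        owners += [cls] * textons_per_class
    return np.vstack(centers), np.array(owners)

def texton_vote(region_feats, textons, owners):
    """Map each pixel feature to its nearest texton, then majority-vote the owning class."""
    d = ((region_feats[:, None, :] - textons[None, :, :]) ** 2).sum(axis=-1)
    nearest_owner = owners[d.argmin(axis=1)]
    classes, counts = np.unique(nearest_owner, return_counts=True)
    return classes[counts.argmax()]
```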

  • ICNC - Roadside Vegetation Classification using color intensity and moments
    2015 11th International Conference on Natural Computation (ICNC), 2015
    Co-Authors: Ligang Zhang, Brijesh Verma, David Stockwell
    Abstract:

    Roadside Vegetation Classification plays a significant role in many applications, such as grass fire risk assessment and monitoring Vegetation growth conditions. Most existing approaches focus on the use of Vegetation indices from the invisible spectrum, and only limited attention has been given to visual features such as color and texture. This paper presents a new approach for Vegetation Classification using a fusion of color and texture features. The color intensity features are extracted in the opponent color space, while the texture features comprise three color moments. We demonstrate 79% accuracy for the approach on a dataset created from real-world video data collected by the Department of Transport and Main Roads (DTMR), Queensland, Australia, and promising results on a set of natural images. We also highlight some typical challenges for roadside Vegetation Classification in natural conditions.
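    The color features can be sketched as follows, assuming the standard opponent color transform and the first three statistical moments per channel; the exact feature set and normalization used in the paper may differ.

```python
import numpy as np
from scipy.stats import skew

def opponent_channels(rgb):
    """rgb: (H, W, 3) float image in [0, 1]; standard opponent color transform."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2)
    o2 = (r + g - 2 * b) / np.sqrt(6)
    o3 = (r + g + b) / np.sqrt(3)
    return np.stack([o1, o2, o3], axis=-1)

def color_moments(region_pixels):
    """First three moments per channel for one region: (n_pixels, 3) -> 9-dim vector."""
    return np.concatenate([region_pixels.mean(axis=0),
                           region_pixels.std(axis=0),
                           skew(region_pixels, axis=0)])
```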

Ligang Zhang - One of the best experts on this subject based on the ideXlab platform.

  • Spatial contextual superpixel model for natural roadside Vegetation Classification
    Pattern Recognition, 2016
    Co-Authors: Ligang Zhang, Brijesh Verma, David Stockwell
    Abstract:

    In this paper, we present a novel Spatial Contextual Superpixel Model (SCSM) for Vegetation Classification in natural roadside images. The SCSM accomplishes this by transforming the Classification task from the pixel to the superpixel domain, allowing more effective use of both local and global spatial contextual information between superpixels in an image. First, the image is segmented into a set of superpixels with strong homogeneous texture, from which Pixel Patch Selective (PPS) features are extracted to train class-specific binary classifiers that produce Contextual Superpixel Probability Maps (CSPMs) for all classes, coupled with spatial constraints. A set of superpixel candidates with the highest probabilities is then determined to represent the global characteristics of a testing image. A superpixel merging strategy is further proposed to progressively merge superpixels with low probabilities into their most similar neighbors, performing a double-check on whether a superpixel and its neighbor accept each other and enforcing a global contextual constraint. We demonstrate the high performance of the proposed model on two challenging natural roadside image datasets from the Department of Transport and Main Roads and on the Stanford background benchmark dataset. Highlights: a novel Spatial Contextual Superpixel Model (SCSM) for natural Vegetation Classification; a new reverse superpixel merging strategy to progressively merge superpixels; high performance on challenging natural datasets and the Stanford background data.

  • Aggregating pixel-level prediction and cluster-level texton occurrence within superpixel voting for roadside Vegetation Classification
    2016 International Joint Conference on Neural Networks (IJCNN), 2016
    Co-Authors: Ligang Zhang, Brijesh Verma, David Stockwell, Sujan Chowdhury
    Abstract:

    Roadside Vegetation Classification has recently attracted increasing attention due to its significance in applications such as Vegetation growth management and fire hazard identification. Existing studies primarily focus on learning visible-feature-based classifiers or invisible-feature-based thresholds, which often generalize poorly to new data. This paper proposes an approach that aggregates pixel-level supervised Classification and cluster-level texton occurrence within a voting strategy over superpixels for Vegetation Classification, taking into account both generic features in the training data and local characteristics in the testing data. Class-specific artificial neural networks are trained to predict class probabilities for all pixels, while a texton-based adaptive K-means clustering process is introduced to group pixels into clusters and obtain texton occurrence. The pixel-level class probabilities and cluster-level texton occurrence are then integrated in superpixel-level voting to assign each superpixel to a class category. The proposed approach outperforms previous approaches on a roadside image dataset collected by the Department of Transport and Main Roads, Queensland, Australia, and achieves state-of-the-art performance on low-resolution images from the Croatia roadside grass dataset.

  • ICONIP (1) - Class-Semantic Color-Texture Textons for Vegetation Classification
    Neural Information Processing, 2015
    Co-Authors: Ligang Zhang, Brijesh Verma, David Stockwell
    Abstract:

    This paper proposes a new color-texture texton based approach for roadside Vegetation Classification in natural images. Two individual sets of class-semantic textons are first generated for each class, one from color features and one from filter-bank texture features. The color and texture features of testing pixels are then mapped to the nearest generated texton, resulting in two texton occurrence matrices – one for color and one for texture. Classification is achieved by aggregating color-texture texton occurrences over all pixels in each over-segmented superpixel using a majority voting strategy. Our approach outperforms previous benchmark approaches, achieving accuracies of 81% and 74.5% in classifying seven object classes on a cropped-region dataset and six object classes on an image dataset collected by the Department of Transport and Main Roads, Queensland, Australia.

Brijesh Verma - One of the best experts on this subject based on the ideXlab platform.

  • Spatial contextual superpixel model for natural roadside Vegetation Classification
    Pattern Recognition, 2016
    Co-Authors: Ligang Zhang, Brijesh Verma, David Stockwell
    Abstract:

    In this paper, we present a novel Spatial Contextual Superpixel Model (SCSM) for Vegetation Classification in natural roadside images. The SCSM accomplishes this by transforming the Classification task from the pixel to the superpixel domain, allowing more effective use of both local and global spatial contextual information between superpixels in an image. First, the image is segmented into a set of superpixels with strong homogeneous texture, from which Pixel Patch Selective (PPS) features are extracted to train class-specific binary classifiers that produce Contextual Superpixel Probability Maps (CSPMs) for all classes, coupled with spatial constraints. A set of superpixel candidates with the highest probabilities is then determined to represent the global characteristics of a testing image. A superpixel merging strategy is further proposed to progressively merge superpixels with low probabilities into their most similar neighbors, performing a double-check on whether a superpixel and its neighbor accept each other and enforcing a global contextual constraint. We demonstrate the high performance of the proposed model on two challenging natural roadside image datasets from the Department of Transport and Main Roads and on the Stanford background benchmark dataset. Highlights: a novel Spatial Contextual Superpixel Model (SCSM) for natural Vegetation Classification; a new reverse superpixel merging strategy to progressively merge superpixels; high performance on challenging natural datasets and the Stanford background data.

  • Aggregating pixel-level prediction and cluster-level texton occurrence within superpixel voting for roadside Vegetation Classification
    2016 International Joint Conference on Neural Networks (IJCNN), 2016
    Co-Authors: Ligang Zhang, Brijesh Verma, David Stockwell, Sujan Chowdhury
    Abstract:

    Roadside Vegetation Classification has recently attracted increasing attention due to its significance in applications such as Vegetation growth management and fire hazard identification. Existing studies primarily focus on learning visible-feature-based classifiers or invisible-feature-based thresholds, which often generalize poorly to new data. This paper proposes an approach that aggregates pixel-level supervised Classification and cluster-level texton occurrence within a voting strategy over superpixels for Vegetation Classification, taking into account both generic features in the training data and local characteristics in the testing data. Class-specific artificial neural networks are trained to predict class probabilities for all pixels, while a texton-based adaptive K-means clustering process is introduced to group pixels into clusters and obtain texton occurrence. The pixel-level class probabilities and cluster-level texton occurrence are then integrated in superpixel-level voting to assign each superpixel to a class category. The proposed approach outperforms previous approaches on a roadside image dataset collected by the Department of Transport and Main Roads, Queensland, Australia, and achieves state-of-the-art performance on low-resolution images from the Croatia roadside grass dataset.

  • ICONIP (1) - Class-Semantic Color-Texture Textons for Vegetation Classification
    Neural Information Processing, 2015
    Co-Authors: Ligang Zhang, Brijesh Verma, David Stockwell
    Abstract:

    This paper proposes a new color-texture texton based approach for roadside Vegetation Classification in natural images. Two individual sets of class-semantic textons are first generated for each class, one from color features and one from filter-bank texture features. The color and texture features of testing pixels are then mapped to the nearest generated texton, resulting in two texton occurrence matrices – one for color and one for texture. Classification is achieved by aggregating color-texture texton occurrences over all pixels in each over-segmented superpixel using a majority voting strategy. Our approach outperforms previous benchmark approaches, achieving accuracies of 81% and 74.5% in classifying seven object classes on a cropped-region dataset and six object classes on an image dataset collected by the Department of Transport and Main Roads, Queensland, Australia.

Sujan Chowdhury - One of the best experts on this subject based on the ideXlab platform.

  • Aggregating pixel-level prediction and cluster-level texton occurrence within superpixel voting for roadside Vegetation Classification
    2016 International Joint Conference on Neural Networks (IJCNN), 2016
    Co-Authors: Ligang Zhang, Brijesh Verma, David Stockwell, Sujan Chowdhury
    Abstract:

    Roadside Vegetation Classification has recently attracted increasing attention due to its significance in applications such as Vegetation growth management and fire hazard identification. Existing studies primarily focus on learning visible-feature-based classifiers or invisible-feature-based thresholds, which often generalize poorly to new data. This paper proposes an approach that aggregates pixel-level supervised Classification and cluster-level texton occurrence within a voting strategy over superpixels for Vegetation Classification, taking into account both generic features in the training data and local characteristics in the testing data. Class-specific artificial neural networks are trained to predict class probabilities for all pixels, while a texton-based adaptive K-means clustering process is introduced to group pixels into clusters and obtain texton occurrence. The pixel-level class probabilities and cluster-level texton occurrence are then integrated in superpixel-level voting to assign each superpixel to a class category. The proposed approach outperforms previous approaches on a roadside image dataset collected by the Department of Transport and Main Roads, Queensland, Australia, and achieves state-of-the-art performance on low-resolution images from the Croatia roadside grass dataset.

  • A novel texture feature based multiple classifier technique for roadside Vegetation Classification
    Expert Systems With Applications, 2015
    Co-Authors: Sujan Chowdhury, Brijesh Verma, David Stockwell
    Abstract:

    Highlights: the proposed technique uses an LBP-based GLCM feature vector and multiple classifiers; we achieve over 92% accuracy for Vegetation Classification; extensive experiments use 5-fold cross validation; the experiments were conducted on dense and sparse grasses; in future, the work will be extended by introducing a large dataset of grasses. This paper presents a novel texture feature based multiple classifier technique and applies it to roadside Vegetation Classification. Automating roadside Vegetation Classification is an important emerging issue for fire risk assessment and road safety, so the application presented in this paper is significant for identifying fire risks and improving road safety. Images collected from outdoor environments such as roadsides are affected by highly variable illumination caused by different weather conditions. This paper proposes a novel texture feature based robust expert system for Vegetation identification. It consists of five steps, namely image pre-processing, feature extraction, training with multiple classifiers, Classification, and validation with statistical analysis. In the initial stage, the Co-occurrence of Binary Pattern (CBP) technique is applied in order to obtain the texture features relevant to Vegetation in the roadside images. In the training and Classification stages, three classifiers are fused to combine their decisions: the first classifier is based on a Support Vector Machine (SVM), the second on a feed-forward back-propagation neural network (FF-BPNN) and the third on k-Nearest Neighbor (k-NN). The proposed technique has been applied and evaluated on two types of Vegetation images, i.e. dense and sparse grasses. A Classification accuracy of 92.72% has been obtained using a 5-fold cross validation approach. An ANOVA (Analysis of Variance) test has also been conducted to show the statistical significance of the results.
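    A hedged sketch of a texture descriptor in the spirit of the Co-occurrence of Binary Pattern idea follows, computed here as GLCM statistics over a uniform LBP map using scikit-image (0.19+ spelling); the paper's actual CBP definition and parameters are not reproduced, and the P, R, distance and angle values below are illustrative only.

```python
import numpy as np
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops

def lbp_glcm_features(gray, P=8, R=1):
    """gray: 2-D uint8 image patch. Returns a small GLCM-statistics feature vector."""
    lbp = local_binary_pattern(gray, P, R, method="uniform")   # values in 0 .. P+1
    lbp = lbp.astype(np.uint8)
    glcm = graycomatrix(lbp, distances=[1, 2], angles=[0, np.pi / 2],
                        levels=P + 2, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```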

  • A Novel Hybrid Learning Technique for Roadside Vegetation Classification
    2014
    Co-Authors: Sujan Chowdhury, Brijesh Verma, David Stockwell
    Abstract:

    Roadside Vegetation Classification is an essential task for roadside fire risk assessment and environmental surveys. Vegetation attributes such as the type of grasses and their biomass are used to identify fire risk; however, it is very difficult to distinguish Vegetation types, in particular the types of roadside grasses. The purpose of this study is to develop a technique which can distinguish Vegetation structure and automatically identify fire risk. This paper presents a novel hybrid learning technique for the Classification of roadside Vegetation with a new feature extraction strategy. The hybrid technique is based on texture features and the fusion of three classifiers: Support Vector Machine (SVM), Neural Network (NN) and k-Nearest Neighbor (k-NN). Segmented image regions are created from the image data and texture features are extracted from them. The three diverse classifiers are trained on the extracted features and their decisions are fused using a majority vote. The proposed hybrid learning technique has been evaluated on roadside data obtained from Queensland Transport and Main Roads and the results are discussed.
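    The three-classifier fusion can be sketched with scikit-learn's VotingClassifier; the feature representation, architectures and hyper-parameters below are illustrative placeholders, not the settings reported in the paper.

```python
from sklearn.ensemble import VotingClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KNeighborsClassifier

def build_fused_classifier():
    """SVM + feed-forward neural network + k-NN, fused by hard majority vote."""
    return VotingClassifier(
        estimators=[
            ("svm", SVC(kernel="rbf", gamma="scale")),
            ("nn", MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000)),
            ("knn", KNeighborsClassifier(n_neighbors=5)),
        ],
        voting="hard",   # majority vote over the three class decisions
    )

# Usage (X_*: texture feature vectors, y_*: class labels):
# clf = build_fused_classifier().fit(X_train, y_train)
# accuracy = clf.score(X_test, y_test)
```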

Tim G Benton - One of the best experts on this subject based on the ideXlab platform.

  • Classifying grass-dominated habitats from remotely sensed data: the influence of spectral resolution, acquisition time and the Vegetation Classification system on accuracy and thematic resolution
    Science of The Total Environment, 2020
    Co-Authors: Ute Bradter, Jerome Oconnell, William E Kunin, Caroline W H Boffey, Richard J Ellis, Tim G Benton
    Abstract:

    Detailed maps of Vegetation facilitate spatial conservation planning, but such information can be difficult to map from remotely sensed data with the detail (thematic resolution) required for ecological applications. For grass-dominated habitats in the south-east of the UK, the following choices were evaluated for their effect on Classification accuracy at various thematic resolutions: 1) hyperspectral data versus data with a reduced spectral resolution of eight and 13 bands, simulated from the hyperspectral data; 2) a Vegetation Classification system using a detailed description of Vegetation (sub-)communities (the British National Vegetation Classification, NVC) versus clustering based on the dominant plant species (Dom-Species); 3) the month of imagery acquisition. Hyperspectral data produced the highest accuracies for Vegetation away from edges using the NVC (84–87%). Simulated 13-band data also performed well (83–86% accuracy). Simulated 8-band data performed worse at finer thematic resolutions (77–78% accuracy), but produced accuracies similar to those of the simulated 13-band or hyperspectral data at coarser thematic resolutions (82–86%). Grouping Vegetation by NVC (84–87% accuracy for hyperspectral data) usually achieved higher accuracies than Dom-Species (81–84% for hyperspectral data). The highest discrimination rates were achieved around the time the Vegetation was fully developed. The results suggest that using a detailed description of Vegetation (sub-)communities instead of one based on the dominant species can result in more accurate mapping. The NVC may reflect differences in site conditions in addition to differences in the composition of dominant species, which may benefit Vegetation Classification. The results also suggest that using hyperspectral data or the 13-band multispectral data can help to achieve the fine thematic resolutions that are often required in ecological applications. Accurate Vegetation maps with a high thematic resolution can benefit a range of applications, such as species and habitat conservation.
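    The band-simulation idea (deriving coarser 8- or 13-band data from hyperspectral imagery) can be sketched as a simple average of the narrow bands falling inside each broad band's wavelength range; the ranges below are placeholders, not the sensor definitions used in the study.

```python
import numpy as np

def simulate_broad_bands(cube, wavelengths_nm, band_ranges_nm):
    """cube: (H, W, B) hyperspectral image; wavelengths_nm: (B,) narrow-band centres;
    band_ranges_nm: list of (low, high) wavelength ranges for the simulated sensor."""
    out = []
    for low, high in band_ranges_nm:
        sel = (wavelengths_nm >= low) & (wavelengths_nm <= high)
        out.append(cube[..., sel].mean(axis=-1))   # mean of the contributing narrow bands
    return np.stack(out, axis=-1)

# Example placeholder ranges (blue, green, red, NIR):
# sim = simulate_broad_bands(cube, wl, [(450, 510), (530, 590), (640, 670), (850, 880)])
```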