Hard Drive Space

The Experts below are selected from a list of 204 Experts worldwide, ranked by the ideXlab platform.

Erik J. Sacks - One of the best experts on this subject based on the ideXlab platform.

  • TagDigger: user-friendly extraction of read counts from GBS and RAD-seq data
    Source Code for Biology and Medicine, 2016
    Co-Authors: Lindsay V. Clark, Erik J. Sacks
    Abstract:

    Background: In genotyping-by-sequencing (GBS) and restriction site-associated DNA sequencing (RAD-seq), read depth is important for assessing the quality of genotype calls and estimating allele dosage in polyploids. However, existing pipelines for GBS and RAD-seq do not provide read counts in formats that are both accurate and easy to access. Additionally, although existing pipelines allow previously mined SNPs to be genotyped on new samples, they do not allow the user to manually specify a subset of loci to examine. Pipelines that do not use a reference genome assign arbitrary names to SNPs, making meta-analysis across projects difficult. Results: We created the software TagDigger, which includes three programs for analyzing GBS and RAD-seq data. The first script, tagdigger_interactive.py, rapidly extracts read counts and genotypes from FASTQ files using user-supplied sets of barcodes and tags. Input and output are in CSV format so that they can be opened by spreadsheet software. Tag sequences can also be imported from the Stacks, TASSEL-GBSv2, TASSEL-UNEAK, or pyRAD pipelines, and a separate file can be imported listing the names of markers to retain. A second script, tag_manager.py, consolidates marker names and sequences across multiple projects. A third script, barcode_splitter.py, assists with preparing FASTQ data for deposit in a public archive by splitting FASTQ files by barcode and generating MD5 checksums for the resulting files. Conclusions: TagDigger is open-source and freely available software written in Python 3. It uses a scalable, rapid search algorithm that can process over 100 million FASTQ reads per hour. TagDigger will run on a laptop with any operating system, does not consume Hard Drive Space with intermediate files, and does not require programming skill to use.
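
    As a rough illustration of the kind of per-tag read counting described above (this is not TagDigger's actual algorithm or interface; the barcodes, tags, and file names below are invented for the example), a minimal Python 3 sketch could look like the following:

      # Sketch only: count exact tag matches per sample in a multiplexed FASTQ file.
      # Hypothetical barcodes, tags, and file names; TagDigger's real options differ.
      import csv
      from collections import defaultdict

      barcodes = {"ACGT": "Sample1", "TGCA": "Sample2"}              # barcode -> sample (assumed)
      tags = {"TTAGGCACCA": "Marker001", "GGATCCTTGA": "Marker002"}  # tag -> marker (assumed)

      counts = defaultdict(int)                                      # (sample, marker) -> read count

      with open("reads.fastq") as fq:                                # hypothetical input file
          for i, line in enumerate(fq):
              if i % 4 != 1:                                         # sequence lines only
                  continue
              seq = line.strip()
              for bc, sample in barcodes.items():
                  if seq.startswith(bc):
                      rest = seq[len(bc):]
                      for tag, marker in tags.items():
                          if rest.startswith(tag):
                              counts[(sample, marker)] += 1
                      break

      # Write a CSV of read counts: one row per sample, one column per marker.
      with open("read_counts.csv", "w", newline="") as out:
          writer = csv.writer(out)
          markers = sorted(set(tags.values()))
          writer.writerow(["Sample"] + markers)
          for sample in sorted(set(barcodes.values())):
              writer.writerow([sample] + [counts[(sample, m)] for m in markers])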

Lindsay V. Clark - One of the best experts on this subject based on the ideXlab platform.

  • TagDigger: user-friendly extraction of read counts from GBS and RAD-seq data
    Source Code for Biology and Medicine, 2016
    Co-Authors: Lindsay V. Clark, Erik J. Sacks
    Abstract:

    Background: In genotyping-by-sequencing (GBS) and restriction site-associated DNA sequencing (RAD-seq), read depth is important for assessing the quality of genotype calls and estimating allele dosage in polyploids. However, existing pipelines for GBS and RAD-seq do not provide read counts in formats that are both accurate and easy to access. Additionally, although existing pipelines allow previously mined SNPs to be genotyped on new samples, they do not allow the user to manually specify a subset of loci to examine. Pipelines that do not use a reference genome assign arbitrary names to SNPs, making meta-analysis across projects difficult. Results: We created the software TagDigger, which includes three programs for analyzing GBS and RAD-seq data. The first script, tagdigger_interactive.py, rapidly extracts read counts and genotypes from FASTQ files using user-supplied sets of barcodes and tags. Input and output are in CSV format so that they can be opened by spreadsheet software. Tag sequences can also be imported from the Stacks, TASSEL-GBSv2, TASSEL-UNEAK, or pyRAD pipelines, and a separate file can be imported listing the names of markers to retain. A second script, tag_manager.py, consolidates marker names and sequences across multiple projects. A third script, barcode_splitter.py, assists with preparing FASTQ data for deposit in a public archive by splitting FASTQ files by barcode and generating MD5 checksums for the resulting files. Conclusions: TagDigger is open-source and freely available software written in Python 3. It uses a scalable, rapid search algorithm that can process over 100 million FASTQ reads per hour. TagDigger will run on a laptop with any operating system, does not consume Hard Drive Space with intermediate files, and does not require programming skill to use.
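
    To make the archiving step concrete, the sketch below splits a FASTQ file by leading barcode and writes an MD5 checksum for each resulting file; it is a minimal illustration under assumed barcodes and file names, not barcode_splitter.py's actual interface:

      # Sketch only: split reads by barcode and record an MD5 checksum per output file,
      # as one might do before depositing data in a public archive. Names are assumed.
      import hashlib

      barcodes = {"ACGT": "Sample1", "TGCA": "Sample2"}        # assumed barcode -> sample mapping
      handles = {name: open(f"{name}.fastq", "w") for name in barcodes.values()}

      with open("reads.fastq") as fq:                          # hypothetical input file
          while True:
              record = [fq.readline() for _ in range(4)]       # a FASTQ record is 4 lines
              if not record[0]:
                  break
              for bc, name in barcodes.items():
                  if record[1].startswith(bc):
                      handles[name].writelines(record)
                      break

      for handle in handles.values():
          handle.close()

      # One checksum line per split file, written alongside the data.
      with open("md5_checksums.txt", "w") as md5_out:
          for name in barcodes.values():
              with open(f"{name}.fastq", "rb") as split_file:
                  digest = hashlib.md5(split_file.read()).hexdigest()
              md5_out.write(f"{digest}  {name}.fastq\n")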

Prajukti Bhattacharyya - One of the best experts on this subject based on the ideXlab platform.

  • Using image analysis and ArcGIS® to improve automatic grain boundary detection and quantify geological images
    Computers & Geosciences, 2012
    Co-Authors: Michael A. Devasto, Dyanna M. Czeck, Prajukti Bhattacharyya
    Abstract:

    Geological images, such as photos and photomicrographs of rocks, are commonly used as supportive evidence to indicate geological processes. A limiting factor to quantifying images is the digitization process; therefore, image analysis has remained largely qualitative. ArcGIS®, the most widely used Geographic Information System (GIS) available, is capable of an array of functions including building models capable of digitizing images. We expanded upon a previously designed model built using Arc ModelBuilder® to quantify photomicrographs and scanned images of thin sections. In order to enhance grain boundary detection, but limit computer processing and Hard Drive Space, we utilized a preprocessing image analysis technique such that only a single image is used in the digitizing model. Preprocessing allows the model to accurately digitize grain boundaries with fewer images and requires less user intervention by using batch processing in image analysis software and ArcCatalog®. We present case studies for five basic textural analyses using an image digitized semi-automatically and quantified in ArcMap®. Grain Size Distributions, Shape Preferred Orientations, Weak phase connections (networking), and Nearest Neighbor statistics are presented in a simplified fashion for further analyses directly obtainable from the automated digitizing method. Finally, we discuss the ramifications for incorporating this method into geological image analyses.
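
    A preprocessing pass of this kind can be approximated with standard image-analysis tools. The sketch below applies edge detection to a hypothetical photomicrograph with scikit-image so that a single boundary-enhanced image could be handed to a digitizing model; it is only an illustration of the idea, not the authors' ModelBuilder workflow:

      # Hedged sketch: emphasize grain boundaries with Canny edge detection so that
      # one preprocessed image can be digitized. File name and sigma are assumptions.
      from skimage import io, color, feature, img_as_ubyte

      image = io.imread("thin_section.png")                        # hypothetical photomicrograph
      gray = color.rgb2gray(image) if image.ndim == 3 else image   # collapse to one intensity band

      # Canny edge detection; sigma controls smoothing and would be tuned per image set.
      edges = feature.canny(gray, sigma=2.0)

      # Save the boundary image for import into the digitizing model.
      io.imsave("thin_section_edges.png", img_as_ubyte(edges))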

Ronald D. Tarvin - One of the best experts on this subject based on the ideXlab platform.

  • Central heating plant status quo program. Final report
    1995
    Co-Authors: Martin J. Savoie, Ronald D. Tarvin, Travis L. Mccammon, Richard E. Carroll, D. A. Wicks
    Abstract:

    The Fiscal Year 1986 Defense Appropriation Act (PL 99-190), Section 8110, directed the Department of Defense (DOD) to rehabilitate and convert central energy plants to coal firing where a cost benefit could be realized. To satisfy this requirement, the life cycle costs of potential fuel/technology alternatives must be compared. The Status Quo program is one component of a series of programs being developed by the U.S. Army Construction Engineering Research Laboratories to evaluate coal conversion alternatives. Status Quo is a microcomputer program that estimates the life cycle costs of maintaining an existing energy plant in its present condition, thereby providing a baseline for comparing the life cycle costs of alternatives to the status quo: modernization, retrofit, or construction of a new plant. This program works in conjunction with (and requires) the Life Cycle Cost in Design (LCCID) computer program, and is designed to run on any IBM personal computer or compatible with at least 640K of random access memory and about 1.4 megabytes of free Hard Drive Space.

  • The Central Heating Plant Status Quo Program.
    1995
    Co-Authors: Martin J. Savoie, Ronald D. Tarvin, Travis L. Mccammon, Richard E. Carroll, D. A. Wicks
    Abstract:

    The Fiscal Year 1986 Defense Appropriation Act (PL 99-190), Section 8110, directed the Department of Defense (DOD) to rehabilitate and convert central energy plants to coal firing where a cost benefit could be realized. To satisfy this requirement, the life cycle costs of potential fuel/technology alternatives must be compared. The Status Quo program is one component of a series of programs being developed by the U.S. Army Construction Engineering Research Laboratories to evaluate coal conversion alternatives. Status Quo is a microcomputer program that estimates the life cycle costs of maintaining an existing energy plant in its present condition, thereby providing a baseline for comparing the life cycle costs of alternatives to the status quo: modernization, retrofit, or construction of a new plant. This program works in conjunction with (and requires) the Life Cycle Cost in Design (LCCID) computer program, and is designed to run on any IBM personal computer or compatible with at least 640K of random access memory and about 1.4 megabytes of free Hard Drive Space.

  • Development of the Central Heating Plant Status Quo Program
    1993
    Co-Authors: Martin J. Savoie, Ronald D. Tarvin
    Abstract:

    In accordance with the Defense Appropriation Act (fiscal year 1986), the Department of Defense (DOD) was directed to rehabilitate and convert central energy plants to coal firing where a cost benefit could be realized. To satisfy this requirement, the life cycle costs of potential fuel/technology alternatives must be compared. The Status Quo program is one component of a series of programs being developed by the U.S. Army Construction Engineering Research Laboratories to evaluate coal conversion alternatives. Status Quo is a microcomputer program that estimates the life cycle costs of maintaining an existing energy plant in its present condition, thereby providing a baseline for comparing the life cycle costs of alternatives to the status quo: modernization, retrofit, or construction of a new plant. This program works in conjunction with (and requires) the Life Cycle Cost in Design (LCCID) computer program, and is designed to run on any IBM PC or compatible with at least 640K of random access memory and about 1.4 megabytes of free Hard Drive Space.

  • Development of the central heating plant status quo program. Interim report
    1993
    Co-Authors: Martin J. Savoie, Ronald D. Tarvin
    Abstract:

    In accordance with the Defense Appropriation Act (fiscal year 1986), the Department of Defense (DOD) was directed to rehabilitate and convert central energy plants to coal firing where a cost benefit could be realized. To satisfy this requirement, the life cycle costs of potential fuel/technology alternatives must be compared. The Status Quo program is one component of a series of programs being developed by the U.S. Army Construction Engineering Research Laboratories to evaluate coal conversion alternatives. Status Quo is a microcomputer program that estimates the life cycle costs of maintaining an existing energy plant in its present condition, thereby providing a baseline for comparing the life cycle costs of alternatives to the status quo: modernization, retrofit, or construction of a new plant. This program works in conjunction with (and requires) the Life Cycle Cost in Design (LCCID) computer program, and is designed to run on any IBM PC or compatible with at least 640K of random access memory and about 1.4 megabytes of free Hard Drive Space.
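
    The life-cycle cost comparison these reports describe amounts to discounting each alternative's cost stream to a present value and comparing totals. The sketch below shows that calculation with invented figures; the discount rate, study period, and cost numbers are placeholders, not values from the Status Quo or LCCID programs:

      # Hedged sketch: present-value life-cycle cost of keeping a plant as-is versus
      # an alternative. All numbers are hypothetical placeholders for illustration.

      def life_cycle_cost(initial_cost, annual_cost, discount_rate, years):
          """Present value of an initial outlay plus a uniform annual cost stream."""
          pv_annual = sum(annual_cost / (1 + discount_rate) ** t for t in range(1, years + 1))
          return initial_cost + pv_annual

      status_quo = life_cycle_cost(initial_cost=0, annual_cost=1_200_000,
                                   discount_rate=0.04, years=25)
      coal_retrofit = life_cycle_cost(initial_cost=8_000_000, annual_cost=700_000,
                                      discount_rate=0.04, years=25)

      print(f"Status quo LCC:    ${status_quo:,.0f}")
      print(f"Coal retrofit LCC: ${coal_retrofit:,.0f}")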

Martin J. Savoie - One of the best experts on this subject based on the ideXlab platform.

  • Central heating plant status quo program. Final report
    1995
    Co-Authors: Martin J. Savoie, Ronald D. Tarvin, Travis L. Mccammon, Richard E. Carroll, D. A. Wicks
    Abstract:

    The Fiscal Year 1986 Defense Appropriation Act (PL 99-190), Section 8110, directed the Department of Defense (DOD) to rehabilitate and convert central energy plants to coal firing where a cost benefit could be realized. To satisfy this requirement, the life cycle costs of potential fuel/technology alternatives must be compared. The Status Quo program is one component of a series of programs being developed by the U.S. Army Construction Engineering Research Laboratories to evaluate coal conversion alternatives. Status Quo is a microcomputer program that estimates the life cycle costs of maintaining an existing energy plant in its present condition, thereby providing a baseline for comparing the life cycle costs of alternatives to the status quo: modernization, retrofit, or construction of a new plant. This program works in conjunction with (and requires) the Life Cycle Cost in Design (LCCID) computer program, and is designed to run on any IBM personal computer or compatible with at least 640K of random access memory and about 1.4 megabytes of free Hard Drive Space.

  • The Central Heating Plant Status Quo Program.
    1995
    Co-Authors: Martin J. Savoie, Ronald D. Tarvin, Travis L. Mccammon, Richard E. Carroll, D. A. Wicks
    Abstract:

    The Fiscal Year 1986 Defense Appropriation Act (PL 99-190), Section 8110, directed the Department of Defense (DOD) to rehabilitate and convert central energy plants to coal firing where a cost benefit could be realized. To satisfy this requirement, the life cycle costs of potential fuel/technology alternatives must be compared. The Status Quo program is one component of a series of programs being developed by the U.S. Army Construction Engineering Research Laboratories to evaluate coal conversion alternatives. Status Quo is a microcomputer program that estimates the life cycle costs of maintaining an existing energy plant in its present condition, thereby providing a baseline for comparing the life cycle costs of alternatives to the status quo: modernization, retrofit, or construction of a new plant. This program works in conjunction with (and requires) the Life Cycle Cost in Design (LCCID) computer program, and is designed to run on any IBM personal computer or compatible with at least 640K of random access memory and about 1.4 megabytes of free Hard Drive Space.

  • Development of the Central Heating Plant Status Quo Program
    1993
    Co-Authors: Martin J. Savoie, Ronald D. Tarvin
    Abstract:

    In accordance with the Defense Appropriation Act (fiscal year 1986), the Department of Defense (DOD) was directed to rehabilitate and convert central energy plants to coal firing where a cost benefit could be realized. To satisfy this requirement, the life cycle costs of potential fuel/technology alternatives must be compared. The Status Quo program is one component of a series of programs being developed by the U.S. Army Construction Engineering Research Laboratories to evaluate coal conversion alternatives. Status Quo is a microcomputer program that estimates the life cycle costs of maintaining an existing energy plant in its present condition, thereby providing a baseline for comparing the life cycle costs of alternatives to the status quo: modernization, retrofit, or construction of a new plant. This program works in conjunction with (and requires) the Life Cycle Cost in Design (LCCID) computer program, and is designed to run on any IBM PC or compatible with at least 640K of random access memory and about 1.4 megabytes of free Hard Drive Space.

  • Development of the central heating plant status quo program. Interim report
    1993
    Co-Authors: Martin J. Savoie, Ronald D. Tarvin
    Abstract:

    In accordance with the Defense Appropriation Act (fiscal year 1986), the Department of Defense (DOD) was directed to rehabilitate and convert central energy plants to coal firing where a cost benefit could be realized. To satisfy this requirement, the life cycle costs of potential fuel/technology alternatives must be compared. The Status Quo program is one component of a series of programs being developed by the U.S. Army Construction Engineering Research Laboratories to evaluate coal conversion alternatives. Status Quo is a microcomputer program that estimates the life cycle costs of maintaining an existing energy plant in its present condition, thereby providing a baseline for comparing the life cycle costs of alternatives to the status quo: modernization, retrofit, or construction of a new plant. This program works in conjunction with (and requires) the Life Cycle Cost in Design (LCCID) computer program, and is designed to run on any IBM PC or compatible with at least 640K of random access memory and about 1.4 megabytes of free Hard Drive Space.