Hard Copy Document

The Experts below are selected from a list of 1590 Experts worldwide ranked by the ideXlab platform

Edward J. Delp - One of the best experts on this subject based on the ideXlab platform.

  • ISCAS - Intrinsic signatures for scanned documents forensics: Effect of font shape and size
    Proceedings of 2010 IEEE International Symposium on Circuits and Systems, 2010
    Co-Authors: Nitin Khanna, Edward J. Delp
    Abstract:

    Recently there has been a great deal of interest in using features intrinsic to a data-generating sensor for the purpose of source identification. Numerous methods have been proposed for different problems in sensor forensics, such as source camera identification using sensor noise as an intrinsic signature and printer identification using banding artifacts as an intrinsic signature. The goal of our work is to identify the scanner used to generate a scanned (digital) version of a printed (hard-copy) document. In this paper we present an extensive analysis of the effect of font shape and size on our recently proposed texture-analysis-based intrinsic signatures for scanned documents. We also propose improvements that make these intrinsic signatures robust to font shape and size.

  • WIFS - Source scanner identification for scanned Documents
    2009 First IEEE International Workshop on Information Forensics and Security (WIFS), 2009
    Co-Authors: Nitin Khanna, Edward J. Delp
    Abstract:

    Recently there has been a great deal of interest in using features intrinsic to a data-generating sensor for the purpose of source identification. Numerous methods have been proposed for different problems in digital image forensics. The goal of our work is to identify the scanner used to generate a scanned (digital) version of a printed (hard-copy) document. In this paper we describe the use of texture analysis to identify the scanner used to scan a text document, and we demonstrate the efficacy of the proposed method.
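
The scanner-identification approach summarized in these abstracts rests on computing texture statistics from a scanned image and matching them against per-scanner reference signatures. A minimal sketch of that general idea is below; the feature set (mean, variance, mean horizontal gradient) and the nearest-profile matching rule are simplified stand-ins chosen for illustration, not the texture descriptors or classifier the authors actually used.

```python
# Illustrative sketch only: characterize a grayscale scan by crude
# texture statistics and assign it to the nearest known scanner
# profile. Feature choices and profile names are invented for demo
# purposes and do not reproduce the papers' method.

def texture_features(img):
    """Return (mean, variance, mean absolute horizontal gradient)
    for a 2D grayscale image given as a list of lists of 0-255 values."""
    pixels = [p for row in img for p in row]
    n = len(pixels)
    mean = sum(pixels) / n
    var = sum((p - mean) ** 2 for p in pixels) / n
    grad = sum(abs(row[i + 1] - row[i])
               for row in img for i in range(len(row) - 1))
    grad /= sum(len(row) - 1 for row in img)
    return (mean, var, grad)

def identify_scanner(img, profiles):
    """Return the name of the profile whose stored feature vector is
    closest (squared Euclidean distance) to the image's features."""
    f = texture_features(img)
    def dist(name):
        return sum((a - b) ** 2 for a, b in zip(f, profiles[name]))
    return min(profiles, key=dist)
```

In the papers' setting, the profiles would be learned from training scans produced by known scanners, and the features would capture scanner-specific noise texture in text documents; here everything is synthetic.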

Peter S. Schaefer - One of the best experts on this subject based on the ideXlab platform.

  • END STATE - Commander's Visualization at the Company Level (DVD)
    2009
    Co-Authors: Carl W. Lickteig, Peter S. Schaefer, Jeffrey E. Fite, Tristan Hendrix, Steven Puchino, James Harrison, Anna T. Cianciolo
    Abstract:

    Abstract : ELECTRONIC FILE CHARACTERISTICS: 22 files; MS Word (.DOC) and MS PowerPoint (.PPT). PHYSICAL DESCRIPTION: 1 DVD-ROM and 1 Hard-Copy Document; 4 3/4 in.; 1.28 GB. ABSTRACT: Visualization is the process of developing situational understanding and envisioning how to move the force from its current state to the desired end state. It is a critical command skill that must be acquired earlier in a leader's career than ever before, given today's challenging operational environment. Training is needed that provides deliberate reflection and practice opportunities to improve visualization. To meet this requirement, the U.S. Army Research Institute for the Behavioral and Social Sciences (ARI) conducted research on developing training that improves company and battalion commanders' visualization. This report describes the design, development, and formative evaluation of END STATE - Commander's Visualization at the Company Level, which provides instructorless, interactive training and testing to help company commanders and their units visualize the operational environment. Forty-eight captains and lieutenants participated in a formative evaluation, which concluded that END STATE is effective, relevant, and worth using. Revisions based on participant recommendations resulted in an END STATE product ready for pilot implementation. Ongoing ARI research on END STATE will develop parallel pre- and post-tests to assess training effectiveness, along with normative standards for novice, intermediate, and expert performance. Research and implementation will establish an empirical base for understanding and improving the ability of company commanders and their units to visualize operations in today's operational environment.

  • Anytime, Anywhere Terrain Visualization Training System: Combining Training Theory and Technology to Train Human-Computer Visualization (DVD)
    2009
    Co-Authors: Marcia J. Rossi, M. J. Khan, Sanjeeb Nanda, Carl W. Lickteig, Peter S. Schaefer
    Abstract:

    Abstract : ELECTRONIC FILE CHARACTERISTICS: 7 files; Adobe Acrobat (.PDF), MS Word (.DOC) and video files (.MOV). PHYSICAL DESCRIPTION: 1 DVD-ROM and 1 Hard-Copy Document; 4 3/4 in.; 728 MB. ABSTRACT: This report describes the design and evaluation of a new system for training terrain visualization, an important but difficult skill to train and acquire. Recognizing the inherent limitations of traditional paper-and-pencil methods of training terrain visualization, the U.S. Army awarded a Small Business Technology Transfer (STTR) contract to combine training theory and technology to improve terrain visualization training. The prototype training system (Anytime, Anywhere Terrain Visualization Training, or A2TV) allows trainees to interactively view and vary digital representations of terrain by flying and driving through terrain, morphing terrain, and overlaying contour information. In two experiments with novices, one or more of the training methods was shown to significantly improve important terrain visualization skills. Terrain visualization performance was also correlated with spatial ability measures. A training potential and usability evaluation was conducted with active duty military personnel. Military participants affirmed the need for training terrain visualization, acclaimed the potential of the A2TV system for training as well as mission planning and support, and provided constructive recommendations on refinements needed for Phase III commercialization.

Nitin Khanna - One of the best experts on this subject based on the ideXlab platform.

  • ISCAS - Intrinsic signatures for scanned documents forensics: Effect of font shape and size
    Proceedings of 2010 IEEE International Symposium on Circuits and Systems, 2010
    Co-Authors: Nitin Khanna, Edward J. Delp
    Abstract:

    Recently there has been a great deal of interest in using features intrinsic to a data-generating sensor for the purpose of source identification. Numerous methods have been proposed for different problems in sensor forensics, such as source camera identification using sensor noise as an intrinsic signature and printer identification using banding artifacts as an intrinsic signature. The goal of our work is to identify the scanner used to generate a scanned (digital) version of a printed (hard-copy) document. In this paper we present an extensive analysis of the effect of font shape and size on our recently proposed texture-analysis-based intrinsic signatures for scanned documents. We also propose improvements that make these intrinsic signatures robust to font shape and size.

  • WIFS - Source scanner identification for scanned Documents
    2009 First IEEE International Workshop on Information Forensics and Security (WIFS), 2009
    Co-Authors: Nitin Khanna, Edward J. Delp
    Abstract:

    Recently there has been a great deal of interest in using features intrinsic to a data-generating sensor for the purpose of source identification. Numerous methods have been proposed for different problems in digital image forensics. The goal of our work is to identify the scanner used to generate a scanned (digital) version of a printed (hard-copy) document. In this paper we describe the use of texture analysis to identify the scanner used to scan a text document, and we demonstrate the efficacy of the proposed method.

Edward A. Guinness - One of the best experts on this subject based on the ideXlab platform.

  • Restoration of Apollo Data by the Lunar Data Project/PDS Lunar Data Node: An Update
    2016
    Co-Authors: David R. Williams, H. Kent Hills, Patrick T. Taylor, Edwin J. Grayzeck, Edward A. Guinness
    Abstract:

    The Apollo 11, 12, and 14 through 17 missions orbited and landed on the Moon, carrying scientific instruments that returned data from all phases of the missions, including the long-lived Apollo Lunar Surface Experiments Packages (ALSEPs) deployed by the astronauts on the lunar surface. Many of these data were never archived, and some of the archived data were on media and in formats that are outmoded, or were deposited with little or no useful documentation to aid outside users. This is particularly true of the ALSEP data returned autonomously for many years after the Apollo missions ended. The purpose of the Lunar Data Project and the Planetary Data System (PDS) Lunar Data Node is to take data collections already archived at the NASA Space Science Data Coordinated Archive (NSSDCA) and prepare them for archiving through PDS, and to locate lunar data that were never archived, bring them into NSSDCA, and then archive them through PDS. Preparing these data for archiving involves several steps: reading the data from the original media, be it magnetic tape, microfilm, microfiche, or hard-copy document; converting the outmoded, often binary, formats when necessary and putting the data into a standard digital form accepted by PDS; collecting the ancillary data and documentation (metadata) needed to ensure that the data are usable and well described, and summarizing that metadata in documentation included in the data set; adding other information such as references, mission and instrument descriptions, contact information, and related documentation; and packaging the results in a PDS-compliant data set. The data set is then validated and reviewed by a group of external scientists as part of the PDS final archive process. We present a status report on some of the data sets we are processing.
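
The restoration workflow described above, reading outmoded binary formats and repackaging the values with descriptive metadata, can be sketched as follows. The fixed-width record layout and the label fields here are invented for illustration; actual ALSEP telemetry formats and PDS labels are far richer and more rigorously specified.

```python
# Illustrative sketch only (assumed record layout, not a real ALSEP
# format): unpack fixed-width big-endian binary records from a tape
# image and pair the decoded values with minimal descriptive metadata,
# in the spirit of converting outmoded binary data into a documented,
# portable form.
import struct

# Hypothetical 8-byte record: timestamp (u32), reading (i16), station id (u16)
RECORD = struct.Struct(">IhH")

def decode_records(raw):
    """Yield one dict per complete 8-byte record in the byte string `raw`."""
    for off in range(0, len(raw) - RECORD.size + 1, RECORD.size):
        t, value, station = RECORD.unpack_from(raw, off)
        yield {"time_s": t, "value": value, "station": station}

def make_label(records, instrument):
    """Build a minimal metadata 'label' summarizing the converted data.
    A real PDS label also carries references, mission and instrument
    descriptions, contact information, and related documentation."""
    recs = list(records)
    return {
        "instrument": instrument,
        "record_count": len(recs),
        "time_range_s": (recs[0]["time_s"], recs[-1]["time_s"]) if recs else None,
    }
```

A converted data set would bundle the decoded records, this summary metadata, and the supporting documentation together before PDS validation and peer review.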

Carl W. Lickteig - One of the best experts on this subject based on the ideXlab platform.

  • END STATE - Commander's Visualization at the Company Level (DVD)
    2009
    Co-Authors: Carl W. Lickteig, Peter S. Schaefer, Jeffrey E. Fite, Tristan Hendrix, Steven Puchino, James Harrison, Anna T. Cianciolo
    Abstract:

    Abstract : ELECTRONIC FILE CHARACTERISTICS: 22 files; MS Word (.DOC) and MS PowerPoint (.PPT). PHYSICAL DESCRIPTION: 1 DVD-ROM and 1 Hard-Copy Document; 4 3/4 in.; 1.28 GB. ABSTRACT: Visualization is the process of developing situational understanding and envisioning how to move the force from its current state to the desired end state. It is a critical command skill that must be acquired earlier in a leader's career than ever before, given today's challenging operational environment. Training is needed that provides deliberate reflection and practice opportunities to improve visualization. To meet this requirement, the U.S. Army Research Institute for the Behavioral and Social Sciences (ARI) conducted research on developing training that improves company and battalion commanders' visualization. This report describes the design, development, and formative evaluation of END STATE - Commander's Visualization at the Company Level, which provides instructorless, interactive training and testing to help company commanders and their units visualize the operational environment. Forty-eight captains and lieutenants participated in a formative evaluation, which concluded that END STATE is effective, relevant, and worth using. Revisions based on participant recommendations resulted in an END STATE product ready for pilot implementation. Ongoing ARI research on END STATE will develop parallel pre- and post-tests to assess training effectiveness, along with normative standards for novice, intermediate, and expert performance. Research and implementation will establish an empirical base for understanding and improving the ability of company commanders and their units to visualize operations in today's operational environment.

  • Anytime, Anywhere Terrain Visualization Training System: Combining Training Theory and Technology to Train Human-Computer Visualization (DVD)
    2009
    Co-Authors: Marcia J. Rossi, M. J. Khan, Sanjeeb Nanda, Carl W. Lickteig, Peter S. Schaefer
    Abstract:

    Abstract : ELECTRONIC FILE CHARACTERISTICS: 7 files; Adobe Acrobat (.PDF), MS Word (.DOC) and video files (.MOV). PHYSICAL DESCRIPTION: 1 DVD-ROM and 1 Hard-Copy Document; 4 3/4 in.; 728 MB. ABSTRACT: This report describes the design and evaluation of a new system for training terrain visualization, an important but difficult skill to train and acquire. Recognizing the inherent limitations of traditional paper-and-pencil methods of training terrain visualization, the U.S. Army awarded a Small Business Technology Transfer (STTR) contract to combine training theory and technology to improve terrain visualization training. The prototype training system (Anytime, Anywhere Terrain Visualization Training, or A2TV) allows trainees to interactively view and vary digital representations of terrain by flying and driving through terrain, morphing terrain, and overlaying contour information. In two experiments with novices, one or more of the training methods was shown to significantly improve important terrain visualization skills. Terrain visualization performance was also correlated with spatial ability measures. A training potential and usability evaluation was conducted with active duty military personnel. Military participants affirmed the need for training terrain visualization, acclaimed the potential of the A2TV system for training as well as mission planning and support, and provided constructive recommendations on refinements needed for Phase III commercialization.