Vision Systems

The Experts below are selected from a list of 360 Experts worldwide, ranked by the ideXlab platform.

Takashi Komuro - One of the best experts on this subject based on the ideXlab platform.

  • Digital vision chips and high-speed vision systems
    Symposium on VLSI Circuits, 2001
    Co-Authors: Masatoshi Ishikawa, Takashi Komuro
    Abstract:

    Conventional image processing has a critical frame-rate limit derived from serial transmission of the video signal. To overcome this limit, a fully parallel processing architecture without scanning has been proposed. In this paper, vision chips with digital circuits and the high-speed application systems developed in our laboratory are described.
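The frame-rate ceiling that the abstract attributes to serial video readout can be illustrated with back-of-the-envelope numbers (all figures below are assumptions for illustration, not values from the paper):

```python
# Illustrative sketch (assumed numbers, not from the paper): why serial
# pixel readout caps the frame rate, and why a fully parallel per-pixel
# architecture avoids that cap.

pixel_clock_hz = 75e6          # assumed serial readout clock
width, height = 640, 480       # assumed sensor resolution

# Serial readout: every pixel value must cross a single output channel,
# so the frame rate is pixel clock divided by pixel count.
serial_fps = pixel_clock_hz / (width * height)
print(f"serial readout limit: {serial_fps:.0f} fps")

# Fully parallel architecture: one processing element per pixel, no scan;
# the rate is set only by the per-pixel processing time (assumed here).
per_pixel_processing_s = 1e-6
parallel_fps = 1.0 / per_pixel_processing_s
print(f"parallel limit: {parallel_fps:.0f} fps")
```

With these assumed numbers the serial path tops out near a few hundred frames per second, while the scan-free parallel path is limited only by per-pixel processing time.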

Alessandro Astolfi - One of the best experts on this subject based on the ideXlab platform.

  • A new solution to the problem of range identification in perspective vision systems
    IEEE Transactions on Automatic Control, 2005
    Co-Authors: Dimitrios Karagiannis, Alessandro Astolfi
    Abstract:

    A new solution to the problem of range identification for perspective vision systems is proposed. These systems arise in machine vision problems, where the position of an object moving in three-dimensional space has to be identified through two-dimensional images obtained from a single camera. The proposed identifier yields asymptotic estimates of the object coordinates and is significantly simpler than existing designs. Moreover, it can be easily tuned to achieve the desired convergence rate. Simulations are provided demonstrating the enhanced performance of the proposed scheme and its robustness to measurement noise.
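The single-camera range-recovery problem described above can be sketched numerically. The following is a hypothetical finite-difference illustration under a pure-translation, known-velocity assumption; it is not the identifier proposed in the paper:

```python
import numpy as np

# Minimal sketch of why range is recoverable from one camera.  For a point
# X = (x1, x2, x3) moving with known translational velocity b, the
# perspective measurement y1 = x1/x3 evolves as
#     dy1/dt = (b1 - y1*b3) / x3,
# so the unknown range x3 can be solved from the measured image velocity,
# provided the image motion is nonzero (an observability condition with a
# clear physical meaning: the feature must actually move in the image).

b = np.array([0.3, 0.0, 0.1])   # known velocity (assumed values)
X = np.array([1.0, 0.5, 4.0])   # true position; x3 = 4.0 is the unknown range
dt = 1e-3

y1_prev = X[0] / X[2]
X = X + b * dt                  # propagate the true motion one step
y1 = X[0] / X[2]

y1_dot = (y1 - y1_prev) / dt    # finite-difference image velocity
x3_est = (b[0] - y1 * b[2]) / y1_dot
print(x3_est)                   # close to the true range 4.0
```

A practical observer such as the one in the paper replaces this naive differentiation with dynamics that filter measurement noise and guarantee asymptotic convergence, but the underlying observability mechanism is the same.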

T J Flynn - One of the best experts on this subject based on the ideXlab platform.

  • Range identification for perspective vision systems
    IEEE Transactions on Automatic Control, 2003
    Co-Authors: Warren E Dixon, Yongchun Fang, D M Dawson, T J Flynn
    Abstract:

    In this note, a new observer is developed to determine range information (and, hence, the three-dimensional (3-D) coordinates) of an object feature moving with affine motion dynamics (or the more general Riccati motion dynamics) with known motion parameters. The unmeasurable range information is determined from a single camera provided an observability condition with physical significance is satisfied. To develop the observer, the perspective system is expressed in terms of the nonlinear feature dynamics. The structure of the proposed observer is inspired by recent disturbance observer results. The proposed technique facilitates a Lyapunov-based analysis that is less complex than the sliding-mode-based analysis derived for recent observer designs. The analysis demonstrates that the 3-D task-space coordinates of the feature point can be asymptotically identified. Simulation results are provided that illustrate the performance of the observer in the presence of noise.

  • Range identification for perspective vision systems
    American Control Conference, 2003
    Co-Authors: Warren E Dixon, Yongchun Fang, D M Dawson, T J Flynn
    Abstract:

    In this paper, a new continuous observer is developed to determine range information (and hence the 3-dimensional (3D) coordinates) of an object feature moving with affine motion dynamics (or the more general Riccati motion dynamics) with known motion parameters. The unmeasurable range information is determined from a single camera provided an observability condition is satisfied that has physical significance. To develop the observer, the perspective system is expressed in terms of the nonlinear feature dynamics. The structure of the proposed observer is inspired by recent disturbance observer results. The proposed technique facilitates a Lyapunov-based analysis that is less complex than the sliding-mode based analysis derived for recent discontinuous observer designs. The analysis demonstrates that the 3D task-space coordinates of the feature point can be asymptotically identified.
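The perspective system these two abstracts refer to is commonly written as follows; this is a standard formulation consistent with the abstracts' affine-motion assumption, not text quoted from the papers. A feature point $x = (x_1, x_2, x_3)$ with known motion parameters $A(t) = [a_{ij}]$ and $b(t)$ obeys

```latex
\dot{x} = A(t)\,x + b(t), \qquad
y = \begin{pmatrix} x_1/x_3 \\ x_2/x_3 \end{pmatrix},
```

and, writing $y_3 = 1/x_3$ for the unmeasurable inverse range, the nonlinear feature dynamics become

```latex
\dot{y}_1 = a_{11} y_1 + a_{12} y_2 + a_{13}
          - y_1\left(a_{31} y_1 + a_{32} y_2 + a_{33}\right)
          + \left(b_1 - y_1 b_3\right) y_3, \qquad
\dot{y}_3 = -y_3\left(a_{31} y_1 + a_{32} y_2 + a_{33} + b_3 y_3\right).
```

The unknown $y_3$ enters the measurable image dynamics affinely through coefficients such as $(b_1 - y_1 b_3)$; the observability condition with "physical significance" mentioned in the abstracts roughly requires these coefficients to be nonzero, i.e., the known motion must induce measurable image motion of the feature.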

Lynda J. Kramer - One of the best experts on this subject based on the ideXlab platform.

  • External Vision Systems (XVS) proof-of-concept flight test evaluation
    Degraded Visual Environments: Enhanced Synthetic and External Vision Solutions 2014, 2014
    Co-Authors: Kevin J Shelton, Lynda J. Kramer, Steven P. Williams, Jarvis J. Arthur, Lawrence J. Prinzel, Randall E. Bailey
    Abstract:

    NASA’s Fundamental Aeronautics Program, High Speed Project is performing research, development, test, and evaluation of flight deck and related technologies to support future low-boom supersonic configurations (without forward-facing windows) through the use of an eXternal Vision System (XVS). The challenge of XVS is to determine a combination of sensor and display technologies that can provide a level of safety and performance equivalent to that provided by forward-facing windows in today’s aircraft. This flight test was conducted with the goal of obtaining performance data on see-and-avoid and see-to-follow traffic using a proof-of-concept XVS design in actual flight conditions. Six data collection flights were flown in four traffic scenarios against two differently sized participating traffic aircraft. The test utilized a 3x1 array of High Definition (HD) cameras, with a fixed forward field of view, mounted on NASA Langley’s UC-12 test aircraft. Test scenarios, with participating NASA aircraft serving as traffic, were presented to two evaluation pilots per flight: one using the proof-of-concept (POC) XVS and the other looking out the forward windows. The camera images were presented on the XVS display in the aft cabin with Head-Up Display (HUD)-like flight symbology overlaying the real-time imagery. The test generated XVS performance data, including comparisons to natural vision; post-run subjective acceptability data were also collected. This paper discusses the flight test activities and their operational challenges, and summarizes the findings to date.

  • Using vision system technologies to enable operational improvements for low visibility approach and landing operations
    AIAA IEEE Digital Avionics Systems Conference - Proceedings, 2014
    Co-Authors: Lynda J. Kramer, Steve P Williams, Lisa R. Le Vie, Kurt Severance, Kyle K. E. Ellis, Randall E. Bailey, James R Comstock
    Abstract:

    Flight-deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and to enable operational improvements for low-visibility surface, arrival, and departure operations in the terminal environment with efficiency equivalent to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air-transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low-visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O’Hare terminal environment. Additionally, the instrument approach type (no offset, 3-degree offset, 15-degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown landing performance was excellent regardless of the SEVS concept or the type of offset instrument approach being flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low-visibility conditions with an EFVS HUD or SVS HUD may be feasible.

  • Synthetic and Enhanced Vision Systems (SEVS) for NextGen: simulation and flight test performance evaluation
    IEEE AIAA Digital Avionics Systems Conference, 2012
    Co-Authors: Kevin J Shelton, Lynda J. Kramer, Kyle K. E. Ellis, Sherri A Rehfeld
    Abstract:

    The Synthetic and Enhanced Vision Systems for NextGen (SEVS) simulation and flight tests are jointly sponsored by NASA's Aviation Safety Program, Vehicle Systems Safety Technology project and the Federal Aviation Administration (FAA). The flight tests were conducted by a team of Honeywell, Gulfstream Aerospace Corporation and NASA personnel with the goal of obtaining pilot-in-the-loop test data for flight validation, verification, and demonstration of selected SEVS operational and system-level performance capabilities. Nine test flights (38 flight hours) were conducted over the summer and fall of 2011. The evaluations were flown in Gulfstream's G450 flight test aircraft outfitted with the SEVS technology under very low visibility instrument meteorological conditions. Evaluation pilots flew 108 approaches in low visibility weather conditions (600 ft to 2400 ft visibility) into various airports from Louisiana to Maine. In-situ flight performance and subjective workload and acceptability data were collected in collaboration with ground simulation studies at LaRC's Research Flight Deck simulator.

  • Flight test comparison between enhanced Vision (FLIR) and synthetic Vision Systems
    Proceedings of SPIE - The International Society for Optical Engineering, 2005
    Co-Authors: Jarvis J. Arthur Iii, Lynda J. Kramer, Randall E. Bailey
    Abstract:

    Limited visibility and reduced situational awareness have been cited as predominant causal factors for both Controlled Flight Into Terrain (CFIT) and runway incursion accidents. NASA's Synthetic Vision Systems (SVS) project is developing practical application technologies with the goal of eliminating low visibility conditions as a causal factor in civil aircraft accidents while replicating the operational benefits of clear-day flight operations, regardless of the actual outside visibility condition. A major thrust of the SVS project involves the development and demonstration of affordable, certifiable display configurations that provide intuitive out-the-window terrain and obstacle information with advanced pathway guidance. A flight test evaluation was conducted in the summer of 2004 by NASA Langley Research Center under NASA's Aviation Safety and Security, Synthetic Vision System - Commercial and Business program. A Gulfstream G-V aircraft, modified and operated under NASA contract by the Gulfstream Aerospace Corporation, was flown over a 3-week period at the Reno/Tahoe International Airport and an additional 3-week period at the NASA Wallops Flight Facility to evaluate integrated Synthetic Vision System concepts. Flight testing was conducted to evaluate the performance, usability, and acceptance of an integrated synthetic vision concept that included advanced Synthetic Vision display concepts for a transport aircraft flight deck, a Runway Incursion Prevention System, an Enhanced Vision System (EVS), and real-time Database Integrity Monitoring Equipment. This paper focuses on comparing qualitative and subjective results between the EVS and SVS display concepts.

Masatoshi Ishikawa - One of the best experts on this subject based on the ideXlab platform.

  • Digital vision chips and high-speed vision systems
    Symposium on VLSI Circuits, 2001
    Co-Authors: Masatoshi Ishikawa, Takashi Komuro
    Abstract:

    Conventional image processing has a critical frame-rate limit derived from serial transmission of the video signal. To overcome this limit, a fully parallel processing architecture without scanning has been proposed. In this paper, vision chips with digital circuits and the high-speed application systems developed in our laboratory are described.