Head-up Display

The experts below are selected from a list of 154,791 experts worldwide, ranked by the ideXlab platform.

Randall E. Bailey - One of the best experts on this subject based on the ideXlab platform.

  • External Vision Systems (XVS) proof-of-concept flight test evaluation
    Degraded Visual Environments: Enhanced Synthetic and External Vision Solutions 2014, 2014
    Co-Authors: Kevin J Shelton, Lynda J. Kramer, Jarvis J. Arthur, Lawrence J Prinzel, Steven P. Williams, Randall E. Bailey
    Abstract:

    NASA’s Fundamental Aeronautics Program, High Speed Project is performing research, development, test, and evaluation of flight deck and related technologies to support future low-boom supersonic configurations (without forward-facing windows) through the use of an eXternal Vision System (XVS). The challenge of XVS is to find a combination of sensor and display technologies that can provide a level of safety and performance equivalent to that provided by forward-facing windows in today’s aircraft. This flight test was conducted with the goal of obtaining performance data on see-and-avoid and see-to-follow traffic using a proof-of-concept XVS design in actual flight conditions. Six data-collection flights were flown in four traffic scenarios against two different-sized participating traffic aircraft. The test utilized a 3x1 array of High Definition (HD) cameras, with a fixed forward field-of-view, mounted on NASA Langley’s UC-12 test aircraft. Test scenarios, with participating NASA aircraft serving as traffic, were presented to two evaluation pilots per flight – one using the proof-of-concept (POC) XVS and the other looking out the forward windows. The camera images were presented on the XVS display in the aft cabin, with head-up display (HUD)-like flight symbology overlaying the real-time imagery. The test generated XVS performance data, including comparisons to natural vision; post-run subjective acceptability data were also collected. This paper discusses the flight test activities and their operational challenges, and summarizes the findings to date.

  • Using vision system technologies to enable operational improvements for low visibility approach and landing operations
    AIAA IEEE Digital Avionics Systems Conference - Proceedings, 2014
    Co-Authors: Lynda J. Kramer, Steve P Williams, Lisa R. Le Vie, Kurt Severance, Kyle K. E. Ellis, Randall E. Bailey, James R Comstock
    Abstract:

    Flight deck-based vision systems, such as Synthetic and Enhanced Vision System (SEVS) technologies, have the potential to provide additional margins of safety for aircrew performance and to enable operational improvements for low-visibility surface, arrival, and departure operations in the terminal environment with efficiency equivalent to visual operations. To achieve this potential, research is required for effective technology development and implementation based upon human factors design and regulatory guidance. This research supports the introduction and use of Synthetic Vision Systems and Enhanced Flight Vision Systems (SVS/EFVS) as advanced cockpit vision technologies in Next Generation Air Transportation System (NextGen) operations. Twelve air transport-rated crews participated in a motion-base simulation experiment to evaluate the use of SVS/EFVS in NextGen low-visibility approach and landing operations. Three monochromatic, collimated head-up display (HUD) concepts (conventional HUD, SVS HUD, and EFVS HUD) and two color head-down primary flight display (PFD) concepts (conventional PFD, SVS PFD) were evaluated in a simulated NextGen Chicago O’Hare terminal environment. Additionally, the instrument approach type (no offset, 3-degree offset, 15-degree offset) was experimentally varied to test the efficacy of the HUD concepts for offset approach operations. The data showed that touchdown landing performance was excellent regardless of SEVS concept or type of offset instrument approach flown. Subjective assessments of mental workload and situation awareness indicated that making offset approaches in low-visibility conditions with an EFVS HUD or SVS HUD may be feasible.

  • Awareness and detection of traffic and obstacles using synthetic and enhanced vision systems
    2013
    Co-Authors: Randall E. Bailey
    Abstract:

    The research literature is reviewed and summarized to evaluate the awareness and detection of traffic and obstacles when using Synthetic Vision Systems (SVS) and Enhanced Vision Systems (EVS). The study identifies the critical issues influencing the time required, the accuracy, and the pilot workload associated with recognizing and reacting to potential collisions or conflicts with other aircraft, vehicles, and obstructions during approach, landing, and surface operations. This work considers the effects of head-down display and head-up display implementations of SVS and EVS, as well as the influence of single- and dual-pilot operations. The influence of adding traffic information and cockpit alerting to SVS and EVS is also considered. Based on this review, a knowledge-gap assessment was made, with recommendations for ground and flight testing to fill these gaps and thereby promote the safe and effective implementation of SVS/EVS technologies for the Next Generation Air Transportation System.

  • Synthetic vision enhances situation awareness and RNP capabilities for terrain-challenged approaches
    AIAA's 3rd Annual Aviation Technology Integration and Operations (ATIO) Forum, 2003
    Co-Authors: Lynda J. Kramer, Randall E. Bailey, Lawrence J Prinzel, Jarvis J. Arthur
    Abstract:

    The Synthetic Vision Systems (SVS) Project of NASA's Aviation Safety Program is striving to eliminate poor visibility as a causal factor in aircraft accidents, as well as to enhance the operational capabilities of all aircraft, through the display of computer-generated imagery derived from an onboard database of terrain, obstacle, and airport information. To achieve these objectives, NASA 757 flight test research was conducted at the Eagle-Vail, Colorado airport to evaluate three SVS display types (head-up display, head-down Size A, head-down Size X) and two terrain texture methods (photo-realistic, generic) against the simulated baseline Boeing 757 Electronic Attitude Direction Indicator and Navigation/Terrain Awareness and Warning System displays. These independent variables were evaluated for situation awareness, path error, and workload while making approaches to Runways 25 and 07 and during simulated engine-out Cottonwood 2 and KREMM departures. The results showed significantly improved situation awareness, performance, and workload for the SVS concepts compared to the baseline displays and confirmed the retrofit capability of the head-up display and Size A SVS concepts. The research also demonstrated that the pathway and pursuit guidance used within the SVS concepts achieved required navigation performance (RNP) criteria.
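
    The abstract does not spell out how RNP compliance was scored. As a rough illustration (not the study's actual method): an RNP-x requirement means lateral path error must stay within x nautical miles for at least 95% of flight time, which reduces to a simple containment check over recorded cross-track error. A minimal sketch:

    ```python
    import numpy as np

    def meets_rnp(cross_track_error_nm: np.ndarray, rnp_nm: float) -> bool:
        """RNP containment check: lateral path error must stay within
        +/- rnp_nm (nautical miles) for at least 95% of recorded samples."""
        within = np.abs(cross_track_error_nm) <= rnp_nm
        return bool(within.mean() >= 0.95)

    # Illustrative only: synthetic cross-track errors for one approach run
    rng = np.random.default_rng(0)
    errors_nm = rng.normal(0.0, 0.1, size=2_000)
    print(meets_rnp(errors_nm, rnp_nm=0.3))  # True for this synthetic run
    ```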

  • Flight test evaluation of tactical synthetic vision display concepts in a terrain-challenged operating environment
    Enhanced and Synthetic Vision 2002, 2002
    Co-Authors: Randall E. Bailey, Jarvis J. Arthur, R. V. Parrish, Robert M. Norman
    Abstract:

    NASA's Aviation Safety Program, Synthetic Vision Systems Project is developing display concepts to improve pilot terrain and situational awareness by providing a perspective synthetic view of the outside world, generated from an onboard database and driven by precise aircraft positioning updated via Global Positioning System data. This work is aimed at eliminating visibility-induced errors and low-visibility conditions as causal factors in civil aircraft accidents, as well as replicating the operational benefits of clear-day flight operations regardless of the actual outside visibility. A flight test evaluation of tactical Synthetic Vision display concepts was recently conducted in the terrain-challenged operating environment of the Eagle County Regional Airport (Colorado). Several display concepts for head-up displays and for head-down displays ranging from ARINC Standard Size A through Size X were tested. Seven pilots evaluated these displays for acceptability, usability, and situational/terrain awareness while flying existing commercial airline operating procedures for Eagle County Regional Airport. All tactical Synthetic Vision display concepts provided measurable increases in the pilots' subjective terrain awareness over the baseline aircraft displays. The head-down presentations yielded better terrain awareness than the head-up display synthetic vision concepts tested. Limitations uncovered in the head-up display concepts suggest the need for further research.

Vassilis Charissis - One of the best experts on this subject based on the ideXlab platform.

  • Enhancing human responses through an augmented reality head-up display in a vehicular environment
    2014 11th International Conference & Expo on Emerging Technologies for a Smarter World (CEWIT), 2014
    Co-Authors: Vassilis Charissis
    Abstract:

    Contemporary needs for the constant provision of information and communication have crowded the modern vehicle's interior with a variety of instrumentation displays. This abundance of automotive infotainment devices can significantly slow the driver's decision-making and response times, leading to a higher probability of collision, especially under adverse weather conditions. Typical dashboard instrumentation has proven inefficient at tackling such issues, and head-up display (HUD) interfaces are deemed an increasingly viable alternative by recent developments in automotive research and manufacturing. This paper presents our current work towards the development of a full-windshield HUD interface that could enhance human responses by providing time-dependent, critical-only information for collision avoidance. To evaluate the system, we developed a VR driving simulator that reproduces traffic flow and typical accident scenarios in a motorway environment. Finally, the paper presents the evaluation results and the future work that would improve the interaction between the HUD interface and the driver.

  • Human–machine collaboration through a vehicle head-up display interface
    Cognition Technology & Work, 2010
    Co-Authors: Vassilis Charissis, Stylianos Papanastasiou
    Abstract:

    This work introduces a novel design for an automotive full-windshield head-up display (HUD) interface which aims to improve the driver’s spatial awareness and response times under low-visibility conditions. To fulfil these requirements, we designed and implemented a working prototype of a human–machine interface (HMI). Particular emphasis was placed on the prioritisation and effective presentation of information available through vehicular sensors, to assist the driver in successfully navigating the vehicle under low-visibility conditions without distraction. The proposed interface is based on minimalist visual representations of real objects and offers a new form of interactive guidance for motorway environments. Overall, this work discusses the design challenges of such a human–machine system, elaborates on the interface design philosophy, and presents the outcome of user trials that contrasted the effectiveness of the proposed HUD against a typical head-down display (HDD).

  • Evaluation of a prototype automotive head-up display interface: testing the driver's focusing ability through a VR simulation
    IEEE Intelligent Vehicles Symposium, 2007
    Co-Authors: Vassilis Charissis, Martin Naef
    Abstract:

    Contemporary automotive, navigation, and infotainment requirements have turned the traditional dashboard into a complex device that can often distract the driver. Head-up displays (HUDs) have recently attracted attention in automotive research as a means of reducing the driver's reaction time and improving spatial awareness. The effectiveness of the proposed HUD interface hinges on the driver's ability to focus on both the HUD and the actual traffic. This paper analyses driver performance through user tests with different focal levels for the projection of a full-windshield HUD interface. For this purpose, a VR driving simulator was developed to test different depth-of-field configurations of the HUD while driving in various weather and traffic conditions, with and without the HUD. The simulation results reveal the users' preferences regarding the focal point of the superimposed interface and present a comparative evaluation of the different focal levels and their impact on drivers' behaviour and performance.
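
    The specific focal levels tested are not listed in this abstract, but the optics behind the question are standard: the eye's refocusing effort between a HUD virtual image and real traffic scales with the difference in accommodation demand, which is the reciprocal of viewing distance in metres. A minimal sketch, with assumed distances (not the study's values):

    ```python
    def accommodation_demand_d(distance_m: float) -> float:
        """Focus demand in diopters for an object at distance_m (D = 1/d)."""
        return 1.0 / distance_m

    # Assumed, illustrative HUD virtual-image distances
    hud_focal_levels_m = [1.0, 2.5, 7.0]
    traffic_distance_m = 50.0  # a lead vehicle well ahead

    for d in hud_focal_levels_m:
        shift = abs(accommodation_demand_d(d)
                    - accommodation_demand_d(traffic_distance_m))
        print(f"HUD image at {d} m -> refocus demand vs. traffic: {shift:.2f} D")
    ```

    The farther the virtual image is projected, the smaller the refocus shift between HUD and traffic, which is one reason automotive HUDs typically place the image several metres ahead of the driver.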

  • Driving simulator for head-up display evaluation: driver's response time in simulated accident cases
    2006
    Co-Authors: Vassilis Charissis, S Aarafat, M Patera, C Christomanos
    Abstract:

    This paper introduces a novel automotive full-windshield head-up display (HUD) interface which aims to improve the driver’s spatial awareness and response times under low-visibility conditions. In order to evaluate the effectiveness of the proposed HUD system, a multidisciplinary and multinational team of researchers built the driving simulator presented here. Due to time and cost constraints, the simulator was assembled from off-the-shelf components and based on open-source code. The paper discusses the HUD's function, presents the challenging construction process of the simulator, and provides an analytical overview of the two accident scenarios. The outcomes of forty user tests conducted with this custom-built driving simulator are presented, and the conclusions of the study are discussed.

Girija G - One of the best experts on this subject based on the ideXlab platform.

  • Experimental Study with Enhanced Vision System Prototype Unit
    2016
    Co-Authors: Vps Naidu, Narayana Rao P, Sudesh K Kashyap, Shanthakuma N, Girija G
    Abstract:

    The National Civil Aircraft (NCA) being developed at National Aerospace Laboratories (NAL) is expected to be capable of operating from airports with minimal infrastructure and instrumentation facilities under all-weather conditions. The key enabling technology for this is an Integrated Enhanced and Synthetic Vision System (IESVS), which is a combination of an Enhanced Vision System (EVS), a Synthetic Vision System (SVS), and a head-up display. A prototype EVS consisting of a Forward-Looking Infrared (FLIR) camera and a CCD color camera has been developed and tested at NAL. A Simulink block was developed to acquire the image data in real time (online) from a four-channel frame grabber. An image fusion algorithm based on wavelets was developed to fuse the images from the CCD and FLIR cameras. The affine transform used for image registration is computed by selecting control points in both the CCD and FLIR images. Test results from experiments conducted on the runway during day and night (runway lights ON/OFF) conditions are presented.
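
    Two concrete steps are named here: control-point-based affine registration and wavelet image fusion. The paper's exact fusion rule is not given in this abstract; a minimal sketch using OpenCV and PyWavelets with a common rule (average the approximation band, keep the larger-magnitude detail coefficients) might look like this. The rule and function names are assumptions, not the paper's implementation:

    ```python
    import cv2
    import numpy as np
    import pywt  # PyWavelets

    def register_flir_to_ccd(flir, ccd, flir_pts, ccd_pts):
        """Warp the FLIR image into the CCD frame with an affine transform
        computed from three manually chosen control-point pairs."""
        M = cv2.getAffineTransform(np.float32(flir_pts), np.float32(ccd_pts))
        h, w = ccd.shape[:2]
        return cv2.warpAffine(flir, M, (w, h))

    def wavelet_fuse(img_a, img_b, wavelet="db2", level=2):
        """Fuse two registered grayscale images: average the approximation
        band, keep the larger-magnitude detail coefficients."""
        ca = pywt.wavedec2(img_a.astype(float), wavelet, level=level)
        cb = pywt.wavedec2(img_b.astype(float), wavelet, level=level)
        fused = [(ca[0] + cb[0]) / 2.0]
        for det_a, det_b in zip(ca[1:], cb[1:]):
            fused.append(tuple(np.where(np.abs(da) >= np.abs(db), da, db)
                               for da, db in zip(det_a, det_b)))
        out = pywt.waverec2(fused, wavelet)
        return np.clip(out, 0, 255).astype(np.uint8)
    ```

    The max-absolute rule on detail bands tends to carry the sharpest edges from either sensor into the fused image, which is the usual motivation for wavelet fusion of FLIR and visible imagery.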

  • CSIR-National Aerospace Laboratories
    2016
    Co-Authors: Vps Naidu, Narayana Rao P, Sudesh K Kashyap, Shanthakuma N, Girija G
    Abstract:

    The National Civil Aircraft (NCA) being developed at National Aerospace Laboratories (NAL) is expected to be capable of operating from airports with minimal infrastructure and instrumentation facilities under all-weather conditions. The key enabling technology for this is an Integrated Enhanced and Synthetic Vision System (IESVS), which is a combination of an Enhanced Vision System (EVS), a Synthetic Vision System (SVS), and a head-up display. A prototype EVS consisting of a Forward-Looking Infrared (FLIR) camera and a CCD color camera has been developed and tested at NAL. A Simulink block was developed to acquire the image data in real time (online) from a four-channel frame grabber. An image fusion algorithm based on wavelets was developed to fuse the images from the CCD and FLIR cameras. The affine transform used for image registration is computed by selecting control points in both the CCD and FLIR images. Test results from experiments conducted on the runway during day and night (runway lights ON/OFF) conditions are presented.

  • Experimental Study with Enhanced Vision System Prototype Unit - poster
    2011
    Co-Authors: Naidu Vps, Sudesh K Kashyap, Shanthakuma N, Narayana Rao P, Girija G
    Abstract:

    The National Civil Aircraft (NCA) being developed at National Aerospace Laboratories (NAL) is expected to be capable of operating from airports with minimal infrastructure and instrumentation facilities under all-weather conditions. The key enabling technology for this is an Integrated Enhanced and Synthetic Vision System (IESVS), which is a combination of an Enhanced Vision System (EVS), a Synthetic Vision System (SVS), and a head-up display. A prototype EVS consisting of a Forward-Looking Infrared (FLIR) camera and a CCD color camera has been developed and tested at NAL. A Simulink block was developed to acquire the image data in real time (online) from a four-channel frame grabber. An image fusion algorithm based on wavelets was developed to fuse the images from the CCD and FLIR cameras. The affine transform used for image registration is computed by selecting control points in both the CCD and FLIR images. Test results from experiments conducted on the runway during day and night (runway lights ON/OFF) conditions are presented.

  • Experimental Study with Enhanced Vision System Prototype Unit
    2011
    Co-Authors: Naidu Vps, Narayana Rao P, Sudesh K Kashyap, Shanthakuma N, Girija G
    Abstract:

    The National Civil Aircraft (NCA) being developed at National Aerospace Laboratories (NAL) is expected to be capable of operating from airports with minimal infrastructure and instrumentation facilities under all-weather conditions. The key enabling technology for this is an Integrated Enhanced and Synthetic Vision System (IESVS), which is a combination of an Enhanced Vision System (EVS), a Synthetic Vision System (SVS), and a head-up display. A prototype EVS consisting of a Forward-Looking Infrared (FLIR) camera and a CCD color camera has been developed and tested at NAL. A Simulink block was developed to acquire the image data in real time (online) from a four-channel frame grabber. An image fusion algorithm based on wavelets was developed to fuse the images from the CCD and FLIR cameras. The affine transform used for image registration is computed by selecting control points in both the CCD and FLIR images. Test results from experiments conducted on the runway during day and night (runway lights ON/OFF) conditions are presented.

Martin Naef - One of the best experts on this subject based on the ideXlab platform.

  • Evaluation of a prototype automotive head-up display interface: testing the driver's focusing ability through a VR simulation
    IEEE Intelligent Vehicles Symposium, 2007
    Co-Authors: Vassilis Charissis, Martin Naef
    Abstract:

    Contemporary automotive, navigation, and infotainment requirements have turned the traditional dashboard into a complex device that can often distract the driver. Head-up displays (HUDs) have recently attracted attention in automotive research as a means of reducing the driver's reaction time and improving spatial awareness. The effectiveness of the proposed HUD interface hinges on the driver's ability to focus on both the HUD and the actual traffic. This paper analyses driver performance through user tests with different focal levels for the projection of a full-windshield HUD interface. For this purpose, a VR driving simulator was developed to test different depth-of-field configurations of the HUD while driving in various weather and traffic conditions, with and without the HUD. The simulation results reveal the users' preferences regarding the focal point of the superimposed interface and present a comparative evaluation of the different focal levels and their impact on drivers' behaviour and performance.

Lotfi Abdi - One of the best experts on this subject based on the ideXlab platform.

  • In-vehicle cooperative driver information systems
    2017 13th International Wireless Communications and Mobile Computing Conference (IWCMC), 2017
    Co-Authors: Lotfi Abdi, Wiem Takrouni, Aref Meddeb
    Abstract:

    Critical traffic problems such as accidents and congestion require the development of new transportation systems. One way to prevent accidents is to provide the driver with information about the surrounding environment, and research in perceptual and human-factors assessment is needed so that this information is displayed relevantly and correctly, for maximal road traffic safety as well as optimal driver comfort. The development and deployment of cooperative vehicular safety systems undeniably require a combination of dedicated wireless communications, computer vision, and AR technologies as building blocks. An Augmented Reality Head-up Display (AR-HUD) can facilitate a new form of dialogue between the vehicle and the driver, and can enhance ITS by superimposing surrounding traffic information on the user's view while keeping the driver's eyes on the road. In this paper, we propose a fast deep-learning-based object detection approach for identifying and recognizing road obstacle types, as well as interpreting and predicting complex traffic situations. A single Convolutional Neural Network (CNN) predicts regions of interest and class probabilities directly from full images in one evaluation. We also investigated the potential costs and benefits of using dynamic conformal AR cues to improve driving safety. A new AR-HUD approach to creating real-time interactive traffic animations is introduced in terms of obstacle types, rules for placement and visibility, and the projection of these cues onto an in-vehicle display.
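
    Rendering dynamic conformal AR cues requires registering a detected obstacle's 3D position to the driver's view before drawing it on the HUD. The paper's projection pipeline is not detailed in this abstract; a minimal pinhole-model sketch, in which all calibration values are invented for illustration:

    ```python
    import numpy as np

    def project_to_hud(p_world, K, R, t):
        """Project a 3D obstacle position (metres, vehicle frame) into 2D
        HUD image coordinates with a pinhole model: x ~ K [R|t] p."""
        p_cam = R @ p_world + t
        if p_cam[2] <= 0:  # behind the projection plane: not drawable
            return None
        u, v, w = K @ p_cam
        return np.array([u / w, v / w])

    # Hypothetical calibration: identity orientation, origin at the
    # driver's eye point, 1000 px focal length, 640x360 principal point.
    K = np.array([[1000.0,    0.0, 640.0],
                  [   0.0, 1000.0, 360.0],
                  [   0.0,    0.0,   1.0]])
    R, t = np.eye(3), np.zeros(3)

    # An obstacle 30 m ahead and 2 m to the right
    print(project_to_hud(np.array([2.0, 0.0, 30.0]), K, R, t))
    ```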

  • Driver information system: a combination of augmented reality and deep learning
    Symposium on Applied Computing, 2017
    Co-Authors: Lotfi Abdi
    Abstract:

    Improving traffic safety is one of the important goals of Intelligent Transportation Systems (ITS). In vehicle-based safety systems, it is more desirable to prevent an accident than to reduce the severity of injuries. One way to prevent accidents is to provide the driver with information about the surrounding environment. An Augmented Reality Head-up Display (AR-HUD) can facilitate a new form of dialogue between the vehicle and the driver, and can enhance ITS by superimposing surrounding traffic information on the user's view while keeping the driver's eyes on the road. In this paper, we propose a fast deep-learning-based object detection approach for identifying and recognizing road obstacle types, as well as interpreting and predicting complex traffic situations. A single Convolutional Neural Network (CNN) predicts regions of interest and class probabilities directly from full images in one evaluation. We also investigated the potential costs and benefits of using dynamic conformal AR cues to improve driving safety.
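
    A single CNN that predicts regions of interest and class probabilities from the full image in one evaluation is in the spirit of single-shot (YOLO-style) detectors. The paper's network layout is not given here; a minimal sketch of decoding such a grid output, where the tensor layout, grid size, and thresholds are all assumptions:

    ```python
    import numpy as np

    def decode_grid(pred, conf_thresh=0.5, img_size=448, S=7, C=3):
        """Decode an S x S grid where each cell predicts one box:
        (x, y, w, h, objectness, C class scores), x/y relative to the cell."""
        boxes, cell = [], img_size / S
        for i in range(S):
            for j in range(S):
                x, y, w, h, obj = pred[i, j, :5]
                cls_scores = pred[i, j, 5:5 + C]
                score = obj * cls_scores.max()
                if score < conf_thresh:
                    continue
                cx, cy = (j + x) * cell, (i + y) * cell  # box centre, pixels
                boxes.append((cx, cy, w * img_size, h * img_size,
                              int(cls_scores.argmax()), float(score)))
        return boxes

    # Random activations stand in for a real network's output tensor
    pred = np.random.rand(7, 7, 5 + 3)
    print(len(decode_grid(pred)))
    ```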

  • In-Vehicle Augmented Reality Traffic Information System: A New Type of Communication Between Driver and Vehicle
    Procedia Computer Science, 2015
    Co-Authors: Lotfi Abdi, Faten Ben Abdallah, Aref Meddeb
    Abstract:

    In order to improve driving safety and minimize driving workload, the information provided should be represented in a way that is easily understood and imposes less cognitive load on the driver. An Augmented Reality Head-up Display (AR-HUD) can facilitate a new form of dialogue between the vehicle and the driver, and can enhance intelligent transportation systems by superimposing surrounding traffic information on the user's view while keeping the driver's eyes on the road. In this paper, we investigated the potential costs and benefits of using AR cues to improve driving safety as a new form of dialogue between the vehicle and the driver. We present a new approach for a markerless AR traffic sign recognition system that superimposes virtual objects onto a real scene under all types of driving situations, including unfavorable weather conditions. Our method uses two steps: hypothesis generation and hypothesis verification. In the first step, regions of interest (ROIs) are extracted using a scanning window with a Haar cascade detector and an AdaBoost classifier, reducing the computational region for the hypothesis generation step. The second step verifies each candidate, classifying it as vehicle or non-vehicle using edge information and a symmetry measure. We employ this approach to improve the accuracy of the AR traffic information system, assist the driver in various driving situations, increase driving comfort, and reduce traffic accidents.
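
    The two-step pipeline named here (a Haar cascade trained with AdaBoost for hypothesis generation, edge and symmetry cues for verification) maps naturally onto OpenCV primitives. A minimal sketch, assuming a separately trained cascade file (vehicle_cascade.xml is a placeholder) and a simple mirrored-edge symmetry measure, since the paper's exact measure is not given:

    ```python
    import cv2
    import numpy as np

    # Placeholder: OpenCV ships face cascades; a vehicle/sign cascade
    # would have to be trained separately with AdaBoost.
    cascade = cv2.CascadeClassifier("vehicle_cascade.xml")

    def symmetry_score(patch):
        """Vehicles seen from behind are roughly left-right symmetric:
        compare the edge map of the patch with its horizontal mirror."""
        edges = cv2.Canny(patch, 50, 150).astype(float)
        mirrored = np.fliplr(edges)
        denom = edges.sum() + mirrored.sum()
        return 0.0 if denom == 0 else 2.0 * np.minimum(edges, mirrored).sum() / denom

    def detect_and_verify(gray, sym_thresh=0.4):
        """Step 1: the cascade proposes ROIs over a scanning window.
        Step 2: keep only candidates that pass the symmetry check."""
        rois = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=3)
        return [(x, y, w, h) for (x, y, w, h) in rois
                if symmetry_score(gray[y:y + h, x:x + w]) >= sym_thresh]
    ```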