Visually Impaired User

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies


The Experts below are selected from a list of 81 Experts worldwide ranked by the ideXlab platform

Pavel Slavik - One of the best experts on this subject based on the ideXlab platform.

  • User Interfaces for All - Adaptive navigation of Visually Impaired Users in a virtual environment on the world wide web
    Lecture Notes in Computer Science, 2003
    Co-Authors: Vladislav Nemec, Zdenek Mikovec, Pavel Slavik
    Abstract:

    The growing range of new technologies (including multimedia, the internet, and virtual reality) enables new approaches to the design and implementation of applications of many kinds. Specific requirements emerge for Users with special needs. One example is the use of 3D information in the web environment, and navigation within it, by Visually Impaired Users. Our solution provides a semantic and functional description of the scene objects and inter-object relations in addition to the "standard" geometric scene description. This approach lets the User query the virtual environment in various ways, e.g. searching for a path to a specific object, searching for an object with specific properties, and filtering scene information. The system allows the Visually Impaired User to virtually walk through the scene and query for information about its objects, giving them access to information that would otherwise be available only to Users without visual impairments. The User interface provides feedback in accordance with the User group's specific requirements: the feedback is implemented as human-readable text that can be accessed through common accessibility tools (screen reader, Braille display, etc.). An embedded module providing speech output has also been implemented.
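The semantic scene description and path query the abstract mentions can be sketched roughly as follows. This is a minimal hypothetical illustration, not the authors' implementation: the scene dictionary, object types, and link names are all invented for the example, and the route list stands in for the human-readable text handed to a screen reader.

```python
from collections import deque

# Hypothetical scene: each object carries a semantic "type" in addition to
# its geometry (omitted here), and "connects" lists navigable links.
scene = {
    "hall":   {"type": "room",      "connects": ["door1"]},
    "door1":  {"type": "door",      "connects": ["hall", "office"]},
    "office": {"type": "room",      "connects": ["door1", "desk"]},
    "desk":   {"type": "furniture", "connects": ["office"]},
}

def find_path(start, predicate):
    """Breadth-first search for the nearest object matching a semantic query."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        obj = path[-1]
        if predicate(scene[obj]):
            return path  # a readable route, suitable for text or speech output
        for nxt in scene[obj]["connects"]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

# Query: "find a path from the hall to a piece of furniture"
print(find_path("hall", lambda o: o["type"] == "furniture"))
```

The same predicate mechanism covers the other queries the abstract lists: searching for an object with specific properties is `find_path` with a property test, and filtering scene information is a comprehension over `scene.items()`.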

Sandesh S Chiploonkar - One of the best experts on this subject based on the ideXlab platform.

  • a perceptual field of vision using image processing
    International Conference on Computing Communication and Networking Technologies, 2018
    Co-Authors: S Pruthvi, M S Shama, Hitesh V Harithas, Sandesh S Chiploonkar
    Abstract:

    Current estimates indicate that there are more than 285 million Visually Impaired people around the globe, of whom 39 million are blind and the rest have low vision [1]. Approximately 90% of people suffering from blindness come from low-income backgrounds [2]. The main aim of this paper is to provide an efficient visual platform that enhances a Visually Impaired User's perception of the surroundings. This is achieved through real-time image and video processing: captured data is analyzed and compared against a database of pre-stored data that aids in recognizing the captured image. A head-mounted camera captures an image in real time whenever desired; it is placed to provide a maximum field of vision and to eliminate blind spots. The captured image is then processed and matched against the information stored in the database, and an audio output conveys the desired information. Audio output is delivered through bone-conduction headphones, which transmit audio signals directly to the inner ear and leave the outer ear free to remain sensitive to the surroundings. A distress alert mechanism is also included as a safety measure in times of danger: it sends a distress message containing the User's current location.
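The match-against-a-database step can be sketched as a nearest-neighbour lookup over stored feature vectors. Everything here is an assumption for illustration: the paper does not specify its matching method, and the labels, vectors, and threshold below are invented stand-ins for descriptors extracted from reference images.

```python
import math

# Hypothetical pre-stored database: object label -> feature vector
# (stand-ins for descriptors computed from reference images).
database = {
    "door":      [0.9, 0.1, 0.3],
    "staircase": [0.2, 0.8, 0.5],
    "chair":     [0.4, 0.4, 0.9],
}

def recognise(features, threshold=0.5):
    """Return the closest database label, or None if nothing is close enough."""
    best_label, best_dist = None, float("inf")
    for label, ref in database.items():
        dist = math.dist(features, ref)  # Euclidean distance between vectors
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label if best_dist <= threshold else None

# A captured frame's (hypothetical) feature vector, close to "door":
print(recognise([0.85, 0.15, 0.35]))
```

In the described system the returned label would then be spoken through the bone-conduction headphones; the threshold keeps unknown scenes from being mislabelled as the nearest stored object.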

  • ICCCNT - A Perceptual Field of Vision, Using Image Processing
    2018 9th International Conference on Computing Communication and Networking Technologies (ICCCNT), 2018
    Co-Authors: S Pruthvi, M S Shama, Hitesh V Harithas, Sandesh S Chiploonkar
    Abstract:

    Current estimates indicate that there are more than 285 million Visually Impaired people around the globe, of whom 39 million are blind and the rest have low vision [1]. Approximately 90% of people suffering from blindness come from low-income backgrounds [2]. The main aim of this paper is to provide an efficient visual platform that enhances a Visually Impaired User's perception of the surroundings. This is achieved through real-time image and video processing: captured data is analyzed and compared against a database of pre-stored data that aids in recognizing the captured image. A head-mounted camera captures an image in real time whenever desired; it is placed to provide a maximum field of vision and to eliminate blind spots. The captured image is then processed and matched against the information stored in the database, and an audio output conveys the desired information. Audio output is delivered through bone-conduction headphones, which transmit audio signals directly to the inner ear and leave the outer ear free to remain sensitive to the surroundings. A distress alert mechanism is also included as a safety measure in times of danger: it sends a distress message containing the User's current location.

Vladislav Nemec - One of the best experts on this subject based on the ideXlab platform.

  • User Interfaces for All - Adaptive navigation of Visually Impaired Users in a virtual environment on the world wide web
    Lecture Notes in Computer Science, 2003
    Co-Authors: Vladislav Nemec, Zdenek Mikovec, Pavel Slavik
    Abstract:

    The growing range of new technologies (including multimedia, the internet, and virtual reality) enables new approaches to the design and implementation of applications of many kinds. Specific requirements emerge for Users with special needs. One example is the use of 3D information in the web environment, and navigation within it, by Visually Impaired Users. Our solution provides a semantic and functional description of the scene objects and inter-object relations in addition to the "standard" geometric scene description. This approach lets the User query the virtual environment in various ways, e.g. searching for a path to a specific object, searching for an object with specific properties, and filtering scene information. The system allows the Visually Impaired User to virtually walk through the scene and query for information about its objects, giving them access to information that would otherwise be available only to Users without visual impairments. The User interface provides feedback in accordance with the User group's specific requirements: the feedback is implemented as human-readable text that can be accessed through common accessibility tools (screen reader, Braille display, etc.). An embedded module providing speech output has also been implemented.

Christophe Jouffrais - One of the best experts on this subject based on the ideXlab platform.

  • From open geographical data to tangible maps: improving the accessibility of maps for Visually Impaired people
    2015
    Co-Authors: Julie Ducasse, Marc J.-m. Macé, Christophe Jouffrais
    Abstract:

    Visual maps must be transcribed into (interactive) raised-line maps to be accessible to Visually Impaired people. However, these tactile maps suffer from several shortcomings: they are slow and expensive to produce, they cannot display a large amount of information, and they are not dynamically modifiable. A number of methods have been developed to automate the production of raised-line maps, but there is not yet any tactile map editor on the market. Tangible interaction has proved to be an efficient way to help a Visually Impaired User manipulate spatial representations. Contrary to raised-line maps, tangible maps can be autonomously constructed and edited. In this paper, we present the scenarios and the main expected contributions of the AccessiMap project, which builds on the availability of many sources of open spatial data: 1/ facilitating the production of interactive tactile maps through the development of an open-source web-based editor; 2/ investigating the use of tangible interfaces for the autonomous construction and exploration of a map by a Visually Impaired User.

  • Waypoints validation strategies in assisted navigation for Visually Impaired pedestrian
    2014
    Co-Authors: Slim Kammoun, Marc J.-m. Macé, Christophe Jouffrais
    Abstract:

    In Electronic Orientation Aids, the guidance process consists of two steps: first, locating the Visually Impaired User along the expected trajectory, and second, providing her/him with appropriate instructions on the directions to follow, along with pertinent information about the surroundings. In urban environments, positioning accuracy is not always optimal, and tracking the User's progress along the expected itinerary is often challenging. We present three new waypoint-based validation strategies to track the User's location despite low positioning accuracy. These strategies were evaluated within SIMU4NAV, a multimodal virtual environment supporting the design of Electronic Orientation Aids for Visually Impaired people. Results show that the proposed strategies are more robust to positioning inaccuracies, and hence more efficient at guiding Users.

  • DUCK : a deDUCtive Keyboard
    2013
    Co-Authors: Philippe Roussille, Mathieu Raynal, Slim Kammoun, Emmanuel Dubois, Christophe Jouffrais
    Abstract:

    This paper presents the deDUCtive Keyboard (DUCK), which aims to improve text entry for Visually Impaired Users on AZERTY/QWERTY-layout software keyboards. Relying on a predictive system, DUCK allows rapid text entry without requiring precise keyboard hits. A preliminary study with a Visually Impaired User indicated that usability improves compared to a regular virtual keyboard with vocal feedback.
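The deductive idea behind imprecise-hit prediction can be sketched like this: each hit is expanded to the struck key plus its physical neighbours, and dictionary words compatible with every hit are ranked by frequency. This is an illustrative guess at the mechanism, not DUCK's actual predictor; the neighbourhood model (horizontal neighbours only) and the tiny lexicon are invented for the example.

```python
# Simplified QWERTY rows; a real model would also include vertical neighbours.
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

def neighbours(key):
    """The struck key plus its horizontally adjacent keys."""
    for row in ROWS:
        i = row.find(key)
        if i != -1:
            return set(row[max(0, i - 1):i + 2])
    return {key}

# Tiny stand-in lexicon: word -> relative frequency.
LEXICON = {"hello": 5, "jello": 1, "help": 3, "gekko": 1}

def deduce(hits):
    """Words of the right length whose every letter lies in the hit's zone,
    ranked by frequency (most likely intended word first)."""
    zones = [neighbours(h) for h in hits]
    candidates = [w for w in LEXICON
                  if len(w) == len(zones)
                  and all(c in z for c, z in zip(w, zones))]
    return sorted(candidates, key=LEXICON.get, reverse=True)

# Sloppy hits near "hello" ('j' struck instead of 'h'):
print(deduce("jellp"))
```

Because candidates are constrained by the whole word at once, a single off-by-one hit rarely changes the top-ranked prediction, which is what lets entry proceed without precise targeting.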

S Pruthvi - One of the best experts on this subject based on the ideXlab platform.

  • a perceptual field of vision using image processing
    International Conference on Computing Communication and Networking Technologies, 2018
    Co-Authors: S Pruthvi, M S Shama, Hitesh V Harithas, Sandesh S Chiploonkar
    Abstract:

    Current estimates indicate that there are more than 285 million Visually Impaired people around the globe, of whom 39 million are blind and the rest have low vision [1]. Approximately 90% of people suffering from blindness come from low-income backgrounds [2]. The main aim of this paper is to provide an efficient visual platform that enhances a Visually Impaired User's perception of the surroundings. This is achieved through real-time image and video processing: captured data is analyzed and compared against a database of pre-stored data that aids in recognizing the captured image. A head-mounted camera captures an image in real time whenever desired; it is placed to provide a maximum field of vision and to eliminate blind spots. The captured image is then processed and matched against the information stored in the database, and an audio output conveys the desired information. Audio output is delivered through bone-conduction headphones, which transmit audio signals directly to the inner ear and leave the outer ear free to remain sensitive to the surroundings. A distress alert mechanism is also included as a safety measure in times of danger: it sends a distress message containing the User's current location.

  • ICCCNT - A Perceptual Field of Vision, Using Image Processing
    2018 9th International Conference on Computing Communication and Networking Technologies (ICCCNT), 2018
    Co-Authors: S Pruthvi, M S Shama, Hitesh V Harithas, Sandesh S Chiploonkar
    Abstract:

    Current estimates indicate that there are more than 285 million Visually Impaired people around the globe, of whom 39 million are blind and the rest have low vision [1]. Approximately 90% of people suffering from blindness come from low-income backgrounds [2]. The main aim of this paper is to provide an efficient visual platform that enhances a Visually Impaired User's perception of the surroundings. This is achieved through real-time image and video processing: captured data is analyzed and compared against a database of pre-stored data that aids in recognizing the captured image. A head-mounted camera captures an image in real time whenever desired; it is placed to provide a maximum field of vision and to eliminate blind spots. The captured image is then processed and matched against the information stored in the database, and an audio output conveys the desired information. Audio output is delivered through bone-conduction headphones, which transmit audio signals directly to the inner ear and leave the outer ear free to remain sensitive to the surroundings. A distress alert mechanism is also included as a safety measure in times of danger: it sends a distress message containing the User's current location.