Spatial Redundancy

The experts below are selected from a list of 12,297 experts worldwide, ranked by the ideXlab platform.

Tobi Delbruck - One of the best experts on this subject based on the ideXlab platform.

  • A 132 by 104 10μm-Pixel 250μW 1kefps Dynamic Vision Sensor with Pixel-Parallel Noise and Spatial Redundancy Suppression
    Symposium on VLSI Circuits, 2019
    Co-Authors: Luca Longinotti, Federico Corradi, Tobi Delbruck
    Abstract:

    This paper reports a 132 by 104 dynamic vision sensor (DVS) with a 10 μm pixel in a 65 nm logic process and a synchronous address-event representation (SAER) readout capable of 180 Meps throughput. The SAER architecture allows adjustable event-frame-rate control and supports pre-readout pixel-parallel noise and Spatial Redundancy suppression. The chip consumes 250 μW at 100 keps while running at 1k event frames per second (efps), making it 3-5 times more power efficient than the prior art under normalized power metrics. The chip is aimed at low-power IoT and real-time high-speed smart vision applications.
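
The abstract does not describe the on-chip suppression circuit itself; purely as a rough software analogue of what pre-readout pixel-parallel Spatial Redundancy suppression achieves, the Python sketch below thins an event frame so that a small tile of neighboring same-frame events contributes only one event. The function name, the tile size, and the -1/0/+1 frame encoding are illustrative assumptions, not the chip's actual scheme.

```python
import numpy as np

def suppress_spatial_redundancy(event_frame, block=2):
    """Keep at most one event per `block` x `block` tile of the event frame.

    event_frame: int array of shape (H, W) with values -1 (OFF event),
    0 (no event) or +1 (ON event) for one event frame.
    Neighboring events inside a tile carry largely the same spatial
    information, so they are collapsed to a single representative event.
    """
    h, w = event_frame.shape
    out = np.zeros_like(event_frame)
    for y0 in range(0, h, block):
        for x0 in range(0, w, block):
            tile = event_frame[y0:y0 + block, x0:x0 + block]
            ys, xs = np.nonzero(tile)
            if ys.size:
                # The first event in the tile becomes its representative.
                out[y0 + ys[0], x0 + xs[0]] = tile[ys[0], xs[0]]
    return out
```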

Gozde Bozdagi Akar - One of the best experts on this subject based on the ideXlab platform.

  • Subjective Evaluation of Effects of Spectral and Spatial Redundancy Reduction on Stereo Images
    European Signal Processing Conference, 2005
    Co-Authors: Anil Aksay, Cagdas Bilen, Gozde Bozdagi Akar
    Abstract:

    The human visual system is more sensitive to luminance than to chrominance. To remove information that the human visual system does not perceive, the color channels are downsampled while the luminance channel is kept at its original resolution. Similarly, in the stereo case, the human visual system takes its high-frequency information from the high-resolution image of a mixed-resolution image pair, so downsampling one image of the pair yields higher compression in stereo image coding. In this paper, we examine downsampling the color channels of color stereo image pairs at higher ratios. In our experiments, we used the “double-stimulus continuous-quality scale” (DSCQS) method. We found that depth perception is not changed by compression or filtering. However, to keep the perceived image quality similar to that of the original stereo pair, filtering should be applied to the chrominance channels but not to the luminance channel.
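
The abstract's core operation is downsampling chrominance more aggressively than luminance. As a loose illustration (not the exact pipeline used in the paper), the sketch below converts an RGB image to Y/Cb/Cr with the BT.601 coefficients, keeps the luminance plane at full resolution, and subsamples both chroma planes by an assumed ratio; in the mixed-resolution stereo setting this would be applied to one image of the pair.

```python
import numpy as np

def downsample_chroma(rgb, ratio=4):
    """Spectral redundancy reduction on a single image.

    rgb: float array of shape (H, W, 3) with values in [0, 1].
    Returns (y, cb_small, cr_small): full-resolution luminance plus
    chroma planes subsampled by `ratio` in both dimensions.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b        # luminance (kept as is)
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b   # blue-difference chroma
    cr = 0.5 * r - 0.418688 * g - 0.081312 * b    # red-difference chroma
    # The eye tolerates much coarser chroma, so only every `ratio`-th
    # chroma sample is kept in each dimension.
    return y, cb[::ratio, ::ratio], cr[::ratio, ::ratio]
```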

Manuel Barranco - One of the best experts on this subject based on the ideXlab platform.

  • Simulation of the Proactive Transmission of Replicated Frames Mechanism over TSN
    24th IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), 2019
    Co-Authors: Ines Alvarez, Julian Proenza, Drago Cavka, Manuel Barranco
    Abstract:

    The Time-Sensitive Networking (TSN) Task Group (TG) is providing Ethernet with timing guarantees, reconfiguration services and fault tolerance mechanisms. Some of TSN’s targeted applications are real-time critical applications, which must provide a correct service continuously. To support these applications, the TSN TG standardised a Spatial Redundancy mechanism. Even though Spatial Redundancy can tolerate both permanent and temporary faults, it is not cost-effective; temporary faults can instead be tolerated using time Redundancy. We proposed the Proactive Transmission of Replicated Frames (PTRF) mechanism to tolerate temporary faults in the links. In this work we present a new PTRF approach, a PTRF simulation model, and a comparison of the approaches using exhaustive fault injection.
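
The abstract does not give PTRF's frame formats or its TSN integration; purely as an illustration of the time-redundancy idea behind proactive replication, the sketch below transmits k copies of every frame and lets the receiver discard duplicates by sequence number. The class and parameter names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    seq: int        # sequence number shared by all replicas of the same frame
    payload: bytes

def transmit_proactively(frames, send, k=3):
    """Time redundancy: proactively send `k` replicas of every frame so that
    the loss of up to k-1 replicas to temporary link faults is tolerated."""
    for frame in frames:
        for _ in range(k):
            send(frame)

class DuplicateFilter:
    """Receiver side: deliver each sequence number once, drop later replicas."""
    def __init__(self):
        self.seen = set()

    def receive(self, frame):
        if frame.seq in self.seen:
            return None                  # redundant replica, discard
        self.seen.add(frame.seq)
        return frame.payload
```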

  • Mixing Time and Spatial Redundancy over Time-Sensitive Networking
    Dependable Systems and Networks, 2018
    Co-Authors: Ines Alvarez, Julian Proenza, Manuel Barranco
    Abstract:

    In this work, we propose to mix time and Spatial Redundancy over a Time-Sensitive Networking (TSN)-based network to increase its reliability while reducing resource consumption.
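
The abstract above is a one-sentence proposal; as a loose illustration of what mixing the two kinds of Redundancy could look like, the sketch below sends every frame over two disjoint paths (Spatial Redundancy) and additionally repeats each frame a small number of times per path (time Redundancy). The function, the path callables and the replica counts are assumptions for illustration, not the mechanism proposed in the paper.

```python
def send_with_mixed_redundancy(frames, paths, time_replicas=2):
    """Spatial redundancy: every frame is sent over all disjoint `paths`.
    Time redundancy: each path additionally carries `time_replicas` copies,
    so a short temporary fault on a path is tolerated on that path itself,
    allowing fewer disjoint paths overall.

    `paths` is a list of callables, each transmitting one frame on its link.
    """
    for frame in frames:
        for send_on_path in paths:          # replicate in space
            for _ in range(time_replicas):  # replicate in time
                send_on_path(frame)

# Example with two hypothetical links: 2 paths x 2 copies = 4 transmissions per frame.
link_a, link_b = [], []
send_with_mixed_redundancy([b"frame-0"], [link_a.append, link_b.append])
```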

Luca Longinotti - One of the best experts on this subject based on the ideXlab platform.

  • A 132 by 104 10μm-Pixel 250μW 1kefps Dynamic Vision Sensor with Pixel-Parallel Noise and Spatial Redundancy Suppression
    Symposium on VLSI Circuits, 2019
    Co-Authors: Luca Longinotti, Federico Corradi, Tobi Delbruck
    Abstract:

    This paper reports a 132 by 104 dynamic vision sensor (DVS) with a 10 μm pixel in a 65 nm logic process and a synchronous address-event representation (SAER) readout capable of 180 Meps throughput. The SAER architecture allows adjustable event-frame-rate control and supports pre-readout pixel-parallel noise and Spatial Redundancy suppression. The chip consumes 250 μW at 100 keps while running at 1k event frames per second (efps), making it 3-5 times more power efficient than the prior art under normalized power metrics. The chip is aimed at low-power IoT and real-time high-speed smart vision applications.

Anil Aksay - One of the best experts on this subject based on the ideXlab platform.

  • Subjective Evaluation of Effects of Spectral and Spatial Redundancy Reduction on Stereo Images
    European Signal Processing Conference, 2005
    Co-Authors: Anil Aksay, Cagdas Bilen, Gozde Bozdagi Akar
    Abstract:

    The human visual system is more sensitive to luminance than to chrominance. To remove information that the human visual system does not perceive, the color channels are downsampled while the luminance channel is kept at its original resolution. Similarly, in the stereo case, the human visual system takes its high-frequency information from the high-resolution image of a mixed-resolution image pair, so downsampling one image of the pair yields higher compression in stereo image coding. In this paper, we examine downsampling the color channels of color stereo image pairs at higher ratios. In our experiments, we used the “double-stimulus continuous-quality scale” (DSCQS) method. We found that depth perception is not changed by compression or filtering. However, to keep the perceived image quality similar to that of the original stereo pair, filtering should be applied to the chrominance channels but not to the luminance channel.
