Unordered Set

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 324 Experts worldwide, ranked by the ideXlab platform

Jehoshua Bruck - One of the best experts on this subject based on the ideXlab platform.

  • ISIT - On Coding Over Sliced Information
    2019 IEEE International Symposium on Information Theory (ISIT), 2019
    Co-Authors: Jin Sima, Netanel Raviv, Jehoshua Bruck
    Abstract:

    The interest in channel models in which the data is sent as an unordered set of binary strings has increased lately, due to emerging applications in DNA storage, among others. In this paper we analyze the minimal redundancy of binary codes for this channel under substitution errors, and provide a code construction for a single substitution that is shown to be asymptotically optimal up to constants. The surprising result in this paper is that while the information vector is sliced into a set of unordered strings, the amount of redundant bits that are required to correct errors is order-wise equivalent to the amount required in the classical error-correcting paradigm.
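A toy illustration of the channel model may help (this is the naive indexing baseline, not the paper's construction): the information vector is sliced into strings that arrive as an unordered set, and prepending a log2(M)-bit index to each slice restores order at a cost of M·ceil(log2 M) redundant bits, which the paper shows is far more than necessary.

```python
# Toy sketch of the "sliced" channel: an information vector is sent as
# an unordered set of binary strings. The naive baseline below restores
# order by prepending a ceil(log2 M)-bit index to each slice.
import math

def slice_with_indices(bits: str, m: int) -> set:
    """Split `bits` into m equal slices, each prefixed with its index."""
    l = len(bits) // m
    w = math.ceil(math.log2(m))          # index width in bits
    return {format(i, f"0{w}b") + bits[i * l:(i + 1) * l] for i in range(m)}

def reassemble(slices: set, m: int) -> str:
    """Recover the original vector from the unordered set of slices."""
    w = math.ceil(math.log2(m))
    ordered = sorted(slices, key=lambda s: int(s[:w], 2))
    return "".join(s[w:] for s in ordered)

data = "1011000111110010"                # 16 information bits
sent = slice_with_indices(data, 4)       # unordered set of 4 strings
assert reassemble(sent, 4) == data       # order recovered from indices
```

The indices here add 8 redundant bits for 16 data bits; the point of the paper is that error correction over this channel needs only order-wise the classical amount of redundancy, not this index overhead on top of it.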

  • On Coding over Sliced Information
    arXiv: Information Theory, 2018
    Co-Authors: Jin Sima, Netanel Raviv, Jehoshua Bruck
    Abstract:

    The interest in channel models in which the data is sent as an unordered set of binary strings has increased lately, due to emerging applications in DNA storage, among others. In this paper we analyze the minimal redundancy of binary codes for this channel under substitution errors, and provide several constructions, some of which are shown to be asymptotically optimal. The surprising result in this paper is that while the information vector is sliced into a set of unordered strings, the amount of redundant bits that are required to correct errors is asymptotically equal to the amount required in the classical error-correcting paradigm.

Jiri Matas - One of the best experts on this subject based on the ideXlab platform.

  • Towards Complete Free-Form Reconstruction of Complex 3D Scenes from an Unordered Set of Uncalibrated Images
    2nd International Workshop on Statistical Methods in Video Processing, Prague, Czech Republic, May 16, 2004
    Co-Authors: Hugo Cornelius, Radim Sara, Daniel Martinec, Tomas Pajdla, Ondrej Chum, Jiri Matas
    Abstract:

    This paper describes a method for accurate dense reconstruction of a complex scene from a small set of high-resolution unorganized still images taken by a hand-held digital camera. A fully automatic data processing pipeline is proposed. Highly discriminative features are first detected in all images. Correspondences are then found in all image pairs by wide-baseline stereo matching and used in a scene structure and camera reconstruction step that can cope with occlusion and outliers. Image pairs suitable for dense matching are automatically selected, rectified and used in dense binocular matching. The dense point cloud obtained as the union of all pairwise reconstructions is fused by local approximation using oriented geometric primitives. For texturing, every primitive is mapped onto the image with the best resolution.
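The pair-selection step in the pipeline can be sketched with a toy model (hypothetical feature IDs standing in for real descriptors): each image is reduced to the set of features detected in it, correspondences between a pair are modelled as the intersection of those sets, and only pairs with enough matches are kept for dense binocular matching.

```python
# Toy sketch of selecting image pairs suitable for dense matching.
# Feature IDs are illustrative placeholders, not real descriptors.
from itertools import combinations

def select_pairs(features_by_image: dict, min_matches: int) -> list:
    """Return image pairs sharing at least `min_matches` features."""
    pairs = []
    for a, b in combinations(sorted(features_by_image), 2):
        shared = features_by_image[a] & features_by_image[b]
        if len(shared) >= min_matches:
            pairs.append((a, b, len(shared)))
    return pairs

images = {
    "img0": {1, 2, 3, 4, 5},
    "img1": {3, 4, 5, 6, 7},     # wide overlap with img0
    "img2": {9, 10, 11},         # almost no overlap with either
}
print(select_pairs(images, min_matches=3))   # → [('img0', 'img1', 3)]
```

In the actual pipeline the correspondences come from wide-baseline stereo matching and the threshold is a geometric-verification criterion, but the filtering logic has this shape.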

  • ECCV Workshop SMVP - Towards Complete Free-Form Reconstruction of Complex 3D Scenes from an Unordered Set of Uncalibrated Images
    Lecture Notes in Computer Science, 2004
    Co-Authors: Hugo Cornelius, Radim Sara, Daniel Martinec, Tomas Pajdla, Ondrej Chum, Jiri Matas
    Abstract:

    This paper describes a method for accurate dense reconstruction of a complex scene from a small set of high-resolution unorganized still images taken by a hand-held digital camera. A fully automatic data processing pipeline is proposed. Highly discriminative features are first detected in all images. Correspondences are then found in all image pairs by wide-baseline stereo matching and used in a scene structure and camera reconstruction step that can cope with occlusion and outliers. Image pairs suitable for dense matching are automatically selected, rectified and used in dense binocular matching. The dense point cloud obtained as the union of all pairwise reconstructions is fused by local approximation using oriented geometric primitives. For texturing, every primitive is mapped onto the image with the best resolution.

J. Grainger - One of the best experts on this subject based on the ideXlab platform.

  • An Adaptive Resonance Theory account of the implicit learning of orthographic word forms
    Journal of Physiology - Paris, 2010
    Co-Authors: Hervé Glotin, P. Warnier, F. Dandurand, S. Dufau, B. Lété, C. Touzet, J. C. Ziegler, J. Grainger
    Abstract:

    An Adaptive Resonance Theory (ART) network was trained to identify unique orthographic word forms. Each word input to the model was represented as an unordered set of ordered letter pairs (open bigrams) that implement a flexible prelexical orthographic code. The network learned to map this prelexical orthographic code onto unique word representations (orthographic word forms). The network was trained on a realistic corpus of reading textbooks used in French primary schools. The amount of training was strictly identical to children's exposure to reading material from grade 1 to grade 5. Network performance was examined at each grade level. Adjustment of the learning and vigilance parameters of the network allowed us to reproduce the developmental growth of word identification performance seen in children. The network exhibited a word frequency effect and was found to be sensitive to the order of presentation of word inputs, particularly with low-frequency words. These words were better learned with a randomized presentation order compared with the order of presentation in the school books. These results open up interesting perspectives for the application of ART networks in the study of the dynamics of learning to read.

  • An Adaptive Resonance Theory account of the implicit learning of orthographic word forms
    Journal of Physiology-paris, 2010
    Co-Authors: Hervé Glotin, P. Warnier, F. Dandurand, S. Dufau, B. Lété, C. Touzet, J. C. Ziegler, J. Grainger
    Abstract:

    An Adaptive Resonance Theory (ART) network was trained to identify unique orthographic word forms. Each word input to the model was represented as an unordered set of ordered letter pairs (open bigrams) that implement a flexible prelexical orthographic code. The network learned to map this prelexical orthographic code onto unique word representations (orthographic word forms). The network was trained on a realistic corpus of reading textbooks used in French primary schools. The amount of training was strictly identical to children's exposure to reading material from grade 1 to grade 5. Network performance was examined at each grade level. Adjustment of the learning and vigilance parameters of the network allowed us to reproduce the developmental growth of word identification performance seen in children. The network exhibited a word frequency effect and was found to be sensitive to the order of presentation of word inputs, particularly with low-frequency words. These words were better learned with a randomized presentation order compared with the order of presentation in the school books. These results open up interesting perspectives for the application of ART networks in the study of the dynamics of learning to read.
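The open-bigram input code described in these abstracts is easy to make concrete: a word becomes an unordered set of ordered letter pairs, every pair (w[i], w[j]) with i < j up to some maximum gap. The gap limit of 2 below is an illustrative assumption; the model's exact window may differ.

```python
# Minimal sketch of an "open bigram" prelexical code: a word is encoded
# as the unordered set of its ordered letter pairs within a small gap.
def open_bigrams(word: str, max_gap: int = 2) -> set:
    return {word[i] + word[j]
            for i in range(len(word))
            for j in range(i + 1, min(i + 1 + max_gap, len(word)))}

bg = open_bigrams("cart")
# the pairs are ordered ("ca" is present, "ac" is not), but the set
# itself is unordered: this is the flexible orthographic code
assert "ca" in bg and "ac" not in bg
print(sorted(bg))   # → ['ar', 'at', 'ca', 'cr', 'rt']
```

Such a code tolerates small letter transpositions (neighbouring words share most bigrams) while still distinguishing anagrams, which is what makes it a useful prelexical representation for the ART network to map onto word forms.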

Marcus Magnor - One of the best experts on this subject based on the ideXlab platform.

  • Graphics Interface - Photo zoom: high resolution from Unordered image collections
    2010
    Co-Authors: Martin Eisemann, Elmar Eisemann, Hans-peter Seidel, Marcus Magnor
    Abstract:

    We present a system to automatically construct high-resolution images from an unordered set of low-resolution photos. It consists of an automatic preprocessing step to establish correspondences between any given photos. The user may then choose one image and the algorithm automatically creates a higher-resolution result, several octaves larger, up to the desired resolution. Our recursive creation scheme allows the transfer of specific details at subpixel positions of the original image. It adds plausible details to regions not covered by any of the input images and eases the acquisition of large-scale panoramas spanning different resolution levels.

  • Sparse Bundle Adjustment Speedup Strategies
    2010
    Co-Authors: Christian Lipski, Denis Bose, Martin Eisemann, Kai Berger, Marcus Magnor
    Abstract:

    Over the past years, Structure-from-Motion calibration algorithms have become widely popular for many applications in computer graphics. From an unordered set of photographs, they manage to robustly estimate intrinsic and extrinsic camera parameters for each image. One major drawback is the quadratic computation time of existing algorithms. This paper presents different strategies to overcome this problem by only working on subsets of images and merging the results. A quantitative comparison of these strategies reveals the trade-off between accuracy and computation time.

  • SIGGRAPH Posters - Photo zoom: high resolution from Unordered image collections
    ACM SIGGRAPH 2010 Posters on - SIGGRAPH '10, 2010
    Co-Authors: Martin Eisemann, Elmar Eisemann, Hans-peter Seidel, Marcus Magnor
    Abstract:

    We present a system to automatically construct high-resolution images from an unordered set of low-resolution photos. It consists of an automatic preprocessing step to establish correspondences between any given photos. The user may then choose one image and the algorithm automatically creates a higher-resolution result, several octaves larger, up to the desired resolution. Our recursive creation scheme allows the transfer of specific details at subpixel positions of the original image. It adds plausible details to regions not covered by any of the input images and eases the acquisition of large-scale panoramas spanning different resolution levels.

Andreas Lenz - One of the best experts on this subject based on the ideXlab platform.

  • Coding Over Sets for DNA Storage
    IEEE Transactions on Information Theory, 2020
    Co-Authors: Andreas Lenz, Paul H. Siegel, Antonia Wachter-zeh, Eitan Yaakobi
    Abstract:

    In this paper we study error-correcting codes for the storage of data in synthetic deoxyribonucleic acid (DNA). We investigate a storage model where a data set is represented by an unordered set of $M$ sequences, each of length $L$. Errors within that model are a loss of whole sequences and point errors inside the sequences, such as insertions, deletions and substitutions. We derive Gilbert-Varshamov lower bounds and sphere-packing upper bounds on achievable cardinalities of error-correcting codes within this storage model. We further propose explicit code constructions that can correct errors in such a storage system and that can be encoded and decoded efficiently. Comparing the sizes of these codes to the upper bounds, we show that many of the constructions are close to optimal.
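The counting argument behind such bounds can be sketched numerically (small illustrative parameters, not the paper's derivation): the channel input is an unordered set of M distinct length-L sequences over the quaternary DNA alphabet, so at most log2 C(4^L, M) information bits fit in one set; anything beyond that in a naive M·2L-bit layout is unavoidable redundancy before any error correction is added.

```python
# Rough sketch of the set-counting bound: how many information bits fit
# in an unordered set of m distinct q-ary strings of length l.
from math import comb, log2

def set_capacity_bits(m: int, l: int, q: int = 4) -> float:
    """log2 of the number of unordered sets of m distinct q-ary strings."""
    return log2(comb(q ** l, m))

m, l = 4, 6
raw_bits = m * l * 2                 # naive: 2 bits per quaternary symbol
cap = set_capacity_bits(m, l)
print(round(raw_bits - cap, 2), "bits lost to unorderedness and distinctness")
```

For M much smaller than 4^L this gap is close to log2(M!), the cost of not knowing the order of the sequences; the paper's bounds then account for sequence loss and point errors on top of this baseline.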

  • Anchor-Based Correction of Substitutions in Indexed Sets
    arXiv: Information Theory, 2019
    Co-Authors: Andreas Lenz, Paul H. Siegel, Antonia Wachter-zeh, Eitan Yaakobi
    Abstract:

    Motivated by DNA-based data storage, we investigate a system where digital information is stored in an unordered set of several vectors over a finite alphabet. Each vector begins with a unique index that represents its position in the whole data set and does not contain data. This paper deals with the design of error-correcting codes for such indexed sets in the presence of substitution errors. We propose a construction that efficiently deals with the challenges that arise when designing codes for unordered sets. Using a novel mechanism, called anchoring, we show that it is possible to combat the ordering loss of sequences with only a small amount of redundancy, which allows the use of standard coding techniques, such as tensor-product codes, to correct errors within the sequences. We finally derive upper and lower bounds on the achievable redundancy of codes within the considered channel model and verify that our construction yields a redundancy that is close to the best achievable one. Our results surprisingly indicate that it requires less redundancy to correct errors in the indices than in the data part of vectors.
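A toy example shows the failure mode that motivates anchoring (this is not the paper's construction): each vector carries a unique index prefix, but a substitution inside an index can make two received vectors claim the same position, which a decoder can at least detect as a duplicate or missing index.

```python
# Toy sketch of index corruption in an indexed set: a substitution in
# an index prefix produces a duplicate/missing position at the decoder.
def index_collision(received: list, index_bits: int) -> bool:
    """True if some index was corrupted (duplicate or missing position)."""
    idx = sorted(int(v[:index_bits], 2) for v in received)
    return idx != list(range(len(received)))

ok      = ["00" + "1010", "01" + "0110", "10" + "0001", "11" + "1111"]
flipped = ["00" + "1010", "00" + "0110", "10" + "0001", "11" + "1111"]
print(index_collision(ok, 2))        # → False
print(index_collision(flipped, 2))   # → True  (index '01' became '00')
```

Detection alone does not say which of the colliding vectors is misplaced; the anchoring mechanism of the paper is what resolves such ambiguities with little redundancy.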

  • Coding over Sets for DNA Storage
    arXiv: Information Theory, 2018
    Co-Authors: Andreas Lenz, Paul H. Siegel, Antonia Wachter-zeh, Eitan Yaakobi
    Abstract:

    In this paper we study error-correcting codes for the storage of data in synthetic DNA. We investigate a storage model where a data set is represented by an unordered set of M sequences, each of length L. Errors within that model are losses of whole sequences and point errors inside the sequences, such as insertions, deletions and substitutions. We propose code constructions which can correct errors in such a storage system and which can be encoded and decoded efficiently. By deriving upper bounds on the cardinalities of these codes using sphere-packing arguments, we show that many of our codes are close to optimal.

  • ISIT - Coding over Sets for DNA Storage
    2018 IEEE International Symposium on Information Theory (ISIT), 2018
    Co-Authors: Andreas Lenz, Paul H. Siegel, Antonia Wachter-zeh, Eitan Yaakobi
    Abstract:

    In this paper, we study error-correcting codes for the storage of data in synthetic deoxyribonucleic acid (DNA). We investigate a storage model where data is represented by an unordered set of $M$ sequences, each of length $L$. Errors within that model are losses of whole sequences and point errors inside the sequences, such as substitutions, insertions and deletions. We propose code constructions which can correct these errors with efficient encoders and decoders. By deriving upper bounds on the cardinalities of these codes using sphere-packing arguments, we show that many of our codes are close to optimal.