Visual Pattern

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 276 Experts worldwide, ranked by the ideXlab platform

Michael Heisenberg - One of the best experts on this subject based on the ideXlab platform.

  • Visual Pattern memory without shape recognition
    Philosophical Transactions of the Royal Society B: Biological Sciences, 1995
    Co-Authors: M. Dill, Michael Heisenberg
    Abstract:

    Visual Pattern memory of Drosophila melanogaster at the torque meter is investigated with a new learning paradigm called novelty choice. In this procedure the fly is first exposed to four identical Patterns presented on the wall of the cylinder surrounding it. In the test it chooses between two pairs of Patterns, one new and one identical to the training Pattern. Flies show a lasting preference for the new figure. Figures presented during training are not recognized as familiar in the test if displayed (i) at a different height, (ii) at a different size, (iii) rotated, or (iv) after contrast reversal. No special invariance mechanisms are found. A pixel-by-pixel matching process is sufficient to explain the observed data. Minor transfer effects can be explained if a graded similarity function is assumed. Recognition depends upon the overlap between the stored template and the actual image. The similarity function is best described by the ratio of the area of overlap to the area of the actual image, and is independent of the geometrical properties of the employed figures. Visual Pattern memory at this basic level does not require the analysis of shape.
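The matching model described above lends itself to a short sketch (my own illustration, not the authors' code): a Pattern is a set of "on" pixel coordinates, and similarity is the area of overlap between the stored template and the actual image, divided by the area of the actual image.

```python
# Minimal sketch of the pixel-by-pixel matching model, assuming patterns
# can be represented as sets of "on" pixel coordinates.

def similarity(template: set, actual: set) -> float:
    """Ratio of the overlap area to the area of the actual image."""
    if not actual:
        return 0.0
    return len(template & actual) / len(actual)

# A 2x2 square figure stored as the template during training.
template = {(0, 0), (0, 1), (1, 0), (1, 1)}

# The same figure displayed two rows lower (different height).
shifted = {(2, 0), (2, 1), (3, 0), (3, 1)}

print(similarity(template, template))  # identical figure -> 1.0
print(similarity(template, shifted))   # displaced figure -> 0.0
```

The second call mirrors the finding above: without any invariance mechanism, a figure displayed at a different height shares no overlap with the stored template and is treated as novel.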

  • Visual Pattern recognition in Drosophila involves retinotopic matching
    Nature, 1993
    Co-Authors: Marcus Dill, Reinhard Wolf, Michael Heisenberg
    Abstract:

    Honeybees remember the shapes of flowers and are guided by Visual landmarks on their foraging trips1,2. How insects recognize Visual Patterns is poorly understood. Experiments suggest that they try to match retinotopically the incoming Visual Pattern with a previously stored memory image2–7. But bees can be conditioned to individual Pattern parameters such as orientation of contours, colour or size2,8–11. These and other results are difficult to reconcile with simple template matching. In such investigations, freely moving animals are observed; their behaviour and Visual input, therefore, are not well known. Mostly, processing strategies are inferred from stimulus design. We have studied Visual Pattern recognition with tethered flies (Drosophila melanogaster) in a flight simulator and report here that flies store Visual images at, or together with, fixed retinal positions and can retrieve them from there only5. Position invariance, an acknowledged property of human Pattern recognition, may not exist as a primary mechanism in insects.

Li Liu - One of the best experts on this subject based on the ideXlab platform.

  • Differential roles of the fan-shaped body and the ellipsoid body in Drosophila Visual Pattern memory
    Learning & Memory, 2009
    Co-Authors: Yufeng Pan, Yanqiong Zhou, Chao Guo, Haiyun Gong, Zhefeng Gong, Li Liu
    Abstract:

    The central complex is a prominent structure in the Drosophila brain. Visual learning experiments in the flight simulator, using flies with genetically altered brains, revealed that two groups of horizontal neurons in one of its substructures, the fan-shaped body, were required for Drosophila Visual Pattern memory. However, little is known about the role of other components of the central complex in Visual Pattern memory. Here we show that a small set of neurons in the ellipsoid body, another substructure of the central complex connected to the fan-shaped body, is also required for Visual Pattern memory. Localized expression of rutabaga adenylyl cyclase in either the fan-shaped body or the ellipsoid body is sufficient to rescue the memory defect of the rut2080 mutant. We then performed RNA interference of rutabaga in either structure and found that both were required for Visual Pattern memory. Additionally, we tested the rescued flies under several Visual Pattern parameters, such as size, contour orientation, and vertical compactness, and revealed differential roles of the fan-shaped body and the ellipsoid body in Visual Pattern memory. Our study defines a complex neural circuit in the central complex for Drosophila Visual Pattern memory.

  • Visual Pattern memory requires foraging function in the central complex of Drosophila
    Learning & Memory, 2008
    Co-Authors: Zhipeng Wang, Yufeng Pan, Zhefeng Gong, Huoqing Jiang, Lazaros Chatzimanolis, Jianhong Chang, Li Liu
    Abstract:

    The role of the foraging (for) gene, which encodes a cyclic guanosine-3',5'-monophosphate (cGMP)-dependent protein kinase (PKG), in food-search behavior in Drosophila has been intensively studied. However, its functions in other complex behaviors have not been well-characterized. Here, we show experimentally in Drosophila that the for gene is required in the operant Visual learning paradigm. Visual Pattern memory was normal in a natural variant rover (for(R)) but was impaired in another natural variant sitter (for(S)), which has a lower PKG level. Memory defects in for(S) flies could be rescued by either constitutive or adult-limited expression of for in the fan-shaped body. Interestingly, we showed that such rescue also occurred when for was expressed in the ellipsoid body. Additionally, expression of for in the fifth layer of the fan-shaped body restored sufficient memory for the Pattern parameter "elevation" but not for "contour orientation," whereas expression of for in the ellipsoid body restored sufficient memory for both parameters. Our study defines a Drosophila model for further understanding the role of cGMP-PKG signaling in associative learning/memory and the neural circuit underlying this for-dependent Visual Pattern memory.

Barbara Desalvo - One of the best experts on this subject based on the ideXlab platform.

  • Visual Pattern extraction using energy-efficient 2-PCM synapse neuromorphic architecture
    IEEE Transactions on Electron Devices, 2012
    Co-Authors: Olivier Bichler, Barbara Desalvo, Dominique Vuillaume, Manan Suri, Damien Querlioz, Christian Gamrat
    Abstract:

    We introduce a novel energy-efficient methodology, the “2-PCM Synapse”, to use phase-change memory (PCM) as synapses in large-scale neuromorphic systems. Our spiking neural network architecture exploits the gradual crystallization behavior of PCM devices to emulate both synaptic potentiation and synaptic depression. Unlike earlier attempts to implement a biological-like spike-timing-dependent plasticity learning rule with PCM, we use a simplified rule in which long-term potentiation and long-term depression can both be produced with a single invariant crystallizing pulse. Our architecture is simulated on a special-purpose event-based simulator, using a behavioral model for the PCM devices validated with electrical characterization. The system, comprising about 2 million synapses, learns directly from event-based dynamic vision sensors. When tested with real-life data, it is able to extract complex and overlapping temporally correlated features such as car trajectories on a freeway. Complete trajectories can be learned with a detection rate above 90%. The synaptic programming power consumption of the system during learning is estimated and could be as low as 100 nW for scaled-down PCM technology. Robustness to device variability is also demonstrated.
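The "2-PCM Synapse" scheme can be sketched as a toy behavioral model (my reading of the abstract, not the authors' simulator): each synapse pairs an LTP device with an LTD device, the effective weight is their conductance difference, and every programming event is the same small crystallizing pulse applied to one device or the other. The conductance bounds and step size below are made-up illustrative values.

```python
# Toy 2-PCM synapse: two phase-change devices whose conductance only rises
# under identical partial-crystallization pulses; potentiation pulses go to
# one device, depression pulses to the other. Bounds/step are illustrative.

G_MIN, G_MAX, STEP = 0.0, 1.0, 0.1

class TwoPCMSynapse:
    def __init__(self):
        self.g_ltp = G_MIN  # conductance of the potentiation device
        self.g_ltd = G_MIN  # conductance of the depression device

    @staticmethod
    def crystallize(g: float) -> float:
        # One invariant crystallizing pulse: conductance rises by a small
        # step until the device saturates.
        return min(g + STEP, G_MAX)

    def potentiate(self):
        self.g_ltp = self.crystallize(self.g_ltp)

    def depress(self):
        self.g_ltd = self.crystallize(self.g_ltd)

    @property
    def weight(self) -> float:
        # Effective weight read out as the conductance difference.
        return self.g_ltp - self.g_ltd

syn = TwoPCMSynapse()
for _ in range(3):
    syn.potentiate()
syn.depress()
print(round(syn.weight, 2))  # 3 LTP pulses minus 1 LTD pulse -> 0.2
```

Using the same pulse for both directions avoids the slow, energy-costly amorphization step during normal learning; a reset would only be needed once both devices saturate.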

  • Phase change memory as synapse for ultra-dense neuromorphic systems: Application to complex Visual Pattern extraction
    2011 International Electron Devices Meeting, 2011
    Co-Authors: Manan Suri, Véronique Sousa, Dominique Vuillaume, Luca Perniola, Olivier Bichler, Christian Gamrat, Olga Cueto, Damien Querlioz, Barbara Desalvo
    Abstract:

    We demonstrate a unique energy-efficient methodology for using Phase Change Memory (PCM) as a synapse in ultra-dense, large-scale neuromorphic systems. PCM devices with different chalcogenide materials were characterized to demonstrate synaptic behavior, and multi-physical simulations were used to interpret the results. We propose a special circuit architecture (“the 2-PCM synapse”) and read, write, and reset programming schemes suitable for the use of PCM in neural networks. A versatile behavioral model of PCM, which can be used for simulating large-scale neural systems, is introduced. A first demonstration of complex Visual Pattern extraction from real-world data using PCM synapses in a 2-layer spiking neural network (SNN) is shown. A system power analysis for different scaled PCM technologies is also provided.

Junsong Yuan - One of the best experts on this subject based on the ideXlab platform.

  • APSIPA - Common Visual Pattern discovery and search
    2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2017
    Co-Authors: Zhenzhen Wang, Jingjing Meng, Junsong Yuan
    Abstract:

    Automatically discovering common Visual Patterns from images and videos is a useful but challenging task. On the one hand, the definition of Visual Patterns is rather ambiguous: it refers to the spatial composition of frequently occurring Visual primitives, which correspond to local features, semantic Visual parts, or Visual objects. For example, the wheels and the body of a car can be seen as different Visual primitives, while the whole car can also be seen as an individual Visual primitive. On the other hand, there are large variations in Visual appearance and structure even within the same kind of Visual Pattern, which makes Visual Pattern discovery very challenging. However, since distinguishing different kinds of Visual Patterns from one another is fundamental to many tasks in computer vision, such as Pattern recognition/classification, object detection/localization, and content-based image search, many studies have been introduced in the literature to solve the problem of Visual Pattern discovery. In this paper, we revisit the representative studies on discovering Visual Patterns and discuss these methods from the view of local-feature-based and object-proposal-based Visual Patterns. Local-feature-based Visual Pattern discovery aims to mine Visual primitives that share a similar spatial layout, while object-proposal-based Visual Pattern discovery aims to mine similar semantic Patterns from object proposals that are likely to contain an entire object. Finally, the extensive applications of Visual Pattern discovery are presented.
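The local-feature-based view can be illustrated with a toy miner (hypothetical data and naming, not from the paper): visual primitives are quantized "visual words" at image coordinates, and a common Pattern is a word pair that re-occurs with the same spatial offset across images.

```python
# Toy local-feature-based pattern mining: count visual-word pairs that
# co-occur with a similar spatial offset. Data below is made up.
from collections import Counter
from itertools import permutations

def cooccurring_pairs(images, max_offset=2):
    """Count (word, word, dx, dy) tuples within a small spatial offset."""
    counts = Counter()
    for feats in images:  # feats: list of (word, x, y)
        for (w1, x1, y1), (w2, x2, y2) in permutations(feats, 2):
            dx, dy = x2 - x1, y2 - y1
            if abs(dx) <= max_offset and abs(dy) <= max_offset:
                counts[(w1, w2, dx, dy)] += 1
    return counts

# "wheel" sits below "body" with the same offset in both images, so that
# pair emerges as a frequent spatial composition; background words do not.
images = [
    [("body", 5, 2), ("wheel", 5, 4), ("tree", 0, 0)],
    [("body", 8, 3), ("wheel", 8, 5), ("sky", 1, 1)],
]
counts = cooccurring_pairs(images)
print(counts[("body", "wheel", 0, 2)])  # re-occurs in both images -> 2
```

Real methods must of course handle noisy features and deformable layouts; this sketch only shows why spatial composition, not word frequency alone, defines a Pattern.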

  • Visual Pattern discovery in image and video data: a brief survey
    Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 2013
    Co-Authors: Hongxing Wang, Gangqiang Zhao, Junsong Yuan
    Abstract:

    In image and video data, a Visual Pattern refers to a re-occurring composition of Visual primitives. Such Visual Patterns capture the essence of image and video data and convey rich information. However, unlike frequent Patterns in transaction data, there are considerable Visual content variations and complex spatial structures among Visual primitives, which make effective exploration of Visual Patterns a challenging task. Many methods have been proposed to address the problem of Visual Pattern discovery during the past decade. In this article, we provide a review of the major progress in Visual Pattern discovery. We categorize the existing methods into two groups: bottom-up Pattern discovery and top-down Pattern modeling. Bottom-up Pattern discovery starts with unordered Visual primitives and merges them until larger Visual Patterns are found. In contrast, the top-down method starts with the modeling of Visual primitive compositions and then infers the Pattern discovery result. A summary of related applications is also presented. At the end we identify open issues for future research. WIREs Data Mining Knowl Discov 2014, 4:24–37. doi: 10.1002/widm.1110
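The bottom-up strategy in the survey can be caricatured in a few lines (my own toy, not a method from the article): start from unordered primitives and greedily merge spatially close ones until larger groups, standing in for Patterns, emerge.

```python
# Toy bottom-up merging: union-find over 2-D primitives, joining any two
# that lie within a merge radius. Points and radius are illustrative.

def bottom_up_merge(points, radius=1.5):
    """Group primitives whose pairwise distance is within radius."""
    parent = list(range(len(points)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i, (xi, yi) in enumerate(points):
        for j, (xj, yj) in enumerate(points):
            if i < j and (xi - xj) ** 2 + (yi - yj) ** 2 <= radius ** 2:
                parent[find(i)] = find(j)  # merge the two groups

    groups = {}
    for i in range(len(points)):
        groups.setdefault(find(i), []).append(points[i])
    return list(groups.values())

# Two tight clusters of primitives merge into two larger "patterns".
primitives = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10)]
print(len(bottom_up_merge(primitives)))  # -> 2
```

A top-down method would instead posit a model of how primitives compose and fit it to the data; the contrast is between growing structure from below and inferring it from above.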

  • Spatial random partition for common Visual Pattern discovery
    International Conference on Computer Vision, 2007
    Co-Authors: Junsong Yuan
    Abstract:

    Automatically discovering common Visual Patterns from a collection of images is an interesting yet challenging task, in part because it is computationally prohibitive. Although representing images as Visual documents based on discrete Visual words offers computational advantages, the performance of these word-based methods largely depends on the quality of the Visual word dictionary. This paper presents a novel approach based on spatial random partition and fast word-free image matching. Represented as a set of continuous Visual primitives, each image is randomly partitioned many times to form a pool of subimages. Each subimage is queried and matched against the pool, and common Patterns can then be localized by aggregating the set of matched subimages. The asymptotic properties and the complexity of the proposed method are given in this paper, along with many real experiments. Both theoretical studies and experimental results show its advantages.
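The partition-match-aggregate loop above can be sketched on synthetic data (my simplification, not the paper's algorithm): each image is randomly cut into subimages many times, subimages that match across images vote for the pixels they cover, and the common Pattern is localized where votes accumulate.

```python
# Toy spatial random partition. Two synthetic 8x8 "images" share a bright
# 2x2 patch at the same location; everything else is random noise.
import random

W, H = 8, 8
PATCH = {(x, y) for x in (3, 4) for y in (3, 4)}  # the shared pattern

def make_image(seed):
    rng = random.Random(seed)
    return [[9 if (x, y) in PATCH else rng.randint(0, 1)
             for x in range(W)] for y in range(H)]

def random_partition(rng):
    # One vertical and one horizontal cut -> four disjoint subimage boxes.
    cx, cy = rng.randint(1, W - 1), rng.randint(1, H - 1)
    return [(0, 0, cx, cy), (cx, 0, W, cy), (0, cy, cx, H), (cx, cy, W, H)]

def box_matches(img_a, img_b, box):
    # Crude stand-in for word-free subimage matching: the region "matches"
    # if both images show the bright pattern inside it.
    x0, y0, x1, y1 = box
    cells = [(x, y) for y in range(y0, y1) for x in range(x0, x1)]
    return (any(img_a[y][x] == 9 for x, y in cells) and
            any(img_b[y][x] == 9 for x, y in cells))

def localize(img_a, img_b, rounds=200):
    rng = random.Random(0)
    votes = [[0] * W for _ in range(H)]
    for _ in range(rounds):
        for box in random_partition(rng):
            if box_matches(img_a, img_b, box):
                x0, y0, x1, y1 = box
                for y in range(y0, y1):
                    for x in range(x0, x1):
                        votes[y][x] += 1  # matched subimage votes
    return votes

votes = localize(make_image(1), make_image(2), rounds=200)
print(votes[3][3])  # a pattern pixel is in a matched box every round -> 200
```

Pixels inside the common patch are covered by a matched subimage in every round, while background pixels are only covered when a random cut happens to group them with the patch, so the vote map peaks on the Pattern.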

Christian Gamrat - One of the best experts on this subject based on the ideXlab platform.

  • Visual Pattern extraction using energy-efficient 2-PCM synapse neuromorphic architecture
    IEEE Transactions on Electron Devices, 2012
    Co-Authors: Olivier Bichler, Barbara Desalvo, Dominique Vuillaume, Manan Suri, Damien Querlioz, Christian Gamrat
    Abstract:

    We introduce a novel energy-efficient methodology, the “2-PCM Synapse”, to use phase-change memory (PCM) as synapses in large-scale neuromorphic systems. Our spiking neural network architecture exploits the gradual crystallization behavior of PCM devices to emulate both synaptic potentiation and synaptic depression. Unlike earlier attempts to implement a biological-like spike-timing-dependent plasticity learning rule with PCM, we use a simplified rule in which long-term potentiation and long-term depression can both be produced with a single invariant crystallizing pulse. Our architecture is simulated on a special-purpose event-based simulator, using a behavioral model for the PCM devices validated with electrical characterization. The system, comprising about 2 million synapses, learns directly from event-based dynamic vision sensors. When tested with real-life data, it is able to extract complex and overlapping temporally correlated features such as car trajectories on a freeway. Complete trajectories can be learned with a detection rate above 90%. The synaptic programming power consumption of the system during learning is estimated and could be as low as 100 nW for scaled-down PCM technology. Robustness to device variability is also demonstrated.

  • Phase change memory as synapse for ultra-dense neuromorphic systems: Application to complex Visual Pattern extraction
    2011 International Electron Devices Meeting, 2011
    Co-Authors: Manan Suri, Véronique Sousa, Dominique Vuillaume, Luca Perniola, Olivier Bichler, Christian Gamrat, Olga Cueto, Damien Querlioz, Barbara Desalvo
    Abstract:

    We demonstrate a unique energy-efficient methodology for using Phase Change Memory (PCM) as a synapse in ultra-dense, large-scale neuromorphic systems. PCM devices with different chalcogenide materials were characterized to demonstrate synaptic behavior, and multi-physical simulations were used to interpret the results. We propose a special circuit architecture (“the 2-PCM synapse”) and read, write, and reset programming schemes suitable for the use of PCM in neural networks. A versatile behavioral model of PCM, which can be used for simulating large-scale neural systems, is introduced. A first demonstration of complex Visual Pattern extraction from real-world data using PCM synapses in a 2-layer spiking neural network (SNN) is shown. A system power analysis for different scaled PCM technologies is also provided.