Laziness

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 6780 Experts worldwide ranked by ideXlab platform

Hugo Mercier - One of the best experts on this subject based on the ideXlab platform.

  • the selective Laziness of reasoning
    Cognitive Science, 2016
    Co-Authors: Emmanuel Trouche, Petter Johansson, Lars Hall, Hugo Mercier
    Abstract:

    Reasoning research suggests that people use more stringent criteria when they evaluate others' arguments than when they produce arguments themselves. To demonstrate this "selective Laziness," we used a choice blindness manipulation. In two experiments, participants had to produce a series of arguments in response to reasoning problems, and they were then asked to evaluate other people's arguments about the same problems. Unknown to the participants, in one of the trials, they were presented with their own argument as if it was someone else's. Among those participants who accepted the manipulation and thus thought they were evaluating someone else's argument, more than half (56% and 58%) rejected the arguments that were in fact their own. Moreover, participants were more likely to reject their own arguments for invalid than for valid answers. This demonstrates that people are more critical of other people's arguments than of their own, without being overly critical: They are better able to tell valid from invalid arguments when the arguments are someone else's rather than their own.

  • the selective Laziness of reasoning
    Social Science Research Network, 2015
    Co-Authors: Emmanuel Trouche, Petter Johansson, Lars Hall, Hugo Mercier
    Abstract:

Reasoning research suggests that people use more stringent criteria when they evaluate others’ arguments than when they produce arguments themselves. To demonstrate this ‘selective Laziness,’ we used a choice blindness manipulation. In two experiments, participants had to produce a series of arguments in answer to reasoning problems, and they were then asked to evaluate other people’s arguments about the same problems. Unknown to the participants, in one of the trials, they were presented with their own argument as if it was someone else’s. Among those participants who accepted the manipulation and thus thought they were evaluating someone else’s argument, more than half (56% and 58%) rejected the arguments that were in fact their own. Moreover, participants were more likely to reject their own arguments for invalid than for valid answers. This demonstrates that people are more critical of other people’s arguments than of their own, without being overly critical: they are better able to tell valid from invalid arguments when the arguments are someone else’s than their own.

Siddhartha S Srinivasa - One of the best experts on this subject based on the ideXlab platform.

  • a bayesian active learning approach to adaptive motion planning
    ISRR, 2020
    Co-Authors: Sanjiban Choudhury, Siddhartha S Srinivasa
    Abstract:

    An important requirement for a robot to operate reliably in the real world is a robust motion planning module. Current planning systems do not have consistent performance across all situations a robot encounters. We are interested in planning algorithms that adapt during a planning cycle by actively inferring the structure of the valid configuration space, and focusing on potentially good solutions. Consider the problem of evaluating edges on a graph to discover a good path. Edges are not alike in value—some are important, others are informative. Important edges have a lot of good paths flowing through them. Informative edges, on being evaluated, affect the likelihood of other neighboring edges being valid. Evaluating edges is expensive, both for robots with complex geometries like robot arms, and for robots with limited onboard computation like UAVs. Until now, we have addressed this challenge via Laziness, deferring edge evaluation until absolutely necessary, with the hope that edges turn out to be valid. Our key insight is that we can do more than passive Laziness—we can actively probe for information. We draw a novel connection between motion planning and Bayesian active learning. By leveraging existing active learning algorithms, we derive efficient edge evaluation policies which we apply on a spectrum of real world problems. We discuss insights from these preliminary results and potential research questions whose study may prove fruitful for both disciplines.
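The edge-selection idea above can be sketched as a toy belief update: each edge carries a Bernoulli belief of being valid, and an "informative" edge is one whose evaluation shifts beliefs about correlated neighbors. The entropy criterion, the `strength` parameter, and the linear neighbor update below are illustrative assumptions, not the paper's actual policies (which are derived from Bayesian active learning algorithms).

```python
import math

def entropy(p):
    """Shannon entropy of a Bernoulli(p) validity belief, in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def pick_edge_to_probe(beliefs):
    """Active selection: probe the edge whose validity is most uncertain."""
    return max(beliefs, key=lambda e: entropy(beliefs[e]))

def update_beliefs(beliefs, neighbors, edge, observed_valid, strength=0.3):
    """After probing `edge`, nudge correlated neighbors toward the observation."""
    beliefs[edge] = 1.0 if observed_valid else 0.0
    target = beliefs[edge]
    for n in neighbors.get(edge, []):
        if beliefs[n] not in (0.0, 1.0):   # leave already-evaluated edges alone
            beliefs[n] += strength * (target - beliefs[n])
    return beliefs
```

Probing the maximum-entropy edge is a crude stand-in for an information-gain objective; the point is only that an active planner spends its expensive evaluations where they tell it the most.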

  • the provable virtue of Laziness in motion planning
    International Joint Conference on Artificial Intelligence, 2019
    Co-Authors: Nika Haghtalab, Simon Mackenzie, Ariel D Procaccia, Oren Salzman, Siddhartha S Srinivasa
    Abstract:

    The Lazy Shortest Path (LazySP) class consists of motion-planning algorithms that only evaluate edges along shortest paths between the source and target. These algorithms were designed to minimize the number of edge evaluations in settings where edge evaluation dominates the running time of the algorithm; but how close to optimal are LazySP algorithms in terms of this objective? Our main result is an analytical upper bound, in a probabilistic model, on the number of edge evaluations required by LazySP algorithms; a matching lower bound shows that these algorithms are asymptotically optimal in the worst case.
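The LazySP pattern can be sketched in Python: plan optimistically over all unevaluated edges, evaluate edges only along the current candidate shortest path, and replan whenever one turns out invalid. This is an illustrative sketch under assumptions of our own (a directed weighted graph as nested dicts, and a forward "first unevaluated edge" selector), not the paper's implementation; LazySP admits several edge selectors.

```python
import heapq

def shortest_path(graph, source, target):
    """Dijkstra over the current (optimistic) graph; returns a path or None."""
    dist = {source: 0}
    prev = {}
    pq = [(0, source)]
    seen = set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == target:
            path = [u]
            while path[-1] != source:
                path.append(prev[path[-1]])
            return path[::-1]
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    return None

def lazy_sp(graph, source, target, is_valid):
    """LazySP-style search: evaluate edges only on candidate shortest paths.

    Mutates `graph` by deleting edges found invalid. Returns the found path
    (or None) and the number of edge evaluations performed.
    """
    evaluated = {}   # (u, v) -> bool, filled in lazily
    evaluations = 0
    while True:
        path = shortest_path(graph, source, target)
        if path is None:
            return None, evaluations
        pending = [(u, v) for u, v in zip(path, path[1:])
                   if (u, v) not in evaluated]
        if not pending:
            return path, evaluations     # every edge on this path is known valid
        u, v = pending[0]
        ok = is_valid(u, v)              # the expensive check (e.g. collision test)
        evaluated[(u, v)] = ok
        evaluations += 1
        if not ok:
            del graph[u][v]              # invalid edge: remove and replan
```

With edge evaluation dominating the running time, the measure of interest is exactly the `evaluations` counter this sketch returns.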

Petter Johansson - One of the best experts on this subject based on the ideXlab platform.

  • the selective Laziness of reasoning
    Cognitive Science, 2016
    Co-Authors: Emmanuel Trouche, Petter Johansson, Lars Hall, Hugo Mercier
    Abstract:

    Reasoning research suggests that people use more stringent criteria when they evaluate others' arguments than when they produce arguments themselves. To demonstrate this "selective Laziness," we used a choice blindness manipulation. In two experiments, participants had to produce a series of arguments in response to reasoning problems, and they were then asked to evaluate other people's arguments about the same problems. Unknown to the participants, in one of the trials, they were presented with their own argument as if it was someone else's. Among those participants who accepted the manipulation and thus thought they were evaluating someone else's argument, more than half (56% and 58%) rejected the arguments that were in fact their own. Moreover, participants were more likely to reject their own arguments for invalid than for valid answers. This demonstrates that people are more critical of other people's arguments than of their own, without being overly critical: They are better able to tell valid from invalid arguments when the arguments are someone else's rather than their own.

  • the selective Laziness of reasoning
    Social Science Research Network, 2015
    Co-Authors: Emmanuel Trouche, Petter Johansson, Lars Hall, Hugo Mercier
    Abstract:

Reasoning research suggests that people use more stringent criteria when they evaluate others’ arguments than when they produce arguments themselves. To demonstrate this ‘selective Laziness,’ we used a choice blindness manipulation. In two experiments, participants had to produce a series of arguments in answer to reasoning problems, and they were then asked to evaluate other people’s arguments about the same problems. Unknown to the participants, in one of the trials, they were presented with their own argument as if it was someone else’s. Among those participants who accepted the manipulation and thus thought they were evaluating someone else’s argument, more than half (56% and 58%) rejected the arguments that were in fact their own. Moreover, participants were more likely to reject their own arguments for invalid than for valid answers. This demonstrates that people are more critical of other people’s arguments than of their own, without being overly critical: they are better able to tell valid from invalid arguments when the arguments are someone else’s than their own.

Emmanuel Trouche - One of the best experts on this subject based on the ideXlab platform.

  • the selective Laziness of reasoning
    Cognitive Science, 2016
    Co-Authors: Emmanuel Trouche, Petter Johansson, Lars Hall, Hugo Mercier
    Abstract:

    Reasoning research suggests that people use more stringent criteria when they evaluate others' arguments than when they produce arguments themselves. To demonstrate this "selective Laziness," we used a choice blindness manipulation. In two experiments, participants had to produce a series of arguments in response to reasoning problems, and they were then asked to evaluate other people's arguments about the same problems. Unknown to the participants, in one of the trials, they were presented with their own argument as if it was someone else's. Among those participants who accepted the manipulation and thus thought they were evaluating someone else's argument, more than half (56% and 58%) rejected the arguments that were in fact their own. Moreover, participants were more likely to reject their own arguments for invalid than for valid answers. This demonstrates that people are more critical of other people's arguments than of their own, without being overly critical: They are better able to tell valid from invalid arguments when the arguments are someone else's rather than their own.

  • the selective Laziness of reasoning
    Social Science Research Network, 2015
    Co-Authors: Emmanuel Trouche, Petter Johansson, Lars Hall, Hugo Mercier
    Abstract:

Reasoning research suggests that people use more stringent criteria when they evaluate others’ arguments than when they produce arguments themselves. To demonstrate this ‘selective Laziness,’ we used a choice blindness manipulation. In two experiments, participants had to produce a series of arguments in answer to reasoning problems, and they were then asked to evaluate other people’s arguments about the same problems. Unknown to the participants, in one of the trials, they were presented with their own argument as if it was someone else’s. Among those participants who accepted the manipulation and thus thought they were evaluating someone else’s argument, more than half (56% and 58%) rejected the arguments that were in fact their own. Moreover, participants were more likely to reject their own arguments for invalid than for valid answers. This demonstrates that people are more critical of other people’s arguments than of their own, without being overly critical: they are better able to tell valid from invalid arguments when the arguments are someone else’s than their own.

Stephen Chang - One of the best experts on this subject based on the ideXlab platform.

  • profiling for Laziness
    Symposium on Principles of Programming Languages, 2014
    Co-Authors: Stephen Chang, Matthias Felleisen
    Abstract:

While many programmers appreciate the benefits of lazy programming at an abstract level, determining which parts of a concrete program to evaluate lazily poses a significant challenge for most of them. Over the past thirty years, experts have published numerous papers on the problem, but developing this level of expertise requires a significant amount of experience. We present a profiling-based technique that captures and automates this expertise for the insertion of Laziness annotations into strict programs. To make this idea precise, we show how to equip a formal semantics with a metric that measures waste in an evaluation. Then we explain how to implement this metric as a dynamic profiling tool that suggests where to insert Laziness into a program. Finally, we present evidence that our profiler's suggestions either match or improve on an expert's use of Laziness in a range of real-world applications.
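The kind of waste such a profiler measures can be illustrated in Python, where a generator plays the role of a Laziness annotation: eager evaluation performs work whose results are never demanded, and making the producer lazy eliminates it. The call-counting harness below is an illustrative stand-in for the paper's formal waste metric, not its implementation.

```python
import itertools

def expensive(x):
    """Stand-in for costly work; counts its calls so waste is measurable."""
    expensive.calls += 1
    return x * x
expensive.calls = 0

# Eager: computes all 1000 results even though only 3 are consumed.
eager = [expensive(i) for i in range(1000)]
first_three_eager = eager[:3]
eager_calls = expensive.calls      # 1000 calls, 997 of them wasted

expensive.calls = 0
# Lazy: a generator delays each call until its result is actually demanded.
lazy = (expensive(i) for i in range(1000))
first_three_lazy = list(itertools.islice(lazy, 3))
lazy_calls = expensive.calls       # only the 3 demanded calls happen
```

A profiler in this spirit would observe that 997 of the eager calls produce values no one consumes, and suggest annotating the producer as lazy.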

  • Laziness by need
    European Symposium on Programming, 2013
    Co-Authors: Stephen Chang
    Abstract:

Lazy functional programming has many benefits that strict functional languages can simulate via lazy data constructors. In recognition, ML, Scheme, and other strict functional languages have supported lazy stream programming with delay and force for several decades. Unfortunately, the manual insertion of delay and force can be tedious and error-prone. We present a semantics-based refactoring that helps strict programmers manage manual lazy programming. The refactoring uses a static analysis to identify where additional delays and forces might be needed to achieve the desired simplification and performance benefits, once the programmer has added the initial lazy data constructors. The paper presents a correctness argument for the underlying transformations and some preliminary experiences with a prototype tool implementation.
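The delay/force idiom the abstract refers to can be sketched in Python: delay wraps an expression in a memoizing thunk (a promise), and force evaluates it at most once. The `Promise` class and the stream helpers below are illustrative; they mimic Scheme's primitives rather than reproduce the paper's refactoring.

```python
class Promise:
    """A memoizing thunk: Scheme-style delay/force in Python."""
    def __init__(self, thunk):
        self._thunk = thunk
        self._done = False
        self._value = None

    def force(self):
        if not self._done:
            self._value = self._thunk()
            self._done = True
            self._thunk = None   # drop the closure once evaluated
        return self._value

def delay(thunk):
    """Wrap a zero-argument callable in a promise, deferring its evaluation."""
    return Promise(thunk)

# A lazy stream built from delayed tails, as strict languages simulate laziness.
def integers_from(n):
    return (n, delay(lambda: integers_from(n + 1)))

def take(stream, k):
    """Demand the first k elements of a (head, delayed-tail) stream."""
    out = []
    while k > 0:
        head, tail = stream
        out.append(head)
        stream = tail.force()
        k -= 1
    return out
```

The tedium the paper targets is visible even here: forget a `delay` and the stream recurses forever eagerly; forget a `force` and the program manipulates promises where it expected values.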