Automated Decision

The Experts below are selected from a list of 42,327 Experts worldwide, ranked by the ideXlab platform

Andreas Krause - One of the best experts on this subject based on the ideXlab platform.

  • Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making
    arXiv: Artificial Intelligence, 2018
    Co-Authors: Hoda Heidari, Claudio Ferrari, Krishna P. Gummadi, Andreas Krause
    Abstract:

    We draw attention to an important, yet largely overlooked aspect of evaluating fairness for Automated Decision making systems---namely risk and welfare considerations. Our proposed family of measures corresponds to the long-established formulations of cardinal social welfare in economics, and is justified by the Rawlsian conception of fairness behind a veil of ignorance. The convex formulation of our welfare-based measures of fairness allows us to integrate them as a constraint into any convex loss minimization pipeline. Our empirical analysis reveals interesting trade-offs between our proposal and (a) prediction accuracy, (b) group discrimination, and (c) Dwork et al.'s notion of individual fairness. Furthermore, and perhaps most importantly, our work provides both heuristic justification and empirical evidence suggesting that a lower bound on our measures often leads to bounded inequality in algorithmic outcomes, hence presenting the first computationally feasible mechanism for bounding individual-level inequality.

  • NeurIPS - Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making
    2018
    Co-Authors: Hoda Heidari, Claudio Ferrari, Krishna P. Gummadi, Andreas Krause
    Abstract:

    We draw attention to an important, yet largely overlooked aspect of evaluating fairness for Automated Decision making systems---namely risk and welfare considerations. Our proposed family of measures corresponds to the long-established formulations of cardinal social welfare in economics, and is justified by the Rawlsian conception of fairness behind a veil of ignorance. The convex formulation of our welfare-based measures of fairness allows us to integrate them as a constraint into any convex loss minimization pipeline. Our empirical analysis reveals interesting trade-offs between our proposal and (a) prediction accuracy, (b) group discrimination, and (c) Dwork et al.'s notion of individual fairness. Furthermore, and perhaps most importantly, our work provides both heuristic justification and empirical evidence suggesting that a lower bound on our measures often leads to bounded inequality in algorithmic outcomes, hence presenting the first computationally feasible mechanism for bounding individual-level inequality.
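
The abstracts above describe folding a welfare-based fairness measure into a convex loss minimization pipeline as a lower-bound constraint. The sketch below illustrates that general pattern only; the per-individual benefit (taken here as the model score), the concave welfare function (a CARA-style utility), and the threshold TAU are illustrative assumptions, not the measures actually defined by Heidari et al.

```python
# Hedged sketch: a convex loss minimized subject to a lower bound on a
# concave "welfare" of per-individual benefits. All modeling choices below
# (benefit = model score, CARA utility, TAU) are assumptions for illustration.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) + 0.3 * rng.normal(size=n) > 0).astype(float)

theta = cp.Variable(d)

# Standard convex logistic loss (the "convex loss minimization pipeline").
loss = cp.sum(cp.logistic(X @ theta) - cp.multiply(y, X @ theta)) / n

# Illustrative per-individual benefit and concave (risk-averse) welfare.
benefit = X @ theta                        # affine in theta
welfare = cp.sum(-cp.exp(-benefit)) / n    # concave, so the constraint stays convex

TAU = -0.5  # assumed lower bound on average welfare
problem = cp.Problem(cp.Minimize(loss), [welfare >= TAU])
problem.solve()
print("loss:", problem.value, "welfare:", welfare.value)
```

Because the welfare term is concave in theta, the lower-bound constraint keeps the problem convex, which is the property the abstract relies on to plug the measure into an existing training pipeline.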

Lilian Edwards - One of the best experts on this subject based on the ideXlab platform.

  • Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on Automated Decision-making and profiling
    Computer Law & Security Review, 2018
    Co-Authors: Michael Veale, Lilian Edwards
    Abstract:

    The Article 29 Data Protection Working Party’s new draft guidance on Automated Decision-making and profiling seeks to clarify the European data protection (DP) law’s little-used right to prevent Automated Decision-making, as well as the provisions around profiling more broadly, in the run-up to the General Data Protection Regulation. In this paper, we analyse these new guidelines in the context of recent scholarly debates and technological concerns. They foray into the less-trodden areas of bias and non-discrimination, the significance of advertising, the nature of “solely” Automated Decisions, impacts upon groups and the inference of special categories of data — at times, appearing more to be making or extending rules than to be interpreting them. At the same time, they provide only partial clarity — and perhaps even some extra confusion — around both the much discussed “right to an explanation” and the apparent prohibition on significant Automated Decisions concerning children. The Working Party appear to feel less mandated to adjudicate in these conflicts between the recitals and the enacting articles than to explore altogether new avenues. Nevertheless, the directions they choose to explore are particularly important ones for the future governance of machine learning and artificial intelligence in Europe and beyond.

  • Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling
    2017
    Co-Authors: Michael Veale, Lilian Edwards
    Abstract:

    Cite as: Michael Veale and Lilian Edwards, 'Clarity, Surprises, and Further Questions in the Article 29 Working Party Draft Guidance on Automated Decision-Making and Profiling' (forthcoming) Computer Law and Security Review.

    The Article 29 Data Protection Working Party’s new draft guidance on Automated Decision-making and profiling seeks to clarify the European data protection (DP) law’s little-used right to prevent Automated Decision-making, as well as the provisions around profiling more broadly, in the run-up to the General Data Protection Regulation. In this paper, we analyse these new guidelines in the context of recent scholarly debates and technological concerns. They foray into the less-trodden areas of bias and non-discrimination, the significance of advertising, the nature of “solely” Automated Decisions, impacts upon groups and the inference of special categories of data — at times, appearing more to be making or extending rules than to be interpreting them. At the same time, they provide only partial clarity — and perhaps even some extra confusion — around both the much discussed “right to an explanation” and the apparent prohibition on significant Automated Decisions concerning children. The Working Party appear to feel less mandated to adjudicate in these conflicts between the recitals and the enacting articles than to explore altogether new avenues. Nevertheless, the directions they choose to explore are particularly important ones for the future governance of machine learning and artificial intelligence in Europe and beyond.

Dietrich Manzey - One of the best experts on this subject based on the ideXlab platform.

  • Human Performance Consequences of Automated Decision Aids: The Impact of Time Pressure.
    Human Factors, 2020
    Co-Authors: Tobias Rieger, Dietrich Manzey
    Abstract:

    Objective: The study addresses the impact of time pressure on human interactions with Automated Decision support systems (DSSs) and related performance consequences. Background: When humans interact with ...

  • Human performance consequences of Automated Decision aids in states of fatigue
    Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 2009
    Co-Authors: Dietrich Manzey, Juliane Reichenbach, Linda Onnasch
    Abstract:

    The present study investigates how human performance consequences of Automated Decision aids are moderated by the operator's performance state and the aid's level of automation. Participants performed a simulated supervisory control task with one of two Decision aids, which provided different degrees of support for fault identification and management. One session took place during the day, another during the night, after a prolonged waking phase of more than 20 hours. Results show that both primary and secondary task performance benefit from Automated support compared to manual performance. During the night, participants supported by the aid with the higher level of automation were better able to maintain a high level of performance. Clear evidence for automation bias was found, but only during the day session. Automation verification was performed more carefully during the night, indicating less complacent behavior when operators used the Decision aids in a state of sleepiness and fatigue.

  • Misuse of Automated Decision aids: Complacency, automation bias and the impact of training experience
    International Journal of Human-Computer Studies, 2008
    Co-Authors: J. Elin Bahner, Anke-dorothea Hüper, Dietrich Manzey
    Abstract:

    The present study investigates automation misuse based on complacency and automation bias in interacting with a Decision aid in a process control system. The effect of a preventive training intervention, which includes exposing participants to rare automation failures, is examined. Complacency is reflected in inappropriate checking and monitoring of Automated functions. In interaction with Automated Decision aids, complacency might result in commission errors, i.e., following automatically generated recommendations even though they are false. Yet, empirical evidence proving this kind of relationship is still lacking. A laboratory experiment (N=24) was conducted using a process control simulation. An Automated Decision aid provided advice for fault diagnosis and management. Complacency was directly measured by the participants' information sampling behavior, i.e., the amount of information sampled in order to verify the Automated recommendations. Possible commission errors were assessed when the aid provided false recommendations. The results provide clear evidence for complacency, reflected in insufficient verification of the automation, while commission errors were associated with high levels of complacency. Hence, commission errors seem to be a possible, albeit not inevitable, consequence of complacency. Furthermore, exposing operators to automation failures during training significantly decreased complacency and thus represents a suitable means to reduce this risk, even though it might not eliminate it completely. Potential applications of this research include the design of training protocols to prevent automation misuse in interaction with Automated Decision aids.
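
The two Manzey studies above operationalize complacency as insufficient verification (information sampling) of the aid's recommendations, and automation bias as commission errors, i.e., following recommendations that are false. The snippet below is a minimal sketch of how such measures could be computed from per-trial logs; the field names, the example data, and the notion of a fixed number of "required checks" are assumptions, not the studies' actual protocol.

```python
# Hedged sketch: verification (complacency) and commission-error (automation
# bias) rates computed from hypothetical per-trial logs. Field names and
# example values are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Trial:
    checks_sampled: int      # information items the operator actually inspected
    checks_required: int     # items needed to fully verify the aid's advice
    aid_was_wrong: bool      # the Decision aid gave a false recommendation
    followed_aid: bool       # the operator accepted the recommendation

def verification_rate(trials):
    """Share of available verification information actually sampled (lower = more complacent)."""
    sampled = sum(t.checks_sampled for t in trials)
    required = sum(t.checks_required for t in trials)
    return sampled / required if required else 1.0

def commission_error_rate(trials):
    """Share of false recommendations that were nevertheless followed."""
    false_trials = [t for t in trials if t.aid_was_wrong]
    if not false_trials:
        return 0.0
    return sum(t.followed_aid for t in false_trials) / len(false_trials)

trials = [
    Trial(checks_sampled=2, checks_required=5, aid_was_wrong=False, followed_aid=True),
    Trial(checks_sampled=1, checks_required=5, aid_was_wrong=True,  followed_aid=True),
    Trial(checks_sampled=5, checks_required=5, aid_was_wrong=True,  followed_aid=False),
]
print(verification_rate(trials), commission_error_rate(trials))
```

A verification rate well below 1.0 together with a high commission-error rate would correspond to the pattern these studies describe as misuse of the Decision aid.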

Helle Zinner Henriksen - One of the best experts on this subject based on the ideXlab platform.

  • Digital Discretion: Unpacking Human and Technological Agency in Automated Decision Making in Sweden’s Social Services
    Social Science Computer Review, 2020
    Co-Authors: Agneta Ranerup, Helle Zinner Henriksen
    Abstract:

    The introduction of robotic process automation (RPA) into the public sector has changed civil servants’ daily life and practices. One of these central practices in the public sector is discretion. The shift to a digital mode of discretion calls for an understanding of the new situation. This article presents an empirical case where Automated Decision making driven by RPA has been implemented in social services in Sweden. It focuses on the aspirational values and effects of the RPA in social services. Context, task, and activities are captured by a detailed analysis of humans and technology. This research finds that digitalization in social services has a positive effect on civil servants’ discretionary practices mainly in terms of their ethical, democratic, and professional values. The long-term effects and the influence on fair and uniform Decision making also merit future research. In addition, the article finds that a human–technology hybrid actor redefines social assistance practices. Simplifications are needed to unpack the Automated Decision-making process because of the technological and theoretical complexities.

  • Value positions viewed through the lens of Automated Decision-making: The case of social services
    Government Information Quarterly, 2019
    Co-Authors: Agneta Ranerup, Helle Zinner Henriksen
    Abstract:

    As the use of digitalization and Automated Decision-making becomes more common in the public sector, civil servants and clients find themselves in an environment where automation and robot technology can be expected to make dramatic changes. Social service delivery in Trelleborg, Sweden, is the setting for a case study of the goals, policies, procedures, and responses to a change in how social assistance is delivered using Automated Decision-making. Interviews with politicians and professionals, complemented with government documents and reports, provide the empirical data for the analysis. Four value positions (Professionalism, Efficiency, Service, and Engagement) are used as the analytical framework. The findings reveal that the new technology has in some respects increased accountability, decreased costs, and enhanced efficiency, in association with a focus on citizen centricity. While the findings establish a congruence among instances of some value positions, a divergence is observed among others. Examples of divergence are professional knowledge vs. Automated treatment, a decrease in costs vs. the need to share costs, and citizen trust vs. the lack of transparency. The study confirms the power of applying the value positions lens in e-Government research.

Hoda Heidari - One of the best experts on this subject based on the ideXlab platform.

  • Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making
    arXiv: Artificial Intelligence, 2018
    Co-Authors: Hoda Heidari, Claudio Ferrari, Krishna P. Gummadi, Andreas Krause
    Abstract:

    We draw attention to an important, yet largely overlooked aspect of evaluating fairness for Automated Decision making systems---namely risk and welfare considerations. Our proposed family of measures corresponds to the long-established formulations of cardinal social welfare in economics, and is justified by the Rawlsian conception of fairness behind a veil of ignorance. The convex formulation of our welfare-based measures of fairness allows us to integrate them as a constraint into any convex loss minimization pipeline. Our empirical analysis reveals interesting trade-offs between our proposal and (a) prediction accuracy, (b) group discrimination, and (c) Dwork et al.'s notion of individual fairness. Furthermore, and perhaps most importantly, our work provides both heuristic justification and empirical evidence suggesting that a lower bound on our measures often leads to bounded inequality in algorithmic outcomes, hence presenting the first computationally feasible mechanism for bounding individual-level inequality.

  • NeurIPS - Fairness Behind a Veil of Ignorance: A Welfare Analysis for Automated Decision Making
    2018
    Co-Authors: Hoda Heidari, Claudio Ferrari, Krishna P. Gummadi, Andreas Krause
    Abstract:

    We draw attention to an important, yet largely overlooked aspect of evaluating fairness for Automated Decision making systems---namely risk and welfare considerations. Our proposed family of measures corresponds to the long-established formulations of cardinal social welfare in economics, and is justified by the Rawlsian conception of fairness behind a veil of ignorance. The convex formulation of our welfare-based measures of fairness allows us to integrate them as a constraint into any convex loss minimization pipeline. Our empirical analysis reveals interesting trade-offs between our proposal and (a) prediction accuracy, (b) group discrimination, and (c) Dwork et al.'s notion of individual fairness. Furthermore, and perhaps most importantly, our work provides both heuristic justification and empirical evidence suggesting that a lower bound on our measures often leads to bounded inequality in algorithmic outcomes, hence presenting the first computationally feasible mechanism for bounding individual-level inequality.