Disparity

The Experts below are selected from a list of 318 Experts worldwide, ranked by the ideXlab platform.

Julian McAuley - One of the best experts on this subject based on the ideXlab platform.

  • Does mitigating ML's impact Disparity require treatment Disparity?
    Neural Information Processing Systems, 2018
    Co-Authors: Zachary C. Lipton, Julian McAuley, Alexandra Chouldechova
    Abstract:

    Following precedent in employment discrimination law, two notions of Disparity are widely discussed in papers on fairness and ML. Algorithms exhibit treatment Disparity if they formally treat members of protected subgroups differently; algorithms exhibit impact Disparity when outcomes differ across subgroups (even unintentionally). Naturally, we can achieve impact parity through purposeful treatment Disparity. One line of papers aims to reconcile the two parities by proposing disparate learning processes (DLPs). Here, the sensitive feature is used during training but a group-blind classifier is produced. In this paper, we show that: (i) when sensitive and (nominally) nonsensitive features are correlated, DLPs will indirectly implement treatment Disparity, undermining the policy desiderata they are designed to address; (ii) when group membership is partly revealed by other features, DLPs induce within-class discrimination; and (iii) in general, DLPs provide suboptimal trade-offs between accuracy and impact parity. Experimental results on several real-world datasets highlight the practical consequences of applying DLPs.

  • Does mitigating ML's impact Disparity require treatment Disparity?
    arXiv: Machine Learning, 2017
    Co-Authors: Zachary C. Lipton, Alexandra Chouldechova, Julian McAuley
    Abstract:

    Following related work in law and policy, two notions of Disparity have come to shape the study of fairness in algorithmic decision-making. Algorithms exhibit treatment Disparity if they formally treat members of protected subgroups differently; algorithms exhibit impact Disparity when outcomes differ across subgroups, even if the correlation arises unintentionally. Naturally, we can achieve impact parity through purposeful treatment Disparity. In one thread of technical work, papers aim to reconcile the two forms of parity by proposing disparate learning processes (DLPs). Here, the learning algorithm can see group membership during training but must produce a classifier that is group-blind at test time. In this paper, we show theoretically that: (i) when other features correlate with group membership, DLPs will (indirectly) implement treatment Disparity, undermining the policy desiderata they are designed to address; (ii) when group membership is partly revealed by other features, DLPs induce within-class discrimination; and (iii) in general, DLPs provide a suboptimal trade-off between accuracy and impact parity. Based on our technical analysis, we argue that transparent treatment Disparity is preferable to occluded methods for achieving impact parity. Experimental results on several real-world datasets highlight the practical consequences of applying DLPs vs. per-group thresholds.
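
The entry above contrasts DLPs with per-group thresholds. As a concrete illustration of the per-group threshold baseline, the minimal sketch below picks a separate decision threshold for each group so that both groups receive positive predictions at the same rate. The synthetic scores, group labels, and target rate are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the paper's implementation): contrast a group-blind
# decision rule with explicit per-group thresholds that equalize the
# positive-prediction rate across two groups. All names and numbers are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic risk scores for two subgroups, A and B (group B's scores are
# shifted lower, so a single threshold yields impact Disparity).
scores_a = rng.normal(loc=0.6, scale=0.15, size=1000).clip(0, 1)
scores_b = rng.normal(loc=0.5, scale=0.15, size=1000).clip(0, 1)

# Group-blind rule: one threshold for everyone.
t = 0.55
rate_a_blind = np.mean(scores_a >= t)
rate_b_blind = np.mean(scores_b >= t)

# Transparent treatment Disparity: pick per-group thresholds so that both
# groups receive positive predictions at the same target rate.
target_rate = 0.5
t_a = np.quantile(scores_a, 1.0 - target_rate)
t_b = np.quantile(scores_b, 1.0 - target_rate)
rate_a_thresh = np.mean(scores_a >= t_a)
rate_b_thresh = np.mean(scores_b >= t_b)

print(f"group-blind:          A={rate_a_blind:.2f}  B={rate_b_blind:.2f}")
print(f"per-group thresholds: A={rate_a_thresh:.2f}  B={rate_b_thresh:.2f}")
```

Here the group label is used explicitly at decision time, which is exactly the transparent treatment Disparity the abstract argues for over occluded (DLP-style) approaches.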

Alexandra Chouldechova - One of the best experts on this subject based on the ideXlab platform.

  • Does mitigating ML's impact Disparity require treatment Disparity?
    Neural Information Processing Systems, 2018
    Co-Authors: Zachary C. Lipton, Julian McAuley, Alexandra Chouldechova
    Abstract:

    Following precedent in employment discrimination law, two notions of Disparity are widely discussed in papers on fairness and ML. Algorithms exhibit treatment Disparity if they formally treat members of protected subgroups differently; algorithms exhibit impact Disparity when outcomes differ across subgroups (even unintentionally). Naturally, we can achieve impact parity through purposeful treatment Disparity. One line of papers aims to reconcile the two parities by proposing disparate learning processes (DLPs). Here, the sensitive feature is used during training but a group-blind classifier is produced. In this paper, we show that: (i) when sensitive and (nominally) nonsensitive features are correlated, DLPs will indirectly implement treatment Disparity, undermining the policy desiderata they are designed to address; (ii) when group membership is partly revealed by other features, DLPs induce within-class discrimination; and (iii) in general, DLPs provide suboptimal trade-offs between accuracy and impact parity. Experimental results on several real-world datasets highlight the practical consequences of applying DLPs.

  • Does mitigating ML's impact Disparity require treatment Disparity?
    arXiv: Machine Learning, 2017
    Co-Authors: Zachary C. Lipton, Alexandra Chouldechova, Julian McAuley
    Abstract:

    Following related work in law and policy, two notions of Disparity have come to shape the study of fairness in algorithmic decision-making. Algorithms exhibit treatment Disparity if they formally treat members of protected subgroups differently; algorithms exhibit impact Disparity when outcomes differ across subgroups, even if the correlation arises unintentionally. Naturally, we can achieve impact parity through purposeful treatment Disparity. In one thread of technical work, papers aim to reconcile the two forms of parity by proposing disparate learning processes (DLPs). Here, the learning algorithm can see group membership during training but must produce a classifier that is group-blind at test time. In this paper, we show theoretically that: (i) when other features correlate with group membership, DLPs will (indirectly) implement treatment Disparity, undermining the policy desiderata they are designed to address; (ii) when group membership is partly revealed by other features, DLPs induce within-class discrimination; and (iii) in general, DLPs provide a suboptimal trade-off between accuracy and impact parity. Based on our technical analysis, we argue that transparent treatment Disparity is preferable to occluded methods for achieving impact parity. Experimental results on several real-world datasets highlight the practical consequences of applying DLPs vs. per-group thresholds.

Zachary C. Lipton - One of the best experts on this subject based on the ideXlab platform.

  • Does mitigating ML's impact Disparity require treatment Disparity?
    Neural Information Processing Systems, 2018
    Co-Authors: Zachary C. Lipton, Julian McAuley, Alexandra Chouldechova
    Abstract:

    Following precedent in employment discrimination law, two notions of Disparity are widely discussed in papers on fairness and ML. Algorithms exhibit treatment Disparity if they formally treat members of protected subgroups differently; algorithms exhibit impact Disparity when outcomes differ across subgroups (even unintentionally). Naturally, we can achieve impact parity through purposeful treatment Disparity. One line of papers aims to reconcile the two parities by proposing disparate learning processes (DLPs). Here, the sensitive feature is used during training but a group-blind classifier is produced. In this paper, we show that: (i) when sensitive and (nominally) nonsensitive features are correlated, DLPs will indirectly implement treatment Disparity, undermining the policy desiderata they are designed to address; (ii) when group membership is partly revealed by other features, DLPs induce within-class discrimination; and (iii) in general, DLPs provide suboptimal trade-offs between accuracy and impact parity. Experimental results on several real-world datasets highlight the practical consequences of applying DLPs.

  • Does mitigating ML's impact Disparity require treatment Disparity?
    arXiv: Machine Learning, 2017
    Co-Authors: Zachary C. Lipton, Alexandra Chouldechova, Julian McAuley
    Abstract:

    Following related work in law and policy, two notions of Disparity have come to shape the study of fairness in algorithmic decision-making. Algorithms exhibit treatment Disparity if they formally treat members of protected subgroups differently; algorithms exhibit impact Disparity when outcomes differ across subgroups, even if the correlation arises unintentionally. Naturally, we can achieve impact parity through purposeful treatment Disparity. In one thread of technical work, papers aim to reconcile the two forms of parity by proposing disparate learning processes (DLPs). Here, the learning algorithm can see group membership during training but must produce a classifier that is group-blind at test time. In this paper, we show theoretically that: (i) when other features correlate with group membership, DLPs will (indirectly) implement treatment Disparity, undermining the policy desiderata they are designed to address; (ii) when group membership is partly revealed by other features, DLPs induce within-class discrimination; and (iii) in general, DLPs provide a suboptimal trade-off between accuracy and impact parity. Based on our technical analysis, we argue that transparent treatment Disparity is preferable to occluded methods for achieving impact parity. Experimental results on several real-world datasets highlight the practical consequences of applying DLPs vs. per-group thresholds.
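
To make the DLP setup above concrete, here is a minimal sketch of one possible disparate learning process: a logistic regression that sees the group label only through a training-time parity penalty and is group-blind at test time. The synthetic data, squared-gap penalty, and hyperparameters are assumptions chosen for illustration, not the formulation analysed in the paper.

```python
# Minimal sketch (illustrative assumptions, not the paper's DLP): train a
# group-blind logistic regression on nonsensitive features X, using the
# group label z only inside a training-time penalty that pulls the two
# groups' average scores together.
import numpy as np

rng = np.random.default_rng(0)
n = 2000
z = rng.integers(0, 2, size=n)                 # sensitive group label
x1 = rng.normal(loc=0.8 * z, scale=1.0)        # nonsensitive, correlated with z
x2 = rng.normal(size=n)                        # nonsensitive, independent of z
X = np.column_stack([x1, x2, np.ones(n)])      # bias column included
y = (x1 + x2 + rng.normal(scale=0.5, size=n) > 0.5).astype(float)

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

w = np.zeros(X.shape[1])
lam, lr = 5.0, 0.1
for _ in range(2000):
    p = sigmoid(X @ w)
    gap = p[z == 1].mean() - p[z == 0].mean()  # score gap between groups
    grad_bce = X.T @ (p - y) / n               # gradient of logistic loss
    dp = p * (1 - p)
    dgap = ((X[z == 1] * dp[z == 1][:, None]).mean(axis=0)
            - (X[z == 0] * dp[z == 0][:, None]).mean(axis=0))
    w -= lr * (grad_bce + lam * 2 * gap * dgap)  # loss + lam * gap**2

# At test time the model sees only X (group-blind), yet its positive rates
# were pushed toward parity by the training-time penalty.
pred = sigmoid(X @ w) > 0.5
print("positive rate z=0:", pred[z == 0].mean(), " z=1:", pred[z == 1].mean())
```

Because x1 is correlated with z, the penalty can only reduce the gap by re-weighting nonsensitive features, which is the indirect treatment Disparity and within-class discrimination the abstract describes.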

Shude Zhu - One of the best experts on this subject based on the ideXlab platform.

  • An orientation map for Disparity-defined edges in area V4
    Cerebral Cortex, 2019
    Co-Authors: Yang Fang, Ming Chen, Chao Han, Shude Zhu
    Abstract:

    Binocular Disparity is an important source of information for 3D perception. Neurons sensitive to binocular Disparity are found in almost all major visual areas of nonhuman primates. In area V4, Disparity processing is thought to serve 3D-shape representation and fine Disparity perception. However, it is not clear whether neurons in V4 are sensitive to the Disparity-defined edges used in shape representation. Additionally, a functional organization for Disparity edges has not been demonstrated so far. Using intrinsic signal optical imaging, we studied the functional organization for Disparity edges in the monkey visual areas V1, V2, and V4. We found an orientation map in V4 that is activated by edges defined purely by binocular Disparity. This map is consistent with the orientation map obtained with regular luminance-defined edges, indicating a cue-invariant edge representation in this area. In contrast, such a map is much weaker in V2 and totally absent in V1. These findings reveal a hierarchical processing of 3D shape along the ventral pathway and the important role that V4 plays in shape-from-Disparity detection.
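
The abstract above turns on stimuli whose edges are defined purely by binocular Disparity. The sketch below is a minimal, hypothetical construction of such a stimulus: a random-dot stereogram in which a vertical edge exists only as a step in the left/right Disparity, not as a luminance edge in either half-image. The image size, Disparity values, and edge orientation are assumptions, not the authors' protocol.

```python
# Minimal sketch (illustrative assumptions, not the authors' stimulus code):
# build a random-dot stereogram containing a vertical edge defined purely by
# a step in binocular Disparity. Each half-image alone is featureless noise;
# the edge exists only in the left/right correspondence.
import numpy as np

rng = np.random.default_rng(1)
h, w = 128, 128
near_disp, far_disp = 4, 0        # horizontal pixel shifts for the two regions

dots = (rng.random((h, w + 8)) > 0.5).astype(np.uint8)  # shared dot pattern

left = np.zeros((h, w), dtype=np.uint8)
right = np.zeros((h, w), dtype=np.uint8)
for x in range(w):
    d = near_disp if x < w // 2 else far_disp  # Disparity step at image centre
    left[:, x] = dots[:, x]
    right[:, x] = dots[:, x + d]               # shift the dots horizontally by d

# Neither half-image contains a luminance edge at column w // 2; only the
# Disparity between the two half-images changes there.
```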

Naim Dahnoun - One of the best experts on this subject based on the ideXlab platform.

  • Iterative roll angle estimation from dense Disparity map
    Mediterranean Conference on Embedded Computing, 2018
    Co-Authors: Meghan Evans, Rui Fan, Naim Dahnoun
    Abstract:

    The v-Disparity map is predominantly used to estimate the parameters of the vertical profile of the road surface. Once the road surface is modelled, an object that lies away from it can be detected and classified as either an obstacle or a pothole. The accuracy of this estimation is largely affected by the clarity of the v-Disparity map, which can be vastly improved by eliminating the effect of a non-zero roll angle: rotating the Disparity map by the roll angle yields a cleaner v-Disparity histogram. This paper presents a method for accurate roll angle estimation through analysis of the Disparity and v-Disparity maps. Because rotating the Disparity map by the estimated roll angle improves the quality of the v-Disparity map, road modelling improves as well; the more accurate the roll angle estimation, the larger the improvement. (A minimal v-Disparity and roll-correction sketch follows the final entry in this list.)

  • Robust obstacle detection based on a novel Disparity calculation method and G-Disparity
    Computer Vision and Image Understanding, 2014
    Co-Authors: Yifei Wang, Yuan Gao, Alin Achim, Naim Dahnoun
    Abstract:

    This paper presents a Disparity calculation algorithm based on stereo vision for obstacle detection and free-space calculation. The algorithm incorporates line segmentation, multi-pass aggregation and efficient local optimisation in order to produce accurate Disparity values. It is specifically designed for traffic scenes, where most objects can be represented by planes in the Disparity domain. The accurate horizontal Disparity gradients for the side planes are also extracted during the Disparity optimisation stage. Then, an obstacle detection algorithm based on the U-V-Disparity is introduced. Instead of using the Hough transform for line detection, which is extremely sensitive to parameter settings, the G-Disparity image is proposed for detecting the side planes. The vertical planes are then detected separately after removing all the side planes. Faster detection, lower parameter sensitivity and improved performance are achieved compared with Hough-transform-based detection. After the obstacles are located and removed from the Disparity map, most of the remaining pixels are projections of the road surface. Using a spline as the road model, the vertical profile of the road surface is estimated. Finally, the free space is calculated from the vertical road profile, which is not restricted by the planar road surface assumption.
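
As a rough illustration of the machinery these entries rely on, the sketch below builds u-Disparity and v-Disparity histograms from a synthetic dense Disparity map and shows how a non-zero roll angle can be undone by rotating the Disparity map before accumulating the v-Disparity histogram. The synthetic road scene, the use of scipy.ndimage.rotate, and all parameter values are assumptions for the example; the G-Disparity construction from the 2014 paper is not reproduced here.

```python
# Minimal sketch (illustrative, not the papers' implementations): u-Disparity
# and v-Disparity histograms from a dense Disparity map, with a roll-angle
# correction applied before the v-Disparity histogram is accumulated.
import numpy as np
from scipy import ndimage

h, w, d_max = 240, 320, 64

# Synthetic ground-plane Disparity: Disparity grows linearly with image row
# (as for a flat road), plus a box-shaped obstacle of constant Disparity.
rows = np.arange(h, dtype=np.float32)
disparity = np.tile((rows * d_max / h)[:, None], (1, w))
disparity[100:160, 140:200] = 40.0             # fronto-parallel obstacle

# Simulate a non-zero camera roll, then undo it by rotating back.
roll_deg = 5.0
rolled = ndimage.rotate(disparity, roll_deg, reshape=False, order=1)
corrected = ndimage.rotate(rolled, -roll_deg, reshape=False, order=1)

def v_disparity(disp, d_max):
    """Histogram of Disparity values per image row (shape: rows x d_max)."""
    v = np.zeros((disp.shape[0], d_max), dtype=np.int32)
    for r in range(disp.shape[0]):
        d = np.clip(disp[r].astype(int), 0, d_max - 1)
        np.add.at(v[r], d, 1)
    return v

def u_disparity(disp, d_max):
    """Histogram of Disparity values per image column (shape: d_max x cols)."""
    u = np.zeros((d_max, disp.shape[1]), dtype=np.int32)
    for c in range(disp.shape[1]):
        d = np.clip(disp[:, c].astype(int), 0, d_max - 1)
        np.add.at(u[:, c], d, 1)
    return u

# In the v-Disparity image the road projects to a slanted line and the
# obstacle to a vertical segment; correcting the roll sharpens that line,
# which is what the roll-angle estimation above exploits.
v_rolled = v_disparity(rolled, d_max)
v_corrected = v_disparity(corrected, d_max)
u_corrected = u_disparity(corrected, d_max)
```

In a real pipeline the road line would then be fitted in the v-Disparity image (or with a spline, as in the 2014 entry), and elongated responses in the u-Disparity image would mark obstacle candidates.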