The Experts below are selected from a list of 3216 Experts worldwide ranked by ideXlab platform
Kihak Hong - One of the best experts on this subject based on the ideXlab platform.
-
A New Stratified Three-Stage Unrelated Randomized Response Model for Estimating a Rare Sensitive Attribute Based on the Poisson Distribution
Communications in Statistics - Theory and Methods, 2018. Co-Authors: Kihak Hong. Abstract: This article suggests an efficient method of estimating a rare Sensitive Attribute, assumed to follow a Poisson distribution, by using a three-stage unrelated randomized response model instead o...
-
A Stratified Estimation of a Sensitive Attribute Using the Adjusted Kuk's Randomization Device
Revista Colombiana de Estadistica, 2017. Co-Authors: Kihak Hong. Abstract: This paper suggests a stratified Kuk model to estimate the proportion of a Sensitive Attribute in a population composed of a number of strata; this is undertaken by applying stratified sampling to the adjusted Kuk model. The paper estimates Sensitive parameters when the stratum sizes are known, taking proportional and optimal allocation methods into account, and then extends to the case of unknown stratum sizes, estimating Sensitive parameters by applying stratified double sampling and checking the two allocation methods. Finally, the paper compares the efficiency of the proposed model to that of the Su, Sedory and Singh model and the adjusted Kuk model in terms of the estimator variance.
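The Kuk-type estimation behind this entry can be sketched numerically. The snippet below is a minimal illustration, not the authors' adjusted model: it assumes the classical Kuk (1990) setup, in which a respondent makes k card draws from a deck whose red-card proportion is p1 if they bear the Sensitive Attribute and p2 otherwise, and it combines per-stratum moment estimates with stratum weights W_h. The stratum sizes and parameter values in the demo are made up.

```python
import random

def kuk_estimate(reports, k, p1, p2):
    """Moment estimator for the Kuk (1990) model:
    E(r/k) = pi*p1 + (1-pi)*p2  ->  pi_hat = (mean(r)/k - p2) / (p1 - p2)."""
    lam_hat = sum(reports) / (len(reports) * k)
    return (lam_hat - p2) / (p1 - p2)

def stratified_kuk(strata_reports, weights, k, p1, p2):
    """Weighted combination over strata: pi_hat = sum_h W_h * pi_hat_h."""
    return sum(w * kuk_estimate(r, k, p1, p2)
               for w, r in zip(weights, strata_reports))

if __name__ == "__main__":
    random.seed(1)
    k, p1, p2 = 5, 0.7, 0.3
    true_pi = [0.10, 0.25]   # hypothetical sensitive proportion per stratum
    weights = [0.6, 0.4]     # stratum weights W_h (proportional allocation)
    strata = []
    for pi in true_pi:
        reports = []
        for _ in range(5000):
            deck = p1 if random.random() < pi else p2  # which deck the respondent uses
            reports.append(sum(random.random() < deck for _ in range(k)))
        strata.append(reports)
    print(round(stratified_kuk(strata, weights, k, p1, p2), 3))
```

The estimate should land near the weighted truth 0.6*0.10 + 0.4*0.25 = 0.16 for a sample this large.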
-
A stratified two-stage unrelated randomized response model for estimating a rare Sensitive Attribute based on the Poisson distribution
Journal of Statistical Theory and Practice, 2016. Co-Authors: Kihak Hong. Abstract: This article estimates the mean number of individuals with a rare Sensitive Attribute by using the Poisson distribution and stratified two-stage sampling, extending the Land et al. model to a stratified population. A rare Sensitive parameter is estimated for the case in which the stratum sizes are known, and proportional and optimal allocation methods are taken into account. We also extend the Land et al. model to the case of unknown stratum sizes; the rare Sensitive parameter is then estimated by applying stratified double sampling to the Land et al. model, and the two allocation methods are checked. Finally, the efficiency of the proposed model is compared with that of Land et al. in terms of the estimator variance.
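The Poisson-based estimation step can be illustrated with a small sketch. This is not the Land et al. model verbatim; it assumes a common unrelated-question form in which, with known probability p, a respondent reports their count of the rare Sensitive Attribute (mean lam1) and otherwise a count of a rare unrelated attribute with known mean lam2, so that E(z) = p*lam1 + (1 - p)*lam2. The demo parameters are invented.

```python
import math
import random

def poisson_draw(lam):
    """Knuth's Poisson sampler (adequate for the small means used here)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

def rare_poisson_estimate(z, p, lam2):
    """Moment estimator when lam2 is known:
    E(z) = p*lam1 + (1-p)*lam2  ->  lam1_hat = (mean(z) - (1-p)*lam2) / p."""
    zbar = sum(z) / len(z)
    return (zbar - (1 - p) * lam2) / p

def stratified_rare_poisson(strata_z, weights, p, lam2):
    """Stratified version: weighted sum of per-stratum estimates."""
    return sum(w * rare_poisson_estimate(z, p, lam2)
               for w, z in zip(weights, strata_z))

if __name__ == "__main__":
    random.seed(2)
    p, lam2 = 0.6, 0.2
    z = [poisson_draw(0.4) if random.random() < p else poisson_draw(lam2)
         for _ in range(20000)]
    print(round(rare_poisson_estimate(z, p, lam2), 3))  # should be near 0.4
```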
-
Estimation of a Sensitive Attribute Using Stratified Kuk's Randomization Device
ANPOR Conference Bangkok 2015, 2015. Co-Authors: Kihak Hong. Abstract: In this paper, we suggest a stratified Kuk model to estimate the proportion of a Sensitive Attribute in a population composed of a number of strata, by applying stratified sampling to the adjusted Kuk model suggested by Su et al. (2014). We estimate the Sensitive parameter when the stratum sizes are known, taking the proportional and optimum allocation methods into account. We extend this to the case of unknown stratum sizes, estimate the Sensitive parameter by applying stratified double sampling, and check the two allocation methods. We compare the efficiency of the suggested model to Su et al.'s model and the adjusted Kuk model in terms of the estimator variance.
Housila P Singh - One of the best experts on this subject based on the ideXlab platform.
-
A Randomization Device for Estimating a Rare Sensitive Attribute in Stratified Sampling Using the Poisson Distribution
Afrika Matematika, 2018. Co-Authors: Tanveer A Tarray, Housila P Singh. Abstract: The aim of this paper is to estimate the mean number of persons possessing a rare Sensitive Attribute by utilizing the Poisson distribution in stratified survey sampling. It is also shown that the proposed models are more efficient than Lee et al.'s (Statistics 47:575–589, 2013) models both when the proportion of persons possessing a rare unrelated Attribute is known and when it is unknown. Properties of the proposed randomized response models have been studied along with recommendations. Numerical illustrations are also given in support of the present study.
-
An Optional Randomized Response Model for Estimating a Rare Sensitive Attribute Using the Poisson Distribution
Communications in Statistics - Theory and Methods, 2017. Co-Authors: Tanveer A Tarray, Housila P Singh. Abstract: The crux of this article is to estimate the mean number of persons possessing a rare Sensitive Attribute based on the Mangat (1991) randomization device, utilizing the Poisson distribution in simple random sampling and stratified sampling. Properties of the proposed randomized response (RR) model have been studied along with recommendations. It is also shown that the proposed model is more efficient than that of Land et al. (2011) in simple random sampling and that of Lee et al. (2013) in stratified random sampling when the proportion of persons possessing a rare unrelated Attribute is known. Numerical illustrations are also given in support of the present study.
-
A Two-Stage Land et al. Randomized Response Model for Estimating a Rare Sensitive Attribute Using the Poisson Distribution
Communications in Statistics - Theory and Methods, 2017. Co-Authors: Housila P Singh, Tanveer A Tarray. Abstract: The crux of this paper is to estimate the mean number of persons possessing a rare Sensitive Attribute based on the Mangat (1992) randomization device, utilizing the Poisson distribution in survey sampling. It is shown that the proposed model is more efficient than that of Land et al. (2011) when the proportion of persons possessing a rare unrelated Attribute is known. Properties of the proposed randomized response model have been studied along with recommendations. We have also extended the proposed model to stratified random sampling along the lines of Lee et al. (2013), and it has also been shown that the proposed estimator is better than Lee et al.'s (2013) estimator. Numerical illustrations are also given in support of the present study.
-
On the Use of a Randomization Device for Estimating the Proportion and Truthful Reporting of a Qualitative Sensitive Attribute
Pakistan Journal of Statistics and Operation Research, 2015. Co-Authors: Housila P Singh, Tanveer A Tarray. Abstract: In this paper, a simple procedure is presented for estimating the population proportion pi possessing a Sensitive Attribute using simple random sampling with replacement (SRSWR), along with T, the probability that a respondent truthfully states that he or she bears the Sensitive character when questioned in a direct response survey. An efficiency comparison is carried out to investigate the performance of the proposed method. It is found that the proposed strategy is more efficient than Warner's (1965) as well as Huang's (2004) randomized response techniques under some realistic conditions. Numerical illustrations and graphical representations are also given in support of the present study.
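For context, the Warner (1965) technique used as the efficiency baseline here admits a compact moment estimator. The sketch below implements that classical baseline (not the authors' proposed procedure): with probability p the respondent answers the sensitive question and with probability 1 - p its complement, so P(yes) = p*pi + (1 - p)*(1 - pi).

```python
def warner_estimate(n_yes, n, p):
    """Warner (1965) moment estimator (requires p != 0.5):
    P(yes) = p*pi + (1-p)*(1-pi)  ->  pi_hat = (n_yes/n - (1-p)) / (2p - 1)."""
    lam_hat = n_yes / n
    return (lam_hat - (1 - p)) / (2 * p - 1)

def warner_variance(pi_hat, n, p):
    """Estimated variance lam*(1-lam) / (n*(2p-1)^2),
    where lam = p*pi + (1-p)*(1-pi) is the 'yes' probability."""
    lam = p * pi_hat + (1 - p) * (1 - pi_hat)
    return lam * (1 - lam) / (n * (2 * p - 1) ** 2)
```

Note the familiar trade-off visible in the variance: as p approaches 0.5 the respondent's privacy improves but (2p - 1)^2 shrinks and the variance blows up.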
-
A Revisit to Singh, Horn, Singh and Mangat's Randomization Device for Estimating a Rare Sensitive Attribute Using the Poisson Distribution
Model Assisted Statistics and Applications, 2015. Co-Authors: Housila P Singh, Tanveer A Tarray. Abstract: The crux of this paper is to estimate the mean number of persons possessing a rare Sensitive Attribute based on the Singh et al. (24) randomization device, utilizing the Poisson distribution in survey sampling. It is also shown that the proposed models are more efficient than those of Land et al. (6) both when the proportion of persons possessing a rare unrelated Attribute is known and when it is unknown. Properties of the proposed randomized response model have been studied along with recommendations. Numerical illustrations are also given in support of the present study.
Hong Zhao - One of the best experts on this subject based on the ideXlab platform.
-
Test-Cost-Sensitive Attribute Reduction on Heterogeneous Data for an Adaptive Neighborhood Model
Soft Computing, 2016. Co-Authors: Hong Zhao. Abstract: Test-cost-Sensitive Attribute reduction is an important component in data mining applications and plays a key role in cost-Sensitive learning. Previous approaches to test-cost-Sensitive Attribute reduction focus mainly on homogeneous datasets; when heterogeneous datasets must be handled, they convert nominal Attributes to numerical ones directly. In this paper, we introduce an adaptive neighborhood model for heterogeneous Attributes and address the test-cost-Sensitive Attribute reduction problem. In the adaptive neighborhood model, objects with numerical Attributes are handled with the classical covering neighborhood, while objects with nominal Attributes are handled with the overlap-metric neighborhood. Compared with previous approaches, the proposed model avoids classifying objects with different nominal Attribute values into one neighborhood. The number of inconsistent objects of a neighborhood reflects the discriminating capability of an Attribute subset. Based on the adaptive neighborhood model, an inconsistent-objects-based heuristic reduction algorithm is constructed. The proposed algorithm is compared with the λ-weighted heuristic reduction algorithm, in which nominal Attributes are normalized. Experimental results demonstrate that the proposed algorithm is more effective and of more practical significance than the λ-weighted heuristic reduction algorithm.
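The adaptive idea described above can be sketched in a few lines. This is an illustrative simplification, not the paper's exact model: it assumes the numeric part of the neighborhood uses a fixed radius delta, while nominal attributes use the overlap metric (distance 0 on equality, 1 otherwise), so any nominal mismatch pushes an object out of the neighborhood.

```python
def neighborhood(x, objects, numeric_idx, nominal_idx, delta):
    """Mixed-attribute neighborhood of object x.

    Numeric attributes: within absolute distance delta.
    Nominal attributes: overlap metric -- a differing value means
    distance 1 > delta, so the object is excluded outright."""
    nbrs = []
    for y in objects:
        if any(x[i] != y[i] for i in nominal_idx):
            continue  # nominal mismatch: never in the same neighborhood
        if all(abs(x[i] - y[i]) <= delta for i in numeric_idx):
            nbrs.append(y)
    return nbrs
```

With this definition, two objects that agree on a nominal attribute but sit close numerically are neighbors, while objects with different nominal values are separated, which is exactly the failure mode of naive nominal-to-numeric conversion that the abstract calls out.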
-
Test-Cost-Sensitive Attribute Reduction of Data with Normal Distribution Measurement Errors
Mathematical Problems in Engineering, 2013. Co-Authors: Hong Zhao. Abstract: Measurement error with a normal distribution is universal in applications. Generally, smaller measurement error requires a better instrument and higher test cost. In decision making, we select an Attribute subset with appropriate measurement error to minimize the total test cost. Recently, an error-range-based covering rough set with uniform-distribution error was proposed to investigate this issue. However, in most applications the measurement errors satisfy a normal distribution rather than the uniform distribution, which is rather simplistic. In this paper, we introduce normal-distribution measurement errors to the covering-based rough set model and deal with the test-cost-Sensitive Attribute reduction problem in this new model. The major contributions of this paper are fourfold. First, we build a new data model based on normal-distribution measurement errors. Second, the covering-based rough set model with measurement errors is constructed through the "3-sigma" rule of the normal distribution; with this model, coverings are constructed from data rather than assigned by users. Third, the test-cost-Sensitive Attribute reduction problem is redefined on this covering-based rough set. Fourth, a heuristic algorithm is proposed to deal with this problem. The experimental results show that the algorithm is more effective and efficient than the existing one. This study suggests new research trends concerning cost-Sensitive learning.
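The "coverings constructed from data" step can be sketched under one plausible reading of the 3-sigma rule (an assumption on our part, not necessarily the paper's exact construction): a measured value v stands for some true value in [v - 3*sigma, v + 3*sigma], and two measurements are treated as indistinguishable when those intervals overlap on every attribute, i.e. their values differ by at most 6*sigma.

```python
def error_neighborhood(x, objects, sigmas):
    """Objects whose 3-sigma error intervals overlap x's on every
    attribute: |v1 - v2| <= 6 * sigma per attribute."""
    return [y for y in objects
            if all(abs(xa - ya) <= 6 * s
                   for xa, ya, s in zip(x, y, sigmas))]

def covering(objects, sigmas):
    """The covering induced by the error neighborhoods -- one block per
    object, built from the data and the instruments' sigmas, not
    assigned by users."""
    return [error_neighborhood(x, objects, sigmas) for x in objects]
```

A better instrument (smaller sigma) yields tighter neighborhoods and a finer covering, which is exactly how test cost enters the reduction problem.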
-
Test-Cost-Sensitive Attribute Reduction of Data with Normal Distribution Measurement Errors
arXiv: Artificial Intelligence, 2012. Co-Authors: Hong Zhao. Abstract: Measurement error with a normal distribution is universal in applications. Generally, smaller measurement error requires a better instrument and higher test cost. In decision making based on the Attribute values of objects, we select an Attribute subset with appropriate measurement error to minimize the total test cost. Recently, an error-range-based covering rough set with uniform-distribution error was proposed to investigate this issue. However, in most applications the measurement errors satisfy a normal distribution rather than the uniform distribution, which is rather simplistic. In this paper, we introduce normal-distribution measurement errors to the covering-based rough set model and deal with the test-cost-Sensitive Attribute reduction problem in this new model. The major contributions of this paper are fourfold. First, we build a new data model based on normal-distribution measurement errors; with the new data model, the error range is an ellipse in a two-dimensional space. Second, the covering-based rough set with normal-distribution measurement errors is constructed through the "3-sigma" rule. Third, the test-cost-Sensitive Attribute reduction problem is redefined on this covering-based rough set. Fourth, a heuristic algorithm is proposed to deal with this problem. The algorithm is tested on ten UCI (University of California - Irvine) datasets. The experimental results show that the algorithm is more effective and efficient than the existing one. This study is a step toward realistic applications of cost-Sensitive learning.
-
Test-cost-Sensitive Attribute reduction based on neighborhood rough set
2011 IEEE International Conference on Granular Computing, 2011. Co-Authors: Hong Zhao. Abstract: Recent research in machine learning and data mining has produced a wide variety of algorithms for cost-Sensitive learning. Most existing rough set methods on this issue deal with nominal Attributes, because nominal Attributes produce equivalence relations and are therefore easy to process. However, in real applications, datasets often contain numerical Attributes, which are more complex than nominal ones and require more computational resources; the respective learning tasks are consequently more challenging. This paper deals with test-cost-Sensitive Attribute reduction for numerical-valued decision systems. The neighborhood rough set has achieved success in numerical data processing, hence we adopt this model to define the minimal-test-cost reduct problem. Due to the complexity of the new problem, heuristic algorithms are needed to find a sub-optimal solution. We propose one kind of heuristic information, the sum of the positive region and the weighted test cost; when the test cost is not considered, it degrades to the positive region, the most commonly used heuristic in classical rough set theory. Three metrics are adopted to evaluate the performance of reduction algorithms from a statistical viewpoint. Experimental results show that the proposed method takes advantage of test costs and therefore produces satisfactory results.
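The greedy search guided by "positive region plus weighted test cost" can be sketched for nominal toy data. The significance function and the lam weight below are illustrative assumptions, and the positive region is the classical equivalence-class version rather than the paper's neighborhood one; the shape of the heuristic is the point.

```python
from collections import defaultdict

def positive_region_size(data, labels, attrs):
    """|POS_B(D)|: objects whose B-equivalence class is consistent,
    i.e. every member shares one decision label."""
    groups = defaultdict(list)
    for row, d in zip(data, labels):
        groups[tuple(row[a] for a in attrs)].append(d)
    return sum(len(g) for g in groups.values() if len(set(g)) == 1)

def greedy_reduct(data, labels, costs, lam=0.01):
    """Greedy selection guided by f(B, a) = delta|POS| - lam * cost(a).
    With lam = 0 this degrades to the classical positive-region
    heuristic, mirroring the degradation noted in the abstract."""
    attrs, remaining = [], set(range(len(data[0])))
    target = positive_region_size(data, labels, list(remaining))
    while positive_region_size(data, labels, attrs) < target:
        best = max(remaining,
                   key=lambda a: positive_region_size(data, labels, attrs + [a])
                                 - lam * costs[a])
        attrs.append(best)
        remaining.remove(best)
    return attrs
```

When two attributes are equally discriminative, the cost term breaks the tie toward the cheaper test, which is the whole motivation for test-cost-sensitive reduction.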
Tanveer A Tarray - One of the best experts on this subject based on the ideXlab platform.
-
A Nimble Randomization Device for Estimating a Rare Sensitive Attribute Using the Poisson Distribution
Engineering Mathematics Letters, 2019. Co-Authors: Tanveer A Tarray. Abstract: This paper addresses the problem of estimating the mean number of persons possessing a rare Sensitive Attribute, utilizing the Poisson distribution in survey sampling. Properties of the proposed randomized response model have been studied along with recommendations. It is also shown that the proposed model is more efficient than that of Land et al. (2011) when the proportion of persons possessing a rare unrelated Attribute is known. Numerical illustration is also given in support of the present study.
-
A Randomization Device for Estimating a Rare Sensitive Attribute in Stratified Sampling Using the Poisson Distribution
Afrika Matematika, 2018. Co-Authors: Tanveer A Tarray, Housila P Singh. Abstract: The aim of this paper is to estimate the mean number of persons possessing a rare Sensitive Attribute by utilizing the Poisson distribution in stratified survey sampling. It is also shown that the proposed models are more efficient than Lee et al.'s (Statistics 47:575–589, 2013) models both when the proportion of persons possessing a rare unrelated Attribute is known and when it is unknown. Properties of the proposed randomized response models have been studied along with recommendations. Numerical illustrations are also given in support of the present study.
-
An Optional Randomized Response Model for Estimating a Rare Sensitive Attribute Using the Poisson Distribution
Communications in Statistics - Theory and Methods, 2017. Co-Authors: Tanveer A Tarray, Housila P Singh. Abstract: The crux of this article is to estimate the mean number of persons possessing a rare Sensitive Attribute based on the Mangat (1991) randomization device, utilizing the Poisson distribution in simple random sampling and stratified sampling. Properties of the proposed randomized response (RR) model have been studied along with recommendations. It is also shown that the proposed model is more efficient than that of Land et al. (2011) in simple random sampling and that of Lee et al. (2013) in stratified random sampling when the proportion of persons possessing a rare unrelated Attribute is known. Numerical illustrations are also given in support of the present study.
-
A Two-Stage Land et al. Randomized Response Model for Estimating a Rare Sensitive Attribute Using the Poisson Distribution
Communications in Statistics - Theory and Methods, 2017. Co-Authors: Housila P Singh, Tanveer A Tarray. Abstract: The crux of this paper is to estimate the mean number of persons possessing a rare Sensitive Attribute based on the Mangat (1992) randomization device, utilizing the Poisson distribution in survey sampling. It is shown that the proposed model is more efficient than that of Land et al. (2011) when the proportion of persons possessing a rare unrelated Attribute is known. Properties of the proposed randomized response model have been studied along with recommendations. We have also extended the proposed model to stratified random sampling along the lines of Lee et al. (2013), and it has also been shown that the proposed estimator is better than Lee et al.'s (2013) estimator. Numerical illustrations are also given in support of the present study.
-
On the Use of a Randomization Device for Estimating the Proportion and Truthful Reporting of a Qualitative Sensitive Attribute
Pakistan Journal of Statistics and Operation Research, 2015. Co-Authors: Housila P Singh, Tanveer A Tarray. Abstract: In this paper, a simple procedure is presented for estimating the population proportion pi possessing a Sensitive Attribute using simple random sampling with replacement (SRSWR), along with T, the probability that a respondent truthfully states that he or she bears the Sensitive character when questioned in a direct response survey. An efficiency comparison is carried out to investigate the performance of the proposed method. It is found that the proposed strategy is more efficient than Warner's (1965) as well as Huang's (2004) randomized response techniques under some realistic conditions. Numerical illustrations and graphical representations are also given in support of the present study.
Fatemeh Deldar - One of the best experts on this subject based on the ideXlab platform.
-
PDP-SAG: Personalized Privacy Protection in Moving Objects Databases by Combining Differential Privacy and Sensitive Attribute Generalization
IEEE Access, 2019. Co-Authors: Fatemeh Deldar, Mahdi Abadi. Abstract: Moving objects databases have become an enabling technology for location-based applications. They mostly focus on the storing and processing of data about moving objects. Privacy protection is one of the most important concerns related to such databases. In recent years, some mechanisms have been proposed to answer statistical queries over moving objects databases while satisfying differential privacy. However, none of them consider the case where a moving objects database contains non-spatiotemporal Sensitive Attributes in addition to spatiotemporal Attributes. Besides, most of them do not support the personalized privacy protection requirements of different moving objects. In this paper, we address these problems by presenting PDP-SAG, a differentially private mechanism that combines Sensitive Attribute generalization with personalized privacy in a unified manner. Through this combination, we aim to provide different levels of differential privacy protection for moving objects that also have non-spatiotemporal Sensitive Attributes. In this regard, we generalize the Sensitive Attribute values of trajectory data records based on their privacy descriptor and define a new personalized differentially private tree structure to keep different noisy frequencies for each trajectory, according to the generalized Sensitive Attribute values of trajectory data records passing through that trajectory. We also propose intra- and inter-consistency constraint enforcement to make noisy frequencies consistent with each other. Extensive experiments on synthetic and real datasets verify that PDP-SAG significantly improves the utility of Sensitive query answers and provides the required level of privacy protection for each moving object, in comparison to the case when no personalization and generalization are permitted.
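The differential-privacy ingredient can be sketched independently of the tree structure: a count is released with Laplace noise scaled to 1/epsilon, and a personalized scheme gives each record its own epsilon. This is a generic Laplace-mechanism sketch, not PDP-SAG's actual tree-based mechanism.

```python
import math
import random

def laplace_noise(scale):
    """Draw from Laplace(0, scale) by inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def noisy_count(count, epsilon):
    """Laplace mechanism for a count query (sensitivity 1):
    release count + Laplace(1/epsilon)."""
    return count + laplace_noise(1.0 / epsilon)

def personalized_noisy_counts(counts, epsilons):
    """Personalized budgets: record i is protected at its own epsilon_i;
    smaller epsilon means more noise and stronger protection."""
    return [noisy_count(c, e) for c, e in zip(counts, epsilons)]
```

A post-processing step like the paper's consistency enforcement would then reconcile these independently noised frequencies (e.g. so a parent count is not smaller than the sum of its children).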
-
PPTD: Preserving Personalized Privacy in Trajectory Data Publishing by Sensitive Attribute Generalization and Trajectory Local Suppression
Knowledge Based Systems, 2016. Co-Authors: Elahe Ghasemi Komishani, Mahdi Abadi, Fatemeh Deldar. Abstract: Trajectory data often provide useful information that can be used in real-life applications, such as traffic management, geo-marketing, and location-based advertising. However, a trajectory database may contain detailed information about moving objects and associate them with Sensitive Attributes, such as disease, job, and income. Therefore, improper publishing of the trajectory database can put the privacy of moving objects at risk, especially when an adversary uses partial trajectory information as its background knowledge. The existing approaches for privacy preservation in trajectory data publishing provide the same privacy protection for all moving objects. The consequence is that some moving objects may be offered insufficient privacy protection, while others may not require high privacy protection. In this paper, we address this problem and present PPTD, a novel approach for preserving privacy in trajectory data publishing based on the concept of personalized privacy. It aims to strike a balance between the conflicting goals of data utility and data privacy in accordance with the privacy requirements of moving objects. To the best of our knowledge, this is the first paper that combines Sensitive Attribute generalization and trajectory local suppression to achieve a tailored personalized privacy model for trajectory data publishing. Our experiments on two synthetic trajectory datasets suggest that PPTD is effective for preserving personalized privacy in trajectory data publishing. In particular, PPTD can significantly improve the data utility of anonymized trajectory databases when compared with previous work in the literature.
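Sensitive Attribute generalization can be illustrated with a toy value taxonomy. The TAXONOMY table and the disease values below are hypothetical, and the mapping from an object's privacy requirement to a generalization level is left abstract; systems like PPTD derive that level from each object's stated requirement.

```python
# Hypothetical two-level taxonomy for a "disease" sensitive attribute:
# each leaf value maps to its generalization chain, most specific first.
TAXONOMY = {
    "flu":      ["respiratory disease", "any disease"],
    "asthma":   ["respiratory disease", "any disease"],
    "diabetes": ["metabolic disease", "any disease"],
}

def generalize(value, level):
    """Level 0 keeps the exact value; higher levels climb the chain,
    capped at the taxonomy root."""
    if level <= 0:
        return value
    chain = TAXONOMY[value]
    return chain[min(level, len(chain)) - 1]
```

An object demanding strong protection would publish at a high level ("any disease"), while one with a weak requirement keeps a more useful, specific value, which is the utility/privacy balance the abstract describes.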