Social Sensing

The Experts below are selected from a list of 312 Experts worldwide, ranked by the ideXlab platform

Dong Wang - One of the best experts on this subject based on the ideXlab platform.

  • CovidSens: a vision on reliable Social Sensing for COVID-19
    Artificial Intelligence Review, 2020
    Co-Authors: Md Tahmid Rashid, Dong Wang
    Abstract:

    With the spiraling pandemic of the Coronavirus Disease 2019 (COVID-19), it has become critically important to disseminate accurate and timely information about the disease. Due to the ubiquity of Internet connectivity and smart devices, Social Sensing is emerging as a dynamic AI-driven Sensing paradigm to extract real-time observations from online users. In this paper, we propose CovidSens, a vision of Social Sensing-based risk alert systems to spontaneously obtain and analyze Social data to infer the state of the COVID-19 propagation. CovidSens can actively help to keep the general public informed about the COVID-19 spread and identify risk-prone areas by inferring future propagation patterns. The CovidSens concept is motivated by three observations: (1) people have been actively sharing their state of health and experience of COVID-19 via online Social media, (2) official warning channels and news agencies are relatively slower than people reporting their observations and experiences about COVID-19 on Social media, and (3) online users are frequently equipped with substantially capable mobile devices that are able to perform non-trivial on-device computation for data processing and analytics. We envision an unprecedented opportunity to leverage the posts generated by ordinary people to build a real-time Sensing and analytic system for gathering and circulating vital information about the COVID-19 propagation. Specifically, the vision of CovidSens attempts to answer the questions: How to distill reliable information about COVID-19 amid the prevailing rumors and misinformation on Social media? How to inform the general public about the latest state of the spread in a timely and effective manner, and alert them to remain prepared? How to leverage the computational power of edge devices (e.g., smartphones, IoT devices, UAVs) to construct fully integrated edge-based Social Sensing platforms for rapid detection of the COVID-19 spread? In this vision paper, we discuss the roles of CovidSens and identify the potential challenges in developing reliable Social Sensing-based risk alert systems. We envision that approaches originating from multiple disciplines (e.g., AI, estimation theory, machine learning, constrained optimization) can be effective in addressing the challenges. Finally, we outline a few research directions for future work in CovidSens.

  • CovidSens: a vision on reliable Social Sensing for COVID-19
    arXiv: Social and Information Networks, 2020
    Co-Authors: Tahmid Rashid, Dong Wang
    Abstract:

    With the spiraling pandemic of the Coronavirus Disease 2019 (COVID-19), it has become critically important to disseminate accurate and timely information about the disease. Due to the ubiquity of Internet connectivity and smart devices, Social Sensing is emerging as a dynamic AI-driven Sensing paradigm to extract real-time observations from online users. In this paper, we propose CovidSens, a vision of Social Sensing-based risk alert systems to spontaneously obtain and analyze Social data to infer COVID-19 propagation. CovidSens can actively help to keep the general public informed about the COVID-19 spread and identify risk-prone areas. The CovidSens concept is motivated by three observations: 1) people actively share their experience of COVID-19 via online Social media, 2) official warning channels and news agencies are relatively slower than people reporting on Social media, and 3) online users are frequently equipped with powerful mobile devices that can perform data processing and analytics. We envision unprecedented opportunities to leverage posts generated by ordinary people to build a real-time Sensing and analytic system for gathering and circulating COVID-19 propagation data. Specifically, the vision of CovidSens attempts to answer the questions: How to distill reliable information on COVID-19 amid prevailing rumors and misinformation? How to inform the general public about the state of the spread in a timely and effective manner? How to leverage the computational power of edge devices to construct fully integrated edge-based Social Sensing platforms? In this vision paper, we discuss the roles of CovidSens and identify potential challenges in developing reliable Social Sensing-based risk alert systems. We envision that approaches originating from multiple disciplines can be effective in addressing the challenges. Finally, we outline a few research directions for future work in CovidSens.

  • CovidSens: a vision on reliable Social Sensing-based risk alerting systems for COVID-19 spread
    arXiv: Social and Information Networks, 2020
    Co-Authors: Md Tahmid Rashid, Dong Wang
    Abstract:

    With the spiraling pandemic of the Coronavirus Disease 2019 (COVID-19), it has become critically important to disseminate accurate and timely information about the disease. Due to the ubiquity of Internet connectivity and smart devices, Social Sensing is emerging as a dynamic Sensing paradigm to collect real-time observations from online users. In this vision paper, we propose CovidSens, the concept of Social-Sensing-based risk alerting systems to notify the general public about the COVID-19 spread. The CovidSens concept is motivated by two recent observations: 1) people have been actively sharing their state of health and experience of COVID-19 via online Social media, and 2) official warning channels and news agencies are relatively slower than people reporting their observations and experiences about COVID-19 on Social media. We anticipate an unprecedented opportunity to leverage the posts generated by Social media users to build a real-time analytic system for gathering and circulating vital information about the COVID-19 propagation. Specifically, the vision of CovidSens attempts to answer the following questions: How to track the spread of COVID-19? How to distill reliable information about the disease amid the prevailing rumors and misinformation on Social media? How to inform the general public about the latest state of the spread in a timely and effective manner, and alert them to remain prepared? In this vision paper, we discuss the roles of CovidSens and identify the potential challenges in implementing reliable Social-Sensing-based risk alerting systems. We envision that approaches originating from multiple disciplines (e.g. estimation theory, machine learning, constrained optimization) can be effective in addressing the challenges. Finally, we outline a few research directions for future work in CovidSens.

  • SEAD: towards a Social-media-driven energy-aware drone Sensing framework
    International Conference on Parallel and Distributed Systems, 2019
    Co-Authors: Tahmid Rashid, Daniel Yue Zhang, Lanyu Shang, Dong Wang
    Abstract:

    Autonomous unmanned aerial vehicles (UAVs) have become an important tool for efficient disaster response. Despite the virtues of UAVs in disaster response applications, various limitations (e.g., requiring manual input, finite battery life) hinder their mass adoption. In contrast, Social Sensing is emerging as a new Sensing paradigm that utilizes signals provided by "human sensors" to gather awareness of events occurring in the physical world. Although Social Sensing is inherently broader in scope, a shortcoming is the reliability of the Sensing data contributed by humans. In this paper, we introduce the concept of jointly exploiting the reliability of drones and the scope of Social Sensing to efficiently uncover truthful events during disasters. However, such a tight integration of Social and physical Sensing introduces several technical challenges. The first challenge is satisfying the conflicting objectives of event coverage of the application and energy conservation of drones. The second challenge is adapting to the dynamics of the physical world and Social media. In this paper, we present a Social-media-driven Energy-Aware Drone (SEAD) Sensing framework to address the above challenges. In particular, we develop a reinforcement learning-based drone dispatching scheme that adapts to the physical and Social environments and launches an appropriate proportion of drones for event exploration. We further utilize a bottom-up game-theoretic task allocation approach to guide drones effectively to the event locations. The evaluation with a real-world disaster case study shows that SEAD noticeably outperforms state-of-the-art baselines in terms of detection effectiveness and energy efficiency.
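
    The dispatching idea lends itself to a compact illustration. Below is a generic tabular Q-learning dispatcher, not the SEAD scheme itself: the coarse state encoding, the reward that trades event coverage against battery use, and the candidate launch fractions are all assumptions made for the example.

```python
import random
from collections import defaultdict

# Hypothetical Q-learning dispatcher sketch (not the SEAD algorithm).
# State: a coarse pair such as (social-signal intensity level, fleet battery level).
# Action: the fraction of currently idle drones to launch.
ACTIONS = [0.0, 0.25, 0.5, 1.0]

class DispatchAgent:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)   # Q[(state, action)] -> estimated value
        self.alpha = alpha            # learning rate
        self.gamma = gamma            # discount factor
        self.epsilon = epsilon        # exploration rate

    def act(self, state):
        # epsilon-greedy choice of the launch fraction
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        # reward is assumed to combine events covered minus energy spent
        best_next = max(self.q[(next_state, a)] for a in ACTIONS)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
```

    In the paper, the launched drones are then guided to event locations by a bottom-up game-theoretic task allocation step, which this sketch does not cover.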

  • An online reinforcement learning approach to quality-cost-aware task allocation for multi-attribute Social Sensing
    Pervasive and Mobile Computing, 2019
    Co-Authors: Yang Zhang, Nathan Vance, Daniel Zhang, Dong Wang
    Abstract:

    Social Sensing has emerged as a new Sensing paradigm where humans (or devices on their behalf) collectively report measurements about the physical world. This paper focuses on a quality-cost-aware task allocation problem in multi-attribute Social Sensing applications. The goal is to identify a task allocation strategy (i.e., decide when and where to collect Sensing data) to achieve an optimized tradeoff between the data quality and the Sensing cost. While recent progress has been made to tackle similar problems, three important challenges have not been well addressed: (i) “online task allocation”: the task allocation schemes need to respond quickly to the potentially large dynamics of the measured variables in Social Sensing; (ii) “multi-attribute constrained optimization”: minimizing the overall Sensing error given the dependencies and constraints of multiple attributes of the measured variables is a non-trivial problem to solve; (iii) “nonuniform task allocation cost”: the task allocation cost in Social Sensing often has a nonuniform distribution which adds additional complexity to the optimized task allocation problem. This paper develops a Quality-Cost-Aware Online Task Allocation (QCO-TA) scheme to address the above challenges using a principled online reinforcement learning framework. We evaluate the QCO-TA scheme through a real-world Social Sensing application and the results show that our scheme significantly outperforms the state-of-the-art baselines in terms of both Sensing accuracy and cost.
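
    To give a rough feel for the quality-cost tradeoff, the sketch below is a plain epsilon-greedy allocator that ranks candidate Sensing cells by estimated quality gain per unit cost under a budget. It is a simplified stand-in, not the QCO-TA scheme; the cell, cost, and budget names are illustrative and costs are assumed positive.

```python
import random

# Simplified quality-cost-aware allocator (hypothetical; not QCO-TA).
# Each "cell" is a candidate (time, location) to sense; allocating it yields a
# noisy quality gain and incurs a nonuniform, strictly positive cost.
class EpsilonGreedyAllocator:
    def __init__(self, cells, costs, budget, epsilon=0.1):
        self.cells = list(cells)
        self.costs = costs                      # nonuniform cost per cell
        self.budget = budget                    # remaining sensing budget
        self.epsilon = epsilon                  # exploration rate
        self.value = {c: 0.0 for c in cells}    # running quality estimate
        self.count = {c: 0 for c in cells}

    def select(self):
        # explore occasionally; otherwise pick the best quality-per-cost cell
        affordable = [c for c in self.cells if self.costs[c] <= self.budget]
        if not affordable:
            return None
        if random.random() < self.epsilon:
            return random.choice(affordable)
        return max(affordable, key=lambda c: self.value[c] / self.costs[c])

    def update(self, cell, observed_quality):
        # incremental mean update of the quality estimate; pay the cost
        self.count[cell] += 1
        self.value[cell] += (observed_quality - self.value[cell]) / self.count[cell]
        self.budget -= self.costs[cell]
```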

Yu Liu - One of the best experts on this subject based on the ideXlab platform.

  • Quantifying urban areas with multi-source data based on percolation theory
    Remote Sensing of Environment, 2020
    Co-Authors: Wenpu Cao, Lei Dong, Yu Liu
    Abstract:

    Quantifying urban areas is crucial for addressing associated urban issues such as environmental and sustainability problems. Remote Sensing data, especially nighttime light images, have been widely used to delineate urbanized areas across the world. Meanwhile, some emerging urban data, such as volunteered geographical information (e.g., OpenStreetMap) and Social Sensing data (e.g., mobile phone and Social media), have also shown great potential in revealing urban boundaries and dynamics. However, consistent and robust methods to quantify urban areas from these multi-source data have remained elusive. Here, we propose a percolation-based method to extract urban areas from these multi-source urban data. We derive the optimal urban/non-urban threshold by considering the critical nature of urban systems with the support of percolation theory. Furthermore, we apply the method with three open-source datasets – population, road, and nighttime light – to 28 countries. We show that the proposed method captures similar urban characteristics across the multi-source data, and Zipf's law holds well in most countries. The accuracy of the urban areas derived from the different datasets has been validated against Landsat-based reference data in 10 cities, and the accuracy can be further improved through data fusion (κ = 0.69–0.85, mean κ = 0.78). Our study not only provides an efficient method to quantify urban areas with open-source data, but also deepens the understanding of urban systems and sheds some light on multi-source data fusion in geographical fields.
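
    One way to make the percolation idea concrete: sweep candidate intensity thresholds over a gridded dataset, label the connected clusters at each threshold, and take the threshold where the second-largest cluster peaks, a common proxy for the critical point in percolation analysis. The sketch below is a minimal illustration under that assumption, not the published procedure.

```python
import numpy as np
from scipy import ndimage

def critical_threshold(density, thresholds):
    """Illustrative percolation-style thresholding of a gridded intensity map
    (population, road, or nighttime-light density). For each threshold the
    grid is binarized, connected clusters are labeled, and the size of the
    second-largest cluster is tracked; its peak is taken as the urban/non-urban
    threshold, and the clusters above it are the extracted urban areas."""
    best_t, best_second = thresholds[0], -1
    for t in thresholds:
        labels, n = ndimage.label(density >= t)           # 4-connected clusters
        if n < 2:
            continue
        sizes = np.sort(np.bincount(labels.ravel())[1:])  # drop the background
        if sizes[-2] > best_second:
            best_t, best_second = t, sizes[-2]
    return best_t

# example: threshold a random density surface over 100 candidate levels
grid = np.random.rand(200, 200)
print(critical_threshold(grid, np.linspace(0.05, 0.95, 100)))
```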

  • Quantifying urban areas with multi-source data based on percolation theory
    'Elsevier BV', 2020
    Co-Authors: Cao Wenpu, Dong Lei, Wu Lun, Yu Liu
    Abstract:

    Quantifying urban areas is crucial for addressing associated urban issues such as environmental and sustainability problems. Remote Sensing data, especially nighttime light images, have been widely used to delineate urbanized areas across the world. Meanwhile, some emerging urban data, such as volunteered geographical information (e.g., OpenStreetMap) and Social Sensing data (e.g., mobile phone and Social media), have also shown great potential in revealing urban boundaries and dynamics. However, consistent and robust methods to quantify urban areas from these multi-source data have remained elusive. Here, we propose a percolation-based method to extract urban areas from these multi-source urban data. We derive the optimal urban/non-urban threshold by considering the critical nature of urban systems with the support of percolation theory. Furthermore, we apply the method with three open-source datasets - population, road, and nighttime light - to 28 countries. We show that the proposed method captures similar urban characteristics across the multi-source data, and Zipf's law holds well in most countries. The accuracy of the urban areas derived from the different datasets has been validated against Landsat-based reference data in 10 cities, and the accuracy can be further improved through data fusion ($\kappa=0.69-0.85$, mean $\kappa=0.78$). Our study not only provides an efficient method to quantify urban areas with open-source data, but also deepens the understanding of urban systems and sheds some light on multi-source data fusion in geographical fields. (Comment: Accepted for publication in Remote Sensing of Environment.)

  • Social Sensing from street-level imagery: a case study in learning spatio-temporal urban mobility patterns
    Isprs Journal of Photogrammetry and Remote Sensing, 2019
    Co-Authors: Fan Zhang, Di Zhu, Yu Liu
    Abstract:

    Street-level imagery now provides comprehensive coverage of urban landscapes. Compared to satellite imagery, this new source of image data has the advantage of fine-grained observation of not only the physical environment but also Social Sensing. Prior studies using street-level imagery focus primarily on auditing the urban physical environment. In this study, we demonstrate the potential of street-level imagery for uncovering spatio-temporal urban mobility patterns. Our method assumes that the streetscape depicted in street-level imagery reflects urban functions and that urban streets of similar functions exhibit similar temporal mobility patterns. We show how a deep convolutional neural network (DCNN) can be trained to identify high-level scene features from street view images that can explain up to 66.5% of the hourly variation of taxi trips along the urban road network. The study shows that street-level imagery, as the counterpart of remote Sensing imagery, provides an opportunity to infer fine-scale human activity information of an urban region and bridge gaps between the physical space and human space. This approach can therefore facilitate urban environment observation and smart urban planning.
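
    To illustrate the kind of analysis involved, the sketch below regresses hourly taxi-trip counts on precomputed DCNN scene features and reports the cross-validated explained variance per hour. The file names and the ridge regressor are assumptions made for the example; this is not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

# Hypothetical inputs: scene features extracted by a pretrained DCNN for each
# road segment, and each segment's observed 24-hour taxi-trip profile.
X = np.load("street_scene_features.npy")   # shape: (segments, feature_dim)
Y = np.load("hourly_taxi_trips.npy")       # shape: (segments, 24)

r2_per_hour = []
for hour in range(Y.shape[1]):
    model = RidgeCV(alphas=np.logspace(-3, 3, 13))
    r2 = cross_val_score(model, X, Y[:, hour], cv=5, scoring="r2").mean()
    r2_per_hour.append(r2)

# mean explained variance across hours, in the spirit of the reported ~66.5%
print("mean R^2:", float(np.mean(r2_per_hour)))
```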

  • Social Sensing: A New Approach to Understanding Our Socioeconomic Environments
    Annals of the Association of American Geographers, 2015
    Co-Authors: Yu Liu, Ye Zhi, Guanghua Chi, Chaogui Kang, Li Gong, Song Gao, Xi Liu, Li Shi
    Abstract:

    The emergence of big data brings new opportunities for us to understand our socioeconomic environments. We use the term Social Sensing for such individual-level big geospatial data and the associated analysis methods. The word Sensing highlights two aspects of the data. First, they can be viewed as the analogue and complement of remote Sensing, as big data can capture socioeconomic features well, a privilege that conventional remote Sensing data do not have. Second, in Social Sensing data, each individual plays the role of a sensor. This article conceptually bridges Social Sensing with remote Sensing and points out the major issues when applying Social Sensing data and associated analytics. We also suggest that Social Sensing data contain rich information about spatial interactions and place semantics, which go beyond the scope of traditional remote Sensing data. In the coming big data era, GIScientists should investigate theories in using Social Sensing data, such as data representativeness and quality, ...

Tarek Abdelzaher - One of the best experts on this subject based on the ideXlab platform.

  • The age of Social Sensing
    IEEE Computer, 2019
    Co-Authors: Dong Wang, Tarek Abdelzaher, Boleslaw K Szymanski, Lance Kaplan
    Abstract:

    Online Social media have democratized the broadcasting of information, encouraging users to view the world through the lens of Social networks. The exploitation of this lens, termed Social Sensing, presents challenges for researchers at the intersection of computer science and the Social sciences.

  • A constrained maximum likelihood estimator for unguided Social Sensing
    International Conference on Computer Communications, 2018
    Co-Authors: Huajie Shao, Lance Kaplan, Yiran Zhao, Chao Zhang, Lu Su, Tarek Abdelzaher
    Abstract:

    This paper develops a constrained expectation maximization algorithm (CEM) that improves the accuracy of truth estimation in unguided Social Sensing applications. Unguided Social Sensing refers to the act of leveraging naturally occurring observations on Social media as “sensor measurements”, when the sources post at will and not in response to specific Sensing campaigns or surveys. A key challenge in Social Sensing, in general, lies in estimating the veracity of reported observations, when the sources reporting these observations are of unknown reliability and their observations themselves cannot be readily verified. This problem is known as fact-finding. Unsupervised solutions have been proposed to the fact-finding problem that explore notions of internal data consistency in order to estimate observation veracity. This paper observes that unguided Social Sensing gives rise to a new (and very simple) constraint that dramatically reduces the space of feasible fact-finding solutions, hence significantly improving the quality of fact-finding results. The constraint relies on a simple approximate test of source independence, applicable to unguided Sensing, and incorporates information about the number of independent sources of an observation to constrain the posterior estimate of its probability of correctness. Two different approaches are developed to test the independence of sources for purposes of applying this constraint, leading to two flavors of the CEM algorithm, which we call CEM and CEM-Jaccard. We show using both simulation and real data sets collected from Twitter that by forcing the algorithm to converge to a solution in which the constraint is satisfied, the quality of solutions is significantly improved.
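
    For readers unfamiliar with EM-based fact-finding, the sketch below shows the unconstrained core of such an estimator: latent claim truth values are inferred in the E-step and per-source report rates are re-estimated in the M-step. The paper's contribution, a constraint that bounds a claim's posterior using its count of independent sources, is only indicated by a comment; the initial parameter values are illustrative.

```python
import numpy as np

def em_fact_finder(Z, iters=50, prior=0.5):
    """Plain EM fact-finder sketch (the CEM constraint step is omitted).
    Z[i, j] = 1 if source i reported claim j, else 0. Returns the posterior
    probability that each claim is true plus per-source report rates
    conditioned on the claim being true or false."""
    n_src, _ = Z.shape
    a = np.full(n_src, 0.6)          # P(report | claim true), illustrative init
    b = np.full(n_src, 0.3)          # P(report | claim false), illustrative init
    d = prior                        # prior probability that a claim is true
    for _ in range(iters):
        # E-step: posterior that each claim is true given all reports
        log_true = np.log(d) + Z.T @ np.log(a) + (1 - Z).T @ np.log(1 - a)
        log_false = np.log(1 - d) + Z.T @ np.log(b) + (1 - Z).T @ np.log(1 - b)
        p = 1.0 / (1.0 + np.exp(np.clip(log_false - log_true, -50, 50)))
        # (CEM would additionally clip p here based on the number of
        #  independent sources that reported the claim.)
        # M-step: re-estimate source behaviour and the claim prior
        a = (Z @ p + 1e-9) / (p.sum() + 2e-9)
        b = (Z @ (1 - p) + 1e-9) / ((1 - p).sum() + 2e-9)
        d = p.mean()
    return p, a, b
```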

  • The age of Social Sensing
    arXiv: Social and Information Networks, 2018
    Co-Authors: Dong Wang, Tarek Abdelzaher, Boleslaw K Szymanski, Lance Kaplan
    Abstract:

    Online Social media, such as Twitter and Instagram, democratized information broadcast, allowing anyone to share information about themselves and their surroundings at an unprecedented scale. The large volume of information thus posted on these media offers a new lens into the physical world through the eyes of the Social network. The exploitation of this lens to inspect aspects of world state has recently been termed Social Sensing. The power of manipulating reality via the use (or intentional misuse) of Social media has raised concerns, with issues ranging from radicalization by terror propaganda to potential manipulation of elections in mature democracies. Many important challenges and open research questions arise in this emerging field that aims to better understand how information can be extracted from the medium and what properties characterize the extracted information and the world it represents. Addressing the above challenges requires multi-disciplinary research at the intersection of computer science and Social sciences that combines cyber-physical computing, sociology, sensor networks, Social networks, cognition, data mining, estimation theory, data fusion, information theory, linguistics, machine learning, behavioral economics, and possibly others. This paper surveys important directions in Social Sensing, identifies current research challenges, and outlines avenues for future research.

  • On source dependency models for reliable Social Sensing: algorithms and fundamental error bounds
    International Conference on Distributed Computing Systems, 2016
    Co-Authors: Shuochao Yao, Lance Kaplan, Yiran Zhao, Aylin Yener, Tarek Abdelzaher
    Abstract:

    This paper develops a simplified dependency model for sources on Social networks that is shown to improve the quality of fact-finding -- assessing the veracity of observations shared on Social media. Recent literature developed a mathematical approach for exploiting Social networks, such as Twitter, as noisy sensor networks that report observations on the state of the physical world. It was shown that the quality of state estimation from such noisy data, known as fact-finding, was a function of assumptions made regarding the independence of sources or lack thereof. When sources propagate information they hear from others (without verification), correlated errors may arise that degrade fact-finding performance. This work advances the state of the art by developing a simplified model of dependencies between sources and designing an improved dependency-aware estimator to assess the veracity of observations, taking into account the observed dependency structure. A fundamental error bound is derived for this estimator to understand the gap between its performance and the optimum. It is shown that the new estimator outperforms state-of-the-art fact-finders and, in some cases, yields an accuracy close to the fundamental error bound.
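
    A very simple way to operationalize source dependency, purely for illustration and not the paper's model, is to group sources whose sets of reported claims overlap heavily, so that copied reports are not counted as independent confirmations before fact-finding:

```python
import numpy as np

def group_dependent_sources(Z, threshold=0.8):
    """Greedy grouping of likely-dependent sources (illustrative only).
    Z[i, j] = 1 if source i reported claim j. Sources whose claim sets have
    Jaccard similarity above the threshold share a group label; a fact-finder
    can then treat each group as a single effective source."""
    n = Z.shape[0]
    group = list(range(n))                       # greedy group labels
    for i in range(n):
        for j in range(i + 1, n):
            inter = np.logical_and(Z[i], Z[j]).sum()
            union = np.logical_or(Z[i], Z[j]).sum()
            if union > 0 and inter / union >= threshold:
                group[j] = group[i]              # j joins i's current group
    return group
```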

  • Social Trove: a self-summarizing storage service for Social Sensing
    International Conference on Autonomic Computing, 2015
    Co-Authors: Tanvir Al Amin, Tarek Abdelzaher, Raghu K Ganti, Shiguang Wang, Mudhakar Srivatsa, Muntasir Raihan Rahman, Panindra Tumkur Seetharamu, Indranil Gupta, Reaz Ahmed
    Abstract:

    The increasing availability of smartphones, cameras, and wearables with instant data sharing capabilities, and the exploitation of Social networks for information broadcast, heralds a future of real-time information overload. With the growing excess of worldwide streaming data, such as images, geotags, text annotations, and sensory measurements, data summarization will become an increasingly common service. The objective of such a service will be to obtain a representative sampling of large data streams at a configurable granularity, in real time, for subsequent consumption by a range of data-centric applications. This paper describes a general-purpose self-summarizing storage service, called Social Trove, for Social Sensing applications. The service summarizes data streams from human sources, or sensors in their possession, by hierarchically clustering received information in accordance with an application-specific distance metric. It then serves a sampling of the produced clusters at a configurable granularity in response to application queries. While Social Trove is a general service, we illustrate its functionality and evaluate it in the specific context of workloads collected from Twitter. Results show that Social Trove supports a high query throughput while maintaining low access latency to the produced real-time, application-specific data summaries. As a specific application case study, we implement a fact-finding service on top of Social Trove.
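
    The core summarization step can be pictured as follows: hierarchically cluster the stored items under an application-specific distance metric and return one representative per cluster at the requested granularity. This toy sketch uses SciPy's agglomerative clustering with Euclidean distance as stand-ins; it is not the Social Trove implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

def summarize(vectors, granularity):
    """Return indices of one representative item per cluster, where
    `granularity` is the number of clusters to cut the hierarchy into."""
    tree = linkage(vectors, method="average", metric="euclidean")
    labels = fcluster(tree, t=granularity, criterion="maxclust")
    reps = []
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        centroid = vectors[members].mean(axis=0)
        # the member closest to its cluster centroid serves as the sample
        closest = np.argmin(np.linalg.norm(vectors[members] - centroid, axis=1))
        reps.append(int(members[closest]))
    return reps

# example: summarize 1,000 random 16-dimensional items at two granularities
items = np.random.rand(1000, 16)
print(len(summarize(items, 10)), len(summarize(items, 50)))
```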

Yang Zhang - One of the best experts on this subject based on the ideXlab platform.

  • An online reinforcement learning approach to quality-cost-aware task allocation for multi-attribute Social Sensing
    Pervasive and Mobile Computing, 2019
    Co-Authors: Yang Zhang, Nathan Vance, Daniel Zhang, Dong Wang
    Abstract:

    Social Sensing has emerged as a new Sensing paradigm where humans (or devices on their behalf) collectively report measurements about the physical world. This paper focuses on a quality-cost-aware task allocation problem in multi-attribute Social Sensing applications. The goal is to identify a task allocation strategy (i.e., decide when and where to collect Sensing data) to achieve an optimized tradeoff between the data quality and the Sensing cost. While recent progress has been made to tackle similar problems, three important challenges have not been well addressed: (i) “online task allocation”: the task allocation schemes need to respond quickly to the potentially large dynamics of the measured variables in Social Sensing; (ii) “multi-attribute constrained optimization”: minimizing the overall Sensing error given the dependencies and constraints of multiple attributes of the measured variables is a non-trivial problem to solve; (iii) “nonuniform task allocation cost”: the task allocation cost in Social Sensing often has a nonuniform distribution which adds additional complexity to the optimized task allocation problem. This paper develops a Quality-Cost-Aware Online Task Allocation (QCO-TA) scheme to address the above challenges using a principled online reinforcement learning framework. We evaluate the QCO-TA scheme through a real-world Social Sensing application and the results show that our scheme significantly outperforms the state-of-the-art baselines in terms of both Sensing accuracy and cost.

  • On scalable and robust truth discovery in big data Social media Sensing applications
    IEEE Transactions on Big Data, 2019
    Co-Authors: Daniel Zhang, Dong Wang, Nathan Vance, Yang Zhang, Steven Mike
    Abstract:

    Identifying trustworthy information in the presence of noisy data contributed by numerous unvetted sources from online Social media (e.g., Twitter, Facebook, and Instagram) has been a crucial task in the era of big data. This task, referred to as truth discovery, aims to identify the reliability of the sources and the truthfulness of the claims they make without knowing either a priori. In this work, we identified three important challenges that have not been well addressed in the current truth discovery literature. The first one is “misinformation spread”, where a significant number of sources contribute to false claims, making the identification of truthful claims difficult. For example, on Twitter, rumors, scams, and influence bots are common examples of sources colluding, either intentionally or unintentionally, to spread misinformation and obscure the truth. The second challenge is “data sparsity”, or the “long-tail phenomenon”, where a majority of sources contribute only a small number of claims, providing insufficient evidence to determine those sources’ trustworthiness. For example, in the Twitter datasets that we collected during real-world events, more than 90 percent of sources contributed to only a single claim. Third, many current solutions are not scalable to large-scale Social Sensing events because of the centralized nature of their truth discovery algorithms. In this paper, we develop a Scalable and Robust Truth Discovery (SRTD) scheme to address the above three challenges. In particular, the SRTD scheme jointly quantifies both the reliability of sources and the credibility of claims using a principled approach. We further develop a distributed framework to implement the proposed truth discovery scheme using Work Queue in an HTCondor system. The evaluation results on three real-world datasets show that the SRTD scheme significantly outperforms the state-of-the-art truth discovery methods in terms of both effectiveness and efficiency.
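
    To convey the flavor of reliability-weighted truth discovery, without the paper's specific treatment of misinformation spread or its distributed Work Queue/HTCondor implementation, a bare-bones iterative loop might look like the sketch below; the smoothing constants are assumptions meant to damp the long-tail problem for sources with very few claims.

```python
import numpy as np

def iterative_truth_discovery(Z, iters=20, prior=0.5):
    """Illustrative loop (not the published SRTD algorithm).
    Z[i, j] = 1 if source i reported claim j. Claim credibility is a
    reliability-weighted vote; source reliability is the average credibility
    of the claims it reported, smoothed by a uniform prior so that sources
    with only one or two claims are not over-trusted."""
    n_src, n_clm = Z.shape
    reliability = np.full(n_src, 0.8)
    for _ in range(iters):
        votes = (Z * reliability[:, None]).sum(axis=0)
        support = Z.sum(axis=0)
        credibility = (votes + prior) / (support + 1.0)
        reported = Z.sum(axis=1)
        reliability = (Z @ credibility + prior) / (reported + 1.0)
    return credibility, reliability
```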

  • DeepRisk: a deep transfer learning approach to migratable traffic risk estimation in intelligent transportation using Social Sensing
    Distributed Computing in Sensor Systems, 2019
    Co-Authors: Yang Zhang, Daniel Zhang, Hongxiao Wang, D Wang
    Abstract:

    This paper focuses on the migratable traffic risk estimation problem in intelligent transportation systems using Social (human-centric) Sensing. The goal is to accurately estimate the traffic risk of a target area where ground truth traffic accident reports are not available by leveraging an estimation model from a source area where such data are available. Two important challenges exist. The first challenge lies in the discrepancy between the source and target areas (e.g., layouts, road conditions, and local regulations); such discrepancy prevents a direct application of a model from the source area to the target area. The second challenge lies in the difficulty of identifying all potential features for the migratable traffic risk estimation problem and of deciding the importance of the identified features, given the lack of ground truth labels in the target area. To address these challenges, we develop DeepRisk, a Social Sensing-based migratable traffic risk estimation scheme using deep transfer learning techniques. The evaluation results on a real-world dataset in New York City show that DeepRisk significantly outperforms the state-of-the-art baselines in accurately estimating the traffic risk of locations in a city.
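
    As a much-simplified stand-in for the migration idea, not the deep transfer model in the paper, the sketch below trains a risk regressor on the labeled source city and applies it to the unlabeled target city after standardizing each city's features separately so their distributions roughly align; the regressor choice and feature handling are assumptions.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def standardize(X):
    # per-feature standardization within one city (a crude form of alignment)
    return (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)

def migrate_risk_model(source_X, source_risk, target_X):
    """Train on the source city, where accident-report labels exist, then
    score the unlabeled target city (illustrative; not DeepRisk)."""
    model = GradientBoostingRegressor()
    model.fit(standardize(source_X), source_risk)
    return model.predict(standardize(target_X))
```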

  • Privacy-aware edge computing in Social Sensing applications using ring signatures
    International Conference on Parallel and Distributed Systems, 2018
    Co-Authors: Nathan Vance, Yang Zhang, Daniel Yue Zhang, Dong Wang
    Abstract:

    Privacy on the Internet has become a major concern in recent years and poses an ethical dilemma for Internet-based applications. Social Sensing based Edge Computing (SSEC) is a category of distributed applications in which privately owned devices at the “edge” of the network participate in data collection and processing. Traditionally in SSEC, it has been difficult to manage privacy without making a tradeoff with performance or accuracy. In this paper, we present Privacy-aware Edge Computing (PEC), a framework that utilizes ring signatures for implementing Social Sensing applications with full privacy and without sacrificing data quality. Under PEC, privacy is managed by the edge devices without the need for a third-party trusted authority. We evaluate the performance of PEC on a real-world edge computing platform and through extensive simulation studies. The results demonstrate that PEC achieves a high level of anonymity with a constrained cryptographic overhead.

  • Optimizing online task allocation for multi-attribute Social Sensing
    International Conference on Computer Communications and Networks, 2018
    Co-Authors: Yang Zhang, Nathan Vance, Daniel Zhang, Dong Wang
    Abstract:

    Social Sensing has emerged as a new Sensing paradigm where humans (or devices on their behalf) collectively report measurements about the physical world. This paper focuses on an optimized task allocation problem in multi-attribute Social Sensing applications, where the goal is to effectively allocate the tasks of collecting multiple attributes of the measured variables to human sensors while respecting the application's budget constraints. While recent progress has been made to tackle the optimized task allocation problem, two important challenges have not been well addressed. The first challenge is "online task allocation": the task allocation schemes need to respond quickly to the potentially large dynamics of the measured variables (e.g., temperature, noise, traffic) in Social Sensing. Delayed task allocation may lead to inaccurate Sensing results and/or unnecessarily high Sensing costs. The second challenge is the "multi-attribute constrained optimization": minimizing the overall Sensing error given the dependencies and constraints of multiple attributes of the measured variables is a non-trivial problem to solve. To address the above challenges, this paper develops an Online Optimized Multi-attribute Task Allocation (OO-MTA) scheme inspired by techniques from machine learning and information theory. We evaluate the OO-MTA scheme using an urban Sensing dataset collected from a real-world Social Sensing application. The evaluation results show that the OO-MTA scheme significantly outperforms the state-of-the-art baselines in terms of Sensing accuracy.

Daniel Zhang - One of the best experts on this subject based on the ideXlab platform.

  • An online reinforcement learning approach to quality-cost-aware task allocation for multi-attribute Social Sensing
    Pervasive and Mobile Computing, 2019
    Co-Authors: Yang Zhang, Nathan Vance, Daniel Zhang, Dong Wang
    Abstract:

    Social Sensing has emerged as a new Sensing paradigm where humans (or devices on their behalf) collectively report measurements about the physical world. This paper focuses on a quality-cost-aware task allocation problem in multi-attribute Social Sensing applications. The goal is to identify a task allocation strategy (i.e., decide when and where to collect Sensing data) to achieve an optimized tradeoff between the data quality and the Sensing cost. While recent progress has been made to tackle similar problems, three important challenges have not been well addressed: (i) “online task allocation”: the task allocation schemes need to respond quickly to the potentially large dynamics of the measured variables in Social Sensing; (ii) “multi-attribute constrained optimization”: minimizing the overall Sensing error given the dependencies and constraints of multiple attributes of the measured variables is a non-trivial problem to solve; (iii) “nonuniform task allocation cost”: the task allocation cost in Social Sensing often has a nonuniform distribution which adds additional complexity to the optimized task allocation problem. This paper develops a Quality-Cost-Aware Online Task Allocation (QCO-TA) scheme to address the above challenges using a principled online reinforcement learning framework. We evaluate the QCO-TA scheme through a real-world Social Sensing application and the results show that our scheme significantly outperforms the state-of-the-art baselines in terms of both Sensing accuracy and cost.

  • CollabDrone: a collaborative spatiotemporal-aware drone Sensing system driven by Social Sensing signals
    International Conference on Computer Communications and Networks, 2019
    Co-Authors: Tahmid Rashid, Daniel Zhang, Zhiyu Liu, Hai Lin, Dong Wang
    Abstract:

    While autonomous unmanned aerial vehicles (UAVs) have attained a reputable stance in modern disaster response applications, their practical adoption is impeded by various constraints (e.g., requiring manual input, battery life). In this paper, we develop a novel spatiotemporal-aware drone Sensing system that is driven by harnessing Social media signals, a process known as Social Sensing. Social Sensing has emerged as a new Sensing paradigm where humans act as "sensors" to report their observations about the physical world. However, maneuvering drones with "Social signals" introduces a new realm of challenges. The first challenge is to drive the drones by leveraging noisy and unreliable Social media signals. The second challenge is to optimize the drone deployment by exploring the highly dynamic and latent correlations among event locations. In this paper, we present CollabDrone, which devises a new spatiotemporal correlation inference model and a game-theoretic drone dispatching mechanism to address the above challenges. The evaluation results on a real-world case study show that CollabDrone significantly outperforms current drone and Social Sensing baselines in terms of accuracy and deadline hit rate.

  • Towards reliability in online high-churn edge computing: a deviceless pipelining approach
    IEEE International Conference on Smart Computing, 2019
    Co-Authors: Nathan Vance, Daniel Zhang, Tahmid Rashid, Dong Wang
    Abstract:

    Social Sensing based Edge Computing (SSEC) is an emerging application paradigm for rich context awareness in Sensing applications, in which people collaborate in both data acquisition and processing at the edge of the network. While keeping people in the loop can be an immense benefit, the unreliability and churn introduced can pose a serious stability threat for long-running SSEC applications. In the past, this problem was addressed by offloading these responsibilities to more reliable server hardware in the cloud or a fog layer; however, this does not take full advantage of the power at the edge. In this paper, we address the issue of reliable edge computing on dynamic, high-churn edge systems. We develop a deviceless pipeline-based approach (DPA) to establish workflows in which stages of the analysis pipeline are completed on edge devices, and any devices that leave the system can be replaced without data loss. We evaluate the performance of our system on a real-world edge computing system performing an object detection application and demonstrate that it can provide significant performance gains over traditional computation offloading schemes in terms of throughput and error recovery.
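
    The requeue-on-churn idea can be illustrated with a toy stage abstraction (hypothetical, not the paper's DPA): devices claim items from a stage's queue, push results to the next stage on completion, and anything claimed by a device that subsequently leaves is simply put back for a replacement device to redo.

```python
import queue

class Stage:
    """One stage of a toy deviceless pipeline (illustrative only)."""
    def __init__(self, fn):
        self.fn = fn                  # the processing step for this stage
        self.pending = queue.Queue()  # work not yet claimed by any device

    def claim(self):
        # a device claims the next item; a coordinator tracks it until done
        return self.pending.get()

    def complete(self, item, next_stage=None):
        # the claiming device finished the item; forward the result downstream
        result = self.fn(item)
        if next_stage is not None:
            next_stage.pending.put(result)
        return result

    def requeue(self, item):
        # called when the claiming device leaves before completing the item
        self.pending.put(item)

# example: a two-stage pipeline (detect, then annotate) on one work item
detect = Stage(lambda frame: {"frame": frame, "objects": ["car"]})
annotate = Stage(lambda d: {**d, "annotated": True})
detect.pending.put("frame-001")
item = detect.claim()
detect.complete(item, next_stage=annotate)
print(annotate.claim())
```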

  • On scalable and robust truth discovery in big data Social media Sensing applications
    IEEE Transactions on Big Data, 2019
    Co-Authors: Daniel Zhang, Dong Wang, Nathan Vance, Yang Zhang, Steven Mike
    Abstract:

    Identifying trustworthy information in the presence of noisy data contributed by numerous unvetted sources from online Social media (e.g., Twitter, Facebook, and Instagram) has been a crucial task in the era of big data. This task, referred to as truth discovery, aims to identify the reliability of the sources and the truthfulness of the claims they make without knowing either a priori. In this work, we identified three important challenges that have not been well addressed in the current truth discovery literature. The first one is “misinformation spread”, where a significant number of sources contribute to false claims, making the identification of truthful claims difficult. For example, on Twitter, rumors, scams, and influence bots are common examples of sources colluding, either intentionally or unintentionally, to spread misinformation and obscure the truth. The second challenge is “data sparsity”, or the “long-tail phenomenon”, where a majority of sources contribute only a small number of claims, providing insufficient evidence to determine those sources’ trustworthiness. For example, in the Twitter datasets that we collected during real-world events, more than 90 percent of sources contributed to only a single claim. Third, many current solutions are not scalable to large-scale Social Sensing events because of the centralized nature of their truth discovery algorithms. In this paper, we develop a Scalable and Robust Truth Discovery (SRTD) scheme to address the above three challenges. In particular, the SRTD scheme jointly quantifies both the reliability of sources and the credibility of claims using a principled approach. We further develop a distributed framework to implement the proposed truth discovery scheme using Work Queue in an HTCondor system. The evaluation results on three real-world datasets show that the SRTD scheme significantly outperforms the state-of-the-art truth discovery methods in terms of both effectiveness and efficiency.

  • DeepRisk: a deep transfer learning approach to migratable traffic risk estimation in intelligent transportation using Social Sensing
    Distributed Computing in Sensor Systems, 2019
    Co-Authors: Yang Zhang, Daniel Zhang, Hongxiao Wang, D Wang
    Abstract:

    This paper focuses on the migratable traffic risk estimation problem in intelligent transportation systems using Social (human-centric) Sensing. The goal is to accurately estimate the traffic risk of a target area where ground truth traffic accident reports are not available by leveraging an estimation model from a source area where such data are available. Two important challenges exist. The first challenge lies in the discrepancy between the source and target areas (e.g., layouts, road conditions, and local regulations); such discrepancy prevents a direct application of a model from the source area to the target area. The second challenge lies in the difficulty of identifying all potential features for the migratable traffic risk estimation problem and of deciding the importance of the identified features, given the lack of ground truth labels in the target area. To address these challenges, we develop DeepRisk, a Social Sensing-based migratable traffic risk estimation scheme using deep transfer learning techniques. The evaluation results on a real-world dataset in New York City show that DeepRisk significantly outperforms the state-of-the-art baselines in accurately estimating the traffic risk of locations in a city.