Trustworthiness Data

The Experts below are selected from a list of 33 Experts worldwide, ranked by the ideXlab platform

Anil K. Gupta - One of the best experts on this subject based on the ideXlab platform.

  • Perceived Trustworthiness Within the Organization: The Moderating Impact of Communication Frequency on Trustor and Trustee Effects
    Organization Science, 2003
    Co-Authors: Manuel Becerra, Anil K. Gupta
    Abstract:

    This paper investigates the antecedents of intraorganizational trust and, more specifically, how the frequency of communication between trustor and trustee moderates the impact of these factors on perceived trustworthiness. Data on 157 dyadic relationships among 50 senior managers within a multinational corporation confirm that the effect of both trustor and trustee characteristics on the level of perceived trustworthiness is moderated by the frequency of communication between the two parties. As communication frequency increases, the trustor's general attitudinal predisposition towards peers becomes less important as a determinant of his/her evaluation of the trustworthiness of other managers within the organization. In contrast, as communication frequency increases, the trustor's and trustee's contexts within the organization become more important determinants of perceived trustworthiness.
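
The moderation effect described in this abstract can be illustrated with a simple interaction-term regression. The sketch below is not the authors' analysis; the variable names, the coefficients, and the synthetic data are illustrative assumptions only.

```python
# Illustrative sketch (not the paper's model): a moderated regression in which
# communication frequency interacts with the trustor's general predisposition.
# Variable names, coefficients, and the synthetic data are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 157  # same number of dyads as in the study; the data here are synthetic

predisposition = rng.normal(size=n)         # trustor's general attitude toward peers
comm_frequency = rng.uniform(0, 1, size=n)  # frequency of communication in the dyad

# Hypothetical data-generating process: the predisposition effect weakens as
# communication frequency increases (negative interaction coefficient).
trustworthiness = (0.8 * predisposition
                   - 0.7 * predisposition * comm_frequency
                   + 0.5 * comm_frequency
                   + rng.normal(scale=0.3, size=n))

X = sm.add_constant(np.column_stack([predisposition,
                                     comm_frequency,
                                     predisposition * comm_frequency]))
model = sm.OLS(trustworthiness, X).fit()

# A negative coefficient on the interaction term is the signature of the
# moderation pattern described above: predisposition matters less for
# perceived trustworthiness when communication is frequent.
print(model.params)
```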

Manuel Becerra - One of the best experts on this subject based on the ideXlab platform.

Ali Mahdi Ali Al-salim - One of the best experts on this subject based on the ideXlab platform.

  • Energy Efficient Big Data Networks
    2018
    Co-Authors: Ali Mahdi Ali Al-salim
    Abstract:

    The continuous increase of big data applications in number and types creates new challenges that should be tackled by the green ICT community. Data scientists classify big data into four main categories (4Vs): Volume (with direct implications on power needs), Velocity (with impact on delay requirements), Variety (with varying CPU requirements and reduction ratios after processing) and Veracity (with cleansing and backup constraints). Each V poses many challenges that confront the energy efficiency of the underlying networks carrying big data traffic.

    In this work, we investigated the impact of the big data 4Vs on energy efficient bypass IP over WDM networks. The investigation is carried out by developing Mixed Integer Linear Programming (MILP) models that encapsulate the distinctive features of each V. In our analyses, the big data network is greened by progressively processing big data raw traffic at strategic locations, dubbed processing nodes (PNs), built into the network along the path from big data sources to the data centres. At each PN, raw data is processed and lower-rate useful information is extracted progressively, eventually reducing the network power consumption. For each V, we conducted an in-depth analysis and evaluated the network power saving that can be achieved by the energy efficient big data network compared to the classical approach.

    Along the volume dimension of big data, the work dealt with optimally handling and processing an enormous amount of big data chunks and extracting the corresponding knowledge carried by those chunks, transmitting knowledge instead of data, thus reducing the data volume and saving power. Variety means that there are different types of big data, such as CPU-intensive, memory-intensive, input/output (IO)-intensive, CPU-memory-intensive, CPU-IO-intensive, and memory-IO-intensive applications. Each type requires a different amount of processing, memory, storage, and networking resources. The processing of different varieties of big data was optimised with the goal of minimising power consumption.

    In the velocity dimension, we classified the processing velocity of big data into two modes: an expedited-data processing mode and a relaxed-data processing mode. Expedited data demanded a higher amount of computational resources to reduce the execution time compared to relaxed data. The big data processing and transmission were optimised, given the velocity dimension, to reduce power consumption. Veracity specifies trustworthiness, data protection, data backup, and data cleansing constraints. We considered the implementation of data cleansing and backup operations prior to big data processing so that big data is cleansed and ready to enter the big data analytics stage.

    The analysis was carried out through dedicated scenarios considering the influence of each V's characteristic parameters. For the set of network parameters we considered, our results for network energy efficiency under the volume, variety, velocity and veracity scenarios revealed that the energy efficient big data networks approach can achieve network power savings of up to 52%, 47%, 60% and 58%, respectively, compared to the classical approach.
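
The volume-dimension trade-off described above (processing chunks at intermediate PNs so that only extracted knowledge is transported onward) can be sketched as a toy optimisation. The model below is an illustrative assumption, not one of the MILP formulations developed in the thesis, and all parameter values are made up for demonstration.

```python
# Toy MILP sketch, loosely in the spirit of the progressive-processing idea:
# for each big data chunk, decide whether to process it at an intermediate
# processing node (PN) or ship it raw to the data centre, minimising power.
# The model and every parameter value here are illustrative assumptions.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

chunks = {"c1": 40, "c2": 25, "c3": 60}  # chunk id -> volume in Gb (assumed)
reduction_ratio = 0.2        # fraction of volume left after knowledge extraction
transport_w_per_gb = 1.5     # assumed network power cost per Gb carried end to end
processing_w_per_gb = 0.8    # assumed PN power cost per Gb processed
pn_capacity_gb = 80          # total volume the PN can process

prob = LpProblem("progressive_processing", LpMinimize)
process = {c: LpVariable(f"process_{c}", cat="Binary") for c in chunks}

# Objective: processed chunks pay processing cost plus transport of the reduced
# volume; unprocessed chunks pay transport of the full raw volume.
prob += lpSum(
    process[c] * (processing_w_per_gb * v + transport_w_per_gb * v * reduction_ratio)
    + (1 - process[c]) * transport_w_per_gb * v
    for c, v in chunks.items()
)

# PN capacity constraint.
prob += lpSum(process[c] * v for c, v in chunks.items()) <= pn_capacity_gb

prob.solve()
for c in chunks:
    print(c, "process at PN" if value(process[c]) == 1 else "send raw")
print("total power:", value(prob.objective))
```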

Love Ekenberg - One of the best experts on this subject based on the ideXlab platform.

  • Learning from Information Crises: Exploring Aggregated Trustworthiness in Big Data Production
    European Conference on Computer Supported Cooperative Work, 2015
    Co-Authors: Karin Hansson, Love Ekenberg
    Abstract:

    In a crisis situation, when traditional venues for information dissemination are not reliable and information is needed immediately, "aggregated trustworthiness", i.e. data verification through network evaluation and social validation, becomes an important alternative. However, the risk of evaluating credibility through trust and network reputation is that the perspective can become biased. In these socially distributed information systems, it is therefore particularly important to understand how data is socially produced, and by whom. The purpose of the research project presented in this position paper is to explore how patterns of bias in online information production can be made more transparent through tools that analyze and visualize aggregated trustworthiness. The research project consists of two interconnected parts. We will first look into a recent crisis situation, the case of Red Hook after Hurricane Sandy, to see how the dissemination of information took place during the recovery work, focusing on questions of credibility and trust. Thereafter, this case study will inform the design of two collaborative tools in which we investigate how social validation processes can be made more transparent.
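
One way to make the idea of aggregated trustworthiness concrete is a reputation-weighted endorsement score: a report counts as more trustworthy when accounts with good network reputation confirm it. The scoring rule, the names, and the numbers below are illustrative assumptions, not the tools the authors propose.

```python
# Illustrative sketch of one possible "aggregated trustworthiness" score for a
# reported item: endorsements are weighted by the reputation of the endorsing
# accounts. The scoring rule and all values are assumptions for demonstration.
from dataclasses import dataclass

@dataclass
class Endorsement:
    source: str        # account that confirmed or disputed the report
    reputation: float  # 0..1, e.g. derived from network position / past accuracy
    confirms: bool     # True = confirms the report, False = disputes it

def aggregated_trustworthiness(endorsements: list[Endorsement]) -> float:
    """Reputation-weighted share of confirming endorsements, in [0, 1]."""
    total = sum(e.reputation for e in endorsements)
    if total == 0:
        return 0.5  # no signal: neutral score
    confirming = sum(e.reputation for e in endorsements if e.confirms)
    return confirming / total

# Example: a report about road access after a storm, confirmed by two
# well-connected local accounts and disputed by one low-reputation account.
report_endorsements = [
    Endorsement("local_relief_org", 0.9, True),
    Endorsement("neighbour_account", 0.6, True),
    Endorsement("anonymous_account", 0.2, False),
]
print(aggregated_trustworthiness(report_endorsements))  # ~0.88
```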

Karin Hansson - One of the best experts on this subject based on the ideXlab platform.
