Vulnerability Data

The experts below are selected from a list of 309 experts worldwide, ranked by the ideXlab platform.

Fabio Massacci - One of the best experts on this subject based on the ideXlab platform.

  • Security Events and Vulnerability Data for Cybersecurity Risk Estimation
    Risk Analysis, 2017
    Co-Authors: Luca Allodi, Fabio Massacci
    Abstract:

    Current industry standards for estimating cybersecurity risk are based on qualitative risk matrices rather than quantitative risk estimates. In contrast, risk assessment in most other industry sectors aims at deriving quantitative risk estimations (e.g., Basel II in finance). This article presents a model and methodology to leverage the large amount of data available from the IT infrastructure of an organization's security operation center to quantitatively estimate the probability of attack. Our methodology specifically addresses untargeted attacks delivered by automatic tools, which make up the vast majority of attacks in the wild against users and organizations. We consider two-stage attacks whereby the attacker first breaches an Internet-facing system and then escalates the attack to internal systems by exploiting local vulnerabilities in the target. Our methodology factors in the power of the attacker as the number of "weaponized" vulnerabilities he or she can exploit, and can be adjusted to match the risk appetite of the organization. We illustrate our methodology using data from a large financial institution, and discuss the significant mismatch between traditional qualitative risk assessments and our quantitative approach.
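
A minimal Monte Carlo sketch of the two-stage idea described above, under assumptions of my own (hosts modeled as sets of vulnerability IDs, the attacker's arsenal drawn uniformly from a weaponized pool); this is an illustration, not the authors' published estimator:

```python
import random

def p_two_stage_compromise(perimeter_vulns, internal_vulns, weaponized,
                           arsenal_size, trials=10_000, seed=0):
    """Estimate the probability that an untargeted attacker breaches the
    Internet-facing host AND escalates to the internal host.

    Each host is a set of vulnerability IDs; the attacker's "power" is the
    number of weaponized exploits (arsenal_size) drawn at random from the
    weaponized pool. Illustrative simplification, not the paper's model.
    """
    rng = random.Random(seed)
    pool = sorted(weaponized)
    k = min(arsenal_size, len(pool))
    hits = 0
    for _ in range(trials):
        arsenal = set(rng.sample(pool, k))
        # stage 1: breach the perimeter; stage 2: exploit a local flaw
        if arsenal & perimeter_vulns and arsenal & internal_vulns:
            hits += 1
    return hits / trials
```

In this sketch, raising `arsenal_size` (attacker power) raises the estimate, which is one place the organization's risk appetite could enter.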

  • An automatic method for assessing the versions affected by a Vulnerability
    Empirical Software Engineering, 2016
    Co-Authors: Viet Hung Nguyen, Stanislav Dashevskyi, Fabio Massacci
    Abstract:

    Vulnerability data sources are used by academics to build models, and by industry and government to assess compliance. Errors in such data sources are therefore not only threats to validity in scientific studies, but may also cause organizations that rely on retro versions of software to lose compliance. In this work, we propose an automated method to determine the code evidence for the presence of vulnerabilities in retro software versions. The method scans the code base of each retro version for this evidence to determine whether the version is vulnerable or not. It identifies the lines of code that were changed to fix vulnerabilities; if an earlier version contains these deleted lines, it is highly likely that this version is vulnerable. To show the scalability of the method we performed a large-scale experiment on Chrome and Firefox (spanning 7,236 vulnerable files and approximately 9,800 vulnerabilities) against the National Vulnerability Database (NVD). The elimination of spurious vulnerability claims (e.g., entries in a vulnerability database such as NVD) found by our method may change the conclusions of studies on the prevalence of foundational vulnerabilities.
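
The core check can be sketched in a few lines: given the lines a security fix deleted, test whether a retro version's source still contains them. The function name and the whitespace normalization are my own, not the paper's tool:

```python
def likely_vulnerable(retro_source: str, deleted_lines: list[str]) -> bool:
    """Return True if the retro version still contains every line that the
    security fix deleted, i.e. the code evidence suggests it is vulnerable.
    A simplified sketch of the abstract's idea, not the paper's tool."""
    # normalize whitespace so indentation changes do not hide a match
    present = {line.strip() for line in retro_source.splitlines() if line.strip()}
    return all(line.strip() in present for line in deleted_lines)
```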

  • An Empirical Methodology to Evaluate Vulnerability Discovery Models
    IEEE Transactions on Software Engineering, 2014
    Co-Authors: Fabio Massacci, Viet Hung Nguyen
    Abstract:

    Vulnerability discovery models (VDMs) operate on known vulnerability data to estimate the total number of vulnerabilities that will be reported after a software product is released. VDMs have been proposed by industry and academia, but there has been no systematic independent evaluation by researchers who are not model proponents. Moreover, the traditional evaluation methodology has issues that biased previous studies in the field. In this work we propose an empirical methodology that systematically evaluates the performance of VDMs along two dimensions (quality and predictability) and addresses all identified issues of the traditional methodology. We conduct an experiment to evaluate most existing VDMs on popular web browsers' vulnerability data. Our comparison shows that the results obtained by the proposed methodology are more informative than those obtained by the traditional methodology. Among the evaluated VDMs, the simplest linear model is the most appropriate choice in terms of both quality and predictability for the first 6-12 months after a release; beyond that, logistic-based models are better choices.
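
As a sketch, the two model shapes compared in the abstract look like this; the closed-form least-squares fit and the Alhazmi-Malaiya logistic form are standard, but the parameter names are mine:

```python
import math

def fit_linear_vdm(months, cum_vulns):
    """Least-squares fit of the linear VDM omega(t) = a + b*t to cumulative
    vulnerability counts -- the shape the study found best for the first
    6-12 months after release."""
    n = len(months)
    mx, my = sum(months) / n, sum(cum_vulns) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(months, cum_vulns))
         / sum((x - mx) ** 2 for x in months))
    return my - b * mx, b  # (a, b)

def logistic_vdm(t, a, b, c):
    """Alhazmi-Malaiya logistic VDM omega(t) = b / (1 + c*exp(-a*b*t)),
    the family that tends to win beyond the first year."""
    return b / (1.0 + c * math.exp(-a * b * t))
```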

  • The (Un)Reliability of NVD Vulnerable Versions Data: an Empirical Experiment on Google Chrome Vulnerabilities
    arXiv: Cryptography and Security, 2013
    Co-Authors: Viet Hung Nguyen, Fabio Massacci
    Abstract:

    NVD is one of the most popular databases used by researchers to conduct empirical research on data sets of vulnerabilities. Our recent analysis of Chrome vulnerability data reported by NVD revealed an abnormal phenomenon: almost all vulnerabilities appeared to originate in the earliest versions. This inspired an experiment to validate the reliability of the NVD vulnerable-version data, in which we verify, for each version of Chrome that NVD claims is vulnerable, whether it is actually vulnerable. The experiment revealed several errors in the vulnerability data of Chrome. Furthermore, we analyzed how these errors might impact the conclusions of an empirical study on foundational vulnerabilities. Our results show that different conclusions could be reached because of the data errors.

  • AsiaCCS - The (un)reliability of NVD vulnerable versions Data: an empirical experiment on Google Chrome vulnerabilities
    Proceedings of the 8th ACM SIGSAC Symposium on Information, Computer and Communications Security - ASIA CCS '13, 2013
    Co-Authors: Viet Hung Nguyen, Fabio Massacci
    Abstract:

    NVD is one of the most popular databases used by researchers to conduct empirical research on data sets of vulnerabilities. Our recent analysis of Chrome vulnerability data reported by NVD revealed an abnormal phenomenon: almost all vulnerabilities appeared to originate in the earliest versions. This inspired an experiment to validate the reliability of the NVD vulnerable-version data, in which we verify, for each version of Chrome that NVD claims is vulnerable, whether it is actually vulnerable. The experiment revealed several errors in the vulnerability data of Chrome. Furthermore, we analyzed how these errors might impact the conclusions of an empirical study on foundational vulnerabilities. Our results show that different conclusions could be reached because of the data errors.

Tadashi Dohi - One of the best experts on this subject based on the ideXlab platform.

  • Quantitative Security Evaluation for Software System from Vulnerability Database
    Journal of Software Engineering and Applications, 2013
    Co-Authors: Hiroyuki Okamura, Masataka Tokuzane, Tadashi Dohi
    Abstract:

    This paper proposes a quantitative security evaluation for a software system based on a stochastic model of vulnerability data consisting of the discovery date, solution date, and exploit publish date. More precisely, our model considers a vulnerability life-cycle model and represents the vulnerability discovery process as a non-homogeneous Poisson process. In a numerical example, we show the quantitative measures for the content management system of an open source project.

  • Security Evaluation for Software System with Vulnerability Life Cycle and User Profiles
    2012 Workshop on Dependable Transportation Systems Recent Advances in Software Dependability, 2012
    Co-Authors: Hiroyuki Okamura, Masataka Tokuzane, Tadashi Dohi
    Abstract:

    This paper proposes the definition of a security criterion and a security assessment based on that criterion. More precisely, we present a stochastic model combining a vulnerability life-cycle model and a user profile using continuous-time Markov chains. The composite model can be represented by a Markov arrival process under some assumptions. Using the vulnerability data for existing applications, we perform the security assessment numerically.
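
A toy version of the transient analysis, assuming a three-state life cycle (undisclosed, disclosed/unpatched, patched) that I made up for illustration; the distribution p(t) = p0 * exp(Q*t) is computed by uniformization:

```python
import math

def ctmc_transient(Q, p0, t, terms=200):
    """Transient distribution p(t) = p0 * exp(Q*t) of a small CTMC via
    uniformization; Q is a generator matrix (rows sum to zero)."""
    n = len(Q)
    lam = max(-Q[i][i] for i in range(n)) or 1.0
    # embedded DTMC kernel P = I + Q / lam
    P = [[(1.0 if i == j else 0.0) + Q[i][j] / lam for j in range(n)]
         for i in range(n)]
    p, out = list(p0), [0.0] * n
    w = math.exp(-lam * t)  # Poisson(lam*t) weight at k = 0
    for k in range(terms):
        for i in range(n):
            out[i] += w * p[i]
        w *= lam * t / (k + 1)
        p = [sum(p[i] * P[i][j] for i in range(n)) for j in range(n)]
    return out

# illustrative life cycle: 0 = undisclosed, 1 = disclosed/unpatched, 2 = patched
Q = [[-0.5,  0.5, 0.0],
     [ 0.0, -1.0, 1.0],
     [ 0.0,  0.0, 0.0]]
```

The rates in `Q` are invented; the point is only that transient state probabilities of such a life-cycle chain are cheap to compute.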

  • ISSRE - Optimal Security Patch Release Timing under Non-homogeneous Vulnerability-Discovery Processes
    2009 20th International Symposium on Software Reliability Engineering, 2009
    Co-Authors: Hiroyuki Okamura, Masataka Tokuzane, Tadashi Dohi
    Abstract:

    This paper proposes a patch management model with non-homogeneous vulnerability-discovery processes to find the optimal security patch release times. The proposed model extends Cavusoglu et al. (2006, 2008) by applying non-homogeneous vulnerability-discovery processes based on a vulnerability life-cycle model, and provides the optimal schedule for security patch releases over a software life cycle by means of cost analysis. In numerical examples, we show that the optimal patch release policy is an aperiodic release strategy, and compare the minimum cost under the optimal policy with that under a periodic release strategy. In addition, based on publicly available vulnerability data, we illustrate the optimal security patch release policy for a real software product.
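
A toy cost comparison in the spirit of the abstract, with an assumed NHPP mean value function and made-up cost constants: each release costs `c_release`, and exposure cost accumulates with the time-integral of vulnerabilities discovered since the last patch:

```python
import math

def mean_value(t, a=30.0, b=0.4):
    """Assumed NHPP mean value function m(t) = a*(1 - exp(-b*t))."""
    return a * (1.0 - math.exp(-b * t))

def schedule_cost(release_times, horizon, c_release=10.0, c_exposure=1.0,
                  grid=400):
    """Cost of a patch schedule over [0, horizon]: a fixed cost per release
    plus exposure cost proportional to the time-integral of vulnerabilities
    discovered but not yet patched. A toy stand-in for the paper's analysis."""
    points = [0.0] + sorted(release_times) + [horizon]
    exposure = 0.0
    for lo, hi in zip(points, points[1:]):
        dt = (hi - lo) / grid
        for k in range(grid):
            u = lo + (k + 0.5) * dt  # midpoint rule
            exposure += (mean_value(u) - mean_value(lo)) * dt
    return c_release * len(release_times) + c_exposure * exposure
```

Because the assumed m(t) flattens over time, searching over `release_times` in this toy tends to favor releases spaced densely early and sparsely late, echoing the aperiodic finding.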

Viet Hung Nguyen - One of the best experts on this subject based on the ideXlab platform.

  • An automatic method for assessing the versions affected by a Vulnerability
    Empirical Software Engineering, 2016
    Co-Authors: Viet Hung Nguyen, Stanislav Dashevskyi, Fabio Massacci
    Abstract:

    Vulnerability data sources are used by academics to build models, and by industry and government to assess compliance. Errors in such data sources are therefore not only threats to validity in scientific studies, but may also cause organizations that rely on retro versions of software to lose compliance. In this work, we propose an automated method to determine the code evidence for the presence of vulnerabilities in retro software versions. The method scans the code base of each retro version for this evidence to determine whether the version is vulnerable or not. It identifies the lines of code that were changed to fix vulnerabilities; if an earlier version contains these deleted lines, it is highly likely that this version is vulnerable. To show the scalability of the method we performed a large-scale experiment on Chrome and Firefox (spanning 7,236 vulnerable files and approximately 9,800 vulnerabilities) against the National Vulnerability Database (NVD). The elimination of spurious vulnerability claims (e.g., entries in a vulnerability database such as NVD) found by our method may change the conclusions of studies on the prevalence of foundational vulnerabilities.

  • An Empirical Methodology to Evaluate Vulnerability Discovery Models
    IEEE Transactions on Software Engineering, 2014
    Co-Authors: Fabio Massacci, Viet Hung Nguyen
    Abstract:

    Vulnerability discovery models (VDMs) operate on known vulnerability data to estimate the total number of vulnerabilities that will be reported after a software product is released. VDMs have been proposed by industry and academia, but there has been no systematic independent evaluation by researchers who are not model proponents. Moreover, the traditional evaluation methodology has issues that biased previous studies in the field. In this work we propose an empirical methodology that systematically evaluates the performance of VDMs along two dimensions (quality and predictability) and addresses all identified issues of the traditional methodology. We conduct an experiment to evaluate most existing VDMs on popular web browsers' vulnerability data. Our comparison shows that the results obtained by the proposed methodology are more informative than those obtained by the traditional methodology. Among the evaluated VDMs, the simplest linear model is the most appropriate choice in terms of both quality and predictability for the first 6-12 months after a release; beyond that, logistic-based models are better choices.

  • The (Un)Reliability of NVD Vulnerable Versions Data: an Empirical Experiment on Google Chrome Vulnerabilities
    arXiv: Cryptography and Security, 2013
    Co-Authors: Viet Hung Nguyen, Fabio Massacci
    Abstract:

    NVD is one of the most popular databases used by researchers to conduct empirical research on data sets of vulnerabilities. Our recent analysis of Chrome vulnerability data reported by NVD revealed an abnormal phenomenon: almost all vulnerabilities appeared to originate in the earliest versions. This inspired an experiment to validate the reliability of the NVD vulnerable-version data, in which we verify, for each version of Chrome that NVD claims is vulnerable, whether it is actually vulnerable. The experiment revealed several errors in the vulnerability data of Chrome. Furthermore, we analyzed how these errors might impact the conclusions of an empirical study on foundational vulnerabilities. Our results show that different conclusions could be reached because of the data errors.

  • AsiaCCS - The (un)reliability of NVD vulnerable versions Data: an empirical experiment on Google Chrome vulnerabilities
    Proceedings of the 8th ACM SIGSAC Symposium on Information, Computer and Communications Security - ASIA CCS '13, 2013
    Co-Authors: Viet Hung Nguyen, Fabio Massacci
    Abstract:

    NVD is one of the most popular databases used by researchers to conduct empirical research on data sets of vulnerabilities. Our recent analysis of Chrome vulnerability data reported by NVD revealed an abnormal phenomenon: almost all vulnerabilities appeared to originate in the earliest versions. This inspired an experiment to validate the reliability of the NVD vulnerable-version data, in which we verify, for each version of Chrome that NVD claims is vulnerable, whether it is actually vulnerable. The experiment revealed several errors in the vulnerability data of Chrome. Furthermore, we analyzed how these errors might impact the conclusions of an empirical study on foundational vulnerabilities. Our results show that different conclusions could be reached because of the data errors.

Hiroyuki Okamura - One of the best experts on this subject based on the ideXlab platform.

  • Quantitative Security Evaluation for Software System from Vulnerability Database
    Journal of Software Engineering and Applications, 2013
    Co-Authors: Hiroyuki Okamura, Masataka Tokuzane, Tadashi Dohi
    Abstract:

    This paper proposes a quantitative security evaluation for a software system based on a stochastic model of vulnerability data consisting of the discovery date, solution date, and exploit publish date. More precisely, our model considers a vulnerability life-cycle model and represents the vulnerability discovery process as a non-homogeneous Poisson process. In a numerical example, we show the quantitative measures for the content management system of an open source project.

  • Security Evaluation for Software System with Vulnerability Life Cycle and User Profiles
    2012 Workshop on Dependable Transportation Systems Recent Advances in Software Dependability, 2012
    Co-Authors: Hiroyuki Okamura, Masataka Tokuzane, Tadashi Dohi
    Abstract:

    This paper proposes the definition of a security criterion and a security assessment based on that criterion. More precisely, we present a stochastic model combining a vulnerability life-cycle model and a user profile using continuous-time Markov chains. The composite model can be represented by a Markov arrival process under some assumptions. Using the vulnerability data for existing applications, we perform the security assessment numerically.

  • ISSRE - Optimal Security Patch Release Timing under Non-homogeneous Vulnerability-Discovery Processes
    2009 20th International Symposium on Software Reliability Engineering, 2009
    Co-Authors: Hiroyuki Okamura, Masataka Tokuzane, Tadashi Dohi
    Abstract:

    This paper proposes a patch management model with non-homogeneous vulnerability-discovery processes to find the optimal security patch release times. The proposed model extends Cavusoglu et al. (2006, 2008) by applying non-homogeneous vulnerability-discovery processes based on a vulnerability life-cycle model, and provides the optimal schedule for security patch releases over a software life cycle by means of cost analysis. In numerical examples, we show that the optimal patch release policy is an aperiodic release strategy, and compare the minimum cost under the optimal policy with that under a periodic release strategy. In addition, based on publicly available vulnerability data, we illustrate the optimal security patch release policy for a real software product.

Masataka Tokuzane - One of the best experts on this subject based on the ideXlab platform.

  • Quantitative Security Evaluation for Software System from Vulnerability Database
    Journal of Software Engineering and Applications, 2013
    Co-Authors: Hiroyuki Okamura, Masataka Tokuzane, Tadashi Dohi
    Abstract:

    This paper proposes a quantitative security evaluation for a software system based on a stochastic model of vulnerability data consisting of the discovery date, solution date, and exploit publish date. More precisely, our model considers a vulnerability life-cycle model and represents the vulnerability discovery process as a non-homogeneous Poisson process. In a numerical example, we show the quantitative measures for the content management system of an open source project.

  • Security Evaluation for Software System with Vulnerability Life Cycle and User Profiles
    2012 Workshop on Dependable Transportation Systems Recent Advances in Software Dependability, 2012
    Co-Authors: Hiroyuki Okamura, Masataka Tokuzane, Tadashi Dohi
    Abstract:

    This paper proposes the definition of a security criterion and a security assessment based on that criterion. More precisely, we present a stochastic model combining a vulnerability life-cycle model and a user profile using continuous-time Markov chains. The composite model can be represented by a Markov arrival process under some assumptions. Using the vulnerability data for existing applications, we perform the security assessment numerically.

  • ISSRE - Optimal Security Patch Release Timing under Non-homogeneous Vulnerability-Discovery Processes
    2009 20th International Symposium on Software Reliability Engineering, 2009
    Co-Authors: Hiroyuki Okamura, Masataka Tokuzane, Tadashi Dohi
    Abstract:

    This paper proposes a patch management model with non-homogeneous vulnerability-discovery processes to find the optimal security patch release times. The proposed model extends Cavusoglu et al. (2006, 2008) by applying non-homogeneous vulnerability-discovery processes based on a vulnerability life-cycle model, and provides the optimal schedule for security patch releases over a software life cycle by means of cost analysis. In numerical examples, we show that the optimal patch release policy is an aperiodic release strategy, and compare the minimum cost under the optimal policy with that under a periodic release strategy. In addition, based on publicly available vulnerability data, we illustrate the optimal security patch release policy for a real software product.