Evaluation Methods

The experts below are selected from a list of 1,382,055 experts worldwide, ranked by the ideXlab platform.

Silvia Abrahao - One of the best experts on this subject based on the ideXlab platform.

  • Evaluating Software Architecture Evaluation Methods: An Internal Replication
    Evaluation and Assessment in Software Engineering, 2017
    Co-Authors: Silvia Abrahao, Emilio Insfran
    Abstract:

    Context: The size and complexity of software systems, along with the demand for ensuring quality requirements, have fostered interest in software architecture evaluation methods. Although several empirical studies have been reported, the actual body of knowledge is still insufficient. To address this concern, we previously presented a family of four controlled experiments comparing a recently proposed method, the Quality-Driven Architecture Derivation and Improvement (QuaDAI) method, against the well-known Architecture Tradeoff Analysis Method (ATAM). Objective: To provide further evidence on the efficiency, effectiveness, and perceived satisfaction of participants using these two software architecture evaluation methods. We report the results of a differentiated internal replication study. Method: The same materials used in the baseline experiments were employed in this replication, but the participants were sixteen practitioners. In addition, we used a simpler design to reduce the treatments' application sequences. Results: The participants obtained architectures with better quality when applying QuaDAI, and they found this method to be more useful and more likely to be used than ATAM, but no differences in efficiency or perceived ease of use were found. Conclusions: The results are in line with the baseline experiments and support the hypothesis that QuaDAI achieves better results than ATAM when performing architectural evaluations; however, further work is needed to improve the methods' usability.
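
A minimal sketch of how the effectiveness comparison described in the entry above might be analyzed, assuming each participant produces one architecture quality score per method in a paired design; the scores, variable names, and the choice of a Wilcoxon signed-rank test are illustrative assumptions, not the authors' actual data or analysis.

```python
# Hypothetical sketch: comparing architecture quality scores obtained with
# two evaluation methods (e.g., QuaDAI vs. ATAM) in a within-subjects design.
# The data below are invented; the original study's scores are not reproduced here.
from scipy.stats import wilcoxon

# Quality score per participant (0-100), one value per method (paired design).
quadai_scores = [78, 82, 75, 88, 90, 73, 85, 80]
atam_scores   = [70, 76, 72, 81, 84, 70, 79, 77]

# Wilcoxon signed-rank test for paired samples (non-parametric, small n).
statistic, p_value = wilcoxon(quadai_scores, atam_scores)
print(f"Wilcoxon statistic = {statistic}, p-value = {p_value:.4f}")

# Median improvement as a simple effect-size indicator.
diffs = sorted(q - a for q, a in zip(quadai_scores, atam_scores))
median_diff = (diffs[len(diffs) // 2 - 1] + diffs[len(diffs) // 2]) / 2  # n is even
print(f"Median quality difference (QuaDAI - ATAM) = {median_diff}")
```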

  • Usability Evaluation Methods for the Web: A Systematic Mapping Study
    Information & Software Technology, 2011
    Co-Authors: Adrian Fernandez, Emilio Insfran, Silvia Abrahao
    Abstract:

    Context: In recent years, many usability evaluation methods (UEMs) have been employed to evaluate Web applications. However, many of these applications still do not meet most customers' usability expectations, and many companies have folded as a result of not considering Web usability issues. No studies currently exist on either the use of usability evaluation methods for the Web or the benefits they bring. Objective: The objective of this paper is to summarize the current knowledge regarding the usability evaluation methods (UEMs) that have been employed to evaluate Web applications over the last 14 years. Method: A systematic mapping study was performed to assess the UEMs that have been used by researchers to evaluate Web applications and their relation to the Web development process. Systematic mapping studies are useful for categorizing and summarizing the existing information concerning a research question in an unbiased manner. Results: The results show that around 39% of the papers reviewed reported the use of evaluation methods that had been specifically crafted for the Web. The results also show that the most widely used type of method was User Testing. The results identify several research gaps, such as the fact that around 90% of the studies applied evaluations during the implementation phase of Web application development, which is the most costly phase in which to perform changes. A list of the UEMs that were found is also provided in order to guide novice usability practitioners. Conclusions: From an initial set of 2703 papers, a total of 206 research papers were selected for the mapping study. The results obtained allowed us to reach conclusions concerning the state of the art of UEMs for evaluating Web applications. This allowed us to identify several research gaps, which subsequently provided us with a framework in which new research activities can be more appropriately positioned, and from which useful information for novice usability practitioners can be extracted.
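
A minimal sketch of the bookkeeping behind the percentages reported in the mapping study above, assuming each selected paper has been classified by evaluation-method origin and by development phase; the category split and individual counts are invented for illustration (only the total of 206 papers comes from the abstract).

```python
# Hypothetical sketch: summarizing a systematic mapping study's classification.
# The per-category counts are illustrative only; the real study selected 206 papers.
from collections import Counter

# Each tuple: (method_origin, development_phase) for one selected paper.
papers = (
    [("web_specific", "implementation")] * 75
    + [("general_purpose", "implementation")] * 110
    + [("web_specific", "design")] * 6
    + [("general_purpose", "requirements")] * 15
)

total = len(papers)
origin_counts = Counter(origin for origin, _ in papers)
phase_counts = Counter(phase for _, phase in papers)

print(f"Total selected papers: {total}")
for origin, count in origin_counts.items():
    print(f"  method origin {origin}: {count} ({100 * count / total:.0f}%)")
for phase, count in phase_counts.items():
    print(f"  development phase {phase}: {count} ({100 * count / total:.0f}%)")
```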

Ross Jeffery - One of the best experts on this subject based on the ideXlab platform.

  • A Framework for Classifying and Comparing Software Architecture Evaluation Methods
    Australian Software Engineering Conference, 2004
    Co-Authors: Muhammad Ali Babar, Liming Zhu, Ross Jeffery
    Abstract:

    Software architecture evaluation has been proposed as a means to achieve quality attributes such as maintainability and reliability in a system. The objective of the evaluation is to assess whether or not the architecture leads to the desired quality attributes. Recently, a number of evaluation methods have been proposed. There is, however, little consensus on the technical and non-technical issues that a method should comprehensively address, or on which of the existing methods is most suitable for a particular issue. We present a set of commonly known but informally described features of an evaluation method and organize them within a framework that should offer guidance on the choice of the most appropriate method for an evaluation exercise. We use this framework to characterise eight SA evaluation methods.
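
A minimal sketch of how such a classification framework might be represented in code, assuming the four perspectives named in the related comparison work (method context, stakeholders, structure, reliability) are captured as fields; the field contents and the ATAM example entry are illustrative assumptions, not the paper's actual framework definition.

```python
# Hypothetical sketch: profiling an SA evaluation method along four framework
# perspectives. Field details and the example entry are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class MethodProfile:
    name: str
    context: dict = field(default_factory=dict)       # e.g., goals, quality attributes covered
    stakeholders: list = field(default_factory=list)  # roles expected to participate
    structure: list = field(default_factory=list)     # activities / process steps
    reliability: dict = field(default_factory=dict)   # maturity, validation evidence

# Illustrative entry only; the paper characterises eight methods with its framework.
atam = MethodProfile(
    name="ATAM",
    context={"goals": "analyze quality-attribute trade-offs"},
    stakeholders=["evaluation team", "architect", "project stakeholders"],
    structure=["present architecture", "identify scenarios", "analyze approaches"],
    reliability={"validation": "reported case studies"},
)

def compare(a: MethodProfile, b: MethodProfile) -> dict:
    """Naive side-by-side comparison along two of the framework perspectives."""
    return {
        "shared_stakeholders": sorted(set(a.stakeholders) & set(b.stakeholders)),
        "shared_activities": sorted(set(a.structure) & set(b.structure)),
    }
```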

Peréz Luque Estela - One of the best experts on this subject based on the ideXlab platform.

  • Aiding Observational Ergonomic Evaluation Methods Using MOCAP Systems Supported by AI-Based Posture Recognition
    'IOS Press', 2020
    Co-Authors: Igelmo Victor, Syberfeldt Anna, Högberg Dan, García Rivera Francisco, Peréz Luque Estela
    Abstract:

    Observational ergonomic evaluation methods have inherent subjectivity: observers’ assessment results might differ even with the same dataset. While motion capture (MOCAP) systems have improved the speed and accuracy of motion data gathering, the algorithms used to compute assessments seem to rely on predefined conditions, and the authoring of these conditions is not always clear. By making use of artificial intelligence (AI) along with MOCAP systems, computerized ergonomic assessments can become more akin to human observation and can improve over time, given proper training datasets. AI can assist ergonomic experts with posture detection, which is useful when using methods that require posture definition, such as the Ovako Working Posture Assessment System (OWAS). This study aims to prove the usefulness of an AI model when performing ergonomic assessments and the benefits of having a specialized database for current and future AI training. Several algorithms are trained on Xsens MVN MOCAP datasets, and their performance within a use case is compared. AI algorithms can provide accurate posture predictions. The developed approach aspires to provide guidelines for performing AI-assisted ergonomic assessment based on observation of multiple workers. (CC BY-NC 4.0. Funder: the Knowledge Foundation and the INFINIT research environment, KKS Dnr. 20180167. This work has been made possible with the support of the Knowledge Foundation and the associated INFINIT research environment at the University of Skövde, in the Synergy Virtual Ergonomics (SVE) project, and by the participating organizations. This support is gratefully acknowledged.)
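
A minimal sketch of the kind of AI-based posture recognition described above, assuming joint-angle features exported from a MOCAP system are used to predict an OWAS-style posture code; the feature names, labeling rule, and the choice of a random-forest classifier are assumptions for illustration, not the study's actual pipeline.

```python
# Hypothetical sketch: training a classifier to predict OWAS-style posture codes
# from MOCAP-derived joint angles. Data, features, and labels are invented;
# the original study trained several algorithms on Xsens MVN MOCAP datasets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Each row: [trunk_flexion_deg, trunk_rotation_deg, upper_arm_elevation_deg, knee_flexion_deg]
X = rng.uniform(low=[0, -45, 0, 0], high=[90, 45, 150, 120], size=(500, 4))

# Toy labeling rule standing in for expert-annotated back codes
# (1 = straight, 2 = bent): here, "bent" if trunk flexion exceeds 20 degrees.
y = np.where(X[:, 0] > 20, 2, 1)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

predictions = clf.predict(X_test)
print(f"Posture-code accuracy on held-out frames: {accuracy_score(y_test, predictions):.2f}")
```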

  • Implementation of Ergonomics Evaluation Methods in a Multi-Objective Optimization Framework
    'IOS Press', 2020
    Co-Authors: Iriondo Pascual Aitor, Syberfeldt Anna, Högberg Dan, García Rivera Francisco, Peréz Luque Estela, Hanson Lars
    Abstract:

    Simulations of future production systems enable engineers to find effective and efficient design solutions with fewer physical prototypes and fewer reconstructions. This can save development time and money and be more sustainable. Better design solutions can be found by linking simulations to multi-objective optimization methods in order to optimize multiple design objectives. When production systems involve manual work, humans and human activity should be included in the simulation. This can be done using digital human modeling (DHM) software, which simulates humans and human activities and can be used to evaluate ergonomic conditions. This paper addresses challenges related to including existing ergonomics evaluation methods in the optimization framework. This challenge arises because ergonomics evaluation methods are typically developed to enable people to investigate ergonomic conditions by observing real work situations; the methods are rarely developed to be used by computer algorithms to draw conclusions about ergonomic conditions. This paper investigates how to adapt ergonomics evaluation methods so that their results can be implemented as objectives in the optimization framework. It presents a use case of optimizing a workstation using two different approaches: 1) an observational ergonomics evaluation method, and 2) a direct measurement method. Both approaches optimized two objectives: the average ergonomics results and the 90th percentile ergonomics results. (CC BY-NC 4.0. Funder: the Knowledge Foundation and the INFINIT research environment, KKS Dnr. 20180167. This work has been made possible with support from ITEA3 in the project MOSIM, and with the support of the Knowledge Foundation and the associated INFINIT research environment at the University of Skövde, within the Virtual Factories – Knowledge-Driven Optimization (VF-KDO) research profile and the Synergy Virtual Ergonomics (SVE) project, and by the participating organizations. This support is gratefully acknowledged.)
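
A minimal sketch of how ergonomics results could be turned into the two objectives mentioned above (the average and the 90th percentile over a simulated worker population); the risk-score model, worker sample, and the simple weighted-sum scan over a single design variable are illustrative assumptions, not the paper's actual framework, which would use a proper multi-objective optimizer and report trade-offs rather than a single scalarized optimum.

```python
# Hypothetical sketch: exposing ergonomics results as optimization objectives.
# The ergonomic "risk score" model and the worker anthropometrics are invented;
# the paper's evaluation methods and framework are not reproduced here.
import numpy as np

rng = np.random.default_rng(1)
worker_heights_cm = rng.normal(loc=172.0, scale=8.0, size=200)  # simulated worker population

def risk_score(work_height_cm: float, worker_height_cm: float) -> float:
    """Toy ergonomic risk: penalize mismatch between work height and elbow height."""
    elbow_height = 0.63 * worker_height_cm  # rough anthropometric proportion (assumption)
    return abs(work_height_cm - elbow_height) / 10.0

def objectives(work_height_cm: float) -> tuple:
    scores = np.array([risk_score(work_height_cm, h) for h in worker_heights_cm])
    return scores.mean(), np.percentile(scores, 90)  # average and 90th-percentile results

# Naive scan over one design variable with an equal-weight scalarization (assumption);
# a real framework would use a multi-objective optimizer and keep the Pareto front.
best = min(
    (objectives(h) + (h,) for h in np.arange(90.0, 130.0, 1.0)),
    key=lambda t: 0.5 * t[0] + 0.5 * t[1],
)
print(f"Best work height ~ {best[2]:.0f} cm; mean risk {best[0]:.2f}, P90 risk {best[1]:.2f}")
```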

Muhammad Ali Babar - One of the best experts on this subject based on the ideXlab platform.

  • Comparison of Scenario-Based Software Architecture Evaluation Methods
    Asia-Pacific Software Engineering Conference, 2004
    Co-Authors: Muhammad Ali Babar, Ian Gorton
    Abstract:

    The software engineering community has proposed several methods to evaluate software architectures with respect to desired quality attributes such as maintainability and performance. There has, however, been little effort to systematically compare such methods in order to discover similarities and differences between existing approaches. In this paper, we compare four well-known scenario-based SA evaluation methods using an evaluation framework. The framework considers each method from the point of view of method context, stakeholders, structure, and reliability. The comparison reveals that most of the studied methods are structurally similar, but there are a number of differences among their activities and techniques. Some methods therefore overlap, which leads us to identify five common activities that can form a generic process model.

  • A Framework for Classifying and Comparing Software Architecture Evaluation Methods
    Australian Software Engineering Conference, 2004
    Co-Authors: Muhammad Ali Babar, Liming Zhu, Ross Jeffery
    Abstract:

    Software architecture evaluation has been proposed as a means to achieve quality attributes such as maintainability and reliability in a system. The objective of the evaluation is to assess whether or not the architecture leads to the desired quality attributes. Recently, a number of evaluation methods have been proposed. There is, however, little consensus on the technical and non-technical issues that a method should comprehensively address, or on which of the existing methods is most suitable for a particular issue. We present a set of commonly known but informally described features of an evaluation method and organize them within a framework that should offer guidance on the choice of the most appropriate method for an evaluation exercise. We use this framework to characterise eight SA evaluation methods.

Emilio Insfran - One of the best experts on this subject based on the ideXlab platform.

  • Evaluating Software Architecture Evaluation Methods: An Internal Replication
    Evaluation and Assessment in Software Engineering, 2017
    Co-Authors: Silvia Abrahao, Emilio Insfran
    Abstract:

    Context: The size and complexity of software systems, along with the demand for ensuring quality requirements, have fostered interest in software architecture evaluation methods. Although several empirical studies have been reported, the actual body of knowledge is still insufficient. To address this concern, we previously presented a family of four controlled experiments comparing a recently proposed method, the Quality-Driven Architecture Derivation and Improvement (QuaDAI) method, against the well-known Architecture Tradeoff Analysis Method (ATAM). Objective: To provide further evidence on the efficiency, effectiveness, and perceived satisfaction of participants using these two software architecture evaluation methods. We report the results of a differentiated internal replication study. Method: The same materials used in the baseline experiments were employed in this replication, but the participants were sixteen practitioners. In addition, we used a simpler design to reduce the treatments' application sequences. Results: The participants obtained architectures with better quality when applying QuaDAI, and they found this method to be more useful and more likely to be used than ATAM, but no differences in efficiency or perceived ease of use were found. Conclusions: The results are in line with the baseline experiments and support the hypothesis that QuaDAI achieves better results than ATAM when performing architectural evaluations; however, further work is needed to improve the methods' usability.

  • Usability Evaluation Methods for the Web: A Systematic Mapping Study
    Information & Software Technology, 2011
    Co-Authors: Adrian Fernandez, Emilio Insfran, Silvia Abrahao
    Abstract:

    Context: In recent years, many usability evaluation methods (UEMs) have been employed to evaluate Web applications. However, many of these applications still do not meet most customers' usability expectations, and many companies have folded as a result of not considering Web usability issues. No studies currently exist on either the use of usability evaluation methods for the Web or the benefits they bring. Objective: The objective of this paper is to summarize the current knowledge regarding the usability evaluation methods (UEMs) that have been employed to evaluate Web applications over the last 14 years. Method: A systematic mapping study was performed to assess the UEMs that have been used by researchers to evaluate Web applications and their relation to the Web development process. Systematic mapping studies are useful for categorizing and summarizing the existing information concerning a research question in an unbiased manner. Results: The results show that around 39% of the papers reviewed reported the use of evaluation methods that had been specifically crafted for the Web. The results also show that the most widely used type of method was User Testing. The results identify several research gaps, such as the fact that around 90% of the studies applied evaluations during the implementation phase of Web application development, which is the most costly phase in which to perform changes. A list of the UEMs that were found is also provided in order to guide novice usability practitioners. Conclusions: From an initial set of 2703 papers, a total of 206 research papers were selected for the mapping study. The results obtained allowed us to reach conclusions concerning the state of the art of UEMs for evaluating Web applications. This allowed us to identify several research gaps, which subsequently provided us with a framework in which new research activities can be more appropriately positioned, and from which useful information for novice usability practitioners can be extracted.