Customer Representative

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The Experts below are selected from a list of 120 Experts worldwide, ranked by the ideXlab platform

Adam Wojciechowski - One of the best experts on this subject based on the ideXlab platform.

  • OTM Workshops - Experimental evaluation of 'on-site Customer' XP practice on quality of software and team effectiveness
    On the Move to Meaningful Internet Systems: OTM 2010 Workshops, 2010
    Co-Authors: Adam Wojciechowski, Maciej Wesolowski, Wojciech Complak
    Abstract:

    Extreme Programming (XP) is an agile software development methodology based on organizational foundations collected in so-called practices. One of them, On-site Customer, focuses on frequent and intensive involvement of a Customer Representative in the software creation process. It is said that no one knows the Customer's business and its specific needs better than the Customer himself. However, without experimental evaluation of the procedure it is hard to judge whether the On-site Customer practice improves software quality and team effectiveness. To assess how On-site Customer affects the quality of produced software and the effectiveness of software teams, we performed an experiment in which six software teams worked with an on-site Customer while the other seven teams could contact their Customer Representative only by telephone or email. The paper describes the experiment, which is based on an extended version of the educational game eXtreme89, presents the results collected, and analyses the quality of software produced by teams working under the different software creation paradigms. The data gained during the experiment confirmed that the On-site Customer practice has a substantial positive influence on the quality of communication and the speed of software production, and the experimental results give a quantitative basis for discussing the effectiveness of this XP practice.

  • Extreme89 : An XP war game
    Lecture Notes in Computer Science, 2006
    Co-Authors: Jerzy Nawrocki, Adam Wojciechowski
    Abstract:

    Extreme89 is a simulation game designed to introduce software teams - programmers and Customers - to Extreme Programming practices. The game is run by a moderator and lasts 89 minutes - this is the reason why we named it Extreme89. Several teams, each made up of a Customer Representative and programmers, compete to earn the maximum number of points. Teams earn points for delivering properly produced artifacts. Artifacts in the game correspond to software modules delivered to the Customer in real software projects. Every artifact in the game is assigned a Fibonacci-like function; manual computation of the function values by the programmers substitutes for real programming. The rules of Extreme89 closely correspond to XP practices. The game has two releases, and each release consists of two increments. Extreme89, with its atmosphere of competition and its time-compressed, active lesson in XP, was successfully introduced to Computer Science students at Poznan University of Technology.
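
The "Fibonacci-like function" that stands in for real programming work can be pictured with a small sketch. The recurrence below is a hypothetical example of my own, assuming a generalized two-term recurrence with per-artifact parameters; the abstract does not specify the actual functions used in Extreme89.

```python
# Illustrative sketch only: in Extreme89, programmers compute values of a
# Fibonacci-like function by hand instead of writing code. This generalized
# two-term recurrence (parameters a, b, p, q are hypothetical) shows the
# general shape of such a task.

def fib_like(n, a=1, b=1, p=1, q=2):
    """f(0) = a, f(1) = b, f(n) = p*f(n-1) + q*f(n-2)."""
    if n == 0:
        return a
    if n == 1:
        return b
    prev2, prev1 = a, b
    for _ in range(2, n + 1):
        prev2, prev1 = prev1, p * prev1 + q * prev2
    return prev1
```

With p = q = 1 this reduces to the classic Fibonacci sequence; varying the parameters per artifact would let each team face a different manual-computation task.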

  • RISE - Extreme89 : an XP war game
    Rapid Integration of Software Engineering Techniques, 2006
    Co-Authors: Jerzy Nawrocki, Adam Wojciechowski
    Abstract:

    Extreme89 is a simulation game designed to introduce software teams – programmers and Customers – to Extreme Programming practices. The game is run by a moderator and lasts 89 minutes – this is the reason why we named it Extreme89. Several teams, each made up of a Customer Representative and programmers, compete to earn the maximum number of points. Teams earn points for delivering properly produced artifacts. Artifacts in the game correspond to software modules delivered to the Customer in real software projects. Every artifact in the game is assigned a Fibonacci-like function; manual computation of the function values by the programmers substitutes for real programming. The rules of Extreme89 closely correspond to XP practices. The game has two releases, and each release consists of two increments. Extreme89, with its atmosphere of competition and its time-compressed, active lesson in XP, was successfully introduced to Computer Science students at Poznan University of Technology.

  • Extreme programming modified: Embrace requirements engineering practices
    Proceedings of the IEEE International Conference on Requirements Engineering, 2002
    Co-Authors: Jerzy Nawrocki, Bartosz Walter, Michał Jasiński, Adam Wojciechowski
    Abstract:

    Extreme programming (XP) is an agile (lightweight) software development methodology that is becoming more and more popular. XP proposes many interesting practices, but it also has some weaknesses. From the software engineering point of view the most important issues are maintenance problems resulting from very limited documentation (XP relies on code and test cases only) and the lack of a wider perspective on the system to be built. Moreover, XP assumes that there is only one Customer Representative. In many cases there are several Representatives (each with his own view of the system and different priorities), and then some XP practices should be modified. In the paper we assess XP from two points of view: the capability maturity model and the Sommerville-Sawyer model (1997). We also propose how to introduce documented requirements into XP, how to modify the planning game to allow many Customer Representatives, and how to get a wider perspective on the system to be built at the beginning of the project lifecycle.
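
One conceivable way for a planning game to reconcile several Customer Representatives is to aggregate each Representative's story ranking. The rank-averaging sketch below is purely illustrative and my own simplification; it is not the modification Nawrocki et al. actually propose, which the abstract does not detail.

```python
# Hypothetical illustration: combine story priorities from several Customer
# Representatives by averaging their ranks (1 = highest priority). Stories a
# Representative did not rank get a worst-case default rank.

def merge_priorities(rankings):
    """rankings: list of dicts mapping story name -> rank (1 = highest).
    Returns story names ordered by mean rank, ties broken alphabetically."""
    stories = set().union(*rankings)
    default = len(stories)  # penalty rank for unranked stories
    mean_rank = {
        s: sum(r.get(s, default) for r in rankings) / len(rankings)
        for s in stories
    }
    return sorted(stories, key=lambda s: (mean_rank[s], s))

order = merge_priorities([
    {"login": 1, "report": 2, "export": 3},
    {"report": 1, "login": 2, "export": 3},
])
```

A real scheme would likely also weight Representatives by stake or resolve ties by negotiation rather than alphabetically.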

Jerzy Nawrocki - One of the best experts on this subject based on the ideXlab platform.

  • RISE - Extreme89 : an XP war game
    Rapid Integration of Software Engineering Techniques, 2006
    Co-Authors: Jerzy Nawrocki, Adam Wojciechowski
    Abstract:

    Extreme89 is a simulation game designed to introduce software teams – programmers and Customers – to Extreme Programming practices. The game is run by a moderator and lasts 89 minutes – this is the reason why we named it Extreme89. Several teams, each made up of a Customer Representative and programmers, compete to earn the maximum number of points. Teams earn points for delivering properly produced artifacts. Artifacts in the game correspond to software modules delivered to the Customer in real software projects. Every artifact in the game is assigned a Fibonacci-like function; manual computation of the function values by the programmers substitutes for real programming. The rules of Extreme89 closely correspond to XP practices. The game has two releases, and each release consists of two increments. Extreme89, with its atmosphere of competition and its time-compressed, active lesson in XP, was successfully introduced to Computer Science students at Poznan University of Technology.

  • Extreme89 : An XP war game
    Lecture Notes in Computer Science, 2006
    Co-Authors: Jerzy Nawrocki, Adam Wojciechowski
    Abstract:

    Extreme89 is a simulation game designed to introduce software teams - programmers and Customers - to Extreme Programming practices. The game is run by a moderator and lasts 89 minutes - this is the reason why we named it Extreme89. Several teams, each made up of a Customer Representative and programmers, compete to earn the maximum number of points. Teams earn points for delivering properly produced artifacts. Artifacts in the game correspond to software modules delivered to the Customer in real software projects. Every artifact in the game is assigned a Fibonacci-like function; manual computation of the function values by the programmers substitutes for real programming. The rules of Extreme89 closely correspond to XP practices. The game has two releases, and each release consists of two increments. Extreme89, with its atmosphere of competition and its time-compressed, active lesson in XP, was successfully introduced to Computer Science students at Poznan University of Technology.

  • Extreme programming modified: Embrace requirements engineering practices
    Proceedings of the IEEE International Conference on Requirements Engineering, 2002
    Co-Authors: Jerzy Nawrocki, Bartosz Walter, Michał Jasiński, Adam Wojciechowski
    Abstract:

    Extreme programming (XP) is an agile (lightweight) software development methodology that is becoming more and more popular. XP proposes many interesting practices, but it also has some weaknesses. From the software engineering point of view the most important issues are maintenance problems resulting from very limited documentation (XP relies on code and test cases only) and the lack of a wider perspective on the system to be built. Moreover, XP assumes that there is only one Customer Representative. In many cases there are several Representatives (each with his own view of the system and different priorities), and then some XP practices should be modified. In the paper we assess XP from two points of view: the capability maturity model and the Sommerville-Sawyer model (1997). We also propose how to introduce documented requirements into XP, how to modify the planning game to allow many Customer Representatives, and how to get a wider perspective on the system to be built at the beginning of the project lifecycle.

Paul Davis - One of the best experts on this subject based on the ideXlab platform.

  • Towards Performance Evaluation of Cloudant for Customer Representative Workloads
    IEEE International Conference on Cloud Engineering, 2016
    Co-Authors: Sina Meraji, Connor Lavoy, Brian J Hall, Amy J Wang, Gabi Rothenstein, Paul Davis
    Abstract:

    NoSQL Database Management Systems (DBMS) have increased in prominence and market share in the last few years. IBM Cloudant is a well-known enterprise NoSQL DBMS offered both in the cloud and as an on-premises standalone product (Cloudant Local). Cloudant has a large number of Customers, such as Samsung, Adobe, and DHL. In this paper, we demonstrate how Cloudant maintains its high throughput for Customer-Representative workloads that mix create/read/update/delete operations with queries that contain views. Moreover, we show how database nodes can be dynamically added to or removed from the cluster without bringing down the working nodes. We also demonstrate how we improved the performance of queries with views by changing a native library.
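
The idea of a "Customer-Representative" workload mixing CRUD operations with view queries can be sketched as a tiny driver. The operation weights, the in-memory stand-in for a database, and the filter standing in for a view are all illustrative assumptions of mine, not the paper's actual workload definition or the Cloudant API.

```python
import random

# Hypothetical CRUD + view-query workload mix (weights are my assumption).
OP_MIX = {"create": 0.25, "read": 0.40, "update": 0.20,
          "delete": 0.05, "view_query": 0.10}

def run_workload(n_ops, seed=42):
    """Drive n_ops weighted-random operations against an in-memory dict
    standing in for a document store; return per-operation counts."""
    rng = random.Random(seed)
    db, next_id = {}, 0
    counts = {op: 0 for op in OP_MIX}
    ops, weights = zip(*OP_MIX.items())
    for _ in range(n_ops):
        op = rng.choices(ops, weights=weights)[0]
        counts[op] += 1
        if op == "create":
            db[next_id] = {"value": rng.random()}
            next_id += 1
        elif op == "read" and db:
            _ = db[rng.choice(list(db))]
        elif op == "update" and db:
            db[rng.choice(list(db))]["value"] = rng.random()
        elif op == "delete" and db:
            del db[rng.choice(list(db))]
        elif op == "view_query":
            _ = [d for d in db.values() if d["value"] > 0.5]  # stand-in for a view

    return counts

counts = run_workload(10_000)
```

A real benchmark driver would issue these operations over HTTP against a cluster and record latencies per operation type rather than just counts.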

  • IC2E Workshops - Towards Performance Evaluation of Cloudant for Customer Representative Workloads
    2016 IEEE International Conference on Cloud Engineering Workshop (IC2EW), 2016
    Co-Authors: Sina Meraji, Connor Lavoy, Brian J Hall, Amy J Wang, Gabi Rothenstein, Paul Davis
    Abstract:

    NoSQL Database Management Systems (DBMS) have increased in prominence and market share in the last few years. IBM Cloudant is a well-known enterprise NoSQL DBMS offered both in the cloud and as an on-premises standalone product (Cloudant Local). Cloudant has a large number of Customers, such as Samsung, Adobe, and DHL. In this paper, we demonstrate how Cloudant maintains its high throughput for Customer-Representative workloads that mix create/read/update/delete operations with queries that contain views. Moreover, we show how database nodes can be dynamically added to or removed from the cluster without bringing down the working nodes. We also demonstrate how we improved the performance of queries with views by changing a native library.

Lorraine Morgan - One of the best experts on this subject based on the ideXlab platform.

  • Beyond the Customer
    Information and Software Technology, 2011
    Co-Authors: Kieran Conboy, Lorraine Morgan
    Abstract:

    Context: A particular strength of agile systems development approaches is that they encourage a move away from 'introverted' development, involving the Customer in all areas of development and leading to a more innovative and hence more valuable information system. However, a move toward open innovation requires a focus that goes beyond a single Customer Representative, involving a broader range of stakeholders, both inside and outside the organisation, in a continuous, systematic way. Objective: This paper provides an in-depth discussion of the applicability and implications of open innovation in an agile environment. Method: We draw on two illustrative cases from industry. Results: We highlight some distinct problems that arose when two project teams tried to combine agile and open innovation principles. For example, openness is often compromised by a perceived competitive element and a lack of transparency between business units. In addition, minimal documentation often reduces effective knowledge transfer, while the use of short iterations, stand-up meetings and the presence of an on-site Customer reduce the amount of time for sharing ideas outside the team. Conclusion: A clear understanding of the inter- and intra-organisational applicability and implications of open innovation in agile systems development is required to address key challenges for research and practice.

  • Agile Software Development - Future research in agile systems development: applying open innovation principles within the agile organisation
    Agile Software Development, 2010
    Co-Authors: Kieran Conboy, Lorraine Morgan
    Abstract:

    A particular strength of agile approaches is that they move away from ‘introverted’ development and intimately involve the Customer in all areas of development, supposedly leading to the development of a more innovative and hence more valuable information system. However, we argue that a single Customer Representative is too narrow a focus to adopt and that involvement of stakeholders beyond the software development team itself is still often quite weak and in some cases non-existent. In response, we argue that current thinking regarding innovation in agile development needs to be extended to include multiple stakeholders outside the business unit. This paper explores the intra-organisational applicability and implications of open innovation in agile systems development. Additionally, it argues for a different perspective on project management that includes collaboration and knowledge-sharing with other business units, Customers, partners, and other relevant stakeholders pertinent to the business success of an organisation, thus embracing open innovation principles.

  • OPAALS - Exploring the Role of Value Networks for Software Innovation
    Lecture Notes of the Institute for Computer Sciences Social Informatics and Telecommunications Engineering, 2010
    Co-Authors: Lorraine Morgan, Kieran Conboy
    Abstract:

    This paper describes research in progress that aims to explore the applicability and implications of open innovation practices in two firms - one that employs agile development methods and another that utilizes open source software. The open innovation paradigm has a lot in common with open source and agile development methodologies. A particular strength of agile approaches is that they move away from ‘introverted’ development, involving only the development personnel, and intimately involve the Customer in all areas of software creation, supposedly leading to the development of a more innovative and hence more valuable information system. Open source software (OSS) development also shares two key elements of the open innovation model, namely the collaborative development of the technology and shared rights to the use of the technology. However, one shortfall of agile development in particular is the narrow focus on a single Customer Representative. In response to this, we argue that current thinking regarding innovation needs to be extended to include multiple stakeholders both across and outside the organization. Additionally, for firms utilizing open source, it has been found that their position in a network of potential complementors determines the amount of superior value they create for their Customers. Thus, this paper aims to gain a better understanding of the applicability and implications of open innovation practices in firms that employ open source and agile development methodologies. In particular, a conceptual framework is derived for further testing.

Matthew E. Tolentino - One of the best experts on this subject based on the ideXlab platform.

  • HcBench: Methodology, Development, and Characterization of a Customer Usage Representative Big Data/Hadoop Benchmark
    IEEE International Symposium on Workload Characterization, 2013
    Co-Authors: Vikram A. Saletore, Karthik Krishnan, Vish Viswanathan, Matthew E. Tolentino
    Abstract:

    Big Data analytics using Map-Reduce over Hadoop has become a leading edge paradigm for distributed programming over large server clusters. The Hadoop platform is used extensively for interactive and batch analytics in ecommerce, telecom, media, retail, and social networking, and is being actively evaluated for use in other areas. However, to date no industry-standard or Customer-Representative benchmarks exist to measure and evaluate the true performance of a Hadoop cluster. Current Hadoop micro-benchmarks such as HiBench-2, GridMix-3, and Terasort are narrow functional slices of the applications that Customers run to evaluate their Hadoop clusters, and they fail to capture real usages and performance in a datacenter environment. Given that typical datacenter deployments of Hadoop process a wide variety of interactive analytic and query jobs in addition to batch transform jobs under strict Service Level Agreement (SLA) requirements, performance benchmarks used to evaluate clusters must capture the effects of concurrently running such diverse job types in production environments. In this paper, we present the methodology and development of a Customer datacenter-usage-Representative Hadoop benchmark, "HcBench", which includes a large mix of Customer-Representative interactive, query, machine learning, and transform jobs; a variety of data sizes; and compute-, storage-I/O-, and network-intensive jobs, with inter-job arrival times as in a typical datacenter environment. We present the details of this benchmark and discuss application-level, server-level, and cluster-level performance characterization collected on an Intel Sandy Bridge Xeon processor Hadoop cluster.
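
The notion of submitting mixed job types "with inter-job arrival times as in a typical datacenter environment" can be sketched as a small schedule generator. Modeling arrivals as a Poisson process (exponential inter-arrival gaps) is my own assumption for illustration; the abstract does not state which arrival distribution HcBench actually uses, and the job-type names below are taken from the abstract's categories.

```python
import random

# Job categories named in the abstract; the arrival model is an assumption.
JOB_TYPES = ["interactive", "query", "machine_learning", "transform"]

def schedule_jobs(n_jobs, mean_gap_s=30.0, seed=7):
    """Generate (submit_time_seconds, job_type) pairs with exponential
    inter-arrival gaps, i.e. a Poisson arrival process."""
    rng = random.Random(seed)
    t = 0.0
    schedule = []
    for _ in range(n_jobs):
        t += rng.expovariate(1.0 / mean_gap_s)  # gap drawn from Exp(mean_gap_s)
        schedule.append((round(t, 2), rng.choice(JOB_TYPES)))
    return schedule

jobs = schedule_jobs(100)
```

A benchmark driver would then sleep until each submit time and launch the corresponding job against the cluster, so that jobs of different types overlap as they would in production.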

  • IISWC - HcBench: Methodology, development, and characterization of a Customer usage Representative big data/Hadoop benchmark
    2013 IEEE International Symposium on Workload Characterization (IISWC), 2013
    Co-Authors: Vikram A. Saletore, Karthik Krishnan, Vish Viswanathan, Matthew E. Tolentino
    Abstract:

    Big Data analytics using Map-Reduce over Hadoop has become a leading edge paradigm for distributed programming over large server clusters. The Hadoop platform is used extensively for interactive and batch analytics in ecommerce, telecom, media, retail, and social networking, and is being actively evaluated for use in other areas. However, to date no industry-standard or Customer-Representative benchmarks exist to measure and evaluate the true performance of a Hadoop cluster. Current Hadoop micro-benchmarks such as HiBench-2, GridMix-3, and Terasort are narrow functional slices of the applications that Customers run to evaluate their Hadoop clusters, and they fail to capture real usages and performance in a datacenter environment. Given that typical datacenter deployments of Hadoop process a wide variety of interactive analytic and query jobs in addition to batch transform jobs under strict Service Level Agreement (SLA) requirements, performance benchmarks used to evaluate clusters must capture the effects of concurrently running such diverse job types in production environments. In this paper, we present the methodology and development of a Customer datacenter-usage-Representative Hadoop benchmark, "HcBench", which includes a large mix of Customer-Representative interactive, query, machine learning, and transform jobs; a variety of data sizes; and compute-, storage-I/O-, and network-intensive jobs, with inter-job arrival times as in a typical datacenter environment. We present the details of this benchmark and discuss application-level, server-level, and cluster-level performance characterization collected on an Intel Sandy Bridge Xeon processor Hadoop cluster.