Management Analysis

The experts below are selected from a list of 15,426 experts worldwide, ranked by the ideXlab platform.

Michael A. Krebs - One of the best experts on this subject based on the ideXlab platform.

  • A Sensitivity Analysis of the National Fire Management Analysis System
    Western Journal of Applied Forestry, 2004
    Co-Authors: Ervin G. Schuster, Michael A. Krebs
    Abstract:

    A sensitivity analysis was conducted of the National Fire Management Analysis System (NFMAS) to better understand the relationship between data input and model outcomes, as reflected by changes in C+NVC and MEL program options. Five input variables were selected for sensitization: Unit Mission Costs, Average Acre Costs, Net Value Change, Production Rates, and Escaped Fire Limits. A stratified random sample of 32 national forests was selected, according to the distribution of national forests within Forest Service regions and fire frequency classes, on the basis of historical fire data. NFMAS database tables were obtained and manipulated, with each variable increased and decreased at six levels (±25, ±50, and ±100%). Results indicated that Production Rates was always the most influential variable, Unit Mission Costs was always the least influential, and the influence of the other variables depended on the choice of model outcome. In general, greater sensitivity changes resulted in greater changes in model outcome, but no consistent pattern of influence could be found regarding program option. West. J. Appl. For. 19(1):5–12.
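
    The one-variable-at-a-time perturbation scheme described above can be sketched in a few lines. A minimal sketch, assuming a hypothetical run_model stand-in for an NFMAS/IIAA run; the input names mirror the five sensitized variables, but the baseline values and the toy response are invented:

    # One-at-a-time sensitivity sketch (Python). run_model is a hypothetical
    # stand-in for an NFMAS/IIAA run; all baseline values are invented.
    def run_model(inputs: dict) -> float:
        """Placeholder returning C+NVC for the given inputs (toy response)."""
        return sum(inputs.values())

    baseline = {
        "unit_mission_costs": 100.0,
        "average_acre_costs": 250.0,
        "net_value_change": 400.0,
        "production_rates": 60.0,
        "escaped_fire_limits": 10.0,
    }
    levels = [-1.00, -0.50, -0.25, 0.25, 0.50, 1.00]  # the six +/-25, 50, 100% levels

    base_output = run_model(baseline)
    influence = {}
    for var in baseline:
        deltas = []
        for pct in levels:
            inputs = dict(baseline)
            inputs[var] = baseline[var] * (1.0 + pct)  # perturb one variable only
            deltas.append(abs(run_model(inputs) - base_output))
        influence[var] = max(deltas)

    # Rank variables by the largest absolute change in C+NVC they induce.
    for var, delta in sorted(influence.items(), key=lambda kv: -kv[1]):
        print(f"{var}: max |change in C+NVC| = {delta:.1f}")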

  • Sensitivity of National Fire Management Analysis System (NFMAS) Solutions to Changes in Interagency Initial Attack (IIAA) Input Data
    1999
    Co-Authors: Ervin G. Schuster, Michael A. Krebs
    Abstract:

    A sensitivity analysis was conducted of the National Fire Management Analysis System (NFMAS) to better understand the relationship between data input and model output. After consultations with fire managers and researchers, five input variables were selected for sensitization: unit mission costs, average acre costs, net value change, production rates, and escaped fire limits. A random sample of 23 National Forests was selected, according to the distribution of forests within regions and fire frequency classes, on the basis of historical fire data. Database tables were manipulated, with each variable increased and decreased at six levels (±25, ±50, and ±100 percent). The Interagency Initial Attack Assessment (IIAA) model was run at each successive level, generating a new set of output, cost plus net value change (C+NVC), for each sensitized variable. Results were analyzed statistically: production rates and average acre costs were found to be the most influential, while unit mission costs was the least influential. In general, greater sensitivity changes resulted in greater changes in C+NVC.

    The National Fire Management Analysis System (NFMAS) was designed in the late 1970s by Richard Chase for use by the USDA Forest Service in strategic fire management and budget planning. It was later adopted by other fire-management agencies, including the USDI's Bureau of Land Management and the National Park Service. The NFMAS simulation model (NARTC 1997) currently consists of two software programs: Personal Computer Historical Analysis (PCHA), which provides historical weather and fire behavior data, and the Interagency Initial Attack Assessment (IIAA). IIAA (COMPUS 1997) is the analytical engine, a tool intended to help analyze various fire-management scenarios, or program options, that represent various combinations of fire-fighting resources and other budget items. The overriding purpose of IIAA analysis is to help identify the most efficient level (MEL) of funding for a given National Forest, that is, the program option associated with the lowest sum of the presuppression budget, emergency fire-suppression costs, and net value change of resources (the sum of positive and negative resource effects). An administrative unit's initial attack organization will use these data in developing budget requests. NFMAS is applied to a sub-forest area, the fire management zone, and results are aggregated to produce Forest, Forest Service Region, and agencywide totals. This service-wide total is used as the basis for budget proposals.

    Management of NFMAS is a major undertaking in the Forest Service. Training sessions are held at the Marana, Arizona, facility. NFMAS training is received by selected Forest Service personnel, ranging from district fire management officers to upper-level managers, and by personnel from USDI agencies and various state fire-fighting organizations. An NFMAS "certification" process is implemented at both the regional and forest levels. Certification involves inspection teams reviewing and revising data and procedures used by field personnel to help ensure compliance with national policy, promoting consistency between units. NFMAS-related databases are constructed, calibrated, and analyzed to identify the most efficient initial attack organization for each Forest. The budgets associated with those most-efficient organizations, MEL, become part of the Forest Service appropriations process.
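
    Since MEL is defined above as the program option with the lowest sum of presuppression budget, emergency suppression costs, and net value change, the selection step itself is a simple minimization. A minimal sketch; the option names and figures are invented:

    # MEL selection sketch (Python): pick the program option with the lowest
    # C+NVC. All figures are invented for illustration.
    program_options = [
        {"name": "option_A", "presuppression": 1.0, "suppression": 6.0, "nvc": 4.0},
        {"name": "option_B", "presuppression": 2.0, "suppression": 3.5, "nvc": 2.5},
        {"name": "option_C", "presuppression": 3.5, "suppression": 2.0, "nvc": 2.0},
    ]

    def c_plus_nvc(opt: dict) -> float:
        return opt["presuppression"] + opt["suppression"] + opt["nvc"]

    mel = min(program_options, key=c_plus_nvc)
    print(f"MEL: {mel['name']}, C+NVC = {c_plus_nvc(mel):.1f}")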

William A Hyman - One of the best experts on this subject based on the ideXlab platform.

  • Prospective Risk Management: Analysis, Evaluation, and Control
    Patient Safety & Quality Healthcare, 2010
    Co-Authors: Begoña Narvaez, William A Hyman
    Abstract:

    Begoña Narvaez & William A. Hyman. (2010). Prospective Risk Management: Analysis, Evaluation, and Control. Patient Safety & Quality Healthcare, September/October 2010, 7:5, pp. 34–36. Available from: http://www.psqh.com/september-october-2010/633-prospective-risk-Management-.html

    In medical device design and regulation, risk management has been embodied in the ISO 14971 standard: Medical Devices—Application of risk management to medical devices. However, the philosophy and requirements of 14971 can be applied more broadly within the healthcare setting. For example, this type of application is incorporated in the draft standard IEC 80001: Application of risk management for IT-networks incorporating medical devices (Gee, 2008). ISO, the International Organization for Standardization, is a worldwide federation of national standards bodies. IEC, the International Electrotechnical Commission, is a developer of voluntary standards for a wide range of electrotechnology, including various medical systems. As described in 14971, the modern approach to risk management is that it should be a proactive and systematic method for identifying and controlling risks. "Systematic" here means that there is a defined process and that this process is routinely applied. This approach recognizes that risk control does not happen by itself but instead must be an overt activity.

    Defining Risk Management

    ISO 14971 defines risk management as "the systematic application of management policies, procedures, and practices to the tasks of analyzing, evaluating, and controlling risk." Risk is formally defined for specific adverse events as having two components: the probability of the occurrence of harm and the severity of that harm if it occurs. The general objective in risk management is to identify those risks that must be mitigated, to take steps to mitigate them, and to assess whether they have in fact been mitigated. Important here is the realization that risks may not be completely eliminated, for a variety of practical reasons, and that therefore there will be residual risks. In this context, safety means freedom from unacceptable risk.

    Risk Management Process

    Risk management as defined in 14971 has three main steps: risk analysis, risk evaluation, and risk control.

    Risk Analysis

    Risk analysis means identifying all hazards associated with the devices, procedures, and other activities of the hospital, in both normal and fault conditions. A normal condition is when, for example, equipment is operating properly, while a fault condition is when the equipment has undergone some kind of failure. On the personnel side, a normal condition might be when a nursing unit is fully staffed, while a fault condition might be unplanned understaffing. Similarly, a normal condition is when all nurses have been trained on the brand of infusion pump in use, while a fault condition might be when an agency nurse is brought in who is not familiar with the particular pump in use. Fault conditions are usually considered one at a time in order to avoid the endless permutations of multiple faults. However, multiple faults can of course actually occur, and some multiple-fault scenarios should be considered.

    Risk Evaluation

    Following risk analysis comes risk evaluation, in which the risk(s) associated with identified hazardous situations are estimated by giving at least a qualitative scale to the two components of risk: probability and severity.
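
    The qualitative scoring just described, and the acceptability decision that follows from it, can be made concrete with a small sketch. The ordinal scales, thresholds, and scores below are invented; only the example hazards echo ones from the text:

    # Qualitative risk evaluation sketch (Python): ordinal probability and
    # severity scales combined into an acceptability decision. The scales and
    # thresholds are invented for illustration.
    PROBABILITY = {"rare": 1, "occasional": 2, "frequent": 3}
    SEVERITY = {"minor": 1, "serious": 2, "critical": 3}

    def evaluate(prob: str, sev: str) -> str:
        score = PROBABILITY[prob] * SEVERITY[sev]
        if score <= 2:
            return "acceptable"
        if score <= 4:
            return "reduce if practicable"
        return "unacceptable: risk control required"

    hazards = [
        ("manual pump programming error", "occasional", "critical"),
        ("nursing unit understaffed", "occasional", "serious"),
        ("alarm response delayed", "rare", "critical"),
    ]
    for name, p, s in hazards:
        print(f"{name}: {evaluate(p, s)}")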
    It must then be decided whether the risk is acceptable as it currently exists, or whether it needs to be reduced. It might be tempting here to assert that all risks should be addressed and minimized with equal attention. However, in most cases this is impractical, because of the resources that would be required and the possible lack of means to do so. This realization is part of the concept that risk management does not mean the elimination of all risks.

    Risk Control

    When the estimated risk exceeds the internally determined acceptable limit, the risk needs to be reduced. To achieve this, the following approaches are generally considered:

    • Improving inherent safety by (re)design of systems or processes.
    • Adding protective measures through new systems or processes.
    • Focusing on training.

    As listed, these approaches are in a hierarchical order of effectiveness. Controlling the risk by improving inherent safety is always the best choice, and reliance on training is always the weakest choice. An example here is the risk of patient harm from an overdose of a drug from a manually programmed infusion pump, where the overdose is a result of the operator inputting a wrong value. While this theoretically should not occur, it not only does occur but occurs with some frequency. A new pump with a barcode reader that reads pharmacy-supplied drug information can eliminate the programming error, because there is then no programming. However, the barcode information must of course be correct, which then presents a new potential hazard. "Smart" pumps with dosing libraries fall under protective measures. These pumps can catch some programming errors, but they do not eliminate all such errors, given the multiple proper uses of many drugs and the propensity of users to overrule an alert that is generated. Telling the users to be more careful, or to double-check their work, is in the third category. If hazards are considered instead of harm, then detection before harm occurs can also be included in the analysis. Systems with alarms are a good example here. Assuming that an alarm always works perfectly and is always responded to in a timely manner, the hazard that triggers the alarm can be prevented from causing harm. However, the likelihood of detection and response must be included in this analysis.

    Reassessment of Risk

    After a control measure for a specific risk is identified, a further analysis must be done of the expected impact of the control measure. Does the control measure reduce severity or probability, or add detection? In predicting effectiveness, the temptation to be too generous about the degree of improvement must be avoided. It is also appropriate to assess whether the risk control measures can introduce any new hazards. Once a control method is decided upon, it must of course be implemented. In turn, the implementation must be verified to see that it has actually become operational. If there is a significant time element involved in implementation, then interim control measures may also have to be considered. After implementation, additional analysis should be undertaken to see whether the risk that was previously identified has actually been affected by the new control measures. As with all important activities, proper documentation is required as an ongoing record of the process. This serves as a means to control work in progress, to monitor results, and to demonstrate when necessary that there is an ongoing risk management process.
    Methods Used in the Risk Management Process

    ISO 14971 includes a discussion of several relatively standard techniques for analyzing risks. These techniques or tools can be combined with others to create a systematic assessment. However, it must be remembered that tools are not a substitute for intelligent thoughtfulness; they are only an application guide.

    A valuable first step is the Preliminary Hazard Analysis (PHA), during which hazards and hazardous situations are first identified based on generally known hazards, personal and professional experience, and local conditions of activities, equipment, staffing, and adverse events. An effective PHA requires a good deal of pessimism (or realism). It also sets the tone of risk management by not allowing an overly rosy perspective to lead to outright dismissal of a hazard.

    Fault Tree Analysis (FTA) is a powerful tool that does not seem to be widely used in healthcare, although there are some published examples (Hyman & Johnson, 2008). An FTA is based on a specific hypothetical negative event. For this "top event" a number of possible causes are then identified. Some of these may be able to cause the top event by themselves, and thus they are a set of "or" events. Others may have to occur in combination; these are "and" events. The analysis is then continued to additional layers of causation. This has some similarity to a Five Whys type of root cause analysis, except that FTA is applied prospectively to cover many possible causes, while the Five Whys is generally applied retrospectively after a specific event. There is a clear link, however: if an FTA is complete, then any root cause analysis of a specific event should trace a specific path in the FTA. Once an FTA is constructed, it can be used as a graphic tool to illustrate and discuss the associated risks, and to identify ways to control causation pathways. Any proposed control measure must impact the FTA by either eliminating a cause or reducing the probability of that cause occurring.

    Another risk analysis method is Failure Mode and Effects Analysis (FMEA). This approach is based on "What happens if...?" questions to analyze specific potential faults. In engineering, this is often a specific component of a design, but here it can be a broader system or human failure. As the name suggests, the first step is to consider the various ways in which a specific system can fail. These are the failure modes. The next step is to assess the effects of each specific failure mode with respect to both hazard and harm. The risks of these hazards are then assessed for severity and probability. Mitigation methods can then be identified to address either the failure mode or the effects. Healthcare FMEA (HFMEA®), developed by the Veterans Administration, is a form of FMEA that is specific to healthcare. HFMEA begins with developing an understanding of the multiple steps in a process and then applies FMEA to each of these steps. This process approach makes HFMEA similar to a process map with FMEA applied at each step.
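
    The AND/OR gate structure described above for FTA lends itself to a short sketch. A minimal sketch; the tree shape and event names are invented, echoing the infusion-pump overdose example from the text:

    # Fault tree sketch (Python): a top event fed by OR and AND gates over
    # basic causes. Tree shape and event names are invented for illustration.
    from dataclasses import dataclass, field
    from typing import List, Union

    @dataclass
    class Gate:
        kind: str                                   # "AND" or "OR"
        children: List[Union["Gate", str]] = field(default_factory=list)

    def occurs(node, basic_events: set) -> bool:
        if isinstance(node, str):                   # a basic cause
            return node in basic_events
        results = [occurs(c, basic_events) for c in node.children]
        return all(results) if node.kind == "AND" else any(results)

    # Top event: patient overdose from an infusion pump.
    top = Gate("OR", [
        Gate("AND", ["wrong value entered", "dose alert overridden"]),
        "wrong barcode data supplied",
    ])
    print(occurs(top, {"wrong value entered", "dose alert overridden"}))  # True
    print(occurs(top, {"wrong value entered"}))                           # False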

Wolfgang Ketter - One of the best experts on this subject based on the ideXlab platform.

  • Agent-assisted supply chain management: Analysis and lessons learned
    Decision Support Systems, 2014
    Co-Authors: William Groves, Maria Gini, John Collins, Wolfgang Ketter
    Abstract:

    This work explores "big data" analysis in the context of supply chain management. Specifically, we propose the use of agent-based competitive simulation as a tool to develop complex decision-making strategies and to stress-test them under a variety of market conditions. We propose an extensive set of business key performance indicators (KPIs) and apply them to analyze market dynamics. We present these results through statistics and visualizations. Our testbed is a competitive simulation, the Trading Agent Competition for Supply-Chain Management (TAC SCM), which simulates a one-year product life-cycle in which six autonomous agents compete to procure component parts and sell finished products to customers. The paper provides analysis techniques and insights applicable to other supply chain environments. © 2013 Elsevier B.V.
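
    A minimal sketch of the KPI idea, computing one simple indicator (profit margin per agent) from transaction records; the agent names, log fields, and figures are invented, and real TAC SCM logs are far richer:

    # KPI sketch (Python): profit margin per agent from invented transaction
    # records; a real TAC SCM analysis would track many more indicators.
    transactions = [
        {"agent": "agent_1", "revenue": 1200.0, "cost": 950.0},
        {"agent": "agent_1", "revenue": 800.0, "cost": 700.0},
        {"agent": "agent_2", "revenue": 1500.0, "cost": 1450.0},
    ]

    def profit_margin(rows, agent: str) -> float:
        rev = sum(r["revenue"] for r in rows if r["agent"] == agent)
        cost = sum(r["cost"] for r in rows if r["agent"] == agent)
        return (rev - cost) / rev if rev else 0.0

    for agent in ("agent_1", "agent_2"):
        print(f"{agent}: profit margin = {profit_margin(transactions, agent):.1%}")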

Ervin G. Schuster - One of the best experts on this subject based on the ideXlab platform.

  • A Sensitivity Analysis of the National Fire Management Analysis System
    Western Journal of Applied Forestry, 2004
    Co-Authors: Ervin G. Schuster, Michael A. Krebs
    Abstract: identical to the entry listed above under Michael A. Krebs.

  • Sensitivity of National Fire Management Analysis System (NFMAS) Solutions to Changes in Interagency Initial Attack (IIAA) Input Data
    1999
    Co-Authors: Ervin G. Schuster, Michael A. Krebs
    Abstract: identical to the entry listed above under Michael A. Krebs.