The experts below are selected from a list of 16,092 experts worldwide ranked by the ideXlab platform
Liudong Xing - One of the best experts on this subject based on the ideXlab platform.
-
Reliability Evaluation of Unrepairable k-out-of-n: G Systems with Phased Mission Requirements Based on Record Values
Reliability Engineering & System Safety, 2018
Co-Authors: Guanjun Wang, Liudong Xing, Rui Peng
Abstract: In this paper, the reliability evaluation problem for k-out-of-n: G phased-mission systems with imperfect fault coverage is studied. The system is composed of n identical components, and the mission consists of multiple, consecutive, non-overlapping phases. Each phase specifies a minimum number of working components, so the system forms a certain k-out-of-n: G system in each phase. The failure distributions of the components are affected by the working circumstances of the mission phases, and each component's degradation accumulates across phases. Formulas for computing the state probabilities of the system at different phases and the overall mission reliability are derived, taking imperfect fault coverage of the components into account. An explicit expression for mission reliability is presented for phased-mission systems with the same component requirement in all phases. In numerical examples, the mission reliability of the system is calculated, and the optimal number of components that maximizes reliability for a given phased mission is also obtained.
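To make the underlying combinatorics concrete, here is a minimal sketch of the classic single-phase k-out-of-n: G reliability formula with one element-level coverage probability. This is not the paper's method (which handles phase-dependent failure distributions and per-phase requirements); the function name and parameters are illustrative.

```python
from math import comb

def k_out_of_n_reliability(n, k, p, c):
    """Reliability of a k-out-of-n: G system with imperfect fault
    coverage (element-level coverage).

    p : probability that a component works at mission end
    c : probability that a component failure is covered (isolated)

    The system succeeds if at least k components work AND every
    component failure was covered; an uncovered failure brings the
    whole system down.
    """
    return sum(comb(n, j) * p**j * ((1 - p) * c)**(n - j)
               for j in range(k, n + 1))

# With perfect coverage (c = 1) this is the plain k-out-of-n: G formula.
print(k_out_of_n_reliability(5, 3, 0.9, 0.95))
```

Note that with c < 1 each failed component contributes the factor (1 - p)·c rather than (1 - p), which is what makes adding redundancy a double-edged sword under imperfect coverage.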
-
Binary Decision Diagram Based Reliability Evaluation of k-out-of-(n+k) Warm Standby Systems Subject to Fault-Level Coverage
Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability, 2013
Co-Authors: Qingqing Zhai, Liudong Xing, Rui Peng, Jun Yang
Abstract: Warm standby sparing is a fault-tolerance technique that attempts to improve system reliability while balancing energy consumption and recovery time. However, when the imperfect fault coverage effect (an uncovered component fault can propagate and cause the whole system to fail) is considered, the reliability of a warm standby system can decrease as the level of redundancy increases. This article studies the reliability of a warm standby system subject to imperfect fault coverage, in particular fault-level coverage, where the coverage probability of a component depends on the number of failed components in the system. The suggested approach is combinatorial and based on a generalized binary decision diagram technique. The complexity of the binary decision diagram construction is analyzed, and several case studies illustrate the application of the approach.
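A simplified static sketch of fault-level coverage follows. The paper's BDD approach handles the general warm-standby case; this illustrative model ignores dormancy factors and failure order, assumes a 1-out-of-n structure, and all names are assumptions. It does reproduce the counterintuitive effect described above: with coverage that degrades as faults accumulate, adding a spare can lower reliability.

```python
from math import comb

def warm_standby_reliability(n, q, cov):
    """Mission reliability of a 1-out-of-n redundant system under
    fault-level coverage (FLC).

    q      : probability that a unit fails during the mission
    cov[i] : probability a fault is covered when i units have already
             failed (coverage typically degrades as faults accumulate)

    The system survives if at least one unit still works and every
    fault that occurred was covered (an uncovered fault is assumed to
    crash the whole system).
    """
    r = 0.0
    for m in range(n):                      # m failed units, m < n
        covered = 1.0
        for i in range(m):                  # i faults seen before this one
            covered *= cov[i]
        r += comb(n, m) * q**m * (1 - q)**(n - m) * covered
    return r

# Coverage that degrades with each additional fault: a fourth unit can
# lower reliability below that of the three-unit system.
cov = [0.95, 0.85, 0.70, 0.50]
print(warm_standby_reliability(3, 0.2, cov),
      warm_standby_reliability(4, 0.2, cov))
```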
-
Reliability Evaluation of Phased-Mission Systems with Imperfect Fault Coverage and Common-Cause Failures
IEEE Transactions on Reliability, 2007
Co-Authors: Liudong Xing
Abstract: This paper proposes efficient methods to assess the reliability of phased-mission systems (PMS) considering both imperfect fault coverage (IPC) and common-cause failures (CCF). IPC introduces multimode failures that must be considered for accurate reliability analysis of PMS. Another analytical difficulty is allowing for multiple CCF that can affect different subsets of system components and can occur s-dependently. Our methodology for resolving these difficulties is to separate the consideration of IPC and CCF from the combinatorics of the binary decision diagram-based solution, adjusting the input and output of the program to generate the reliability of PMS with IPC and CCF. According to the separation order, two equivalent approaches are developed. The applications and advantages of the approaches are illustrated through examples. PMS without IPC and/or CCF appear as special cases of the approaches.
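As a baseline for the phased-mission combinatorics (deliberately ignoring imperfect coverage and common-cause failures, which are the paper's actual focus), a mission over n identical non-repairable components can be evaluated with a simple survivor-count recursion. All names and the identical-component assumption are illustrative.

```python
from math import comb

def phased_mission_reliability(n, phases):
    """Mission reliability for n identical, non-repairable components.

    phases : list of (s, k) pairs, where s is the probability that a
             working component survives the phase and k is the minimum
             number of components that must be working at phase end.
    """
    # dist[m] = probability that exactly m components are still working
    dist = [0.0] * (n + 1)
    dist[n] = 1.0
    for s, k in phases:
        nxt = [0.0] * (n + 1)
        for m in range(n + 1):
            if dist[m] == 0.0:
                continue
            # j survivors out of m working components; mission needs j >= k
            for j in range(k, m + 1):
                nxt[j] += dist[m] * comb(m, j) * s**j * (1 - s)**(m - j)
        dist = nxt
    return sum(dist)
```

For a single phase this collapses to the usual k-out-of-n: G formula; for multiple phases the cutoff k is applied at every phase boundary, which is what distinguishes a phased mission from a single long mission.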
I Pomeranz - One of the best experts on this subject based on the ideXlab platform.
-
Reduced Fault Coverage as a Target for Design Scaffolding Security
International On-Line Testing Symposium, 2020
Co-Authors: I Pomeranz, Sandip Kundu
Abstract: The hardware design process adds, at each level, scaffolding logic to aid test, debug, and engineering changes. Fault injection attacks can place a design in a non-functional state where the scaffolding is utilized to obtain information not intended for the user. To counter such security threats, this paper suggests including in the design logic that identifies invalid state patterns and resets or holds a sufficient part of the state to prevent useful computations from occurring. To support such a solution, the paper also suggests that the fault coverage of a scan-based test set for single stuck-at faults can be used to design the reset and hold logic, since such a test set relies on non-functional states for fault detection. Without the reset and hold logic, the test set achieves a high fault coverage (over 80% in the experiments reported in this paper). With the reset and hold logic, the fault coverage of the same test set falls below a preselected target (30% in the experiments reported in this paper). The reduction occurs because much of the non-functional state space is no longer reachable, which also eliminates the security risk associated with those states.
-
Design for Testability for Improved Path Delay Fault Coverage of Critical Paths
International Conference on VLSI Design, 2008
Co-Authors: I Pomeranz, S M Reddy
Abstract: The path delay fault coverage achievable for a circuit may be low even when enhanced scan is available and only faults associated with critical paths are considered. To address this issue, we describe a design-for-testability (DFT) approach that targets the critical (or longest) paths of the circuit. In a basic step of the proposed procedure, a fanout branch that is not on a longest path is disconnected from its stem and driven from a new input, in order to reduce the dependencies between off-path inputs of a target path delay fault. We present experimental results demonstrating the increase in coverage of faults associated with longest paths as the number of new inputs is increased. We also discuss the implementation of the DFT approach in the context of scan design.
-
Primary Input Vectors to Avoid in Random Test Sequences for Synchronous Sequential Circuits
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, 2008
Co-Authors: I Pomeranz, S M Reddy
Abstract: Random test sequences may be used for manufacturing testing as well as for simulation-based design verification. This paper studies one of the reasons that random primary input sequences achieve very low fault coverage for synchronous sequential circuits. It is shown that a synchronous sequential circuit may have input cubes, or incompletely specified input vectors, that synchronize a subset of its state variables, i.e., force them to specified values. When an input cube c that synchronizes the subset of state variables S(c) has a small number of specified inputs, the input vectors covered by it may appear often in a random primary input sequence. As a result, the sequence repeatedly forces the same values on the state variables in S(c), which may limit the fault coverage the sequence can obtain. To address this issue, a procedure is described for modifying a random primary input sequence to eliminate input vectors that synchronize subsets of state variables. It is demonstrated that this procedure has a significant effect on the fault coverage achievable by random primary input sequences.
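The core idea can be sketched in a few lines. Note the paper modifies an already-generated sequence; this illustrative version instead rejects offending vectors at generation time, and the cube representation and function names are assumptions.

```python
import random

def covered(vector, cube):
    """True if the fully specified vector matches every specified bit of
    the cube (cube maps input position -> required value)."""
    return all(vector[pos] == val for pos, val in cube.items())

def random_sequence(length, width, avoid_cubes, seed=0):
    """Random primary-input sequence that rejects any vector covered by
    a synchronizing cube, so the same subset of state variables S(c) is
    not forced to the same values over and over.  Assumes the cubes do
    not cover the entire input space."""
    rng = random.Random(seed)
    seq = []
    while len(seq) < length:
        v = tuple(rng.randint(0, 1) for _ in range(width))
        if any(covered(v, c) for c in avoid_cubes):
            continue                     # would synchronize S(c); redraw
        seq.append(v)
    return seq

# A cube with few specified inputs, e.g. {0: 1}, covers half of all
# vectors, which is exactly why it appears so often in random sequences.
print(random_sequence(4, 3, [{0: 1}]))
```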
-
PROPTEST: A Property Based Test Pattern Generator for Sequential Circuits Using Test Compaction
Design Automation Conference, 1999
Co-Authors: S M Reddy, I Pomeranz
Abstract: We describe a property based test generation procedure that uses static compaction to generate test sequences that achieve high fault coverage at a low computational complexity. A class of test compaction procedures is proposed and used in the property based test generator. Experimental results indicate that these compaction procedures can be used to implement the proposed test generator, achieving high fault coverage with relatively small run times.
-
Vector Restoration Based Static Compaction of Test Sequences for Synchronous Sequential Circuits
International Conference on Computer Design, 1997
Co-Authors: I Pomeranz, S M Reddy
Abstract: The authors propose a new procedure for static compaction that belongs to the class of procedures that omit test vectors from a given test sequence in order to reduce its size without reducing the fault coverage. Previous procedures that achieved high levels of compaction with this technique attempted to omit test vectors one at a time or in consecutive subsequences; consequently, each omission required extensive simulation to determine its effect on the fault coverage. The proposed procedure first omits (almost) all the test vectors from the sequence and then restores some of them as necessary to achieve the required fault coverage. The decision to restore a vector requires simulation of a single fault, so the overall computational effort is significantly lower. The loss of compaction compared to schemes that omit vectors one at a time or in subsequences is small in most cases. Experimental results are presented to support these claims.
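A toy sketch of the restoration idea follows. It is heavily simplified: the `detects` predicate stands in for real single-fault simulation, and all names and the backward-restoration order are illustrative assumptions rather than the paper's exact procedure.

```python
def restoration_compaction(sequence, fault_times, detects):
    """Vector-restoration static compaction (toy sketch).

    sequence    : original test sequence (list of vectors)
    fault_times : fault -> index at which the original sequence first
                  detects it
    detects     : stand-in for a single-fault simulator; returns True if
                  the kept subsequence detects the fault

    Start from an empty sequence; for each fault not yet detected,
    restore vectors backward from its detection index until the fault
    is detected again.
    """
    kept = [False] * len(sequence)
    # handle faults detected latest first; their restored vectors often
    # cover faults detected earlier as well
    for f, t in sorted(fault_times.items(), key=lambda x: -x[1]):
        if detects([v for v, k in zip(sequence, kept) if k], f):
            continue                    # already detected, restore nothing
        for i in range(t, -1, -1):
            if kept[i]:
                continue
            kept[i] = True
            if detects([v for v, k in zip(sequence, kept) if k], f):
                break
    return [v for v, k in zip(sequence, kept) if k]

# Toy run: each fault is "detected" once its own vector is present,
# so only vectors 'b' and 'e' are restored.
seq = list("abcdef")
print(restoration_compaction(seq, {"b": 1, "e": 4}, lambda sub, f: f in sub))
```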
H Levendel - One of the best experts on this subject based on the ideXlab platform.
-
Availability Requirement for a Fault Management Server in High-Availability Communication Systems
IEEE Transactions on Reliability, 2003
Co-Authors: H Levendel
Abstract: This paper investigates the availability requirement for the fault management server in high-availability communication systems. The study shows that the availability of the fault management server need not be 99.999% in order to guarantee 99.999% system availability, as long as the fail-safe ratio (the probability that a failure of the fault management server does not bring down the system) and the fault coverage ratio (the probability that a failure in the system can be detected and recovered by the fault management server) are sufficiently high. Tradeoffs can be made among the availability of the fault management server, the fail-safe ratio, and the fault coverage ratio to optimize system availability. A cost-effective design for the fault management server is proposed.
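The shape of this tradeoff can be illustrated with a simple steady-state outage model. This is not the paper's formulation; the rates, names, and parameter values below are all assumptions chosen for illustration.

```python
def system_unavailability(lam_sys, lam_fm, mttr, c, s):
    """Illustrative steady-state outage model (not the paper's exact
    formulation).

    lam_sys : system fault rate (per hour)
    lam_fm  : fault management server failure rate (per hour)
    mttr    : mean time to repair an outage (hours)
    c       : fault coverage ratio (server detects and recovers a fault)
    s       : fail-safe ratio (server failure does not crash the system)

    Only uncovered system faults and non-fail-safe server failures
    cause a system outage.
    """
    outage_rate = (1 - c) * lam_sys + (1 - s) * lam_fm
    return outage_rate * mttr / (1 + outage_rate * mttr)

# With these (assumed) numbers the server alone is only about 99.8%
# available, yet system unavailability stays below 1e-5 (five nines)
# because c and s keep most failures from becoming outages.
u = system_unavailability(0.001, 0.001, 2.0, 0.999, 0.999)
print(1 - u)
```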
S M Reddy - One of the best experts on this subject based on the ideXlab platform.
Srimat T Chakradhar - One of the best experts on this subject based on the ideXlab platform.
-
Hybrid Delay Scan: A Low Hardware Overhead Scan-Based Delay Test Technique for High Fault Coverage and Compact Test Sets
Design Automation and Test in Europe, 2004
Co-Authors: Seongmoon Wang, Xiao Liu, Srimat T Chakradhar
Abstract: A novel scan-based delay test approach, referred to as the hybrid delay scan, is proposed in this paper. The proposed method combines the advantages of the skewed-load and broad-side approaches. Unlike the skewed-load approach, whose design requirement is often too costly to meet because of the fast-switching scan enable signal, the hybrid delay scan does not require a strong buffer or buffer tree to drive that signal. The hardware overhead added to standard scan designs to implement the hybrid approach is negligible, and since the fast scan enable signal is generated internally, no external pin is required. The transition delay fault coverage achieved by the hybrid approach is equal to or higher than that achieved by the broad-side approach for all ISCAS 89 benchmark circuits; on average, the hybrid approach improves fault coverage by about 4.5% over the broad-side approach.
Design Automation and Test in Europe, 2004Co-Authors: Seongmoon Wang, Xiao Liu, Srimat T ChakradharAbstract:A novel scan-based delay test approach, referred as the hybrid delay scan, is proposed in this paper. The proposed scan-based delay testing method combines advantages of the skewed-load and broad-side approaches. Unlike the skewed-load approach whose design requirement is often too costly to meet due to the fast switching scan enable signal, the hybrid delay scan does not require a strong buffer or buffer tree to drive the fast switching scan enable signal. Hardware overhead added to standard scan designs to implement the hybrid approach is negligible. Since the fast scan enable signal is internally generated, no external pin is required. Transition delay Fault Coverage achieved by the hybrid approach is equal to or higher than that achieved by the broad-side load for all ISCAS 89 benchmark circuits. On an average, about 4.5% improvement in Fault Coverage is obtained by the hybrid approach over the broad-side approach.