Unix File System

14,000,000 Leading Edge Experts on the ideXlab platform

Scan Science and Technology

Contact Leading Edge Experts & Companies

The experts below are selected from a list of 4,029 experts worldwide, ranked by the ideXlab platform

Rob Johnson - One of the best experts on this subject based on the ideXlab platform.

  • AsiaCCS - Fixing Races For Good: Portable and Reliable Unix File-System Race Detection
    Proceedings of the 10th ACM Symposium on Information Computer and Communications Security - ASIA CCS '15, 2015
    Co-Authors: Xiang Cai, Rucha Lale, Xincheng Zhang, Rob Johnson
    Abstract:

    We present a system for performing arbitrary sequences of file-system operations and provably detecting any violation of serializable isolation semantics, i.e., any interleaving of attacker and defender actions is equivalent to a non-interleaved sequence of attacker and defender actions. Thus, our system provides a provably secure defense against all Unix file-name race conditions, including the infamous access/open race. Our solution operates entirely in user space and is portable to any POSIX.1-2008 system, making it usable today. Developers can adopt our solution selectively, using it for security-critical code and the standard POSIX interface for non-security-critical parts of their programs. Furthermore, the proofs of correctness suggest several simple improvements to the POSIX standard.

  • IEEE Symposium on Security and Privacy - Exploiting Unix File-System Races via Algorithmic Complexity Attacks
    2009 30th IEEE Symposium on Security and Privacy, 2009
    Co-Authors: Xiang Cai, Yuwei Gui, Rob Johnson
    Abstract:

    We defeat two proposed Unix file-system race condition defense mechanisms. First, we attack the probabilistic defense mechanism of Tsafrir et al., published at USENIX FAST 2008. We then show that the same attack breaks the kernel-based dynamic race detector of Tsyrklevich and Yee, published at USENIX Security 2003. We then argue that all kernel-based dynamic race detectors must have a model of the programs they protect or provide imperfect protection. The techniques we develop for performing these attacks work on multiple Unix operating systems, on uni- and multi-processors, and are useful for exploiting most Unix file-system races. We conclude that programmers should use provably secure methods for avoiding race conditions when accessing the file system.

Liuba Shrira - One of the best experts on this subject based on the ideXlab platform.

  • SOSP - Replication in the Harp File System
    Proceedings of the thirteenth ACM symposium on Operating systems principles - SOSP '91, 1991
    Co-Authors: Barbara Liskov, Sanjay Ghemawat, Robert Gruber, Paul Johnson, Liuba Shrira, M. Williams
    Abstract:

    This paper describes the design and implementation of the Harp file system. Harp is a replicated Unix file system accessible via the VFS interface. It provides highly available and reliable storage for files and guarantees that file operations are executed atomically in spite of concurrency and failures. It uses a novel variation of the primary-copy replication technique that provides good performance because it allows us to trade disk accesses for network communication. Harp is intended to be used within a file service in a distributed network; in our current implementation, it is accessed via NFS. Preliminary performance results indicate that Harp provides equal or better response time and system capacity than an unreplicated implementation of NFS that uses Unix files directly.
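The "trade disk accesses for network communication" idea can be sketched as a commit rule: the primary treats an operation as committed once a majority of replicas hold the log record in volatile memory, and disk writes happen later in the background. The toy model below is ours, not Harp's code (Harp's real protocol also handles view changes, failures, and UPS-backed memory); it illustrates only the commit rule:

```c
#include <string.h>

#define LOG_CAP 128

typedef struct { char op[32]; } LogRec;

typedef struct {
    LogRec log[LOG_CAP];
    int tail;       /* next free log slot (bounds checks omitted) */
    int committed;  /* prefix held, in memory, by a majority */
    int applied;    /* prefix written to disk; lags behind committed */
} Replica;

/* A replica appends the record and acks without touching its disk. */
static int log_append(Replica *r, const char *op) {
    strncpy(r->log[r->tail].op, op, sizeof r->log[r->tail].op - 1);
    r->log[r->tail].op[sizeof r->log[r->tail].op - 1] = '\0';
    r->tail++;
    return 1;  /* ack */
}

/* Primary commits once a majority of the n = nbackups + 1 replicas
 * (itself included) hold the record in memory; the disk write is
 * deferred, trading a disk access for a network round trip. */
int primary_submit(Replica *primary, Replica *backups, int nbackups,
                   const char *op) {
    int acks = log_append(primary, op);           /* own in-memory copy */
    for (int i = 0; i < nbackups; i++)
        acks += log_append(&backups[i], op);      /* "network" send */
    if (2 * acks > nbackups + 1) {
        primary->committed = primary->tail;
        return 0;  /* committed; disk apply happens asynchronously */
    }
    return -1;
}
```

In Harp itself, acknowledging from volatile memory is safe because replicas have UPS-backed power, so an ack never requires a synchronous disk write.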

  • Workshop on the Management of Replicated Data - A replicated Unix File System
    Proceedings of the 4th workshop on ACM SIGOPS European workshop - EW 4, 1990
    Co-Authors: Barbara Liskov, Robert Gruber, Paul Johnson, Liuba Shrira
    Abstract:

    An implementation of a replicated Unix file system for use via the NFS protocol is reported. The replication method is intended to support the following goals: when used via NFS, the system should provide the same semantics as an unreplicated NFS server, and it should be usable with whatever NFS client code exists at the client machine; the system should not depend on proprietary information; the system should continue to provide service even when one replica is crashed or inaccessible, while keeping only two copies of each file; the system should perform as reliably as a single, unreplicated NFS server; and the system should provide response time comparable to that of a single NFS server. In particular, the delay observed by the client in doing a read or write should be no greater than with a single server. The system organization, replication method, performance, and Unix issues are discussed.

  • Efficient recovery in Harp (replicated Unix File System)
    [1992 Proceedings] Second Workshop on the Management of Replicated Data, 1992
    Co-Authors: Barbara Liskov, Sanjay Ghemawat, Robert Gruber, Paul Johnson, Liuba Shrira
    Abstract:

    Harp is a replicated Unix file system accessible via the VFS interface. It provides highly available and reliable storage for files and guarantees that file operations are executed atomically in spite of concurrency and failures. Replication enables Harp to safely trade disk accesses for network communication and thus to provide good performance both during normal operation and during recovery. The authors focus on the techniques Harp uses to achieve efficient recovery.

Xiang Cai - One of the best experts on this subject based on the ideXlab platform.

  • AsiaCCS - Fixing Races For Good: Portable and Reliable Unix File-System Race Detection
    Proceedings of the 10th ACM Symposium on Information Computer and Communications Security - ASIA CCS '15, 2015
    Co-Authors: Xiang Cai, Rucha Lale, Xincheng Zhang, Rob Johnson
    Abstract:

    We present a system for performing arbitrary sequences of file-system operations and provably detecting any violation of serializable isolation semantics, i.e., any interleaving of attacker and defender actions is equivalent to a non-interleaved sequence of attacker and defender actions. Thus, our system provides a provably secure defense against all Unix file-name race conditions, including the infamous access/open race. Our solution operates entirely in user space and is portable to any POSIX.1-2008 system, making it usable today. Developers can adopt our solution selectively, using it for security-critical code and the standard POSIX interface for non-security-critical parts of their programs. Furthermore, the proofs of correctness suggest several simple improvements to the POSIX standard.

  • IEEE Symposium on Security and Privacy - Exploiting Unix File-System Races via Algorithmic Complexity Attacks
    2009 30th IEEE Symposium on Security and Privacy, 2009
    Co-Authors: Xiang Cai, Yuwei Gui, Rob Johnson
    Abstract:

    We defeat two proposed Unix file-system race condition defense mechanisms. First, we attack the probabilistic defense mechanism of Tsafrir et al., published at USENIX FAST 2008. We then show that the same attack breaks the kernel-based dynamic race detector of Tsyrklevich and Yee, published at USENIX Security 2003. We then argue that all kernel-based dynamic race detectors must have a model of the programs they protect or provide imperfect protection. The techniques we develop for performing these attacks work on multiple Unix operating systems, on uni- and multi-processors, and are useful for exploiting most Unix file-system races. We conclude that programmers should use provably secure methods for avoiding race conditions when accessing the file system.

Barbara Liskov - One of the best experts on this subject based on the ideXlab platform.

  • SOSP - Replication in the Harp File System
    Proceedings of the thirteenth ACM symposium on Operating systems principles - SOSP '91, 1991
    Co-Authors: Barbara Liskov, Sanjay Ghemawat, Robert Gruber, Paul Johnson, Liuba Shrira, M. Williams
    Abstract:

    This paper describes the design and implementation of the Harp file system. Harp is a replicated Unix file system accessible via the VFS interface. It provides highly available and reliable storage for files and guarantees that file operations are executed atomically in spite of concurrency and failures. It uses a novel variation of the primary-copy replication technique that provides good performance because it allows us to trade disk accesses for network communication. Harp is intended to be used within a file service in a distributed network; in our current implementation, it is accessed via NFS. Preliminary performance results indicate that Harp provides equal or better response time and system capacity than an unreplicated implementation of NFS that uses Unix files directly.

  • Workshop on the Management of Replicated Data - A replicated Unix File System
    Proceedings of the 4th workshop on ACM SIGOPS European workshop - EW 4, 1990
    Co-Authors: Barbara Liskov, Robert Gruber, Paul Johnson, Liuba Shrira
    Abstract:

    An implementation of a replicated Unix file system for use via the NFS protocol is reported. The replication method is intended to support the following goals: when used via NFS, the system should provide the same semantics as an unreplicated NFS server, and it should be usable with whatever NFS client code exists at the client machine; the system should not depend on proprietary information; the system should continue to provide service even when one replica is crashed or inaccessible, while keeping only two copies of each file; the system should perform as reliably as a single, unreplicated NFS server; and the system should provide response time comparable to that of a single NFS server. In particular, the delay observed by the client in doing a read or write should be no greater than with a single server. The system organization, replication method, performance, and Unix issues are discussed.

  • Efficient recovery in Harp (replicated Unix File System)
    [1992 Proceedings] Second Workshop on the Management of Replicated Data, 1992
    Co-Authors: Barbara Liskov, Sanjay Ghemawat, Robert Gruber, Paul Johnson, Liuba Shrira
    Abstract:

    Harp is a replicated Unix file system accessible via the VFS interface. It provides highly available and reliable storage for files and guarantees that file operations are executed atomically in spite of concurrency and failures. Replication enables Harp to safely trade disk accesses for network communication and thus to provide good performance both during normal operation and during recovery. The authors focus on the techniques Harp uses to achieve efficient recovery.

Padma Lochan Pradhan - One of the best experts on this subject based on the ideXlab platform.

  • A Literature Survey on Risk Assessment for Unix Operating System: Risk Assessment on Unix OS
    International Journal of Advanced Pervasive and Ubiquitous Computing, 2019
    Co-Authors: Padma Lochan Pradhan
    Abstract:

    This literature survey provides basic data for the first step of risk identification and analysis toward a secure infrastructure. Demand and risk are two sides of the same coin: demand is directly proportional to risk, while preventive control is inversely proportional to it. The need for preventive control in any organization has grown because of changes in the logic, structure, and type of technology applied to services, which generate risk. As a business grows along with its technology, risk grows and spreads across its infrastructure. We therefore focus on protecting, detecting, correcting, verifying, and validating the Unix file system. This survey article proposes to secure the Unix file system by applying hardening, re-configuration, and access-control mechanisms up to the highest level of preventive control.

  • Dynamic RWX ACM Model Optimizing The Risk on Real Time Unix File System
    Indonesian Journal of Electrical Engineering and Computer Science, 2015
    Co-Authors: Prashant Kumar Patra, Padma Lochan Pradhan
    Abstract:

    Preventive control is one of the most advanced modern security controls for protecting data and services from uncertainty, since the growing importance of business and communication technologies, and the growth of external risk, are now common phenomena. System security risks direct management's attention to the IT infrastructure (the operating system). Top management must decide whether to accept expected losses or to invest in technical security mechanisms that minimize the frequency of attacks and thefts as well as uncertainty. This work contributes an optimization model that aims to determine the optimal cost to be invested in security mechanisms, deciding on the measured components of the Unix file system (UFS) attributes. The model is designed so that read, write, and execute operations are automatically protected, detected, and corrected on a real-time operating system (RTOS). We optimize against system attacks and downtime by implementing an RWX access-control-mechanism (ACM) model based on a semigroup structure, while improving the throughput of the business, resources, and technology. DOI: http://dx.doi.org/10.11591/telkomnika.v13i2.7059
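The read/write/execute access-control matrix such a model reasons about ultimately reduces to the Unix mode bits. Below is a minimal sketch of that check (the function name `acm_allows` is illustrative, not from the paper; real kernels also consult supplementary groups, ACLs, and the superuser):

```c
#include <sys/stat.h>
#include <sys/types.h>
#include <unistd.h>

/* Check one rwx triplet of the classic Unix mode bits.  `want` is a
 * bitwise OR of R_OK (4), W_OK (2), X_OK (1), which line up exactly
 * with the r, w, x bit values within each triplet. */
int acm_allows(const struct stat *st, uid_t uid, gid_t gid, int want) {
    unsigned shift = (uid == st->st_uid) ? 6       /* owner triplet */
                   : (gid == st->st_gid) ? 3       /* group triplet */
                   : 0;                            /* other triplet */
    unsigned granted = (st->st_mode >> shift) & 7;
    return (granted & (unsigned)want) == (unsigned)want;
}
```

For example, with mode 0640, the owner passes a read/write check, the group passes read-only, and everyone else is denied.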

  • Dynamic FCFS ACM Model for Risk Assessment on Real Time Unix File System
    Transportation Systems and Engineering, 2015
    Co-Authors: Prashant Kumar Patra, Padma Lochan Pradhan
    Abstract:

    Access control is a mechanism by which a system grants or revokes the right to access an object. Subject and object integrate, synchronize, communicate, and optimize through read, write, and execute operations over a Unix file system (UFS). The access-control mechanism mediates every request to system resources, applications, and data maintained by an operating system, determining whether the request should be approved, created, granted, or denied according to top-management policy. ACM management and decisions are enforced by implementing regulations established by a security policy. Management has to investigate the basic concepts behind access-control design and enforcement and point out the different security requirements that may need to be taken into consideration. The authors formulate and implement several ACMs, normalizing and optimizing them step by step, as highlighted in the proposed model for development and production purposes. This research paper contributes an optimization model whose objective is to determine the optimal cost and time and to maximize the quality of service to be invested in the security model and mechanisms, deciding on the measured components of the UFS. The model is applied to ACM utilities over a web portal server in an object-oriented, distributed environment. This ACM resolves the uncertainty, un-order, un-formal, and un-setup (U^4) problems of a web portal at the right time and place, anywhere and at any time around the globe. It is measurable and accountable for performance, fault tolerance, throughput, benchmarking, and risk assessment in any application.