Virtual File System

The experts below are selected from a list of 22,029 experts worldwide, ranked by the ideXlab platform.

José A. B. Fortes - One of the best experts on this subject based on the ideXlab platform.

  • Seamless Access to Decentralized Storage Services in Computational Grids via a Virtual File System
    Cluster Computing, 2004
    Co-Authors: Renato J. Figueiredo, Nirav Kapadia, José A. B. Fortes
    Abstract:

    This paper describes a novel technique for establishing a virtual file system that allows data to be transferred user-transparently and on demand across the computing and storage servers of a computational grid. Its implementation is based on extensions to the Network File System (NFS) that are encapsulated in software proxies. A key differentiator between this approach and previous work is the way in which file servers are partitioned: while conventional file systems share a single (logical) server across multiple users, the virtual file system employs multiple proxy servers that are created, customized, and terminated dynamically, on a per-user basis, for the duration of a computing session. Furthermore, the solution requires no modifications to standard NFS clients and servers. The approach has been deployed in the context of the PUNCH network-computing infrastructure and is unique in its ability to integrate unmodified, interactive applications (even commercial ones) and existing computing infrastructure into a network-computing environment. Experimental results show that: (1) the virtual file system performs well in comparison to native NFS in a local-area setup, with mean overheads of 1% and 18% for single-client execution of the Andrew benchmark in two representative computing environments; (2) the average overhead for eight clients can be reduced to within 1% of native NFS with the use of concurrent proxies; and (3) wide-area performance is within 1% of local-area performance for a typical compute-intensive PUNCH application (SimpleScalar), while for the I/O-intensive Andrew benchmark the wide-area performance is 5.5 times worse than the local-area performance.
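
    A rough transport-level Python sketch of the per-user proxy idea: one forwarder instance per user session relays traffic between an unmodified client and a backend server. A real grid virtual-file-system proxy parses and rewrites NFS RPC messages, remaps credentials and file handles, and is started and torn down by middleware; the addresses and port numbers below are hypothetical.

      import socket

      def run_session_proxy(listen_addr, server_addr, bufsize=8192):
          """Relay UDP datagrams between one user's client and a backend server
          for the duration of a computing session.

          Transport-level sketch only: a real grid VFS proxy decodes the ONC
          RPC / NFS messages, remaps credentials and file handles, and enforces
          per-user policy before forwarding.
          """
          server_addr = (socket.gethostbyname(server_addr[0]), server_addr[1])
          sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
          sock.bind(listen_addr)                  # client-facing endpoint
          client_addr = None                      # learned from the first request
          while True:
              data, addr = sock.recvfrom(bufsize)
              if addr == server_addr:             # reply from server -> client
                  if client_addr is not None:
                      sock.sendto(data, client_addr)
              else:                               # request from client -> server
                  client_addr = addr
                  sock.sendto(data, server_addr)

      if __name__ == "__main__":
          # Hypothetical endpoints: middleware would start one proxy per user session.
          run_session_proxy(("0.0.0.0", 20049), ("storage.example.org", 2049))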

  • The PUNCH Virtual File System: seamless access to decentralized storage services in a computational grid
    Proceedings 10th IEEE International Symposium on High Performance Distributed Computing, 2001
    Co-Authors: Renato J. Figueiredo, N. H. Kapadia, José A. B. Fortes
    Abstract:

    Describes a virtual file system that allows data to be transferred on demand between storage and computational servers for the duration of a computing session. The solution works with unmodified applications (even commercial ones) running on standard operating systems and hardware. The virtual file system employs software proxies to broker transactions between standard NFS (Network File System) clients and servers; the proxies are dynamically configured and controlled by computational-grid middleware. The approach has been implemented and extensively exercised in the context of PUNCH (Purdue University Network Computing Hubs), an operational computing portal that has more than 1,500 users across 24 countries. The results show that the virtual file system performs well in comparison to native NFS: performance analyses show that the proxy incurs mean overheads of 1% and 18% with respect to native NFS for a single-client execution of the Andrew benchmark in two representative computing environments, and that the average overhead for eight clients can be reduced to within 1% of native NFS with concurrent proxies.

Renato J. Figueiredo - One of the best experts on this subject based on the ideXlab platform.

  • Seamless Access to Decentralized Storage Services in Computational Grids via a Virtual File System
    Cluster Computing, 2004
    Co-Authors: Renato J. Figueiredo, Nirav Kapadia, José A. B. Fortes
    Abstract:

    This paper describes a novel technique for establishing a virtual file system that allows data to be transferred user-transparently and on demand across the computing and storage servers of a computational grid. Its implementation is based on extensions to the Network File System (NFS) that are encapsulated in software proxies. A key differentiator between this approach and previous work is the way in which file servers are partitioned: while conventional file systems share a single (logical) server across multiple users, the virtual file system employs multiple proxy servers that are created, customized, and terminated dynamically, on a per-user basis, for the duration of a computing session. Furthermore, the solution requires no modifications to standard NFS clients and servers. The approach has been deployed in the context of the PUNCH network-computing infrastructure and is unique in its ability to integrate unmodified, interactive applications (even commercial ones) and existing computing infrastructure into a network-computing environment. Experimental results show that: (1) the virtual file system performs well in comparison to native NFS in a local-area setup, with mean overheads of 1% and 18% for single-client execution of the Andrew benchmark in two representative computing environments; (2) the average overhead for eight clients can be reduced to within 1% of native NFS with the use of concurrent proxies; and (3) wide-area performance is within 1% of local-area performance for a typical compute-intensive PUNCH application (SimpleScalar), while for the I/O-intensive Andrew benchmark the wide-area performance is 5.5 times worse than the local-area performance.

  • The PUNCH Virtual File System: seamless access to decentralized storage services in a computational grid
    Proceedings 10th IEEE International Symposium on High Performance Distributed Computing, 2001
    Co-Authors: Renato J. Figueiredo, N. H. Kapadia, José A. B. Fortes
    Abstract:

    Describes a virtual file system that allows data to be transferred on demand between storage and computational servers for the duration of a computing session. The solution works with unmodified applications (even commercial ones) running on standard operating systems and hardware. The virtual file system employs software proxies to broker transactions between standard NFS (Network File System) clients and servers; the proxies are dynamically configured and controlled by computational-grid middleware. The approach has been implemented and extensively exercised in the context of PUNCH (Purdue University Network Computing Hubs), an operational computing portal that has more than 1,500 users across 24 countries. The results show that the virtual file system performs well in comparison to native NFS: performance analyses show that the proxy incurs mean overheads of 1% and 18% with respect to native NFS for a single-client execution of the Andrew benchmark in two representative computing environments, and that the average overhead for eight clients can be reduced to within 1% of native NFS with concurrent proxies.

Yifeng Zhu - One of the best experts on this subject based on the ideXlab platform.

  • CEFT: A cost-effective, fault-tolerant parallel Virtual File System
    Journal of Parallel and Distributed Computing, 2006
    Co-Authors: Yifeng Zhu, Hong Jiang
    Abstract:

    The vulnerability of computer nodes to component failures is a critical issue for cluster-based file systems. This paper studies the development and deployment of mirroring in cluster-based parallel virtual file systems to provide fault tolerance, and analyzes the tradeoff between performance and reliability in the mirroring scheme. It presents the design and implementation of CEFT, a scalable RAID-10-style file system based on PVFS, and proposes four novel mirroring protocols, distinguished by whether the mirroring operations are server-driven or client-driven and whether they are synchronous or asynchronous. Comparisons of their write performance, measured on a real cluster, and of their reliability and availability, obtained through analytical modeling, show that the protocols strike different tradeoffs between reliability and performance: protocols with higher peak write performance are less reliable than those with lower peak write performance. A hybrid protocol is proposed to optimize this tradeoff.
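
    A minimal Python sketch of the client-driven mirroring idea, assuming two local paths stand in for a primary storage node and its mirror; CEFT itself ships data to PVFS-style I/O daemons over the network and also provides server-driven variants, so this only illustrates the synchronous/asynchronous acknowledgement tradeoff.

      import threading
      from pathlib import Path

      def mirrored_write(data: bytes, primary: Path, mirror: Path, synchronous: bool = True):
          """Write one block to a primary target and to its mirror.

          The two Path arguments stand in for a primary storage node and its
          mirror node; network transport and server-driven variants are omitted.
          """
          primary_thread = threading.Thread(target=primary.write_bytes, args=(data,))
          mirror_thread = threading.Thread(target=mirror.write_bytes, args=(data,))
          primary_thread.start()
          mirror_thread.start()
          primary_thread.join()        # never acknowledge before the primary copy lands
          if synchronous:
              mirror_thread.join()     # reliable: acknowledge only after both copies exist
          # Asynchronous mode returns here: higher peak write bandwidth, but a
          # window of vulnerability remains until the mirror write completes.
          return mirror_thread

      if __name__ == "__main__":
          t = mirrored_write(b"block-0", Path("/tmp/primary_block0"), Path("/tmp/mirror_block0"),
                             synchronous=False)
          t.join()                     # in a real client this happens off the critical path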

  • NPC - I/O Response Time in a Fault-Tolerant Parallel Virtual File System
    Lecture Notes in Computer Science, 2004
    Co-Authors: Dan Feng, Hong Jiang, Yifeng Zhu
    Abstract:

    A fault-tolerant parallel virtual file system is designed and implemented to provide high I/O performance and high reliability. A queuing model is used to analyze in detail the average response time when multiple clients access the system. The results show that I/O response time is a function of several operational parameters: it decreases as the I/O buffer hit rate for read requests, the write buffer size for write requests, and the number of server nodes in the parallel file system increase, while a higher I/O request arrival rate increases the response time.
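
    An illustrative calculation only, not the paper's actual queuing model: a simple M/M/1-style estimate in Python that reproduces the qualitative trends just listed, with all parameter values hypothetical.

      def mean_read_response_time(arrival_rate, hit_rate, n_servers, service_rate,
                                  cache_time=0.0001):
          """Illustrative M/M/1-style estimate of average read response time.

          Not the model of the cited paper; it only mirrors the reported trends:
          response time falls as the buffer hit rate or the number of server
          nodes grows, and rises with the request arrival rate.  Units are
          hypothetical (requests/s for rates, seconds for times).
          """
          per_server_miss_rate = arrival_rate * (1.0 - hit_rate) / n_servers
          if per_server_miss_rate >= service_rate:
              raise ValueError("per-server load exceeds service capacity")
          disk_time = 1.0 / (service_rate - per_server_miss_rate)   # M/M/1 mean response time
          return hit_rate * cache_time + (1.0 - hit_rate) * disk_time

      if __name__ == "__main__":
          for hit_rate in (0.0, 0.5, 0.9):
              print(hit_rate, mean_read_response_time(arrival_rate=400, hit_rate=hit_rate,
                                                      n_servers=4, service_rate=120))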

  • Design, implementation and performance evaluation of a cost-effective, fault-tolerant parallel Virtual File System
    Proceedings of the international workshop on Storage network architecture and parallel I/Os - SNAPI '03, 2003
    Co-Authors: Yifeng Zhu, Dan Feng, Hong Jiang, Xiao Qin, David Swanson
    Abstract:

    Fault tolerance is one of the most important issues for parallel file systems. This paper presents the design, implementation, and performance evaluation of a cost-effective, fault-tolerant parallel virtual file system (CEFT-PVFS) that provides parallel I/O service without any additional hardware by utilizing existing commodity disks on cluster nodes, and that incorporates fault tolerance in the form of disk mirroring. While mirroring is a straightforward idea, we have implemented this open-source system and conducted extensive experiments to evaluate the feasibility, efficiency, and scalability of the fault-tolerance approach on one of the largest current clusters, also investigating data consistency and recovery. Four mirroring protocols are proposed, reflecting whether the fault-tolerance operations are client-driven or server-driven, and synchronous or asynchronous. Their relative merits are assessed by comparing their write performance, measured on the real system, and their reliability and availability measures, obtained through analytical modeling. The results indicate that, in cluster environments, mirroring can improve reliability by a factor of over 40 (4000%) while sacrificing 33-58% of the peak write performance when both systems are of identical size (i.e., counting the 50% of disks used for mirroring in the mirrored system). In addition, protocols with higher peak write performance are less reliable than those with lower peak write performance, the latter achieving higher reliability and availability at the expense of some write bandwidth. A hybrid protocol is proposed to optimize this tradeoff.

  • CCGRID - Improved read performance in a cost-effective, fault-tolerant parallel Virtual File System (CEFT-PVFS)
    Proceedings of the 3rd IEEE/ACM International Symposium on Cluster Computing and the Grid (CCGrid 2003), 2003
    Co-Authors: Yifeng Zhu, Dan Feng, Hong Jiang, Xiao Qin, David Swanson
    Abstract:

    Due to the ever-widening performance gap between processors and disks, I/O operations tend to become the major performance bottleneck for data-intensive applications on modern clusters. If the existing disks on the nodes of a cluster are connected together to form a high-performance parallel storage system, the cluster's overall performance can be boosted at no additional cost. CEFT-PVFS, a RAID-10-style parallel file system that extends the original PVFS, is one such system: it divides the cluster nodes into two groups, stripes data across one group in a round-robin fashion, and then duplicates the same data to the other group to provide storage service with high performance and high reliability. Previous research has shown that mirroring improves system reliability by a factor of more than 40 while maintaining comparable write performance. This paper presents another benefit of CEFT-PVFS: the aggregate peak read performance can be improved by as much as 100% over that of the original PVFS by exploiting the increased parallelism. Additionally, when the data servers, which are typically also computational nodes in a cluster environment, are unevenly loaded by applications running on the cluster, the read performance of PVFS degrades significantly. In CEFT-PVFS, by contrast, a heavily loaded data server can be skipped and the desired data read from its mirror node; performance is therefore unaffected unless both a server node and its mirror node are heavily loaded.
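
    A toy Python sketch of the load-aware replica selection described above. The load values and threshold are hypothetical monitoring inputs, standing in for whatever policy CEFT-PVFS actually applies; the alternation between groups hints at how reading from both copies doubles the available parallelism.

      def choose_read_source(stripe_index, primary_load, mirror_load, threshold=0.8):
          """Pick which copy of a stripe to read in a mirrored (RAID-10-style) layout.

          A heavily loaded server is skipped in favour of its mirror; otherwise
          stripes alternate between the two groups to exploit both sets of disks.
          """
          if primary_load > threshold and mirror_load <= threshold:
              return "mirror"                  # skip the heavily loaded node
          if mirror_load > threshold and primary_load <= threshold:
              return "primary"
          # Both lightly (or both heavily) loaded: alternate stripes across groups.
          return "primary" if stripe_index % 2 == 0 else "mirror"

      if __name__ == "__main__":
          loads = {"primary": 0.95, "mirror": 0.20}    # hypothetical load report
          for stripe in range(4):
              print(stripe, choose_read_source(stripe, loads["primary"], loads["mirror"]))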

Daeyeon Park - One of the best experts on this subject based on the ideXlab platform.

  • S-VFS: Searchable Virtual File System for an Intelligent Ubiquitous Storage
    IEICE Transactions on Information and Systems, 2007
    Co-Authors: Yongjoo Song, Yongjin Choi, Hyunbin Lee, Daeyeon Park
    Abstract:

    With advances in ubiquitous environments, user demand for easy data lookup is growing rapidly. Not only users but also intelligent ubiquitous applications require data-lookup services in a ubiquitous computing framework. This paper proposes a backward-compatible, searchable virtual file system (S-VFS) for easy data lookup. We add search functionality to the VFS, the de facto standard abstraction layer over the file system, so that users can find a file by its attributes without remembering its full path. S-VFS maintains the attributes and the indexing structures in an ordinary file per partition; it processes queries and returns the results in the form of a virtual directory. Although S-VFS modifies the VFS, it uses legacy file systems without any modification, and since it is fully backward compatible, users can still browse hierarchically with legacy path names. We implement S-VFS in Linux kernel 2.6.7-21. Experiments with randomly generated queries demonstrate outstanding lookup performance with a small indexing overhead.
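
    A user-space Python sketch of the attribute-lookup idea only: S-VFS itself answers such queries inside the kernel VFS layer and stores attributes and indexes in a per-partition file, while this toy class simply materializes matching paths the way S-VFS presents results as a virtual directory. The class name, tags, and paths are illustrative.

      from collections import defaultdict

      class AttributeIndex:
          """Toy user-space attribute index in the spirit of S-VFS."""

          def __init__(self):
              self._index = defaultdict(lambda: defaultdict(set))

          def tag(self, path, **attributes):
              for name, value in attributes.items():
                  self._index[name][value].add(path)

          def query(self, **attributes):
              """Return paths matching every attribute=value pair given."""
              matches = None
              for name, value in attributes.items():
                  paths = self._index[name].get(value, set())
                  matches = paths if matches is None else matches & paths
              return sorted(matches or [])

      if __name__ == "__main__":
          idx = AttributeIndex()
          idx.tag("/home/alice/thesis.tex", filetype="latex", project="thesis")
          idx.tag("/home/alice/fig1.eps", filetype="figure", project="thesis")
          # A virtual directory such as /query/project=thesis would list:
          print(idx.query(project="thesis"))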

  • SAC - Providing context-awareness to Virtual File System
    Proceedings of the 2007 ACM symposium on Applied computing - SAC '07, 2007
    Co-Authors: Yongjoo Song, Daeyeon Park
    Abstract:

    In ubiquitous environments, adaptive applications require context-aware data access. This paper proposes a backward-compatible, context-aware virtual file system (CaVFS). Users can find a file by its context without remembering the full path. Since CaVFS is fully backward compatible, users can still browse hierarchically with legacy path names, and legacy applications can access files in a context-aware way. We implement CaVFS in Linux kernel 2.6.7-21. The experiments show that the overhead of maintaining context is reasonably small.
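
    A small user-space illustration of context-aware file selection, assuming the context comes from an environment variable; the CaVFS kernel mechanism and its real context sources are not reproduced here, and all paths are hypothetical.

      import os

      def current_context():
          # Hypothetical context source: CaVFS would obtain this from the
          # ubiquitous-computing framework, not from an environment variable.
          return {"location": os.environ.get("LOCATION", "office")}

      def context_aware_open(variants, mode="r"):
          """Open the file variant that matches the current context.

          `variants` maps a context value to a concrete legacy path, so an
          unmodified application could keep issuing an ordinary open() while
          the selection happens underneath.
          """
          location = current_context()["location"]
          return open(variants.get(location, variants["default"]), mode)

      if __name__ == "__main__":
          profiles = {
              "office": "/tmp/profile-office.conf",
              "home": "/tmp/profile-home.conf",
              "default": "/tmp/profile-default.conf",
          }
          for path in profiles.values():            # create demo files
              with open(path, "w") as f:
                  f.write("settings stored in " + path + "\n")
          with context_aware_open(profiles) as f:
              print(f.read().strip())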

  • Searchable Virtual File System: Toward an intelligent ubiquitous storage
    Lecture Notes in Computer Science, 2006
    Co-Authors: Yongjoo Song, Yongjin Choi, Hyunbin Lee, Donggook Kim, Daeyeon Park
    Abstract:

    As computing moves toward ubiquitous environments, demand for easy data lookup is growing rapidly. In an ocean of exploding data, users need tools to find the right data, and intelligent ubiquitous applications make data-lookup services essential to the ubiquitous computing framework. This paper proposes a new, searchable, backward-compatible virtual file system (S-VFS) for easy file lookup. We add lookup functionality to the VFS, the de facto standard layer in the file system, so users no longer need to remember a full path to find a file; instead, each file carries attributes that are used at lookup time. S-VFS maintains the attributes in an ordinary file per partition, while the indexing structures for the attributes are placed on a separate partition. Using the attribute files and the indexing structures, S-VFS processes user queries and returns the results in the form of a directory. Despite this modification of the VFS, S-VFS uses legacy file systems without any modification, and since it is fully backward compatible, users can still browse hierarchically with legacy path names.

Yuichi Tsujita - One of the best experts on this subject based on the ideXlab platform.

  • Remote MPI-I/O on parallel Virtual File System using a circular buffer for high throughput
    International Journal of Computers and Applications, 2007
    Co-Authors: Yuichi Tsujita
    Abstract:

    A flexible intermediate library named Stampi realizes seamless remote MPI-I/O operations across interconnected computers with the help of an MPI-I/O process that is invoked on the remote computer. The MPI-I/O process carries out I/O operations using the vendor's MPI-I/O library in response to I/O requests from user processes; if no vendor library is available, UNIX I/O functions are used instead. A parallel virtual file system (PVFS) was supported in the remote MPI-I/O mechanism for data-intensive applications. Although this mechanism provided parallel I/O operations on a PVFS file system, its performance with UNIX I/O functions was low. To obtain higher throughput in this case, a circular buffer mechanism was adopted in the MPI-I/O process to cache part or all of the data. With the buffer configuration optimized, remote MPI-I/O operations with UNIX I/O functions outperformed those using direct calls to PVFS I/O functions on a PVFS file system.
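
    A minimal Python sketch of a circular (ring) buffer of the kind described, with the capacity and drain sizes chosen arbitrarily; Stampi's actual buffer lives inside its MPI-I/O process and its tuned configuration is not reproduced here.

      class RingBuffer:
          """Fixed-capacity circular byte buffer (single producer, single consumer)."""

          def __init__(self, capacity: int):
              self._buf = bytearray(capacity)
              self._capacity = capacity
              self._head = 0        # next write position
              self._tail = 0        # next read position
              self._size = 0

          def put(self, data: bytes) -> int:
              """Copy as much of `data` as fits; return the number of bytes accepted."""
              n = min(len(data), self._capacity - self._size)
              for i in range(n):
                  self._buf[(self._head + i) % self._capacity] = data[i]
              self._head = (self._head + n) % self._capacity
              self._size += n
              return n

          def get(self, nbytes: int) -> bytes:
              """Remove and return up to `nbytes` of the oldest buffered data."""
              n = min(nbytes, self._size)
              out = bytes(self._buf[(self._tail + i) % self._capacity] for i in range(n))
              self._tail = (self._tail + n) % self._capacity
              self._size -= n
              return out

      if __name__ == "__main__":
          ring = RingBuffer(16)
          ring.put(b"hello ")          # small incoming chunks are staged here ...
          ring.put(b"world")
          print(ring.get(32))          # ... and drained in one larger piece: b'hello world'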

  • Realizing Effective MPI-I/O to a Remote Computer Using a Parallel Virtual File System
    The IEICE transactions on information and systems, 2006
    Co-Authors: Yuichi Tsujita
    Abstract:

    This paper presents a newly implemented remote MPI-I/O mechanism that uses a parallel virtual file system (PVFS) to achieve high-performance, data-intensive I/O operations among computers. MPI-I/O extensions were realized in a flexible intermediate library named Stampi to support seamless MPI-I/O operations among computers. Thanks to the library's flexible underlying communication mechanism, MPI-I/O operations are available both inside a computer and among computers through the same MPI-I/O APIs, without awareness of the underlying communication and I/O mechanisms. PVFS was developed to cope with data-intensive applications and provides a large-scale parallel file system on a Linux PC cluster; however, it is available only inside the cluster and is not accessible from a remote computer. To exploit the advantages of the PVFS file system in remote MPI-I/O operations using Stampi, PVFS I/O functions have been implemented in the remote MPI-I/O mechanism. Performance measurements of primitive MPI-I/O functions show that sufficient performance is achieved, confirming the effectiveness of the implementation.
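
    The user-level code here is just standard MPI-I/O; the claim above is that the same calls keep working when the file lives on a remote PVFS volume reached through Stampi. A minimal sketch using mpi4py, with a hypothetical path and per-rank layout (Stampi/PVFS routing is not shown).

      from mpi4py import MPI

      def write_block_per_rank(path: str, block: bytes):
          """Each MPI rank writes its block at a disjoint offset with MPI-I/O."""
          comm = MPI.COMM_WORLD
          fh = MPI.File.Open(comm, path, MPI.MODE_CREATE | MPI.MODE_WRONLY)
          offset = comm.Get_rank() * len(block)     # disjoint region per rank
          fh.Write_at(offset, bytearray(block))     # independent MPI-I/O write
          fh.Close()

      if __name__ == "__main__":
          write_block_per_rank("/tmp/stampi_demo.dat", b"A" * 4096)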

  • Optimization of nonblocking MPI-I/O to a remote parallel Virtual File System using a circular buffer
    High Performance Computing and Communications, 2005
    Co-Authors: Yuichi Tsujita
    Abstract:

    Parallel computation applications output intermediate data periodically, and the outputs are typically moved to a remote computer for visualization. A flexible intermediate library named Stampi realizes seamless MPI-I/O operations both inside a computer and among computers; MPI-I/O operations to a remote computer are carried out by MPI-I/O processes invoked on that computer. To support data-intensive I/O, a parallel virtual file system (PVFS) was added to the MPI-I/O mechanism, so MPI-I/O operations to a PVFS file system on a remote computer are available through the seamless interfaces of the Stampi library. Among the many kinds of I/O functions, nonblocking MPI-I/O functions allow computation to overlap with I/O operations and can thereby minimize visible I/O times. Due to architectural constraints and a slow network, however, their visible I/O times grew with the number of user processes and the message data size. To minimize these times, a circular buffer system has been implemented in the mechanism; with its help, the visible I/O times are minimized effectively.
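
    A minimal mpi4py sketch of the nonblocking overlap pattern that the circular buffer is meant to help; the compute step, path, and block layout are placeholders, and the Stampi/PVFS remote routing is not shown.

      from mpi4py import MPI

      def overlapped_checkpoint(path: str, block: bytes, compute_step):
          """Overlap one computation step with a nonblocking MPI-I/O write."""
          comm = MPI.COMM_WORLD
          fh = MPI.File.Open(comm, path, MPI.MODE_CREATE | MPI.MODE_WRONLY)
          buf = bytearray(block)                    # keep the buffer alive until Wait()
          request = fh.Iwrite_at(comm.Get_rank() * len(buf), buf)
          result = compute_step()                   # useful work while the I/O is in flight
          request.Wait()                            # only the remainder is visible I/O time
          fh.Close()
          return result

      if __name__ == "__main__":
          print(overlapped_checkpoint("/tmp/ckpt.dat", b"B" * 4096,
                                      lambda: sum(range(1_000_000))))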

  • MPI-I/O with a Shared File Pointer Using a Parallel Virtual File System in Remote I/O Operations
    The International Series in Engineering and Computer Science, 1
    Co-Authors: Yuichi Tsujita
    Abstract:

    A flexible intermediate library named Stampi realizes seamless MPI operations in a heterogeneous computing environment. With this library, dynamic process creation and MPI-I/O are available for both local and remote I/O operations. To realize high-performance distributed I/O, a parallel virtual file system (PVFS) has been integrated into the MPI-I/O mechanism of Stampi. MPI-I/O functions with a shared file pointer have been evaluated, and sufficient performance has been achieved.
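
    A minimal mpi4py sketch of a shared-file-pointer write: each rank's record lands at the position of the communicator-wide shared pointer, so ranks need not coordinate offsets themselves. The path and record contents are hypothetical, and the Stampi/PVFS remote routing evaluated in the paper is not shown.

      from mpi4py import MPI

      def append_record(path: str, record: bytes):
          """Write a record at the position of the communicator-wide shared file pointer."""
          comm = MPI.COMM_WORLD
          fh = MPI.File.Open(comm, path, MPI.MODE_CREATE | MPI.MODE_WRONLY)
          fh.Write_shared(bytearray(record))        # advances the shared pointer atomically
          fh.Close()

      if __name__ == "__main__":
          rank = MPI.COMM_WORLD.Get_rank()
          append_record("/tmp/shared_ptr_demo.dat", ("rank %d was here\n" % rank).encode())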