Free Disk Space

The experts below are selected from a list of 156 experts worldwide ranked by the ideXlab platform.

Peter A Franaszek - One of the best experts on this subject based on the ideXlab platform.

  • Analysis of reorganization overhead in log-structured file systems
    International Conference on Data Engineering, 1994
    Co-Authors: John T Robinson, Peter A Franaszek
    Abstract:

    In a log-structured file system (LFS), each block written to disk generally invalidates another disk block, yielding one block of free space. Over time, free disk space becomes highly fragmented, and a high level of dynamic reorganization may be required to coalesce free blocks into physically contiguous areas that can subsequently be used for logs. By consuming available disk bandwidth, this reorganization can degrade system performance. In a segmented-disk LFS organization, the copy-and-compact reorganization method reads entire segments and then writes back all valid blocks. Other methods, suggested by earlier work on reducing storage fragmentation for non-LFS disks, may access far fewer blocks (at the cost of increased CPU time). An analytic model is used to evaluate the effect of dynamic reorganization on available disk bandwidth, as a function of the read/write ratio, storage utilization, and the degree of data movement required by dynamic reorganization for steady-state operation. It is shown that decreasing reorganization overhead can have dramatic effects on available disk bandwidth.
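
    To make the bandwidth argument concrete, here is a minimal back-of-the-envelope sketch (a toy model of my own, not the analytic model from the paper): it assumes copy-and-compact cleaning that reads a whole segment and rewrites its live fraction u, and asks what fraction of raw disk bandwidth remains for user I/O as a function of u and the write ratio w.

```python
# Toy illustration (not the paper's model) of how cleaning overhead eats
# into disk bandwidth in a segmented LFS that uses copy-and-compact.

def cleaning_cost_per_new_block(u: float) -> float:
    """Blocks transferred by the cleaner for every block of new data written."""
    if not 0.0 <= u < 1.0:
        raise ValueError("utilization must be in [0, 1)")
    # Read one segment block and rewrite the live fraction u of it to free (1 - u).
    return (1.0 + u) / (1.0 - u)

def available_bandwidth_fraction(w: float, u: float) -> float:
    """Fraction of raw disk bandwidth left over for user reads and writes."""
    overhead = w * cleaning_cost_per_new_block(u)  # extra transfers per user I/O
    return 1.0 / (1.0 + overhead)

if __name__ == "__main__":
    for u in (0.2, 0.5, 0.8, 0.9):
        frac = available_bandwidth_fraction(w=0.5, u=u)
        print(f"segment utilization {u:.1f} -> usable bandwidth ~ {frac:.0%}")
```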

Kang G Shin - One of the best experts on this subject based on the ideXlab platform.

  • Use of Free Space to Enhance the Performance, Energy Efficiency, and Fault-Tolerance of a File System
    2009
    Co-Authors: Kang G Shin
    Abstract:

    This project has made several significant contributions to enhancing the energy efficiency, performance, and fault-tolerance of computer storage systems. First, we developed Power-Aware Virtual Memory (PAVM), which finds and aggregates unmapped and unused memory pages. By powering down unused memory ranks, we can save a significant amount of the energy dissipated by main memory with virtually no performance degradation. Second, we developed the Free Space File System (FS2), based on the popular Ext2 file system, which replicates temporally-related data blocks and uses free disk space to place these blocks closer to one another on the disk, allowing the disk heads to move less. This results in higher performance, lower energy consumption, and higher fault-tolerance at almost zero cost. Finally, we characterized disk failure patterns and used this characterization to place replicas of critical information on the disk so as to protect them from common disk failures.
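
    As a rough illustration of the PAVM idea described above, the sketch below packs a process's mapped pages onto as few memory ranks as possible so the remaining ranks can be powered down. The rank size and names are hypothetical; the real mechanism lives inside the operating system's memory manager.

```python
from collections import defaultdict

PAGES_PER_RANK = 4          # hypothetical tiny rank size, for illustration only

def pack_pages(mapped_pages):
    """First-fit assignment of mapped pages to ranks so active ranks are minimized."""
    placement, used, rank = {}, defaultdict(int), 0
    for page in mapped_pages:
        if used[rank] == PAGES_PER_RANK:   # current rank is full, open the next one
            rank += 1
        placement[page] = rank
        used[rank] += 1
    return placement, rank + 1             # page->rank map and number of active ranks

if __name__ == "__main__":
    pages = [f"page{i}" for i in range(10)]
    _, active = pack_pages(pages)
    total_ranks = 8                         # hypothetical machine with 8 ranks
    print(f"active ranks: {active}, ranks that can be powered down: {total_ranks - active}")
```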

  • Exploiting unused storage resources to enhance systems' energy efficiency, performance, and fault-tolerance
    2006
    Co-Authors: Kang G Shin, Hai Huang
    Abstract:

    The invention of better fabrication materials and processes for solid-state devices has led to unprecedented technological breakthroughs in computer hardware. Today's system software, however, often cannot take full advantage of the hardware's rapidly improving capabilities, resulting in idle resources, e.g., unoccupied memory and disk space. To make hardware operate more efficiently and to reduce the amount of idle resources, this thesis proposes several techniques that can harness such resources for the benefit of users. Although there are many different types of hardware resources, this thesis focuses on reclaiming idle resources in the storage hierarchy. First, we implemented a pure software technique to reduce the power dissipation of main memory. By aggregating unmapped and unused memory pages and powering down unused memory ranks, a significant amount of energy can be saved with little or no performance degradation. Next, we explored several architectural-level solutions that can reduce energy more aggressively, but at the expense of performance. We also developed techniques that exploit unused disk capacity to improve disks' performance and energy efficiency. These techniques are realized in our implementation of the Free Space File System (FS2). Unlike traditional file systems, where extra disk space is simply left unused, FS2 actively uses it to hold replicas of temporally-related data blocks that were poorly placed by the underlying file system. Using contiguous regions of free disk space to place related data blocks closer to one another enables disk heads to work more efficiently. Free disk space may also be used to enhance the fault-tolerance of disks. The placement of replicas is shown to be critical to both the fault-tolerance and the performance of file systems that make use of replicas. However, without a thorough understanding of how disks fail and how data become corrupted when failures occur, good data placement strategies are difficult to devise. We studied a large number of failed disks and analyzed their failure characteristics. This characterization study will help in designing more fault-tolerant file systems that can take advantage of today's large-capacity hard drives.
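
    The following sketch illustrates the kind of free-space search implied above: given a block-allocation bitmap, find the contiguous run of free blocks nearest to a target location that is long enough to hold a group of temporally-related replicas. It is an illustration under my own simplifying assumptions, not code from FS2.

```python
def nearest_free_run(bitmap, target, run_len):
    """bitmap[i] is True if block i is free; return the start block of the
    closest free run of at least run_len blocks, or None if none exists."""
    best, best_dist = None, None
    i, n = 0, len(bitmap)
    while i < n:
        if bitmap[i]:
            j = i
            while j < n and bitmap[j]:          # extend the free run
                j += 1
            if j - i >= run_len:
                # Slide the run as close to the target as the free region allows.
                start = min(max(target - run_len // 2, i), j - run_len)
                dist = abs(start - target)
                if best is None or dist < best_dist:
                    best, best_dist = start, dist
            i = j
        else:
            i += 1
    return best

# Example: place a 3-block replica group as close to block 10 as possible.
bitmap = [b == "1" for b in "0011100000111110"]
print(nearest_free_run(bitmap, target=10, run_len=3))   # -> 10
```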

  • FS2: dynamic data replication in free disk space for improving disk performance and energy consumption
    Symposium on Operating Systems Principles, 2005
    Co-Authors: Hai Huang, Wanda Hung, Kang G Shin
    Abstract:

    Disk performance is increasingly limited by head-positioning latencies, i.e., seek time and rotational delay. To reduce these latencies, we propose a novel technique that dynamically places copies of data in a file system's free blocks according to the disk access patterns observed at runtime. As one or more replicas can now be accessed in addition to the original data block, choosing the "nearest" replica that provides the fastest access can significantly improve performance for disk I/O operations. We implemented and evaluated a prototype based on the popular Ext2 file system. In our prototype, since the file system layout is modified only by using free/unused disk space (hence the name Free Space File System, or FS2), users are completely oblivious to how the layout is modified in the background; they will only notice performance improvements over time. For a wide range of workloads running under Linux, FS2 is shown to reduce disk access time by 41--68% (as a result of a 37--78% shorter seek time and a 31--68% shorter rotational delay), yielding a 16--34% overall user-perceived performance improvement. The reduced disk access time also leads to 40--71% energy savings per access.
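
    A minimal sketch of the "choose the nearest copy" idea, assuming a crude positioning-cost model of my own (absolute block distance standing in for seek plus rotation); FS2's actual in-kernel replica selection is more involved.

```python
def positioning_cost(head_pos: int, block: int) -> int:
    """Crude stand-in for seek + rotational cost: distance in block numbers."""
    return abs(block - head_pos)

def pick_copy(head_pos: int, original: int, replicas: list[int]) -> int:
    """Return whichever copy (original or a replica) is cheapest to reach."""
    candidates = [original] + list(replicas)
    return min(candidates, key=lambda b: positioning_cost(head_pos, b))

# The head is near block 5,000; the original lives at 120,000, a replica at 5,100.
print(pick_copy(head_pos=5_000, original=120_000, replicas=[5_100]))   # -> 5100
```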

H. Mori - One of the best experts on this subject based on the ideXlab platform.

  • NBiS - Case Study on the Recovery of a Virtual Large-Scale Disk
    Network-Based Information Systems, 2008
    Co-Authors: E. Chai, Minoru Uehara, H. Mori
    Abstract:

    With the recent flood of data, storage has become a major issue. Although commodity HDDs are now very cheap, appliance storage systems are still relatively expensive. We therefore developed the VLSD (Virtual Large-Scale Disk) toolkit for constructing large-scale storage using only cheap commodity hardware and software, and built a prototype large-scale storage system that uses VLSD to collect free disk space on PCs. However, the reliability of this storage depends on the MTTR (Mean Time To Repair). In this paper, we evaluate the MTTR of our prototype and then discuss its efficiency.
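
    Since the evaluation hinges on MTTR, a quick back-of-the-envelope sketch of the standard steady-state availability formula, MTTF / (MTTF + MTTR), shows why repair time matters; the MTTF and MTTR figures below are made up for illustration and are not measurements from the paper.

```python
def availability(mttf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = MTTF / (MTTF + MTTR)."""
    return mttf_hours / (mttf_hours + mttr_hours)

for mttr in (1, 12, 48):                   # faster repair -> higher availability
    a = availability(mttf_hours=10_000, mttr_hours=mttr)
    print(f"MTTR = {mttr:>2} h  ->  availability = {a:.4%}")
```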

  • A Case Study on Large-Scale Disk System Concatenating Free Space
    Second International Conference on Innovative Computing, Information and Control (ICICIC 2007), 2007
    Co-Authors: E. Chai, Minoru Uehara, H. Mori
    Abstract:

    The cost of a conventional centralized file server, suitable for use in a learning environment such as a school, is high. However, PC labs normally contain several hundred PCs, each with plenty of free disk space. The total unused capacity of their HDDs is almost equivalent to the capacity of a file server. In this paper, we propose constructing a virtual large-scale storage system by concatenating free disk space, and we develop the VLSD (Virtual Large-Scale Disk) toolkit to aid the construction of such a system. The toolkit is implemented in Java and consists of RAID (redundant arrays of inexpensive/independent disks) and NBD (network block device) components. In this paper, we describe how to use VLSD to implement a large-scale storage system.
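
    A minimal sketch of the concatenation idea, assuming a JBOD-style linear mapping from a logical block number to a (device, offset) pair. The VLSD toolkit itself is written in Java and layers RAID and NBD on top, so this Python fragment only illustrates the address mapping, not the toolkit's API.

```python
import bisect

class ConcatDisk:
    """Concatenate donated free-space chunks into one linear logical disk."""

    def __init__(self, device_sizes):
        """device_sizes: number of blocks contributed by each client disk."""
        self.sizes = list(device_sizes)
        self.starts = []                        # logical start block of each device
        total = 0
        for size in self.sizes:
            self.starts.append(total)
            total += size
        self.total_blocks = total

    def locate(self, logical_block):
        """Map a logical block number to (device_index, block_within_device)."""
        if not 0 <= logical_block < self.total_blocks:
            raise IndexError("block out of range")
        dev = bisect.bisect_right(self.starts, logical_block) - 1
        return dev, logical_block - self.starts[dev]

vdisk = ConcatDisk([1000, 2500, 1500])          # three donated free-space chunks
print(vdisk.locate(0))      # (0, 0)
print(vdisk.locate(1700))   # (1, 700)
print(vdisk.locate(4999))   # (2, 1499)
```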

  • NBiS - Virtual Large-Scale Disk System for PC-Room
    Network-Based Information Systems, 1
    Co-Authors: E. Chai, Minoru Uehara, H. Mori, Nobuyoshi Sato
    Abstract:

    There are many PCs in a PC room; for example, there are 500 PCs at our university. Each PC has an HDD, which is typically not full. If disk utilization is 50% and each PC has a 240 GB HDD, there is 60 TB (500 × 120 GB) of free disk space. The total unused capacity of these HDDs is nearly equal to the capacity of a file server. Institutions, however, tend to buy expensive appliance file servers. In this paper, we propose an efficient large-scale storage system that combines clients' free disk space. We have developed a Java-based toolkit, which we call VLSD (Virtual Large-Scale Disk), for constructing a virtual large-scale storage system. The toolkit consists of RAID (Redundant Arrays of Inexpensive/Independent Disks) and NBD (Network Block Device) components. Using VLSD, we show how to construct a large disk from multiple free spaces distributed over networks. VLSD supports typical RAID levels and other utility classes, which can be combined freely with one another.
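
    The capacity claim above, and what survives once redundancy is layered on top, can be checked with a few lines of arithmetic; the mirroring and single-parity layouts below are illustrative choices of mine, not the paper's configuration.

```python
pcs = 500
free_gb_per_pc = 120                      # 50% of a 240 GB HDD left unused

raw_tb = pcs * free_gb_per_pc / 1000      # ~60 TB of raw donated capacity
mirrored_tb = raw_tb / 2                  # RAID-1 style mirroring
group = 10                                # hypothetical 10-node parity groups
parity_tb = raw_tb * (group - 1) / group  # RAID-5 style single parity per group

print(f"raw: {raw_tb:.0f} TB, mirrored: {mirrored_tb:.0f} TB, single parity: {parity_tb:.0f} TB")
```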

Laurence R Lines - One of the best experts on this subject based on the ideXlab platform.

  • Overcoming computational cost problems of reverse-time migration
    2010
    Co-Authors: Zaiming Jiang, Kayla Bonham, John C Bancroft, Laurence R Lines
    Abstract:

    Prestack reverse-time migration is computationally expensive: program run times are long in terms of the total number of CPU cycles, and the computation requires large amounts of free hard-disk space. To accelerate the computation, we use parallel processing with Intel Threading Building Blocks (TBB) on multi-core computers, for both the forward-time modelling and the reverse-time migration phases. To solve the problem of limited free disk space, we use a technique that may seem counter-intuitive: the forward modelling phase is done twice instead of once.
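
    To see why recomputing the forward wavefield can beat storing it, here is a rough sketch of the storage-versus-recomputation trade-off under assumed (hypothetical) grid and recording parameters; the point is only the order of magnitude, not the authors' actual figures.

```python
# Hypothetical grid and recording parameters, for illustration only.
nx, nz = 2000, 1000          # spatial grid points
nt = 10_000                  # time steps
bytes_per_sample = 4         # 32-bit floats

# Storing every forward-modelled time step of one shot on disk:
full_wavefield_gb = nx * nz * nt * bytes_per_sample / 1e9
print(f"storing every time step of one shot: ~{full_wavefield_gb:.0f} GB")

# Re-running the forward modelling instead keeps only a few time slices in memory:
in_memory_gb = nx * nz * 3 * bytes_per_sample / 1e9
print(f"recomputing on the fly keeps only ~{in_memory_gb:.3f} GB resident")
```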

  • Reverse-time migration imaging with/without multiples
    2010
    Co-Authors: Zaiming Jiang, John C Bancroft, Laurence R Lines
    Abstract:

    One of the challenges of reverse-time migration based on the finite-difference method is its computational cost, in terms of free disk space and/or computation time. This report discusses the principle of a new imaging condition, referred to as the ‘first-arrival imaging condition’, and shows that it has a lower computational cost than the widely used source-normalized crosscorrelation imaging condition for reverse-time migration. In principle, with crosscorrelation imaging conditions, all the multiples in both the forward-modelling and reverse-time migration wavefields are involved; with the first-arrival imaging condition, on the other hand, the multiples in the wavefields are not included.
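
    For reference, a small sketch of the source-normalized crosscorrelation imaging condition that the first-arrival condition is compared against; the [time, x, z] array layout and the stabilization constant are my own assumptions, not the authors' implementation.

```python
import numpy as np

def crosscorrelation_image(source_wf, receiver_wf, eps=1e-12):
    """I(x,z) = sum_t S(t,x,z) * R(t,x,z) / (sum_t S(t,x,z)^2 + eps)."""
    numerator = np.sum(source_wf * receiver_wf, axis=0)
    denominator = np.sum(source_wf ** 2, axis=0) + eps
    return numerator / denominator

# Tiny example with random wavefields on a 4x3 grid over 50 time steps.
rng = np.random.default_rng(0)
S = rng.standard_normal((50, 4, 3))
R = rng.standard_normal((50, 4, 3))
print(crosscorrelation_image(S, R).shape)   # (4, 3)
```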

John T Robinson - One of the best experts on this subject based on the ideXlab platform.

  • Analysis of reorganization overhead in log-structured file systems
    International Conference on Data Engineering, 1994
    Co-Authors: John T Robinson, Peter A Franaszek
    Abstract:

    In a log-structured file system (LFS), each block written to disk generally invalidates another disk block, yielding one block of free space. Over time, free disk space becomes highly fragmented, and a high level of dynamic reorganization may be required to coalesce free blocks into physically contiguous areas that can subsequently be used for logs. By consuming available disk bandwidth, this reorganization can degrade system performance. In a segmented-disk LFS organization, the copy-and-compact reorganization method reads entire segments and then writes back all valid blocks. Other methods, suggested by earlier work on reducing storage fragmentation for non-LFS disks, may access far fewer blocks (at the cost of increased CPU time). An analytic model is used to evaluate the effect of dynamic reorganization on available disk bandwidth, as a function of the read/write ratio, storage utilization, and the degree of data movement required by dynamic reorganization for steady-state operation. It is shown that decreasing reorganization overhead can have dramatic effects on available disk bandwidth.
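
    As a companion to the bandwidth model sketched earlier under Peter A Franaszek, the fragment below illustrates the copy-and-compact step itself: read whole segments, keep only the live blocks, and write them back contiguously so the rest of each segment becomes free space. The data structures are simplified stand-ins, not an LFS implementation.

```python
def copy_and_compact(segments, live):
    """segments: non-empty list of equal-size lists of block ids;
    live: set of still-valid block ids.
    Returns (compacted_segments, number_of_blocks_freed)."""
    seg_size = len(segments[0])
    survivors = [b for seg in segments for b in seg if b in live]
    compacted = [survivors[i:i + seg_size] for i in range(0, len(survivors), seg_size)]
    freed = sum(len(seg) for seg in segments) - len(survivors)
    return compacted, freed

# Two 4-block segments, only blocks 2, 3 and 7 still valid.
segs = [[1, 2, 3, 4], [5, 6, 7, 8]]
print(copy_and_compact(segs, live={2, 3, 7}))   # ([[2, 3, 7]], 5)
```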
