Demand for fast storage is rapidly increasing in environments such as cloud platforms, social network services, and desktop systems. HDD-based storage cannot satisfy this demand, so a variety of high-performance storage devices providing lower I/O latency and higher I/O bandwidth have been actively developed. Merely adopting these fast devices in a storage system brings some benefit, but it cannot fully exploit their performance; proper software optimizations are needed. In this work, we focus on the granularity of I/O requests from the application layer down to the block layer. We found that performing I/O at page granularity causes severe performance degradation for small random I/O patterns, which are often observed in workloads such as mail servers and database servers, because data that the application never requested is also transferred. We therefore propose a new file system design with two optimizations: 1) an extended I/O interface that preserves the user-requested data size across all layers of the I/O subsystem, and 2) a sub-page mechanism that effectively minimizes the transfer of non-requested data. We have implemented our approach in a Linux file system, and experimental results show that our solution achieves performance gains of 1.6x to 6.3x.