A code-based analytical approach for using separate device coprocessors in computing systems
ARCS'11: Proceedings of the 24th International Conference on Architecture of Computing Systems
Distributed storage systems often have to guarantee data availability despite failures or temporary downtimes of storage nodes. For this purpose, a deletion-tolerant code is applied that allows missing parts of a codeword to be reconstructed, i.e. a distinct number of failures to be tolerated. The Reed/Solomon (R/S) code is the most general deletion-tolerant code and can be adapted to any required number of tolerable failures. In terms of information overhead, R/S is optimal, but it consumes significantly more computational power than parity-based codes. Reconfigurable hardware with specialized arithmetic units can carry out the finite-field operations of R/S coding, so that the higher computational effort is compensated by fast, parallel operations. We present architectures for application-specific acceleration by FPGAs. In this paper, strategies for efficient communication with the accelerating FPGA are developed, and a performance comparison between a pure software-based solution and the accelerated system is provided.