A particularly troublesome phenomenon, thrashing, may seriously interfere with the performance of paged memory systems, reducing computing giants (Multics, IBM System 360, and others not necessarily excepted) to computing dwarfs. The term thrashing denotes excessive overhead and severe performance degradation or collapse caused by too much paging. Thrashing inevitably turns a shortage of memory space into a surplus of processor time.
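The collapse described above can be illustrated with a toy model (a sketch, not the paper's analysis): give each process a fixed working-set size, partition memory equally among the loaded processes, and let the page-fault rate climb sharply once a process's resident set falls below its working set. All constants (`M`, `W`, `FAULT_COST`) are hypothetical values chosen only to make the effect visible.

```python
# Toy model of thrashing: CPU utilization vs. multiprogramming level.
# Assumptions (illustrative, not from the paper): M memory frames are
# split equally among n processes; each process needs W frames for its
# working set; faulting grows sharply once its share drops below W.

M = 100           # total page frames (hypothetical)
W = 25            # working-set size per process (hypothetical)
FAULT_COST = 100  # page-fault service time relative to one unit of CPU work

def utilization(n):
    """Fraction of time the CPU does useful work with n processes loaded."""
    r = M / n                    # frames per process under equal partition
    if r >= W:
        faults_per_unit = 0.001  # rare faults: the working set fits in memory
    else:
        # fault rate rises as the resident set shrinks below the working set
        faults_per_unit = (W - r) / W
    overhead = faults_per_unit * FAULT_COST
    return 1.0 / (1.0 + overhead)

for n in (1, 2, 4, 6, 8, 12):
    print(n, round(utilization(n), 3))
```

Running the loop shows the cliff the abstract describes: utilization stays high while the aggregate working set (`n * W`) fits in the `M` frames, then falls by an order of magnitude as soon as it does not, because every unit of useful work now pays for many units of paging.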