The Results of: Profiling Large-Scale Lazy Functional Programs
IFL '96 Selected Papers from the 8th International Workshop on Implementation of Functional Languages
The LOLITA natural language processor is an example of one of the ever-increasing number of large-scale systems written entirely in a functional programming language. The system consists of over 47,000 lines of Haskell code (excluding comments) and is able to perform a wide range of tasks such as semantic and pragmatic analysis of text, information extraction and query analysis. The efficiency of such a system is critical: interactive tasks (such as query analysis) must not inconvenience the user with long pauses, and batch-mode tasks (such as information extraction) must achieve an adequate throughput.

For the past three years the profiling tools supplied with GHC and HBC have been used to analyse and reason about the complexity of the LOLITA system. Although the results have been good, experience has shown that in a large system the profiling life-cycle is often too long to make detailed analysis possible, and the results are often misleading. In response to these problems a profiler has been developed which records the complete set of program costs in so-called cost-centre stacks. These costs are then analysed with a post-processing tool that allows the developer to explore the costs of the program in ways that are either not possible with existing tools or that would require repeated compilation and execution of the program.

The modifications to the Glasgow Haskell Compiler, based on a detailed cost semantics and an efficient implementation scheme, are discussed. The results of using this new profiling tool in the analysis of a number of Haskell programs are presented, together with the overheads of the scheme and the benefits of the new system. An outline is also given of how this approach can be modified to assist with the tracing and debugging of programs.
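To make the distinction concrete, here is a minimal sketch in present-day GHC syntax, which adopted the cost-centre stack approach described here; the function names shared, queryAnalysis and infoExtraction are invented for illustration and do not come from the paper. Two callers use one helper: a flat cost-centre profile charges all of the helper's costs to the single "shared" cost centre, whereas a cost-centre stack profile records the full chain of enclosing centres, so the helper's costs are attributed to each calling context separately.

    module Main (main) where

    -- A helper used by two different callers. Under flat cost-centre
    -- profiling its costs are merged into one "shared" entry; with
    -- cost-centre stacks they are attributed to each calling context.
    -- (Illustrative sketch only; names are invented.)
    shared :: Int -> Int
    shared n = {-# SCC "shared" #-} sum [1 .. n]

    queryAnalysis :: Int
    queryAnalysis = {-# SCC "queryAnalysis" #-} shared 1000000

    infoExtraction :: Int
    infoExtraction = {-# SCC "infoExtraction" #-} shared 2000000

    main :: IO ()
    main = print (queryAnalysis + infoExtraction)

With a profiling build (for example, ghc -prof Main.hs in a modern GHC) and a run with ./Main +RTS -p, the generated .prof file attributes costs to stacks of enclosing cost centres rather than to a single flat centre; this richer record is what the post-processing tool described above can explore without recompiling or re-running the program.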