A Procedure for Analyzing Unbalanced Datasets. IEEE Transactions on Software Engineering.
Software Engineering Economics.
OPM vs. UML—Experimenting with Comprehension and Construction of Web Application Models. Empirical Software Engineering.
uComplexity: Estimating Processor Design Effort. Proceedings of the 38th Annual IEEE/ACM International Symposium on Microarchitecture.
Quantifying identifier quality: an analysis of trends. Empirical Software Engineering.
The Impact of Design and Code Reviews on Software Quality: An Empirical Study Based on PSP Data. IEEE Transactions on Software Engineering.
Data analysis is a major and important activity in software engineering research. For example, productivity analyses and evaluations of new technologies almost always apply statistical methods to collected data. Software data are usually unbalanced because they are collected from actual projects rather than from controlled experiments, so their populations are biased. Fixed-effects models have often been used for such analyses even though they assume balanced datasets. This misuse makes analyses insufficient and conclusions wrong. A past study [1] proposed an iterative procedure for treating unbalanced datasets in productivity analysis. However, this procedure sometimes failed to identify partially confounded factors, and its estimated effects were not easy to interpret. This study examines mixed-effects models for productivity analysis. Mixed-effects models work the same for unbalanced datasets as for balanced ones; furthermore, their application is straightforward and their estimated effects are easy to interpret. Experiments with four datasets clearly showed the advantages of mixed-effects models.
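To illustrate the kind of analysis the abstract describes, the following is a minimal sketch of fitting a mixed-effects model to an unbalanced productivity dataset. The data, column names, and effect sizes are all hypothetical (not taken from the paper's four datasets); the sketch assumes `statsmodels` and models a fixed effect for a tool factor with a random intercept per project.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical unbalanced dataset: projects contribute unequal numbers of
# observations, as is typical for data collected from real projects.
rng = np.random.default_rng(0)
rows = []
for project, n in [("A", 12), ("B", 5), ("C", 20)]:
    effect = {"A": 0.5, "B": -0.3, "C": 0.1}[project]  # project-level deviation
    for _ in range(n):
        tool = int(rng.integers(0, 2))  # 0 = old tool, 1 = new tool
        prod = 10 + 2.0 * tool + effect + rng.normal(scale=0.5)
        rows.append({"project": project, "tool": tool, "productivity": prod})
df = pd.DataFrame(rows)

# Mixed-effects model: fixed effect for the tool, random intercept per project.
# Unlike a fixed-effects-only fit, the grouping structure absorbs the
# project-level bias even though group sizes differ.
model = smf.mixedlm("productivity ~ tool", data=df, groups=df["project"])
result = model.fit()
print(result.fe_params)  # fixed-effect estimates: intercept and tool effect
```

The estimated `tool` coefficient is directly interpretable as the productivity gain of the new tool, which is the ease of interpretation the abstract attributes to mixed-effects models.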