Final-year students in the Bachelor of Computing complete an industry project in which they work in teams to build an IT system for an external client. Grading projects in these circumstances is difficult because of the wide variability of projects and clients. One way to reduce some of this variation is to perform a function point count on each project. Because of the large number of projects and their changing scope, a method of automatically counting function points has been devised that uses the output of the design tools the students already use; principally, it counts use cases and database tables. The method has been successful in that no statistically significant difference in function point counts was found across the implementation environments of the systems. However, the first count, produced during the design phase, yielded values lower than expected because of omissions from the design; the students will therefore perform a second count at the user-testing stage. The average function point count is 270 with a standard deviation of 130. The method currently assumes that the students follow a traditional waterfall development model. The paper discusses two issues: (a) proposing a metric for project size, and (b) automating the production of that metric.
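The abstract does not give the exact mapping from design artifacts to function points, so the following is only a minimal illustrative sketch. It assumes each database table is treated as one internal logical file and each use case as one external input plus one external output, using standard IFPUG average-complexity weights; the function name `estimate_ufp` and the specific weights per artifact are assumptions, not the authors' published rules.

```python
# Illustrative sketch only: maps counts taken from student design tools
# (use cases, database tables) to a rough unadjusted function point total.
# The artifact-to-component mapping is an assumption; the weights are the
# standard IFPUG average-complexity values.

ILF_WEIGHT = 10  # internal logical file, average complexity
EI_WEIGHT = 4    # external input, average complexity
EO_WEIGHT = 5    # external output, average complexity


def estimate_ufp(num_use_cases: int, num_tables: int) -> int:
    """Rough unadjusted function point count from design-tool counts.

    Assumes each table is one internal logical file and each use case
    contributes one external input and one external output.
    """
    data_points = num_tables * ILF_WEIGHT
    transaction_points = num_use_cases * (EI_WEIGHT + EO_WEIGHT)
    return data_points + transaction_points


# Example: a hypothetical project with 25 use cases and 12 tables.
print(estimate_ufp(25, 12))  # 25*9 + 12*10 = 345
```

A real counting tool would refine this by rating each artifact's complexity (simple, average, complex) rather than applying a single average weight, which is one reason design-phase counts can come out lower than expected when artifacts are missing from the models.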