Automated interface probing applied to COTS component evaluation

  • Authors:
  • Bogdan Korel;Carl James Mueller

  • Year:
  • 2003

Abstract

Component-based software development offers the promise of reducing the cost and time to develop new software applications, but the cost of selecting components can limit these savings. Selecting components for critical applications (e.g., medicine, aviation) requires conducting more evaluations at the beginning of the development process. Commercial-Off-The-Shelf (COTS) components are frequently distributed without source code and with limited documentation, making them more difficult to evaluate, and international treaties prohibit reverse engineering to obtain the source code. Although much effort has been devoted to the overall selection process, very little work has been done on developing new tools for component evaluation. Automated Interface Probing (AIP) uses formal methods, execution-based testing methods, and automated test-data generation techniques to evaluate components. A component is evaluated from a formal specification describing its public interface, how the component interacts with other components, and any state behavior it exhibits. Once a formal evaluation specification is available, the developer selects the type of automatic data generation technique used in the evaluation. Automatic data generation methods may include random data, black-box data, robustness data, and interactive data generation methods such as a chaining approach. Some applications require components to have a specific collection of behaviors, as specified by an Extended Finite State Machine (EFSM). AIP uses a state-test expression to determine a component's current state. Because it can be difficult to manually derive a state-test from a large EFSM, AIP provides a heuristic to assist developers in deriving state-tests; this heuristic uses the observable outcomes of distinguishable transitions to develop state-tests. To provide confidence in this research, two major experiments were conducted to evaluate the effectiveness of these concepts.
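The state-test idea can be sketched with a deliberately tiny EFSM. The two-state "lock" component below, its transition outputs, and the `state_test` helper are illustrative assumptions for this sketch, not artifacts of AIP itself; the point is that a transition whose observable output differs by state can identify the current state without access to source code:

```python
# Minimal two-state EFSM acting as a black-box component (an assumption
# for illustration): the only observable behavior is press()'s output.
class Lock:
    def __init__(self):
        self.state = "LOCKED"

    def press(self):
        # Distinguishable transition: the observable output differs
        # depending on the current state, so it can serve as a state-test.
        if self.state == "LOCKED":
            self.state = "UNLOCKED"
            return "unlocked"
        else:
            self.state = "LOCKED"
            return "locked"

def state_test(component):
    """Infer the current state from the observable outcome of a
    distinguishable transition, then toggle back to restore the state."""
    out = component.press()
    component.press()  # undo the probe so the test is side-effect free
    return "LOCKED" if out == "unlocked" else "UNLOCKED"

lock = Lock()
print(state_test(lock))  # identifies the state from output alone
```

Deriving such a test by hand is easy for two states; the heuristic described above targets large EFSMs where finding a distinguishing transition sequence manually is impractical.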
In the first experiment, 22 C standard library functions from four different C/C++ compilers were evaluated using AIP. Of the 132 comparisons required to evaluate the 88 components, 102 differences were identified. In the second experiment, 10 EFSM specifications, ranging in size from 3 to 16 states, were evaluated using the state-test derivation heuristic, which automatically derived state-tests for 84% of the 82 states.
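As a rough illustration of the first experiment's setup, the sketch below drives two hypothetical implementations of the same interface with randomly generated data and records every input on which their outputs diverge. The `clamp` functions and the `probe` driver are invented for this example and are not taken from the study:

```python
import random

# Two stand-in "components" exposing the same interface, analogous to
# the same library function from two compilers: both should clamp x
# into [lo, hi], but the second mishandles the lower bound.
def clamp_a(x, lo, hi):
    return max(lo, min(x, hi))

def clamp_b(x, lo, hi):  # deliberately divergent variant
    return min(x, hi) if x > lo else lo + 1

def probe(components, trials=1000, seed=0):
    """Random-data interface probing: feed every component the same
    generated inputs and report those where the outputs differ."""
    rng = random.Random(seed)
    differences = []
    for _ in range(trials):
        lo = rng.randint(-100, 100)
        hi = lo + rng.randint(0, 100)
        x = rng.randint(-200, 200)
        outputs = [c(x, lo, hi) for c in components]
        if len(set(outputs)) > 1:
            differences.append(((x, lo, hi), outputs))
    return differences

diffs = probe([clamp_a, clamp_b])
print(f"{len(diffs)} divergent inputs found")
```

Comparing components pairwise on identical generated inputs is what makes the count of "comparisons" in the experiment grow with the number of implementations under evaluation.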