Automated Test Case Generation from Domain-Specific High-Level Requirement Models

  • Oyindamola Olajubu

Student thesis: Doctoral Thesis

Abstract

Verification and validation of software systems is one of the most researched aspects of the software engineering process. The need to ensure that a developed system meets its intended specifications has led to several approaches that link the requirements-gathering and testing phases of development.

This thesis presents a framework that bridges the gap between requirement specification and the testing of software using domain-specific modelling concepts. The proposed modelling notation, High-Level Requirement Modelling Language (HRML), addresses the drawbacks of Natural Language (NL) for high-level requirement specifications, including ambiguity and incompleteness. Real-time checks are implemented to ensure that only valid HRML specification models are used for automated test case generation.

The type of HRML requirement specified in the model determines the approach employed to generate the corresponding test cases. Boundary Value Analysis and Equivalence Partitioning are applied to specifications with predefined range values to generate valid and invalid inputs for robustness test cases. Structural coverage test cases are also generated to satisfy the Modified Condition/Decision Coverage (MC/DC) criterion for HRML specifications with logic expressions. In scenarios where conditional statements are combined with logic expressions, the MC/DC approach is extended to generate the corresponding test cases.
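To make the two generation strategies concrete, the following is a minimal Python sketch of the underlying ideas, not the thesis's HRML tooling: boundary values plus out-of-range equivalence classes for a hypothetical requirement with range [0, 100], and MC/DC test-vector pairs for a hypothetical two-condition decision "A and B". All function names and the example requirement are illustrative assumptions.

    from itertools import product


    def boundary_value_inputs(lo, hi):
        # Boundary Value Analysis: valid inputs at and near the range limits,
        # plus invalid inputs just outside them (one representative per
        # out-of-range equivalence class) for robustness test cases.
        return {
            "valid": [lo, lo + 1, (lo + hi) // 2, hi - 1, hi],
            "invalid": [lo - 1, hi + 1],
        }


    def mcdc_pairs(expr, n=2):
        # MC/DC: for each condition, find a pair of test vectors that differ
        # only in that condition and flip the outcome of the whole decision,
        # showing the condition independently affects the decision.
        vectors = list(product([False, True], repeat=n))
        pairs = []
        for pos in range(n):
            for a, b in product(vectors, vectors):
                differs_only_here = all(
                    (a[i] == b[i]) != (i == pos) for i in range(n)
                )
                if a < b and differs_only_here and expr(*a) != expr(*b):
                    pairs.append((pos, a, b))
        return pairs


    if __name__ == "__main__":
        # Hypothetical range requirement: input shall lie within [0, 100].
        print(boundary_value_inputs(0, 100))
        # Hypothetical decision with two conditions: "A and B".
        for pos, a, b in mcdc_pairs(lambda A, B: A and B):
            print(f"condition {pos}: {a} vs {b}")

For "A and B" the sketch yields the pairs {(F,T),(T,T)} for condition A and {(T,F),(T,T)} for condition B, i.e. the standard n+1 unique vectors that MC/DC requires for a two-condition decision.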

The proposed framework is evaluated through an industrial case study with domain experts, a scalability analysis, a comparative study, and an assessment of its learnability by non-experts. The results indicate a reduction in test case generation time in the case study; however, non-experts spent more time modelling requirements in HRML, although their time taken for test case generation was also reduced.
Date of Award: Oct 2018
Original language: English
Awarding Institution
  • University of Northampton
Supervisors: Suraj Ajit, Mark Johnson & Scott Turner

Keywords

  • domain-specific language
  • model-based testing
  • requirement-based testing
