# Requirement Engineering and Software Testing Alignment Tool (REST-at)
This repository is dedicated to Bao's & Nicole's thesis work.
Scripts should be run as modules to ensure that the relative imports work, e.g.:

```sh
$ python -m <path.to.module>  # Omit the .py file name extension
```

## Testing

Currently, only some of the modules in `src/core/` are partially tested. To run the tests, run the following command from `src/`:
```sh
$ python -m unittest discover
```

## File Structure

The following file structures are REQUIRED for REST-at to work properly. All input files MUST be in Comma Separated Value (.csv) format.
Requirements files must have the following columns (case sensitive), in any order:
- ID
- Feature
- Description
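For example, a minimal requirements file (with hypothetical IDs and contents) might look like this:

```csv
ID,Feature,Description
R1,Login,The system shall allow a registered user to log in with a username and password.
R2,Logout,The system shall allow a logged-in user to log out.
```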
Test case files must have the following columns (case sensitive), in any order:
- ID
- Purpose
- Test steps
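For example (again with hypothetical values; note that cells containing commas must be quoted):

```csv
ID,Purpose,Test steps
T1,Verify that a valid user can log in,"1. Open the login page. 2. Enter valid credentials. 3. Click Log in."
T2,Verify that a logged-in user can log out,"1. Log in. 2. Click Log out."
```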
Alignment files are only needed for development evaluations. They must have the following columns (case sensitive), in any order:

- Req IDs
- Test ID
  - This column must consist of a list of Test IDs separated by commas
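For example (hypothetical IDs; the Test ID cell is quoted when it lists several tests):

```csv
Req IDs,Test ID
R1,"T1,T2"
R2,T3
```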
## Prerequisites

- Python 3.10 or later
- Hardware capable of running LLMs (large amounts of VRAM)
- A virtual Python environment (optional but recommended)
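If you opt for a virtual environment, one common way to set it up (assuming Python's built-in `venv` module and a Unix-like shell) is:

```sh
$ python -m venv .venv
$ source .venv/bin/activate  # On Windows, run: .venv\Scripts\activate
```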
## Installation

Make sure that you're in the correct Python environment before you begin!

- Clone this repository.
- `cd` into the newly created directory.
- Run `pip install -r requirements.txt`
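Taken together, the installation might look like this in a shell (the repository URL and directory name are placeholders):

```sh
$ git clone <repository-url>
$ cd <repository-directory>
$ pip install -r requirements.txt
```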
## Running REST-at Scripts

Make sure that you're in the correct Python environment before you begin!

- Create a `.env` file in the project root.
- Add the following variables to the `.env` file (an example file is sketched below):
  - `MODEL_PATH` - The relative path to a local model.
  - `TOKEN_LIMIT` - The `max_new_tokens` to pass to a model.
  - `REQ_PATH` - The relative path to the requirements file.
  - `TEST_PATH` - The relative path to the tests file.
  - `OPENAI_API_KEY` - If using the OpenAI API.
  - `OPENAI_BASE_URL` - If using the OpenAI API.
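  For example, a `.env` file for a run on a local model might look like this (all paths and values here are hypothetical placeholders):

  ```
  MODEL_PATH=models/my-local-model
  TOKEN_LIMIT=2048
  REQ_PATH=data/requirements.csv
  TEST_PATH=data/tests.csv
  # Only needed when using the OpenAI API:
  # OPENAI_API_KEY=...
  # OPENAI_BASE_URL=...
  ```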
- Run one of two scripts:
  - `python -m src.send_data` - To run on a local model. Adjust the `session_name` variable to your desired output directory name.
  - `python -m src.send_data_gpt` - To run on OpenAI's GPT. Adjust the `model` variable to your desired model.

The scripts will output files in the `out/{model}/{date}/{time}/` directory.
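For instance, a run with a local model might land in a directory like the following (the model name and the exact date and time formats are only illustrative):

```
out/my-local-model/2024-04-02/14-30-05/
```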
## Evaluating REST-at

Make sure that you're in the correct Python environment before you begin!

- Follow the steps in Running REST-at Scripts.
- Add the following variable to the `.env` file:
  - `MAP_PATH` - The relative path to the alignment file.
- Run one of two scripts:
  - `python -m src.eval` - To evaluate each REST trace link.
  - `python -m src.label_eval` - To evaluate "is tested" labels.
Depending on the script used, it will output an `eval.log` or a `label-eval.log` file in `out/{model}/{date}/{time}/` for each model, date, and time. The file contains key metrics of REST-at, such as accuracy and precision.
The script will also output the following files in the `res/{date}/{time}(-label)` directory:

- `eval.log` - The verbose output of the evaluation.
- `res.log` - All the evaluation results.
- `{model}.log` for each model in `out/` - The average metrics of all runs with the model.
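As an illustration, after evaluating runs for two models, the results directory might look like this (the model names, date, and time are hypothetical):

```
res/2024-04-02/14-45-10/
├── eval.log
├── res.log
├── my-local-model.log
└── gpt-4.log
```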