A crowdsourced collection of test cases for @phf's compilers course.
```
$ git clone https://github.com/baileyparker/phf-compilers-tests integration_tests
$ ./integration_tests/bin/run_harness
```

The second command will run the test harness against the `./sc` compiler. If you can't run your compiler with `./sc` from the current directory, you can add the `--sc` argument to let the test harness know where your compiler is:
```
$ ./integration_tests/bin/run_harness --sc ../../path/to/my/sc
```

For convenience, I recommend adding a target to your `Makefile` to run this:
```
integration-test:
	./integration_tests/bin/run_harness

.PHONY: integration-test
```

Don't forget, if you've cloned this repo into your own repo (for versioning your compiler), to add `integration_tests/` to your `.gitignore`.
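From the root of your compiler repo, one way to do that is simply:

```
$ echo 'integration_tests/' >> .gitignore
```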
This repo's `master` should always be safe, so you can just pull:
```
$ git -C 'integration_tests' pull origin master
```

You can run the test suites for only certain phases of the compiler by specifying them as arguments to `run_harness`. The phases so far are:
- `scanner`
- `cst`
- `st` (symbol table)
- `ast`
For example, to run just the scanner and symbol table:
```
$ ./integration_tests/bin/run_harness scanner st
```

For assignments after the CST, if your compiler is implemented such that semantic analysis (symbol table and AST) still runs even when `-c` is passed, those later phases may find semantic errors in a CST fixture (one that would pass `-c` on its own, because it is syntactically valid, but would fail semantic analysis and thus fail the test). Although you should design your compiler so that these phases can be turned off, that may be more trouble than it's worth. So, to prevent false negatives, pass `--skip-cst-passes` (after the CST assignment):
```
$ ./integration_tests/bin/run_harness --skip-cst-passes
```

For the Symbol Table assignment (before the AST assignment), your symbol table should replace all constant integer values with `5`. But for the AST assignment and after, the constants should be the proper (constant-folded) values. For the Symbol Table assignment, you should run the harness with `--st-all-fives` to assert that all constant integer values are indeed `5`:
```
$ ./integration_tests/bin/run_harness --st-all-fives
```

Want to contribute test cases? You are too kind 😄! The process is pretty straightforward (it's the standard open source pull request workflow):
- Fork this repo and clone your fork: `git clone https://github.com/YOUR-USERNAME/phf-compilers-tests`
- Create a branch describing the test cases you're adding: `git checkout -b bogosort-scanner-fixture`
- Add, commit, and push your changes: `git add simple_test/fixtures`, `git commit -m "Add bogosort scanner fixture"`, `git push origin bogosort-scanner-fixture`
- Create a pull request from GitHub
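Putting those steps together, the whole flow looks roughly like this (the branch and commit names are just the bogosort examples from above, and the directory name may differ if you clone somewhere else):

```
$ git clone https://github.com/YOUR-USERNAME/phf-compilers-tests
$ cd phf-compilers-tests
$ git checkout -b bogosort-scanner-fixture
# ...add your fixture files under simple_test/fixtures...
$ git add simple_test/fixtures
$ git commit -m "Add bogosort scanner fixture"
$ git push origin bogosort-scanner-fixture
```

Then open a pull request from GitHub.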
There are fixtures in `simple_test/fixtures`. A fixture is a pair of files that provide input to the compiler and describe the expected output:
- A `*.sim` file (called the *input file*) that will be given to the compiler under test
- A `*.{scanner|cst|st|ast}` file (called the *phase file*) that describes the expected output of running the compiler under test, in the phase indicated by its file extension, against the `*.sim` file of the same name
In a line (if you trust your compiler!), a fixture for the scanner can be created like so (assuming `quicksort.sim` exists):
```
./sc -s simple_test/fixtures/quicksort.sim > simple_test/fixtures/quicksort.scanner 2>&1
```

Notice how the names of the files (without the extension) match. This is how the test harness knows to feed the input sim file to the compiler under test and expect the output in the `*.scanner` file. The test harness derives the phase to run the compiler in from the extension of the second file. Currently, the phases are:
- `*.scanner` - `./sc -s`
- `*.cst` - `./sc -c`
- `*.st` - `./sc -t` (do not replace all `INTEGER` values with `5`s in these files!)
- `*.ast` - `./sc -a`
More will be added with future assignments.
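So, for example, a parser fixture for the same (hypothetical) `quicksort.sim` could be generated just like the scanner one above, swapping in `-c` and the `.cst` extension:

```
./sc -c simple_test/fixtures/quicksort.sim > simple_test/fixtures/quicksort.cst 2>&1
```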
Note that one input `*.sim` file can have multiple expected outputs for different compiler phases (e.g. `random.scanner` and `random.cst` are two phase files that describe the expected output for `./sc -s`, the scanner, and `./sc -c`, the parser, respectively, when given the input `random.sim`).
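So a set of fixtures sharing one input might look roughly like this on disk (names are just the `random` example above):

```
simple_test/fixtures/
├── random.sim       # input fed to the compiler under test
├── random.scanner   # expected output of ./sc -s (scanner)
└── random.cst       # expected output of ./sc -c (parser)
```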
The second file should contain both the expected stdout and stderr from running the simple compiler on the input `*.sim` file. An example of this file is:
```
identifier<ics142>@(4, 9)
:@(10, 10)
ARRAY@(12, 16)
integer<5>@(18, 18)
OF@(20, 21)
identifier<INTEGER>@(23, 29)
;@(30, 30)
eof@(32, 32)
```

Lines in this file that begin with `error:` are not expected to be present in stdout. Instead, such a line signals to the test harness that the compiler should print at least one error to stderr. Note that while you can append a description to these error lines (to make the fixture clearer to anyone reading it to understand why their compiler fails for it), the test harness will not check whether the line matches exactly.
So a `foobar.scanner` file like this:
```
identifier<ics142>@(4, 9)
:@(10, 10)
error: bad character ';' at line 1, col 11
```

will accept output from the simple compiler under test with a different message (as long as the error is in the same place):
```
identifier<ics142>@(4, 9)
:@(10, 10)
error: unexpected `;`@(11, 11)
```

The test will fail, though, if the output looks like this (note how the error comes too early):
```
identifier<ics142>@(4, 9)
error: unexpected ';' at (11, 11)
```

A few rules for fixtures:

- Test fixtures must have `snake_case_names`.
- There should not be duplicate input files (`*.sim` files). If `a.sim` and `b.sim` are identical, then you should merge their phase files.
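A quick (if crude) way to check for duplicates before opening a pull request, assuming GNU coreutils are available: hash every input file and print any lines whose hashes repeat.

```
$ md5sum simple_test/fixtures/*.sim | sort | uniq -w32 -D
```

Any output means two (or more) inputs are identical and their phase files should be merged.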
To ensure a bug-free harness, I've written tests for the test harness itself (I know, *so meta*, right?). To run them, you need `pipenv` to pull in the required dependencies (a simple `python3 -m pip install pipenv` should suffice, although you may need to `sudo apt install python3-pip` first).
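A rough setup sequence (the `--dev` install assumes the harness ships a Pipfile with its dev dependencies, which is how pipenv is normally used):

```
$ sudo apt install python3-pip   # only if pip isn't installed yet
$ python3 -m pip install pipenv
$ cd integration_tests
$ pipenv install --dev           # create the virtualenv and pull in dependencies
```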
To run the tests:
```
$ pipenv run python3 setup.py test
```

In a very opinionated new pattern that I'm trying, linting and mypy static checks are two of the test cases. If the code fails typechecking or linting, the tests fail.
To get test coverage reports:
```
$ pipenv run python3 setup.py coverage
```

To ensure the test harness really works, there are some very comprehensive integration tests that exercise all components together against a faked compiler. This, of course, involves a lot of file I/O and takes a decent amount of time to run (about 30 seconds on a 2017 MBP). You can run tests individually with (for example):
```
$ pipenv run python3 -m tests.simple_test.test_main
```

But you can also pass the environment variable `SLOW_TESTS=0` to exclude these slow integration tests (as well as linting, which also takes a few seconds):
```
$ SLOW_TESTS=0 pipenv run python3 setup.py coverage
```

The test harness is designed to work with:

- Peter's Lubuntu VM (Lubuntu 16.04)
- Python 3.5 (already on the VM)
- Pipenv (to run the meta tests, the tests that test the test harness)
Harness made by Bailey Parker. Special thanks to these wonderful people who contributed test cases:
- Nicholas Hale
- Sam Beckley
- Peter Lazorchak
- Rachel Kinney
- Andrew Rojas
- Your name could be here!
If you find a bug in the test harness or in one of the fixtures, please file an issue.