## Suggestion
In #3258 we added a new testing framework for our AST parsing tests.
We did this for a few reasons:
- it allows us to colocate tests alongside the AST definitions - making the ast-spec self documenting of sorts
  - i.e. "this AST node is produced by this code"
  - as an aside - in some future state we'd love to generate documentation on our website using the fixtures - making it truly self documenting!
- it allows us to automatically diff against the babel AST to clearly document differences
  - in the `shared-fixtures` tests we rely on sets of manual patches to the AST to get our "alignment" tests to pass (which is obviously pretty hacky and hard to maintain)
  - in the `shared-fixtures` tests we also rely on a set of ignores to document and skip fixtures we know don't align.
    - This isn't great because there are no checks to enforce that an ignored fixture fails like it should (meaning it's easy to forget to un-ignore a fixture when it should pass)
    - This also doesn't document what's actually different - we usually comment with an explanation, but that doesn't document the actual AST difference!
With a new framework comes a migration! We need to create fixtures for all of our AST nodes to test all the many combinations!
There's already a number of examples present in the `ast-spec` package (for example, all of the `declaration` nodes are done), but to walk through the process:
First, set up the fixture folder:
- open any AST node folder (any folder with a `spec.ts`, e.g. `packages/ast-spec/src/declaration/ClassDeclaration`).
- create a `fixtures` folder
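The setup step above can be sketched on the command line (using the example `ClassDeclaration` path; `mkdir -p` also creates any missing intermediate folders when run from the repository root):

```shell
# From the repository root: create a fixtures folder next to the node's spec.ts
mkdir -p packages/ast-spec/src/declaration/ClassDeclaration/fixtures

# Confirm the new folder sits alongside the node definition
ls packages/ast-spec/src/declaration/ClassDeclaration
```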
Next, add "success" fixture cases:
- within your `fixtures` folder, create a folder with a short kebab-cased name that describes the case you're covering
  - For example, if you're testing the AST for a class declaration that implements one interface you might name it `implements-one`. Each fixture should only cover one hyper-targeted, specific case to keep things easy to debug and review.
- within your new folder, create a file called `fixture.ts` (e.g. `packages/ast-spec/src/declaration/ClassDeclaration/fixtures/implements-one/fixture.ts`) and fill this file with your test case.
  - As mentioned, the code should be hyper-targeted! Put another way - don't try to make your code "semantically" correct (i.e. it doesn't have to pass a type check!) - it just has to be syntactically correct. For example - in our "Class declaration" `implements-one` test case, we shouldn't add a matching interface declaration!
- run the test to generate the snapshots: `cd packages/ast-spec && yarn jest /fixtures`
  - This will generate a `snapshots` subfolder next to your `fixture.ts` that contains a number of snapshots generated from your test. At the time of writing that's:
    - `1-TSESTree-AST.shot` - the AST output from parsing your fixture (parsed using our `typescript-estree` parser)
    - `2-TSESTree-Tokens.shot` - the tokens parsed out of your fixture (parsed using our `typescript-estree` parser)
    - `3-Babel-AST.shot` - same as (1) but parsed using babel
    - `4-Babel-Tokens.shot` - same as (2) but parsed using babel
    - `5-AST-Alignment-AST.shot` - the diff of (1) and (3)
    - `6-AST-Alignment-Tokens.shot` - the diff of (2) and (4)
  - If one of the parsers throws an error on your fixture then your fixture should instead be an error fixture (see below).
  - If there is a difference in the AST, you'll notice your fixture is auto-added to the snapshot `fixtures-with-differences-errors.shot`. This is intentional - it's essentially an automatically generated register of the places where our parser and babel's don't align!
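As a sketch, the `implements-one` fixture described above would contain nothing more than the single construct under test - note that `Bar` is deliberately left undeclared, since the fixture only has to be syntactically valid:

```ts
// Hypothetical contents of fixtures/implements-one/fixture.ts
// `Bar` is intentionally not declared anywhere - the fixture only
// has to parse, not type-check.
class Foo implements Bar {}
```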
To add "error" fixture cases:
- within your `fixtures` folder, create a folder called `_error_`.
- Follow the above "success" guide (create a kebab-case folder within `_error_` and then a `fixture.ts` within that folder, then run jest).
- The snapshots generated will be a little different - they're there to document the thrown errors instead of the AST output! So you'll see:
  - `1-TSESTree-Error.shot` - the error caught during the parse or `"NO ERROR"` if no error was thrown (parsed using our `typescript-estree` parser)
  - `2-Babel-Error.shot` - same as (1) but parsed with babel
  - `3-Alignment-Error.shot` - a string describing the error state: `"Babel errored but TSESTree didn't"`, `"TSESTree errored but Babel didn't"`, or `"Both errored"`
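As an illustrative sketch (the folder name here is hypothetical), an error fixture is just a `fixture.ts` whose contents are intentionally invalid syntax:

```ts
// Hypothetical contents of fixtures/_error_/empty-implements/fixture.ts
// Deliberately invalid syntax - the `implements` clause lists no types -
// so we'd expect the parsers to throw and the error snapshots to record it.
class Foo implements {}
```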
When creating fixtures it's important to stress that each fixture should be as hyper-focused as you can make it! The more code there is, the more AST gets generated and the harder it is for a human to parse the AST output to find the relevant bits. More code also means more surface area for changes to impact, which means more and more snapshots that unnecessarily need updating when we make parser changes (which just creates noise!).
How to decide what to create a fixture for? The best way is to look at the spec for the AST node and try to create a fixture that targets each case for each property. For example, `ClassDeclaration` has a boolean property `abstract: boolean` - so we want to ensure we have at least one fixture which generates a `true` value for the property, and one that generates a `false` value. Again - fixtures are cheap and easy to create, so if in doubt, just add another fixture to cover a case you're thinking about.
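Sketching that `abstract` example (the fixture folder names here are hypothetical), the two values of the property could be covered by two one-line fixtures:

```ts
// Two separate one-line fixture files, shown together for brevity:

// fixtures/abstract/fixture.ts - should produce `abstract: true`
abstract class Foo {}

// fixtures/not-abstract/fixture.ts - should produce `abstract: false`
class Foo {}
```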
Make sure you consider both good and bad cases! Even though our parser doesn't currently throw many syntax errors (#1852), it's good practice to also add bad code fixtures so that we can clearly document the expected current behaviour of our parser and how it aligns with babel!
Finally - don't endeavour to cover every single case in a single PR (unless you really, really want to) - feel free to just create one PR that covers one or a small number of nodes. The beauty of this framework is that we don't need to do everything at once!