# Enhancement: snapshot testing for lint rule failures (#6499)
Resurrecting #56 - I found it's a really great way to work because it allows you to visualise the error ranges with the code instead of abstractly referencing the line/cols. However the ergonomics of the package are off, IMO, because (a) it uses its own custom class to do testing which doesn't have any of the features of ESLint's `RuleTester`.

I propose we do two things:

(1) Fork the base ESLint rule tester properly. Right now we are subclassing it so that we can add our customisations, and it makes the code pretty complicated and convoluted. As I established in #6456 - I think we're getting to the scale with a lot of things where it's just plain better for us to fork instead of extending.

(2) Add a new `snapshot` option to the `RuleTester`. For example:

```ts
const tester = new RuleTester({ snapshot: true });

tester.run('no-methods', rule, {
  valid: [ ... ],
  invalid: [
    {
      description: 'Some meaningful label',
      code: `
class Foo {
  myMethodName() {
    // ...
  }
}
      `,
      errors: [{ messageId: 'noMethod' }],
    },
  ],
});
```

Would create a snapshot file like:

```ts
exports[`no-methods invalid Some meaningful label`] = `
class Foo {
  myMethodName() {
  ~~~~~~~~~~~~~~~~
    // ...
~~~~~~~~~~
  }
~~~ myMethodName is declared as a method, we prefer always using arrow functions because reasons
}
`;
```

Which is much, much more readable than the current alternative test:

```ts
invalid: [
  {
    description: 'Some meaningful label',
    code: `
class Foo {
  myMethodName() {
    // ...
  }
}
    `,
    errors: [
      {
        messageId: 'noMethod',
        data: { name: 'myMethodName' },
        line: 3,
        endLine: 5,
        column: 3,
        endColumn: 4,
      },
    ],
  },
],
```

A sketch of how the `~~~` underline rendering might be implemented follows the pros/cons below.

Pros:
Cons:
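To make the underline rendering concrete, here is a minimal sketch of how a snapshot serializer could draw the `~~~` markers under a reported range. Everything here (`ReportLocation`, `annotateReport`) is a hypothetical illustration, not an existing API:

```ts
interface ReportLocation {
  line: number; // 1-based, inclusive
  column: number; // 1-based, inclusive
  endLine: number;
  endColumn: number; // 1-based, exclusive
  message: string;
}

// Interleaves `~` underline rows with the source lines they annotate,
// producing output shaped like the snapshot example above.
function annotateReport(code: string, loc: ReportLocation): string {
  const lines = code.split('\n');
  const output: string[] = [];

  lines.forEach((line, i) => {
    const lineNumber = i + 1;
    output.push(line);

    if (lineNumber < loc.line || lineNumber > loc.endLine) {
      return;
    }

    // The underline starts at the report column on the first line and at
    // column 0 on continuation lines; it stops at endColumn on the last line.
    const start = lineNumber === loc.line ? loc.column - 1 : 0;
    const end = lineNumber === loc.endLine ? loc.endColumn - 1 : line.length;
    let underline = ' '.repeat(start) + '~'.repeat(Math.max(end - start, 1));

    // Attach the lint message to the final underline row.
    if (lineNumber === loc.endLine) {
      underline += ` ${loc.message}`;
    }
    output.push(underline);
  });

  return output.join('\n');
}
```

Feeding the `no-methods` example and its report range through a function like this would yield the annotated block shown in the proposed snapshot.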
This system could also be extended to snapshot fixers. In this case we'd emit 3..n snapshots per invalid case - one showing the error report location, one showing the fix diff (or clearly stating there was no fix performed), and two for each suggestion fixer.

Problem to solve - sane organisation of snapshots. We might have to think through how this should be structured, because emitting 3+ snapshots per invalid test could be REALLY difficult to understand and mentally match up - especially if there are a lot of test cases. TBH just emitting 1 per test could be hard enough given the scale of invalid tests for some rules. As mentioned before - enforcing the use of `description` would help here. Even though our AST fixture system emits 6 snapshots per fixture, (IMO) it is really easy to understand because each snapshot is a separate file on disk and is clearly named.

Spitballing possible solutions:
Cons:
(2) create a unique, filename-safe name per test and emit one snapshot file for each test case.
Cons:
(3) use the description to create folders on disk and emit individual snapshots.
Cons:
(4) break compatibility entirely and instead use a similar structure to our AST fixture system - one file per fixture.
Cons:
(5) do a similar system to …, handing the snapshot assertion over to the consumer's test framework. For example they could use inline snapshots if they wanted:

```ts
const tester = new SnapshotRuleTester();

tester.run('rule-name', rule, {
  valid: [ ... ],
  invalid: [
    tester.invalid('foos-the-bar', (test) => {
      const result = test({
        code: ' ... ',
        options: [ ... ],
        errors: [{ messageId: 'foo' }],
      });
      expect(result).toMatchInlineSnapshot();
    }),
  ],
});
```

Pros:
Cons:
---
ESLint may support snapshot tests in the future: eslint/eslint#14936. FYI: I built a snapshot tester for eslint-plugin-unicorn: https://github.com/sindresorhus/eslint-plugin-unicorn/blob/main/test/utils/snapshot-rule-tester.mjs
---
"may support" - the last comment was over a year ago! Last I heard from the team they wanted to try and make the tooling system independent - so snapshot testing mightn't fit in to that vision (this was the response I get when I suggested adding dependency constraints to tests). Those snapshots look pretty neat! I kind of want one per test to make it easier to find specific snapshots but IDK how that'll work exactly. There's still a bit to nut out but either way - we've officially forked |
---
We have to build something test-framework agnostic (the entire …). In the past I've leveraged …. Considering these snapshots are 110% meant to be human-readable, I think we need to go in a different direction and do a custom snapshot format so that we can have syntax highlighting and a nicer experience.

I kind of like how AVA does things - a markdown file with the snapshot contents inside code fences. However, to avoid parsing markdown, AVA also dumps out a separate binary blob containing the test metadata - so it compares against the blob and writes the markdown separately for humans. Do we think that writing two files is an okay trade-off for increased human readability?

AVA examples - #6499 (comment)
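As a rough sketch of the AVA-style two-file approach being described, assuming a hypothetical `SnapshotData` shape: the serialized blob is the source of truth for comparisons, while the markdown is regenerated from the same data purely for humans and never parsed back.

```ts
import * as fs from 'node:fs';
import * as v8 from 'node:v8';

interface SnapshotData {
  title: string;
  annotatedCode: string; // the human-readable, underlined report
}

function writeSnapshots(basePath: string, snapshots: SnapshotData[]): void {
  // Machine-readable blob: compared byte-for-byte on subsequent test runs.
  fs.writeFileSync(`${basePath}.snap`, v8.serialize(snapshots));

  // Human-readable markdown: regenerated on every write, never deserialized.
  const markdown = snapshots
    .map((s) => `## ${s.title}\n\n\`\`\`ts\n${s.annotatedCode}\n\`\`\``)
    .join('\n\n');
  fs.writeFileSync(`${basePath}.md`, `# Snapshot report\n\n${markdown}\n`);
}

function readSnapshots(basePath: string): SnapshotData[] | null {
  // Only the blob is ever read back; the markdown exists for code review.
  if (!fs.existsSync(`${basePath}.snap`)) {
    return null;
  }
  return v8.deserialize(fs.readFileSync(`${basePath}.snap`)) as SnapshotData[];
}
```

This mirrors the trade-off in question: two files on disk per test file, in exchange for a reviewable, syntax-highlighted report.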
---
🤔 My gut instinct is -1 on having two files. I haven't worked with AVA snapshots before, so maybe I'm just being a Luddite... but having multiple files checked in feels off to me. If possible it'd be nice to keep it leaner with just one.
How bad is this, though? We already do Markdown parsing work in our website rule docs generators. The code in https://github.com/typescript-eslint/typescript-eslint/blob/4bf2d7360eaf74c9ef87b196ff4c459b8f50800b/packages/website/plugins/generated-rule-docs.ts isn't so bad. My personal current favorite approach would be: …
---
The binary approach makes for a dead simple implementation, because essentially we're just doing …. The other benefit of the binary approach is that we can store data for the test in it - for example the line/column information for the error message. This is good because it means we can test on exact values and report messages like "Expected the report to start on line number 1, but received a report starting on line number 2".

With the markdown approach we have to hand-craft a deserialiser (MD -> JSON) which is the mirrored inverse of the serialiser (JSON -> MD). There's a maintenance burden here because we need to ensure that each change we make to one is also made in reverse to the other. This is also complex because serialising to MD is easy (you can do it very simply via template strings) - but deserialising requires a parser + AST for accuracy. This means the two algorithms will be asymmetric, which increases the burden.

Finally, we also cannot store arbitrary data within the markdown contents - if we dump an annotated code fence into MD as the human-readable representation of the error, any metadata that isn't literally printed inside that fence cannot be recovered when deserialising.
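To illustrate the asymmetry being described, here is a small sketch (the `TestSnapshot` shape and helper names are hypothetical). The serialising direction really is a one-liner with template strings; the deserialising direction has no equally simple counterpart.

```ts
interface TestSnapshot {
  description: string;
  annotatedCode: string;
  line: number; // stored metadata that never appears in the markdown
  column: number;
}

// JSON -> MD: trivial. All formatting decisions live in one template string.
const toMarkdown = (snap: TestSnapshot): string =>
  `## ${snap.description}\n\n\`\`\`ts\n${snap.annotatedCode}\n\`\`\`\n`;

// MD -> JSON: the inverse would need a real markdown parser (e.g. remark)
// to walk headings and code fences, and it must mirror every formatting
// decision made in `toMarkdown`. Note also that `line` and `column` simply
// do not exist in the markdown output above, so a deserialiser could never
// reconstruct them - which is why the binary blob has to carry them.
```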
---
Hmm, got it. That does sound like a lot of work. Agreed on the points about code complexity - at least for the 1.0 I'd rather focus efforts elsewhere. Maybe a good followup would be to build a separate standalone package for this kind of thing (I do love adding to our list of Tidelift-funded packages 😂)... For the diff differences, I honestly don't mind. A big visual diff can be quite helpful when trying to understand what's wrong. That last screenshot makes me happy!
---
The other consideration is how many snapshots to create for each test. There are ofc two options - one snapshot per test case, or one snapshot for the entire test file. I can see pros and cons to both - increased readability for the former due to less scrolling / a shorter file, traded off against having many more files on disk. I'm kind of leaning towards having just one file right now (same as AVA and Jest do by default) - but I'm open to doing it the other way (it's how I've done all of our other snapshot implementations). What do we think?
---
I'm definitely +1 on having one snapshot file per test file. Some of these tests can have not just dozens but >100 cases. Lots of potential file noise if we go with multiple snapshot files per test file.
---
In angular-eslint I have managed to solve some of these problems in a different way, and there are additional wins in terms of docs. I think it could be expanded even further with some more custom tooling around the rule tester. Each rule has a `cases.ts` file, kept separate from the spec file that actually runs the `RuleTester`.
Importantly for the … The additional really powerful thing this facilitates is that I generate my docs from the unit tests. So every single covered case is automatically documented and guaranteed to be up to date and verified so long as the unit tests are green. You can still use … (A sketch of what this style of cases file can look like follows below.)
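For readers unfamiliar with it, here is a rough sketch of the angular-eslint annotated-case style - the import path and exact option names are reproduced from memory, so treat them as approximate rather than authoritative:

```ts
// cases.ts - lives apart from the spec file so docs tooling can import it
// without triggering a RuleTester run.
import { convertAnnotatedSourceToFailureCase } from '@angular-eslint/utils';

export const valid = ['class Foo { bar = () => {}; }'];

export const invalid = [
  // The `~` characters annotate the expected error range directly in the
  // source, replacing separate line/column assertions.
  convertAnnotatedSourceToFailureCase({
    description: 'should fail when a class declares a method',
    annotatedSource: `
      class Foo {
        myMethodName() {}
        ~~~~~~~~~~~~~~~~~
      }
    `,
    messageId: 'noMethod',
  }),
];
```

A docs generator can then import `valid` and `invalid` and render every case into the rule's documentation page, which is what keeps the docs verified by the tests.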
---
(In case it wasn't clear, the separate files are needed because the docs are generated by actually evaluating the contents of the `cases.ts` file. If everything were in one file then the `RuleTester` would end up being executed too.)
---
this is cool, for sure! perhaps this is something we should build in later as a separate option if people want?
---
The `convertAnnotatedSourceToFailureCase` helper is neat. We could always add a new method like it later. The thing you miss out on from this is that you need to explicitly assert the fixer outputs (auto and suggestion), whereas snapshots let you do that implicitly for free.
---
I have a POC ready which creates one markdown snapshot and one binary snapshot for each test file. It looks pretty nice and is pretty easy to understand, I think. As I said here, I'd like to make this a progressive enhancement - but maybe it'd be better to just create a brand new way of doing tests?
---
This discussion was converted from issue #6498 on February 20, 2023 01:40.