Coverage config #8003
Conversation
dopplershift commented Jan 31, 2017
- Cleaning up codecov config to remove commenting
- Remove coveralls
- Trying to set up codecov to report coverage in test code separately (see the sketch below)
- Also enabled coverage on AppVeyor
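A rough sketch of what such a codecov.yml could look like (the status names, targets, and paths here are assumptions for illustration, not the actual file in this PR):

```yaml
# codecov.yml -- illustrative sketch only; names, targets, and paths are assumed
comment: false            # stop codecov from commenting on pull requests

coverage:
  status:
    project:
      default:            # overall coverage of the whole repository
        target: auto
      tests:              # coverage of the test code itself, reported separately
        target: 100%
        paths:
          - lib/matplotlib/tests/
```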
dopplershift commented Jan 31, 2017
If this works first time through I'll be amazed...
tacaswell commented Feb 1, 2017
Not liking something about the configuration on Windows.
QuLogic commented Feb 1, 2017
Adding tests to coverage fixes #6902.
f52966c to cefda55
NelleV left a comment
LGTM 👍
QuLogic commented Feb 1, 2017 • edited
So if I understand this correctly: codecov should pass if test files are 100% covered. How is this true if we aren't technically fully running on Python 2.7 at the moment? Surely there's a skip for Py 2 in at least one test?
dopplershift commented Feb 1, 2017
It's not working properly yet. There should be 3 separate statuses.
Not sure why it didn't work, since this was verbatim from my own project with just some paths changed...
dopplershift commented Feb 1, 2017
The way this works is that codecov merges coverage runs... so if every line is run at least once, you'll get 100% coverage. We won't be able to see if we're not running things on py2. It looks like there is a feature called flags where we could mark each run as python 2 or 3. We could then segment that way, but then if we have version-specific test code, we would never get to 100%...
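For reference, a hedged sketch of how the codecov flags feature mentioned above could be used to segment the reports (the flag names and the `-F` uploader option are assumptions for illustration, not anything in this PR): each CI run would upload with a flag, e.g. `codecov -F py2` on the Python 2 builds, and the statuses would filter on those flags:

```yaml
# codecov.yml sketch -- illustrative only; flag names are assumed
coverage:
  status:
    project:
      py2:
        flags:
          - py2           # status computed only from uploads tagged "py2"
      py3:
        flags:
          - py3           # status computed only from uploads tagged "py3"
```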
QuLogic commented Feb 2, 2017
If it's anything like the PR to stop codecov from commenting, we might need to actually merge it to see any effect.
QuLogic commented Feb 2, 2017
Oh wait, maybe it's because we've already got a …
dopplershift commented Feb 2, 2017
Yeah, the duplication is probably the problem. Let's see how this one goes.
f0fa141 to 503adfa
dopplershift commented Feb 2, 2017 • edited
Still didn't take. I've rebased on master (squashed some commits) and fixed a couple minor validation problems with the yaml file I made. If this doesn't seem to take, my only thoughts are to try merging with master and hope for the best.
dopplershift commented Feb 2, 2017
Bingo! And somehow we only have 97% coverage of test files... Should we commit (maybe with a different target) and try to fix it elsewhere?
dstansby left a comment
👍
tacaswell commented Feb 2, 2017
Can we push the test coverage limit down before merging?
NelleV commented Feb 2, 2017
We've got 3% of our tests not running on Travis? That's interesting to know.
QuLogic commented Feb 2, 2017
It's mostly backends (e.g., Qt4/Qt5 can't be run in the same process or installed in the same conda environment), some pickle and animation stuff, and everything else seems small. But when looking at the file on codecov, it says 'File not found in report', so I can't see what the actual problematic lines are. @dopplershift any ideas why it's not coming up properly?
503adfa to 127301c
dopplershift commented Feb 3, 2017
@NelleV Well, 2.2% of the lines of code in our tests are not being executed anyways. @QuLogic I think it's failing when it tries to compare against previous coverage on the matplotlib repo, which was excluding the tests. I've rebased to address the conflicts in …
127301c to 819f78e
dopplershift commented Feb 3, 2017
Rebased now that #7974 is merged. Hopefully everything will work as expected.
dopplershift commented Feb 4, 2017
I think we're good now. I did notice at least one coverage change caused by the fact that we have a wait loop (for locks in …
QuLogic commented Feb 4, 2017
All AppVeyor builds show "No coverage report found".
This way we can get coverage for platform-dependent lines.
This sets it to report even if CI fails (just to get to see numbers). Separate out coverage for lines in tests (which should be 100%, but is currently ~97.7%), versus those in the library itself.
819f78e to f78c205
dopplershift commented Feb 4, 2017
Was submitting the coverage after we cleaned the repos for building packages--which nuked the coverage results. I've now added the upload to the testing phase. (I'm open to other suggestions for fixing this.) Maybe now everything will work.
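A minimal sketch of the fix described above, assuming a conventional appveyor.yml layout (the section name and the exact pytest/codecov invocations are assumptions, not the actual file in this PR): the upload moves into the test phase so it runs before the packaging steps clean the tree.

```yaml
# appveyor.yml sketch -- illustrative only; commands and section layout are assumed
test_script:
  # run the test suite and collect coverage first
  - python -m pytest --cov=matplotlib --cov-report=xml
  # upload while the coverage results still exist, i.e. before the
  # later packaging steps wipe the working directory
  - pip install codecov
  - codecov -e PYTHON_VERSION PLATFORM
```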
dopplershift commented Feb 4, 2017
Ok, it looks to be properly running on AppVeyor now--though only for pytest=True. I thought we were entirely using pytest now? (Irrelevant for this PR either way.)
```
# Generate a html for visual tests
- python visual_tests.py
- if x%USE_PYTEST% == xyes pip install codecov
- if x%USE_PYTEST% == xyes codecov -e PYTHON_VERSION PLATFORM
```
These lines mean that codecov only happens if USE_PYTEST is true, which is only for the one build. pytest is still being run in the other builds, just through tests.py in the root which calls test() in matplotlib/__init__.py.
I don't know if running the tests using tests.py collects coverage, but if it does I guess these if statements can just go.
Yea, these conditions could be dropped.
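If coverage is indeed collected either way, a minimal sketch of dropping those conditions (assumed layout; grounded only in the lines quoted above) would be:

```
- pip install codecov
- codecov -e PYTHON_VERSION PLATFORM
```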