Testing #4


Open
MarkKoz wants to merge 28 commits into algorithm-visualizer:master from MarkKoz:testing

Conversation

@MarkKoz (Member) commented Jul 24, 2019 (edited)
I did the `setup.py` before writing tests and then realised I was getting ahead of myself. I had planned to look into using tox to test for 3.5, 3.6, and 3.7, but it may be overkill.

TODO:

  • Add coverage.py dependency
  • Write all tests (100% coverage?)
    • Tracers
      • Array1D
      • Array2D
      • Graph
      • Log
      • Tracer
    • Commander
    • Layouts
    • Randomize
    • JSON output
    • API request
  • Add helper function to drastically reduce repeated code in tests
    • Need to find a way to handle the discrepancy in arguments between `Commander` instances and their keys in command JSON objects.

Modify the class attribute instead of the instance attribute.

* Use a dictionary for command assertion instead of one assertion per kv pair
* Explicitly check for the name "Commander"
* Reset object count and commands list to previous values
* Use dictionaries for all assertions rather than checking single keys
* Create a Commander object in setUp() for use with most tests
@MarkKoz (Member, Author) commented Jul 28, 2019 (edited)

Regarding the helper function, there are two approaches I can think of, roughly:

```python
def assertCommandEqual(
    self,
    method: str,
    *args: Union[Serializable, Undefined],
    key: Optional[str] = None,
):
    cmd = Commander.commands[-1]
    expected_cmd = {
        "key": key,
        "method": method,
        "args": list(args),
    }
    self.assertEqual(expected_cmd, cmd)
```

or, more comprehensively:

```python
def test_command(
    self,
    command: Callable,
    *args: Union[Serializable, Undefined],
    key: Optional[str] = UNDEFINED,
):
    if key is UNDEFINED:
        key = command.__self__.key

    # This would need to be done recursively
    args = [arg.key if isinstance(arg, Commander) else arg for arg in args]

    cmd = Commander.commands[-1]
    expected_cmd = {
        "key": key,
        "method": command.__name__,
        "args": args,
    }
    self.assertEqual(expected_cmd, cmd)
```

And really it could be taken a step further by looping over each tracer and calling the latter helper function for each function in a tracer. Suitable arguments for testing each function could be generated automatically based on type annotations lol. Stupid idea, I know, to rely on function definitions for testing like that.

But I am concerned that the second helper function (and especially that idea at the end) would be a bad approach because it'd effectively remove the "explicitness" of tests. Still, I can't help but feel there's so much repetition in the tests even with the first helper function, just because the tracers are such thin wrappers.
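For illustration, a minimal sketch of how the first helper might be used in a tracer test. `Array1DTracer`, its `set()` method, and the import path are assumptions based on the tracer list above, not confirmed API:

```python
import unittest
from algorithm_visualizer import Array1DTracer  # assumed import path

class TestArray1DTracer(unittest.TestCase):
    # assertCommandEqual would live on a shared base class in practice;
    # it is assumed to be available here.
    def test_set(self):
        tracer = Array1DTracer()
        data = [4, 2, 7]

        tracer.set(data)

        # The most recent recorded command should match the call just made.
        self.assertCommandEqual("set", data, key=tracer.key)
```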

@64json (Member)

I like your idea of randomizing the arguments to test it. 😂 It actually sounds better than the current (manual) way of testing `tracers.js`.

I was also concerned about the same issue, that the tests are kinda meaningless at the moment. Should we solely (and more thoroughly) test `commander.py` and `randomize.py`? Like testing whether it correctly serializes different types of random values, whether it prints commands out to `visualization.json` correctly, etc.
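As a rough sketch of the kind of output test being suggested here; `write_visualization()` is a hypothetical stand-in for however the package actually writes the file:

```python
# Check that recorded commands end up in visualization.json as valid JSON.
import json
import os
import tempfile
import unittest

class TestVisualizationOutput(unittest.TestCase):
    def test_output_is_valid_json(self):
        path = os.path.join(tempfile.mkdtemp(), "visualization.json")
        write_visualization(path)  # hypothetical helper, not confirmed API
        with open(path) as f:
            data = json.load(f)
        # Commands are assumed to be recorded as a list of JSON objects.
        self.assertIsInstance(data, list)
```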

@MarkKoz (Member, Author)

I don't know what the best way to do this is; I am not really familiar with best practices and methodologies for testing. I do agree there should be some testing on the `visualization.json` file. As for more thorough testing, I don't know what more can be done for `commander.py` and `randomize.py`. It seems to me that testing serialisation of various data types doesn't make sense, since what we'd really be doing is testing Python's standard library's `json` module instead of our own code, and that wouldn't be necessary unless we want to be paranoid and not trust Python to ship with a working and tested `json` module.

The problem with my suggestion of generating parameters based on function definitions is that the tests would rely on the function definitions to work. If a function definition were changed to something it's not meant to be, the test wouldn't catch that mistake. I feel like right now the only point of having all the tracer tests is to test the function definitions.

@64json (Member) commented Jul 29, 2019 (edited)

Hmm, good points. I think for now we can stick with the current way of testing. Maybe once we have plenty of algorithms translated from js to java/cpp/py, we can run tests on them (those that are not using randomized input) to see if they result in the same `visualization.json`s?
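A minimal sketch of such a comparison, assuming each translated implementation has already written its own `visualization.json`; the paths are placeholders:

```python
# Compare the visualization.json files produced by two implementations
# of the same algorithm.
import json

def assert_same_visualization(path_a: str, path_b: str) -> None:
    with open(path_a) as a, open(path_b) as b:
        assert json.load(a) == json.load(b), "visualizations differ"
```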

@MarkKoz (Member, Author)

Yes, I think some practical usage tests would be good to have. I think we could get away with random inputs if we just seed the RNG.
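A minimal sketch of seeding Python's RNG in a test fixture so that randomized inputs are reproducible across runs:

```python
import random
import unittest

class ReproducibleInputTest(unittest.TestCase):
    def setUp(self):
        # A fixed seed makes random.* produce the same sequence on every
        # run, so "random" inputs yield a deterministic visualization.
        random.seed(1234)
```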


It reduces code redundancy by performing the common operations of fetching the last command and constructing the command dictionary from the given arguments.

* Add type alias equivalent to os.PathLike to support Python 3.5
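A minimal sketch of such an alias; the name `Path` and the exact union members are assumptions, the known constraint being that `os.PathLike` only exists from Python 3.6 onward:

```python
import os
import sys
from typing import Union

# os.PathLike was added in Python 3.6, so fall back to plain strings and
# bytes on 3.5. The alias name "Path" is an assumption for illustration.
if sys.version_info >= (3, 6):
    Path = Union[str, bytes, os.PathLike]
else:
    Path = Union[str, bytes]
```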
@MarkKoz (Member, Author)

Could use ideas for how to write a test for `get_url`. I can test for the return type easily but not sure what else.

@64json (Member)

I think testing if the URL returned from `get_url` responds with 200 would be good.

MarkKoz marked this pull request as ready for review August 27, 2019 17:59
@MarkKoz (Member, Author) commented Aug 27, 2019 (edited)

I think I came up with something decent by mocking some things. I didn't want to test that it returns 200 because that's effectively testing the API, which is out of scope.

This is ready to be reviewed now. I think we can work on tox and automated testing when we're working on setting up CI/CD and packaging.
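A minimal sketch of the mocking approach, assuming `get_url` ultimately calls some request function; the patch target `tracers.api.request` and the import path are placeholders, not the package's confirmed module layout:

```python
import unittest
from unittest import mock

from tracers import get_url  # assumed import path

class TestGetUrl(unittest.TestCase):
    # Patch the network call so the test never hits the real API.
    @mock.patch("tracers.api.request")  # placeholder target
    def test_get_url(self, mock_request):
        mock_request.return_value = "https://algorithm-visualizer.org/scratch-paper/abc"
        url = get_url()
        self.assertIsInstance(url, str)
        mock_request.assert_called_once()
```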


@64json (Member)

It seems great to me. I am no expert in Python, so I might have missed something. I would appreciate it if anyone else reading this thread could review once more.

Reviewers

@64json approved these changes

