Testing #4
Conversation
* Add comments to the Graph randomizer code
* Modify the class attribute instead of the instance attribute.
* Use a dictionary for command assertion instead of one assertion per kv pair
* Explicitly check for the name "Commander"
* Reset object count and commands list to previous values
* Use dictionaries for all assertions rather than checking single keys
* Create a Commander object in setUp() for use with most tests
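The "reset to previous values" and "create a Commander object in setUp()" items above can be sketched as a small test case. This is a hypothetical stand-in: the minimal `Commander` class here (with `object_count` and `commands` as class-level state) is an assumption for illustration, not the project's real class.

```python
import unittest


# Hypothetical minimal Commander: class-level state that tests must
# snapshot and restore, mirroring the commit messages above.
class Commander:
    object_count = 0
    commands = []

    def __init__(self):
        Commander.object_count += 1
        self.key = str(Commander.object_count)


class CommanderTest(unittest.TestCase):
    def setUp(self):
        # Snapshot the class-level state, then create the Commander
        # object most tests will use.
        self._prev_count = Commander.object_count
        self._prev_commands = Commander.commands[:]
        self.commander = Commander()

    def tearDown(self):
        # Restore object count and commands list to previous values
        # so tests do not leak state into each other.
        Commander.object_count = self._prev_count
        Commander.commands = self._prev_commands

    def test_commander_created(self):
        self.assertIsInstance(self.commander, Commander)
```

Resetting via snapshot-and-restore (rather than zeroing the counters) keeps each test independent even if something outside the test suite has already created objects.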
MarkKoz commented Jul 28, 2019 • edited
Regarding the helper function, there are two approaches I can think of, roughly:

```python
def assertCommandEqual(self, method: str, *args: Union[Serializable, Undefined], key: Optional[str] = None):
    cmd = Commander.commands[-1]
    expected_cmd = {
        "key": key,
        "method": method,
        "args": list(args),
    }
    self.assertEqual(expected_cmd, cmd)
```

or more comprehensively

```python
def test_command(self, command: Callable, *args: Union[Serializable, Undefined], key: Optional[str] = UNDEFINED):
    if key is UNDEFINED:
        key = command.__self__.key

    # This would need to be done recursively
    args = [arg.key if isinstance(arg, Commander) else arg for arg in args]

    cmd = Commander.commands[-1]
    expected_cmd = {
        "key": key,
        "method": command.__name__,
        "args": args,
    }
    self.assertEqual(expected_cmd, cmd)
```

And really it could be taken a step further by looping over each tracer and calling the latter helper function for each function in a tracer. Suitable arguments for testing each function could be generated automatically based on type annotations lol. Stupid idea, I know, to rely on function definitions for testing like that. But I am concerned that the second helper function (and especially that idea at the end) would be a bad approach because it'd effectively remove the "explicitness" of tests. But still, I can't help but feel there's so much repetition in the tests, even with the first helper function, just because the tracers are such thin wrappers.
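To make the first helper concrete, here is a minimal, runnable sketch of how `assertCommandEqual` might be used. The `Commander` class and its `set` method are hypothetical stand-ins for the real project classes, and the type annotations from the snippet above are dropped so the sketch is self-contained:

```python
import unittest
from typing import Optional


# Hypothetical stand-in: in the real project, Commander.commands
# records each serialized command as a dictionary.
class Commander:
    commands = []

    def __init__(self, key):
        self.key = key

    def set(self, value):
        Commander.commands.append(
            {"key": self.key, "method": "set", "args": [value]}
        )


class CommandTestCase(unittest.TestCase):
    def assertCommandEqual(self, method: str, *args, key: Optional[str] = None):
        # Fetch the most recently recorded command and compare it
        # against the expected dictionary in a single assertion.
        cmd = Commander.commands[-1]
        expected_cmd = {"key": key, "method": method, "args": list(args)}
        self.assertEqual(expected_cmd, cmd)

    def test_set_records_command(self):
        tracer = Commander("tracer-1")
        tracer.set(42)
        self.assertCommandEqual("set", 42, key="tracer-1")
```

The one-dictionary comparison means a failing test prints the whole expected vs. actual command, rather than failing on the first mismatched key.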
I like your idea of randomizing the arguments to test it. 😂 It actually sounds better than the current (manual) way of testing. I was also concerned about the same issue, that the tests are kind of meaningless at the moment. Should we solely (and more thoroughly) test
I don't know what the best way to do this is; I am not really familiar with best practices and methodologies for testing. I do agree there should be some testing on the

The problem with my suggestion of deriving parameters from function definitions is that the tests would rely on the function definitions to work. If a function definition were changed to something it's not meant to be, the test wouldn't catch that mistake. I feel like right now the only point of having all the tracer tests is to test the function definitions.
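The "this would need to be done recursively" comment in the second helper refers to substituting nested `Commander` instances with their keys, since that is how they appear in the serialized command JSON. A sketch of that recursion, using a hypothetical minimal `Commander` with only a `key` attribute:

```python
# Hypothetical stand-in with just the attribute the substitution needs.
class Commander:
    def __init__(self, key):
        self.key = key


def substitute_keys(arg):
    """Recursively replace Commander instances with their keys,
    descending into lists, tuples, and dictionaries."""
    if isinstance(arg, Commander):
        return arg.key
    if isinstance(arg, (list, tuple)):
        return [substitute_keys(item) for item in arg]
    if isinstance(arg, dict):
        return {k: substitute_keys(v) for k, v in arg.items()}
    return arg


# e.g. substitute_keys([Commander("a"), [1, Commander("b")]])
# yields ["a", [1, "b"]]
```

Handling containers matters because tracer methods may receive lists of tracers (or nested structures) as arguments, and a shallow list comprehension would miss those.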
64json commented Jul 29, 2019 • edited
Hmm, good points. I think for now we can stick with the current way of testing. Maybe once we have plenty of algorithms translated from js to java/cpp/py, we can run tests on them (ones that are not using randomized input) to see if they result in the same
Yes, I think some practical usage tests would be good to have. I think we could get away with random inputs if we just seed the RNG.
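Seeding makes a randomized test deterministic: the same seed always yields the same "random" input, so failures are reproducible. A sketch with a hypothetical graph randomizer (the real project's Graph randomizer has its own interface; this function is an assumption for illustration):

```python
import random


def random_graph(node_count, edge_probability):
    # Hypothetical randomizer: an adjacency matrix where each
    # off-diagonal edge exists with the given probability.
    return [
        [1 if i != j and random.random() < edge_probability else 0
         for j in range(node_count)]
        for i in range(node_count)
    ]


# Seeding the RNG makes the "random" input identical on every run.
random.seed(42)
first = random_graph(5, 0.5)

random.seed(42)  # re-seed to reproduce exactly the same graph
second = random_graph(5, 0.5)
```

Any run of the suite (or any CI machine) that uses the same seed exercises the same inputs, which is what makes random-input tests acceptable here.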
It reduces code redundancy by performing the common operations of fetching the last command and constructing the command dictionary from the given arguments.
* Add type alias equivalent to os.PathLike to support Python 3.5
Could use ideas for how to write a test for
I think testing if the URL returned from
MarkKoz commented Aug 27, 2019 • edited
I think I came up with something decent by mocking some things. I didn't want to test that it returns 200 because that's effectively testing the API, which is out of scope. This is ready to be reviewed now. I think we can work on tox and automated testing when we're working on setting up CI/CD and packaging.
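The mocking approach described here can be sketched with `unittest.mock`: the network call is patched out entirely, so the test never hits the remote API. The `fetch_source` function and the URL are hypothetical stand-ins for the project's real fetch code:

```python
import unittest
import urllib.request
from unittest import mock


def fetch_source(url):
    # Hypothetical helper standing in for the project's real fetch code.
    with urllib.request.urlopen(url) as response:
        return response.read().decode()


class FetchTest(unittest.TestCase):
    @mock.patch("urllib.request.urlopen")
    def test_fetch_does_not_hit_network(self, mock_urlopen):
        # The mock replaces urlopen, so we exercise our own code path
        # (decoding, context management) without testing the API itself.
        mock_response = mock.MagicMock()
        mock_response.read.return_value = b"def run(): pass"
        mock_urlopen.return_value.__enter__.return_value = mock_response

        result = fetch_source("https://example.com/algo.py")

        self.assertEqual(result, "def run(): pass")
        mock_urlopen.assert_called_once_with("https://example.com/algo.py")
```

Asserting on the call arguments (rather than the HTTP status) checks that our code builds the right request, which is the part that is actually in scope.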
It seems great to me. I am no expert in Python, so I might have missed something; I would appreciate it if anyone else reading this thread could review it once more.
I did the `setup.py` before writing tests and then realised I was getting ahead of myself. I had planned to look into using tox to test for 3.5, 3.6, and 3.7, but it may be overkill.

TODO:

* `Commander` instances and their keys in command JSON objects.
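For the tox idea mentioned above, a minimal configuration covering the 3.5/3.6/3.7 matrix might look like the sketch below. The env names follow tox conventions, but the deps file and test command are assumptions about this project's layout, not its actual configuration:

```ini
; Minimal tox.ini sketch (assumed layout: tests discoverable by
; unittest, dependencies pinned in requirements.txt).
[tox]
envlist = py35, py36, py37

[testenv]
deps = -rrequirements.txt
commands = python -m unittest discover
```

Running `tox` then builds an isolated virtualenv per interpreter and runs the suite in each, which is exactly the multi-version check discussed here; whether that is worth the setup cost before CI/CD exists is the open question.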