# Testing
- unit test (`*.unit.test.ts`): testing a unit of behaviour (not units of code)
- integration test (`*.test.ts`): testing multiple units of behaviour and how they work together
- smoke test (`*.smoke.test.ts`): testing a usage scenario
- test plan: what to test manually because it is not covered by the above types of tests

- Testing against external systems? Preference is (in order): live system, fake, stub, mock
- Test the outcome, not the implementation (i.e. if all private code were refactored, no tests should break)
- All code has a test at one of the levels outlined in the Terminology section
- Experiments have tests (you do not need to test all permutations of experiments being on/off, though)

Note: Unit tests are those in files with extension `.unit.test.ts`.
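The file-suffix convention above can be sketched as a small classifier. This is a hypothetical helper purely for illustration; the extension itself selects tests by file glob, not with a script like this:

```shell
# Hypothetical helper illustrating the naming convention. Order matters,
# because every *.unit.test.ts file also matches the broader *.test.ts glob.
classify() {
  case "$1" in
    *.unit.test.ts)       echo "unit test" ;;
    *.smoke.test.ts)      echo "smoke test" ;;
    *.functional.test.ts) echo "functional test" ;;
    *.test.ts)            echo "integration/system test" ;;
    *)                    echo "not a test file" ;;
  esac
}

classify extension.sort.unit.test.ts   # → unit test
classify extension.sort.test.ts        # → integration/system test
```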
- Make sure you have compiled all code (done automatically when using incremental building)
- Ensure you have disabled breaking into 'Uncaught Exceptions' when running the unit tests
- For the linter and formatter tests to pass, you will need the corresponding Python libraries installed locally
- Run the tests via the `Unit Tests` launch option

You can also run them from the command line (after compiling):

```shell
npm run test:unittests                              # runs all unit tests
npm run test:unittests -- --grep='<NAME-OF-SUITE>'  # runs only the suites matching the pattern
```
To run only a specific test suite for unit tests, alter the `launch.json` file in the `"Debug Unit Tests"` section by setting the `grep` field:

```json
"args": [
    "--timeout=60000",
    "--grep", "<suite name>"
],
```

This will only run the suite with the tests you care about during a test run (be sure to set the debugger to run the `Debug Unit Tests` launcher).
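For context, the `args` field sits inside the existing `Debug Unit Tests` entry in `launch.json`. A minimal sketch of where it lives (the surrounding fields are abbreviated and illustrative; keep whatever your configuration already contains):

```json
{
    "name": "Debug Unit Tests",
    // ...existing fields unchanged...
    "args": [
        "--timeout=60000",
        "--grep", "<suite name>"
    ]
}
```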
Functional tests are those in files with extension `.functional.test.ts`. These tests are similar to system tests in scope, but are run like unit tests.

You can run functional tests in a similar way to unit tests:

- via the `Functional Tests` launch option, or
- on the command line via `npm run test:functional`
Note: System tests are those in files with extension `.test*.ts` which are neither `.functional.test.ts` nor `.unit.test.ts`.

- Make sure you have compiled all code (done automatically when using incremental building)
- Ensure you have disabled breaking into 'Uncaught Exceptions' when running the tests
- For the linter and formatter tests to pass, you will need the corresponding Python libraries installed locally, using the `./requirements.txt` and `build/test-requirements.txt` files
- Run the tests via `npm run` or the debugger launch options (you can "Start Without Debugging")
- Note that you will be running tests under the default Python interpreter for the system

You can also run the tests from the command line (after compiling):

```shell
npm run testSingleWorkspace   # will launch the VS Code UI
npm run testMultiWorkspace    # will launch the VS Code UI
```
If you want to change which tests are run or which version of Python is used, you can do so by setting environment variables. The same variables work when running from the command line or launching from within VS Code, though the mechanism used to specify them differs slightly.

- Setting `CI_PYTHON_PATH` lets you change the version of Python the tests are executed with
- Setting `VSC_PYTHON_CI_TEST_GREP` lets you filter the tests by name
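Both are ordinary environment variables, so they can be combined. A sketch (the interpreter path and suite name are placeholder values, not ones the repo defines):

```shell
# Placeholder values; substitute your own interpreter path and suite name.
export CI_PYTHON_PATH=/absolute/path/to/python3   # interpreter to test with
export VSC_PYTHON_CI_TEST_GREP=Sorting            # only run matching tests

# Show what the test runner will inherit:
env | grep -E '^(CI_PYTHON_PATH|VSC_PYTHON_CI_TEST_GREP)='
```

With these exported, running `npm run testSingleWorkspace` in the same shell would execute only the tests matching `Sorting` under the chosen interpreter.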
### CI_PYTHON_PATH

In some tests a Python executable is actually run. The default executable is `python` (for now). Unless you've run the tests inside a virtual environment, this will almost always mean Python 2 is used, which probably isn't what you want.

By setting the `CI_PYTHON_PATH` environment variable you can control the exact Python executable that gets used. If the executable you specify isn't on `$PATH`, be sure to use an absolute path.

This is also the mechanism for testing against other versions of Python.
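Since an absolute path is the safest choice, one convenient pattern is to resolve the interpreter with `command -v` (a sketch; `python3` being installed on your machine is an assumption):

```shell
# Resolve python3 to an absolute path so the setting keeps working even if
# the test runner sees a different PATH than your interactive shell does.
export CI_PYTHON_PATH="$(command -v python3)"
echo "$CI_PYTHON_PATH"
```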
### VSC_PYTHON_CI_TEST_GREP

This environment variable lets you provide a regular expression that will be matched against suite and test "names" to decide which tests run. By default all tests are run.

For example, to run only the tests in the `Sorting` suite (from `src/test/format/extension.sort.test.ts`) you would set the value to `Sorting`. To run the `ProcessService` and `ProcessService Observable` tests which relate to `stderr` handling, you might use the value `ProcessService.*stderr`.

Be sure to escape any grep-sensitive characters in your suite name.
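To escape such characters mechanically, you could run the suite name through `sed` before exporting it (a sketch; the suite name here is hypothetical):

```shell
suite='Sorting (custom)'   # hypothetical suite name containing ( and )
# Backslash-escape characters that are special in a regular expression:
escaped=$(printf '%s' "$suite" | sed 's/[][(){}.*+?^$\\|]/\\&/g')
echo "$escaped"   # → Sorting \(custom\)
```

You could then run the tests with `VSC_PYTHON_CI_TEST_GREP="$escaped" npm run testSingleWorkspace` so the name matches literally.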
In some rare cases in the "system" tests the `VSC_PYTHON_CI_TEST_GREP` environment variable is ignored. If that happens, you will need to temporarily modify the `const grep =` line in `src/test/index.ts`.
### Launching from VS Code

In order to set environment variables when launching the tests from VS Code, edit the `launch.json` file. For example, you can add the following to the appropriate configuration to change the interpreter used during testing:

```json
"env": {
    "CI_PYTHON_PATH": "/absolute/path/to/interpreter/of/choice/python"
}
```
### On the command line

The mechanism for setting environment variables on the command line varies by system, but most systems support a syntax like the following for setting a single variable for a subprocess:

```shell
VSC_PYTHON_CI_TEST_GREP=Sorting npm run testSingleWorkspace
```
The extension has a number of scripts in `./pythonFiles`. Tests for these scripts are found in `./pythonFiles/tests`. To run those tests:

```shell
python2.7 pythonFiles/tests/run_all.py   # under Python 2.7
python3 -m pythonFiles.tests             # under Python 3
```

By default, functional tests are included. To exclude them:

```shell
python3 -m pythonFiles.tests --no-functional
```

To run only the functional tests:

```shell
python3 -m pythonFiles.tests --functional
```
Clone the repo into any directory, open that directory in VS Code, and use the `Extension` launch option within VS Code.

The easiest way to debug the Python Debugger (in our opinion) is to clone this git repo directly into your extensions directory. From there use the `Extension + Debugger` launch option.
Once we code freeze for a release, we need to verify that everything is working appropriately. While automated tests are wonderful and help prevent regressions, manually verifying also helps in cases where a test might not be thorough enough or testing is simply too difficult to automate.
We use VS Code's release testing procedure during their endgame. This entails:

- Writing test plan items (TPIs) for large features (with the TPI having the `testplan-item` label and the issue it is for having the `on-testplan` label)
- Verifying any bugs fixed in this release
- Verifying simple features added in this release (with the `verification-needed` label)

What this means is that all of the issues included in the milestone should have a `bug`, `verification-needed`, or `on-testplan` label, with an accompanying TPI when closed.