
CleanCut/green

A clean, colorful, fast python test runner.

Features

  • Clean - Low redundancy in output. Result statistics for each test are vertically aligned.
  • Colorful - Terminal output makes good use of color when the terminal supports it.
  • Fast - Tests run in independent processes. (One per processor by default. Does not play nicely with gevent.)
  • Powerful - Multi-target + auto-discovery.
  • Traditional - Use the normal unittest classes and methods for your unit tests.
  • Descriptive - Multiple verbosity levels, from just dots to full docstring output.
  • Convenient - Bash-completion and Zsh-completion of options and test targets.
  • Thorough - Built-in integration with coverage.
  • Embedded - Can be run with a setup command without in-site installation.
  • Modern - Supports Python 3.8+. Additionally, PyPy is supported on a best-effort basis.
  • Portable - macOS, Linux, and BSDs are fully supported. Windows is supported on a best-effort basis.
  • Living - This project grows and changes. See the changelog.

Community

Training Course

There is a training course available if you would like professional training: Python Testing with Green.

Screenshots

Top: With Green! Bottom: Without Green :-(

Quick Start

```
pip3 install green  # To upgrade: "pip3 install --upgrade green"
```

Now run green...

```
# From inside your code directory
green

# From outside your code directory
green code_directory

# A specific file
green test_stuff.py

# A specific test inside a large package.
#
# Assuming you want to run TestClass.test_function inside
# package/test/test_module.py ...
green package.test.test_module.TestClass.test_function

# To see all examples of all the failures, errors, etc. that could occur:
green green.examples

# To run Green's own internal unit tests:
green green
```

For more help, see the complete command-line options or run green --help.

Config Files

Configuration settings are resolved in this order, with settings found later in the resolution chain overwriting earlier settings (last setting wins).

  1. $HOME/.green
  2. A config file specified by the environment variable $GREEN_CONFIG
  3. setup.cfg in the current working directory of the test run
  4. .green in the current working directory of the test run
  5. A config file specified by the command-line argument --config FILE
  6. Command-line arguments

Any arguments specified in more than one place will be overwritten by the value of the LAST place the setting is seen. So, for example, if a setting is turned on in ~/.green and turned off by a command-line argument, then the setting will be turned off.
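The last-setting-wins rule amounts to a simple ordered merge over the sources above. A minimal sketch (illustrative only, not Green's actual implementation):

```python
# Illustrative sketch of "last setting wins" resolution: later sources
# simply overwrite earlier ones. The dicts stand in for parsed config
# sources, in resolution order.
sources = [
    {"verbose": 1, "logging": True},   # e.g. from ~/.green
    {"verbose": 2},                    # e.g. from .green in the cwd
    {"logging": False},                # e.g. from command-line arguments
]

resolved = {}
for source in sources:
    resolved.update(source)  # later keys overwrite earlier ones

print(resolved)  # {'verbose': 2, 'logging': False}
```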

Config file format syntax is option = value on separate lines. option is the same as the long options, just without the double-dash (--verbose becomes verbose).

Most values should be True or False. Accumulated values (verbose, debug) should be specified as integers (-vv would be verbose = 2).

Example:

```
verbose       = 2
logging       = True
omit-patterns = myproj*,*prototype*
```

Troubleshooting

One easy way to avoid common importing problems is to navigate to the parent directory of the directory your python code is in. Then pass green the directory your code is in and let it autodiscover the tests (see the Tutorial below for tips on making your tests discoverable).

```
cd /parent/directory
green code_directory
```

Another way to address importing problems is to carefully set up your PYTHONPATH environment variable to include the parent path of your code directory. Then you should be able to just run green from inside your code directory.

```
export PYTHONPATH=/parent/directory
cd /parent/directory/code_directory
green
```

Integration

Bash and Zsh

To enable Bash-completion and Zsh-completion of options and test targets when you press Tab in your terminal, add the following line to the Bash or Zsh config file of your choice (usually ~/.bashrc or ~/.zshrc):

```
which green >& /dev/null && source "$( green --completion-file )"
```

Coverage

Green has built-in integration support for the coverage module. Add -r or --run-coverage when you run green.

setup.py command

Green is available as a setup.py runner, invoked as any other setup command:

```
python setup.py green
```

This requires green to be present in the setup_requires section of your setup.py file. To run green on a specific target, use the test_suite argument (or leave blank to let green discover tests itself):

```
# setup.py
from setuptools import setup

setup(
    ...
    setup_requires = ['green'],
    # test_suite = "my_project.tests"
)
```

You can also add an alias to the setup.cfg file, so that python setup.py test actually runs green:

```
# setup.cfg
[aliases]
test = green
```

Django

Django can use green as the test runner for running tests.

  • To just try it out, use the --testrunner option of manage.py:

    ./manage.py test --testrunner=green.djangorunner.DjangoRunner

  • Make it persistent by adding the following line to your settings.py:

    TEST_RUNNER = "green.djangorunner.DjangoRunner"

  • For verbosity, green adds an extra command-line option to manage.py, to which you can pass the number of v's you would have used on green:

    ./manage.py test --green-verbosity 3

nose-parameterized

Green will run generated tests created by nose-parameterized. They have lots of examples of how to generate tests, so follow the link above if you're interested.

Unit Test Structure Tutorial

This tutorial covers:

  • External structure of your project (directory and file layout)
  • Skeleton of a real test module
  • How to import stuff from your project into your test module
  • Gotchas about naming... everything.
  • Where to run green from and what the output could look like.
  • DocTests

For more in-depth online training please check out Python Testing with Green:

  • Lay out your test packages and modules correctly
  • Organize your tests effectively
  • Learn the tools in the unittest and mock modules
  • Write meaningful tests that enable quick refactoring
  • Learn the difference between unit and integration tests
  • Use advanced tips and tricks to get the most out of your tests
  • Improve code quality
  • Refactor code without fear
  • Have a better coding experience
  • Be able to better help others

External Structure

This is what your project layout should look like with just one module in your package:

```
proj                  # 'proj' is the package
├── __init__.py
├── foo.py            # 'foo' (or proj.foo) is the only "real" module
└── test              # 'test' is a sub-package
    ├── __init__.py
    └── test_foo.py   # 'test_foo' is the only "test" module
```

Notes:

  1. There is an __init__.py in every directory. Don't forget it. It can be an empty file, but it needs to exist.

  2. proj itself is a directory that you will be storing somewhere. We'll pretend it's in /home/user.

  3. The test directory needs to start with test.

  4. The test modules need to start with test.

When your project starts adding code in sub-packages, you will need to make a choice on where you put their tests. I prefer to create a test subdirectory in each sub-package.

```
proj
├── __init__.py
├── foo.py
├── subpkg
│   ├── __init__.py
│   ├── bar.py
│   └── test              # test subdirectory in every sub-package
│       ├── __init__.py
│       └── test_bar.py
└── test
    ├── __init__.py
    └── test_foo.py
```

The other option is to start mirroring your subpackage layout from within a single test directory.

```
proj
├── __init__.py
├── foo.py
├── subpkg
│   ├── __init__.py
│   └── bar.py
└── test
    ├── __init__.py
    ├── subpkg            # mirror sub-package layout inside test dir
    │   ├── __init__.py
    │   └── test_bar.py
    └── test_foo.py
```
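The naming rules above can be exercised with the standard library's own discovery, which follows the same "test*" conventions. A sketch that builds the minimal layout in a temporary directory and asks the stdlib loader to find the test (illustrative; green's autodiscovery is its own implementation):

```python
# Build the minimal 'proj' layout from the tutorial in a temp directory,
# then run stdlib unittest discovery over it.
import os
import tempfile
import unittest

root = tempfile.mkdtemp()
test_dir = os.path.join(root, "proj", "test")
os.makedirs(test_dir)
for d in (os.path.join(root, "proj"), test_dir):
    open(os.path.join(d, "__init__.py"), "w").close()  # every dir needs one

with open(os.path.join(test_dir, "test_foo.py"), "w") as f:
    f.write(
        "import unittest\n"
        "class TestFoo(unittest.TestCase):\n"
        "    def test_pass(self):\n"
        "        self.assertTrue(True)\n"
    )

# Discovery matches files named "test*.py", just like the rules above.
suite = unittest.defaultTestLoader.discover(
    os.path.join(root, "proj"), top_level_dir=root
)
print(suite.countTestCases())  # the one test in test_foo.py
```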

Skeleton of Test Module

Assume foo.py contains the following contents:

```
def answer():
    return 42


class School():
    def food(self):
        return 'awful'

    def age(self):
        return 300
```

Here's a possible version of test_foo.py you could have.

```
# Import stuff you need for the unit tests themselves to work
import unittest

# Import stuff that you want to test.  Don't import extra stuff if you don't
# have to.
from proj.foo import answer, School

# If you need the whole module, you can do this:
#     from proj import foo
#
# Here's another reasonable way to import the whole module:
#     import proj.foo as foo
#
# In either case, you would obviously need to access objects like this:
#     foo.answer()
#     foo.School()

# Then write your tests
class TestAnswer(unittest.TestCase):
    def test_type(self):
        "answer() returns an integer"
        self.assertEqual(type(answer()), int)

    def test_expected(self):
        "answer() returns 42"
        self.assertEqual(answer(), 42)

class TestSchool(unittest.TestCase):
    def test_food(self):
        school = School()
        self.assertEqual(school.food(), 'awful')

    def test_age(self):
        school = School()
        self.assertEqual(school.age(), 300)
```

Notes:

  1. Your test class must subclass unittest.TestCase. Technically, neither unittest nor Green cares what the test class is named, but to be consistent with the naming requirements for directories, modules, and methods, we suggest you start your test class with Test.

  2. Start all your test method names with test.

  3. What a test class and/or its methods actually test is entirely up to you. In some sense it is an art form. Just use the test classes to group a bunch of methods that seem logical to go together. We suggest you try to test one thing with each method.

  4. The methods of TestAnswer have docstrings, while the methods on TestSchool do not. For more verbose output modes, green will use the method docstring to describe the test if it is present, and the name of the method if it is not. Notice the difference in the output below.
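The docstring-versus-name behavior can be observed with plain unittest as well: shortDescription() returns the first line of a test method's docstring, or None when there isn't one. A small illustrative sketch (green's exact internals may differ):

```python
# unittest exposes the docstring-or-nothing lookup that verbose runners
# rely on: shortDescription() returns the first line of the docstring,
# or None when the test method has no docstring.
import unittest

class TestAnswer(unittest.TestCase):
    def test_expected(self):
        "answer() returns 42"

    def test_no_docstring(self):
        pass

print(TestAnswer("test_expected").shortDescription())      # the docstring
print(TestAnswer("test_no_docstring").shortDescription())  # None
```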

DocTests

Green can also run tests embedded in documentation via Python's built-in doctest module. Returning to our previous example, we could add docstrings with example code to our foo.py module:

```
def answer():
    """
    >>> answer()
    42
    """
    return 42


class School():
    def food(self):
        """
        >>> s = School()
        >>> s.food()
        'awful'
        """
        return 'awful'

    def age(self):
        return 300
```

Then in some test module you need to add a doctest_modules = [ ... ] list to the top level of the test module. So let's revisit test_foo.py and add that:

```
# we could add this to the top or bottom of the existing file...
doctest_modules = ['proj.foo']
```

Then running green -vv might include this output:

```
DocTests via `doctest_modules = [...]`
.   proj.foo.School.food
.   proj.foo.answer
```

...or with one more level of verbosity (green -vvv):

```
DocTests via `doctest_modules = [...]`
.   proj.foo.School.food -> /Users/cleancut/proj/green/example/proj/foo.py:10
.   proj.foo.answer -> /Users/cleancut/proj/green/example/proj/foo.py:1
```

Notes:

  1. There needs to be at least one unittest.TestCase subclass with a test method present in the test module for doctest_modules to be examined.
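Putting that requirement together, a minimal illustrative test module (with hypothetical names) might look like the following sketch:

```python
# Illustrative minimal test module: doctest_modules is only examined when
# at least one unittest.TestCase with a test method exists alongside it.
import unittest

doctest_modules = ['proj.foo']  # modules whose doctests green should run

class TestPlaceholder(unittest.TestCase):
    def test_placeholder(self):
        "a real test method, so green examines doctest_modules"
        self.assertTrue(True)
```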

Running Green

To run the unittests, we would change to the parent directory of the project (/home/user in this example) and then run green proj.

In a real terminal, this output is syntax highlighted

```
$ green proj
....

Ran 4 tests in 0.125s using 8 processes

OK (passes=4)
```

Okay, so that's the classic short-form output for unit tests. Green really shines when you start getting more verbose:

In a real terminal, this output is syntax highlighted

```
$ green -vvv proj
Green 4.1.0, Coverage 7.4.1, Python 3.12.2

test_foo
  TestAnswer
.   answer() returns 42
.   answer() returns an integer
  TestSchool
.   test_age
.   test_food

Ran 4 tests in 0.123s using 8 processes

OK (passes=4)
```

Notes:

  1. Green outputs clean, hierarchical output.

  2. Test status is aligned on the left (the four periods correspond to four passing tests).

  3. Method names are replaced with docstrings when present. The first two tests have docstrings you can see.

  4. Green always outputs a summary of statuses that will add up to the total number of tests that were run. For some reason, many test runners forget about statuses other than Error and Fail, and even the built-in unittest runner forgets about passing ones.

  5. Possible values for test status (these match the unittest short status characters exactly):

  • . Pass
  • F Failure
  • E Error
  • s Skipped
  • x Expected Failure
  • u Unexpected pass
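These statuses map directly onto unittest's own result categories, which track skips and expected failures separately from passes and failures. A short sketch (illustrative class names) showing the stdlib counting each bucket:

```python
# unittest.TestResult keeps separate buckets for skipped tests and
# expected failures, which is where the 's' and 'x' statuses come from.
import unittest

class Demo(unittest.TestCase):
    def test_pass(self):
        pass

    @unittest.skip("demonstrating the 's' status")
    def test_skip(self):
        pass

    @unittest.expectedFailure
    def test_expected_failure(self):
        self.fail("demonstrating the 'x' status")

result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(Demo).run(result)
print(result.testsRun, len(result.skipped), len(result.expectedFailures))
# 3 1 1
```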

Origin Story

Green grew out of a desire to see pretty colors. Really! A big part of the whole Red/Green/Refactor process in test-driven development is actually getting to see red and green output. Most python unit testing actually goes Gray/Gray/Refactor (at least on my terminal, which is gray text on black background). That's a shame. Even TV is in color these days. Why not terminal output? Even worse, the default output for most test runners is cluttered, hard-to-read, redundant, and the dang status indicators are not lined up in a vertical column! Green fixes all that.

But how did Green come to be? Why not just use one of the existing test runners out there? It's an interesting story, actually. And it starts with trial.

trial

I really like Twisted's trial test runner, though I don't really have any need for the rest of the Twisted event-driven networking engine library. I started professionally developing in Python when version 2.3 was the latest, greatest version and none of us in my small shop had ever even heard of unit testing (gasp!). As we grew, we matured and started testing and we chose trial to do the test running. If most of my projects at my day job hadn't moved to Python 3, I probably would have just stuck with trial, but at the time I wrote green trial didn't run on Python 3 (but since 15.4.0 it does). Trial was and is the foundation for my inspiration for having better-than-unittest output in the first place. It is a great example of reducing redundancy (report module/class once, not on every line), lining up status vertically, and using color. I feel like Green trumped trial in two important ways: 1) It wasn't a part of an immense event-driven networking engine, and 2) it was not stuck in Python 2 as trial was at the time. Green will obviously never replace trial, as trial has features necessary to run asynchronous unit tests on Twisted code. After discovering that I couldn't run trial under Python 3, I next tried...

nose

I had really high hopes for nose. It seemed to be widely accepted. It seemed to be powerful. The output was just horrible (exactly the same as unittest's output). But it had a plugin system! I tried all the plugins I could find that mentioned improving upon the output. When I couldn't find one I liked, I started developing Green (yes, this Green) as a plugin for nose. I chose the name Green for three reasons: 1) It was available on PyPI! 2) I like to focus on the positive aspect of testing (everything passes!), and 3) It made a nice counterpoint to several nose plugins that had "Red" in the name. I made steady progress on my plugin until I hit a serious problem in the nose plugin API. That's when I discovered that nose is in maintenance mode -- abandoned by the original developers, handed off to someone who won't fix anything if it changes the existing behavior. What a downer. Despite the huge user base, I already consider nose dead and gone. A project which will not change (even to fix bugs!) will die. Even the maintainer keeps pointing everyone to...

nose2

So I pivoted to nose2! I started over developing Green (same repo -- it's in the history). I can understand the allure of a fresh rewrite as much as the next guy. Nose had made less-than-ideal design decisions, and this time they would be done right! Hopefully. I had started reading nose code while writing the plugin for it, and so I dived deep into nose2. And ran into a mess. Nose2 is alpha. That by itself is not necessarily a problem, if the devs will release early and often and work to fix things you run into. I submitted a 3-line pull request to fix some problems where the behavior did not conform to the already-written documentation, which broke my plugin. The pull request wasn't initially accepted because I (ironically) didn't write unit tests for it. This got me thinking "I can write a better test runner than this". I got tired of the friction of dealing with nose/nose2 and decided to see what it would take to write my own test runner. That brought me to...

unittest

I finally went and started reading unittest (Python 2.7 and 3.4) source code. unittest is its own special kind of mess, but it's universally built-in, and most importantly, subclassing or replacing unittest objects to customize the output looked a lot easier than writing a plugin for nose and nose2. And it was, for the output portion! Writing the rest of the test runner turned out to be quite a project, though. I started over on Green again, starting down the road to what we have now: a custom runner that subclasses or replaces bits of unittest to provide exactly the output (and other feature creep) that I wanted.

I had three initial goals for Green:

  1. Colorful, clean output (at least as good as trial's)
  2. Run on Python 3
  3. Try to avoid making it a huge bundle of tightly-coupled, hard-to-read code.

I contend that I nailed 1 and 2, and ended up implementing a bunch of other useful features as well (like very high performance via running tests in parallel in multiple processes). Whether I succeeded with 3 is debatable. I continue to try to refactor and simplify, but adding features on top of a complicated bunch of built-in code doesn't lend itself to the flexibility needed for clear refactors.

Wait! What about the other test runners?

  • pytest -- Somehow I never realized pytest existed until a few weeks before I released Green 1.0. Nowadays it seems to be pretty popular. If I had discovered it earlier, maybe I wouldn't have made Green! Hey, don't give me that look! I'm not omniscient!

  • tox -- I think I first ran across tox only a few weeks before I heard of pytest. Its homepage didn't mention anything about color, so I didn't try using it.

  • the ones I missed -- Er, haven't heard of them yet either.

I'd love to hear your feedback regarding Green. Like it? Hate it? Have some awesome suggestions? Whatever the case, go open a discussion.

