This is the first iteration that implements ‘expect’ as a fixture.

This is really the third attempt at an ‘expect()’ implementation that allows multiple failures per test.

  1. The first attempt was a general solution that worked with any test framework, but it had a slightly clunky API: the test code had to call a final ‘assert_expectations()’. If you forgot that call, the failures weren’t reported.
  2. The second attempt was a pytest plugin that eliminated the explicit ‘assert_expectations()’ call by making it automatically. I wasn’t thrilled with that solution, but it works.
  3. In the solution I’m presenting in this post, I’m moving all of the code into one file and implementing ‘expect’ as a pytest fixture.
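For context, the first attempt’s API looked roughly like this. The class and names below are illustrative, not the exact original code:

```python
class Expect(object):
    """Framework-agnostic expectation collector (illustrative sketch,
    not the exact code from the first attempt)."""

    def __init__(self):
        self._failures = []

    def __call__(self, expr, msg=''):
        # record the failure instead of raising immediately
        if not expr:
            self._failures.append(msg or 'expectation failed')

    def assert_expectations(self):
        # the call that's easy to forget -- the motivation for a fixture
        if self._failures:
            raise AssertionError('%d failed expectations:\n%s' %
                                 (len(self._failures), '\n'.join(self._failures)))

# usage in a test, with any framework:
#   expect = Expect()
#   expect(1 == 2, 'one is two')
#   expect.assert_expectations()   # forget this, and nothing is reported
```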

I think there are advantages to having ‘expect’ as a fixture.
Let me know what you think.

For now, it’s implemented as a local plugin in ‘conftest.py’.
I’ll implement it as an installable plugin in a future iteration.

Implementation of expect fixture as a local plugin

conftest.py
```python

import pytest
import inspect
import os.path

@pytest.fixture
def expect(request):
    def do_expect(expr, msg=''):
        if not expr:
            _log_failure(request.node, msg)
    return do_expect

def _log_failure(node, msg=''):
    # stack()[2] is the test code that called expect()
    (filename, line, funcname, contextlist) = inspect.stack()[2][1:5]
    filename = os.path.basename(filename)
    context = contextlist[0]
    # format the failure entry
    msg = '%s\n' % msg if msg else ''
    entry = '>%s%s%s:%s\n--------' % (context, msg, filename, line)
    # add the entry to the node's list of failed expectations
    if not hasattr(node, '_failed_expect'):
        node._failed_expect = []
    node._failed_expect.append(entry)

@pytest.mark.tryfirst
def pytest_runtest_makereport(item, call, __multicall__):
    report = __multicall__.execute()
    if (call.when == "call") and report.passed and hasattr(item, '_failed_expect'):
        report.outcome = "failed"
        summary = 'Failed Expectations:%s' % len(item._failed_expect)
        item._failed_expect.append(summary)
        report.longrepr = '\n'.join(item._failed_expect)
    return report

```

Test code that uses the ‘expect’ fixture.

test_expect_fixture.py
```python

def test_should_pass(expect):
    expect(1 == 1, 'one is one')

def test_should_fail(expect):
    expect(1 == 2)
    expect(3 == 4, 'three is four')

```

The output

I think the output is identical to the last iteration’s, but here it is anyway.

test run
```

$ python -m pytest test_expect_fixture.py 
==================================== test session starts ====================================
platform darwin -- Python 2.7.9 -- py-1.4.26 -- pytest-2.6.4
collected 2 items 

test_expect_fixture.py .F

========================================= FAILURES ==========================================
_____________________________________ test_should_fail ______________________________________
>    expect(1 == 2)
test_expect_fixture.py:7
--------
>    expect(3 == 4, 'three is four')
three is four
test_expect_fixture.py:8
--------
Failed Expectations:2
============================ 1 failed, 1 passed in 0.01 seconds =============================

```

Next

  1. Make this an installable pytest plugin and package and such.
  2. Push it to github or bitbucket.
  3. Implement something like pytest’s assert argument re-evaluation.
  4. Perhaps implement something like assertion rewriting. Again, I’ll probably have to beg and plead with some core pytest devs to assist with this, because the AST stuff scares me.
  5. At some point I’m going to have to implement it without the ‘pytest_runtest_makereport(item, call, __multicall__)’ hook signature, as I believe ‘__multicall__’ is going to be deprecated.
  6. Maybe submit it to PyPI, if it’s useful for other people.
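On item 3: one lightweight way to report the failing expression without AST rewriting is to pass a zero-argument callable and pull its source with ‘inspect’. A rough sketch of the idea; the function name and details here are hypothetical, not part of the plugin above:

```python
import inspect

def expect_lazy(predicate, msg='', failures=None):
    """Evaluate a zero-argument callable; on failure, record the
    expression's source text. Hypothetical sketch, not plugin code."""
    if failures is None:
        failures = []
    if not predicate():
        try:
            # works when the callable was defined in a real source file
            source = inspect.getsource(predicate).strip()
        except (OSError, IOError, TypeError):
            source = '<source unavailable>'
        failures.append('%s\n%s' % (source, msg) if msg else source)
    return failures

# e.g. expect_lazy(lambda: 1 == 2, 'one is not two') records one failure
```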

Feedback

I received some great feedback on both previous posts. I’m incorporating suggestions from Bruno and Ronny in this one.

Please provide feedback on this iteration as well.