Python featured in April issue of PragPub

PragPub April 2016 featuring Python (and me) PragPub is the digital magazine put out by Pragmatic Bookshelf, Michael Swaine, and Nancy Groth. I’m especially excited about it because I have two articles featured. I mostly know Michael from the many years of reading Dr Dobb’s. And I respect Pragmatic Bookshelf for their work in technical publishing. So I was thrilled to be asked to contribute. From the Contents page:...

April 6, 2016 · 1 min · Brian

Given-When-Then

Designing your test methods using a simple structure such as given-when-then will help you:

- Communicate the purpose of your test more clearly
- Focus your thinking while writing the test
- Make test writing faster
- Make it easier to re-use parts of your test
- Highlight the assumptions you are making about the test preconditions
- Highlight what outcomes you are expecting and testing against

In this post I'll be talking about designing your test cases/test methods using given-when-then. It doesn't matter if you are using pytest, unittest, nose, or something completely different; this post will help you write better tests. Note: This was originally a write-up done after Python Test Podcast episode 10. However, I think it stands pretty well on its own as a post. ...
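The structure above can be sketched in a short test. This is a minimal illustration, not code from the post; the `Cart` class is a hypothetical stand-in invented here so the test has something to check.

```python
# A minimal sketch of a test structured as given-when-then.
# Cart is a hypothetical class created only for this example.

class Cart:
    """A tiny shopping cart so the test has something to exercise."""

    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_cart_total():
    # Given: a cart holding two items
    cart = Cart()
    cart.add("apple", 3)
    cart.add("pear", 2)

    # When: the total is computed
    total = cart.total()

    # Then: the total is the sum of the item prices
    assert total == 5
```

Each comment marks one of the three sections, so a reader can see the preconditions, the action under test, and the expected outcome at a glance.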

February 10, 2016 · 11 min · Brian

pytest-expect code now in a github repo

I’ve made a few changes to the pytest-expect fixture plugin. I’ve put the plugin code on github, https://github.com/okken/pytest-expect. It is rearranged to be a plugin installable with pip, although I don’t have it on PyPI yet. I’ve modified the code to use pytest 2.7.0’s @pytest.mark.hookwrapper. I incorporated Bruno’s feedback from the last post to allow both assert failures and expect failures to be reported in the same test. There’s a tests directory to test the plugin....

March 31, 2015 · 1 min · Brian

pytest expect fixture plugin, iteration 1

This is the first iteration that implements ‘expect’ as a fixture. It’s really the third attempt at an ‘expect()’ implementation that allows multiple failures per test. The first attempt was a general solution that works with any test framework, but with a slightly clunky API. The main problem with it was that it required the test to call a final ‘assert_expectations()’ from the test code. If you forgot to call that function, the failures weren’t reported. The second attempt was a pytest plugin implementation that eliminated the need for the ‘assert_expectations()’ call in the test because it was called automatically. I wasn’t thrilled with that solution, but it works. In the solution I’m presenting in this post, I’m moving all of the code into one file and implementing ‘expect’ as a pytest fixture. ...
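A rough sketch of what an ‘expect’ fixture of this shape might look like. This is my own simplified illustration, not the plugin’s actual code; the `ExpectationContext` name and structure are assumptions. The key idea is that a finalizer runs after the test body and raises once if any expectations failed, so the test never has to call `assert_expectations()` itself.

```python
import pytest


class ExpectationContext:
    """Collects expectation failures instead of raising immediately."""

    def __init__(self):
        self.failures = []

    def __call__(self, condition, message=""):
        # Record the failure and keep going; nothing raises here.
        if not condition:
            self.failures.append(message or "expectation failed")


@pytest.fixture
def expect(request):
    context = ExpectationContext()

    def check():
        # Runs after the test body; one AssertionError reports everything.
        if context.failures:
            raise AssertionError("\n".join(context.failures))

    request.addfinalizer(check)
    return context
```

A test would then take `expect` as a parameter and call `expect(a == b, "message")` as many times as it likes; all failures surface together when the fixture finalizes.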

March 10, 2015 · 3 min · Brian

Test First Programming / Test First Development

Occasionally referred to as Test First Development, Test First Programming is a beautiful concept that radically changed the way I approach software development. The ideas of Test First Programming and Test Driven Development are often muddled together. However, Test First is powerful enough to stand on its own. I think it’s important to present the concepts separately. TDD and many other agile practices build on Test First. This isn’t just about remembering the past. The lessons learned from Test First are still very important. ...

March 3, 2015 · 8 min · Brian

pytest delayed assert / multiple failure plugin, iteration 1

In Delayed assert / multiple failures per test, I presented a first attempt at writing an ‘expect()’ function that will allow a test function to collect multiple failures and not stop execution until the end of the test. There’s one big thing about that method that I don’t like. I don’t like having to call ‘assert_expectations()’ within the test. It would be cool to push that part into a plugin. So, even though this isn’t the prettiest code, here’s a first attempt at making this a plugin.

- Test code that uses expect()
- Local conftest.py plugin for delayed assert
- Changes to delayed_assert.py
- Seeing it in action
- Possible issues and things I don’t like
- Alternative solutions
- Next Steps

...

February 19, 2015 · 4 min · Brian

Delayed assert / multiple failures per test

A test stops execution once it hits a failing assert statement. That’s kinda the point of an assert statement, though, so that’s not surprising. However, sometimes it’s useful to continue with the test even with a failing assert. I’m going to present one method for getting around this restriction, to test multiple things, allow multiple failures per test, and continue execution after a failure. I’m not really going to describe the code in detail, but I will give the full source so that you can take it and run with it.

- Reasons for multiple assert statements and not stopping execution
- Using a failure list to keep track of failures within a test
- Example test code that uses the delayedAssert module
- And an example for unittest
- The output for unittest
- The output for pytest
- The output for nose
- The delayedAssert.py module
- Feedback welcome

...

February 13, 2015 · 6 min · Brian

perspectives, opinions, dogma, and an elephant

I had assumed that everyone has heard the story about the blind men and the elephant. However, in a very non-scientific poll of a handful of fellow engineers at my day job, only about half had. So I was going to try to quote it here, but when I looked up a reference for it, I came across a joke that amused the pants off me. So here’s the joke: Six blind elephants were discussing what men were like. After arguing, they decided to find one and determine what it was like by direct experience. The first blind elephant felt the man and declared, ‘Men are flat.’ After the other blind elephants felt the man, they agreed. Moral: “We have to remember that what we observe is not nature in itself, but nature exposed to our method of questioning.” - Werner Heisenberg (Wikipedia entry) Well. I thought it was funny. Trust me that this ties in with software development and testing. ...

November 7, 2014 · 4 min · Brian

Why Most Unit Testing is Waste

I don’t remember how I ran across this article by James O. Coplien. However, I was immediately impressed with the thought and experience that went into this paper. Regardless of your viewpoints towards unit tests vs. other types of automated tests, this article is important to read. If your first reaction to the title is anger, please take a deep breath, try to keep an open mind, and actually READ what Cope has to say. I am going to reserve my own reactions for a future post, as I don’t want to color your views before you read it. I am posting the entire article with no changes other than formatting. ...

August 1, 2014 · 30 min · Brian

My reaction to “Is TDD Dead?”

Whatever your stance on the merits or pitfalls of Test Driven Development, I think it’s worthwhile and educational to pay attention to a discussion that’s going on lately. Testing is crucial. But is unit test focused TDD the right path? I care about the conversation about TDD because I see serious flaws in the conventional understanding of TDD. Much of the current view of TDD includes:

- Units are tested in isolation from the rest of the system.
- Unit tests are more important than any other form of testing.
- A “unit” is a class or a function. Nothing larger is a “unit”.
- If you test more than one class, that’s an integration test.
- If you test from the API with all resources, that’s a system test. Let QA deal with that later. Isn’t that exactly where waterfall failed?
- You can’t write any production code without a failing test.
- You have to write only one test at a time, and it must fail.
- Tests have to be fast. Therefore, they cannot touch hardware, the file system, other services, or a database.
- Tests should be short.

All of this rubs me the wrong way. I’ll get to my thoughts later, but my concern about this cemented view of TDD caused me to be very interested in the current talks. On to the discussion: I came in after the 2nd video, while doing research on Agile and TDD. I’m not sure if the order matters, but here’s a list of what I know about the discussions. ...

May 25, 2014 · 7 min · Brian