PythonBytes.fm

Michael Kennedy from Talk Python to Me and I have launched a new podcast called Python Bytes: “Python headlines delivered directly to your earbuds.” It’s a weekly short-format podcast. Please check it out. The first few weeks of a podcast can really make a difference if we can get a bunch of listeners to try it right away. Please also consider leaving a review on iTunes. Even if you don’t use iTunes to listen, early reviews can really help with visibility....

November 10, 2016 · 1 min · Brian

Transcript for episode 2: Pytest vs Unittest vs Nose

This is the transcript for Test & Code Podcast episode 2. Hello everyone. My name is Brian Okken. Welcome to the Python Test Podcast (now called “Test & Code”). Today I want to talk about choosing a test framework. When I went to look at the different frameworks, I noticed that unittest and doctest are part of the standard library, while nose and pytest were also used quite a bit, so I figured those four were good ones to look into....

September 25, 2016 · 10 min · Brian

Python featured in April issue of PragPub

PragPub April 2016, featuring Python (and me). PragPub is the digital magazine put out by Pragmatic Bookshelf, Michael Swaine, and Nancy Groth. I’m especially excited about this issue because I have two articles featured in it. I mostly know Michael from many years of reading Dr. Dobb’s, and I respect Pragmatic Bookshelf for their work in technical publishing, so I was thrilled to be asked to contribute. From the Contents page:...

April 6, 2016 · 1 min · Brian

Given-When-Then

Designing your test methods using a simple structure such as given-when-then will help you: communicate the purpose of your test more clearly, focus your thinking while writing the test, make test writing faster, make it easier to reuse parts of your test, highlight the assumptions you are making about test preconditions, and highlight the outcomes you are expecting and testing against. In this post I’ll be talking about designing your test cases/test methods using given-when-then. It doesn’t matter if you are using pytest, unittest, nose, or something completely different; this post will help you write better tests. Note: This was originally a writeup done after the Python Test Podcast episode 10. However, I think it stands pretty well on its own as a post. ...
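The given-when-then layout works in any framework. Here is a minimal sketch, with an illustrative toy function and test name that are not from the original post:

```python
# Toy function under test: append an item to a shopping cart.
def add_item(cart, item):
    cart.append(item)
    return cart

def test_add_item_to_cart():
    # Given: an empty cart
    cart = []

    # When: we add one item
    add_item(cart, "book")

    # Then: the cart contains exactly that item
    assert cart == ["book"]
```

The comments are optional once the habit sticks; the point is that each test reads as setup, action, and expected outcome, in that order.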

February 10, 2016 · 11 min · Brian

pytest-expect code now in a github repo

I’ve made a few changes to the pytest-expect fixture plugin. I’ve put the plugin code on GitHub, https://github.com/okken/pytest-expect. It is rearranged to be a plugin installable with pip, although I don’t have it on PyPI yet. I’ve modified the code to use pytest 2.7.0’s @pytest.mark.hookwrapper. I incorporated Bruno’s feedback from the last post to allow both assert failures and expect failures to be reported in the same test. There’s a tests directory to test the plugin....

March 31, 2015 · 1 min · Brian

pytest expect fixture plugin, iteration 1

This is the first iteration that implements ‘expect’ as a fixture. It is really the third attempt at an ‘expect()’ implementation that allows multiple failures per test. The first attempt was a general solution that works with any test framework, but with a slightly clunky API. The main problem with it was that it required the test code to call a final ‘assert_expectations()’. If you forgot to call that function, the failures weren’t reported. The second attempt was a pytest plugin implementation that eliminated the need for the ‘assert_expectations()’ call in the test because it was called automatically. I wasn’t thrilled with that solution, but it works. In the solution I’m presenting in this post, I’m moving all of the code into one file and implementing ‘expect’ as a pytest fixture. ...
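A rough sketch of the idea behind the fixture version: collect failures instead of raising, then fail the test once at teardown. The names here are illustrative, not the actual plugin code (which lives at github.com/okken/pytest-expect):

```python
class Expect:
    """Collects expectation failures instead of raising immediately."""

    def __init__(self):
        self.failures = []

    def __call__(self, condition, msg=""):
        # Record the failure and keep going, unlike a plain assert.
        if not condition:
            self.failures.append(msg or "expectation failed")

    def report(self):
        # In the fixture version, this runs automatically at teardown,
        # so the test never has to remember to call it.
        if self.failures:
            raise AssertionError(
                "{} expectation(s) failed:\n".format(len(self.failures))
                + "\n".join(self.failures)
            )

# Roughly how it would be wired up as a fixture (requires pytest):
#
# @pytest.fixture
# def expect():
#     e = Expect()
#     yield e
#     e.report()
```

The yield-then-report pattern is what removes the forgettable final call: pytest resumes the fixture after the test body, whether the test remembered anything or not.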

March 10, 2015 · 3 min · Brian

Test First Programming / Test First Development

Occasionally referred to as Test First Development, Test First Programming is a beautiful concept that radically changed the way I approach software development. The ideas of Test First Programming and Test Driven Development are often muddled together. However, Test First is powerful enough to stand on its own, and I think it’s important to present the concepts separately. TDD and many other agile practices build on Test First. This isn’t just about remembering the past; the lessons learned from Test First are still very important. ...

March 3, 2015 · 8 min · Brian

pytest delayed assert / multiple failure plugin, iteration 1

In Delayed assert / multiple failures per test, I presented a first attempt at writing an ‘expect()’ function that will allow a test function to collect multiple failures and not stop execution until the end of the test. There’s one big thing about that method that I don’t like: having to call ‘assert_expectations()’ within the test. It would be cool to push that part into a plugin. So, even though this isn’t the prettiest code, here’s a first attempt at making this a plugin. Contents:
- Test code that uses expect()
- Local conftest.py plugin for delayed assert
- Changes to delayed_assert.py
- Seeing it in action
- Possible issues and things I don’t like
- Alternative solutions
- Next Steps
...

February 19, 2015 · 4 min · Brian

Delayed assert / multiple failures per test

A test stops execution once it hits a failing assert statement. That’s kinda the point of an assert statement, so that’s not surprising. However, sometimes it’s useful to continue with the test even after a failing assert. I’m going to present one method for getting around this restriction, to test multiple things, allow multiple failures per test, and continue execution after a failure. I’m not going to describe the code in detail, but I will give the full source so that you can take it and run with it. Contents:
- Reasons for multiple assert statements without stopping execution
- Using a failure list to keep track of failures within a test
- Example test code that uses the delayedAssert module
- An example for unittest
- The output for unittest
- The output for pytest
- The output for nose
- The delayedAssert.py module
- Feedback welcome
...

February 13, 2015 · 6 min · Brian

perspectives, opinions, dogma, and an elephant

I had assumed that everyone had heard the story about the blind men and the elephant. However, in a very non-scientific poll of a handful of fellow engineers at my day job, only about half had. So I was going to quote it here, but when I looked up a reference for it, I came across a joke that amused the pants off me. So here’s the joke: Six blind elephants were discussing what men were like. After arguing, they decided to find one and determine what it was like by direct experience. The first blind elephant felt the man and declared, “Men are flat.” After the other blind elephants felt the man, they agreed. Moral: “We have to remember that what we observe is not nature in itself, but nature exposed to our method of questioning.” (Werner Heisenberg, via his Wikipedia entry.) Well, I thought it was funny. Trust me that this ties in with software development and testing. ...

November 7, 2014 · 4 min · Brian