After a full day of playful tutorials on Monday, it was back to business on Tuesday. The actual conference kicked off with an interesting keynote by Lisa Crispin. Lisa is an author/agile tester extraordinaire/donkey aficionado with Belgian roots – a winning combination if you ask me. Her talk, “Agile defect management”, was all about finding a suitable approach to managing and tracking defects in agile projects. Lisa used the limbo analogy for defects, stating that we should strive to lower the bar on defects. I liked the analogy – but had a hard time dissociating it from all the alcoholic connotations of a classic limbo-fest, where the bar is generally lowered until the first drunkard ruptures his anterior cruciate ligaments. I think it’s time to groom my unconscious backlog a little.
In the agile/lean world, using a defect tracking system (DTS) is generally seen as wasteful, since agile teams strive for ‘zero defects’. Instead of filing bugs in a DTS, they prefer to fix the problem immediately by creating an automated test for that defect and adding it to the unit test suite.
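To make that practice concrete: such a defect-pinning test is just an ordinary unit test that encodes the failing input, so the bug can never silently return. A minimal sketch in Java (the invoice rounding example and all names are my own invention, not from Lisa’s talk):

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical example: a rounding defect found during development.
// Instead of logging it in a DTS, the team fixes it right away and
// pins the fix with a regression test in the unit test suite.
class InvoiceRoundingRegressionTest {

    @Test
    void totalRoundsHalfUpToTwoDecimals() {
        Invoice invoice = new Invoice();
        invoice.addLine(10.005); // the exact input that exposed the defect
        assertEquals(10.01, invoice.total(), 0.0001);
    }

    // Minimal production class, inlined so the sketch is self-contained.
    static class Invoice {
        private double sum = 0;

        void addLine(double amount) {
            sum += amount;
        }

        double total() {
            // The fix: round half up to two decimals instead of truncating.
            return java.math.BigDecimal.valueOf(sum)
                    .setScale(2, java.math.RoundingMode.HALF_UP)
                    .doubleValue();
        }
    }
}
```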
I particularly liked Lisa’s “tabula rasa” idea: try to start your project *without* a defect tracking system and see what you need as you progress. Set some rules like “no more than 10 bugs at the same time” and fix the important bugs immediately. You could even use a combination of a DTS and defect cards on a board. Use the DTS for defects in production and defect cards on the story board for defects in development.
The next track I attended was “Incremental Scenario Testing: Beyond Exploratory Testing” by Matthias Ratert. He started off by explaining that they had performed exploratory testing on their project, and that it was helpful for the first two or three test sessions. After that, it became increasingly difficult for the testers to come up with new and creative test ideas, and too many areas of the complex system remained untested.
My exploratory tester heart was bleeding at first because they dismissed exploratory testing so quickly. When I heard that they were using unskilled, untrained and outsourced labor in the form of students, without experience and/or motivation to continue in this line of work, it all made sense. No wonder the exploratory testing yielded sub-par results.
In order to cope with the testers’ lack of imagination and sense of coverage, they developed a tool (the IST tool) to do Incremental Scenario Testing (IST). The tool automatically generated test scenarios as a starting point, composed of preconditions, states and events, and was tweakable through all kinds of parameters to suit different contexts. The testers would still have the freedom to test in an exploratory fashion within these predefined areas, without stated expected results. The tool could be configured so that important areas would appear more often in the testers’ selected scenarios.
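As an illustration of that weighting idea – my own sketch with invented scenario names and weights, the real IST tool being far richer – important scenarios can simply be drawn with a probability proportional to a configured weight:

```java
import java.util.List;
import java.util.Random;

// Toy sketch of weighted scenario selection in the spirit of the IST
// tool as described in the talk -- not the actual tool. The scenarios,
// fields and weights below are invented for illustration only.
public class ScenarioPicker {

    record Scenario(String precondition, String state, String event, int weight) {}

    private final Random random = new Random();

    // Pick a scenario with probability proportional to its weight, so
    // important areas show up more often in the testers' sessions.
    Scenario pick(List<Scenario> scenarios) {
        int totalWeight = scenarios.stream().mapToInt(Scenario::weight).sum();
        int roll = random.nextInt(totalWeight);
        for (Scenario s : scenarios) {
            roll -= s.weight();
            if (roll < 0) return s;
        }
        throw new IllegalStateException("unreachable when totalWeight > 0");
    }

    public static void main(String[] args) {
        List<Scenario> scenarios = List.of(
            new Scenario("SIM inserted", "idle", "incoming call", 5),
            new Scenario("roaming", "in call", "handover", 3),
            new Scenario("no network", "idle", "send SMS", 1));
        System.out.println(new ScenarioPicker().pick(scenarios));
    }
}
```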
The tool as such sounded like a good solution for generating different scenarios and for spreading and dividing work to achieve better coverage, but in my opinion it will not solve their initial problem: they were still letting unskilled and unmotivated people perform exploratory testing, which is known to be a highly skilled, brain-engaged activity. Replacing them was apparently not an option for budgetary reasons. But why not try to train them into first-class ET guerrilleros first?
For the last morning session I chose to attend “One small change to code, one giant leap towards testability” by the lively and ubiquitous Brett Schuchert (presenter of two track sessions and the open space facilitator on Thursday – how much more omnipresent can you get?). His topic was mainly technical – how to design for testability. He used the example of a dice game in which the rolling of the dice was a factor beyond our control. In order to make the design testable, we should use dependency injection: create two loaded dice with a predictable result, and feed them into the game.
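Here is a minimal sketch of that idea – my own reconstruction with assumed class and method names, not Brett’s actual code:

```java
import java.util.Random;

// Sketch of designing for testability through dependency injection,
// after the dice-game example. All names here are my own guesses.
interface Die {
    int roll();
}

// Production die: genuinely random, so its outcome is beyond our control.
class RandomDie implements Die {
    private final Random random = new Random();
    public int roll() { return 1 + random.nextInt(6); }
}

// Loaded die for tests: always lands on the chosen face.
class LoadedDie implements Die {
    private final int face;
    LoadedDie(int face) { this.face = face; }
    public int roll() { return face; }
}

class DiceGame {
    private final Die first, second;

    // The dice are injected rather than created inside the game,
    // so a test can pass in loaded dice and predict the outcome.
    DiceGame(Die first, Die second) {
        this.first = first;
        this.second = second;
    }

    boolean playerWins() { return first.roll() + second.roll() == 7; } // toy rule
}

class DiceGameTest {
    public static void main(String[] args) {
        DiceGame rigged = new DiceGame(new LoadedDie(3), new LoadedDie(4));
        System.out.println(rigged.playerWins() ? "predictable win: 3 + 4 = 7"
                                               : "test failed");
    }
}
```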
Schuchert’s inner showmaster came to the surface when he threw some Jeopardy-style quotes at us to illustrate the importance of Test-Driven Development – a design practice rather than a testing practice:
The answer is 66%. What was the question?
“What is the chance that a one-line defect fix will introduce another defect?” (Jerry Weinberg)
After a copious lunch, ‘Fearless Change’-author Linda Rising inspired the big auditorium with her keynote “Deception and Estimation: How We Fool Ourselves“. She defined deception as consciously or unconsciously leading another or yourself to believe something that is not true. Her main message was that we constantly deceive ourselves and others. She illustrated her point with the typical marrying couple at the altar. Although current studies indicate that chances for a marriage to succeed are only 50-50, this knowledge doesn’t keep anyone from getting married. I didn’t know these odds when I decided to get married. I like to think it wouldn’t have made a difference – I’ve always liked to defy statistics.
Us humans, we are a strange lot. We are hardwired to be optimistic. We see what we want to see, so we unconsciously filter out the things we dislike. After all, we fear what we cannot control. And it’s here that estimations come into play. Our hardwiring biases our estimations – we constantly overestimate our ability to do things: coding, testing, everything. And do we ever learn from our mistakes? But we shouldn’t be too overwhelmed by this; there’s hope: achieving good-enough estimates isn’t totally impossible, if we just take small enough steps, experiment, and learn from failures as well as successes.
Software Testing Club busybee Rob Lambert provided some very good food for thought in his talk “Structures Kill Testing Creativity“. I don’t know how deliberate it was, but he did this really cool thing of standing at the door outside the room and greeting people as they came in. It certainly made me feel welcome from the beginning. He was also the first person up till then that I saw using Prezi. A kindred spirit! I sat back and enjoyed the show. The main point of his presentation was that in order to foster creativity, we need *some* structure (he used the example of a sonnet, which has *some* predefined rules to it), but that imposing excessive structure upon people and teams will suffocate creativity (my wording, not his). Rob defined creativity through the equation:
Expertise + Motivation + Imagination = Creativity
Rob then tried the Purdue creativity test on his audience – he asked us to draw the person sitting next to us in a mere 30 seconds, which led to some hilarious results (you can check some of the drawings on his blog – the speed-portrait of Lisa Crispin by the artist formerly known as Ruud Cox is pretty mind-blowing). The point of the exercise was to show that when we have to share creative ideas, we constantly self-edit. We feel shy and embarrassed, even more so if the environment doesn’t feel safe to us. True. It struck me that almost everyone was apologizing for their bad portraits afterwards. Excessively structured environments don’t make a good breeding ground for creativity.
Rob told a couple of personal stories about people who actively pursued creative environments for the better. Marlena Compton moved to Australia to work at Atlassian, a cool company without excessive structures in place. He talked about Trish Koo, who works at Campaign Monitor, a company that is apparently all about people. He mentioned Pradeep Soundararajan, who started using a video camera to film testing in progress to make the testing language more portable and universal.

Dynamic? Peppy? Is there a stronger word than energetic? If so, it would describe the keynote by compelling storyteller Elisabeth Hendrickson. The title of her talk rang a little bell: “Lessons Learned from 100+ Simulated Agile Transitions“. The main subject was indeed the infamous WordCount experiment, which she has done numerous times, including her tutorial on day 1 of the conference (you can find my write-up for that here). Because of non-disclosure agreements, she couldn’t use actual pictures, so she used stunt hamsters to illustrate her point. Throughout her talk, it was nice to see that our WordCount group from the day before was no bunch of forgettable dilettantes. It was déjà-vu all over the place:
- Round 1. The computer is bored (check)
- Round 2. Chaos (double check)
- Round 3. Structure (triple check, at least the beginning of structure)
- Round 4. Running on all cylinders (quadruple check)
The lessons learned: teams that struggle typically
- Hold on tight to rules, silos and work areas
- Have everything in progress and get nothing done
- Sit in meetings constantly instead of creating visibility
- Fail to engage effectively with the customer
Teams that succeed generally:
- Drive development with customer-provided examples
- Seek customer feedback early and often
- Re-shape their physical environment
- Re-invent key agile engineering practices like ATDD and CI
The resemblance to what our team had gone through the day before was striking. I was still pondering that over dinner when it hit me that it would be my turn to perform the next day. By that time, the stage in the dining hall had been taken over by overly enthusiastic improv actors, but I wasn’t really in the mood for that kind of entertainment. I obeyed the voice in my head. Must. Prepare. Prezi.
To be continued… Day 3
Thanks for the great summary 🙂
About the incremental scenario testing tool: I took a look at the tool in one of the breaks. My feedback to the author was that he did really bad marketing for his talk, because the tool is really usable. In my opinion it is not the unskilled testers that require such a tool; it is rather the insanely large testing space that needs to be somewhat covered. In their case, they have hundreds of mobiles, a couple of service providers, different software versions, attached devices, and so forth. I think there is a market for a tool that intelligently covers the space that needs to be tested. This was not how he presented it, though.
Andreas
Hi both,
thanks a lot for your comments on my Incremental Scenario Testing!
I agree, something has to be wrong with my marketing – but maybe the main reason is that I’m a developer and not a marketing guy 😉
But I will try to improve it further…
Andreas, thanks also for explaining the idea and the potential use cases for it. You are right, the main idea is to manage the complexity, to ensure a good test coverage especially when facing time and budget pressure and finally, to find the critical bugs as early as possible.
And it was a great tool to TRAIN our people. While being guided by the scenarios, they learned a lot. The tool forced the testers to be proactive, and it was fun for them to use.
Just stay tuned for more and better marketing activities 😉
Cheers,
Matthias