Rapid Software Testing – skilled software testing unleashed

Up to 11

From the outside, software testing looks like a steadily maturing profession. After all, there are certification schemes like ISTQB, CAT, IREB and QAMP (the one to rule them all), standards (ISO 29119) and companies reaching TMM (test maturity model) levels that – just like a Spinal Tap guitar amplifier – might one day even go up to 11. The number of employees that companies send off to get certified in a mere three days is soaring, and new certification programs are being created as we speak. Quick and easy. Multiple-choice exams for the win!

The reality, however, is that the field of software testing is torn between different “schools” of testing. You could see these schools as determined and persistent patterns of belief, speech and behaviour. This means that different people – all calling themselves “test professionals” – have vastly different ideas of what testing is all about. Even something as elementary as the definition of testing varies from “demonstration of fitness for purpose” to “questioning a product in order to evaluate it”, depending on who you talk to (for more info on the schools of software testing, I heartily recommend Brett Pettichord’s presentation on the subject).

And so it happens that different people think differently about “good” or “mature” software testing. I, for one, don’t believe in tester certification programs, at least not in the format they are in now and the way they are being used in the testing profession. The current business model is mainly designed to get as many people as possible certified within the shortest timeframe. Its prime focus is on certifiability, not on tester skill, and certainly not on the advancement of the craft. Advancement comes from sharing, rather than shielding.

Rapid Software Testing (RST)

So what are the options for a tester on a quest for knowledge and self-improvement? What is a budding tester to do?

I think there are valuable alternatives for people who are serious about becoming a world-class tester. One of these is Rapid Software Testing (RST), a 3-day hands-on course designed by James Bach and Michael Bolton.

Actually, calling this “a course” doesn’t do it justice. RST is at the same time a methodology, a mind-set and a skill set about how to do excellent software testing in a way that is very fast, inexpensive, credible and accountable. It is a highly experiential workshop with lessons that stick.

How is RST different?

During RST you spend much of the time actually testing, working on exercises, puzzles, thought experiments and scenarios—some computer-based, some not. The goal of the course is to teach you how to test anything expertly, under extreme time pressure and conditions of uncertainty, in a way that will stand up to scrutiny.

The philosophy presented in this class is not like traditional approaches to testing, which ignore the thinking part of testing and instead focus on narrow definitions for testing terms while advocating never-ending paperwork. Products have become too complex for that, time is too short, and testers are too expensive. Rapid testing uses a cyclic approach and heuristic methods to constantly re-optimize testing to fit the needs of your clients.

What’s in it for you?

  • The ability to test something rapidly and skilfully is critical. There is a growing need to test quickly, effectively, and with little information available. Testers are expected to provide quick feedback. These short feedback loops make for more efficient and higher quality development
  • Exploratory testing is at the heart of RST. It combines test design, test execution, test result interpretation, and learning into a seamless process that finds a lot of problems quickly. Experienced testers will find out how to articulate those intellectual processes of testing that they already practice intuitively, while new testers will find lots of hands-on testing exercises that help them gain critical experience
  • RST teaches you how to think critically and ask important questions. The art of questioning is key in testing, and a very important skill for any consultant
  • RST will provide you with tools to do excellent testing
  • RST will heighten your awareness

Bold claim bottom-line:

RST will make you a better tester.

RST comes to Belgium

Co-learning and Z-sharp are proud to announce that from 30 September to 2 October, Michael Bolton will visit Belgium to deliver the first ever RST course on Belgian soil, giving you the opportunity to experience this unique course in person. More info can be found here, or feel free to contact us with any questions.

Brace yourself for a mind-opening experience that will energize and boost your mind.

All the way up to 11.

(for even more information and testimonials about RST, see Michael Bolton’s RST page)

 


Home Swede Home – Øredev 2011

Last week I attended (and presented at) the Øredev conference in Malmö. Sigge Birgisson invited me to be part of a fully Context-driven test track, which I gratefully accepted. It turned out to be quite a memorable experience. Øredev was the first ever *developer* conference (albeit one with a testing twist) I attended, which gave the event a totally different vibe for me. Cosy, laid back and open-minded. Geeky too, in a good way: they provided a cool conference app with a puzzle that could only be solved by obtaining other people’s codes. The side effect of that was that random people started addressing me with “Hi. Can I have your code?” moments before bolting off into their own space-time continuum. Speed dating for techies.

Another thing that really stood out was the graphic live-recording by Heather and Nora from Imagethink. These talented ladies recorded every keynote live on stage and made the beautiful artworks available as handouts later on. A brilliant idea.

As for the proceedings of the conference – here are some personal highlights:

Day 1

Day 1 had no real testing track, but there was enough fun to be had in other areas of the development spectrum. As the conference was centered around “the user” (Enter Userverse), it kicked off with “Only your mom wants to use your website”, an entertaining keynote by Alexis Ohanian, of Reddit and Hipmunk fame. Hey, the guy even spoke at TED about a whale called Mister Splashy Pants – top that! This time he told a compelling story about how the secret behind successful websites is caring for your users. He told us that, generally, the bar on websites is set so low that it is really easy to stand out if you’re able to delight your user.

In “Collaboration by better understanding yourself”, Pat Kua stated that we have lots of built-in reactions that hold us back from collaborating more effectively: power distance, physical distance, titles, even clothes. What could help us? Awareness, feedback, breaking the cycle, XP practices, courage. A good talk with good content, and some good book recommendations as well.

Johanna Rothman managed to keep me engaged for her whole talk about “Managing for collaboration”. She talked about how to manage the entire system for success, and how we should optimize and collaborate on the highest level, solving problems for the entire organization, not the project. I had the privilege of getting to know Johanna in her May 2011 PSL (Problem Solving Leadership) class, which she organizes together with Esther Derby and Jerry Weinberg. I knew she was a great storyteller, and she did not let us down: one gem was how she upset management by donating her entire bonus to her team and letting them decide who got what.

Neal Ford closed off the day with “Abstraction distractions”, in which he dissected abstractions that have become so common that we started mistaking them for the real thing. An abstraction is a simplification of something much more complicated that is going on under the covers. As it turns out, a lot of computer programming consists of building abstractions. A file system, for instance, is a way to pretend that a hard drive isn’t really a bunch of spinning magnetic platters that can store bits at certain locations, but rather a hierarchical system of folders. And what’s that icon on a save button again? A floppy what? In addition, he argued, we shouldn’t give things names that expose the underlying details: users really don’t want save buttons, they just want their stuff to be saved. He also quoted Joel Spolsky’s Law of Leaky Abstractions: all non-trivial abstractions, to some degree, are leaky.

The day ended with drinks, dinner and some live jazz. I ended up talking testing (among other things) with Pradeep Soundararajan over dinner, when suddenly a late-night session was announced: Copenhagen Suborbitals. At that moment, it reeked of a mediocre techno act from the late nineties and I didn’t really feel like joining in. But Pradeep was curious enough and I decided to tag along.

Flash forward one hour. Pradeep and I were blown away by a passionate tale of two Danes with a dream to build and launch their own manned rocket into space. Peter Madsen told a compelling and inspiring story about dreams, constraints, possibilities, enthusiasm, courage and rocket fuel.

Day 2

Day two was kicked off by Dan North who talked about “embracing uncertainty”. Fear – he said – leads to risk, risk leads to process, process leads to hate… and suffering and Gantt charts. Dan stressed that people would rather be wrong than uncertain, and that adding more process in times of uncertainty is wasteful and counter-productive. He also contrasted the original intentions of the Agile Manifesto in 2001 with what has become of it now. He stated that our ability to survive is directly related to handling the unexpected. We should embrace uncertainty, expect the unexpected and anticipate ignorance.

I decided to set up base camp in the “Test” room today, since this was context-driven testing day: six testing tracks covering a wide variety of topics. The only drawback was that the room looked like it was designed by an architect on acid: unfinished, an enigmatic door way up high in a wall, bare cables and sockets, and a very short and high stage that forced you either to stand in front of the projection screen or to stay cemented in the same spot the whole time. Sound isolation was kind of peculiar too, although that only seemed to be a problem when Americans were presenting next door. But I’m nitpicking here: the whole Slagthuset venue was nice, and the organization and technical team were super helpful the whole day.

Pradeep Soundararajan’s talk was titled “How I wish users knew how I help them through context driven testing”. Pradeep started by pointing out that he had the shortest abstract and the longest bio in the conference booklet. True. He seems to like long titles for his talks, too. In combination with his name, this probably makes him a nightmare to introduce at conferences. But in contrast with the title, his talk was short, crisp and funny. He was brave enough to do some live demoing of his Twitter-driven exploratory testing approach: looking for user feedback by searching for tweets with negative emoticons and profanities combined with the product or website name. I hadn’t read his blog post before now, and it made me laugh out loud. I love the smell of profanities in the morning. Brilliant idea, that.
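I haven’t seen the code behind Pradeep’s approach, but the filtering idea at its core is simple enough to sketch. Assuming a batch of tweet texts has already been pulled down somehow (the product name, marker words and function names below are entirely made up), a first rough cut could look something like this:

```python
# Rough sketch of the filtering idea behind Twitter-driven exploratory testing.
# Assumes the tweets were already fetched elsewhere (search API, export, ...);
# the product name and marker words are invented for illustration.

PRODUCT = "examplewebsite"
NEGATIVE_MARKERS = [":(", ":-(", ">:(", "wtf", "broken", "useless", "crashed"]

def looks_like_a_complaint(tweet: str) -> bool:
    """Heuristic: the tweet mentions the product AND contains a negative marker."""
    text = tweet.lower()
    return PRODUCT in text and any(marker in text for marker in NEGATIVE_MARKERS)

def complaint_leads(tweets):
    """Return the tweets worth following up on as exploratory testing charters."""
    return [t for t in tweets if looks_like_a_complaint(t)]

if __name__ == "__main__":
    sample = [
        "Loving the new examplewebsite layout!",
        "examplewebsite checkout just crashed again :( wtf",
        "Anyone else think examplewebsite search is broken?",
    ]
    for lead in complaint_leads(sample):
        print("charter candidate:", lead)
```

Each hit is not a bug report in itself, of course – it is a lead for a testing session.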

Next up was Shmuel Gershon, who shared an experience report of a 100% exploratory testing project, “Case Study on Team Leadership with Context-Driven Exploratory Tests”. He came well-prepared, all set to win our hearts with charisma, handouts and chocolates. He told us how he took his team on a journey towards more context-driven testing and how he dealt with that as his role was also changing. He told us a story about test management, session-based testing, even recruiting. He urged us to let people tell their stories instead of immediately asking why, which leaves them feeling that they have to justify themselves.

The ubiquitous Gojko Adzic (I suspect there are several clones making the rounds of conferences worldwide. Where *doesn’t* he speak?) was his energetic self in his graveyard-shift session called “Sleeping with the enemy”. Independent testing, he said, should be a thing of the past. Testers should engage with developers and business users, in order to create opportunities to accomplish things they cannot do otherwise. I like Gojko’s style, always direct and uncompromising, but always thoughtful. After Gojko’s presentation, a heated hallway discussion ensued in the so-called chalk-talk area. This embodies what conferences are all about: conferring.

With “Diversity in team composition”, Henrik Andersson took the small stage trying to convince us that when assembling good teams, diversity rocks and uniformity, well, not so much. With some simple examples (“can I please ask everyone wearing black clothes to stand up. You are now a team”), he showed us that there’s much more to it than randomly throwing some people together.

Then it was Selena Delesie’s turn to shine in the beamer lights. In “Focusing Testing on Business Needs”, she explained how to focus the testing effort on customer needs. She asked some pertinent questions: Are you valued in your team? How do you know?

The last presentation slot of the day in the testing track was for yours truly. In Artful Testing, I talked about how I think testing can benefit from the arts. From looking at art thoughtfully, to develop our thinking. From critical theory and the tools used by art critics, to become software critics. From artists, and how they look at the world – through artist personas. I also touched on the importance of context in evaluating art and software. I received some great reactions and feedback afterwards, and some good tips from Pradeep and Rikard as well.

After that, there was dinner, drinks and Øredev Open, where Pradeep was invited to present “The next generation software tester”. In theory. But you know how these things go. In theory, theory and practice are the same; in practice they are not: dinner took a bit longer than expected, drinks were abundant and so it happened that Pradeep took the stage for some Beer-Driven Exploratory Presenting. It was a great stand-up routine.

Ola Hylten joined in and Shmuel decided to whip out his box with tester games and puzzles. Time for some serious thinking, mixed with laughs. When the Øredev Open closed, we took ourselves and our silly games to the hotel bar where innocent passers-by quickened their pace.

Day 3

Day 3 in the test track started with “Agile testing: advanced topics” by Janet Gregory, who highlighted five topics that had emerged since the release of “Agile Testing” by Lisa Crispin and herself: feature acceptance (when you’re not able to deliver everything, focus on the features that matter), collaborative automation, large organizations, distributed teams and continuous learning.

Next up was my favorite Swedish philosopher (granted, I only know one), Rikard Edgren, who delivered a spot-on and thought-provoking session called “Curing Our Binary Disease”. He stated that software testing is suffering from a binary disease: pass/fail addiction, coverage obsession, metrics tumor and sick test design techniques (sick as in “ill”, not “wicked” – my interpretation). Couldn’t agree more. He also mentioned his infamous “software potato”, which made for the following legendary phrase: “A tester might not even know that he’s in the potato”.

All this binary goodness got me thinking: Stay/Go? Focus/Defocus? Defocus it was. I chose to do a final round of the expo and do a quick Copenhagen visit to get some fresh air while it was still light out.

That concluded Øredev 2011. It was great to finally meet Selena, Sigge and Pradeep – and Robert Bergqvist as well – and to catch up with others (Johanna, Shmuel, Henrik, Janet, David, Ola, Rikard, …). Next up: Eurostar in Manchester next week. A full-blooded tester conference that will rock as well. Let’s meet there.

It’s… Thinking Thursday

The Challenge

A couple of minutes ago, Michael Bolton tweeted:

Thinking Thursday. Test this sentence: “In successful agile development teams, every team member takes responsibility for quality.”

My initial reaction was: “A Michael Bolton challenge – where’s the catch?” This is actually a sentence that shows up regularly in agile literature. Heck, I even said it myself a couple of times. What I really wanted to say at the time, was probably something along the lines of “In agile development, producing quality software should be a team effort – lots of collaboration and communication. No blaming or finger-pointing individuals.”

I tweeted some replies, but soon realised that I would hit the 140-character limit head-on.

The Test

But then I thought – why not give these kinds of agile creeds Weinberg’s “Mary had a little lamb” workout, usually reserved for demystifying ambiguous requirements? I used it earlier: stress every word in turn and see where the ambiguities are.
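The mechanical part of that workout is trivial to script, by the way. Here is a tiny Python sketch (my own illustration, not part of Weinberg’s or Michael’s material) that prints the sentence with each word stressed in turn; the actual thinking then happens per variant, as in the questions below:

```python
# "Mary had a little lamb" workout, the mechanical part: emphasise each word of
# the sentence in turn, so every variant invites its own
# "what does *this* word really mean?" question.
sentence = ("In successful agile development teams, "
            "every team member takes responsibility for quality.")

words = sentence.split()
for i, word in enumerate(words):
    stressed = words[:i] + ["*" + word.upper() + "*"] + words[i + 1:]
    print(" ".join(stressed))
```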

  • In?
    Does this mean that outside agile development teams, no team members take responsibility?
  • Successful?
    Does this imply that in unsuccessful agile development teams, no one takes responsibility for quality, or that some individuals take the blame? Successful to whom, and compared to what? What is meant by “success”, really? On time, within budget? Satisfied customers? All of these combined?
  • Agile?
    What “Agile” definition are we talking about? Capital A, small a? A mindset, a methodology? And what about successful waterfall teams? Do some individuals take responsibility there? I would like to think that in successful teams, all team members would like a part of the praise. What about those other kinds of development teams out there?
  • Development teams?
    Are we talking about developers only here? What about the tester and product owner role? Or all the other roles that played an important part in developing the product? “In agile teams, testers *are* part of the development team”, you say? I agree, as are the product owners. But in that case, we should think about another label for the team.
  • Every?
    Really? *Every* team member? Can all team members be equally responsible for quality? As Michael Bolton contends, testers do not assure quality. Do testers hire the programmers? Fix problems in the code? Design the product? Set the schedule? Set the product scope? Decide which bugs to fix, write code?
  • Team member?
    What about people that played a part in successfully delivering the product, but that are not considered as core team members? Who are the people that make up the team? Is that defined up front? Aren’t those team boundaries pretty dynamic?
  • Takes Responsibility?
    Doesn’t *taking* responsibility sound a bit too negative? Isn’t responsibility a double-edged sword? Receiving praise when the quality is applauded, taking the blame when quality turns out to be sub-par?
  • Quality?
    Quality, to whom? Qualitative, compared to what? What is quality, anyway?

Is there a problem here?

Well… The sentence under scrutiny sounds comfortably familiar, and in that sense it was a good thing to think it through in a little more detail. It sure leaves a lot open to interpretation. Some of the terms used in it are highly subjective, or their definitions are simply not generally agreed upon.

Back to twitter

Later on, in a response to a tweet from Shrini Kulkarni, Michael said that his purpose was “exploring what bugs me (and others) about it”.

Actually, nothing bugged me about it *before* the exercise, but now it dawned upon me that the wording of that good agile practice does not do the practice justice. It is too vague; it does need rephrasing.

How about a Frustrating Friday challenge: make this sentence fresh and ambiguity-free.

You could postpone it to Semantic Saturday, if you wish. Your call.

Agile Testing Days 2010 – Day 3 (Lederhosen and Certified Self-Certifiers)


October 6

Wednesday. Michael Bolton warmed up the audience with the keynote performance “How am I supposed to live without you? Testers: Get Out of the Quality Assurance Business!”, and proved once again that he’s a hard act to follow. He immediately came out of the closet as an Agile skeptic and stated what “being Agile” means to him:

  • Adhering to the Agile Manifesto
  • “Be able to move quickly and easily” (cf. the definition in the Oxford English Dictionary)
  • De-emphasizing testing for repeatability
  • Re-emphasizing testing for adaptability
  • For testers, focusing on testing skills
  • Focusing on not being fooled

Michael then defined quality as “Value to some person(s) who matter” (© Weinberg, Bach, Bolton) and said that decisions about quality are always political and emotional, and taken by people who actually have the power to make these important decisions. A little bit later, the main message of the talk jumped right at us and bit us in the face:

If you are a tester, do *you* hire the programmers? Fix problems in the code? Design the product? Allocate staff? Set the company’s strategic direction? Allocate training budgets? Set the schedule? Decide on raises? Control the budget in any way? Negotiate customer contracts? Actually choose the development model? Set the product scope? Do you decide which bugs to fix, or write the code yourself?

Did you answer “No” to most of them? Then you will probably agree that it is simply impossible to “assure” quality. But no worries – it is not our job to assure quality. What we *can* do is test, and make sure we’re damn good at it. Testing in the sense of a sapient activity, providing information with the intent of *informing* a decision, not *taking* the decision. Not to be confused with checking, which mainly aims at confirming existing beliefs. Checking is automatable and non-sapient.

Michael Bolton shifted into a higher gear, and claimed that “acceptance tests” are examples, and that examples aren’t really tests. They are checks, not tests. Acceptance tests don’t tell us when we’re done, but they do tell us that we’re not finished when they fail. They should in fact be called “rejection checks”.
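To make that distinction concrete – my own toy example, not one from the keynote – a check is an assertion a machine can evaluate to true or false without any human judgement, something like this:

```python
# A check in the testing-vs-checking sense: a machine-decidable assertion that
# confirms (or fails to confirm) an existing belief about the product.
# No exploration, no judgement -- just true or false. Names are invented.

def word_count(text: str) -> int:
    """Hypothetical function under test."""
    return len(text.split())

def test_word_count_of_three_word_sentence():
    # A "rejection check": if it fails we know we're not finished;
    # if it passes, it says very little about whether we're done.
    assert word_count("testing is fun") == 3
```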

I looked around me. Usually, at this point in a presentation and at this time of day, people are dozing off. Even the biggest barflies were wide awake now. He ended with a set of statements that almost read like some kind of Tester’s Manifesto:

We’re not here to enforce The Law.
We are neither judge nor jury.
We’re here to add value, not collect taxes.
We’re here to be a service to the project, not an obstacle. 

I got out of the room early and skipped the Q&A part, since my presentation was up next. Apparently the Q&A got a bit out of hand (I suspect the A was probably more to blame than the Q), because the auditorium doors swung open 15 minutes late. In hindsight, I was lucky that I even had an audience; in a parallel track, Gojko Adzic was delivering one hell of a performance (a stand-up comedy routine, I was told) for a packed room.

No stand-up comedy in my room, but an honest “inexperience report” called “A lucky shot at Agile?”. I had ditched PowerPoint one week earlier and decided to go for Prezi, a much nicer alternative. Of course, this was a bit of a risk, but I think it turned out fine. The presentation went well, and I received some good and heartwarming feedback which really made the rest of my day.

In case you are interested, here’s A lucky shot at agile – prezi.

<Shameless_plug>In case you’re interested in the full story, Eurostar conferences has released my paper on the subject in ebook format – available for free – here.</Shameless_plug>

I stayed in the room to attend Anko Tijman‘s talk “Mitigating Agile Testing Pitfalls“. Anko’s talk revolved around five pitfalls that threaten agile teams, and what we can do to mitigate them:

  1. Not testing with the customer. We can mitigate this risk by building a relationship, building trust.
  2. Not testing as a team. Teams are collectively responsible for the quality of the product. Share knowledge not only with your testers, but with the whole team. Work on a collaborative definition of done, tackle risks.
  3. Unbalanced test strategy. Teams sometimes focus too much on unit tests or acceptance tests and postpone other test activities to the next phase. This in turn can lead to a lack of feedback. To overcome this, put more detail in the Definition of Done, schedule knowledge sessions, share content on a wiki.
  4. Requirements are too vague/ambiguous. Collaboration is the key in overcoming this pitfall. Communicate!
  5. Tools. Focus only on tools that add value to the team and that support the practices of the team. Decide as a team which tools to use and which not.

By then it was time for lunch, which is always a good occasion to mingle with other testers, discuss and have some fun. And to ravage that German buffet, of course. I had the impression that everyone was eagerly anticipating the keynote that would follow, which was Stuart Reid with “Agile Testing Certification – How Could That Be Useful“. It became clear that he wasn’t exactly going to preach for his own parish.

And a controversial talk it was. Twitter servers were moaning as Stuart’s quotes and graphic interpretations thereof were launched into #AgileTD cyberspace. Strangely enough, the infamous twitter fail whale was nowhere to be seen, which surprised me since the whole auditorium was filled with bug magnets. Stuart Reid started off by stating that it is only a matter of time before a qualification for agile testing is proposed and launched, whether we like it or not. He continued to say that if we want our industry as a whole to improve, we should exert our influence to help create a certification scheme we can truly benefit from. Fair enough. But what followed next confused me.

Stuart Reid stated that “the certification genie is out of the bottle” – what started as a good intention has spiralled out of control, and there’s no way back. This sounded like nothing more than a public dismissal of ISTQB to me, coming from one of the founding fathers. He proceeded to give an overview of the typical money flows in such a certification scheme, which was pretty enlightening. At one point, Stuart even managed to upset Elisabeth Hendrickson by stating that “it’s not because you are teaching Agile, that the training itself has to be Agile”. The movie clip of that very moment will live long and prosper on the internet. The whole “if you can’t beat them, join them”-idea bothered me too, as if there are no alternatives. Instead of focusing on certifications, we could try to educate employers, starting right at the top level. Certification programs exist mainly because employers don’t really know what qualities define a good tester. For them, a certification is merely a tool to quickly filter incoming resumes. Anyway, I think it’s good that Stuart initiated the debate, which would continue the rest of the conference.

The room was buzzing afterwards. Nothing better than some good old controversy to get the afternoon started. David Evans calmed things down again with “Hitting a Moving Target – Fixing Quality on Unfixed Scope“. He had some great visuals to support a thoughtful story. Some heavily tweeted quotes here:

  • QA in Agile shouldn’t be Quality Assurance but rather Questions and Answers
  • The product of testing is confidence (to which Michael Bolton quickly added that the product of testing is actually the demolition of false confidence).
  • Acceptance Test Driven Development (ATDD) slows down development just as passengers slow down a bus. We should measure the right thing.

Then it was Markus Gärtner‘s moment to shine in the spotlights. He presented “Alternative Paths for Self-Education in Software Testing“. During the last year, I got to know Markus as a passionate professional, dedicated to learning and advancing the craft. An overly active and ever-blogging guy who may have found the secret of the 27-hour day. He opened with the question “who is in charge of your career?” Is it your boss? Your employer? Your family? Your teachers from high school? Well, none of them. It’s YOU. If you find yourself unemployed a year from now, everything you do now contributes to how quickly you will be employed again.

Markus listed several ways of learning and self-improvement:

  • Books
  • Courses
  • Buccaneer Scholaring, a way of taking your education into your own hands, based on the book Secrets of a Buccaneer Scholar by James Bach
  • Testing challenges – challenges to and by the Testing Community
  • Testing Dojos – principles: collaboration in a safe environment, deliberate practice. Usually consists of a mission which allows the testers to practice their testing and learning. Can happen with observers or facilitators, can be a good occasion to practice pair testing too.
  • Weekend Testing – A few hours of testing + debriefing over the weekend, according to a charter or a mission. I participated in a couple of European weekend sessions, and I must say: great learnings indeed.
  • The Miagi-Do School of Software Testing, a school founded by software craftsman Matt Heusser. It’s a zero-profit school where people can improve their skills, learn from others and share knowledge, using a belt system like in martial arts. They are not widely advertised – as Markus said: the first challenge is finding them.

Janet Gregory‘s closing keynote fitted nicely with Markus’ theme, since it was all “About Learning“. It was an inspiring talk about congruence in learning, the importance of learning, and the curiosity of children – how their unspoiled curiosity makes them natural testers. She also related the learning to the agile principles, and managed to tie in neatly with Rob Lambert’s presentation about structures and creativity. A safe environment helps you to learn. She referred to trust as an important element in team safety. A blame culture is counterproductive: no one will learn anything.

After all this theory about learning, we were all yearning for some hands-on practice. The Diaz & Hilterscheid gang gave us the opportunity to practice that typically German custom called Oktoberfest. Just like last year, they dressed up in Lederhosen (I’m actually getting used to the look of José in Lederhosen, go figure) and started serving plenty of local food and one-liter glasses of beer. There was live music as well, which added to a fun Bavarian atmosphere. The evening culminated in some vivid discussions of the burning issues of the day. Well, actually there was only one burning issue: certification. Elisabeth Hendrickson was determined to get everyone mobilised for a worthy cause and whipped out her iPad, on which she had written some kind of self-certification manifesto. Someone threw a pile of index cards on the table. Elisabeth was on fire and started handing them out everywhere. “If you agree with it, copy it. If you don’t, don’t”. Index cards on tables. Pens. Beer. Lots of people copying index card after index card until their fingers cramped up. That night witnessed the birth of a community of certified self-certifiers, all of them proudly carrying the message:

We are a community of professionals.
We are dedicated to our own continuing education
and take responsibility for our careers.
We support advancing in learning and advancing our craft.
We certify ourselves.

Some people took the discussions to the hotel bar, while others decided to dance the night away. I think I even spotted some genuine limbo-ing on the dancefloor. Someone ought to tell these testers about risk…

To be continued… Day 4

Agile Testing Days 2010 – Day 2 (Defect limbo and stunt hamsters)


October 5

After a full day of playful tutorials on Monday, it was back to business on Tuesday. The actual conference kicked off with an interesting keynote by Lisa Crispin. Lisa is an author/agile tester extraordinaire/donkey aficionado with Belgian roots – a winning combination if you ask me. Her talk, “Agile defect management”, was all about finding a suitable approach to manage and track defects in agile projects. Lisa used the limbo analogy for defects, stating that we should strive to lower the bar on defects. I liked the analogy – but had a hard time dissociating it from all the alcoholic connotations of a classic limbo-fest, where the bar is generally lowered until the first drunkard ruptures his anterior cruciate ligaments. I think it’s time to groom my unconscious backlog a little.

In the agile/lean world, using a defect tracking system (DTS) is generally seen as wasteful, since agile teams strive for ‘zero defects’. Instead of filing bugs in a DTS, they prefer to fix the problem immediately by creating an automated test for that defect and adding it to the unit test suite.
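A minimal sketch of what that looks like in practice (invented names, assuming a pytest-style suite): reproduce the defect as a failing test, fix the code, and keep the test in the suite as a permanent regression check.

```python
# Instead of filing a bug report: pin the defect with a test, fix the code,
# and keep the test. The function and scenario below are invented.
import pytest

def parse_price(raw: str) -> float:
    """Hypothetical function that once blew up on prices using a comma separator."""
    return float(raw.replace(",", "."))

def test_price_with_comma_separator_regression():
    # Started life as a defect: parse_price("1,99") raised ValueError.
    assert parse_price("1,99") == pytest.approx(1.99)
```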

I particularly liked Lisa’s “tabula rasa” idea: try to start your project *without* a defect tracking system and see what you need as you progress. Set some rules like “no more than 10 bugs at the same time” and fix the important bugs immediately. You could even use a combination of a DTS and defect cards on a board. Use the DTS for defects in production and defect cards on the story board for defects in development.

The next track I attended was “Incremental Scenario Testing: Beyond Exploratory Testing” by Matthias Ratert. He started off by explaining that they performed exploratory testing in their project, that it was helpful for about 2-3 test sessions, but that it became increasingly difficult for the testers to come up with new and creative test ideas, and that too many areas of the complex system remained untested.

My exploratory tester heart was bleeding at first because they dismissed exploratory testing so quickly. When I heard that they were using unskilled, untrained and outsourced labor in the form of students, without experience and/or motivation to continue in this line of work, it all made sense. No wonder the exploratory testing yielded sub-par results.

In order to cope with the testers’ lack of imagination and sense of coverage, they developed a tool (the IST tool) to do Incremental Scenario Testing (IST). The tool was used to automatically generate test scenarios as a starting point, composed of preconditions, states and events. It was tweakable by all kinds of different parameters to suit different contexts. The testers would still have the freedom to test in an exploratory fashion within these predefined areas, without stating expected results. The tool could be configured so that important areas would appear more often in the testers’ selected scenarios.

The tool as such sounded like a good solution to generate different scenarios and to spread and divide work for better coverage, but in my opinion it will not solve their initial problem: they were still letting unskilled and unmotivated people perform exploratory testing, which is known to be a highly skilled, brain-engaged activity. Replacing them was apparently not an option for budgetary reasons. But why not try to train them into first-class ET guerrilleros?

For the last morning session I chose to attend “One small change to code, one giant leap towards testability” by the lively and ubiquitous Brett Schuchert (presenter of two track sessions and the open space facilitator on Thursday – how much more omnipresent can you get?). His topic was mainly technical – how to design for testability. He used the example of a dice game in which the rolling of the dice was a factor beyond our control. In order to make the design testable, we should use dependency injection: create two loaded dice with a predictable result, and feed them into the game.
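My reconstruction of the idea, not Schuchert’s actual code, and in Python rather than whatever he used: the game asks for its dice instead of creating them itself, so a test can hand it loaded dice with predictable rolls.

```python
# Dependency injection for testability: DiceGame receives its dice, so tests can
# inject loaded dice with predictable results instead of relying on randomness.
import random

class Die:
    def roll(self) -> int:
        return random.randint(1, 6)

class LoadedDie(Die):
    def __init__(self, value: int):
        self.value = value

    def roll(self) -> int:
        return self.value  # always the same, predictable result

class DiceGame:
    def __init__(self, die1: Die, die2: Die):  # the injection point
        self.die1, self.die2 = die1, die2

    def play(self) -> str:
        return "win" if self.die1.roll() + self.die2.roll() == 7 else "lose"

def test_total_of_seven_wins():
    assert DiceGame(LoadedDie(3), LoadedDie(4)).play() == "win"

def test_other_totals_lose():
    assert DiceGame(LoadedDie(2), LoadedDie(2)).play() == "lose"
```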

Schuchert’s inner showmaster came to the surface when he threw some Jeopardy-style quotes at us to illustrate the importance of Test Driven Development – a design practice rather than a testing practice:

The answer is 66%. What was the question?
“What is the chance that a one-line defect fix will introduce another defect?” (Jerry Weinberg)

After a copious lunch, ‘Fearless Change’-author Linda Rising inspired the big auditorium with her keynote “Deception and Estimation: How We Fool Ourselves“. She defined deception as consciously or unconsciously leading another or yourself to believe something that is not true. Her main message was that we constantly deceive ourselves and others. She illustrated her point with the typical marrying couple at the altar. Although current studies indicate that chances for a marriage to succeed are only 50-50, this knowledge doesn’t keep anyone from getting married. I didn’t know these odds when I decided to get married. I like to think it wouldn’t have made a difference – I’ve always liked to defy statistics. 

Us humans, we are a strange lot. We are hardwired to be optimistic. We see what we want to see, so we unconsciously filter out the things we dislike. After all, we fear what we cannot control. And it’s here that estimations come into play. Our hardwiring biases our estimations – we constantly overestimate our ability to do things: coding, testing, everything. And do we ever learn from our mistakes? But we shouldn’t be too overwhelmed by this – there’s hope: achieving good-enough estimates isn’t totally impossible, if we just take small enough steps, experiment, and learn from failures as well as successes.



Software Testing Club busybee Rob Lambert provided some very good food for thought in his talk “Structures Kill Testing Creativity“. I don’t know how deliberate it was, but he did this really cool thing of standing at the door outside the room and greeting people as they came in. It certainly made me feel welcome from the beginning. He was also the first person up till then that I saw using Prezi. A kindred spirit! I sat back and enjoyed the show. The main point of his presentation was that in order to foster creativity, we need *some* structure (he used the example of a sonnet, which has *some* predefined rules to it), but that imposing excessive structure upon people and teams will suffocate creativity (my wording, not his). Rob defined creativity through the equation:

Expertise + Motivation + Imagination = Creativity

Rob then tried the Purdue creativity test on his audience – he asked us to draw the person sitting next to us in a mere 30 seconds, which led to some hilarious results (you can check some of the drawings on his blog – the speed-portrait of Lisa Crispin by the artist formerly known as Ruud Cox is pretty mindblowing). The point of the exercise was to show that when we have to share creative ideas, we constantly self-edit. We feel shy and embarrassed, even more so if the environment doesn’t feel safe to us. True. It struck me that almost everyone was apologizing for their bad portraits afterwards. Excessively structured environments don’t make a good breeding ground for creativity.

Rob Lambert told a couple of personal stories about people who actively pursued creative environments for the better. Marlena Compton moved to Australia to work at Atlassian, a cool company without excessive structures in place. He talked about Trish Koo, who works at Campaign Monitor, a company that is apparently all about people. He mentioned Pradeep Soundararajan, who started using a video camera to film testing in progress to make the testing language more portable and universal.

(Photo: © Quality Tree Software Inc.)

Dynamic? Peppy? Is there a stronger word than energetic? If so, it would describe the keynote by compelling storyteller Elisabeth Hendrickson. The title of her talk rang a little bell: “Lessons Learned from 100+ Simulated Agile Transitions“. The main subject was indeed the infamous WordCount experiment, which she has done numerous times, including her tutorial on day 1 of the conference (you can find my write-up for that here). Because of non-disclosure agreements, she couldn’t use actual pictures, so she used stunt hamsters to illustrate her point. Throughout her talk, it was nice to see that our WordCount group from the day before was no bunch of forgettable dilettantes. It was déjà-vu all over the place:

  • Round 1. The computer is bored (check)
  • Round 2. Chaos (double check)
  • Round 3. Structure (triple check, at least the beginning of structure)
  • Round 4. Running on all cylinders (quadruple check)

The lessons learned: teams that struggle typically

  • Hold on tight to rules, silos and work areas
  • Have everything in progress and get nothing done
  • Sit in meetings constantly instead of creating visibility
  • Fail to engage effectively with the customer

Teams that succeed generally:

  • Drive development with customer-provided examples
  • Seek customer feedback early and often
  • Re-shape their physical environment
  • Re-invent key agile engineering practices like ATDD and CI

The resemblance to what our team went through the day before was striking. I was still pondering that throughout dinner, when it hit me that it would be my turn to perform the next day. By that time, the stage in the dining hall was taken over by overly enthusiastic improv actors, but I wasn’t really in the mood for that kind of entertainment. I obeyed the voice in my head. Must. Prepare. Prezi.

To be continued… Day 3

Agile Testing Days 2010 – Day 1 (Agile transitions)


After a great experience at the Agile Testing Days last year, I decided to answer their call for papers early. By the time the full program was announced (somewhere in April), I had almost forgotten that I had participated. So it was a pleasant surprise to see my name listed among all those great speakers. I decided to break out of my comfort zone for once and at the last minute I “prezi-fied” my existing presentation. Confidently stressed, I flew east to Berlin to be part of what proved to be a wonderfully memorable conference.

October 3

It was Sunday, October 3, which meant I arrived on the 20th anniversary of the German unification. The last time I had been in the city centre, Berlin was still a divided city. I was 16, and overwhelmed by the contrast between the neon-lit Ku’damm and the clean but spookily deserted East. Going through Checkpoint Charlie to the East – and happily back again, while others desperately wanted to but couldn’t – still ranks among the most awkward moments in my otherwise pretty uneventful youth. Sure, the Alexanderplatz, Ishtar Gate and Pergamon Museum impressed me, but why a country would deliberately lock up its people was totally beyond my 16-year-old self.

So, with a few hours of daylight left, I headed to some sites that I still remembered from the days of yore. The Brandenburger Tor was now the backdrop for big festivities: music, beer, bratwurst and parachute commandos executing a perfect landing at Helmut Kohl’s feet at the Reichstag. No concrete walls to be seen. Unter den Linden completely opened up again. It felt great. Sometimes nostalgia isn’t what it used to be.

October 4

(Photo: © Stephan Kämper)

The morning of tutorial day, the Seminaris Hotel conference lobby was buzzing with coffee machines and activity. I had enrolled for Elisabeth Hendrickson‘s “Agile transitions” tutorial, which turned out to be an excellent choice. Eight people were taking part in the WordCount experiment, of which Elisabeth recounts an earlier experience here. After a round of introductions, we divided roles within the WordCount company: tester – developer – product manager – interoffice mail courier (snail mail only) – computer (yes, computers have feelings too) or observer. Strangely enough, I felt this natural urge to be a tester. I didn’t resist it, why should I? Elisabeth then proceeded to explain the rules. We would play a first round in which we had to stick to a set of fixed work agreements, like working in silos, formal handoffs and communicating only through the interoffice mail courier. The goal of the game was basically to make our customer happy by delivering features and thus earning money in the process.

We didn’t make our customer happy, that first round. On the contrary – confusion, chaos and frustration ensued. Testers belting out test cases, feeding them to the computer, getting back ambiguous results. Developers stressed out, struggling to understand the legacy code. Our product manager became hysterical because the customer kept harassing him for a demo and no-one was responding to his messages. The mail courier was bored, our computer felt pretty abandoned too. It all felt wonderfully unagile.

In round 2 we were allowed to change our work agreements any way we wanted, which sounded like music to our agile ears! We co-located immediately and fired our mail courier. We organised a big kickoff-meeting in which the customer would explain requirements and walk us through the application. We already visualised the money flowing in. In theory, theory and practice are the same. In practice – not so much. We spent a whole round discussing how we would work. We lost track of time. There were no new features, and no money. We felt pretty silly.

Round 3 was slightly better. We were able to fix some serious bugs and our first new features were developed, tested and working. But just when we thought we were on a roll, our customer coughed up some examples that she really wanted to pass too. They didn’t. 

Pressure was on in round 4, which was going to be the last one of the day. Would we make history by not delivering at all? Well, no. We actually reinvented ATDD, by letting the customer’s examples drive our development. This resulted in accepted features, and some money to go with that. We managed to develop, test and demo some additional functionality too. A not-so-epic win, but a win nonetheless. WordCount was still in business. If there had been a round 5, I’m pretty sure WordCount Inc. would have made a glorious entrance on the Nasdaq stock exchange.

Elisabeth did a great job facilitating the discussions in between rounds and playing a pretty realistic customer. All the participants made for a very enjoyable day too. The day really flew by and ended with a great speakers’ dinner on the shores of the Schlachtensee. A Canadian, an American, a German and a Belgian decided to walk back to the hotel instead of taking the bus. It sounds like the beginning of a bad joke, but that refreshing 5 km walk through the green suburbs was actually the perfect closure to a terrific day. And without a map, I might add. As the rapid Canadian pointed out later: documentation is overrated.

Eurostar 2009 – a week to remember

A write-up of the Eurostar 2009-conference in Stockholm

I absolutely *love* Stockholm in wintertime. Pepparkakor, glögg, gravad lax… and Eurostar too. People keep telling me that I would probably love it even more in summertime, but I’ll always associate those dark days with Eurostar. I presented my first Eurostar track there in 2007 – nothing but good memories – and I was selected this year as well. The Eurostar line-up is always pretty impressive, so it can be both intimidating and exciting to be a part of that. It’s just a matter of keeping the intimidation level below the excitement level, I guess. Back in my boy scout days, good old Baden-Powell always told me to “be prepared”. Now, sometimes I wouldn’t recognize a life lesson if it punched me in the face, but here’s one that I did remember. So I found myself writing a paper and assembling a presentation during those hot holiday nights in southwestern France. You just gotta love those early deadlines!

November 29

After an uneventful flight from Brussels to Arlanda, I set foot on Swedish soil. I met up with fellow Belgian Mieke Gevers, a member of this year’s program committee and in charge of the track chairs as well. I helped her carry some excess baggage that turned out to contain presents for the trackchairs – you can’t go wrong with Belgian chocolates and “jenever“. We took the Arlanda Express (easy and quick) to Stockholm C and a cab to the Rica Talk hotel.

November 30 – Tutorial day

On Monday I attended a full-day tutorial by Michael Bolton called “Exploratory Testing Masterclass” (slides available here). Two years ago I attended his tutorial on Rapid Software Testing, which I found very valuable. Michael Bolton is an engaging speaker and teacher who invites you to think, rather than just sit and absorb theoretical matter. There were lots of exercises, including one on factoring (identifying dimensions of interest in a product). We were asked to identify all dimensions of a wineglass that may be relevant to testing it, using the “San Francisco Depot” heuristic (Structure, Functions, Data, Platform, Operations, Time) – not new to me but always worth repeating. A lot of mnemonic wizardry to be found here. What about that handy mnemonic for oracles – HICCUPPS/F (History, Image, Comparable product, Claims, User expectation, Product, Purpose, Statutes, Familiar problems) – never again say that you don’t know why something should be considered a bug. Care to take a ride on that test reporting heuristic called MCOASTER? Well, I’ll see your CRUSSPIC STMPL, and raise it with an FCC CUTS VIDS (Mike Kelly’s application touring heuristic). Mnemomania!

Of course, there were plenty of other impressions that kept lingering for a while.

  • A quote by Jerry Weinberg: “A tester is someone who knows things can be different” – true.
  • “If it ain’t exploratory, it’s avoidatory” – made me laugh. 
  • A good tester doesn’t just ask “Pass or Fail?”. A good tester asks “Is there a problem here?”
  • CHECks are CHange detECtors, testing is exploring.
  • A complete debunking of some boundary value analysis truisms: it is generally accepted that the behaviour at boundaries is more likely to show erratic behaviour, but how do we know these boundaries? The actual boundaries in a system may not be the ones we are told about. That’s why we must explore.
  • Testing is “storytelling” – I liked that take on testing:

“You must tell a story about the product, about how it failed, and how it might fail – in ways that matter to your various clients. But you must also tell a story about testing, how you configured, operated and observed it – about what you haven’t tested, yet… or won’t test, at all – and about why what you did was good enough.”

The session was scheduled to end at 5 PM. The discussions kept going until 5.45 PM. I think that says it all. Later that evening, an international amalgam of testers set out to explore the possibilities of finding food in Gamla Stan. Eventually we found an Indian restaurant using that good old I.NEWTON heuristic (Indian, Nearby, Edible, Welcoming, Tasty, Open, Not-too-expensive). The end of a great day. Had some nice conversations with Rikard Edgren, Tone Molyneux, Ray Arell and John Watkins (my trackchair) as well.

December 1

The second day started with a tutorial as well, albeit a half-day one: Managing Exploratory Testing by Jonathan Kohl. Of course there were a lot of similarities with the first tutorial, but this was more of a hands-on session, where we could put Michael Bolton’s concepts from the day before into practice. There was some theory about coverage models – SF Depot, anyone? We ended up describing a whole bunch of characteristics of a table that we had never associated with an ordinary table before. Practical and fun. Certainly an eye-opener.

At that point I was still trying to get a hold of the person I was supposed to trackchair on Wednesday. Originally I would be trackchairing my colleague Wim De Mey’s track about regression testing in a migration project, but Wim had to cancel his presentation at the very last moment because of unfortunate family circumstances. A replacement was found in the person of Mika Katara, from Finland – but no sign of him yet. Oh well, time for a quick lunch, a tour of the expo and the actual kick-off of the conference. Dorothy Graham opened the 17th Eurostar conference in style. She introduced the program committee (Tone and Mieke made sure Isabel Evans was also represented by carrying an air-filled balloon with a face drawn on it – I’m not sure if Isabel would be too happy with the analogy 🙂 ) and set the scene for the first keynote speaker.

Lee Copeland delivered the very first talk of the conference, about nine of the most important innovations in software testing: the context-driven school, test-first development, really good books, open source tools, session-based test management, testing workshops, freedom of the press, virtualization and “testing in the cloud”. Strange that he sees the context-driven school as an innovation – as far as I know it was founded in 1999; the first book that explicitly named it was already published in 2001. I agree with the freedom of the press thing. Testing blogs are appearing everywhere (guilty, your honour), Twitter is on the rise. Lee is apparently not a fan of Twitter. Neither was I – I always thought of it as encouraging the spreading of triviality, but I’m actually starting to come back from that. I noticed that a lot of people within the testing community are using it to share their ideas, give advice or call for help. And it gives a great deal of extra coverage to an event like this (see twitter.com/esconfs), so maybe I’ll give it a try. Later.

The rest of the afternoon consisted of a series of short 20-minute tracks, which is mostly just enough to launch some provoking ideas, but not really ideal for a lot of content. Johan Jonasson talked about how he managed to save a project with the introduction of a structured exploratory testing approach. This track would have benefited from a 45-minute timeslot – there was no time to go into detail, which I found a pity. Next up was Julian Harty, who explained the concept of “trinity testing”: short sessions of around 90 minutes per feature, where the feature owner, the developer and the test engineer work interactively through the software to share knowledge and ideas. Pretty interesting, especially since I found out later that “the Trinity test” was also the name of the very first nuclear test ever conducted, marking the very start of the nuclear age. Julian is probably aware of this – I didn’t hear him mention it, though.
Geoff Thompson then talked about reporting – “If only we could make them listen!”. Well actually, it’s more the communicator’s job to make sure he gets heard. It was a great talk – he was able to slip in the Challenger disaster and the Heathrow terminal 5 debacle as examples of how important messages were apparently not deemed important enough, with horrendous results. Knowing your recipients is key, and knowing what information they want as well. Noteworthy: a lot of people are color-blind. If you absolutely want to make sure that everyone understands your reports, shouldn’t you avoid the reds and greens?
Besides being a sapient testing evangelist, Michael Bolton is also a human quote machine. He did this cross between a stand-up routine and political televangelism called “Burning Issues of the Day” (available here). A lot of wisecracks and eye-openers, the funniest moment at Eurostar for me. He was even able to win a bet by slipping in a quote about agilists and sex:

“The agilistas did not discover pairing or test-first programming. They’re like teenagers who’ve just discovered sex. It IS great, but calm down”.

The last speaker of the day was the same as the first one. Jonathan Kohl talked about how our urge to be “Agile” can distract us from our mission to deliver software that our customers value, while supporting our team. Agile can distract from skill development too. The term “Agile” has become big business, and lost a great deal of its significance. So let’s stop worrying about whether what we do is “agile” or not, and go back to calling it “software development”. As far as I’m concerned, he hit the nail on the head. I wouldn’t have minded him talking about this a little longer.

The day ended with drinks in the expo and my attempt at playing a memory game at one of the stands. I kept failing epically. While I was trying to fall asleep I found the ideal excuse: my head was already full of things to remember – no room for these trivial button sequences.

December 2

Right before the first keynote of the day I finally met Mika, whom I was supposed to be trackchairing in the afternoon. He was invited as a backup speaker on Friday to speak on Wednesday, was able to make it, but had to leave immediately after his talk. A true case of hit-and-run guerilla presenting at Eurostar! Naomi Karten then delivered an interesting keynote about “changing how you manage and communicate change”. Her talk was built around the Satir change model. There’s an initial status quo, then a foreign change-inducing element causing a ‘POW’, then chaos, after that an adjustment and in the end a new status quo. When people are confronted with change, they experience a loss of control, and they often react to that in an emotional way. Important: listen, be empathic, regularly communicate the status of the change, even when there is nothing to report. She also used a quote that I will certainly use myself when feeling cornered:

Hofstadter’s Law: “It always takes longer than you expect, even when you take into account Hofstadter’s Law.”

By then it was time to pay a visit to the Test Lab that was set up by James Lyndsay and Bart Knaack. It was a Eurostar first, and I am actually wondering now why it took so long to have some actual “testing” going on at a testing conference. The software they were running was Open EMR, an open source patient management and appointment book system. What made it even more interesting for me is that I have been testing and working with a similar (not open source, though) system for a long time, so I more or less knew what to expect (or what actual users of the software would expect). I paired up with Rikard for a while and found a whole bunch of issues by merely touring the application – we noted them for later reference. It is always nice to pair with fellow testers to see what they focus on, and what their reasoning is. The state of the software under test was something else: it showed some pretty alarming behaviour, and it was far from intuitive or user-friendly.

By then it was time for Eurostar veteran Erik Boelen, speaking at Eurostar for the fifth time already. I’ve known Erik for some time now, and his talks are always entertaining and relaxing in a way. “The power of risk” was his view on how to use a risk-based test strategy that “makes people talk”, like Läkerol. His main message (apart from the implicit one that testing can be fun *and* will rule the world) was that they defined all the risks and used them as entry paths for exploratory testing. For the highest and medium risks they documented their test cases, and for low risks they just reported the results.

After lunch I introduced Mika Katanen (from the University of Tampere in Finland) and his talk about “Automatic GUI test generation for smartphone applications”. I am totally new to model-based testing and I was impressed with the brief demo he showed. His track went well, and a lot of people approached him for a chat at the end. I do hope that he was able to catch his plane on time. In parallel with this track, Shrini Kulkarni held his talk about software metrics, which I was unable to attend. People said it was good – I hope I will be able to see him speak somewhere else in the future.

Remembering the memory game disaster from the day before, I decided to unfocus for a while – my mind was getting stuck again. I teamed up with some CTG colleagues plus a wildcard named Tom, and we enrolled for the quiz that was supposed to take place in the evening. We aptly named ourselves “The Handsome Oracles”, but it wasn’t meant to be: the quiz was canceled later on, so we weren’t able to put our money where our mouth was. We also worked out some testing limericks for the limerick competition – we didn’t win. I thought they were good, but that’s probably just another example of parents not recognizing the ugliness of their own babies. There’s a good joke and an interesting analogy about that here.

Gitte Ottosen ended the day with a talk about combining agile and maturity models, which was chosen best presentation last year in The Hague. I had the impression she was a little nervous – which is completely understandable. I was telling myself that delivering a keynote for a full auditorium like that sure looked like a daunting task – until I suddenly realised that I would be standing in that same room tomorrow. My unfocused mind started wandering off.

While the temperatures were taking a dive, the Handsome Oracles went into town for dinner. I returned a bit earlier than the rest to rehearse my talk and to get a good night’s sleep while the (by then just plain) Oracles went barhopping. Haha! Life’s good, but not fair at all. 

December 3

The last day of the conference, and people were starting to look weary. Ray Arell gave us a good wake-up call with his keynote on moving to an agile environment, based on his experiences at Intel. Ray’s a great speaker (and a fun guy too, I might add). He described his hits and misses; the ‘misses’ are often the most interesting parts of experience reports. Lots of good advice and some nice puns (Wagile, FRagile, Scrumfalls).

I stayed in the agile track in the big auditorium, where John Watkins presented some material from his book on agile testing, aptly named “Agile Testing”. John had gathered case study material from twenty agile projects and proposed agile methods for small, medium, large, off-site and even off-shore projects. Intriguing, but upon hearing the idea of “agile best practices”, my context-driven genes started to play up.

John was also my great trackchair and introduced me as “Filmstar, Rockstar, Tester!” At least, that was his own juicy summary after I mentioned to him that I had worked as a movie distributor before and had also played in a rock band. Granted, I also admitted playing a zombie once – a serious case of method acting. Anyway, his introduction loosened up the audience a bit and I was able to present my track “A lucky shot at agile?” without any problems. I wanted to tell a testing story and I think it went well. I felt at ease (those wireless microphones are really great) and there were many questions afterwards. During the rest of the day, people I didn’t know came up to congratulate me on the presentation, which was nice.

I took a long lunch and had a walk around the expo. I went back to the Test Lab to report the bugs that we had found earlier. I didn’t succeed in entering them all, which made me feel kind of guilty – I wish I had spent more time there. But I had a hard time choosing: it’s a pity that test labbing also meant skipping tracks.

The last regular talk of the conference was held by Rikard Edgren, who is also a Eurostar regular. I had seen his presentation on testing creativity (“Where testing creativity grows”) in 2007 and I liked it a lot, since it is also a subject that is dear to me. There are far too many people who think that testing is not a creative or challenging activity. This time he talked about “More and better test ideas”. He promoted the use of one-liners as test ideas – a brief statement of something that should be tested. These test ideas can then be used as a basis for test cases, as a guideline for other types of testing, or even discarded when they’re irrelevant or when there is simply not enough time. I think Rikard’s subjects will always be a bit polarizing due to their innovative nature – you either like them or you don’t. I am a believer, and it was a good way for me to finish the conference.

I missed the first part of the Test Lab result presentation since they had changed the timing and I totally forgot about that. But I got the most important statistics: over two and a half days, 56 bugs were found. My first reaction was: “Only 56? Man, there are hundreds of them hiding in there”, but then I realised that people had been testing in the lab only for short periods, in between tracks, just as I did. I wonder what would have happened if hundreds of testers had a go at it, all at the same time. Bugfest!

After a short panel discussion with John Fodeh (next year’s programme chair), Geoff Thompson, Tobias Fors and Nathalie Van Delft, it was time for the award ceremony. Naomi Karten received the Best Tutorial Award and the European Testing Excellence Award went to Anne Mette Hass. In the meanwhile I was dozing off in my not-so-comfy chair – these four days of conferencing were finally getting to me. A friendly woman on the stage was mentioning something about a longlist of papers, and a shortlist, and a final selection of three, containing two Dutch and one Belgian paper. Now wait a minute… how many Belgians sent in a paper? 1… 2… before I could do the math, my name was announced as the winner of the ENEA Best Paper Award. Two talks at Eurostar, two papers, two awards… what are the odds of that? I was absolutely flabbergasted. That’s actually three in a row for my company CTG, since Bert Jagers won the award last year in The Hague. The pressure is on for next year :-).

I spent the rest of the evening in the hotel bar, where all the testers with an early flight on Friday morning were flocking. We ended the day singing an eclectic mix of Irish traditionals, Dylan, early Springsteen and – of course! – Abba, accompanied by a non-certified tester who plays a mean mandolin. I love Stockholm in wintertime. It was a good Eurostar. Yes sirree.

Agile Testing Days 2009 – Berlin

A write-up of the Agile Testing Days 2009 in Berlin.

In October I attended the Agile Testing Days in Berlin. The programme committee assembled a really great line-up (see the 2009 programme here). Here is my write-up of the event. A late one, I admit, but it certainly was worth writing about. So without any further ado, here goes…

October 11

I arrived in Berlin late Sunday evening. During the frantic cab ride through the green outskirts of Berlin I had my first conversation with a genuine Berliner. While giving me a quick ‘Berlin for dummies’ round-up, he managed to distract my attention just enough that I didn’t really notice all the near-collisions. I made it to the Seminaris Campushotel Berlin in one piece. Lovely venue, by the way.

October 12

[Image © 2009 Crispin & Gregory]

First day of the conference. Quick registration, coffee and off to the first floor, where I attended a full-day tutorial by Lisa Crispin, “Using the Agile Testing Quadrants to Cover Your Testing Needs”. There were four other tutorials going on that morning, by Elisabeth Hendrickson, Isabel Evans & Stuart Reid, Tom Gilb and Tom & Mary Poppendieck. A great line-up, which made it really hard to choose. But since I had bought and already briefly skimmed through Lisa’s (and Janet Gregory’s) excellent book “Agile Testing: A Practical Guide for Testers and Agile Teams“, I decided to settle on the agile testing quadrants. The day went by really quickly, which is always a good sign. The theory wasn’t new, but there were some revealing thought exercises, like listing all your practices in the quadrants. Visualizing them often makes it very clear if things are missing. She also told a funny anecdote on how her team ‘materialises’ remote team members during meetings and pairing sessions:

“My team set up a rolling cart for each remote team member, with a laptop, webcam, Skype and mic. My webcam displays on the laptop, and my team members roll ‘me’ around to whoever I’m pairing with, or to meetings (rolling through the halls saying hi to people is fun!) I can control the webcam to look for people.” 

October 13

With an indecent amount of coffee in our systems, the actual conference kicked off with a keynote by Lisa Crispin, “Are Agile Testers Different?”. An interesting keynote, based on ideas that are also described in her book:

  • In agile projects, the lines between the different roles are blurred. 
  • Testers also need to change their mindsets (seek new ways to improve, be proactive, collaborate) if they want to contribute in an agile team.
  • Agile testers add value to the team through continuous feedback, direct communication, simplicity, responding to change and enjoyment.

After that I attended a talk by Ulrich Freyer-Hirtz about “The Agility GPS”, described by the author as a ‘systematic approach for position fixing of agile projects’, a method to assess your team’s agility. The idea behind it was quite interesting and he had already put a lot of work into the model, but it remains unclear to me why anyone would really want to know how ‘agile’ they are, especially considering that the underlying agility model is different for every team or company. The author argued that it could be useful for self-assessment or to unmask alibi-agilistas. The Agility GPS is really focused on the agile values, principles and practices and would tell you things like: “you are scoring low on this principle, you need more code reviews”. Interesting, but strange nonetheless. In my view, agile practices are mostly context-driven: apply the practices that give you the quickest return and that work best for you, and stick to them. Discard those that distract and do not add real value.

The next track was “How to develop a common sense of done” by Alexander Schwartz. His main message was that the combination of branching and ‘quality gates’ can be a good way to improve the common sense of “DONE”, and that this will also help testers get integrated into agile teams. The most interesting part of the presentation for me was Mayank Gupta’s ‘Done thinking grid’ (read his Scrum Alliance article on a definition of done here). He also mentioned the use of a physical merge token to coordinate all publish merges into the trunk. They used what they called a “merge frog” (the German original, Mördsch Frosch, sounds pretty scary and is too much of a tongue twister for me) – a merge would only be allowed if a developer had put the merge frog on her/his desk first.

Next up was a keynote by Elisabeth Hendrickson called “Agile Testing, Uncertainty, Risk, and Why It All Works”. I had seen some of her talks before on Google Video, but this ‘live performance’ made it crystal clear: you can’t beat the real thing. She’s really charismatic, a great speaker with a very clear and interesting message. She talked about the four big sources of technical risk (ambiguity, dependencies, assumptions and capacity), the seven key testing practices in agile (ATDD, TDD, exploratory testing, automated system tests, automated unit tests, collective test ownership and continuous integration) and how these practices help mitigate the aforementioned risks. Simple but sweet.

We were already deep in the afternoon, but that dreaded mid-afternoon dip didn’t stand a chance. The next track I attended was “Agile Quality Management – Axiom or Oxymoron?” by David Evans, in which he described a number of agile conundrums, some quality principles and a framework. The conundrums that were listed could be interpreted as oxymorons (combinations of contradictory terms) as well as axioms (propositions that are not proven or demonstrated but considered to be self-evident), e.g.:

  • “Developers know the acceptance tests”. Oxymoron: “They will only write code to make the tests pass!”. Axiom: “They won’t write code that makes the tests fail”.
  • “No test plan”. Oxymoron: “Failing to plan is planning to fail! How will we know what to test?”. Axiom: “The test plan is implied in the product backlog: everything we build, we test”.
  • “No Test Manager (or Test Management Tool)”. Oxymoron: “So how can we possibly manage testing?”. Axiom: “Testers == team; tests == specifications; they don’t need separate management”.

The framework he described after that listed some interesting items as well. He mentioned ‘applying balloon patterns’, which intrigued me. I liked the metaphor: “Start valid but empty” (an empty balloon is still a balloon – complete the form of a solution before adding a function), “Rubber before air” (don’t deliver functionality that cannot be tested). And a great quote to finish off: “Delaying testing is just incurring quality debt”.

After that I checked in on “Testify – One-button Test-Driven Development tooling & setup” by Mike Scott from – again – SQS. This was actually the third SQS speaker of the day (after Ulrich Freyer-Hirtz and David Evans) – they seem to be pretty active in those agile trenches. Mike gave us a quick overview of a tool called Testify, an agile TDD toolset installer and project generator. The speed with which he was able to set up a new project from scratch and start writing some unit tests was pretty impressive.

The keynote by Tom Gilb that ended the first day was bizarre, to say the least. The talk was supposed to be about Agile Inspections, but he talked about old-school inspections, how to perform them (“specifications must be unambiguous, testable and must not contain design”) and how these inspections should primarily be used to refuse requirements being handed over to testing because of poor quality. The slides he used to prove his point formed one gigantic style inferno; my eyes started hurting from all the different styles and fonts, overloaded slides and texts being cut off randomly. I still have a hard time understanding why this specific talk was chosen for a keynote at an agile conference, where things are specifically NOT about finding as many defects in the requirements as possible so they can be thrown over the wall again. Agile teams prefer involving the whole team in requirement discussions to filter out all hidden assumptions. I guess this goes to show that you cannot just put “agile” in front of some practices and “agilize” the hell out of them.

When we descended back to the ground floor we were greeted by conference organizer José Diaz – in Lederhosen. The main hall had in the meantime been transformed into a genuine Bavarian Oktoberfest, complete with Hendl, Schweinsbraten, Sauerkraut, Haxn, Würstl and Brezn. And big one-liter glasses of beer. There was live music, regularly interrupted by the ‘Ein Prosit’ mantra. The whole Bierstube atmosphere – ok, maybe it was just the beer – really made people talk. It was all great fun and I met and talked to some great people. I even won a prize in a tombola: a one-year subscription to the Testing Experience magazine.

October 14

The last day was kicked off with a keynote by the godmother of Lean development, Mary Poppendieck: “The One Thing You Need to Know … About Software Development”. She started off by stating that complexity is the enemy of software development. She then gave an overview of ways to divide and conquer complexity, providing a whole lot of software development history in the process. Her natural presentation style made it a really enjoyable talk. More about her presentation can be found here and here.

What followed was – for me anyway – the best session of the conference: Declan Whelan with “Building a learning culture on your Agile team”. His track was stuffed with food for thought, real little gems: pointers, quotes, interesting movies and games – it left me eager to go discover all those books and websites. The highlights:

  • A quote by Shunryu Suzuki: “in the beginner’s mind there are many possibilities, in the expert’s mind there are few”
  • Bits about Peter Senge’s “The Fifth Discipline” (must read that one!)
  • Virginia Satir’s change model
  • The principle of Shu-Ha-Ri, a martial arts concept that describes the stages of learning to mastery
  • A moving little video of Gever Tulley talking about “Tinkering School“.

“Tinkering School is a place where kids can pick up sticks and hammers and other dangerous objects, and be trusted. Trusted not to hurt themselves, and trusted not to hurt others. Tinkering School doesn’t follow a set curriculum. And there are no tests. We’re not trying to teach anybody any specific thing. When the kids arrive they’re confronted with lots of stuff, wood and nails and rope and wheels, and lots of tools, real tools… And within that context, we can offer the kids time. Something that seems in short supply in their over-scheduled lives. Our goal is to ensure that they leave with a better sense of how to make things than when they arrived, and the deep internal realization that you can figure things out by fooling around. Nothing ever turns out as planned … ever. And the kids soon learn that all projects go awry – and become at ease with the idea that every step in a project is a step closer to sweet success, or gleeful calamity. We start from doodles and sketches. And sometimes we make real plans. And sometimes we just start building. Building is at the heart of the experience. Hands on, deeply immersed and fully committed to the problem at hand. Robin and I, acting as collaborators, keep the landscape of the projects tilted towards completion. Success is in the doing. And failures are celebrated and analyzed. Problems become puzzles and obstacles disappear.”

Eric Jimmink also stepped up to the challenge of presenting a morning session after a beerfest, with “Promoting the use of a quality standard”. The main ideas I remembered: you should have a Definition of Done on different levels – for tasks, stories, sprints and releases – and you should revisit the DoD regularly in the sprint retrospectives. Although he looked a bit tired, he managed to get his message across using some excerpts from his and Anko Tijman’s book “Testen 2.0”, a great Dutch book on agile testing that was launched a year ago at Eurostar 2008. But although I am a native Dutch speaker, I find it hard to read books on testing in Dutch. I have always felt that English is the most natural choice within the testing community – many of the translated terms just don’t sound right. Anyway, that’s probably just my silly Belgian self; I’m pretty sure all those Dutch testers out there don’t mind.

After lunch, Stuart Reid talked about the skills needed in agile teams. His keynote “Investing in individuals and interactions” focused on the first statement of the agile manifesto. He showed Hackman & Oldham’s formula for calculating job satisfaction (MPS, the Motivating Potential Score), which was interesting. There was also a nice analogy about pairing, where you normally have the roles of “driver” and “navigator”. The driver is the one who is learning, and the navigator has the expertise and can transfer the knowledge. But which do you think is the safest option when flying an airplane: a senior pilot flying with the apprentice watching, or the other way around? Actually, it’s the latter. The young pilot wouldn’t dare to criticize the older pilot when he sees a mistake, while the older pilot will be much more alert when he lets the younger pilot take control.
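
For reference, here is a minimal sketch of that formula, assuming the commonly cited Hackman & Oldham form (I am reconstructing it from memory, so the variant Stuart showed may have differed; the function name and the example ratings are my own illustration, not from his slides):

    # A rough sketch of the Hackman & Oldham Motivating Potential Score (MPS),
    # assuming the commonly cited form of the formula. Each job dimension is
    # typically rated on a 1-7 scale.
    def motivating_potential_score(skill_variety, task_identity, task_significance,
                                   autonomy, feedback):
        # Experienced meaningfulness is the average of the first three dimensions.
        meaningfulness = (skill_variety + task_identity + task_significance) / 3.0
        # Autonomy and feedback multiply the result rather than being averaged in.
        return meaningfulness * autonomy * feedback

    # Example: a fairly autonomous testing job with strong feedback loops.
    print(motivating_potential_score(6, 5, 6, 6, 5))  # roughly 170

The interesting property of that shape is the multiplication: a near-zero score on autonomy or feedback wipes out the whole result, no matter how meaningful the work itself is.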

The last track I attended was “Agile practices in a traditional environment” by Markus Gärtner. He presented an experience report of how they started using some agile practices (test-driven development, exploratory testing, agile planning, improved communication) without actually using the term agile. They also used the testing quadrants to visualise where the current approach was lacking – similar to the exercise we had done in Lisa Crispin’s tutorial two days earlier. This helped them move their efforts more in the direction of business-facing automated tests, with the additional risk of neglecting the technology-facing tests. He seemed pretty nervous when starting his talk, but there was no reason to be – he had a great story to tell and he obviously knows what he is talking about.

By then it was time for me to leave for the airport. I missed the panel discussion that ended the conference, though I heard that many of the keynote speakers had already left. For a first-time conference, the event was really well organised, cosy and well thought-out, with a very nice and friendly atmosphere among attendees and speakers as well. Next year’s call for papers is open. Do send your abstracts to José, and maybe you’ll get a chance to see him in Lederhosen as well.