The winner takes it all (a 3-day workshop with Cem Kaner)

A review of the Cem Kaner workshop during the Cem Kaner Week, held in the last week of October 2009 in Nieuwegein, The Netherlands.

Last August I received an email from a company called Immune-IT promoting their upcoming “Cem Kaner week”. The concept: during the last week of October, Cem Kaner would visit The Netherlands to lead a three-day workshop and give a freely accessible presentation as well. Considering the cost of the workshop I settled for the free presentation, which I figured was also quite unique. Nieuwegein in The Netherlands is only a 160 km drive from my home (ok, you’d have to get past Antwerp, Breda and the Utrecht traffic jams, but still: worth the hassle). Later I heard there was also a chance to win a free workshop place by entering a contest on their website. Long story short: I participated, won the contest, checked the mail again (it still said I won) and jumped for joy. On October 26, I had a date with testing history.

I have always looked up to Cem Kaner, admired him for being one of the pioneers in the software testing field. He is a professor of computer science at Florida Institute of Technology and the (co-)author of truly great books like “Testing Computer Software” (allegedly the best-selling software testing book of all time) and “Lessons Learned in Software Testing” (which inspired and influenced me big time). He founded the “context-driven school of software testing” (I’m on their side) and has all these excellent free Black Box Software Testing courses available online. All this was rushing through my head while cruising through the lowlands. Later on, traffic came to a standstill, which gave me even more time to let my imagination run… What if I don’t like him? What if he turns out to be a belligerent ghoul (gratuitous “The Smiths”-reference)? Well, none of that. Cem is a nice person, almost humble in his ways. I like him. A friendly, human encyclopedia of black box software testing.

The workshop was pretty small – only 12 people enrolled – which made it super-interactive. Immune-IT had asked Cem to talk specifically about testing in difficult economic times. He did talk about that, briefly, but luckily he covered a whole range of other things as well. Some highlights:

  • Exploratory checklists and a truly hilarious piece of scientific proof that following scripts is a best practice (yes, you heard it right, in some cases there *are* best practices) for brain-damaged rats. In 1977, Cem accidentally stumbled upon evidence that normal lab rats use ‘checklists’ to drive their behaviour, while brain-damaged ones resort to following the same script over and over, no matter the context. More on this can be found here.
  • Scripting and learning. Cem debunked some common myths surrounding scripted tests.
    #1. “Test scripts are ideal as ‘training wheels’ for new testers. After several months of following a wide range of scripts, the new tester will have learned by example a lot about the application domain, the program and how to test it.” So they say. But have you ever noticed what happens when you drive to a new place with a GPS that feeds you a script like “turn left at the next lights”? If you try to go there again, do you actually remember the route? You learn faster by exploring and discovering than by ‘being told’.
    #2. “Test scripts specify all initial entry conditions”. But you cannot capture everything: there are so many variables (system state, program state, system configuration, system resources, other processes) that it becomes impossible to describe them all up front.
    #3. “Test scripts specify the expected results”. But – same as above – scripts cannot specify all possible outcomes. There are simply too many variables involved. Our tests cannot address all possibilities.
    #4. “Test scripts involve a comparison that machines or humans should make”. But what about confirmation bias? Or inattentional blindness? We ignore things based on their meaning, before we ever become aware of them.
  • An extensive part about exploratory testing from the godfather himself – it doesn’t get any better than that. He explained the revised definition of exploratory testing (which I endorse but find a bit too long – I liked the old one better). *taking a big breath* here goes:

Exploratory Software Testing:
– is a style of software testing
– that emphasizes the personal freedom and responsibility
– of the individual tester
– to continually optimize the value of her work
– by treating
     * test-related learning,
     * test design,
     * test execution, and
     * test result interpretation
– as mutually supportive activities
– that run in parallel throughout the project

  • An intriguing part on testing an investment modeling tool called VectorVest.
  • “Exploratory Test Automation”. Slides available here. I specifically liked his account of his experiences at a phone system manufacturer called Telenova. They were able to track down a stack overflow bug in one of their telephone sets that had gone undetected even though testing achieved 100% statement and branch coverage in the relevant parts of the code. Triggering it in the field would require a long sequence of calls to a continuously active phone. They succeeded in isolating it through “Exploratory Test Automation”: they created a simulator that generated long chains of random events, emulating input to the system’s 100 phones. That eventually exposed the bug (and several others as well).
  • Cem is obviously not a great fan of automated tests, but he makes an exception for “Exploratory Test Automation”, “High-volume Test Automation” and “Extended Random Regression Testing”. Traditional test techniques tie us to a small number of tests. Extended random regression and long simulations expose bugs the traditional techniques probably won’t find. (See the sketch after this list for a rough idea of what such a harness looks like.)
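
To make the idea concrete, here is a minimal Python sketch of a high-volume random-event harness in the spirit of the Telenova story. Everything in it – the PhoneSim toy model, the event names, the stack-depth invariant – is invented for illustration; the real setup drove 100 simulated phones against actual firmware.

    import random

    # Toy stand-in for the system under test. In the Telenova story this was a
    # simulator driving real phone firmware; here it is just a small state
    # machine with an internal resource we can check after every event.
    class PhoneSim:
        EVENTS = ["off_hook", "on_hook", "dial", "incoming_call", "hold", "resume"]

        def __init__(self):
            self.stack_depth = 0  # pretend internal resource that can leak

        def apply(self, event):
            # Deliberately buggy toy model: 'hold' pushes, 'resume' pops,
            # and nothing bounds the depth over a long enough run.
            if event == "hold":
                self.stack_depth += 1
            elif event == "resume" and self.stack_depth > 0:
                self.stack_depth -= 1

        def check_invariants(self):
            # Weak but cheap oracle that can run millions of times.
            assert self.stack_depth < 500, "stack blew up after long event chain"

    def random_regression(run_length=1_000_000, seed=None):
        """Fire a long chain of random events, re-checking invariants each step."""
        seed = seed if seed is not None else random.randrange(2**32)
        rng = random.Random(seed)  # keep the seed so any failure is reproducible
        sim = PhoneSim()
        for step in range(run_length):
            sim.apply(rng.choice(PhoneSim.EVENTS))
            try:
                sim.check_invariants()
            except AssertionError:
                print(f"failure at step {step}, seed {seed}")
                raise
        print(f"{run_length} events survived (seed {seed})")

    if __name__ == "__main__":
        random_regression()

Run it a few times: depending on the seed, some million-event runs sail through and others trip the invariant – exactly the kind of failure a small, fixed set of scripted tests would never surface.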

These three days were a real treat. I was able to talk and discuss with Cem Kaner personally – and learn from him, which was priceless. I also met some other great people – passionate about testing and always up for a healthy argument. At the end Cem signed the copies of “Lessons Learned in Software Testing” that we all got for free, which means I now own two of them. One to use and one to cherish.


Agile Testing Days 2009 – Berlin

A write-up of the Agile Testing Days 2009 in Berlin.

In October I attended the Agile Testing Days in Berlin. The program committee assembled a really great line-up (see the 2009 programme here). Here is my write-up of the event. A late one, I admit, but it certainly was worth writing about. So without any further ado, here goes…

October 11

I arrived in Berlin late Sunday evening. During the frantic cab ride through the green outskirts of Berlin I had my first conversation with a genuine Berliner. While giving me a quick ‘Berlin for dummies’ round-up, he managed to distract my attention just enough that I didn’t really notice all the near-collisions. I made it to the Seminaris Campushotel Berlin in one piece. Lovely venue, by the way.

October 12

[Image: (c) 2009 Crispin & Gregory]

First day of the conference. Quick registration, coffee and off to the first floor, where I attended a full-day tutorial by Lisa Crispin, “Using the Agile Testing Quadrants to Cover Your Testing Needs”. There were four other tutorials going on that morning, by Elisabeth Hendrickson, Isabel Evans & Stuart Reid, Tom Gilb and Tom & Mary Poppendieck. A great line-up, which made it really hard to choose. But since I had bought and already briefly skimmed through Lisa’s (and Janet Gregory’s) excellent book “Agile Testing: A Practical Guide for Testers and Agile Teams”, I settled on the agile testing quadrants. The day went by really quickly, which is always a good sign. The theory wasn’t new, but there were some revealing thought-exercises, like listing all your practices in the quadrants. Visualizing them often makes it very clear if things are missing. She also told a funny anecdote on how her team ‘materialises’ remote team members during meetings and pairing sessions:

“My team set up a rolling cart for each remote team member, with a laptop, webcam, Skype and mic. My webcam displays on the laptop, and my team members roll ‘me’ around to whoever I’m pairing with, or to meetings (rolling through the halls saying hi to people is fun!) I can control the webcam to look for people.” 

October 13

With an indecent amount of coffee in our systems, the actual conference kicked off with a keynote by Lisa Crispin, “Are Agile Testers Different?”. An interesting keynote, based on ideas that are also described in her book:

  • In agile projects, the lines between the different roles are blurred.
  • Testers also need to change their mindsets (seek new ways to improve, be proactive, collaborate) if they want to contribute in an agile team.
  • Agile testers add value to the team through continuous feedback, direct communication, simplicity, responding to change and enjoyment.

After that I attended a talk by Ulrich Freyer-Hirtz about “The Agility GPS”, described by the author as a ‘systematic approach for position fixing of agile projects’ – a method to assess your team’s agility. The idea behind it was quite interesting and he has already put a lot of work into the model, but it remains unclear to me why anyone would really want to know how ‘agile’ they are, especially considering that the underlying agility model is different for every team or company. The author argued that it could be useful for self-assessment or to unmask alibi-agilistas. The agility GPS focuses on the agile values, principles and practices and would tell you things like: “you are scoring low on this principle, you need more code reviews”. Interesting, but strange nonetheless. In my opinion, agile practices are mostly context-driven. Apply the practices that give you the quickest return and that work best for you, and stick to them. Discard those that distract and do not add real value.

Next track was “How to develop a common sense of done” by Alexander Schwartz. His main message was that the combination of branching and ‘quality gates’ can be a good way to improve the common sense of “DONE”, and that this will also help testers get integrated into agile teams. The most interesting part of the presentation for me was Mayank Gupta’s ‘Done thinking grid’ (read his Scrum Alliance article on a definition of done here). He also mentioned the use of a physical merge token to coordinate all publish merges into the trunk. They used what they called a “merge frog” (the German original Mördsch Frosch sounds pretty scary and is too much of a tongue twister for me) – a merge was only allowed if the developer had first put the merge frog on her/his desk.

Next up was a keynote by Elisabeth Hendrickson called “Agile Testing, Uncertainty, Risk, and Why It All Works”. I had seen some of her talks before on Google Video, but this ‘live performance’ made it crystal clear: you can’t beat the real thing. She’s really charismatic, a great speaker with a very clear and interesting message. She talked about the four big sources of technical risk (ambiguity, dependencies, assumptions and capacity), the seven key testing practices in agile (ATDD, TDD, exploratory testing, automated system tests, automated unit tests, collective test ownership and continuous integration) and how these practices help mitigate the aforementioned risks. Simple but sweet.

We were already deep into the afternoon, but that dreaded mid-afternoon dip didn’t stand a chance. The next track I attended was “Agile Quality Management – Axiom or Oxymoron?” by David Evans, in which he described a number of agile conundrums, some quality principles and a framework. The conundrums he listed could be interpreted as oxymorons (figures of speech that combine contradictory terms) as well as axioms (propositions that are not proven or demonstrated but considered to be self-evident), e.g.:

  • “Developers know the acceptance tests”. Oxymoron: “They will only write code to make the tests pass!”. Axiom: “They won’t write code that makes the tests fail”.
  • “No test plan”. Oxymoron: “Failing to plan is planning to fail! How will we know what to test?”. Axiom: “The test plan is implied in the product backlog: everything we build, we test”.
  • “No Test Manager (or Test Management Tool)”. Oxymoron: “So how can we possibly manage testing?”. Axiom: “Testers == team; tests == specifications; they don’t need separate management”.

The framework he described after that listed some interesting items as well. He mentioned ‘applying balloon patterns’, which intrigued me. I liked the metaphor. “Start valid but empty” (an empty balloon is still a balloon: complete the form of a solution before adding function), “Rubber before air” (don’t deliver functionality that cannot be tested). And a great quote to finish off: “Delaying testing is just incurring quality debt”.

After that I checked in on “Testify – One-button Test-Driven Development tooling & setup” by Mike Scott from – again – SQS. This was actually the third SQS speaker of the day (after Ulrich Freyer-Hirtz and David Evans) – they seem to be pretty active in those agile trenches. Mike gave us a quick overview of a tool called Testify, an agile TDD toolset installer and project generator. The speed with which he was able to set up a new project from scratch and start writing unit tests was pretty impressive.

The keynote from Tom Gilb that ended the first day was bizarre, to say the least. The talk was supposed to be about Agile Inspections, but he talked about old-school inspections: how to perform them (“specifications must be unambiguous, testable and must not contain design”) and how they should primarily be used to refuse requirements being handed over to testing because of poor quality. The slides he used to prove his point formed one gigantic style inferno. My eyes started hurting from all the different styles and fonts, overloaded slides and texts being cut off randomly. I still have a hard time understanding why this specific talk was chosen as a keynote at an agile conference, where the point is specifically NOT to find as many defects in the requirements as possible so they can be thrown over the wall again. Agile teams prefer involving the whole team in requirement discussions to filter out all hidden assumptions. I guess this goes to show that you cannot just put “agile” in front of some practices and “agilize” the hell out of them.

When we descended back to the ground floor we were greeted by conference organizer José Diaz – in Lederhosen. The main hall had meanwhile been transformed into a genuine Bayerisches Oktoberfest, complete with Hendl, Schweinsbraten, Sauerkraut, Haxn, Würstl and Brezn. And big one-liter glasses of beer. There was live music, regularly interrupted by the ‘Ein Prosit’ mantra. The whole Bierstube atmosphere – ok, maybe it was just the beer – really made people talk. It was all great fun and I met and talked to some great people. I even won a prize in a tombola: a one-year subscription to the Testing Experience magazine.

October 14

The last day was kicked off with a keynote by the godmother of Lean development, Mary Poppendieck: “The One Thing You Need to Know … About Software Development”. She started off by stating that complexity is the enemy of software development. She then gave an overview of ways to divide and conquer complexity, providing a whole lot of software development history in the process. Her natural presentation style made it a really enjoyable talk. More about her presentation can be found here and here.

What followed was – for me anyway – the best session of the conference: Declan Whelan with “Building a learning culture on your Agile team”. His track was stuffed with food for thought, real little gems: pointers, quotes, interesting movies and games – it left me eager to go discover all those interesting books and websites. The highlights:

  • A quote by Shunryu Suzuki: “In the beginner’s mind there are many possibilities, in the expert’s mind there are few”
  • Bits about Peter Senge’s “The Fifth Discipline” (must read that one!)
  • Virginia Satir’s change model
  • The principle of Shu-Ha-Ri, a martial arts concept that describes the stages of learning to mastery
  • A moving little video of Gever Tulley talking about “Tinkering School”.

“Tinkering School is a place where kids can pick up sticks and hammers and other dangerous objects, and be trusted. Trusted not to hurt themselves, and trusted not to hurt others. Tinkering School doesn’t follow a set curriculum. And there are no tests. We’re not trying to teach anybody any specific thing. When the kids arrive they’re confronted with lots of stuff, wood and nails and rope and wheels, and lots of tools, real tools… And within that context, we can offer the kids time. Something that seems in short supply in their over-scheduled lives. Our goal is to ensure that they leave with a better sense of how to make things than when they arrived, and the deep internal realization that you can figure things out by fooling around. Nothing ever turns out as planned … ever. And the kids soon learn that all projects go awry – and become at ease with the idea that every step in a project is a step closer to sweet success, or gleeful calamity. We start from doodles and sketches. And sometimes we make real plans. And sometimes we just start building. Building is at the heart of the experience. Hands on, deeply immersed and fully committed to the problem at hand. Robin and I, acting as collaborators, keep the landscape of the projects tilted towards completion. Success is in the doing. And failures are celebrated and analyzed. Problems become puzzles and obstacles disappear.”

Eric Jimmink also stepped up to the challenge of presenting a morning session after a beerfest, with “Promoting the use of a quality standard”. The main ideas I remembered: you should have a Definition of Done on different levels – for tasks, stories, sprints and releases – and revisit the DoD regularly in the sprint retrospectives. Although he looked a bit tired, he managed to get his message across using some excerpts from his and Anko Tijman’s book “Testen 2.0”, a great Dutch book on agile testing that was launched a year ago at EuroSTAR 2008. But although I am a native Dutch speaker, I find it hard to read books on testing in Dutch. I have always felt that English is the most natural choice within the testing community. Many of the translated terms just don’t sound right. Anyway, that’s probably just my silly Belgian self – I’m pretty sure all those Dutch testers out there don’t mind.

After lunch, Stuart Reid talked about the skills needed in agile teams. His keynote “Investing in individuals and interactions” focused on the first statement of the agile manifesto. He showed Hackman & Oldham’s formula for calculating how motivating a job is (MPS = Motivating Potential Score), which was interesting. There was also a nice analogy about pairing, where you normally have the roles of “driver” and “navigator”. The driver is the one who is learning, and the navigator has the expertise and can transfer the knowledge. But which do you think is the safest option when flying an airplane? A senior pilot flying with the apprentice watching, or the other way around? Actually, it’s the latter. The young pilot wouldn’t dare to criticize the older pilot when he sees a mistake, while the older pilot will be much more alert when he lets the younger pilot take control.
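
As far as I recall, the Hackman & Oldham formula averages three ‘meaningfulness’ dimensions and multiplies the result by autonomy and feedback – so a job with no autonomy or no feedback scores near zero no matter how varied the work is. A minimal sketch (the 1–7 scale and the example numbers are my own illustration):

    def motivating_potential_score(skill_variety, task_identity, task_significance,
                                   autonomy, feedback):
        """Hackman & Oldham's MPS; each dimension is typically rated on a 1-7 scale."""
        meaningfulness = (skill_variety + task_identity + task_significance) / 3
        # Autonomy and feedback are multiplicative: starve either one and the
        # whole score collapses, regardless of how meaningful the work is.
        return meaningfulness * autonomy * feedback

    # Varied, whole, significant work (6, 5, 6) with decent autonomy (5) but
    # almost no feedback (1) still yields a low score: ~28 out of a maximum 343.
    print(motivating_potential_score(6, 5, 6, 5, 1))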

The last track I attended was “Agile practices in a traditional environment” by Markus Gärtner. He presented an experience report on how they started using some agile practices (test-driven development, exploratory testing, agile planning, improved communication) without actually using the term agile. They also used the testing quadrants to visualise where their current approach was lacking – similar to the exercise we did in Lisa Crispin’s tutorial two days earlier. This helped them move their efforts more in the direction of business-facing automated tests, with the attendant risk of neglecting the technology-facing tests. He seemed pretty nervous when starting his talk, but there was no reason to be – he had a great story to tell and he obviously knows what he is talking about.

By then it was time for me to leave for the airport. I missed the panel discussion that ended the conference, though I heard that many of the keynote speakers had already left. For a first-time conference, the event was really well organised, cosy and well thought-out, with a very friendly, informal atmosphere between attendees and speakers. Next year’s call for papers is open. Do send your abstracts to José, and maybe you’ll get a chance to see him in Lederhosen as well.

So it begins…

The beginning is the most important part of the work. I think Plato said that. And when the ancients talk, I tend to listen.

I’ve been thinking about starting a testing-related blog for a long time now, but for one reason or another I never did. Maybe I always assumed there weren’t enough interesting stories to tell. Well… enough of that. There are plenty of good testing stories all around – and compelling stories often tell themselves. We’ll see where this leads.
Do not go where the path may lead; instead go where there is no path and leave a trail. Ralph Waldo Emerson said that. I concur.