#BREWT… so it begins

There Be Dragons

Over the past years, peer conferences on software testing have sprung up all over the world. Europe in particular has seen a lot of strange acronyms emerge: DEWT, DWET, SWET, CEWT, LEWT, TITAN, PEST,…

When initial talks started for DEWT (the Dutch Exploratory Workshop in Testing) in 2010, I jumped on the Dutch train. It’s been a great – nay, fantastic – ride so far. But there was always that undisclosed little area that was conspicuously absent in the peer conference landscape. One country that testing-wise had “Here Be Dragons” written all over it: Belgium.

Sure, there were plans, ideas and good intentions, but rarely the energy. This is where Beren Van Daele decided to kick things up a notch. With energy to spare, he registered a site, fired off some invitations and started planning that first peer conference on Belgian soil.

BREWT force!

That is how BREWT was born. The Belgian Research Event and Workshop on Testing. Undoubtedly similar to DEWT but with a distinguished Burgundian twist (and most likely some uncut Belgian surrealism thrown in the mix). I see BREWT not only as a peer conference, but also a sounding board of professionals, a place to discuss and share ideas, an opportunity to sharpen our skills and thinking. A movement? Let’s see where it leads us.

Et tu, BREWT?

Our first major event will be a peer conference in the fall of 2017. If you’re interested in joining, we would like to hear from you!

For more information, visit us at http://brewtconf.wordpress.com.
Eurostar 2016 sketchnotes

In my continuing deliberate sketchnoting practice, I documented the Eurostar sessions I attended (or rather, the sessions for which I arrived on time to get properly set up).

Testing the inside of your head – Liz Keogh

Lessons learned from the worst bug I ever found – Ru Cindrea

The critique of AI in the age of the net – Harry Collins

Stories from testing healthcare.gov – Ben Simo

Kolb’s testing cycle – Beren Van Daele

Growing a company test community – Alex Schladebeck

Don’t learn the rules, learn from the rules – Dale Emery

My Eurostar 2014 closing keynote

I had the privilege of delivering the closing keynote at the Eurostar 2014 conference in Dublin. I crafted a talk that was unique to this event, bringing the theme together, summarizing what the theme meant to me and exploring how it is all connected.

I know the slides can only tell you so much when the narrative isn’t there, but here is the online version of my Prezi:

Everything is connected – exploring diversity, innovation, leadership

Although it was the first (and last) rendition of this talk, I think it went well. Several people found it to be “thought-provoking”, which is exactly what I was aiming (and hoping) for.

Now that this is over, I feel I am done with conference presentations for a while. I’m planning to take a long-awaited deep dive – with lots of reading, learning and working on new content. I’m taking it slowly. There are some important topics that need exploring, and now I am finally giving them (and me) the time to make that happen. I’m also looking forward to some exciting collaborations with others in the near future.

I’m following my energy. Let’s see where that leads us.

DEWT4 – a peer conference on teaching testing

From left to right: Jeanne Hofmans, Rob van Steenbergen, Jurian van de Laar, Peter Simon Schrijver, Jean-Paul Varwijk, Bernd Beersma, Huib Schoots, Arjen Verweij, Zeger van Hese, Joris Meerts, Markus Gärtner, Bart Broekman, Angela van Son, Pascal Dufour, Ard Kramer, Jeroen Mengerink, Kristoffer Nordström‏, Philip Hoeben, Daniël Wiersma, Joep Schuurkes, Duncan Nisbet, Eddy Bruin, Wim Heemskerk, Ruud Cox, Richard Scholtes, Ray Oei

Teaching Software Testing

During the weekend of 7-9 February, the fourth edition of DEWT took place at Hotel Bergse Bossen in Driebergen, the Netherlands. DEWT stands for the Dutch Exploratory Workshop on Testing and is a LAWST-style peer workshop on testing, like its older siblings LAWST, LEWT and SWET. This means a presentation is followed by a facilitated discussion that goes on as long as it brings value.

This edition was extra special to me, since I volunteered to be the Content Owner during our preparatory meeting in September. Jean-Paul Varwijk agreed to fill the Conference Chair role and Peter Simon Schrijver would be the main facilitator. Why yes, you do need a good facilitator to make this kind of thing work.

The main theme of this edition was “Teaching Software Testing”

In this edition we also added the obligation – for all attendees – to send in a proposal for an experience report. I wanted attendees to look at teaching software testing in a broad sense, and asked for experience reports on:

  • How software testing is taught
  • Unconventional or alternative ways of teaching software testing
  • Lessons learned by teaching software testing
  • Learning how to teach software testing
  • The receiving end of teaching – learning (being taught)
  • The transfer of theoretical versus practical knowledge
  • Teaching novice testers versus teaching experienced ones
  • Acquiring teaching skills

Apart from the DEWT core members (10), an additional 16 people were invited, of whom three came from abroad – Markus Gaertner (D), Duncan Nisbet (UK) and Kristoffer Nordström (SE). Actually, that makes four since I am from abroad (B) as well – I keep forgetting that I am DEWT’s legal alien.

Friday, February 7

The first night of a DEWT conference is usually an informal meetup, with a welcoming dinner for the people who can make it in time. A great evening it was, with strangers getting to know each other and old friends catching up. Lots of games and testing talk – and in some way or another, My Little Pony became a topic as well. There were not as many drinks as we would have liked, though, since our first evening happened to coincide with a wedding in our regular hangout, the Grand Cafe. This meant we were banished to a room with a part-time waiter, who divided his inevitably part-time attention (I’m guessing 85/15) between drunk party people and relatively sober software testers. His selection of Belgian beers and copious amounts of deep-fried snacks (it is common knowledge that Markus Gaertner will attend any meetup that involves bitterballen) made up for it.

We ended the night giving the bride and groom some heartfelt marital advice, and by sipping from that curious bottle Duncan brought from Gibraltar – Patxaran (Zoco). When Duncan started cleaning tables to compensate for our invisible waiter, we knew it was time to go to bed.

Saturday, February 8

In front of a notably bigger group than we ended the day with on Friday (some people were only joining on Saturday morning), Jean-Paul, Peter and I kicked off the conference. In the previous weeks, the three of us had come to an agreement on which talks should go highest on the list, well aware that in the end, a schedule like this is always tentative, since you never know when discussions will end or where the energy of the group will go.

Kristoffer Nordström went first with “Learning and change in a dysfunctional organisation“, illustrating the difficulties of a consultant who represents both management and the outside. Are learning and change even possible in this situation? He compared a team to a spring that is attached to context and culture. When a spring is attached to something, it is very hard to change: you can bend the spring and make it work at first, but inevitably, the spring – the team – will veer back to its original position. He explained how he tried to cope with his plight: establish trust, show passion and enthusiasm, lead by example, show respect, and take time to teach instead of tell. Even simple things like smiling and saying hello to people helped him achieve his goals. Kristoffer’s experience report was rich and well-prepared, and touched on many things I could relate to. The discussion afterwards went on longer than planned, but hey, we’re all flexible, right?

Next up was Arjen Verweij with “Preaching software testing: evangelizing testers among non-testers“. In his experience report, he described how he advocates for testing with different stakeholders:

  • Talk to project managers about value
  • Explain changes in the software to customers
  • Convince engineers that you need their expertise
  • Help support people by providing them with good tools that facilitate bug reporting
  • Work with sales to set reasonable expectations
  • Get buy-in from the developers by supporting their work

One of Arjen’s take-aways was to not mention “testing” if you want non-testers to test, which spawned a hefty on-site discussion in which several people on Twitter got involved as well.

After lunch we decided to go for a walk in the woods to avoid that dreaded carb coma. The hotel staff provided us with instructions for a walk, and it turned out to be a strictly scripted procedure: no map, just a list of written instructions. Great – a bunch of (mostly) context-driven testers asked to follow a walking script. As could be expected, we got lost in a heartbeat. Our explorer’s instinct – supported by many a GPS module – got us back with only 20 minutes’ delay.

Aside from harassing us with more space unicorn songs than we could handle, Markus Gaertner got us up and about with a workshop that used the principles from the book “Training From the Back of the Room!: 65 Ways to Step Aside and Let Them Learn” by Sharon Bowman, after which he elaborated on the 4 C’s, a framework to help design classes that leverage accelerated learning. The acronym stands for “Connections, Concepts, Concrete Practice, Conclusions”. During the connections step, learners make connections with what they already know about the topic at hand. In the concepts step, learners take in new information in multi-sensory ways: hearing, seeing, discussing, writing, reflecting, imagining, participating and teaching it to others. The concrete practice step serves to actively practice a new skill using the new information, participate in an active review of what they have learned, and again teach others what they know or can now do. During the conclusions step, learners summarize what they have learned, evaluate it and make a commitment to use it at work or in their lives.

Joep Schuurkes and Richard Scholtes were up next with “Teaching testing with a chain testing simulation“, in which they described their experiences in designing an apparently simple chain testing simulation exercise. In it, participants were provided with five laptops running the applications that make up the chain, and each was assigned to one of the laptops (or was given the role of testing coordinator), after which the group was given the assignment to “perform a chain test”. Joep and Richard contrasted the things they thought would happen with the things that actually happened, which led to a couple of nice surprises. Chaos ensued, apparently, and people stayed on their own island for way too long. But it proved an engaging format for all involved – people continued during breaks, were still discussing it days after, and it led to quite some aha-moments as well. Another take-away: putting an empty chair in between two people is an effective means to stop all communication.

Bart Broekman‘s experience report brought us “Back to the Middle Ages“. Or at least part of the theory did. He talked about the master-apprentice model, which is fundamentally different from the teacher/student model that is now so common. Later on he linked it to the Dreyfus model of skill acquisition. Bart saw the biggest gap to be bridged in going from “competent” to “proficient”: how can we help our students make that big leap? Bart went on to explain how he tried to do that by organising masterclasses, working with the students’ own content and real-life problems to solve.

By the time the discussion after Bart’s report died down, dinner was calling, and we gladly obliged. The evening was filled with drinks, puzzles, games, poetry recitations and Dutch people winning gold, silver and bronze medals in the Winter Olympics. Leave some for us, would you?

Sunday, February 9

Sunday morning saw the first (granted, UK-imported) Gibraltarian DEWT-invitee ever take the stage: Duncan Nisbet, with a report on his experiences teaching testing to new and non-testers, “You can’t learn to drive by reading a book“. A misleading title, since he proceeded to compellingly relate it to how he used to train his pupils in whitewater kayaking. In that line of sport, it is important to first give your students a safe place to fail, like clear, calm and warm water; whitewater is an unforgiving place for newbies. Progress slowly, and if you fail, fail more often. Books don’t prepare you for the whitewater experience. Duncan then explained how he tries to teach testing to newbies in the same way. Start simple and build confidence gradually: first give them a web GUI to play with, later make them aware of the existence of logs that can help them in testing, and then move on to more specialised disciplines like performance and automation. The facilitated discussion afterwards spawned so many question threads that Simon Schrijver dedicated a whole blog post to how he facilitated it.

Angela Van Son is a DEWT regular, although she is not a tester. Angela is a professional (procrastination) coach, and she made the program because I was convinced that she could contribute to the topic by offering a view on teaching in general. In “The skill of teaching: How do you make them WANT it?“, she told us about the 30 Day Video Challenge put out by Holly Sugrue, in which she participated. She witnessed how Holly managed to energize and inspire the group to deliver on this challenge – a remarkable feat, considering that the participants all had different goals. Angela then analyzed what was so peculiar about this challenge: how did it make people want to master it? As it turned out, it was a combination of many things. The challenge was well chosen, with clear limits. There was a lot of playful repetition that never got boring, and great group dynamics pushed people forward. There were no obligations – it was a safe place to learn.

After lunch, to finish on a lighter and more active note, we scheduled a workshop on the design of exercises to teach testing, led by Huib Schoots and Ruud Cox. The crowd was divided into several small groups, each given the task of designing an exercise to teach a testing concept. That exercise would then be run by one of the other groups, followed by a debrief. The tricky part, of course, was that the time to accomplish this was very limited. I already knew from attending Jerry Weinberg’s Problem Solving Leadership that designing a good and fun experiential exercise can be very, VERY hard. But given the circumstances, the teams did a good job – resorting to ruthless peppermint crushing and exploratory walking. I felt that the debrief helped a lot in seeing where our own exercise could be improved.

This concluded DEWT4: two-plus days filled with learning and fun. I had hoped to achieve a good variety of topics, and I am happy with how it turned out. The atmosphere was focused yet relaxed, and everyone seemed to enjoy themselves. A big thank you to all the participants for their time, their stories and their passion.

Sketch-notes

To finish off this lengthy report, here are some sketch-notes I made during the weekend. Click the thumbnail to see a bigger version.

For other reports and impressions of DEWT4, check out the DEWT after party blog page.

  • Back to the Middle Ages – Bart Broekman
  • You can’t learn to drive by reading a book – Duncan Nisbet
  • Learning and change in a dysfunctional organisation – Kristoffer Nordström
  • Preaching software testing: testing with non-testers – Arjen Verweij
  • Teaching testing with a chain testing simulation – Joep Schuurkes & Richard Scholtes
  • The skill of teaching: how to make them want it – Angela Van Son

Vacansopapurosophobia


< crickets >

For those of you who have watched the tumbleweeds roll by on this blog in the past year, I apologize.

It’s been eleven months since my last blog post and I have no excuse.

Sure, I have been busy – 2013 was one hell of a ride. I founded my own company (Z-sharp), worked hard on getting things started, created presentations and papers to present at conferences, co-organized Belgium’s first public RST course with Michael Bolton, attended and organized peer workshops and delivered webinars.

I knew from past experience that being busy does not necessarily mean that your writing suffers. And yet I had no energy for blogging.

I kept telling myself “Just wait it out, ideas will pop up. There’s no need to do things half-heartedly. Pick your battles”.

Self-diagnosis

Ideas did pop up. Plenty of them, eventually. Strangely enough, I felt no urge to act upon them, which in turn reinforced the feeling that I was stuck. It started to freak me out. This was the first time I hit a dry writing spell of this length. What was happening?

“That’s it”, I thought. “Vacansopapurosophobia – the fear of a blank page” (the first word that came to mind was “writer’s block”, to be honest. But I like the sound of vacansopapurosophobia better, it has a nice supercalifragilisticexpialidocious ring to it).

Writer’s block – the fear of every aspiring writer! Or rather, blogger’s block. Wait – did I just self-diagnose myself?

“Self-diagnosis is the process of diagnosing, or identifying, medical conditions in oneself. It may be assisted by medical dictionaries, books, resources on the Internet, past personal experiences, or recognizing symptoms or medical signs of a condition that a family member previously had. Self-diagnosis is prone to error and may be potentially dangerous if inappropriate decisions are made on the basis of a misdiagnosis” (source: Wikipedia)

The danger of self-diagnosis is that you’re mainly forming conclusions based on internet folklore. Quite ironically, there *is* a lot of writing about writer’s block on the internet, with causes ranging from excessive audience awareness and perfectionism to burnout and flat-out depression.

Weinberg on writing

I decided to seek guidance from a professional. I dug up my copy of “Weinberg on Writing“, in which my personal Yoda, Jerry Weinberg, describes his writing process. I still vividly recall my amazement several years ago when I first read it. It fit my own amateurish and seemingly unstructured writing style like a glove.

In the book, Weinberg compares writing to the creation of stone-wall structures. Harnessing ideas and words into a written work is a lot like building a stone wall: gathering, arranging, rearranging, and discarding fieldstones as the wall evolves organically over time. To be successful in your writing, Weinberg suggests, you should have many fieldstones, chunks of work in progress. But be aware that “in progress” is a very vague concept: it may mean you’ve written two words, a hundred words, or even several chapter-like things.

The fieldstones allow you to make progress on any piece of work. The method helps to keep personal energy high, efforts focused and the daunting work of composition forward-moving.

Weinberg on not writing

I dove into the part on writer’s block (you can read an article based on that chapter here), and the following sentence struck a chord – or two:

“Writer’s block is not a disorder in you, the writer. It’s a deficiency in your writing methods – the mythology you’ve swallowed about how works get written.”

Of course! I knew this all along, but I let it get snowed in in my middle-aged excuse for a brain. The creation of a text is not a linear process, like reading is. Reading structures are presentation methods, not creation methods. Creation doesn’t work in such a linear way.

Later on I stumbled upon a rather amusing interview with Jerry Weinberg in which he dispels the myth of writer’s block. This taught me another valuable lesson: as long as you have things you can do, you aren’t blocked at all. When you feel stuck with one part, work on another – they don’t even have to be directly related. There is always something you can do to keep on moving.

All of a sudden, I came to the realization that the solution to my problem was very simple.

“You have nothing to write about? How lovely is that! Isn’t that a GREAT subject?”

So here I am, writing my first blog post in ages, about why I wasn’t writing. I hope I’m here to stay.

Rapid Software Testing – skilled software testing unleashed

Up to 11

From the outside, software testing looks like a steadily maturing profession. After all, there are certification schemes like ISTQB, CAT, IREB and QAMP (the one to rule them all), standards (ISO 29119) and companies reaching TMM (test maturity model) levels that – just like a Spinal Tap guitar amplifier – one day might even go up to 11. The number of employees that companies send off to get certified in a mere three days is soaring, and new certification programs are being created as we speak. Quick and easy. Multiple choice exams for the win!

The reality, however, is that the field of software testing is torn between different “schools” of testing. You could see these schools as determined and persistent patterns of belief, speech and behaviour. This means that different people – all calling themselves “test professionals” – have vastly different ideas of what testing is all about. Even something as elementary as the definition of testing varies from “demonstration of fitness for purpose” to “questioning a product in order to evaluate it”, depending on who you talk to (for more info on the schools of software testing, I heartily recommend Brett Pettichord’s presentation on the subject).

And so it happens that different people think differently about “good” or “mature” software testing. I, for one, don’t believe in tester certification programs, at least not in the format they are in now and the way they are being used in the testing profession. The current business model is mainly designed to get as many people as possible certified within the shortest timeframe. Its prime focus is on certifiability, not on tester skill, and certainly not on the advancement of the craft. Advancement comes from sharing, rather than shielding.

Rapid Software Testing (RST)

So what are the options for a tester on a quest for knowledge and self-improvement? What is a budding tester to do?

I think there are valuable alternatives for people who are serious about becoming a world-class tester. One of these is Rapid Software Testing (RST), a 3-day hands-on course designed by James Bach and Michael Bolton.

Actually, calling this “a course” doesn’t do it justice. RST is at the same time a methodology, a mind-set and a skill set about how to do excellent software testing in a way that is very fast, inexpensive, credible and accountable. It is a highly experiential workshop with lessons that stick.

How is RST different?

During RST you spend much of the time actually testing, working on exercises, puzzles, thought experiments and scenarios—some computer-based, some not. The goal of the course is to teach you how to test anything expertly, under extreme time pressure and conditions of uncertainty, in a way that will stand up to scrutiny.

The philosophy presented in this class is not like traditional approaches to testing, which ignore the thinking part of testing and instead focus on narrow definitions for testing terms while advocating never-ending paperwork. Products have become too complex for that, time is too short, and testers are too expensive. Rapid testing uses a cyclic approach and heuristic methods to constantly re-optimize testing to fit the needs of your clients.

What’s in it for you?

  • The ability to test something rapidly and skilfully is critical. There is a growing need to test quickly, effectively, and with little information available. Testers are expected to provide quick feedback. These short feedback loops make for more efficient and higher quality development
  • Exploratory testing is at the heart of RST. It combines test design, test execution, test result interpretation, and learning into a seamless process that finds a lot of problems quickly. Experienced testers will find out how to articulate those intellectual processes of testing that they already practice intuitively, while new testers will find lots of hands-on testing exercises that help them gain critical experience
  • RST teaches you how to think critically and ask important questions. The art of questioning is key in testing, and a very important skill for any consultant
  • RST will provide you with tools to do excellent testing
  • RST will heighten your awareness

Bold claim bottom-line:

RST will make you a better tester.

RST comes to Belgium

Co-learning and Z-sharp are proud to announce that from 30 September to 2 October, Michael Bolton will visit Belgium to deliver the first ever RST course on Belgian soil, giving you the opportunity to experience this unique course in person. More info can be found here, or feel free to contact us.

Brace yourself for a mind-opening experience that will energize you.

All the way up to 11.

(for even more information and testimonials about RST, see Michael Bolton’s RST page)
Unrest assured

“Eight! Five! Four! Three!”.

Four words that I am tired of hearing by now.

“Eight! Five! Four! Three!”.

My oldest (8yo) daughter finally saved up some money for that secret digital diary, and she was so excited she could hardly get to sleep. “Tomorrow I will go buy my secret diary! And it’s totally secret! You can only unlock it by whispering the secret password! It’s gonna be neat, daddy!”.

Next day I looked up what the fuss was all about, and found this description on the manufacturer’s website:

“Rest assured that your secrets will be kept fully safe in this digital diary.
With VTech’s revolutionary voice recognition technology, you can set an audio password that only you can enter to the diary. Even when others find out about your password, the voice recognition feature will only recognise the tone of your voice, and only let you in.”

Wow. That actually sounded pretty cool for a kid’s toy. I would love to get my hands on that one, explore it to test their pretty bold claims. When I came home from work the next day, my daughter beat me to it. She had locked herself up in her room with her new secret toy, only to come out a couple of hours later.

“This is so cool, daddy. Look!”

She pressed a button on the pink gizmo.

“Say your password” – Beep – “Eight! Five! Four! Three!”. “Password not recognized”. Umph.
“Say your password” – Beep – “Eight! Five! Four! Three!”. “Password not recognized”.

No signs of frustration on the 8 year-old face yet. “Hold on.”

“What’s happening here? It doesn’t work?”.

“Yes it does. But not always. Sometimes it just doesn’t recognize the password.”

“Your very secret password? The 8543 you keep on yelling at it?”

“Yes”

“Say your password” – Beep – “Eight! Five! Four! Three!”. “Welcome” <harp music>.

“See? Now it works.”

“Well, that’s strange, don’t you think? What did you do differently now compared to your previous attempts?”

“No idea, daddy”. She closes the thing again. Beep. “8!5!4!3!”. “Password not recognized”.

It puzzles me that she’s not really interested in the cause. The thing only works one out of four attempts, and she seems perfectly fine with it.

A couple of 8543-filled days later, as the oldest daughter is off to a birthday party, the youngest one (5 yo) decides to seize the opportunity. She had been quietly watching her sister the past days, not even bothering to try – probably knowing that the oldest would strongly object.

“Can I have a look at sis’ secret diary daddy?”

“Sure. But you probably won’t get too far, it is made especially to prevent others from…”

“Say your password” – Beep – “Eight Five Four Three”. “Welcome” <harp music>. Big triumphant smile.

On. Her. First. Try. And their voices don’t even sound alike!
Granted, they’re sisters, so their voices may be similar, but there is still a significant difference. And I would think that this thing is made specifically to shield your secrets from curious sisters.

Is there a problem here? What happened? After this revelation I tried to open it, and it manages to keep me locked out, where I should be. My wife is not able to enter the realm of secrets either. But little sisters, on the other hand, seem to have a bigger accuracy rate than the actual voice owners.

So, two very apparent problems:

– The diary doesn’t open when it should
– The diary opens when it shouldn’t

It turned out that the reason the oldest had a hard time getting past the passphrase was her impatience. Impatience is her middle name – she just couldn’t wait until a moment after the beep to yell 8-5-4-3; most of the time she started the password phrase at the same time as the beep. When she respected the little pause, her success rate tripled.

The second problem is a bit trickier. I’m trying to imagine if and how this toy was tested. Security for kids is probably not very high on most stakeholders’ priority lists. They probably tested a couple of scenarios with kids of different ages and genders, maybe a deep-voiced parent here and there. Did they include voices from siblings? Siblings aren’t corner cases – they represent realistic real-life scenarios, with a high probability of diary sneaking. Granted, the kids don’t really make much of it, but the oldest was not amused when she found out about the security breach. If you make these bold claims about your product, and your product’s whole advertising and unique selling proposition is centered on this very feature, can you please make sure that it – you know – “works”?

VTech – frustrating tester parents since 2013.

Exploratory bug diagnosis

The prologue

At the Let’s Test conference last week, I attended a half-day tutorial on bug diagnosis by James Lyndsay, in which we tried to analyze the actions of testers when pinpointing bugs. We did all this by identifying our actions during some bug diagnosing exercises. My learnings kept lingering in the back of my mind throughout the conference (which was excellent, by the way). When I noticed on the way back home that something was wrong with the songs on my iPhone play-list, I decided to test my newly learned diagnosing-fu by describing my learnings in an exploratory essay while trying to find out what the problem really is.

At the time of writing, I don’t know what the cause of that problem is, yet. I will document my knowledge as it evolves (hopefully). So cover me, I’m going in…

The problem

On the way from Stockholm to Brussels, at a cruising altitude of 10,000 metres, the hostess tells us it is safe to turn all electronic devices back on. I whip out my phone and start flipping through the albums that were uploaded via a newly created play-list. I select an album and hit play. But something’s amiss – that familiar album sounds less familiar this time around. It takes a while before I realize that the album didn’t start with its opening song. Did I hit shuffle unknowingly, by any chance? That has happened before… Nope, shuffle is off. I go to the details of the album and notice that the first song is not there.

First probing

Strange, that. I initially dismiss it as a one-off, but on the next album I listen to, the same phenomenon occurs. All songs are there but the opening one. I check the other albums and it turns out that half of the albums uploaded to my play-list lack their first song. If a bug is something that bugs a user, this must be a capital-B one. Having to listen to incomplete albums seriously bugs me; what’s more: it takes away my desire to listen.

[I notice how I react quite emotionally to the strange behavior. Emotions are a powerful oracle. Although I have only limited knowledge of the problem now, I declare it officially a bug]

Defocusing & narrowing down

I feel frustrated because further bug investigation possibilities on my phone seem limited. I put it away and decide to defocus. A little in-flight snack and some reading manage to temporarily distract me, but an hour later the bug creeps up on me again. I follow my energy, which leads me to start a static analysis. Although the symptoms are all on display here, I suspect the cause is not located in my phone but rather within my PC, iTunes or in the synching between iTunes and the phone.

[I just narrowed down the scope of the investigation with a first broad hypothesis]

From possible to plausible

I haven’t got iTunes at my disposal right now, but while I am at it, I refine my previous hypothesis into a couple of more specific ones that I will be able to confirm or refute when I get home:

  1. The original source mp3-folders contain incomplete albums
  2. The albums were uploaded wrongly in my iTunes library
  3. The albums were copied wrongly into the play-list
  4. The albums were synchronized wrongly from the play-list to the phone

[This list of 4 contains possible causes]

I can narrow these down further because hypotheses 1 and 2 are highly unlikely. The very same songs, from the very same source folders were recently used in other play-lists without a problem, and I haven’t noticed songs missing from albums in my music library.

[This makes hypotheses 3 and 4 the most plausible ones – better concentrate on these]

Checking hypotheses

Back home, I am reunited with the family, and with my iTunes library that resides on an external storage drive (which also happens to contain the original mp3-files). I quickly check options 1 and 2, because I am painfully aware of the biases in my memory and my thinking. [Although I think these two options are less likely, you never know. I’m a tester, and we know things can be different, right? Well, in this case, not so much.] The suspicious albums in the source folders are complete, as they are in the music library.

[I notice that, rather than checking all albums, I tend to focus on one sample album to check my assumptions. Comparing the same sample throughout the hypotheses increases consistency and diminishes possible distortions. A possible risk, of course, is that this could turn out to be a not-so-representative example]

This leaves me with 3 and 4, the plausible ones.

Were the albums copied wrongly in the suspected play-list? I know they have been correctly copied to other play-lists before, so I am curious to see if this can really be the case.

A-ha! Now we’re getting somewhere. Song number one, “Get Miles” got lost in the mists of the copy from library to play-list.

[I now come to realize that this was to be expected, since the synch process was designed to synch exactly what is in this play-list. Oh well, better safe than sorry. This causes me to drop hypothesis 4, because the synch did exactly what it was supposed to do]

Reflecting & diving deeper

So, time-out for a second. What is happening? The contents of some albums were corrupted somewhere during the transfer to the play-list. First thing that strikes me: why only half of the albums? Why not all of them? They were all dragged to the play-list in the same session. Is there something I did differently for some albums? I recall that I started with importing individual albums into the library, but that I then resorted to a bulk import of the remaining albums. Maybe the “bulk-imported” albums are causing this? Then again, they are correctly loaded in the library, it is when they were transferred to the play-list that things went awry.

[While diving deeper within hypothesis 3, I develop a sub-hypothesis]

3.   The albums were copied wrongly into the play-list
3.1.   The problem with the play-list has something to do with bulk imports

I check hypothesis 3.1: I do a bulk import of several albums in one folder, and then transfer those to a newly created play-list. To no avail. I drag the suspicious album to a new play-list, but all 12 songs are there. I drag a couple of similar ones in there separately. Nothing wrong with them.

[This is not really working for me, and it starts to get boring. Let’s drag all of my available albums in, at the same time]

Bingo! Many albums there with the first songs missing. That was easy. Triumphantly, I clean the play-list and repeat the same action, to confirm.

[Repeating experiments can decrease uncertainty, but can also free us from the illusion of control]

Nothing. All back to normal again. Huh? What did I do exactly, that first time? I launch several attempts to reproduce what I had first seen, including starting from a new play-list from scratch, all of them unsuccessful. It takes a while before I realize that I just copied the contents from the faulty play-list into the new one.

[So I start making mistakes. Back to square one. I abort hypothesis 3.1 and decide to catch some sleep]

New perspectives

Another day, a fresh perspective. What else is striking about this bug? It occurs to me that the solution might lie in the fact that every single one of those missing songs is the first song on the album. What does the missing “number one” tell me?

  • Order of play?
  • Something that started wrong and then went well?
  • Switching between albums?
  • Corrupting the first song and switching albums?
  • Switching from a bad to a good state?

[I am now focusing on the “why” of the first songs, whereas in the previous hypothesis I was focusing on the “why” of only half the albums]

Was there something I did that put the first songs into a special state? Suddenly, I remember… I keep forgetting that I moved the iTunes library to an external drive; it used to be on my laptop until a month ago. That means that iTunes does not recognize the songs in my library as long as the external drive is not connected to my laptop. That is no problem, as long as I don’t perform any actions on the songs, like dragging them or playing them. Otherwise the songs get a lovely exclamation mark in front of them. I disconnect the external drive and try to play the first song of the album in the library. The trusted exclamation mark appears:

I find myself investigating a new sub-hypothesis of hypothesis no. 3:

3.   The albums were copied wrongly into the play-list
3.1.   The problem with the play-list has something to do with bulk imports
3.2.   The problem has something to do with a disconnected library

Usually, the moment I notice I forgot to hook up the external drive, I quickly connect it and no harm is done. I wonder what would happen if I now connect the external drive again and drag the album to a new play-list in this state? [I have the feeling I’m nearly there. Could it really be…?]

I feel I am finally making some progress and refine 3.2 into 3.2.1.:

3.   The albums were copied wrongly into the play-list
3.1.   The problem with the play-list has something to do with bulk imports
3.2.   The problem has something to do with a disconnected library
3.2.1. Actions performed on songs while being disconnected from the library cause the songs to be skipped when copying albums to play-lists, even when the library is connected again at the time of copying

I do the experiment, and it confirms my hypothesis. I repeat the same procedure with another album and this time, too, the same behavior occurs.

[I was kind of hoping and expecting that this would happen, which is normal behavior, but which can also be a danger during testing. We tend to focus more on things we really *want* to see]

The Cause & the Trigger

This last discovery leaves me with some mixed emotions. I feel happy to know what caused the missing songs, but I am also puzzled as to why so many songs have received the exclamation mark without me noticing. I can perfectly reproduce the problem, and I’m pretty sure it won’t happen to me again, since I will now be aware of the little exclamation marks while making play-lists. I have found the cause of the strange behavior that kept me busy for quite a while, but still… I am not sure how it got triggered in the first place.

I do have a trigger hypothesis (for now): flipping through albums using the cover flow view and hitting enter or trying to play them while the library is not there marks only the first songs with an exclamation mark. I recall that I tried to listen to some albums, but not as many as ended up with missing songs. So there is still a decent amount of mystery involved.

Epilogue – is it a bug, really?

When I was first confronted with the problem, I proclaimed this a bug with capital B, because it annoyed me – the user – and it made me stop using the product. Has my opinion changed now that I have lived with the thing for a couple of days? I would argue that it has.

The behavior only seems to occur in very specific situations, and although the impact was quite big for me, it is unlikely that it will happen to me again. Is there a possibility that others will stumble upon this? Well, I stumbled upon it, so chances are that others will do too. And I certainly think there are other people like me that have their libraries on other media that are not by default connected to their computers. So yes, I think it IS a bug, although not as severe as I initially thought it was. This goes to show that we adapt severity and priority to our gradually evolving knowledge about the bug, and to the changing context (Something Rob Sabourin neatly pointed out as well in his brilliant Let’s Test keynote).

What I got to know about the problem so far leads me to believe that the product (iTunes) can be improved in a couple of ways (actually, there are plenty of other ways of improving it, but I digress). How about the following ones, for starters:

  • Doing a re-check of previously failed songs in case connectivity has been restored?
  • Removing obsolete exclamation marks when an external library is re-connected?
  • Adding a notification when trying to copy “songs not found” to play-lists?
  • Making it more conspicuous to the user when the music library is not connected?

This concludes my adventure that started on the way back from Let’s Test. I wrote this post in several stages as I was trying to get a grip on that devious bug. It didn’t turn out to be the “clean” or “clear” bug I hoped it would be. Perhaps the iTunes product managers will even say it’s cosmetic or trivial. After all, they make the call. Oh well. I learned valuable stuff in the process. I learned that writing down your thoughts along the way helps you see where your line of reasoning is heading, and what the (sometimes hidden) hypotheses are. It was all about the journey[1] of course, and not so much about the eventual outcome (which I felt was only a partial success).


[1] Although it was a personal journey, it was inspired by James Lyndsay, who encouraged me to share my thoughts on diagnosing bugs

Finding Porcini

A couple of weeks ago I found myself in southwestern France, a region which – at the time – was being struck by an unseen spell of global wetting. Summer had arrived three months early, people said. April and May had been exceptionally dry and warm; but in July, autumn was knocking on the door of our vacation home. Early autumn, I was told by our gentle host Philippe, goes hand in hand with a peculiar phenomenon: fungi frenzy, mushroom madness. All the locals get this strange misty-eyed look and head for the woods to hunt for the precious Boletus edulis, more commonly known as fungo porcino, porcini mushroom, cep, and lovingly referred to as “the brown plague”, as it tends to halt the local economy.

When Philippe invited me to join him on an early morning quest for porcini, I gladly accepted. It sounded like a treasure hunt, as fresh porcini are sold for outrageous prices to local restaurants. And who doesn’t enjoy a good treasure hunt after breakfast? We left for the woods, armed only with wooden baskets, a knife and a sturdy 4×4.

Mushroom hunting, I found out quickly, is an art in its own right. Slowly and carefully, like an old sensei, Philippe unveiled his mushroom hunting mysteries. And as the mysteries disappeared, a neat set of categorized heuristics came trickling through:

The Mission

  • “Ready Zeger? So, we’re looking for cèpes, têtes de negres and chanterelles. Leave the others be. Some are poisonous, others just don’t taste nice”
  • “When we stop? When we’re finding more mushrooms than we can carry. Or when we’re not finding anything anymore.”

Classification

  • “Wait, you don’t know what to look for, do you? Come over here for a sec. This is a vintage cèpe. A fungo porcino. Like all other boletes, it has tubes extending downward from the underside of the cap, rather than gills. The pore surface of the fruit body is whitish when young, but ages to a greenish-yellow.”
  • “Be careful there. Some mushrooms look very much like porcini. You should look under the hood. If it’s yellow, don’t touch ’em.”
  • “When you’re not really sure, scratch the bottom of the hood. When the scratch turns purple, don’t touch.”

Timing

  • “Why we’re heading out all of a sudden? Porcini tend to appear after summer peaks, depending on the weather. Usually they pop up about a week after a wet spell.”
  • “Oh no, that’s not true, Zeger. The fact that we picked loads of mushrooms here doesn’t mean I’m not coming back here tomorrow. These things grow fast. They often push overnight, you know.”

Location

  • “Location is everything. You should look at open spots in the woods, where the sun can actually reach the ground. Look, that should be a good place over there. Do you see the sunbeams peeking through the leaves? Let’s head over there.”
  • “When you find one, mind your step. Where there is one, there are many. You might crush some perfectly good ‘shrooms hidden under some leaves or grass.”
  • “Spotting porcini takes a trained and experienced eye. Here’s a pro-tip: look for where the leaves bunch up – perhaps they are being pushed up by a growing mushroom.”
  • “Don’t spend your time looking near ferns, man. Ferns grow on intensely acid soil, porcini don’t”
  • “Hey Zeger! You see this black beauty here? This is a tête de nègre, a particularly tasty and expensive kind of cep. Look for them near oak trees.”
  • “This here’s a chanterelle. If you spot two of them, follow the line that connects them; they always grow on a straight line.”

Picking technique

  • “Use a knife. Never ever pull mushrooms out of the ground. Cut them. If you damage their mycelium, they won’t grow again next year.”
  • “Remember: be gentle, cut them near the bottom of the stem in a straight line. Don’t break the hood.”
  • “Wait! Don’t cut the really small ones – they’ll be worth much more later on.”

I was soaking with sweat after a couple of hours of intense scouting. In a short timespan, Philippe managed to transform me into a die-hard mushroom hunter. A novice still, but I felt I was learning quickly. Philippe’s heuristics (not best practices, mind you) helped me discern the good from the bad, find porcini hidden under leaves, and spot chanterelles in a neat straight line. I even developed my own heuristic as I went along: I started looking alongside paths through the woods – plenty of chances for the sun to peep through the canopy of leaves, and mushrooms are easier to spot there since the vegetation is less dense.

Testing porcini

As I was wandering through the woods with eagle eyes and at a snail’s pace, it all felt strangely familiar. When Philippe said “Where there is one, there are many”, it struck my tester chord. Here I am, a tester, looking for mushrooms, which doesn’t seem to be all that different from looking for bugs. No wonder I liked it so much. I also realized that when I’m looking for bugs, I use these kinds of heuristics all the time, but all too often I’m not very aware of them. Which is a pity, because used consciously, these heuristics (“a fallible method for solving a problem”) can be a really powerful tool to boost your exploratory testing efforts.

  • Start with a mission – make sure you – and your team – know what to look for, since our conception limits our perception. Michael Bolton often quotes Marshall McLuhan on this: “I wouldn’t have seen it if I hadn’t believed it”
  • Make sure you’ve got your classification right. If you’re only interested in a specific kind of bug, maybe you shouldn’t waste time reporting others. You could consider parking them somewhere, or keeping the reporting rather lightweight by MIPping (mention in passing) them. But try to stick to your main focus for the session. And if you find a nice-looking bug, is it really? Scratch it, it might turn purple
  • Timing – as in mushroom picking – is also a factor to be considered in bug hunting. Are there typical times at which the application is less stable? When is an ideal testing time, really? Again, this is largely dependent on context
  • Location, location, location. Personally, I use many heuristics to guide me where to test. Which areas are more vulnerable? When you find one, there tend to be many others, indeed. As leaves bunching up *might* indicate a pushing mushroom, seemingly insignificant facts might be a tell-tale sign for bugs nearby: the code that developers write after a wild night of partying might not be all that good, for example. Or they can just have a bad day. I was once told by an old Native American medicine man that developers are human too
  • As some mushrooms are picked and not cut, our bag-o-techniques should enable us to deal with any situation. As Lee Copeland points out in A Practitioner’s Guide to Software Test Design: a tester should carry his techniques with him at all times, just like a handyman’s toolbox follows him around everywhere he goes. Apply a specific technique, use a particular approach when the situation calls for it.

For the record: I’m not a mushroom master, yet. I lack practice, experience and domain knowledge to attain mastery. I’m not a testing master either, as I’m in constant learning mode. For every good practice I know, in context, I am aware that there’s always another context that I need to get myself familiar with. That prospect may seem humbling and daunting to many, but I wouldn’t want it any other way. That’s Context-Driven Testing for ya.

(For more info, see The Seven Basic Principles of the Context-Driven School as a starting place. There’s a lot more where that came from).

What happened at DEWT1 doesn’t just stay at DEWT1 (June 11, 2011)

A report on the first DEWT (Dutch Exploratory Workshop on Testing) on May 11, 2011 in Driebergen, NL

What started on Twitter in November last year culminated in a first major milestone last weekend: DEWT1, our first peer – and Exploratory – Workshop on Testing (yes, the D is for Dutch, but these Dutchmen happily accepted this Belgian foreign element in their midst). Michael Bolton added to the international character by agreeing to be our special guest for the weekend.

It turned out to be an inspiring and fun event. Here’s my write-up.

The venue

Hotel Bergsebossen, Driebergen, NL

The participants

People on DEWT-y, from left to right:

Jeroen Rosink, Ray Oei, Jeanne Hofmans, Michel Kraaij, Huib Schoots, Jean-Paul Varwijk, Ruud Cox, Zeger Van Hese, Michael Bolton

Peter “Simon” Schrijver (who was roaming the earth – well, the Better Software conference – at the time) and Anna Danchenko could not attend

The pre-conference

We gathered on Friday night as a warm-up to the conference. When Michael Bolton is around, this usually means getting lured into some tricky testing puzzles, and some beers to ease the pain of messing up. And yes, jokes too. And Talisker. After we discovered the versatility of the average Dutch hotel bouncer (half bouncer, half God, ad-hoc bartender), we called it a night. A dream-ridden night it was, filled with newly learned terms, such as…

Shanghai (transitive verb) \ˈshaŋ-ˌhī, shaŋ-ˈhī\ (shanghaied / shanghaiing)

1 a : to put aboard a ship by force often with the help of liquor or a drug b : to put by force or threat of force into or as if into a place of detention

2 : to put by trickery into an undesirable position

The conference

Artful Testing

Speaking of which… during our last preparatory DEWT meet-up, my fellow DEWTees shanghaied me into doing the first talk of the day, which they promptly called a keynote to make it sound like an invitation. I thankfully accepted, though, since I wanted to get some feedback on my work-in-progress presentation. The link between art and testing has been consuming me for more than half a year now. I premiered my ideas on it at the second Writing About Testing (WAT) conference in Durango last month (if you haven’t done so already, you should check out the great WAT write-ups from Marlena Compton, Alan Page and Markus Gärtner).

Ruud (who facilitated the morning sessions) kicked off the conference and invited me to take the proverbial stage. Based on the feedback from WAT, I made some modifications to the presentation and presented it a second time. I don’t know if the subject was really fit for an early morning session, but I received some gratifying feedback that convinced me to pursue my efforts in this direction.

Transpections

Transpections (basically a way of learning and sharpening your ideas by putting yourself in someone else’s place in a kind of Socratic dialog) had been on our DEWT wish list for quite some time. We had been reading all sorts of interesting stuff on it (see James Bach’s post here, some Michael Bolton posts here and here, and Stephen J. Hill’s post here), so we asked Michael Bolton if he would be willing to give us a quick roundup on the subject. Michael agreed and made it into an interactive session, inviting us to pair up to gather information about transpections and then transpect on that. Meta-transpection for the win!

The information gathering exercise was enlightening, and brought up some good food for thought. Michael compared a transpection session with the play between a hammer and an anvil, where the hammer would be the initiator of the transpection, the anvil the person whom the initiator is transpecting with, and the metal the idea being shaped.

In the end, we didn’t get to try an actual transpection session, partly because I artfully exceeded my allotted time in the previous session. Oh well…  It was a valuable exercise nonetheless.

Lightning talks

After lunch there were some lightning talks to fight the afternoon dip:

  • Jeroen got started about the hierarchic “testing pyramid” model (testers / test coordinators / test managers) and how he wants to challenge that classical view
  • Huib followed, on “the power of knowing nothing”, about how starting with a (mentally) clean slate reduces the chances of being biased. “It’s not about the ‘what’, it’s about the ‘why’”
  • I touched upon the topic of the Baader-Meinhof phenomenon and how testers could leverage the effect by absorbing as much knowledge as possible, on several subjects (a blog post about that has been sitting in my drafts since January 2010 – I’ll try to finish it)

Introducing exploratory testing in Dutch projects

Ray then presented an experience report on how he was able to introduce exploratory testing and session based test management in classic, T-Map-style projects, using the principles he learned from Rapid Software Testing. Discussion ensued on how to prove the benefits of RST, and what the major differences between the approaches are. But we ended up talking mostly about “release advice”, and what to do when you’re asked to give it. One take-away phrase for me: “it’s not declining, it’s empowering the product manager”.

Walking break & Positive Deviance

Although we finished the previous topic way ahead of schedule, everyone felt like the last discussion drained our energy (our staying up late the night before probably didn’t help either). Jeanne, who facilitated the afternoon sessions, had the brilliant idea to just go out for a walk in the “Utrechtse Heuvelrug” national park, which turned out to be a conference session in its own right: relaxing, fun and informative. A beautiful spot, too. There was a moment where I thought we were getting lost, but here’s another lesson: do not underestimate the power of nine explorers, without a map.

Back at the hotel, Michael talked about positive deviance and positive deviants (people whose uncommon but successful behaviors or strategies enable them to find better solutions to a problem than their peers, despite having no special resources or knowledge). He also showed us a video of Jasper Palmer, a patient transporter at the Albert Einstein hospital (and a positive deviant) who became famous for his “Palmer Method”, which is now a standard life-saving practice in a number of hospitals. A mighty fascinating topic that I’ll be exploring more, for sure.

Credibility

Ruud delivered the closing presentation, on credibility – the quality of being trusted and believed. The main issues Ruud addressed were: how do we – testers – build credibility, and how do we manage to maintain it? After all, trust is built slowly, but destroyed in seconds. Simple questions, but a very complex subject indeed. “Trust” and “credibility” are relations: you can be credible to some person at a certain moment in time, but totally incredible to another. Trying to build your credibility is not always something controllable. Sure, you can do your very best to improve your credibility on a personal level, but you don’t really have an influence on how people will perceive you. Ruud then explained how he tries to build credibility. He impressed me with the personal mnemonic he developed, and the matching artwork as a personal reminder to stick to these principles:

STYLE

  • Safety language
  • Two ears one mouth
  • Yes but
  • Lighten up a little
  • Empathy

I’m not going into detail here, because I specifically want Ruud to finish that blog post he’s been mulling over for ages now. So, yes Ruud, the pressure is on. You’ve got some great material – time to share it with the world!

DEWT1 ended with drinks, testing games and dinner. I ended the day way more energized than I started it, which is always a good sign (silly extroverts like me get fueled by events like this). DEWT1 rocked. It was informal, informative and entertaining. When is the next?