Agile Testing Days 2010 – Day 3 (Lederhosen and Certified Self-Certifiers)

October 6

Wednesday. Michael Bolton warmed up the audience with the keynote performance “How am I supposed to live without you? Testers: Get Out of the Quality Assurance Business!“, and proved once again that he’s a hard act to follow. He immediately came out of the closet as an Agile skeptic and stated what “being Agile” means to him:

  • Adhering to the Agile Manifesto
  • “Be able to move quickly and easily” (cf. the definition in the Oxford English Dictionary)
  • De-emphasizing testing for repeatability
  • Re-emphasizing testing for adaptability
  • For testers, focusing on testing skills
  • Focusing on not being fooled

Michael then defined quality as “Value to some person(s) who matter” (© Weinberg, Bach, Bolton) and said that decisions about quality are always political and emotional, and taken by people who actually have the power to make these important decisions. A little bit later, the main message of the talk jumped right at us and bit us in the face:

If you are a tester, do *you* hire the programmers? Fix problems in the code? Design the product? Allocate staff? Set the company’s strategic direction? Allocate training budgets? Set the schedule? Decide on raises? Control the budget in any way? Negotiate customer contracts? Actually choose the development model? Set the product scope? Do you decide which bugs to fix, or write the code yourself?

Did you answer “No” to most of them? Then you will probably agree that it is simply impossible to “assure” quality. But no worries – it is not our job to assure quality. What we *can* do is test, and make sure we’re damn good at it. Testing in the sense of a sapient activity, providing information with the intent of *informing* a decision, not *taking* the decision. Not to be confused with checking, which mainly aims at confirming existing beliefs. Checking is automatable and non-sapient.

Michael Bolton shifted into a higher gear, and claimed that “acceptance tests” are examples, and that examples aren’t really tests. They are checks, not tests. Acceptance tests don’t tell us when we’re done, but they do tell us that we’re not finished when they fail. They should in fact be called “rejection checks”.

I looked around me. Usually, at this point in a presentation and at this time of day, people are dozing off. Even the biggest barflies were wide awake now. He ended with a set of statements that almost read like some kind of Tester’s Manifesto:

We’re not here to enforce The Law.
We are neither judge nor jury.
We’re here to add value, not collect taxes.
We’re here to be a service to the project, not an obstacle. 

I left the room early and skipped the Q&A, since my presentation was up next. Apparently the Q&A got a bit out of hand (I suspect the A was more to blame than the Q), because the auditorium doors swung open 15 minutes late. In hindsight, I was lucky that I even had an audience; in a parallel track, Gojko Adzic was delivering one hell of a performance (a stand-up comedy routine, I was told) for an overpacked room.

No stand-up comedy in my room, but an honest “inexperience report” called “A lucky shot at Agile?“. I had ditched PowerPoint one week earlier and decided to go for Prezi, the much nicer alternative. Of course, this was a bit of a risk, but I think it turned out fine. The presentation went well, and I received some good and heartwarming feedback, which really made the rest of my day.

In case you are interested, here’s A lucky shot at agile – prezi.

<Shameless_plug>In case you’re interested in the full story, Eurostar conferences has released my paper on the subject in ebook format – available for free – here</Shameless_plug>

I stayed in the room to attend Anko Tijman‘s talk “Mitigating Agile Testing Pitfalls“. Anko’s talk revolved around five pitfalls that threaten agile teams, and what we can do to mitigate them:

  1. Not testing with the customer. We can mitigate this risk by building a relationship, building trust.
  2. Not testing as a team. Teams are collectively responsible for the quality of the product. Share knowledge not only with your testers, but with the whole team. Work on a collaborative definition of done, tackle risks.
  3. Unbalanced test strategy. Teams sometimes focus too much on unit tests or acceptance tests and postpone other test activities to the next phase. This in turn can lead to a lack of feedback. To overcome this, put more detail in the Definition of Done, schedule knowledge sessions, and share content on a wiki.
  4. Requirements are too vague/ambiguous. Collaboration is the key in overcoming this pitfall. Communicate!
  5. Tools. Focus only on tools that add value to the team and that support the practices of the team. Decide as a team which tools to use and which not.

By then it was time for lunch, which is always a good occasion to mingle with other testers, discuss and have some fun. And to ravage that German buffet, of course. I had the impression that everyone was eagerly anticipating the keynote that followed: Stuart Reid with “Agile Testing Certification – How Could That Be Useful“. It became clear that he wasn’t exactly going to be preaching to the choir.

And a controversial talk it was. Twitter servers were moaning as Stuart’s quotes and graphic interpretations thereof were launched into #AgileTD cyberspace. Strangely enough, the infamous Twitter fail whale was nowhere to be seen, which surprised me since the whole auditorium was filled with bug magnets. Stuart Reid started off by stating that it is only a matter of time before a qualification for agile testing is proposed and launched, whether we like it or not. He went on to say that if we want our industry as a whole to improve, we should exert our influence to help create a certification scheme we can truly benefit from. Fair enough. But what followed next confused me.

Stuart Reid stated that “the certification genie is out of the bottle” – what started as a good intention has spiralled out of control, and there’s no way back. This sounded like nothing more than a public dismissal of ISTQB to me, coming from one of the founding fathers. He proceeded to give an overview of the typical money flows in such a certification scheme, which was pretty enlightening. At one point, Stuart even managed to upset Elisabeth Hendrickson by stating that “it’s not because you are teaching Agile, that the training itself has to be Agile”. The movie clip of that very moment will live long and prosper on the internet. The whole “if you can’t beat them, join them” idea bothered me too, as if there were no alternatives. Instead of focusing on certifications, we could try to educate employers, starting right at the top level. Certification programs exist mainly because employers don’t really know what qualities define a good tester. For them, a certification is merely a tool to quickly filter incoming resumes. Anyway, I think it’s good that Stuart initiated the debate, which would continue for the rest of the conference.

The room was buzzing afterwards. Nothing better than some good old controversy to get the afternoon started. David Evans calmed things down again with “Hitting a Moving Target – Fixing Quality on Unfixed Scope“. He had some great visuals to support a thoughtful story. Some heavily tweeted quotes here:

  • QA in Agile shouldn’t be Quality Assurance but rather Questions and Answers
  • The product of testing is confidence (to which Michael Bolton quickly added that the product of testing is actually the demolition of false confidence).
  • Acceptance Test Driven Development (ATDD) slows down development just as passengers slow down a bus. We should measure the right thing.

Then it was Markus Gärtner‘s moment to shine in the spotlights. He presented “Alternative Paths for Self-Education in Software Testing“. Over the last year, I have come to know Markus as a passionate professional, dedicated to learning and advancing the craft – an overly active, ever-blogging guy who may have found the secret of the 27-hour day. He opened with the question “who is in charge of your career?” Is it your boss? Your employer? Your family? Your teachers from high school? None of the above – it’s YOU. If you find yourself unemployed a year from now, everything you do now contributes to how quickly you’ll be employed again.

Markus listed several ways of learning and self-improvement:

  • Books
  • Courses
  • Buccaneer Scholaring, a way of taking your education into your own hands, based on the book Secrets of a Buccaneer-Scholar by James Bach
  • Testing challenges – challenges to and by the Testing Community
  • Testing Dojos – principles: collaboration in a safe environment, deliberate practice. Usually consists of a mission which allows the testers to practice their testing and learning. Can happen with observers or facilitators, can be a good occasion to practice pair testing too.
  • Weekend Testing – a few hours of testing plus a debriefing on the weekend, according to a charter or a mission. I participated in a couple of European weekend sessions, and I must say: great learnings indeed.
  • The Miagi-Do School of Software Testing, a school founded by software craftsman Matt Heusser. It’s a zero-profit school where people can improve their skills, learn from others and share knowledge, using a belt system like in martial arts. They are not widely advertised – as Markus said: the first challenge is finding them.

Janet Gregory‘s closing keynote fitted nicely with Markus’ theme, since it was all “About Learning“. It was an inspiring talk about congruence in learning, the importance of learning, and the curiosity of children – how their unspoiled curiosity makes them natural testers. She also related learning to the agile principles, and managed to tie in neatly with Rob Lambert’s presentation about structures and creativity. A safe environment helps you learn, and she referred to trust as an important element in team safety. A blame culture is counterproductive: no one will learn anything.

After all this theory about learning, we were all yearning for some hands-on practice. The Diaz & Hilterscheid gang gave us the opportunity to practice that typically German custom called Oktoberfest. Just like last year, they dressed up in Lederhosen (I’m actually getting used to the look of José in Lederhosen, go figure) and started serving plenty of local food and one-liter glasses of beer. There was live music as well, which added to a fun Bavarian atmosphere. The evening culminated in some vivid discussions of the burning issues of the day. Well, actually there was only one burning issue: certification. Elisabeth Hendrickson was determined to get everyone mobilised for a worthy cause and whipped out her iPad, on which she had written some kind of self-certification manifesto. Someone threw a pile of index cards on the table. Elisabeth was on fire and started handing them out everywhere. “If you agree with it, copy it. If you don’t, don’t”. Index cards on tables. Pens. Beer. Lots of people copying index card after index card until their fingers cramped up. That night witnessed the birth of a community of certified self-certifiers, all of them proudly carrying the message:

We are a community of professionals.
We are dedicated to our own continuing education
and take responsibility for our careers.
We support advancing in learning and advancing our craft.
We certify ourselves.

Some people took the discussions to the hotel bar, while others decided to dance the night away. I think I even spotted some genuine limbo-ing on the dancefloor. Someone ought to tell these testers about risk…

To be continued… Day 4

Agile Testing Days 2010 – Day 2 (Defect limbo and stunt hamsters)

October 5

After a full day of playful tutorials on Monday, it was back to business on Tuesday. The actual conference kicked off with an interesting keynote by Lisa Crispin. Lisa is an author/agile tester extraordinaire/donkey aficionado with Belgian roots – a winning combination if you ask me. Her talk, “Agile Defect Management“, was all about finding a suitable approach to manage and track defects in agile projects. Lisa used the limbo analogy for defects, stating that we should strive to lower the bar on defects. I liked the analogy – but had a hard time dissociating it from all the alcoholic connotations of a classic limbo-fest, where the bar is generally lowered until the first drunkard ruptures his anterior cruciate ligaments. I think it’s time to groom my unconscious backlog a little.

In the agile/lean world, using a defect tracking system (DTS) is generally seen as wasteful, since agile teams strive for ‘zero defects’. Instead of filing bugs in a DTS, they prefer to fix the problem immediately by creating an automated test for that defect and adding it to the unit test suite.
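
To make that concrete, here is a minimal sketch of the practice in Python. The word-wrap function and its zero-width defect are invented for illustration (they are not from Lisa’s talk): instead of filing the bug in a DTS, the fix ships together with a regression check that pins the behaviour down.

```python
import unittest

def wrap(text, width):
    """Naive word-wrapper. The (invented) defect: a width of zero or less
    used to crash instead of returning the text unchanged."""
    if width <= 0:  # the one-line fix for the reported defect
        return text
    lines, current = [], ""
    for word in text.split():
        candidate = (current + " " + word).strip()
        if len(candidate) <= width:
            current = candidate
        else:
            lines.append(current)
            current = word
    lines.append(current)
    return "\n".join(lines)

class WrapRegression(unittest.TestCase):
    """The defect lives on as an automated check in the unit test suite."""
    def test_zero_width_returns_text_unchanged(self):
        self.assertEqual(wrap("hello world", 0), "hello world")

if __name__ == "__main__":
    unittest.main()
```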

I particularly liked Lisa’s “tabula rasa” idea: try to start your project *without* a defect tracking system and see what you need as you progress. Set some rules like “no more than 10 bugs at the same time” and fix the important bugs immediately. You could even use a combination of a DTS and defect cards on a board. Use the DTS for defects in production and defect cards on the story board for defects in development.

The next track I attended was “Incremental Scenario Testing: Beyond Exploratory Testing” by Matthias Ratert. He started off by explaining that they had performed exploratory testing in their project, that it was helpful for the first 2-3 test sessions, but that it became increasingly difficult for the testers to come up with new and creative test ideas, and that too many areas of the complex system remained untested.

My exploratory tester heart was bleeding at first because they dismissed exploratory testing so quickly. When I heard that they were using unskilled, untrained and outsourced labor in the form of students, without the experience or the motivation to continue in this line of work, it all made sense. No wonder the exploratory testing yielded sub-par results.

In order to cope with the testers’ lack of imagination and sense of coverage, they developed a tool (the IST tool) for Incremental Scenario Testing (IST). The tool was used to automatically generate test scenarios as a starting point, composed of preconditions, states and events. It was tweakable via all kinds of parameters to suit different contexts. The testers still had the freedom to test in an exploratory fashion within these predefined areas, without stating expected results. The tool could be configured so that important areas would appear more often in the testers’ selected scenarios.
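
As a rough illustration of how such a generator might work – a sketch of my own, with invented preconditions, states, events and weights, not the actual IST tool – think of a weighted random combiner in which higher-weighted areas simply turn up in more scenarios:

```python
import random

# Hypothetical building blocks and weights; the real IST tool's
# catalogue and parameters are not public, so these are assumptions.
PRECONDITIONS = {"user logged in": 1, "empty inbox": 1, "flaky network": 3}
STATES = {"idle": 1, "syncing": 2, "roaming": 3}
EVENTS = {"message received": 1, "low battery": 2, "incoming call": 3}

def weighted_choice(options):
    """Pick one key, with probability proportional to its weight."""
    keys = list(options)
    return random.choices(keys, weights=[options[k] for k in keys])[0]

def generate_scenario():
    """Combine a precondition, a state and an event into one scenario.
    Deliberately no expected result: the tester explores within the frame."""
    return {
        "precondition": weighted_choice(PRECONDITIONS),
        "state": weighted_choice(STATES),
        "event": weighted_choice(EVENTS),
    }

if __name__ == "__main__":
    for _ in range(3):
        print(generate_scenario())
```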

The tool as such sounded like a good solution to generate different scenarios and to spread the work for better coverage, but in my opinion it will not solve their initial problem: they were still letting unskilled and unmotivated people perform exploratory testing, which is known to be a highly skilled, brain-engaged activity. Replacing them was apparently not an option for budgetary reasons. But why not try to train them into first-class ET guerrilleros first?

For the last morning session I chose to attend “One small change to code, one giant leap towards testability” by the lively and ubiquitous Brett Schuchert (presenter of two track sessions and the open space facilitator on Thursday – how much more omnipresent can you get?). His topic was mainly technical: how to design for testability. He used the example of a dice game in which the rolling of the dice was a factor beyond our control. In order to make the design testable, we should use dependency injection: create two loaded dice with a predictable result, and feed them into the game.
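
In Python, the gist of that example might look like the sketch below. The names (DiceGame, LoadedDie) and the rules are my own assumptions, not Brett’s actual code:

```python
import random

class Die:
    """A real die: the roll is outside our control."""
    def roll(self):
        return random.randint(1, 6)

class LoadedDie(Die):
    """A test double that always rolls the value it was loaded with."""
    def __init__(self, value):
        self.value = value
    def roll(self):
        return self.value

class DiceGame:
    """The dice are injected; the game never constructs its own."""
    def __init__(self, die1, die2):
        self.die1, self.die2 = die1, die2
    def play(self):
        return self.die1.roll() + self.die2.roll()

# Production code uses real randomness...
game = DiceGame(Die(), Die())

# ...while a test feeds in loaded dice for a predictable outcome.
def test_snake_eyes_scores_two():
    rigged = DiceGame(LoadedDie(1), LoadedDie(1))
    assert rigged.play() == 2

test_snake_eyes_scores_two()
```

Because the game accepts any pair of dice, a test can slip in whatever roll it needs – that one small change is what buys the testability.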

Schuchert’s inner showmaster came to the surface when he threw some Jeopardy-style quotes at us to illustrate the importance of Test Driven Development – a design practice rather than a testing practice:

The answer is 66%. What was the question?
“What is the chance that a one-line defect fix will introduce another defect?” (Jerry Weinberg)

After a copious lunch, ‘Fearless Change’ author Linda Rising inspired the big auditorium with her keynote “Deception and Estimation: How We Fool Ourselves“. She defined deception as consciously or unconsciously leading another or yourself to believe something that is not true. Her main message was that we constantly deceive ourselves and others. She illustrated her point with the typical marrying couple at the altar. Although current studies indicate that the chances for a marriage to succeed are only 50-50, this knowledge doesn’t keep anyone from getting married. I didn’t know these odds when I decided to get married. I like to think it wouldn’t have made a difference – I’ve always liked to defy statistics.

Us humans, we are a strange lot. We are hardwired to be optimistic. We see what we want to see, so we unconsciously filter out the things we dislike. After all, we fear what we cannot control. And it’s here that estimations come into play. Our hardwiring biases our estimations – we constantly overestimate our ability to do things: coding, testing, everything. And do we ever learn from our mistakes? But we shouldn’t be too overwhelmed by this – there’s hope: achieving good-enough estimates isn’t totally impossible, if we just take small enough steps, experiment, and learn from failures as well as successes.

Software Testing Club busybee Rob Lambert provided some very good food for thought in his talk “Structures Kill Testing Creativity“. I don’t know how deliberate it was, but he did this really cool thing of standing at the door outside the room and greeting people as they came in. It certainly made me feel welcome from the beginning. He was also the first person up to then that I saw using Prezi. A kindred spirit! I sat back and enjoyed the show. The main point of his presentation was that in order to foster creativity, we need *some* structure (he used the example of a sonnet, which has *some* predefined rules to it), but that imposing excessive structure upon people and teams will suffocate creativity (my wording, not his). Rob defined creativity through the equation:

Expertise + Motivation + Imagination = Creativity

Rob then tried the Purdue creativity test on his audience – he asked us to draw the person sitting next to us in a mere 30 seconds, which led to some hilarious results (you can check some of the drawings on his blog – the speed-portrait of Lisa Crispin by the artist formerly known as Ruud Cox is pretty mindblowing). The point of the exercise was to show that when we have to share creative ideas, we constantly self-edit. We feel shy and embarrassed, even more so if the environment doesn’t feel safe to us. True. It struck me that almost everyone was apologizing for their bad portraits afterwards. Excessively structured environments don’t make a good breeding ground for creativity.

Rob Lambert told a couple of personal stories about people who actively pursued creative environments for the better. Marlena Compton moved to Australia to work at Atlassian, a cool company without excessive structures in place. He talked about Trish Koo, who works at Campaign Monitor, a company that is apparently all about people. He mentioned Pradeep Soundararajan, who started using a videocamera to film testing in progress to make the testing language more portable and universal.

© Quality Tree Software Inc.

Dynamic? Peppy? Is there a stronger word than energetic? If so, it would describe the keynote by compelling storyteller Elisabeth Hendrickson. The title of her talk rang a little bell: “Lessons Learned from 100+ Simulated Agile Transitions“. The main subject was indeed the infamous WordCount experiment, which she has done numerous times, including her tutorial on day 1 of the conference (you can find my write-up for that here). Because of non-disclosure agreements, she couldn’t use actual pictures, so she used stunt hamsters to illustrate her point. Throughout her talk, it was nice to see that our WordCount group from the day before was no bunch of forgettable dilettantes. It was déjà-vu all over the place:

  • Round 1. The computer is bored (check)
  • Round 2. Chaos (double check)
  • Round 3. Structure (triple check, at least the beginning of structure)
  • Round 4. Running on all cylinders (quadruple check)

The lessons learned: teams that struggle typically

  • Hold on tight to rules, silos and work areas
  • Have everything in progress and get nothing done
  • Sit in meetings constantly instead of creating visibility
  • Fail to engage effectively with the customer

Teams that succeed, generally:

  • Drive development with customer-provided examples
  • Seek customer feedback early and often
  • Re-shape their physical environment
  • Re-invent key agile engineering practices like ATDD and CI

The resemblance to what our team had gone through the day before was striking. I was still pondering that over dinner when it hit me that it would be my turn to perform the next day. By that time, the stage in the dining hall had been taken over by overly enthusiastic improv actors, but I wasn’t really in the mood for that kind of entertainment. I obeyed the voice in my head. Must. Prepare. Prezi.

To be continued… Day 3

Agile Testing Days 2010 – Day 1 (Agile transitions)

After a great experience at the Agile Testing Days last year, I decided to answer their call for papers early. By the time the full program was announced (somewhere in April), I had almost forgotten that I had participated. So it was a pleasant surprise to see my name listed among all those great speakers. I decided to break out of my comfort zone for once, and at the last minute I “prezi-fied” my existing presentation. Confidently stressed, I flew east to Berlin to be part of what proved to be a wonderfully memorable conference.

October 3

It was Sunday, October 3, which meant I arrived on the 20th anniversary of German reunification. The last time I had been in the city centre, Berlin was still a divided city. I was 16, and overwhelmed by the contrast between the neon-lit Ku’damm and the clean but spookily deserted East. Going through Checkpoint Charlie to the East – and happily back again, while others desperately wanted to but couldn’t – still ranks among the most awkward moments in my otherwise pretty uneventful youth. Sure, the Alexanderplatz, Ishtar Gate and Pergamon Museum impressed me, but why a country would deliberately lock up its people was totally beyond my 16-year-old self.

So, with a few hours of daylight left, I headed to some sites that I still remembered from the days of yore. The Brandenburger Tor was now the backdrop for big festivities: music, beer, bratwurst and parachute commandos executing a perfect landing at Helmut Kohl’s feet at the Reichstag. No concrete walls to be seen. Unter den Linden completely opened up again. It felt great. Sometimes nostalgia isn’t what it used to be.

October 4

© Stephan Kämper

The morning of tutorial day, the Seminaris Hotel conference lobby was buzzing with coffee machines and activity. I had enrolled for Elisabeth Hendrickson‘s “Agile transitions” tutorial, which turned out to be an excellent choice. Eight people were taking part in the WordCount experiment, of which Elisabeth recounts an earlier experience here. After a round of introductions, we divided roles within the WordCount company: tester – developer – product manager – interoffice mail courier (snail mail only) – computer (yes, computers have feelings too) or observer. Strangely enough, I felt this natural urge to be a tester. I didn’t resist it – why should I? Elisabeth then proceeded to explain the rules. We would play a first round in which we had to stick to a set of fixed work agreements, like working in silos, formal handoffs and communicating only through the interoffice mail courier. The goal of the game was basically to make our customer happy by delivering features and thus earning money in the process.

We didn’t make our customer happy, that first round. On the contrary – confusion, chaos and frustration ensued. Testers belting out test cases, feeding them to the computer, getting back ambiguous results. Developers stressed out, struggling to understand the legacy code. Our product manager became hysterical because the customer kept harassing him for a demo and no-one was responding to his messages. The mail courier was bored, our computer felt pretty abandoned too. It all felt wonderfully unagile.

In round 2 we were allowed to change our work agreements any way we wanted, which sounded like music to our agile ears! We co-located immediately and fired our mail courier. We organised a big kickoff-meeting in which the customer would explain requirements and walk us through the application. We already visualised the money flowing in. In theory, theory and practice are the same. In practice – not so much. We spent a whole round discussing how we would work. We lost track of time. There were no new features, and no money. We felt pretty silly.

Round 3 was slightly better. We were able to fix some serious bugs and our first new features were developed, tested and working. But just when we thought we were on a roll, our customer coughed up some examples that she really wanted to pass too. They didn’t. 

Pressure was on in round 4, which was going to be the last one of the day. Would we make history by not delivering at all? Well, no. We actually reinvented ATDD, by letting the customer’s examples drive our development. This resulted in accepted features, and some money to go with that. We managed to develop, test and demo some additional functionality too. A not-so-epic win, but a win nonetheless. WordCount was still in business. If there had been a round 5, I’m pretty sure WordCount Inc. would have made a glorious entrance on the Nasdaq stock exchange.

Elisabeth did a great job facilitating the discussions in between rounds and playing a pretty realistic customer. All the participants made for a very enjoyable day too. The day really flew by and ended with a great speakers’ dinner on the shores of the Schlachtensee. A Canadian, an American, a German and a Belgian decided to walk back to the hotel instead of taking the bus. It sounds like the beginning of a bad joke, but that refreshing 5 km walk through the green suburbs was actually the perfect closure of a terrific day. And without a map, I might add. As the rapid Canadian pointed out later: documentation is overrated.

What a picture can tell you – an exercise

Shortly after I posted the pictorial challenge on my blog, I had a conversation with Thomas Ponnet on Twitter:

ThomasPonnet: I could srt deconstructing but 4 wht reason? So far thr’s no context so therefore thr cnt B a story IMO. Interesting though

TestSideStory: I think there are clues in the pic that might give some context away.It’s just an exercise in seeing/interpreting signs imo

ThomasPonnet: I’m difficult on purpose 😉 w/out context, no, w/out oracle I can’t infer, I can only guess, do testers do that? Yes,for fun

TestSideStory: Testers do guess. They call that making an hypothesis 🙂 Then they see where they get from there.

About the hypothesis thing: I meant that when I said it. We make guesses all the time. We make hypotheses, we assume some things and we act accordingly. We perform experiments to see whether we can confirm our hypotheses. If not, time to re-model. 

We construct models in our mind, but are these models ever correct? And even if they prove to be incorrect, they are often more useful than having no models at all. Remember the old adage: “when you’re lost, any old map will do”.

Back to the challenge. Quite a lot of people visited, but no-one actually rose to the challenge, which leads me to assume that people either:

  • were confused
  • were not interested
  • didn’t see the point
  • didn’t have the time
  • couldn’t care less

After this sobering insight I decided to eat my own dog food and have a go at it myself. Click on the thumbnail for a higher-resolution picture.

Can we derive any context from this picture?

On the denotation side: it’s just a little boy sitting on top of a house. We only see the upper floor. On the balcony, there’s a little blue bike, a blue baby bath, a blue screen door to keep the mosquitoes out, a birdcage with two birds in it, a pot with a plant, and some broken but repaired windows. On top, we see two old advertising signs. One says “Zanchetti”, the other is only partially visible and reads “La Mejor Ropa de Tr…”

Any connotations? Situational context? La mejor ropa… “Ropa” means clothing in Spanish, so the advertising signs seem to point us to a Spanish-speaking country. Somewhere in Mexico, maybe? Or Spain? Latin America? South America?

Let’s google the two terms on the signs.

Zanchetti mejor ropa de

Mmm… primarily hits from Argentina, some from Chile and Colombia too. Maybe we should narrow down the search. The two signs seem to belong together, so Zanchetti is probably a clothing factory. Let’s try another search and see what happens.

Zanchetti ropa

The first result from this search leads us to an “indumentaria online” (clothing online) site (thanks, Google translate!), which basically seems to be a collection of stores that sell working clothes. So we can also complete the publicity signs by now: “Zanchetti, la mejor ropa de trabajo”. The last store in the list rings a bell:

ZANCHETTI HNOS.
Vieytes 1876 (1275) Bs.As.

The Zanchetti brothers are in Argentina, all right. Buenos Aires, to be exact. Enter Google Maps, that trusted friend of the geographically challenged.

This is the result.

Note that the address shown isn’t actually Vieytes 1876, which is a smaller street in another part of the city.

Of course, we can’t just assume that Vieytes 1876 is the address where the picture was taken. Advertising signs are typically mounted on tall buildings in commercial areas, not too far from the neighborhood of the business itself.

The building looks old, and the advertising signs are weathered, seemingly decades old. Another clothing website says that the Zanchetti brothers started their business in 1962, and the signs sure look like they stem from that era. The fact that they once decided to place the signs here indicates that this probably used to be a big commercial or industrial site. The faded signs also suggest that this area is no longer mainstream, and deteriorating. The building looks like a residential building now, so its function may have changed over time. Could it be that this was once a thriving part of the city, but that the city has since evolved elsewhere, leaving this area to deteriorate?

In spite of the building, the boy on top of the roof doesn’t look poor. He is meticulously dressed in what looks like a sports outfit. This may indicate that his parents are not too wealthy, but take pride in giving their children the best life possible (or that the boy is a small-time drug dealer successfully supporting himself – but I give him the benefit of the doubt). He looks comfortable up there, as if this is his usual hide-out/vantage point. He’s ignoring the rooftop view, probably because he’s pretty familiar with the surroundings.

The blue bicycle stowed away on the balcony is likely the boy’s, and the fact that it’s up there and not downstairs, ready for use, may be an indication that the boy and his family only live in (or own) the upper floor. This probably means there’s not much room in the apartment. So maybe the rooftop is where he goes to have some time for himself. He sure looks a bit lonely up there.

But there’s other interesting stuff on the balcony.

The blue baby bath. It indicates that there is at least one other (younger) child in the family. We can’t say for sure if it’s still a baby. After all, the bath could be an old one, waiting to be discarded. 

The blue screen doors with what appear to be children’s stickers on the inside. The birdcage with what look like parakeets (there is a towel on top of the cage, which indicates that they are covered up at night and spend the night outside too). The broken and poorly mended glass in the doors. All these things imply a rather poor but caring and happy family.

Anything I missed?

A pictorial challenge: Deconstruction / Denotation / Connotation

I’ve been thinking lately about deconstruction (a term first introduced by the French philosopher Jacques Derrida) and its applicability to software testing. According to Wikipedia, deconstruction is

an approach used in literary analysis, philosophy or other fields to discover the meaning of something to the point of exposing the supposed contradictions and internal oppositions upon which it is founded – showing that those foundations are irreducibly complex, unstable, or impossible

Deconstruction is also an essential part of semiotics and media studies, where it is used to pick images apart through the use of fine detail. We are surrounded by images in everyday life, and we think we are able to read and understand everything we see, taking our visual literacy and cognitive abilities for granted. Deconstructing can help us understand our decoding processes and how we derive meaning from all that surrounds us.

Within deconstruction, we have denotations and connotations:

  • A ‘denotation’ is the first level of signification, a literal meaning on a basic, dictionary level. This is the ‘obvious’ or ‘commonsense’ meaning of something for someone: this thing is pink, it is a bicycle.
  • The term ‘connotation’ is at the second level of signification. It refers to ‘personal’ associations (ideological, emotional) and is typically related to the interpreter’s class, age, gender, ethnicity and so on. Connotations are numerous, and vary from reader to reader. The above-mentioned bicycle is rather small, so it probably belongs to a teenager. It is pink, so perhaps it is a girl’s. But it is flashy and eye-catching too, and might therefore connote that its owner is simply an extrovert. If you once fell off a bicycle, you may even associate this bicycle with negativity and pain.

I think a large part of what we do as testers is deconstructing, in a way. We try to make sense of something by uncovering meaning (intended or unintended). We aim to derive meaning from different angles. We deconstruct by applying factoring (Michael Bolton defined this as “listing all the dimensions that may be relevant to testing it”) to objects, images and software – it can be useful to list as many different hypotheses as possible.

So, what about *your* deconstructing skills?

Since testers do seem to like challenges – here’s one for you all to enjoy.

Click on the thumbnail for a larger picture.

What can you find out about this picture?

What does it tell you?

What story does it tell?

Can you derive context?

What are you assuming? Why?

Which heuristics did you use?

I’m not revealing the copyright details just yet – no spoilers. Additional info will be added later. Enjoy!

Children’s own pass/fail criteria (and nursery rhymes)

One month ago, my oldest daughter (6) took up rope skipping. The last time I had seen her practising, two weeks earlier, she was still having trouble getting the rope neatly over (and under) herself, but yesterday she was able to complete several jumps in one go, in one fluid movement. It was the first time I had seen her do that, so I was pretty impressed.

She was clearly in learning mode. I sat down to observe her more closely. 

– “Wow, where did you learn all that?”

– “I’ve been watching older girls do that in school, daddy. Watch”.

She started jumping and counting out loud.

– “One, two, three, four, five, six, …”

She tripped on the rope.

– “Woohoo! Six!”

– “You go, girl!”

– “Again! One, two, three, four, five, nooooo…”

– “Five is good”.

– “No, daddy, five is not good. Again!”

She repeated the process a couple of times. She jumped seven (“Yes!”), four (“Nooo!”), five (“Pfff!”), six (“Yippie!”). I started noticing a pattern. It struck me that she alternated frustration with joy, and she let it depend on the number of jumps. Time for some questioning.

– “Why are you happy with anything above or equal to six, but unhappy with anything lower?”

– “It has to be at least six, daddy”.

– “Why six?”

She seemed really annoyed that I didn’t see her point. She thought I was pulling her leg.

– “Because I’m _six_ years old, daddy. Didn’t you know? What else could it be?”.

I was totally flabbergasted. She had managed to impose some totally arbitrary pass/fail criteria on herself. Where did that come from? I thought that using pass/fail tests actually sabotages kids’ natural learning processes? But this appeared to be entirely her own idea. No-one told her that she had to make at least six.

I wondered – maybe she just chose her age as a starting point, to set some initial learning goals for herself? Was she planning on raising the bar later on, once reaching six had become too easy? Unfortunately I didn’t have the chance to follow up on that – lunchtime!

Flash forward to work. All this reminded me of commonly defined pass/fail criteria such as

“90% of all tests must pass”

Really? 

In “Are Your Lights On?”, Jerry Weinberg uses the well-known “Mary had a little lamb” nursery rhyme to show how a seemingly straightforward statement is open to multiple interpretations, depending on which word you emphasize. An invaluable heuristic when looking at requirements. Why not try it on the familiar pass/fail criterion stated above?

“90%”? What if the tests that would have revealed some serious errors happen to be in that 10% you so confidently dismissed? Why not 89 or 91?

“All”? You know “all” possible tests that can be performed? Are they all documented? Some of them might still be residing in your head. What if in the meanwhile we performed some more important tests that revealed serious risks? Are these tests part of “All”?

“Tests”? Do you only count scripted tests, or do you also take exploratory ones into account? What about important usability issues some users might have found? Or acceptance test checklists? Or automated checks? 

“Must”? What if not all 90% passes? Does this mean your solution is without value? The customer might value other things than you do. Is it up to you to decide how much value is in there?

“Pass”? What about behavior that is totally acceptable for your client, but that we find annoying? Pass or fail? What about tests that pass all steps, but reveal important problems as a side-effect? Sometimes a test’s pass/fail decision is not binary.

My daughter went to school this morning and – for the first time – took her own jump rope with her. I wonder what percentage of her rope-jump cases will pass this time.