A pictorial challenge: Deconstruction / Denotation / Connotation

I’ve been thinking lately about deconstruction (a term first introduced by the French philosopher Jacques Derrida) and its applicability to software testing. According to Wikipedia, this is

an approach used in literary analysis, philosophy or other fields to discover the meaning of something to the point of exposing the supposed contradictions and internal oppositions upon which it is founded – showing that those foundations are irreducibly complex, unstable, or impossible

Deconstruction is also an essential part of semiotics and media studies, where it is used to pick images apart through the use of fine detail. We are surrounded by images in everyday life, and we think we are able to read and understand everything we see, taking our visual literacy and cognitive abilities for granted. Deconstruction can help us understand our decoding processes and how we derive meaning from all that surrounds us.

Within deconstruction, we have denotations and connotations:

  • A ‘denotation’ is the first level of signification, a literal meaning on a basic, dictionary level. This is the ‘obvious’ or ‘commonsense’ meaning of something for someone: this thing is pink, it is a bicycle.
  • The term ‘connotation’ is at the second level of signification. It refers to ‘personal’ associations (ideological, emotional) and is typically related to the interpreter’s class, age, gender, ethnicity and so on. Connotations are numerous, and vary from reader to reader. The above-mentioned bicycle is rather small, so it probably belongs to a teenager. It is pink, so perhaps it is a girl’s. But it is flashy and eye-catching too, and might therefore connote that its owner is an extrovert. If you once fell off a bicycle, you may even associate this bicycle with negativity and pain.

I think a large part of what we do as testers is deconstructing, in a way. We try to make sense of something by uncovering meaning (intended or unintended). We aim to derive meaning from different angles. We deconstruct by applying factoring (Michael Bolton defined this as “listing all the dimensions that may be relevant to testing it”) to objects, images and software – it can be useful to list as many different hypotheses as possible.

So, what about *your* deconstructing skills?

Since testers do seem to like challenges, here’s one for you all to enjoy.

Click on the thumbnail for a larger picture.

What can you find out about this picture?

What does it tell you?

What story does it tell?

Can you derive context?

What are you assuming? Why?

Which heuristics did you use?

I’m not revealing the copyright details just yet – no spoilers. Additional info will be added later. Enjoy!

The importance of discussion

Feynman on the importance of discussion

While I was on holiday, I immersed myself a bit more in the Feynman universe. And I must say – the combination of simmering French sun, lazy poolside lounging and Feynman’s scientific and philosophical subjects worked surprisingly well. The result was like a tasty cocktail – the kind that gives you a light buzz in the head and leaves you wanting more.

Consuming too much of it would probably have given me a nasty headache too, but that didn’t really happen. The only lasting thing I got out of it was the desire to write some of the stuff down before I forget. So here goes…

In his 1964 lecture called “The Role of Scientific Culture in Modern Society”, Feynman states:

 “I believe that we must attack these things in which we do not believe.”

“Not attack by the method of cutting off the heads of the people, but attack in the sense of discuss. I believe that we should demand that people try in their own minds to obtain for themselves a more consistent picture of their own world; that they not permit themselves the luxury of having their brain cut in four pieces or two pieces even, and on one side they believe this and on the other side they believe that, but never try to compare the two points of view. Because we have learned that, by trying to put the points of view that we have in our head together and comparing one to the other, we make some progress in understanding and in appreciating where we are and what we are. And I believe that science has remained irrelevant because we wait until somebody asks us questions or until we are invited to give a speech on Einstein’s theory to people who don’t understand Newtonian mechanics, but we never are invited to give an attack on faith healing or astrology–on what is the scientific view of astrology today.”

“I think that we must mainly write some articles. Now what would happen? The person who believes in astrology will have to learn some astronomy. The person who believes in faith healing will have to learn some medicine, because of the arguments going back and forth; and some biology. In other words, it will be necessary that science becomes relevant. The remark which I read somewhere, that science is all right so long as it doesn’t attack religion, was the clue that I needed to understand the problem. As long as it doesn’t attack religion it need not be paid attention to and nobody has to learn anything. So it can be cut off from modern society except for its applications, and thus be isolated. And then we have this terrible struggle to explain things to people who have no reason to want to know. But if they want to defend their own points of view, they will have to learn what yours is a little bit. So I suggest, maybe incorrectly and perhaps wrongly, that we are too polite.”

It strikes me how relevant this out-of-context quote still is after almost fifty years.

We cannot overestimate the importance of a critical mindset. Testers may need it even more than anybody else. Sometimes we just need to attack common beliefs that have, in a way, become axioms. I think it was Mark Twain who once said: “It ain’t what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.”

So, we need more discussions in our line of work – they’re a surefire way to advance the testing craft. True, there are plenty of discussions and controversies within testing already – the different schools of testing come to mind. But what I sometimes feel is lacking is a desire to understand where the “other side” is coming from. Why are they thinking the way they think? What are their beliefs and motives? Can we prove their beliefs to be false?

I think I’ll make this my personal mantra:

  • Attack, but don’t attack what you don’t understand
  • Be credible
  • Be reasonable

Feynman on naming

How Feynman’s take on naming things is applicable to testing

Feynman’s father Melville played a big role in shaping little Richard’s way of thinking. He used to read him bedtime stories from the Encyclopedia Britannica.

“See this Tyrannosaurus Rex over here? It says here that this thing was 25 feet high, and the head was six feet across. That means that if it stood in our front yard, it would be high enough to put his head through the window, but not quite because the head is a little bit too wide, it would break the window as it came by”.

He always tried to translate things into some kind of reality, so little Richard would be able to figure out what it really meant, what it was really saying.

Melville would also take his kid for long walks in the Catskill Mountains, telling him about nature and explaining that in order to really *know* something, you should start observing and noticing instead of merely naming (something most of his classmates seemed to do):

“You can know the name of a bird in all the languages of the world, but when you’re finished, you’ll know absolutely nothing whatever about the bird… So let’s look at the bird and see what it’s doing — that’s what counts. I learned very early the difference between knowing the name of something and knowing something.”

From “The Pleasure of Finding Things Out” (1981)

I think the above quote illustrates a phenomenon that occurs all too often in software testing: the nominal fallacy. Basically, this means applying a label or name to something and thinking you have explained it.

What about boundary value testing (or domain testing), for instance?

“First, we identify the boundaries, then we identify tests at each boundary. For example, one test each for >, =, <, using the first value in the > range, the value that is equal to the boundary, and the first value in the < range”.
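
To make the recipe concrete, here’s a minimal sketch in Python – the function name and the 1-to-100 range are invented purely for illustration:

    # Textbook boundary value tests for a hypothetical accepts_quantity(n),
    # specified to accept whole quantities from 1 to 100 inclusive.
    def accepts_quantity(n):
        return 1 <= n <= 100

    # Lower boundary: one test each for <, =, > the boundary value.
    assert accepts_quantity(0) is False    # first value below the range
    assert accepts_quantity(1) is True     # the boundary itself
    assert accepts_quantity(2) is True     # first value inside the range

    # Upper boundary: the same pattern.
    assert accepts_quantity(99) is True    # last value inside the range
    assert accepts_quantity(100) is True   # the boundary itself
    assert accepts_quantity(101) is False  # first value above the range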

A pretty straightforward and effective technique, right? We think we’ve mastered it, until we realise that most textbooks only talk about known and visible boundaries. What about the boundaries that are not known, not even by the developers who wrote the code? Most of the time, the software’s complexity stretches far beyond our imagination.
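
To illustrate what such a hidden boundary might look like, here’s a small, entirely made-up Python sketch – the fee function, the internal 24-entry table and the rates are all hypothetical, not taken from any real application:

    # The (imaginary) spec only says: "parking costs 2.00 per hour".
    # The implementation, however, keeps an internal 24-entry rate table
    # and silently switches to a different formula beyond it.
    RATE_TABLE = [2.00 * h for h in range(1, 25)]  # totals for hours 1..24

    def parking_fee(hours):
        if hours <= 24:
            return RATE_TABLE[hours - 1]
        return RATE_TABLE[-1] + 1.50 * (hours - 24)  # undocumented rate change

    print(parking_fee(24))  # 48.0 -- as specified
    print(parking_fee(25))  # 49.5 -- not 50.0: a boundary nobody documented

A tester equipped only with the textbook recipe and the written spec would never think of probing around hour 24 – yet that is exactly where the interesting behaviour lives.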

In case you need convincing: let’s revisit the ParkCalc exercise that took place a couple of months ago (here’s a good write-up of that event by Selena Delesie). Matt Heusser put out a challenge to test ParkCalc, a small app on the Grand Rapids Airport website that calculates parking fees. The app provided entry and leaving times, date pickers and five different kinds of parking to choose from. An instant test-flashmob quickly formed via Twitter. Testers from all over the globe jumped on the bandwagon. James Bach joined in and challenged everyone to beat his highest total parking fee. Extreme testing ensued. What followed was a good exercise in factoring, investigation and on-the-fly test design. And it happened to illustrate the complexity of boundary value analysis as well.

To get an even better idea of this complexity, there’s always Cem Kaner’s online course on domain testing. Do we know boundary value analysis because we know its most basic definition?

I’m not trying to open Pandora’s box here, but these nominal fallacies also apply to testing certifications that mainly focus on definitions. Naming things isn’t enough. As Feynman put it: knowing something requires practice and observation. The benefit? No need to memorize anything anymore once real understanding kicks in.

Of course, all this isn’t new. Half of the star-cross’d lovers were already aware of this, way back in the late sixteenth century:

“What’s in a name? That which we call a rose
By any other name would smell as sweet.”

(Juliet Capulet)

Exploring Feynman

On my intention to start exploring Richard Feynman (1918-1988)

The Plan

I’m planning to do a little blog series on the late Richard Feynman to record some of my impressions and learnings while I work my way through his intriguing oeuvre. No, the summer heat is not getting to me, yet. I’m not exactly planning on processing his massive back catalog – I’m not really into path integral formulation or the behavior of subatomic particles, let alone the superfluidity of supercooled liquid helium. I do value the scarce free time that I have – time is on my side, yes it is. Rather, I’d like to document my exploration of his more popular works and his audio and video recordings.

Exploratory learning, if you will. Dipping into it all and savoring the juicy bits, spitting out the others. And relating things to testing, of course.

Why Richard Feynman?

Feynman intrigues me, and I have nothing but deep respect and admiration for the man. He was witty, brilliant, and had a perpetual curiosity to discover new things (Tuvan throat-singing, anyone?). He opposed rote learning and unthinking memorization as teaching methods – he wanted his students to start thinking, for a change. How great is that?

On occasion, he was a totally nutty professor – a flamboyant geek. But he also happened to build a truly astonishing career, which eventually earned him the Nobel Prize in Physics in 1965.

I’m planning to gradually learn about him and post my progress here. Stay tuned!

Behold, the “better Tester”

Reaction to a discussion in the “Software Testing & Quality Assurance” LinkedIn-group

There are some strange and intriguing questions being posed in the Software Testing & Quality Assurance LinkedIn group lately. “Who are Testing Criminals?”, “Who should take the blame for defects in production software ?” or “Test Plan ! Do we really follow it !” (an awful lot of exclamation marks for a question, if you ask me), to name but a few. I read them through, bit my tongue, controlled my breathing and managed to ignore them in the end.

But two days ago, another gem was posted:

Who is better Tester?

@Who finds highest defects..?
@Who can say in 100% confidence that this product\application is BUG\DEFECTS free..?
@Who creates highest testing scenarios,Testcases,etc..?
@Who has as many as software testing certifications like ISTQB or something like that…?

Mmm… I thought about Pradeep Soundararajan’s blog post on the infamous Black Viper Testing Technique, written as a provocative response to another discussion in that same group, and decided to react.

Who is the better tester? Well… None of them, actually.

  • “Highest number of defects”
    What does that tell you, out of context? Maybe the person who found the highest number of defects has tested more hours than the rest. Maybe he tested a buggy part of the product, while others were concentrating on a more “mature” part. Maybe he only found “low priority” bugs, while others – with fewer bugs on their list – found more severe showstoppers. Which tester would you prefer?
  • “100% Bug free”
    I wouldn’t hire a tester who dares to claim that “this software is 100% bug free”. It isn’t. It can’t be. If you continue testing, more bugs will be found. This is the catch-22 of testing: we stop testing when we’re finished, but we’re never finished. So we stop testing for other reasons.
  • “Highest number of test cases/scenarios”
    Again, without context, this is a worthless statement. Maybe he works faster, but more sloppily. Maybe the other testers were investigating bugs they found while scripting. Time spent hunting and pinpointing bugs is valuable, and testers who engage in that are not “writing tests” in the meantime. Maybe other people design tests at a slower pace because they tend to talk to stakeholders about the requirements first, to be sure that they understand what is really going on. The person with the highest test case count may be designing tests badly, or testing things the wrong way. Or maybe he’s just testing the wrong things. The productivity of a tester cannot be meaningfully measured in numbers of test cases, but this seems to happen all too often.
  • “Number of certifications”.
    What do certifications tell you? That a tester was able to sit through a short course and score at least 28 out of 40 on a multiple-choice exam? Does certification actually tell you anything about the real skills the tester possesses? Is he able to think critically, to question the product in order to evaluate it?

I would say the better tester is the one who adds the most value with his testing, one who is able to explore the software and recognize problems. A good tester doesn’t just ask ‘Pass or fail?’. A good tester asks ‘Is there a problem here?’ (props to Michael Bolton for that one).

There. I said it. I actually felt relieved after writing that down. Who is better Self-therapist?