Feynman on naming

How Feynman’s take on naming things applies to testing

Feynman’s father Melville played a big role in shaping little Richard’s way of thinking. He used to read him bedtime stories from the Encyclopedia Britannica.

“See this Tyrannosaurus Rex over here? It says here that this thing was 25 feet high, and the head was six feet across. That means that if it stood in our front yard, it would be high enough to put his head through the window, but not quite because the head is a little bit too wide, it would break the window as it came by”.

He always tried to translate things into some kind of reality, so little Richard would be able to figure out what it really meant, what it was really saying.

Melville would also take his kid for long walks in the Catskill Mountains, telling him about nature and explaining that in order to really *know* something, you should start observing and noticing instead of merely naming (a thing most of his classmates seemed to do):

“You can know the name of a bird in all the languages of the world, but when you’re finished, you’ll know absolutely nothing whatever about the bird… So let’s look at the bird and see what it’s doing — that’s what counts. I learned very early the difference between knowing the name of something and knowing something.”

From “The Pleasure of Finding Things Out” (1981)

I think the above quote illustrates a phenomenon that occurs all too often in software testing: the nominal fallacy, the assumption that by applying a label or name to something, you have explained it.

What about boundary value testing (or domain testing), for instance?

“First, we identify the boundaries, then we identify tests at each boundary. For example, one test each for >, =, <, using the first value in the > range, the value that is equal to the boundary, and the first value in the < range”.

A pretty straightforward and effective technique, right? We think we have mastered it, until we realise that most textbooks only talk about known, visible boundaries. What about the boundaries that are not known, not even to the developers who wrote the code? Most of the time, the software’s complexity stretches far beyond our imagination.
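
To make the textbook version of the technique concrete, here is a minimal sketch of those three-point tests in Python. The age-check rule and the accepts_age function are hypothetical, purely for illustration; they are not taken from any of the systems discussed here.

```python
# A minimal sketch of three-point boundary value testing, assuming a
# hypothetical requirement: "registration is allowed from age 18 upward".

def accepts_age(age):
    """Toy system under test: accepts ages of 18 and above."""
    return age >= 18

def test_boundary_at_18():
    # One test on each side of the documented boundary, plus the boundary itself.
    assert accepts_age(17) is False  # first value below the boundary (<)
    assert accepts_age(18) is True   # the boundary value itself (=)
    assert accepts_age(19) is True   # first value above the boundary (>)

if __name__ == "__main__":
    test_boundary_at_18()
    print("All three boundary tests passed.")
```

Of course, this only exercises the boundary we were told about, which is exactly where the trouble starts.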

In case you need convincing: let’s revisit the ParkCalc exercise that took place a couple of months ago (here’s a good write-up of that event by Selena Delesie). Matt Heusser put out a challenge to test ParkCalc, a small app on the Grand Rapids Airport website that calculates parking fees. The page provided date and time pickers for entry and exit, and five different kinds of parking to choose from. Quickly, a test flashmob formed via Twitter. Testers from all over the globe jumped on the bandwagon. James Bach joined in and challenged everyone to beat his highest total parking fee. Extreme testing ensued. What followed was a good exercise in factoring, investigation and on-the-fly test design. And it happened to illustrate the complexity of boundary value analysis as well.

To get an even better idea of this complexity, there’s always Cem Kaner’s online course on domain testing. Do we know boundary value analysis because we know its most basic definition?

I’m not trying to open Pandora’s box here, but these nominal fallacies also apply to testing certifications that mainly focus on definitions. Naming things isn’t enough. As Feynman put it: knowing something requires practice and observation. The benefit? Once real understanding kicks in, there’s no need to memorize anything anymore.

Of course, all this isn’t new. Half of the star-cross’d lovers were already aware of it, way back in the late sixteenth century:

“What’s in a name? That which we call a rose
By any other name would smell as sweet.”

(Juliet Capulet)


Happy two thousand and System.NullReferenceException!

A comment on the Y2.01K bugs detected at Symantec and the DSVG bank group.

First of all, happy 2010 everyone!

And a happy new year to the German banks as well, although it must have been a not-so-happy one for the DSVG bank group. Yesterday, they issued a statement that some 20 million debit cards issued by the banks belonging to the group were affected by a “millennium bug”-like problem. Apparently the problem stemmed from a chip on the cards which, due to a programming fault, wouldn’t correctly process the number 2010. The group said cash machines were adjusted hours after the problem emerged to ensure that customers could withdraw money, but that there might still be problems using some debit-card terminals. Those should be fixed by Monday, it said.

Monday? As in Monday, January 11th? As in “almost a week from now”? In the middle of the winter sales period, with the number of payment transactions generally hitting historical highs (who pays in cash nowadays?), that seems kind of disastrous to me.

The same problem hit Symantec as well: the Symantec Endpoint Protection Manager (SEPM) server treated all dates later than December 31, 2009, 11:59 pm as “out of date”.

Back in 1999, computer experts widely believed that hardware and software systems would fail as the clocks rolled over to the year 2000, because computers and other devices that used only two digits to represent the year would mistake the year 2000 for the year 1900. In the end, however, the so-called “millennium bug” caused few problems, because a lot of companies had anticipated the turn of the millennium with all possible resources.

The computer software business was booming at the time. There were lots of new jobs. Granted, most of them were repetitive and rote programming and testing jobs, but jobs nonetheless. Freshly graduated students were sent off to bootcamps to become ruthless programmers. I still dream about riding dinosaurs while yelling COBOL commands at them – but I digress. There was also a high demand for testers, and a wide range of different people were transformed into software testers. Finally, companies were acknowledging the need for testing. Great! But apparently, not all testing (and programming, for that matter) was up to standard.

Now I don’t know the exact processing that happens at Symantec or in the DSVG bank terminals, but this does look like a sibling of that good old millennium bug. Maybe it was the result of a quick and dirty Y2K fix in which programmers put in a simple rule along the lines of “if the two-digit year is less than 10, it’s 20xx; otherwise it’s 19xx”. In that case, this unwanted emergent behaviour doesn’t seem too hard to detect. Or does it?

My 2 cents’ worth: this is what you get when you perform scripted testing based on poorly thought-out boundary value analysis (aka domain testing) without exploring the software to learn more about the risks. Or maybe they did explore, but focused only on that one well-known boundary called 2000. In both cases, the testers failed to acknowledge other boundaries in the software that simple conversations with the developers might well have revealed. Michael Bolton talked about this in his EuroSTAR tutorial this year: the actual boundaries in a system may not be the ones we are told about – that’s why we need to explore. He also wrote this interesting article (pdf) on domain testing in Better Software magazine.
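
Purely to illustrate the kind of quick fix I’m speculating about (I have no idea what the actual Symantec or bank-terminal code looks like), here is a sketch of such a two-digit-year windowing rule, together with three-point checks around the hidden boundary at 10 that would have exposed it:

```python
# A hypothetical "quick and dirty" Y2K windowing fix, sketched for illustration
# only; this is speculation, not the actual Symantec or bank-terminal code.

def expand_year(two_digit_year):
    """Expand a two-digit year with a fixed window: 00-09 -> 20xx, everything else -> 19xx."""
    if two_digit_year < 10:
        return 2000 + two_digit_year
    return 1900 + two_digit_year  # silently wrong from 2010 onward

# Three-point checks around the *hidden* boundary at 10, the one nobody
# mentioned in any spec. The checks at and above the boundary fail.
for two_digit, expected in [(9, 2009), (10, 2010), (11, 2011)]:
    actual = expand_year(two_digit)
    status = "OK  " if actual == expected else "FAIL"
    print(f"{status} expand_year({two_digit:2d}) = {actual} (expected {expected})")
```

A tester who only probed the famous boundary at 2000 would sail right past the window edge at 10 that a fix like this quietly introduces.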