Children’s own pass/fail criteria (and nursery rhymes)

One month ago, my oldest daughter (6) took up rope skipping. The last time I had seen her practising, two weeks earlier, she was still having trouble getting the rope neatly over (and under) herself, but yesterday she completed several jumps in a row, in one fluid movement. It was the first time I had seen her do that, so I was pretty impressed.

She was clearly in learning mode. I sat down to observe her more closely. 

– “Wow, where did you learn all that?”

– “I’ve been watching older girls do that in school, daddy. Watch”.

She started jumping and counting out loud.

– “One, two, three, four, five, six, …”

She tripped on the rope.

– “Woohoo! Six!”

– “You go, girl!”

– “Again! One, two, three, four, five, nooooo…”

– “Five is good”.

– “No, daddy, five is not good. Again!”

She repeated the process a couple of times: seven jumps (“Yes!”), four (“Nooo!”), five (“Pfff!”), six (“Yippie!”). I started noticing a pattern: she alternated between frustration and joy, and the switch depended on the number of jumps. Time for some questioning.

– “Why are you happy with six or more, but unhappy with anything lower?”

– “It has to be at least six, daddy”.

– “Why six?”

She seemed really annoyed that I didn’t see her point. She thought I was pulling her leg.

– “Because I’m _six_ years old, daddy. Didn’t you know? What else could it be?”.

I was flabbergasted. She had managed to impose a completely arbitrary pass/fail criterion on herself. Where did that come from? I had always thought that imposing pass/fail tests sabotages kids’ natural learning processes, yet this appeared to come from within. No one had told her she had to make at least six.

I wondered – maybe she had simply chosen her age as a starting point, to set an initial learning goal for herself? Was she planning to raise the bar later on, once reaching six had become too easy? Unfortunately I didn’t get the chance to follow up on that – lunchtime!

Flash forward to work. All this reminded me of commonly defined pass/fail criteria such as

“90% of all tests must pass”

Really? 

In “Are Your Lights On?”, Jerry Weinberg uses the well-known “Mary had a little lamb” nursery rhyme to show how a seemingly straightforward statement invites multiple interpretations, depending on which word you emphasize – an invaluable heuristic when looking at requirements. Why not try it on the familiar pass/fail criterion stated above?

“90%”? What if the tests that would have revealed serious errors happen to be among the 10% you so confidently dismissed? And why 90 rather than 89 or 91? (The short sketch below makes this concrete.)

“All”? Do you know “all” the possible tests that could be performed? Are they all documented? Some of them might still be residing in your head. What if, in the meantime, we performed some more important tests that revealed serious risks? Are those tests part of “all”?

“Tests”? Do you only count scripted tests, or do you also take exploratory ones into account? What about important usability issues some users might have found? Or acceptance test checklists? Or automated checks? 

“Must”? What if fewer than 90% pass? Does that mean your solution is without value? The customer might value other things than you do. Is it up to you to decide how much value is in there?

“Pass”? What about behavior that is perfectly acceptable to your client, but that we find annoying? Pass or fail? What about tests in which every step passes, but that reveal important problems as a side effect? A test’s pass/fail verdict is not always binary.
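
As an aside, here is a toy illustration of that first point – my own sketch, not something from Weinberg or from any real project, with made-up test names and severities. Imagine ten results of which nine pass: the “90% of all tests must pass” criterion is comfortably met, yet the single failure is exactly the one that should block the release.

```python
# Hypothetical sketch: ten results, nine passes, one critical failure.
# Test names and severity labels are invented for illustration.

results = [
    ("login",          True,  "critical"),
    ("checkout",       False, "critical"),  # the failure that actually matters
    ("tooltip text",   True,  "low"),
    ("footer links",   True,  "low"),
    ("date format",    True,  "low"),
    ("print preview",  True,  "low"),
    ("help page",      True,  "low"),
    ("logo size",      True,  "low"),
    ("copyright year", True,  "low"),
    ("favicon",        True,  "low"),
]

pass_rate = sum(passed for _, passed, _ in results) / len(results)
critical_failures = [name for name, passed, severity in results
                     if not passed and severity == "critical"]

print(f"pass rate: {pass_rate:.0%}")              # pass rate: 90%  -> criterion met
print(f"critical failures: {critical_failures}")  # ['checkout']    -> still a showstopper
```

Ninety percent sounds reassuring, right up until you ask what was in the other ten.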

My daughter went to school this morning and – for the first time – took her own jump rope with her. I wonder what percentage of her jump-rope cases will pass this time.