Pizza Testing

I’m starting to get tired of ordering pizza.

Whenever I feel like pizza, I usually order it from one of two places. And over the last several years, both of them have been, independently, adopting an almost-identical script for ordering pizza. It goes something like this:

Them: Hello, and thank you for calling Our Fine Pizza Place. Can I start with your phone number and area code?
Me: <gives phone number>
Them: Is this for pick-up or delivery?
Me: Delivery.
Them: Can you tell me your address, please?

Here’s the thing. I can hear them hitting keys on their keyboard over the phone. When I give them my phone number, I know that they don’t type it in. That’s because, like most modern call centres, they have software that detects my phone number when I call (using something like call display), finds my records in their database and displays my information on the screen before the conversation even begins.

Often, I find myself saying something like this:

Me: My address hasn’t changed since last time I ordered.
Them: I still need you to tell me your address.
Me: Why?
Them: It’s our policy.

So, anyway, I’ve been finding myself increasingly annoyed that I have to go through this process of providing a phone number and an address when they already know that information. I think I’ve probably been spoiled by ordering books on Amazon: I want to order my pizza with one click. In my ideal world, I want to just call up and say, “send me my usual order.” But I can’t because I have to keep going through their annoying script.

One could easily make the case that I’m just unusual in this way. <shrug> Quite possible. I do know that I’ve started ordering pizza less frequently. It has started to feel like too much of a bother. And I wonder if this is a known side-effect of their process.

A couple of years ago, I attended a conference where I met Cem Kaner, an expert in the field of testing. He described a lot of styles of testing: function testing, state-based testing, monkey testing, and others. One form of testing he described that I'd not really heard articulated before was scenario testing.

Scenario testing attempts to imagine reasonable scenarios and test for them. Here’s a slightly longer description:


Scenario tests are realistic, credible and motivating to stakeholders, challenging for the program and easy to evaluate for the tester. They provide meaningful combinations of functions and variables rather than the more artificial combinations you get with domain testing or combinatorial test design.


A scenario might be something like this: a customer is in a hurry and wants to order a pizza quickly. Other types of testing might simply be interested in whether or not all the correct data was collected and/or whether or not the pizza was delivered and paid for.

But scenario tests are one of the few styles of tests where you start to expose interesting problems such as, “what parts of this system will start to drive people crazy after they use them for a while?”
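To make the contrast concrete, here's a minimal sketch in Python. The `PizzaOrderSystem` class and its API are entirely invented for illustration; the point is that a functional check ("was the right data collected?") passes happily, while a scenario-style check ("how much did we make a returning customer repeat?") surfaces the annoyance.

```python
# Hypothetical sketch: a functional check vs. a scenario-style check.
# PizzaOrderSystem and all of its methods are invented for illustration.

class PizzaOrderSystem:
    def __init__(self):
        # phone number -> saved customer record (call display finds this)
        self.customers = {
            "555-0100": {"address": "12 Elm St", "usual": "large pepperoni"},
        }
        self.questions_asked = 0

    def lookup(self, phone):
        # The software already identified the caller before anyone spoke.
        return self.customers.get(phone)

    def ask(self, question):
        # Every question the operator asks is a step the customer endures.
        self.questions_asked += 1
        return question

    def take_order(self, phone, reuse_saved_info=False):
        record = self.lookup(phone)
        if record and reuse_saved_info:
            # "Send me my usual order": trust the data we already have.
            return {"address": record["address"], "pizza": record["usual"]}
        # The scripted flow: re-ask everything, even known information.
        self.ask("Can I start with your phone number?")
        self.ask("Pick-up or delivery?")
        self.ask("Can you tell me your address, please?")
        return {
            "address": record["address"] if record else None,
            "pizza": "large pepperoni",
        }


# Functional test: was the correct data collected? This passes either way.
scripted = PizzaOrderSystem()
order = scripted.take_order("555-0100")
assert order["address"] == "12 Elm St"

# Scenario test: a returning customer in a hurry. How many questions did
# they have to answer before getting their usual pizza?
assert scripted.questions_asked == 3  # the script makes them repeat everything

one_click = PizzaOrderSystem()
one_click.take_order("555-0100", reuse_saved_info=True)
assert one_click.questions_asked == 0  # the "usual order" path asks nothing
```

Both flows collect the correct data and deliver the correct pizza, so a functional test can't tell them apart; only the scenario's point of view (a customer who wants to be done quickly) makes the three redundant questions show up as a defect.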

The objective of this kind of testing is somewhat different from the typically held view of testing: that the primary reason to test is to prove that the program works correctly. Here's a case where the pizza-ordering script works as designed, but has an unintended side effect (that is, I'm tending to order fewer pizzas than I have in the past).

Another interesting point about scenario tests is this: a lot of testing forms, such as unit tests and functional tests, seem to require an easy way to test the correctness of the final result. That's great for specification-driven testing, but it does introduce a certain tendency toward only looking at the parts of the system where we know the intended result. Here are Kaner's words on the matter:


In The Art of Software Testing, Glenford Myers described a series of ineffective tests. 35% of the bugs reported from the field had been exposed by a test but the tester didn’t notice or didn’t appreciate the failure, and so the bug escaped into the field. These (and many others with the same problems) appear to be scenario-like tests (such as transaction-flow tests that use customer data), but the expectation is that testers will do their own calculations to determine whether the printouts or other output are correct. The testers do some checking, a small sample of the tests, but miss defects in the many tests whose results were not checked. The complexity of the tests makes it much harder to work out expected results and check the program against them. The push toward ease of checking results stems from this.


This is an important point because test automation generally requires an easy way to validate the result. Further, test automation is so seductive in agile projects that agile developers can sometimes become blind to forms of testing that cannot or should not be automated.

Suggesting that there are types of tests that should not be automated is almost an agile heresy. In fact, Lisa Crispin and Tip House, in their book, Testing Extreme Programming, very clearly state that all tests should be automated. I'm pretty much convinced that's a mistaken suggestion.
