Testing Smarter with Michael Bolton

By John Hunter and Justin Hunter · Apr 3, 2017

This interview with Michael Bolton is part of our series of “Testing Smarter with…” interviews. Our goal with these interviews is to highlight insights and experiences as told by many of the software testing field’s leading thinkers.

Michael Bolton is a consulting software tester and testing teacher who helps people solve testing problems that they didn’t realize they could solve. He is the co-author (with senior author James Bach) of Rapid Software Testing, a methodology and mindset for testing software expertly and credibly in uncertain conditions and under extreme time pressure.


Michael Bolton

Personal Background

Hexawise: Do your friends and relatives understand what you do? How do you explain it to them?

Michael: [laughs] A lot of my friends and relatives don’t really care what I do! But if they’re curious, and they want an explanation, I tell them this:

"All that software you’re using every day is hard to develop. Whenever someone wants software built, development groups try to figure out what to do and how to do it. But since people never completely understand one another, there is a risk that the proposed solution won’t solve the problem, or will introduce problems of its own.

"Then, even when the ideas about how to build something are pretty clear, we’re still building it for the first time. It’s new to us, so we have to learn how to build it. As that happens, people make mistakes, and they realize new problems along the way. It’s the tester’s job to help identify problems all the way through that process.

"And then, as people are trying to build the product, testers explore it and experiment with it, looking for problems that might threaten its value. Ideally, the important problems get found and get fixed before the software comes to you.

"In other words, testers help people to try to figure out whether the product they’ve got is the product they want. I help testers and teams learn how to do that effectively, as a consultant and as a teacher. And I do that worldwide; in classes, at client sites, at conferences, and in social networks."

If friends or family want to know more, I can tell them more. But usually they’re happy with that.

Friends and family are one thing; managers and teams are another. Some testers say their managers don’t understand them. Recognize this: managers need to know about problems that threaten the on-time, successful completion of the project, where “project” means “some burst of development work”. Think about what you’re doing and the problems you’re facing in terms of how they relate to that. Talk about your work that way, and things will become a whole lot clearer.

Hexawise: What one or two software testing-related experiences have you found to be most personally satisfying in your career?

Michael: That’s a tough call. On projects, I really like the detective work. In one specific case I investigated the roots of a nagging customer support problem, and found six or seven wildly different factors that, if addressed, could have eliminated that problem. That was satisfying. On another project, at a bank, I constructed a really elaborate oracle [Hexawise: "If you are not familiar with using oracles in software testing we strongly recommend following the link to learn more about this important concept."] that modeled consumer financial transactions through all of the different account flows. That involved a lot of programming, and using powerful features of Excel, and learning a ton about the business domain. That was fun. I found some cool bugs and weird behaviour, too.
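
[Hexawise: to make the oracle idea concrete, here is a minimal, hypothetical Python sketch of a parallel model. It is not Michael's actual Excel workbook; the transaction types, fee rule, and field names are invented for illustration. The idea is that an independent model recomputes what each balance should be, and any disagreement with the product's reported balance gets flagged for a human to investigate.]

# Hypothetical sketch of a parallel-model oracle (not the Excel model
# described above). An independent model recomputes the expected balance
# after each transaction; disagreements with the product's reported
# balance are flagged for investigation.

from dataclasses import dataclass

@dataclass
class Transaction:
    kind: str                      # "deposit" or "withdrawal" (invented categories)
    amount_cents: int
    reported_balance_cents: int    # what the product under test reported

WITHDRAWAL_FEE_CENTS = 150         # assumed business rule, for illustration only

def expected_balances(start_cents: int, txns: list[Transaction]) -> list[int]:
    """Independently model the balance after each transaction."""
    balance = start_cents
    expected = []
    for t in txns:
        if t.kind == "deposit":
            balance += t.amount_cents
        elif t.kind == "withdrawal":
            balance -= t.amount_cents + WITHDRAWAL_FEE_CENTS
        expected.append(balance)
    return expected

def discrepancies(start_cents: int, txns: list[Transaction]) -> list[int]:
    """Return indices of transactions where product and model disagree."""
    expected = expected_balances(start_cents, txns)
    return [i for i, (t, e) in enumerate(zip(txns, expected))
            if t.reported_balance_cents != e]

if __name__ == "__main__":
    history = [
        Transaction("deposit", 10_000, 10_000),
        Transaction("withdrawal", 2_000, 8_000),   # product forgot the fee?
    ]
    print(discrepancies(0, history))               # -> [1]

A mismatch doesn't prove the product wrong, and agreement doesn't prove it right; an oracle like this simply points a tester at behaviour worth a closer look.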

From 2003 through 2008, I did pretty extensive study with Jerry Weinberg: several of the AYE Conferences, some consulting workshops, and the Problem Solving Leadership class. Those were really helpful for my own personal development.

But the most satisfying thing is teaching Rapid Software Testing, helping people develop their mindsets and skill sets, and reframing the ways they think and speak about testing. It’s rewarding to hear from the class participants about their renewed energy and focus—and to hear from their organizations about the value and power of testing, value they might not have seen clearly before.

Views on Software Testing

Hexawise: What testing concept(s) do you wish more software testers understood?

Michael: Oh dear. [laughs] That’s an awfully long list, and it varies from day to day.

I wish that more testers understood that test cases are not central to testing. To be an excellent tester means to explore, to experiment, to study, to model, to learn, to collaborate, to investigate. To discover. None of these activities, these performances, are captured in test cases. But testers keep talking about test cases as being central to the work, as though recipe books and written ingredient lists were the most important things about cooking.

I wish more testers understood that testing is not about “building confidence in the product”. That’s not our job at all. If you want confidence, go ask a developer or a marketer. It’s our job to find out what the product is and what it does, warts and all. To investigate. It’s our job to find out where confidence isn’t warranted; where there are problems in the product that threaten its value.

I wish more testers were more articulate in describing their models—their models of the product and how to cover it; their models of oracles, and how they recognize problems; their models of testing itself.

Of course, sometimes testers find those things hard to talk about because their intuitive models are vague and fuzzy. That’s one of the important tasks in the Rapid Software Testing space: to help people sharpen up their models so that they can sound like they know what they’re doing — while actually knowing what they’re doing. Then we can talk more precisely amongst ourselves about what we’re doing, and how we’re going to approach solving problems for our clients.

Some people are bothered by our focus on expressing things precisely. They complain “If we use jargon, the non-testers we’re talking to will get confused!” If you use jargon with people who don’t need to hear it, they may get confused, so don’t use jargon with them when there’s nothing at stake.

On the other hand, when non-testers hear “automated testing”, that affords the interpretation that testing can be automated. It can’t. That’s why, in Rapid Software Testing, we talk about automated checking: to alleviate confusion between things machines can do (checking) and things that you need clever humans to do (testing, which includes the process of developing and interpreting checks).
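
[Hexawise: to make the distinction concrete, here is a minimal, invented example of a check in Python: an algorithmic, machine-decidable rule applied to an output of the product. The parse_amount function and its expected value are hypothetical. The machine can execute and evaluate the rule; deciding what is worth checking, and what a red or green result means, remains testing work for humans.]

# A hypothetical "check": an algorithmic, machine-decidable comparison.
# parse_amount stands in for some product code; the expected value is invented.

def parse_amount(text: str) -> int:
    """Toy product code: parse a dollar string like "$12.34" into cents."""
    dollars, _, cents = text.strip("$").partition(".")
    return int(dollars) * 100 + int(cents or 0)

def check_parse_amount() -> bool:
    # The machine can run this and report True or False. It cannot decide
    # whether this was the right thing to check, whether the data was
    # representative, or what a failure would mean. That part is testing.
    return parse_amount("$12.34") == 1234

if __name__ == "__main__":
    print("check passed" if check_parse_amount() else "CHECK FAILED")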

It’s okay for different testing and development communities to adopt and adapt their own ways of speaking. But amongst ourselves, within our communities and within our teams, we had damn well better learn to speak precisely, because imprecision is where lots of problems start, and where bugs live. Don’t jump on the newbies; help them learn and guide them along.

But I would say this to people who seek respect: getting straight on what things mean and how they matter is essential in real professions. [laughs] Doctors don’t say “virus, bacteria… who cares? The guy’s sick! He’s got… a… bad thing! And he needs… stuff to make him better! Pharmacist, give him… some... better-making-stuff and tell him to take it… whenever.” Fail to make appropriate distinctions, and the attempt to help the patient will fail. And then society gets antibiotic-resistant bacteria as a byproduct.

I wish more testers recognized that testing can be applied to anything—a running product, of course, but also to documents, diagrams, models, ideas. I often hear people saying “testers should do more than just testing,” and when I ask them what they mean by “just testing”, it turns out that their notion of testing is really impoverished.

Testing isn’t just operating a product and looking for bugs. Testing involves modeling and questioning and studying, making inferences, challenging beliefs. Generating, describing, refining, revising, abandoning, and recovering ideas, all the way through the project. Designing and performing experiments, not just by interacting with running code, but also by performing thought experiments on whatever we might have in front of us, from an idea to a document to a diagram to a program. Review is testing a story or some code. Critical thinking is testing an idea — helping people sharpen their understanding. Developing tools to help test is testing. [laughs] What’s with the “just testing” business? All that isn’t enough?

Hexawise: Can you describe a view or opinion about software testing that you have changed your mind about in the last few years? What caused you to change your mind?

Michael: Many years ago, when I first started out as a program manager, I thought it would be a good idea to write lots of things down, formally, in advance, and hand those instructions to programmers. After that, building the product and testing it would be simple.

Well, that sure sounded appealing, but it didn’t work very well most of the time. Some programmers really liked very specific instructions, and occasionally we could give them those when we had a really good idea of what we wanted.

What I was ignoring was all the work, all of the trial and error, all of the collective tacit knowledge that gets you to the point where you can make a lot of stuff explicit… but by then you don’t need most of it to be explicit, so making it explicit is a waste of time!

Excessive or premature formality is really dangerous for testing. Notice those adjectives: excessive, premature! Sometimes, in certain contexts, there are specific things that must be tested formally, in specific ways, or to check specific facts. That formality might be important because of technical risk, business risk, legal risk, risk to finances or to human health and safety.

Much of the time, though, your testing doesn’t need to be very formal. Even when you’re in a context that requires formal testing, you need plenty of excellent informal testing in order to get to excellent formal testing.

I see a lot of people investing in formality and documentation really early in the project. But I think it’s a mistake to do that before we’ve learned about the product and the problem space. That learning is a fundamentally exploratory process, one that can’t be formalized unless you want to suppress it or damage it or slow it down needlessly.

Hexawise: You have clearly articulated the inherent limitations of testing coverage metrics. For example, in your blog post “100% Coverage is Possible”, you state:

To claim 100% coverage is essentially the same as saying “We’ve looked for bugs everywhere!” For a skilled tester, any “100%” claim about coverage should prompt critical thinking: “How much” compared to what? 100% of what, specifically? Some model of X—which one? Whose model? How well does the “model of X” model reality? What does the model of X leave out of the universe of possible ways of thinking about X? And what non-X things should we also be considering when we’re testing?

What advice do you give to teams you work with who use quantitative coverage metrics?

Michael: [laughs] That’s a little like asking “What advice do you give to journalists who use spreadsheets?”

Start from the default premise that we don’t need them. Don’t start from the premise that we must hang a number on our coverage. Assume that we will describe our coverage. If there’s some way in which numbers will help, assume that whatever we decide to quantify will be based on some model of the software. All models focus on something and leave everything else out, and at best we’re only aware of some of that “everything else”.

After all, what might we be covering? We could decide to model our coverage in terms of how many lines of code, or branches, or conditions have been covered. We could count them. But not all lines of code (or branches, or conditions) are equally significant. And code coverage tools report on our code, but not necessarily the code in third-party libraries and frameworks, in the operating system, in the hardware platform.
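
[Hexawise: a toy illustration of this point, with an invented function and a single invented test. The one test below executes every line of the function, so a line-coverage tool reports 100%, yet it never exercises the division-by-zero case, the no-discount branch, or any of the library and platform code underneath.]

# Invented example: 100% line coverage, with problems still unexamined.

def unit_price_after_discount(total: float, quantity: int) -> float:
    price = total / quantity       # raises ZeroDivisionError when quantity == 0
    if total >= 100:
        price *= 0.75              # bulk discount
    return price

def test_unit_price_after_discount():
    # This one case touches every line, so a line-coverage tool reports 100%.
    # It says nothing about quantity == 0, totals under 100, rounding,
    # or the third-party and platform code underneath.
    assert unit_price_after_discount(200.0, 4) == 37.5

if __name__ == "__main__":
    test_unit_price_after_discount()
    print("1 check passed; 100% line coverage; problems still possible")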

Some people model coverage in terms of “requirements”. What they really mean by that is specific statements in requirements documents. Documents model ideas about the product.

So here’s one statement: “the site shall provide scheduling information for all flights on the airline’s network”. Here’s another: “when the customer is logged in, each page shall display the customer’s first and last names in the upper-right corner of the screen”. Are those statements of equivalent significance? How would we test them?

[laughs] Don’t say something silly, like you’ll fully test your product by having “one positive and one negative test case for each sub-requirement”! (The Wikipedia article on “test cases”, as of today, says exactly that.) Your testing of a product is a performance.

Sometimes a journalist will find it useful to illustrate a point with numbers, or a table, a spreadsheet. Baseball is a great example of a game where statistics help us to evaluate performances. Bill James, the guy at the centre of Moneyball, has shown how the numbers you choose can support arguments about a player’s value. But the magic of the Baseball Abstract, which was a kind of journal he wrote for many years, was that he told great stories, using stats to illustrate them. He also showed how stats don’t tell the whole story, and how they can mislead, too.

Sometimes journalists will quote figures from researchers, or use poll numbers to back a story. But the history of polls—particularly recent history—provides a great example of the ways we can be fooled by numbers.

I wrote some articles on talking about coverage a while ago, and I think they still stand up.

You’ll find useful stuff on measurement in Quality Software Management, Vol. 2: First Order Measurement (Weinberg) or the two e-books that make up that book, How to Observe Software Systems and Responding to Significant Software Events.

But if you want to quantify coverage, be skeptical. Be professionally unsure. Read "Software Engineering Metrics: What Do They Measure and How Do We Know?" (Kaner and Bond). Read How Not to Be Wrong (Ellenberg). Read Proofiness (Seife). Read How to Lie with Statistics (Huff).

Industry Observations / Industry Trends

Hexawise: What software testing industry trends make you optimistic about the future? And which software testing industry trends make you concerned?

Michael: There’s been some degree of progress over the years in refining how people think about testing, but the people who are pushing for that are in a tiny minority. We’re a gaggle of hobbits facing an army with entrenched ideas that might as well have been handed down from orcs. Those ideas didn’t work 30 years ago and are crazily inefficient now. People still don’t think very critically about testing folklore and mythodology. (Yes, I said mythodology.)

People don’t seem to notice how rickety everything is at the best of times. Have you tried to print something lately, using the new printer software that came out yesterday? Tried to buy a plane ticket? Tried to set up or top up a pay-as-you-go mobile hotspot when you’re travelling? I do all that stuff regularly, and it’s usually time-critical, and almost every time I enter a world of pain. Technology is complex, and our lives are complex, and the least little bump in the road can make the wheels fall off. Every day, as I try to use software, I feel like I’m being pecked to death by ducklings.

We want to do stuff like move money between accounts with our smart phones, securely, but the global financial system is at the mercy of institutions that are getting testing services from the lowest bidder, trying to find ways to make testing cheaper instead of more powerful and more risk focused. That life-critical medical software comes from the same larger community that brought us JavaScript and CSS. Yikes. [laughs]

I’m concerned that we’re still overly focused on testing the clockwork, the functional aspects of the product, without thinking about how people will respond to it. As Harry Collins would say, machines are social prostheses. Like insulin pumps or artificial legs, they’re being plugged into human life and human purposes to do work that humans once did (or work that only humans with superpowers could do). But prostheses don’t do what the real thing does, and the surrounding body has to adapt to that fact. Software doesn’t do what humans do, so humans often must adapt to it. We should all be asking: “how does the technology change us—for good and for ill?”

Staying Current / Learning

Hexawise: I see that you’ll be presenting at the StarEAST conference in May. What could you share with us about what you’ll be talking about? What gave you the idea to talk about it?

Michael: I’ll be giving two tutorials. The first is about critical thinking for testers. We describe critical thinking like this: “thinking about thinking, with the intention of avoiding being fooled”. That’s central to our work as testers. Testers think critically about software to help our clients make informed decisions about whether the product they’ve got is the product they want.

Many people treat certain ideas as Grand Truths about testing. But many of the claims that people make—especially some of the testing tool vendors—are myths, or folklore, or simply invalid. People often say things that they haven’t thought about very deeply, and some of those things don’t stand up very well to critical scrutiny. That’s one of the reasons I started developing and teaching this class in 2008. I was dismayed that testers and other people in software development were accepting certain myths about testing unskeptically.

Not only that, but testers and teams allow themselves to be fooled by focusing on confirmation, rather than challenging the software. So, in the class, we talk about the ways in which words and models can fool us. In a safe environment, it’s okay—and even fun—to be fooled, and to figure out how not to be fooled quite so easily the next time.

On Tuesday, I present a one-day class called “A Rapid Introduction to Rapid Software Testing.” RST (the methodology) is focused on the mindset and the skill set of the individual tester. It’s about focusing testing on the mission, rather than on bureaucracy and paperwork. We sometimes joke that RST (the class) is a three-day class in which we attempt to cover about nine days of material. [laughs] In “A Rapid Introduction to Rapid Software Testing”, I try to do the three-day class in one day. It is a rapid introduction, but we’ll be able to explore some of the central ideas.

Hexawise: What software testing-related books would you recommend should be on a tester’s bookshelf?

Michael: The two that I can recommend that are explicitly about software testing are Lessons Learned in Software Testing and Perfect Software and Other Illusions about Testing.

But you asked about testing-related books, and there’s an absurd number of those. Thinking Fast and Slow is about critical thinking and how easy it is for us to get fooled. The Shape of Actions (Collins and Kusch) is about what machines can and cannot do, and the Golem series (Collins and Pinch) is about the nature of science, technology, and medicine. Code (Petzold) is about how data is represented and processed on digital computers. Everyday Scripting in Ruby is a decent, tester-focused book on creating little tools and doing work with scripts. And the list of Jerry Weinberg books is plenty long on its own: Exploring Requirements, and An Introduction to General Systems Thinking, and Errors, and Agile Impressions.

James Bach and I have not written a book on software testing together. But you could read our blogs, which would be like reading a long-ish book of essays on testing.

Profile

Michael Bolton is a consulting software tester and testing teacher who helps people solve testing problems that they didn’t realize they could solve. He is the co-author (with senior author James Bach) of Rapid Software Testing, a methodology and mindset for testing software expertly and credibly in uncertain conditions and under extreme time pressure.

With twenty-five years of experience testing, developing, managing, and writing about software, Michael has led DevelopSense, a Toronto-based testing and development consultancy, for the past fifteen years. Previously, he was with Quarterdeck Corporation where he managed the company’s flagship products and directed project and testing teams both in-house and worldwide.

Links:

Blog: DevelopSense

Twitter: @michaelbolton

Related posts: Testing Smarter with Alan Page - Testing Smarter with Dorothy Graham