I can be too verbose with some of my posts. This will be quick.

I recommend that you read this. It is the Exploratory Testing Dynamics document, a tightly condensed list of useful testing heuristics authored by three of the most thoughtful and experienced software testers alive today: James Bach, Michael Bolton, and Jon Bach. Using it will help you improve your software testing capabilities. http://www.satisfice.com/blog/wp-content/uploads/2009/10/et-dynamics22.pdf

Told you it'd be brief. Go. Now. Read.

By: Justin Hunter on Jul 13, 2010

Categories: Exploratory Testing, Software Testing

Context is Important in Usability Testing

As Adam Goucher recently pointed out, it is important to keep in mind WHY you are testing. Testing teams working on similar projects will have different priorities that will impact how much time they test, what they look for, and what test design methods they use. (Kaner and Bach provide some great specific examples that underscore this point here). In short, the context in which you're testing should impact how you test.

The same maxim holds true when you're conducting usability testing. Considering the context is important here as well: both the context of the application's users and the context of the application itself vis-à-vis other similar products. Important considerations can include:

  1. What problem does the application solve for the user?

  2. What does the application you're testing aspire to be as compared to competing applications?

  3. Who is the target audience of the application? What differentiating features does the application have?

  4. What is the "personality" of the application?

  5. What benefits and values do specific segments of target users prioritize?

These questions are all important when you analyze a web site with an eye on usability. I would recommend combining both a "checklist" approach (e.g., Jakob Nielsen's well-known Ten Usability Heuristics) with an approach that takes context-specific considerations (such as the 5 questions listed above) into account.


The Context of a User Group I'm Familiar with: the Hexawise Team

As of the end of June, 2010, our website leaves a great deal to be desired, to say the least. Hexawise.com consists mainly of a single landing page with anemic content that we threw together a year ago thinking that we'd "turn it into a real site" when we got around to it. We then proceeded to focus all of our development efforts on the Hexawise tool itself as opposed to our website (which we've let fester). Apologies if you've visited our site and wanted to know more details about what our test design tool does and how it complements test management tools. To date, we haven't provided as much information as we should have.

We've kicked off a project now to right this wrong. To do so, we're drafting new content and organizing our thoughts about how to present it to visitors. Our needs are relatively simple: we want to create a set of simple wireframes that will let us quickly experiment with a few design options and gather feedback from friends and target users. For us, ease of use is key; being able to use a tool productively without reading through a user guide is a must. We also value a tool's ability to make collaboration easy.

With that as background, what follows are some quick comments on a couple of wireframing tools I've recently explored in the context of our preferences and values. Wireframing is the practice of creating a skeletal visual interface for software. It is used for prototyping and soliciting early user/client feedback, and it comes before the more time-consuming phases of design. Two popular wireframe creation tools are Balsamiq and HotGloo. Both are Flash applications: Balsamiq is a desktop app, while HotGloo is a SaaS tool used over the internet.


Balsamiq and HotGloo

The first thing that strikes me about Balsamiq is the rich library of UX elements, neatly organized and accessible by category or through a quick-add search box. Everything works as it should: the drag, drop, click, and type interface follows the principle of least astonishment. Fortunately, ease of use doesn't preclude speed: modifying the content and structure of UX elements is text-based rather than form-based, blending a touch of UNIX command-line efficiency into an otherwise graphical interface. UNIX and IRC users will feel right at home.

HotGloo is a very promising wireframing tool. They have clearly taken a page from the 37 Signals product development playbook. They have made a tool with a smaller set of features that is very intuitive to use. They've avoided the potential risk of "feature bloat" by having fewer bells and whistles. Where I think they add value: as a SaaS tool, HotGloo is exceptionally good at allowing multiple members on a team to collaborate on iterative designs. Whereas Balsamiq uses traditional files, HotGloo is accessible from anywhere. HotGloo enables multiple users to chat and view mockups in real time. Only one user can make changes at a time. Feedback is very easy to give and I found their support to be exceptionally responsive.

HotGloo is easy to learn, but my designer felt frustrated by how much time he had to spend tweaking little things (like changing the names and links of a tabbed window element). The element controller pop-ups got in the way of work, and he found himself frequently dragging them away. HotGloo also takes a more minimalist approach than Balsamiq with respect to UX element features. Whether this is a strength or a weakness is a matter of personal preference. The 37 Signals camp (which I am highly sympathetic to) argues that it is often preferable to have fewer, easier-to-use features, since the vast majority of users will not want or need too many bells and whistles. Our designer felt that Balsamiq's feature set fit his needs better. As a "meddlesome manager" who wants to provide regular input into the content for version 2.0 of our site, feature-richness is less important to me than collaborative ability.


Usability Considerations I Shared with the HotGloo Team







Balsamiq has a couple of usability features that make it fun to use. A case in point is how you insert an image. Balsamiq gives you three choices, the third of which is a really nice touch: you can (1) upload a file, (2) use a photo on the web, or (3) perform a Flickr search right there and then, without ever leaving the comfort of the Balsamiq window. In my book, that kind of thoughtful workflow integration is what makes a good product great.


"Postscript" - Good Karma and an Open Invitation

[Screenshots: two of the UX suggestions shared with the HotGloo team]


As a post-script of sorts, after sending 5 UX suggestions (including the 2 above) to the HotGloo team last week, I received 5 outstanding UX suggestions for our Hexawise tool this week - out of the blue - from Janesh Kodikara, a new Hexawise user based in Sri Lanka. In addition, the HotGloo team provided 5 excellent UX suggestions for improving our tool as well. Taken together, they are some of the best suggestions we've had to date. If anyone reading this would be willing to share your usability suggestions with us, I can assure you, we're extremely interested in hearing your ideas.

By: Justin Hunter on Jul 5, 2010

Categories: Context-Driven Testing, Pairwise Software Testing, Uncategorized, User Experience, User Interface

A friend passed me this set of recent tweets from Wil Shipley, a Mac developer with 11,743 followers on Twitter as of today. Wil recently encountered the familiar problem of what to do when you've got more software tests to run than you can realistically execute.



I love that. Who can't relate?

Now if only there were a good, quick way to reduce the number of tests from over a billion to a smaller, much more manageable set of tests that were "Altoid-like" in their curious strength. :) I rarely use this blog for shameless plugs of our test case generating tool, but I can't help myself here. The opening is just too inviting. So here goes:



There's an app for that... See www.hexawise.com for Hexawise, a "pairwise software test case generating tool on steroids." It eats problems like the one you encountered for breakfast. Hexawise winnows bazillions of possible test cases down, in the blink of an eye, to small, manageable sets of test cases that are carefully constructed to maximize coverage in the smallest number of tests, with flexibility to adjust the solutions based upon the execution time you have available. In addition to generating pairwise testing solutions, Hexawise also generates more thorough applied statistics-based "combinatorial software testing" solutions that include tests for, say, all possible 6-way combinations of test inputs.
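Pairwise generation itself isn't magic. A toy greedy version can be sketched in a few lines; this is an illustrative sketch only, not Hexawise's actual algorithm, and it enumerates every full combination on each pass, so it is practical only for small parameter sets:

```python
from itertools import combinations, product

def pairwise_tests(parameters):
    """Greedy sketch: repeatedly pick the full combination that covers
    the most not-yet-covered pairs of parameter values."""
    names = list(parameters)
    # Every pair of values, across every pair of parameters, needs covering.
    uncovered = set()
    for a, b in combinations(names, 2):
        for va, vb in product(parameters[a], parameters[b]):
            uncovered.add(((a, va), (b, vb)))
    tests = []
    while uncovered:
        best_test, best_gain = None, -1
        for combo in product(*(parameters[n] for n in names)):
            candidate = dict(zip(names, combo))
            gain = sum(
                1 for a, b in combinations(names, 2)
                if ((a, candidate[a]), (b, candidate[b])) in uncovered
            )
            if gain > best_gain:
                best_test, best_gain = candidate, gain
        tests.append(best_test)
        for a, b in combinations(names, 2):
            uncovered.discard(((a, best_test[a]), (b, best_test[b])))
    return tests
```

For example, three browsers, two operating systems, and two languages yield 12 exhaustive combinations, but the greedy pass above covers every value pair in roughly half that many tests. Production tools use far more sophisticated constructions to get near-minimal test sets quickly.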

Where your Mac cops an attitude and tells you "Bitch, I ain't even allocating 1 billion integers to hold your results" and showers you with taunting derisive sneers, head-waggling, and snaps, all carefully choreographed to let you know where you stand, Hexawise, in contrast, would helpfully tell you: "Only 1 billion total possibilities to select tests from? Pfft! Child's play. Want to start testing the 100 or so most powerful tests? Want to execute an extremely thorough set of 10,000 tests? Want to select a thoroughness setting in the middle? Your wish is my command, sir. You tell me approximately how many tests you want to run and the test inputs you want to include, and I'll calculate the most powerful set of tests you can execute (based on proven applied statistics-based Design of Experiments methods) before you can say 'I'm Wil Shipley and I like my TED Conference swag.'"

More info at:
free trials at:

By: Justin Hunter on Jun 23, 2010

Categories: Combinatorial Software Testing, Combinatorial Testing, Interesting People , Pairwise Software Testing, Pairwise Testing, Recommended Tool, Software Testing

There are good reasons James Bach is so well known among the testing community and constantly invited to give keynote presentations around the globe at software testing conferences. He's passionate about testing and educating testers; he's a gifted, energetic, and entertaining speaker with a great sense of humor; and he takes joy in rattling his saber and attacking well-established institutions and schools of thought that he disagrees with. He doesn't take kindly to people who make inflated claims of benefits that would materialize "if only you'd perform testing in XYZ way or with ABC tool" given that (a) he can always seem to find exceptions to such claims, (b) he doesn't shy away from confrontation, and (c) he (rightly, in my view) thinks that such benefits statements tend to discount the importance of critical thinking skills being used by testers and other important context-specific considerations.

Leave it up to James to create a list of 13 questions that would be great to ask the next software testing tool vendor who shows up to pitch his problem-solving product. In his blog post titled "The Essence of Heuristics," he posed this exact set of questions in a slightly different context, but as a software testing tool vendor myself, they really hit home. They are:


  1. Do they teach you how to tell if it’s working?
  2. Do they teach you how to tell if it’s going wrong?
  3. Do they teach you heuristics for stopping?
  4. Do they teach you heuristics for knowing when to apply it?
  5. Do they compare it to alternative heuristics?
  6. Do they show you why it works?
  7. Do they help you understand when it probably works best?
  8. Do they help you know how to re-design it, if needed?
  9. Do they let you own it?
  10. Do they ask you to practice it?
  11. Do they tell stories about how it has failed?
  12. Do they listen to you when you question or challenge it?
  13. Do they praise you for questioning and challenging it?


[Side note: Apparently I wasn't the only one who thought of Hexawise and pairwise / combinatorial test design approaches when they saw these 13 questions. I was amused that after I drafted this post, I saw Jared Quinert's / @xflibble's tweet just now:]


Where do I come down on each of James' 13 questions with respect to people I talk to about our test design tool, Hexawise, and the types and size of benefits it typically delivers? Quite simply, "Yes" to all 13. I enjoy talking about exactly the kinds of questions that James raised in his list. In fact, when I sought out James to ask him questions at a conference in Boston earlier this year, it was because I wanted his perspective on many of the points above, particularly #11 (hearing stories about how James has seen pairwise and combinatorial approaches to test design fail) and #7 (hearing his views on where it works best and where it would be difficult to apply). I'll save my specific answers for another post, but I am serious about wanting to share my thoughts on them; time constraints are holding me back today. I did give a speech at the ASQ World Conference on Quality Improvement in St. Louis last week that addressed many, but not all, of James' questions.

I'm not your typical software tool vendor. Basically, my natural instincts are all wrong for sales. I agree with the premise that "a fool with a tool is still a fool"; when talking to target clients and/or potential partners, I'm inclined to point out deficiencies, limitations, and various things that could go wrong; I'm more of an introvert than an extrovert, etc. Not exactly the typical characteristics of a successful salesman... Having said that, I believe that we've built a very good tool that enables dramatic efficiency and thoroughness benefits in many testing situations, but our tool, along with the pairwise and combinatorial test design approaches that Hexawise enables, has its limitations. It is primarily by talking to software testers about their positive and negative experiences that our company is able to improve our tool, enhance our training, and provide honest, pragmatic guidance to users about where and how to use our tool (and where and how not to).

Tool vendors who defend their tools (and/or the approaches by which their tools help users solve problems) as magical, silver-bullet solutions are being both foolish and dishonest. Tool vendors who choose not to engage in serious, honest, and open discussions with users about the challenges that users have when applying their tools in different situations are being short-sighted. From my own experience, I can say that talking about the 13 topics raised by James has been invaluable.

By: Justin Hunter on Jun 1, 2010

Categories: Combinatorial Testing, Design of Experiments, Hexawise test case generating tool, Pairwise Testing, Software Testing, Software Testing Efficiency, Uncategorized

While I wouldn't describe myself as a blanket "Microsoft hater," I have developed a no-holds-barred hatred of IE6. Our Hexawise test generation tool doesn't support it (and we have no plans to unless the surprisingly high number of our target clients in financial service firms, government agencies and buggy-whip manufacturing who are requesting IE6 support make it too financially painful for us to continue to stand on principle).

As an unapologetic IE6-hater, it made me smile to watch and listen to Scott Ward sing his witty and original song "IE is being mean to me." Enjoy.



A tip of the hat to Chris McMahon, an impressively knowledgeable software tester who played the bass professionally in bands for years, who alerted me to this gem.

By: Justin Hunter on May 14, 2010

Categories: Browser Testing, Product Management

Luis Fernández, an associate professor at Universidad de Alcala, is conducting a survey of software testers to gather data relating to questions such as "Why isn't software testing conducted as efficiently and effectively as it should be?" and "What factors lead to software testing being 'under-appreciated' as a potential career path?"

His survey (as of March, 2010) is listed [here](http://www.cc.uah.es/encuestas/index.php?sid=28392&lang=en).

Personally, I agree that the following two issues (identified in his survey) are significant causes of inefficiency in software testing:

1) "People tend to execute testing in an uncontrolled manner until the total expenditure of resources in the belief that if we test a lot, in the end, we will cover or control all the system." (Or, at least, given the relatively undisciplined test case selection methods prevalent in the industry, my experience in analyzing manually selected test scenarios is that testers generally (a) believe they are covering a higher proportion of an application's possible combinations than they actually are, and (b) underestimate the amount of time spent during test execution unproductively repeating steps they have already tested.)

2) "Many managers did not receive appropriate training on software testing so they do not appreciate its interest or potential for efficiency and quality."

It is unfortunate, but true, that many testing managers do not have any background whatsoever in combinatorial testing methods that (a) dramatically reduce the amount of time it takes to select and document test cases, and (b) will simultaneously improve test execution efficiency when applied correctly. See, for example, this
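A quick back-of-the-envelope calculation shows why manually selected test sets cover far less than testers tend to assume. The parameter counts below are made up purely for illustration:

```python
# Illustrative only: how quickly exhaustive combinations outgrow
# any manually selected test set.
values_per_parameter = 4
parameters = 10

# Every distinct full combination of inputs.
total_combinations = values_per_parameter ** parameters  # 4^10 = 1,048,576

# Even a generous manual test suite touches a sliver of that space.
manual_tests = 100
fraction_covered = manual_tests / total_combinations

print(f"{total_combinations:,} combinations; "
      f"{manual_tests} manual tests cover {fraction_covered:.4%}")
```

Combinatorial test design sidesteps this by guaranteeing coverage of all 2-way (or n-way) interactions rather than chasing full combinations.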

See also: This slideshow on efficient and effective test design

Please consider taking Fernández's short survey. It takes only 5-10 minutes to complete.

By: Justin Hunter on Mar 1, 2010

Categories: Combinatorial Testing, Efficiency, Software Testing, Software Testing Efficiency

An unusually hectic work-schedule has been keeping me hopping lately. I returned this weekend from a great two-week trip to the UK in which I visited with 5 testing teams using our Hexawise tool to design test cases for applications being used in two banks, a consulting and systems integration firm, a grocery store chain, and a telecoms company.

Every product manager worth his or her salt will tell you it is a good idea to go meet with customers, listen to them, and watch them as they use your application. Even though everyone I know agrees with this, I find it difficult to make happen as regularly as I would like to. This trip provided me with a reminder of how valuable in-depth customer interactions can be. The two weeks of on-site visits with testing teams proved to be great way to: (a) reconnect with customers, (b) get actionable input about what users like / don't like about our tool, (c) identify new ways we can continue to refine our tool, and even (d) understand a couple unexpected ways teams are using it.

Bret Pettichord's tweets on "What is Agile?" / "not Agile?" prompted me to write this quick post. I like them a lot.




When we first created our Hexawise tool, we followed the 4 steps Bret lays out in his description of "What is Agile?" My experience in the UK over the last two weeks was the start of one of many "Repeat" cycles.

I admire people who can succinctly summarize wisdom into bite-sized quips like Bret achieved with his two tweets. Another guy who excels at creating sound-bites is James Carville. Love him or hate him, he has that skill in spades. When I watched the movie "War Room," I felt like I was watching the "master of the sound-bite" in his element. Me? I'm more of a rambling, meandering, verbose communicator. I've just taken 332 words and a screen shot with Bret's tweets when all I set out to do in starting to write this post was to share Bret's 32 words with you.

By: Justin Hunter on Feb 25, 2010

Categories: Agile, Combinatorial Testing, Hexawise test case generating tool



All the quotes below are from the inside cover of Statistics for Experimenters written by George Box, Stuart Hunter, and William G. Hunter (my late father). The Design of Experiments methods expressed in the book (namely, the science of finding out as much information as possible in as few experiments as possible), were the inspiration behind our software test case generating tool. In paging through the book again today, I found it striking (but not surprising) how many of these quotes are directly relevant to efficient and effective software testing (and efficient and effective test case design strategies in particular):

  • "Discovering the unexpected is more important than confirming the known." - George Box

  • "All models are wrong; some models are useful." - George Box

  • "Don't fall in love with a model."

  • How, with a minimum of effort, can you discover what does what to what? Which factors do what to which responses?

  • "Anyone who has never made a mistake has never tried anything new." - Albert Einstein

  • "Seek computer programs that allow you to do the thinking."

  • "A computer should make both calculations and graphs. Both sorts of output should be studied; each will contribute to understanding." - F. J. Anscombe

  • "The best time to plan an experiment is after you've done it." - R. A. Fisher

  • "Sometimes the only thing you can do with a poorly designed experiment is to try to find out what it died of." - R. A. Fisher

  • The experimenter who believes that only one factor at a time should be varied, is amply provided for by using a factorial experiment.

  • Only in exceptional circumstances do you need or should you attempt to answer all the questions with one experiment.

  • "The business of life is to endeavor to find out what you don't know from what you do; that's what I called 'guessing what was on the other side of the hill.'" - Duke of Wellington

  • "To find out what happens when you change something, it is necessary to change it."

  • "An engineer who does not know experimental design is not an engineer." - Comment made to one of the authors by an executive of the Toyota Motor Company

  • "Among those factors to be considered there will usually be the vital few and the trivial many." - J. M. Juran

  • "The most exciting phrase to hear in science, the one that heralds discoveries, is not 'Eureka!' but 'Now that's funny...'" - Isaac Asimov

  • "Not everything that can be counted counts and not everything that counts can be counted." - Albert Einstein

  • "You can see a lot by just looking." - Yogi Berra

  • "Few things are less common than common sense."

  • "Criteria must be reconsidered at every stage of an investigation."

  • "With sequential assembly, designs can be built up so that the complexity of the design matches that of the problem."

  • "A factorial design makes every observation do double (multiple) duty." - Jack Couden

Where the quotes are not attributed, I'm assuming the quote is from one of the authors. The best known of them, "All models are wrong; some models are useful," is widely (and accurately) attributed to George Box in particular. Although I forgot to confirm the suspicion with him when I saw him over Christmas break, I suspect most of the unattributed quotes are from George (as opposed to from Stu or my dad); George is 90 now and still off-the-charts smart and funny, and he is probably the best storyteller I've met in my life. If he were younger and on Twitter, he'd be one of those guys who churned out highly retweetable chestnuts again and again. [Update: George Box died in 2013.]


Related thoughts

As you know if you've read my blog before, I am a strong proponent of taking the Design of Experiments principles laid out in this book and applying them in the field of software testing to improve the efficiency and effectiveness of software test case design (e.g., by using pairwise software testing, orthogonal array software testing, and/or combinatorial software testing techniques). In fact, I decided to create my company's test case generating tool, called Hexawise, after using Design of Experiments-based test design methods during my time at Accenture in a couple dozen projects and measuring dramatic improvements in tester productivity (as well as dramatic reductions in the amount of time it took to identify and document test cases). We saw these improvements in every single pilot project when we used these methods to identify tests.

My goal, in continuing to improve our Hexawise test case generating tool, is to help make the efficiency-enhancing Design of Experiments methods embodied in the book accessible to "regular" software testers, and more broadly adopted throughout the software testing field. Some days, it feels like a shame that the approaches from the Design of Experiments field (extremely well-known and broadly used in manufacturing industries across the globe, in research and development labs of all kinds, and in product development projects in chemicals, pharmaceuticals, and a wide variety of other fields) have not made much of an inroad into software testing. The irony is, it is hard to think of a field in which it is easier or quicker to prove that dramatic benefits result from adopting Design of Experiments methods than software testing. All it takes is for a testing team to run a simple proof-of-concept pilot. It could be as little as a half-day's testing activity for one tester. Create a set of pairwise tests with Hexawise or another tool like James Bach's AllPairs tool. Have one tester execute the tests suggested by the test case generating tool. Have the other tester(s) test the same application in parallel. Measure four things:

  1. How long did it take to create the pairwise / DoE-based test cases?

  2. How many defects were found per hour by the tester(s) who executed the "business as usual" test cases?

  3. How many defects were found per hour by the tester who executed the pairwise / DoE-based tests?

  4. How many defects were identified overall by each plan's tests?

These four simple measurements will typically demonstrate dramatic improvements in:

  • Speed of test case identification and documentation

  • Efficiency in defects found per hour

As well as consistent improvements to:

  • Overall thoroughness of testing.
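For the record-keeping side of such a pilot, a tiny script is enough. The numbers below are purely hypothetical placeholders, not measured results; plug in your own pilot's data:

```python
# Hypothetical pilot numbers, purely illustrative -- substitute your own.
pilot = {
    "business_as_usual": {"defects_found": 14, "execution_hours": 16.0},
    "pairwise_doe":      {"defects_found": 13, "execution_hours": 6.5},
}

def defects_per_hour(result):
    """Efficiency metric: defects found per hour of test execution."""
    return result["defects_found"] / result["execution_hours"]

for approach, result in pilot.items():
    print(f"{approach}: {defects_per_hour(result):.2f} defects/hour "
          f"({result['defects_found']} defects in "
          f"{result['execution_hours']} hours)")
```

Tracking the test-creation time and overall defect counts alongside this rate covers all four measurements above.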


A Suggestion: Experiment / Learn / Get the Data / Let the Efficiency and Effectiveness Findings Guide You

I would be thrilled if this blog post gave you the motivation to explore this testing approach and measure the results. Whether you've used similar-sounding techniques or never heard of DoE-based software testing methods, whether you're a software testing newbie or a grizzled veteran, I suspect the experience of running a structured proof-of-concept pilot (and seeing the dramatic benefits I'm confident you'll see) could be a watershed moment in your testing career. Try it! If you're interested in conducting a pilot, I'd be happy to help get you started, and if you'd be willing to share the results of your pilot publicly, I'd be able to provide ongoing advice and test plan review. Send me an email or leave a comment.

To the grizzled and skeptical veterans (and yes, Mr. Shrini Kulkarni / @shrinik, who tweeted "@Hexawise With all due respect. I can't credit any technique the superpower of 2X defect finding capability. sumthng else must be goingon" before you actually conducted a proof of concept using Design of Experiments-based testing methods and analyzed your findings, I'm lookin' at you), I would (re)quote Sophocles: "One must try by doing the thing; for though you think you know it, you have no certainty until you try." For newer testers, eager to expand your testing knowledge (and perhaps gain an enormous amount of credibility by taking the initiative, while you're at it), I'd (re)quote Cole Porter: "Experiment and you'll see!"

I'd welcome your comments and questions. If you're feeling, "Sounds too good to be true, but heck, I can secure a tester for half a day to run some of these DoE-based / pairwise tests and gather some data to see whether or not it leads to a step-change improvement in efficiency and effectiveness of our testing" and you're wondering how you'd get started, I'd be happy to help you out and do so at no cost to you. All I'd ask is that you share your findings with the world (e.g., in your blog or let me use your data as the firms did with their findings in the "Combinatorial Software Testing" article below).



By: Justin Hunter on Jan 27, 2010

Categories: Combinatorial Testing, Design of Experiments, Hexawise test case generating tool, Multi-variate Testing, Software Testing

I responded to a recent blog post written by Gareth Bowles today and was struck - again - that a defect that must have been seen >10 million times by now has still not been corrected. When anyone responds to a blog post on Blogger.com, the stat counter says "1 comments" instead of correctly stating "1 comment." What's up with that?




The clothing company Lands' End (with the apostrophe erroneously after the s instead of before it) has a bizarre but somewhat logical explanation for why they have printed their grammatical-mistake-laden brand name on millions of pieces of clothing. According to one version of the story I have heard, they printed their first brochures with the typo and couldn't afford to get it changed. I also remember reading a more detailed explanation in a catalog in the late 80's to the effect that by the time the company management realized their mistake and tried to get trademark protection on "Land's End" they discovered that another firm already had trademarked rights to that name. Quick internet searches can't verify that so perhaps my memory is just playing tricks on me. But I digress. Here's the defect I wanted to highlight with this post:


[Screenshot: Blogger.com comment counter reading "1 comments"]


For Blogger.com to leave the extra "s" in has me stumped for several reasons. First, this defect has been seen by a ton of people, as Blogger.com is, according to Alexa's site tracking, the world's 7th most popular site. Second, Blogger.com is owned by Google (among the most competent, quality-oriented IT wizards on the planet), and no trademark protection is preventing the correction. Third, it would seem to be such an easy thing to fix. Fourth, other sites (like WordPress) don't make the same mistake. Fifth, it doesn't seem like a "style preference" issue (like spelling traveled with one "L" or two); it seems to me like a pretty clear case of a mistake. It would be a mistake to say "one cars," "one computers," or "one pedantic grammarians"; similarly, it is a mistake to say "one comments." What gives? Anyone have any ideas?

For anyone wondering where the ">10 million times" figure came from, it is pure conjecture on my part. If anyone has a reasoned way to refute or confirm it or propose a better estimate, I'm all ears.

By: Justin Hunter on Dec 9, 2009

Categories: Inexplicable Mysteries, Software Testing

Dave Whalen posted a good piece here asserting that software testing, done well, requires a blend of "Science" and "Art". I recommend it. (He also has a good post about testing databases here).




He includes the statement below, which I agree with. If you are a software tester and have any doubts about whether all of these methods work (pairwise software testing in particular), I would encourage you to conduct a pilot project on your own and measure the results achieved with and without the technique applied.


From the scientific side, testing can include a number of proven techniques such as equivalency class testing, boundary value analysis, pair-wise testing, etc. These techniques, if used properly, can reduce test times and focus on finding the bugs where they tend to hang out - much like a porch light on a summer night.


My response to Dave's post, included below, is not especially profound or even well-written, but, hey, I'm in a hurry in the pre-Thanksgiving rush and the topic hit close to home so I couldn't resist jotting a little something. Enjoy. Please let me know your thoughts / reactions if you have any.



Very well said!

I wholeheartedly, enthusiastically agree with your premise. I also wish that more people saw things the same way.

My father co-wrote Statistics for Experimenters, which describes the “art and science” within the Design of Experiments ("DoE") field of applied statistics. Well-run manufacturing companies use DoE techniques in their manufacturing processes. Many companies, such as Toyota, see them as an absolutely fundamental part of their processes (yet unfortunately, software testers, who could use DoE techniques such as pairwise and other forms of combinatorial testing, are often ignorant about how to use them properly, and the software testing industry as a whole dramatically under-utilizes such techniques… but I digress).

I brought the book up because it opens with a good example relevant to the points you made. To win at the game of 20 Questions, it is useful to know “the science” of game theory and DoE: choose questions so that there is a 50/50 chance that the answer will be Yes. Someone who knows this technique, all else being equal, will win more because of their “scientific” approach than someone who doesn’t. And yet… other stuff (whether subject matter expertise in this example, or subject matter expertise and “artistic” Exploratory Testing in your example) is indispensable as well.
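The 50/50 rule in the 20 Questions example is binary search in disguise; a couple of lines make the arithmetic concrete (illustrative only):

```python
import math

def questions_needed(candidates):
    """With perfect 50/50 questions, each answer halves the candidate set,
    so k yes/no questions distinguish at most 2**k possibilities."""
    return math.ceil(math.log2(candidates))

# 20 well-chosen questions are enough to pin down one subject
# out of roughly a million equally likely possibilities.
print(questions_needed(1_000_000))
```

That is exactly the information-theoretic kernel the book's opening example is getting at: each well-designed question (or experiment) extracts the maximum possible information.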

You can’t truly excel at either 20 Questions or software testing unless you have a good mix of “science” (governed by mathematical principles, proven methods of DoE, etc.) and “art” (governed by experience, instincts, and subject matter expertise).

By: Justin Hunter on Nov 24, 2009

Categories: Combinatorial Testing, Efficiency, Pairwise Testing, Software Testing, Uncategorized