When we built our test case design tool, we thought:

  1. When used thoughtfully, this test design approach is powerful.

  2. The kinds of things going on "underneath the covers," like the optimization algorithms and combinatorial coverage, risk confusing and alienating potential users before they get started.

  3. Designing an easy-to-use interface is absolutely critical. (We kept the "delighter" in our MVP release.)

  4. As we have continued to evolve, we have (a) kept focusing on design, and (b) having learned that customers don't find the tool itself hard to use but do sometimes find the test design principles confusing, added self-paced training modules to the tool (in addition to training and help web resources and Hexawise TV tutorials).

 

[Image: View of the screen showing a user's progress through the integrated learning modules in Hexawise]

 

Our original hope was that if we focused on design (and iterated it based on user feedback), Hexawise would "sell itself" and users would tell their friends, driving adoption through word of mouth. That's exactly what we've seen.

We did learn that we needed to offer more help getting started with the test design principles needed to successfully create tests. We knew that learning curve was a bit of a hurdle, but it turned out to be more of a hurdle than we anticipated, so we have put more resources into helping users get over that initial resistance. And we have increased the assistance to clients on how to think differently about creating test plans.

A recent interview with Andy Budd takes an interesting look at the role of design in startups.

Andy Budd: Design is often one of the few competitive advantages that people have. So I think it’s important to have design at the heart of the process if you want to cross that chasm and have a hugely successful product. ...

Des Traynor: If a product is bought, it means that users are so attracted to it that they’ll literally chase it down and queue up for it. If it’s sold, it means that there are people on commission trying to force it into customers’ hands. And I find design can often be the key difference between those two types of models for business.

Andy: There’s a lot of logic in spending less money on marketing and sales, and more money on creating a brilliantly delightful product. If you do something that people really, really want, they will tell their friends.

 

As W. Edwards Deming said in Out of the Crisis:

Profit in business comes from repeat customers, customers that boast about your product and service, and that bring friends with them.

 

The benefits of delighting customers so much that they help promote your product are enormous. This result is about the best one any business can hope for. And great design is critical in creating a user experience that delights users so much they want to share it with their friends and colleagues.

The article also discusses one of the difficult decision points for a software startup. The minimum viable product (MVP) is a great idea to help test out what customers actually want (not just what they say they want). It is a very useful concept (pushed into many people's consciousness by the popularity of lean startup and agile software development).

MVP is really about testing the marketplace. The aim is to get a working product in people's hands and to learn from what happens. If the user experience is a big part of what you are offering (and normally it should be), a poor user experience (Ux) on an MVP is not a great way to learn about the market for what you are thinking of offering.

In my opinion, adding features to existing software can be tested in an MVP way with less concern for Ux than completely new software can, though I imagine some would want a great Ux in this case too. My experience is that users who appreciate your product can understand the rough Ux in a mock-up and separate the usefulness from the somewhat awkward Ux. This is especially true, for example, with internal software applications where the developers can directly interact with the users (and you can select people you know who can separate the awkward temporary Ux from the usefulness of a new feature).

 

Related: The two weeks of on-site visits with testing teams proved to be a great way to: (a) reconnect with customers, (b) get actionable input about what users like / don’t like about our tool, (c) identify new ways we can continue to refine our tool, and even (d) understand a couple unexpected ways teams are using it. - Automatically Generating Expected Results for Tests in Hexawise - Pairwise and Combinatorial Software Testing in Agile Projects

By: John Hunter on Sep 17, 2013

Categories: Experimenting

Many teams are trying to generate unusually powerful and varied sets of software tests by using Design of Experiments-based methods to generate many or most of their tests. The two most popular software test design methods are orthogonal array testing and pairwise testing. This article describes how these two approaches are similar but different and suggests that in most cases, pairwise testing is preferable.

Before advancing, it may be worth pointing out that orthogonal array testing is also known as OA or OATS. Similarly, pairwise testing is sometimes referred to as all pairs testing, allpairs testing, pair testing, pair-wise testing, or simply 2-way testing. The difference between these two very similar approaches is that orthogonal array-based solutions must meet the same coverage goal that pairwise solutions do (i.e., that every pair of inputs is tested at least once) plus an additional hurdle/characteristic: that there be a uniform distribution throughout the domain.
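To make that coverage goal concrete, here is a minimal Python sketch (the parameters and values are made up for illustration) that enumerates every pair a pairwise suite must cover and reports which pairs a given set of tests misses:

```python
from itertools import combinations, product

# Hypothetical parameters for illustration only.
parameters = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS", "Linux"],
    "account": ["free", "paid"],
}

def uncovered_pairs(tests, parameters):
    """Return the value pairs not yet covered by any test.

    Each test is a dict mapping parameter name to a chosen value.
    The pairwise coverage goal is met when this returns an empty set.
    """
    names = list(parameters)
    required = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va, vb in product(parameters[a], parameters[b])
    }
    covered = {
        ((a, test[a]), (b, test[b]))
        for test in tests
        for a, b in combinations(names, 2)
    }
    return required - covered

# Exhaustive testing needs 3 * 3 * 2 = 18 tests. A pairwise suite
# reaches zero uncovered pairs with far fewer (9 suffice here, and at
# least 9 are needed just for the browser/os pairs).
exhaustive = [dict(zip(parameters, combo)) for combo in product(*parameters.values())]
print(len(exhaustive), len(uncovered_pairs(exhaustive, parameters)))
```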

I have studied the question of how software testing inputs can be combined most efficiently and effectively fairly steadily for the last 7 years. I started by searching the web for "Design of Experiments" and "software testing" and found references to Dr. Madhav Phadke (who, by coincidence, turned out to be a former student of my father's).

  • I discovered that Dr. Phadke had designed RDExpert which, although it had been created primarily to help with Research & Design projects in manufacturing settings, could also be used to select small sets of powerful tests in software testing projects, using the Orthogonal Array-based test selection criteria.

  • I used RDExpert to create test sets (and compared those test sets against sets of tests that had been selected manually by software testers)

  • I gathered results by asking one tester to execute the manually selected tests and another tester to execute the Orthogonal Array-based tests; the OA-based tests dramatically outperformed the manually-selected ones in terms of defects found per tester hour and defects found overall.

So, in short, I had confirmed to my satisfaction that an OA-based test data combination strategy was far more effective than manually selecting combinations for the kinds of projects I was working on, but I was curious whether other techniques worked better.

 

After more study I have concluded that:

  • Pairwise is more efficient and effective than orthogonal arrays for software testing.

  • Orthogonal Arrays are more efficient and effective for manufacturing, agriculture, advertising, and many other settings.

 

And we have built Hexawise as a software tool to help software producers test their software, based on what I have learned from my experience. We take full advantage of the greatly increased efficiency and effectiveness of letting testers determine what needs to be tested and letting software algorithms quickly create comprehensive test plans that provide more coverage with dramatically fewer tests.

But we also go well beyond this to create a software as a service solution that gives the software testing team many other advantages, such as: automatically generating Expected Results in test scripts, automated importing of data from Excel or mind maps, exporting tests into other tools, preventing impossible-to-test-for values from appearing together (illustrated in the sketch below), and much more.
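As a rough illustration of that last point (the browser/OS constraint below is a made-up example, not how Hexawise is implemented), invalid pairings can be filtered out before any tests are generated:

```python
from itertools import product

# Hypothetical constraint: suppose "Safari" cannot run on "Windows"
# in the environments under test, so no test should pair those values.
parameters = {
    "browser": ["Chrome", "Firefox", "Safari"],
    "os": ["Windows", "macOS"],
}

def is_possible(test):
    """Reject combinations that cannot occur in the real system."""
    return not (test["browser"] == "Safari" and test["os"] == "Windows")

candidates = []
for combo in product(*parameters.values()):
    test = dict(zip(parameters, combo))
    if is_possible(test):
        candidates.append(test)

print(len(candidates))  # 5 of the 6 raw combinations remain
```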

 

Why is a pairwise testing strategy better than an orthogonal array strategy?

  • Pairwise testing almost always requires fewer tests than orthogonal array-based solutions (it is possible, in some situations, for them to have an equal number of tests).

  • Remember, the reason that orthogonal array-based solutions require more tests than a pairwise solution to reach the coverage goal of testing all pairs of test conditions together in at least one test is the additional hurdle/characteristic that orthogonal array testing has, i.e., that there be a uniform distribution throughout the domain.

  • The "cost" of the extra tests (AKA experiments) is worth paying in many settings outside of the software testing industry because the results are non-binary in those tests. Someone seeking a desired darkness and gloss and luminosity and luster for a particular shade of green in the processing of film, for example, would benefit from with the information obtained from the added information gathered from orthogonal arrays.

  • In software testing, however, the added costs imposed by the extra tests are not worth it. You're generally not seeking some ideal point in a continuum; you're looking to see whether two specific pieces of data will trigger a defect when they appear in the same transaction. To identify such binary outcomes most efficiently and effectively, what you want is a pairwise solution (with fewer tests), not a longer list of orthogonal array-based tests (see the sketch after this list).
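To see why dropping the uniform-distribution requirement saves tests, consider this naive greedy pairwise generator: it stops the moment every pair is covered, with no concern for balance. This is a toy sketch for illustration, not the algorithm Hexawise or any other real tool uses:

```python
from itertools import combinations, product

def greedy_pairwise(parameters):
    """Greedily build a small test suite covering every value pair."""
    names = list(parameters)
    remaining = {
        ((a, va), (b, vb))
        for a, b in combinations(names, 2)
        for va, vb in product(parameters[a], parameters[b])
    }
    candidates = [dict(zip(names, combo)) for combo in product(*parameters.values())]
    suite = []
    while remaining:
        # Pick the candidate test covering the most uncovered pairs.
        best = max(
            candidates,
            key=lambda t: sum(
                ((a, t[a]), (b, t[b])) in remaining
                for a, b in combinations(names, 2)
            ),
        )
        suite.append(best)
        for a, b in combinations(names, 2):
            remaining.discard(((a, best[a]), (b, best[b])))
    return suite

# Four hypothetical 3-valued parameters: exhaustive testing needs
# 3**4 = 81 tests; this sketch typically covers all 54 pairs in
# roughly 9-12 tests, because each pair only has to appear once.
params = {name: ["v1", "v2", "v3"] for name in ["A", "B", "C", "D"]}
print(len(greedy_pairwise(params)))
```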

 

Let me also add these points.

  • First, unlike some of my other views on combinatorial test design, my opinion on this narrow subject is not based on multiple empirical studies; it is based on (a) the reasoning I laid out above, (b) a dozen or so conversations I've had with PhDs who specialize in the intersection of "Design of Experiments" and software test design, and (c) anecdotal evidence from using both methods.

  • Secondly, to my knowledge, very few, if any, studies have gathered empirical data showing benefits of pairwise solutions vs. orthogonal array-based solutions in software testing scenarios.

  • Thirdly, I strongly suspect that if you asked Dr. Phadke, he would give you his reasons why orthogonal array-based solutions are appropriate (and even preferable to pairwise test case selection methods) for certain kinds of software projects. I have a huge amount of respect for both him and his son.

 

Time doesn't allow me to get into this last point much now, but "mixed strength" tests are another, even more powerful test design approach for you to be aware of as well. With mixed strength testing solutions, the test designer is able to select a default coverage strength for the entire plan (e.g., pairwise / AKA 2-way coverage) and, in the same set of tests, select certain high priority values to receive higher coverage strength. For example, selecting 4-way coverage strength for "Credit Rating" and "Income" and "Loan Amount" and "Loan to Value Ratio" would give you a plan that achieved pairwise coverage for everything in the plan plus comprehensive coverage for every imaginable combination of values from those four high priority parameters. This approach allows you to focus on risk-based testing considerations.
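As a rough sketch of what that 4-way selection implies (the values below are hypothetical stand-ins for whatever a real plan would use), every combination of values across the four high-priority parameters must appear together in at least one test:

```python
from itertools import product

# Hypothetical values; the parameter names are the four high-priority
# ones from the example above.
priority = {
    "Credit Rating": ["A", "B", "C"],
    "Income": ["low", "medium", "high"],
    "Loan Amount": ["small", "large"],
    "Loan to Value Ratio": ["<80%", ">=80%"],
}

# 4-way strength on these parameters means each of these combinations
# must appear in at least one test, on top of the plan-wide pairwise
# requirement covering every other parameter.
required_4way = list(product(*priority.values()))
print(len(required_4way))  # 3 * 3 * 2 * 2 = 36 required combinations
```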

 

Sorry if I got a bit long-winded. It's a topic I'm passionate about.

Originally posted on Stack Exchange. Additional note added after the first 3 comments were submitted:

@Hannibal, @Peter K., and @MichaelF, Thanks for your comments! If you'd like to read more about this stuff, I recommend the multiple links available through this "bundle of links" about pairwise testing and combinatorial testing. In particular, Michael Bolton's article on pairwise testing is directly relevant and very clearly written. It is one of the few introductory articles around that accurately describes the difference between orthogonal array-based solutions and pairwise solutions. If I remember correctly though, the example Michael uses is a rare exception to the rule; the OA solution has the same number of tests as an optimal pairwise solution does.

Related: The Empirical Evidence for Using Pairwise and Combinatorial Software Testing - 3 Strategies to Maximize Effectiveness of Your Tests - Hexawise TV

More than 100 Fortune 500 firms use Hexawise to design their software tests. While large companies pay six figures per year for enterprise licenses, Hexawise is available for free to schools, open source projects, other non-profits, and teams of up to 5 users from any kind of company. Sign up for your Hexawise account.

By: John Hunter and Justin Hunter on Jun 11, 2013

Categories: Combinatorial Testing, Design of Experiments, Efficiency, Multi-variate Testing, Pairwise Software Testing, Software Testing, Testing Strategies, Experimenting

I saw these words of advice from Conrad Fujimoto in an email and thought they were worth passing on. I'm using them with Conrad's permission:

Over the years, I’ve taught many software testing courses. Trainees are appreciative of the ideas, insights, and techniques presented to them. They are convinced that the principles and methods taught are useful and effective. Yet, often I hear the phrase “but, that won’t work here.”

Some of the reasons given for such pessimism are resource constraints, organizational politics, lack of testing focus, and little management understanding and support. The trainees knew what adjustments needed to be made, but they felt powerless to effect any meaningful change. Fortunately, much can be accomplished through strategic planning and awareness of opportunities.

 

Some ideas:

  1. When no one is taking a leadership role in improving the process, consider assuming that role (people are often happy to see someone take charge).

  2. Seek opportunities to form relationships and work with others who share the same concerns about the existing process.

  3. Establish your authority and credentials for speaking on testing matters by recognizing and promoting the successes of your testing team.

  4. Be proactive and constantly monitor and report the progress of both development and testing against published schedules.

  5. Be ready to implement corrective actions or invoke contingency plans in the event of schedule slippages; where appropriate, suggest process changes that reduce future slippages.

  6. Always perform test closure activities and ensure that lessons learned are recorded and reported.

  7. Get testing representation in requirement review meetings and on the change control board.

  8. Foster an attitude of continuous improvement; build on your successes.

  9. As software testers, we have a professional obligation to do our best in assisting our organizations to build quality software. We may not necessarily have the term “manager” in our job title, but we still have the ability to be leaders. We can guide our organizations to creating better software.

 

Conrad Fujimoto is an expert instructor and consultant. He teaches Software Tester Certification for SQE Training.

George Box, a good friend (and a close colleague of my father's), put the problem of getting new ideas adopted this way (from Management Matters by John Hunter):

  1. It won’t work
  2. It won’t work here
  3. I thought of it first

John's book, and blog, discuss the challenges of actually getting improvements put into action in the workplace. Getting past the resistance to new ideas, new ways of working and change is more difficult than it should be. But there are practical steps you can take to get improvements adopted, including those mentioned above.

Sadly, George Box recently passed away. You can see George in this video of him discussing the art of discovery (which is a big part of what software testers do - discovering how the software works).

 

Related: Growing the Application of Management Improvement Ideas in Your Organization - Outcome and In-Process Measures - Improving Software Development with Automated Tests

By: John Hunter and Justin Hunter on May 2, 2013

Categories: Lean, Testing Strategies, Experimenting

Based on my experience over dozens of pilot projects where we've gathered hard data, many software testers would literally more than double their productivity overnight on many projects if they used combinatorial test design methods intelligently (in comparison to selecting test case conditions by hand).

In this 10 project study, Combinatorial Software Testing Case Studies, we compared testers who executed manually-selected test cases to testers who executed test cases created by a combinatorial testing algorithm designed to achieve as much coverage as possible in as few tests as possible; the latter group found, on average, 2.4 times more defects per tester hour.

How many participating testers thought they would see dramatic increases before they gathered the data? Almost none (even among testers who had been told about the prior experiences of their colleagues on similar projects). How many participating testers are glad that they took the time to use the scientific method?

  • hypothesis

  • experiment

  • evidence

  • revise world-view

Every one of them.

What stops more people from using the scientific method on their projects and gathering data to prove or disprove hypotheses like the one addressed in the study above? A pilot could take less than 2 days of one person's time. If past experience is any indication of future results (and granted, it isn't always), the odds appear pretty good that the results would show productivity doubling (as measured in defects found per tester hour).

What's stopping the testing community from doing more such analysis? Perhaps more importantly, what is stopping you from gathering this kind of data on your project?

Additional empirical studies on the effectiveness of software testing strategies would greatly benefit the software testing community.

 

Related: Hexawise case studies on software testing improvement (health insurance, IT consulting and mortgage processing studies) - How Not to Design Pairwise Software Tests - 3 Strategies to Maximize Effectiveness of Your Tests

By: Justin Hunter on Mar 5, 2013

Categories: Combinatorial Testing, Efficiency, Pairwise Software Testing, Testing Case Studies, Testing Strategies, Experimenting