We have created a new site to highlight Hexawise videos on combinatorial, pairwise, and orthogonal array software testing. We have posted videos on a variety of software testing topics, including selecting appropriate test inputs for pairwise and combinatorial test design, creating a pairwise test plan, and using value expansions for values in the same equivalence class.

Here is a video with an introduction to Hexawise:



Subscribe to the Hexawise TV blog. And if you haven't subscribed to the RSS feed for the main Hexawise blog, do so now.

By: John Hunter on Nov 20, 2013

Categories: Combinatorial Testing, Hexawise test case generating tool, Multi-variate Testing, Pairwise Testing, Software Testing Presentations, Testing Case Studies, Testing Strategies, Training, Hexawise tips

Software testers should be test pilots. Too many people treat software testing as if it were the pre-flight checklist an airline pilot runs through.


The checklists airline pilots use before each flight are critical. Checklists are extremely valuable tools that help assure the steps in a process are followed, and they are valuable in many professions. Atul Gawande explored this in The Checklist: "If something so simple can transform intensive care, what else can it do?"


Sick people are phenomenally more various than airplanes. A study of forty-one thousand trauma patients—just trauma patients—found that they had 1,224 different injury-related diagnoses in 32,261 unique combinations for teams to attend to. That’s like having 32,261 kinds of airplane to land. Mapping out the proper steps for each is not possible, and physicians have been skeptical that a piece of paper with a bunch of little boxes would improve matters much. In 2001, though, a critical-care specialist at Johns Hopkins Hospital named Peter Pronovost decided to give it a try. … Pronovost and his colleagues monitored what happened for a year afterward. The results were so dramatic that they weren’t sure whether to believe them: the ten-day line-infection rate went from eleven per cent to zero. So they followed patients for fifteen more months. Only two line infections occurred during the entire period. They calculated that, in this one hospital, the checklist had prevented forty-three infections and eight deaths, and saved two million dollars in costs.


Checklists are extremely useful in software development, and using checklist-type automated tests is a valuable part of maintaining and developing software. But those pass-fail tests are the software equivalent of checklists: they provide a standardized way to verify that planned conditions hold. They are not equivalent to thoughtful testing by a software testing professional.

I have been learning about software testing for the last few years, and the distinction between testing and checking software was new to me. I learned about this idea by reading experts in the field, especially James Bach and Michael Bolton:


Testing is the process of evaluating a product by learning about it through experimentation, which includes to some degree: questioning, study, modeling, observation and inference.

(A test is an instance of testing.)

Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.


I think this is a valuable distinction to understand when looking to produce reliable and useful software. Both are necessary, and both are done too little in practice. But testing (as defined above) is especially underused. Checking has increased significantly in the last 5 years, which is good; now we really need to focus on software testing: thoughtful experimenting.
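The definition of checking above can be made concrete with a small sketch. The function and values here are made up for illustration; the point is only that a check applies a pre-specified pass/fail rule to a specific observation, with no human judgment involved:

```python
# A "check": an algorithmic decision rule applied to a specific
# observation of the product. It can only confirm what we thought to ask.
def cart_total(prices, discount=0.0):
    return sum(prices) * (1 - discount)

def check_cart_total():
    # Pass/fail by rule; no questioning, modeling, or exploration involved.
    assert cart_total([10.0, 20.0]) == 30.0
    assert cart_total([10.0, 20.0], discount=0.5) == 15.0

check_cart_total()
print("all checks passed")
```

A machine can run this forever, but it will never notice a problem no one encoded a rule for; that is the gap testing (experimentation, observation, inference) is meant to fill.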


Related: Mistake Proofing the Deployment of Software Code - Improving Software Development with Automated Tests - Rapid Software Testing Overview Webcast by James Bach - Checklists in Software Development

By: John Hunter on Nov 13, 2013

Categories: Checklists, Software Testing, Testing Checklists, Testing Strategies

Hexawise allows you to adjust testing coverage to apply more thorough coverage to selected, high-priority areas. Mixed-strength test plans let you select different levels of coverage for different parameters.

Increasing from pairwise to "trips" (3-way) coverage expands the test plan so that bugs caused by three parameters interacting can be found. That is a good thing, but the tradeoff is that it requires more tests to catch the interactions.

The mixed-strength option that Hexawise provides lets you select a higher coverage level for some parameters in your test plan, so you can balance increased test thoroughness against the workload created by additional tests.
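A quick way to see why raising coverage strength raises test counts: every combination of values across the t parameters with the most values must appear in at least one test, so a t-way plan needs at least the product of the t largest value counts. A sketch with made-up value counts (not any particular Hexawise plan):

```python
from math import prod

# Hypothetical value counts for 7 parameters (illustrative only).
value_counts = sorted([4, 4, 3, 3, 2, 2, 2], reverse=True)

# Lower bounds on test counts: a t-way plan must include every
# combination of values of the t parameters with the most values.
pairwise_floor = prod(value_counts[:2])  # 2-way (pairwise) coverage
triples_floor = prod(value_counts[:3])   # 3-way ("trips") coverage

print(pairwise_floor, triples_floor)  # 16 48
```

Tripling the strength of only the few highest-priority parameters, as mixed-strength plans do, keeps the plan much closer to the pairwise floor than raising every parameter to 3-way would.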




See our help section for more details on how to create a risk-based testing plan that focuses more coverage on higher priority areas.

As that example shows, Hexawise lets you focus additional thoroughness on the 3 highest-priority parameters with just 120 tests while also providing full pairwise coverage of all factors. Mixed-strength test plans are a great tool for adding extra value to your test plans.


Related: How Not to Design Pairwise Software Tests - How to Model and Test CRUD Functionality - Designing a Test Plan with Dependent Parameter Values

By: John Hunter on Nov 6, 2013

Categories: Combinatorial Testing, Efficiency, Hexawise test case generating tool, Hexawise tips, Software Testing Efficiency, Testing Strategies

Recently, the following defect made the news and was one of the most widely-shared articles on the New York Times web site. Here's what the article, Computer Snag Limits Insurance Penalties on Smokers, said:

A computer glitch involving the new health care law may mean that some smokers won’t bear the full brunt of tobacco-user penalties that would have made their premiums much higher — at least, not for next year.

The Obama administration has quietly notified insurers that a computer system problem will limit penalties that the law says the companies may charge smokers, The Associated Press reported Tuesday. A fix will take at least a year.


Tip of the Iceberg

This defect was entirely avoidable and predictable. It's safe to expect that hundreds (if not thousands) of similar defects related to Obamacare IT projects will emerge in the weeks and months to come. Had testers used straightforward software test design prioritization techniques, bugs like these would have been easily found. Let me explain.


There's no Way to Test Everything

If the developers and/or testers were asked how this bug could sneak past testing, they might at first say something defensive, along the lines of: "We can't test everything! Do you know how many possible combinations there are?" If you include 40 variables (demographic information, pre-existing conditions, etc.) in the scope of this software application, there would be:


41 quadrillion possible scenarios to test. That's not a typo: 41 QUADRILLION possible combinations. It would take 13 million years to execute those tests if we could execute 100 tests every second. There's no way we can test all possible combinations, so bugs like these are inevitably going to sneak through testing undetected.
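The time arithmetic is easy to verify for yourself:

```python
# Rough arithmetic behind the "13 million years" figure.
total_scenarios = 41 * 10**15      # 41 quadrillion combinations
tests_per_second = 100
seconds_per_year = 60 * 60 * 24 * 365

years = total_scenarios / tests_per_second / seconds_per_year
print(f"{years:,.0f} years")       # on the order of 13 million years
```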


The Wrong Question

When the developers and testers of a system say there is no way they could realistically test all the possible scenarios, they're addressing the wrong challenge. "How long would it take to execute every test we can think of?" is the wrong question. It is interesting but ultimately irrelevant that it would take 13 million years to execute those tests.


The Right Question

A much more important question is "Given the limited time and resources we have available for testing, how can we test this system as thoroughly as possible?" Most teams of developers and software testers are extremely bad at addressing this question, and they don't realize how bad they are; the Dunning-Kruger effect often prevents people from understanding the extent of their incompetence, but that's a different post for a different day.

After documenting a few thousand tests designed to cover all of the significant business rules and requirements they can think of, testers run out of ideas, shrug their shoulders in the face of the overwhelming number of possible scenarios, and declare their testing strategy sufficiently comprehensive. Whenever you're talking about hundreds or thousands of tests, that test selection strategy is a recipe for incredibly inefficient testing: it misses large numbers of easily avoidable defects and wastes time testing certain things again and again. There's a better way.


The Straightforward, Effective Solution to this Common Testing Challenge: Testers Should Use Intelligent Test Prioritization Strategies

If you create a well-designed test plan using scientific prioritization approaches, you can reduce the number of individual tests tremendously. It comes down to testing the system as thoroughly as possible in the time available for testing, and there are well-proven methods for doing just that.


There are Two Kinds of Software Bugs in the World

Bugs that don't get found by testers sneak into production for one of two main reasons, namely:

  • "We never thought about testing that" - An example that illustrates this type of defect is one James Bach told me about: faulty calculations caused by an overheated server, which had overheated because of a blocked vent. You can't really blame a tester for not thinking to include a scenario with a blocked vent.

  • "We tested A; it worked. We tested B; it worked too.... But we never tested A and B together." This type of bug sneaks by testers all too often. Bugs like this should not sneak past testers. They are often very quick and easy to find. And they're so common as to be highly predictable.


Let's revisit the high-profile Obamacare bug that will impact millions of people and take more than a year to fix. Here's all that would have been required to find it:

  • Include an applicant with a relatively high (pre-Medicare) age. Oh, and they smoke.


Was the system tested with a scenario involving an applicant who had a relatively high age? I'm assuming it must have been.

Was the system tested with a scenario involving an applicant who smoked? Again, I'm assuming it must have been.

Was the system tested with a scenario involving an applicant who had a relatively high age and who also smoked? That's the combination that triggers this important bug; apparently it wasn't found during testing (or at least not found early enough).
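The pattern behind this kind of bug is easy to reproduce. Here is a deliberately buggy, entirely hypothetical premium calculator (the rates and cap are invented) in which each input behaves correctly on its own and the defect only appears when the two values occur together:

```python
def premium(age, smoker, base=100.0):
    """Hypothetical premium calculation with a 2-way interaction bug."""
    price = base
    if smoker:
        price *= 1.5              # tobacco surcharge
    if age >= 50:
        price *= 1.2              # age surcharge
    if smoker and age >= 50:
        price = base              # BUG: combined surcharge silently dropped
    return price

print(premium(60, smoker=False))  # 120.0 -- high age alone: correct
print(premium(30, smoker=True))   # 150.0 -- smoking alone: correct
print(premium(60, smoker=True))   # 100.0 -- together: wrong, should be 180.0
```

A test suite that includes a high-age scenario and, separately, a smoker scenario passes cleanly; only a test combining the two values exposes the bug. That is exactly what pairwise coverage guarantees.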


If You Have Limited Time, Test All Pairs

Let's revisit the claim: "We can't execute all 13 million-years-worth of tests. Combinations like these are bound to sneak through, untested. How could we be expected to test all 13 million-years-worth of tests?" The last two sentences are preposterous.

  • "Combinations like these are bound to sneak through, untested." Nonsense. In a system like this, at a minimum, every pair of test inputs should be tested together. Why? The vast majority of defects in production today would be found simply by testing every possible pair of test inputs together at least once.

  • "How could we be expected to test all 13 million-years-worth of tests?" Wrong question. Start by testing all possible pairs of test inputs you've identified. Time-wise, that's easily achievable; it's also a proven way to cover a system quite thoroughly in a very limited amount of time.
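The reason all-pairs testing is achievable is that the number of value pairs grows far more slowly than the number of full combinations. A sketch with four hypothetical parameters (the names and values are invented for illustration):

```python
from itertools import combinations, product

# Hypothetical parameters and values (illustrative only).
parameters = {
    "Plan Type": ["A", "B", "C"],
    "Smoker": ["Yes", "No"],
    "Age Band": ["18-29", "30-49", "50-64"],
    "State": ["CA", "NY", "TX"],
}

# Exhaustive testing: one test per point in the full Cartesian product.
exhaustive = len(list(product(*parameters.values())))

# Pairwise testing only needs every value pair to appear in some test,
# and each single test covers one pair for every pair of parameters.
pairs_to_cover = sum(
    len(parameters[a]) * len(parameters[b])
    for a, b in combinations(parameters, 2)
)

print(exhaustive, pairs_to_cover)  # 54 45
```

Here 54 exhaustive tests collapse to roughly a dozen, since each test covers six pairs at once; scale the same arithmetic to 40 parameters and the gap becomes quadrillions of combinations versus double-digit tests.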


Design of Experiments is an Established Field that was Created to Solve Problems Exactly Like This; Testers are Crazy Not to Use Design of Experiments-Based Prioritization Approaches

The almost 100-year-old field of Design of Experiments is focused on finding out as much actionable information as possible in as few experiments as possible. These prioritization approaches have been used very widely and with great success in many industries, including advertising, manufacturing, drug development, and agriculture. While Design of Experiments test design techniques (such as pairwise testing and orthogonal array / OA testing) are increasingly being used by software testing teams, far more teams could benefit from these smart test prioritization approaches. We've written posts about how Design of Experiments methods are highly applicable to software testing here and here, and put an "Intro to Pairwise Testing" video here. Perhaps the reason this powerful and practical test prioritization strategy remains woefully underutilized by the software testing industry at large is that there are too few real-world examples explaining "this is what inevitably happens when this approach is not used... and here's how easy it would be to prevent it from happening to you in your next project." Hopefully this post helps raise awareness.


Let's Imagine We've Got One Second for Testing, Not 13 Million Years; Which Tests Should We Execute?

Remember how we said it would take 13 million years to execute all of the 41 quadrillion possible tests? That calculation assumed we could execute 100 tests a second. Now let's assume we have only one second to execute tests. How should we use that second? Which 100 tests should we execute if our goal is to find as many defects as possible?

If you have a Hexawise account, you can log in to view the test plan details and follow along in this worked example. To create a new free account in a few seconds, go to hexawise.com/free.

By setting the 40 different parameter values intelligently, we can maximize the testing coverage achieved in a very small number of tests. In fact, in our example, you would need to execute only 90 tests to cover every single pairwise combination.

The number of total possible combinations (or "tests") depends on how many parameters (items/factors) there are and how many options (parameter values) each parameter has. In this case, the total number of possible combinations of parameters and values equals 41 quadrillion.
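The total is simply the product of each parameter's value count, which is why it explodes so quickly. With 40 parameters, even modest value counts reach the quadrillions (the mix below is made up for illustration, not the real plan):

```python
from math import prod

# 40 hypothetical parameters: twenty with 2 values, fifteen with 3, five with 4.
value_counts = [2] * 20 + [3] * 15 + [4] * 5

total = prod(value_counts)
print(f"{total:,} possible combinations")  # about 15 quadrillion
```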



This screen shot shows a portion of the test conditions that would be included in the first 4 of the 90 tests needed to provide full pairwise coverage. Sometimes people are not clear about what "test every pair" means. To make this more concrete, here are a few specific examples of pairs of values tested together in the first part of test number 1:

  • Plan Type = A tested together with Deductible Amount = High

  • Plan Type = A tested together with Gender = Male

  • Plan Type = A tested together with Spouse = Yes

  • Gender = Male tested together with State = California

  • Spouse = Yes tested together with Smoker = Yes (and over 5 years)

  • And lots of other pairs not listed here



This screen shot shows a portion of the later tests. You'll notice that some values are shown in purple italics; those values are not providing new pairwise coverage. In the first tests, every single parameter value provides new pairwise coverage, while toward the end few parameter value settings do: once a specific pair has been tested, retesting it doesn't provide additional pairwise coverage. Sets of Hexawise tests are "front-loaded for coverage." In other words, if you need to stop testing at any point before the end of the complete set of tests, you will have achieved as much coverage as possible in the tests you have executed (whether that is 10 tests or 30 or 83). The pairwise coverage chart below makes this point visually; the decreasing number of newly tested pairs of values in each test accounts for the diminishing marginal returns per test.
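This "front-loaded" ordering falls out naturally from greedy pairwise generation: each new test is chosen to cover as many not-yet-covered pairs as possible, so early tests do the most work. Here is a toy sketch of the idea (a brute-force candidate scan that works for tiny plans; it is illustrative only and nothing like Hexawise's actual algorithm):

```python
from itertools import combinations, product

def greedy_pairwise(params):
    """Generate an all-pairs test set, covering the most new pairs first."""
    names = list(params)
    idx_pairs = list(combinations(range(len(names)), 2))
    uncovered = {
        (i, va, j, vb)
        for i, j in idx_pairs
        for va, vb in product(params[names[i]], params[names[j]])
    }
    tests = []
    while uncovered:
        # Brute-force scan: pick the full combination covering the most
        # not-yet-covered pairs (fine for toy plans, not 40 parameters).
        best = max(
            product(*(params[n] for n in names)),
            key=lambda t: sum(
                (i, t[i], j, t[j]) in uncovered for i, j in idx_pairs
            ),
        )
        tests.append(best)
        uncovered -= {(i, best[i], j, best[j]) for i, j in idx_pairs}
    return tests

plan = greedy_pairwise({
    "P1": ["a", "b", "c"],
    "P2": ["x", "y"],
    "P3": ["1", "2", "3", "4"],
})
print(len(plan), "tests instead of", 3 * 2 * 4)
```

Because each iteration grabs the biggest remaining chunk of coverage, stopping partway through the generated list still leaves you with near-maximal coverage for the number of tests you ran.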


You Can Even Prioritize Your First "Half Second" of Tests To Cover As Much As Possible!


This graph shows how Hexawise orders the test plan to provide the greatest coverage quickly. If you get through 37 of the 90 tests needed for full pairwise coverage, you have already achieved over 90% of the pairwise test coverage. The implication? Even if only 37 tests were executed, there would be a 90% chance that any given pair of values you might select at random would already have been tested together in the same test case.


Was Missing This Serious Defect an Understandable Oversight (Because of Quadrillions of Possible Combinations Exist) or was it Negligent (Because Only 90 Intelligently Selected Tests Would Have Detected it)?

A generous interpretation of this situation would be that it was "unwise" for testers to fail to execute the 90 tests that would have uncovered this defect.

A less generous interpretation would be that it was idiotic not to conduct this kind of testing.

The health care reform act will introduce many changes such as this. At an absolute minimum, health insurance firms should be conducting pairwise tests of their systems. Given the defect-finding effectiveness of pairwise testing coverage, testing systems any less thoroughly is patently irresponsible. And for health insurance software it is often wiser to expand to testing all triples or all quadruples, given the interactions among many variables in health insurance systems.

Incidentally, getting full 3-way test coverage (using the same example as above) would require 2,090 tests.


Related: Getting Started with a Test Plan When Faced with a Combinatorial Explosion - How Not to Design Pairwise Software Tests - Efficient and Effective Test Design

By: Justin Hunter on Sep 26, 2013

Categories: Combinatorial Software Testing, Hexawise test case generating tool, Multi-variate Testing, Pairwise Software Testing, Software Testing, Testing Strategies

When we built our test case design tool, we thought:

  1. When used thoughtfully, this test design approach is powerful.

  2. The kinds of things going on "underneath the covers," like the optimization algorithms and combinatorial coverage, risk confusing and alienating potential users before they get started.

  3. Designing an easy-to-use interface is absolutely critical. (We kept the "delighter" in our MVP release.)

  4. As we have continued to evolve, we've kept focusing on design. And having learned that customers don't find the tool itself hard to use, but do sometimes find the test design principles confusing, we've added self-paced training modules to the tool (in addition to training and help web resources and Hexawise TV tutorials).



View of the screen showing a user's progress through the integrated learning modules in Hexawise


Our original hope was that if we focused on design (and iterated based on user feedback), Hexawise would "sell itself," and users would tell their friends and drive adoption through word of mouth. That's exactly what we've seen.

We did learn that we needed to provide more help getting started with the test design principles needed to successfully create tests. We knew that was a bit of a hurdle, but it turned out to be more of a hurdle than we anticipated, so we have put more resources into helping users get over that initial resistance. And we have increased the assistance to clients on how to think differently about creating test plans.

A recent interview with Andy Budd takes an interesting look at the role of design in startups.

Andy Budd: Design is often one of the few competitive advantages that people have. So I think it’s important to have design at the heart of the process if you want to cross that chasm and have a hugely successful product. ... Des Traynor: If a product is bought, it means that users are so attracted to it, that they’ll literally chase it down and queue up for it. If it’s sold, it means that there are people on commission trying to force it into customer’s hands. And I find design can often be the key difference between those two types of models for business.

Andy: There’s a lot of logic in spending less money on marketing and sales, and more money on creating a brilliantly delightful product. If you do something that people really, really want, they will tell their friends.


As W. Edwards Deming said in Out of the Crisis:

Profit in business comes from repeat customers, customers that boast about your product and service, and that bring friends with them


The benefits of delighting customers so much that they help promote your product are enormous. This result is about the best one any business can hope for. And great design is critical in creating a user experience that delights users so much they want to share it with their friends and colleagues.

The article also discusses one of the difficult decision points for a software startup. The minimum viable product (MVP) is a great way to test what customers actually want (not just what they say they want). It is a very useful concept, pushed into many people's consciousness by the popularity of lean startup and agile software development.

MVP is really about testing the marketplace. The aim is to get a working product into people's hands and learn from what happens. If the user experience is a big part of what you are offering (and normally it should be), a poor user experience (Ux) in an MVP is not a great way to learn about the market for what you are thinking of offering.

In my opinion, adding features to existing software can be tested in an MVP way with less concern for Ux than completely new software, though I imagine some would want great Ux in that case too. My experience is that users who appreciate your product can understand the rough Ux of a mock-up and separate the usefulness from the somewhat awkward Ux. This is especially true with internal software applications, where developers can interact directly with the users (and you can select people you know can separate the awkward temporary Ux from the usefulness of a new feature).


Related: The two weeks of on-site visits with testing teams proved to be great way to: (a) reconnect with customers, (b) get actionable input about what users like / don’t like about our tool, (c) identify new ways we can continue to refine our tool, and even (d) understand a couple unexpected ways teams are using it. - Automatically Generating Expected Results for Tests in Hexawise - Pairwise and Combinatorial Software Testing in Agile Projects

By: John Hunter on Sep 17, 2013

Categories: Experimenting

tl;dr: When you have parameters that only have sensible values depending on certain conditions you should include a value like "N/A" or "Does not appear" for those parameters.


You can try this example out yourself using your Hexawise account. If you do not have an account yet you can create a demo account for free that lets you create effective test plans.

Let's take a simple, made up example from version 1 of a restaurant ordering system that has 3 parameters:

Entree: Steak, Chicken, Salmon
Salad: Caesar, House
Side: Fries, Green Beans, Carrots, Broccoli

Everything is just fine with our test plan for version 1, but then let's suppose the business decides that in version 2, people that order "Chicken" don't get a "Salad". Easy enough, we just make an invalid pair between "Chicken" and "Caesar" and "Chicken" and "House", correct? No, Hexawise won't let us. Why? Because then it has no value available for "Salad" to pair with "Chicken" as the "Entree".

But that's what we want! "Salad" will disappear from the order screen as soon as we select "Chicken". So there is no value. That's OK. We just need to add that as the value:

Entree: Steak, Chicken, Salmon
Salad: Caesar, House, Not Available
Side: Fries, Green Beans, Carrots, Broccoli

At this point we could create the invalid pairs between "Chicken" and "Caesar" and "Chicken" and "House", and Hexawise will allow it because there is still a parameter value, "Not Available", left to pair with "Chicken" in the "Salad" parameter.


If we do this though, we'll find that Hexawise will force a pairing between "Steak" and "Not Available" and "Salmon" and "Not Available". Not exactly what we wanted! So we can also add an invalid pair between "Steak" and "Not Available" and "Salmon" and "Not Available".

With these four invalid pairs, we have a working test plan for version 2, but rather than the four invalid pairs, this scenario is exactly why Hexawise has bi-directional married pairs. A bi-directional married pair between "Chicken" and "Not Available" tells Hexawise that every time "Entree" is "Chicken", "Salad" must be "Not Available" and every time "Salad" is "Not Available", "Entree" must be "Chicken". So it gives us precisely what we want for this scenario by creating just one bi-directional married pair rather than four invalid pairs.
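One way to see why the bi-directional married pair is the right tool here is that it is a biconditional constraint: Chicken if and only if Not Available. Filtering the full Cartesian product with that single rule (a sketch for checking the logic, not how Hexawise works internally) admits exactly the scenarios we want:

```python
from itertools import product

entrees = ["Steak", "Chicken", "Salmon"]
salads = ["Caesar", "House", "Not Available"]
sides = ["Fries", "Green Beans", "Carrots", "Broccoli"]

def valid(entree, salad):
    # Bi-directional married pair: Chicken if and only if Not Available.
    return (entree == "Chicken") == (salad == "Not Available")

scenarios = [s for s in product(entrees, salads, sides) if valid(s[0], s[1])]
print(len(scenarios), "valid scenarios out of", 3 * 3 * 4)  # 20 out of 36
```

The single `==` between the two conditions enforces both directions at once, which is exactly what the four separate invalid pairs were approximating.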

Now let's suppose version 3 of the menu system comes out, and now there is a fourth Entree, "Pork". And "Pork", being the other white meat, also does not have a salad option:

Entree: Steak, Chicken, Salmon, Pork
Salad: Caesar, House, Not Available
Side: Fries, Green Beans, Carrots, Broccoli

When we go to connect "Entree" as "Pork" and "Salad" as "Not Available" with a bi-directional married pair, Hexawise will rightly stop us. While we can logically say that every time "Entree" is "Chicken", "Salad" is "Not Available" and every time "Entree is Pork", "Salad" is "Not Available", we can't say the reverse. It's nonsensical to say that every time "Salad" is "Not Available", "Entree" is "Chicken" and every time "Salad" is "Not Available", "Entree" is "Pork".

This is precisely why Hexawise has uni-directional married pairs. What we do in this case is create a uni-directional married pair between "Chicken" and "Not Available" which says that every time "Entree" is "Chicken", "Salad" is "Not Available", but it's not the case that every time "Salad" is "Not Available", "Entree" is "Chicken". This of course leaves us free to create a uni-directional married pair between "Pork" and "Not Available". With this design, we're back to Hexawise wanting to pair "Steak" and "Not Available" and "Salmon" and "Not Available" since our uni-directional married pairs don't prohibit that, so we need to add our invalid pairs for those two pairings.

So our final solution for version 3 looks like:

Entree: Steak, Chicken, Salmon, Pork
Salad: Caesar, House, Not Available
Side: Fries, Green Beans, Carrots, Broccoli

Uni-directional Married Pair - Entree:Chicken → Salad:Not Available
Uni-directional Married Pair - Entree:Pork → Salad:Not Available
Invalid Pair - Entree:Steak ↔ Salad:Not Available
Invalid Pair - Entree:Salmon ↔ Salad:Not Available

Let's suppose the specifications for version 4 now hit our desks, and they specify that those that chose the "House" "Salad" get a choice of two dressings, "Ranch" or "Italian". We can then end up with a dependent value that's dependent on another dependent value. That's ok. We've got this!

Entree: Steak, Chicken, Salmon, Pork
Salad: Caesar, House, Not Available
Dressing: Caesar, Ranch, Italian, Not Available
Side: Fries, Green Beans, Carrots, Broccoli

Uni-directional Married Pair - Entree:Chicken → Salad:Not Available
Uni-directional Married Pair - Entree:Pork → Salad:Not Available
Uni-directional Married Pair - Entree:Chicken → Dressing:Not Available
Uni-directional Married Pair - Entree:Pork → Dressing:Not Available
Bi-directional Married Pair - Salad:Caesar ↔ Dressing:Caesar
Bi-directional Married Pair - Salad:Not Available ↔ Dressing:Not Available
Invalid Pair - Entree:Steak ↔ Salad:Not Available
Invalid Pair - Entree:Salmon ↔ Salad:Not Available
Invalid Pair - Entree:Steak ↔ Dressing:Not Available
Invalid Pair - Entree:Salmon ↔ Dressing:Not Available
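As a sanity check on the version 4 design, we can encode all ten constraints as a single predicate over the full Cartesian product (again a sketch for checking our logic, not Hexawise's implementation):

```python
from itertools import product

entrees = ["Steak", "Chicken", "Salmon", "Pork"]
salads = ["Caesar", "House", "Not Available"]
dressings = ["Caesar", "Ranch", "Italian", "Not Available"]
sides = ["Fries", "Green Beans", "Carrots", "Broccoli"]

def valid(entree, salad, dressing):
    if entree in ("Chicken", "Pork"):
        # Uni-directional married pairs: these entrees force Not Available.
        if salad != "Not Available" or dressing != "Not Available":
            return False
    else:
        # Invalid pairs: Steak and Salmon never pair with Not Available.
        if salad == "Not Available" or dressing == "Not Available":
            return False
    # Bi-directional married pairs: Caesar with Caesar, N/A with N/A.
    if (salad == "Caesar") != (dressing == "Caesar"):
        return False
    if (salad == "Not Available") != (dressing == "Not Available"):
        return False
    return True

valid_scenarios = [
    s for s in product(entrees, salads, dressings, sides) if valid(*s[:3])
]
print(len(valid_scenarios), "valid scenarios out of", 4 * 3 * 4 * 4)
```

The predicate admits 32 of the 192 raw combinations, and the pairwise tests for version 4 are drawn from that valid region.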

Hexawise tests can uncover any pairwise defects in the identified parameters for version 4 of our hypothetical menu ordering system in just 20 tests out of a possible 192. We just saved ourselves from executing 172 extra tests, or from missing some defects!


Related: How do I create an "Invalid Pair" to prevent impossible to test for Values from appearing together? - How do I prevent certain combinations from appearing using the "Married Pair" feature? - Hexawise Tip: Using Value Expansions and Value Pairs to Handle Dependent Values

By: Sean Johnson on Sep 9, 2013

Categories: Hexawise tips, Testing Strategies

Software testing is becoming more critical as the proportion of our economic activity that is dependent on software (and often very complex software) continues to grow. There seems to be a slowly growing understanding of the importance of software testing (though it still remains an under-appreciated area). My belief is that skilled software testers will be appreciated more with time and that the opportunities for expert software testers will expand.

Here are a few career tips for software testers.


  • Read what software testing experts have written. It's surprising how few software testers have read books and articles about software testing. Here are some authors (of books, articles and blogs) that I've found particularly useful. Feel free to suggest other software testing authors in the comments.

James Bach

Cem Kaner

Lisa Crispin

Michael Bolton

Lanette Creamer

Jerry Weinberg

Keith Klain

James Whittaker - posts while at Google and his posts with his new employer, Microsoft

Pradeep Soundararajan

Elisabeth Hendrickson

Ajay Balamurugadas

Matt Heusser

our own Justin Hunter


  • Join the community. You'll learn a lot as a lurker, and even more if you interact with others. Software Testing Club is one good option. Again, following the experts listed above is useful; engaging with them on their blogs and on Twitter is better. The software testing community is open and welcoming. In addition, see 29 Testers to Follow on Twitter and Top 20 Software Testing Tweeps. Interacting with other testers who truly care about experimenting, learning, and sharing lessons learned is energizing.


  • Develop your communication skills. Communication is critical to a career in software testing, as it is to most professional careers. Your success will be determined by how well you communicate with others. Four critical relationships are with software developers, your supervisors, users of the software, and other software testers. Unfortunately, in many organizations, managers and corporate structures restrict your communication with some of these groups. You may have to work within what you are allowed, but if you don't have frequent, direct communication with those four groups, you won't be as effective as you could be.
    Given the nature of software testing, two particular types of communication - describing problems ("what is it?") and explaining how significant a problem is ("why should we fix it?") - are much more common than in many other fields. Learning how to communicate these things clearly, without making people defensive, is of extra importance for software testers.
    A great deal of your communication will be in writing, and developing your ability to communicate clearly will prove valuable to your continued career growth.
    Writing your own blog is a great way to further your career. You will learn a great deal by writing about software testing, and you will develop your writing ability simply by writing more. You will also create a personal brand that grows your network of contacts, and it provides a way for those possibly interested in hiring you to learn more. We all know hiring is often done outside the formal job announcement and application process. By making a name for yourself in the software testing community you will increase the chances of being recruited for jobs.


  • Do what you love. Your career will be much more rewarding if you find something you love to do. You will most likely have parts of your job you could do without, but finding something you have a passion for makes all the difference in the world. If you don't have a passion for software testing, you are likely better off finding something you are passionate about and pursuing a career in that field.

Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven’t found it yet, keep looking. Don’t settle.

Steve Jobs


  • Practice and learn new techniques and ideas. James Bach provides such opportunities. The Hexawise tool also lets you easily and quickly try out different scenarios and learn by doing. We also provide guided training, help and webcasts explaining software testing concepts and how to apply them using Hexawise. We include a pathway that guides you through the process of learning about software testing, with learning modules, practical experience and tests along the way to make sure you learn what is important. And once you reach the highest level and become a Hexawise guru, we offer a community experience that allows everyone reaching that level to share their experiences and learn from each other.


  • Go to good software testing conferences. I've heard great things about CAST (by the Association for Software Testing), Test Bash, and Let's Test in particular. Going to conferences is energizing because while you're there, you're surrounded by the minority of people in this industry who are passionate about software testing. Justin, Hexawise founder and CEO, will be presenting at CAST next week, August 26-28 in Madison, Wisconsin.


Related: Software Testing and Career Advice by Keith Klain - Looking at the Empirical Evidence for Using Pairwise and Combinatorial Software Testing - Maximizing Software Tester Value by Letting Them Spend More Time Thinking

By: John Hunter on Aug 22, 2013

Categories: Software Testing

As I looked at our administrative dashboard for Hexawise.com today I was struck by how diverse our users are. I was looking at the most active users in the last week and the top 11 users were from 8 different countries: USA, France, Norway, India, Israel, Spain, Thailand and Canada. The only country with more than 1 was the USA with 2 users from Florida and 1 from Wisconsin. Brazil, Belgium and the Russian Federation were also represented in the top 25.

If you look at the top 25 users in the last month, in addition to the countries above (except Belgium) 3 more countries are represented: China, Netherlands and Malaysia.


Visitors to the Hexawise web site in the last month by country.


Looking at our web site demographics the top countries of our visitors were: United States, India, Philippines, Australia, Brazil, United Kingdom, Israel, Malaysia, Italy and Netherlands. In the last month we have had visitors from 84 countries.

It is exciting to see the widespread use of Hexawise across the globe. The feedback on the upgrades included in Hexawise 2.0 has been very positive. We continue to get more and more users, which makes us happy: we believe we have created a valuable tool for software testers, and it is exciting to get confirmation from users. Please share your experiences with us; knowing what you like is helpful, and we have made numerous enhancements based on user feedback.


Related: Empirical Evidence for Using Pairwise and Combinatorial Software Testing - Hexawise TV (webcasts on Hexawise and wise software testing practices) - Training material on Hexawise and software testing principles

By: John Hunter on Jul 17, 2013

Categories: Hexawise test case generating tool

The Hexawise Software Testing blog carnival focuses on sharing interesting and useful blog posts related to software testing.


  • T-Shaped Testers and their role in a team by Rob Lambert - "I believe that testers, actually – anyone, can contribute a lot more to the business than their standard role traditionally dictates. The tester’s critical and skeptical thinking can be used earlier in the process. Their other skills can be used to solve other problems within the business. Their role can stretch to include other aspects that intrigue them and keep them interested."

  • Testing triangles, pyramids and circles, and UAT by Allan Kelly - "Thus: UAT and Beta testing can only truly be performed by USERS (yes I am shouting). If you have professional testers performing it then it is in effect a form of System Testing.
    This also means UAT/Beta cannot be automated because it is about getting real life users to use the software and get their feedback. If users delegate the task to a machine then it is some other form of testing."



Photo by Justin Hunter, taken in South Africa.


  • Software Testing in Distributed Teams by Lisa Crispin - "Remote pairing is a great way to promote constant communication among multiple offices and to keep people working from home 'in the loop'.
    I often remote pair with a developer to specify test scenarios, do exploratory testing, or write automated test scripts. We use screen-sharing software that allows either person to control the mouse and keyboard. Video chat is best, but if bandwidth is a problem, audio will do. Make sure everyone has good quality headphones and microphone, and camera if possible."

  • Seven Kinds of Testers by James Bach - "I propose that there are at least seven different types of testers: administrative tester, technical tester, analytical tester, social tester, empathic tester, user, and developer. As I explain each type, I want you to understand this: These types are patterns, not prisons. They are clusters of heuristics; or in some cases, roles. Your style or situation may fit more than one of these patterns."

  • Which is Better, Orthogonal Array or Pairwise Software Testing? by John Hunter and Justin Hunter - "After more study I have concluded that: Pairwise is more efficient and effective than orthogonal arrays for software testing. Orthogonal Arrays are more efficient and effective for manufacturing, and agriculture, and advertising, and many other settings."

  • Experience Report: Pairing with a Programmer by Erik Brickarp - "We have different investigation methods. The programmer did low level investigations really well adding debug printouts, investigating code etc. while I did high level investigations really well checking general patterns, running additional scenarios etc. Not only did this make us avoid getting stuck by changing 'method' but also, my high level investigations benefited from his low level additions and vice versa."

  • I decided to evolve a faster test case by Ben Tilly - "I first wrote a script to run the program twice, and report how many things it found wrong on the second run. I wrote a second script that would take a record set, sample half of it randomly, and then run the first script.
    I wrote a third script that would take all records, run 4 copies of my test program, and then save the smaller record set that best demonstrated the bug. Wash, rinse, and repeat..."

  • Tacit and Explicit Knowledge and Exploratory Testing by John Stevenson - "It is time we started to recognise that testing is a tacit activity and requires testers to think both creatively and critically."

  • Tear Down the Wall by Alan Page - "There will always be a place for people who know testing to be valuable contributors to software development – but perhaps it’s time for all testing titles to go away?"
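One of the posts above compares pairwise testing with orthogonal arrays. To make the idea concrete, here is a minimal greedy sketch of pairwise test generation in Python. It is an illustration of the technique only, not Hexawise's actual algorithm, and the parameter names and values are purely hypothetical:

```python
from itertools import combinations, product

def pairwise(parameters):
    """Greedy pairwise generator: repeatedly pick the candidate test
    that covers the most not-yet-covered value pairs."""
    names = list(parameters)
    idx_pairs = list(combinations(range(len(names)), 2))

    def pairs_of(row):
        # The (parameter pair, value pair) combinations this row covers.
        return {(i, j, row[i], row[j]) for i, j in idx_pairs}

    all_rows = list(product(*parameters.values()))
    uncovered = set().union(*(pairs_of(r) for r in all_rows))
    tests = []
    while uncovered:
        best = max(all_rows, key=lambda r: len(pairs_of(r) & uncovered))
        tests.append(dict(zip(names, best)))
        uncovered -= pairs_of(best)
    return tests

# Hypothetical test inputs, purely for illustration.
params = {
    "Browser": ["Chrome", "Firefox", "Safari"],
    "OS": ["Windows", "macOS"],
    "Language": ["EN", "FR"],
}
suite = pairwise(params)
print(len(suite))  # 6 tests instead of the 12 in the full cartesian product
```

Even on this tiny example, every pair of values across every two parameters appears in at least one test while the suite stays half the size of exhaustive testing; the savings grow dramatically as parameters and values are added.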

By: John Hunter on Jul 10, 2013

Categories: Software Testing

Here is a wonderful webcast that provides a very quick, and informative, overview of rapid software testing.

Software testing is when a person is winding around a space searching that space for important information.


James Bach starts by providing a definition of software testing to set the proper thinking for the overview.

Rapid software testing is a set of heuristics [and a set of skills]. Heuristics live at the border of explicit and tacit knowledge... Heuristics solve problems when they are under the control of a skilled human... It takes skill to use the heuristics effectively - to solve the problems of testing. Rapid software testing focuses on the tester... Tacit skills are developed through practice.


Automated software tests are useful but limited. In the context of rapid software testing only a human tester can do software testing (automated checks are defined as "software checking"). See his blog post: Testing and Checking Refined.


Related: People are better at figuring out interesting ideas to test. Generating a highly efficient, maximally varied, minimally repetitive set of tests based on a given set of test inputs is something computer algorithms are more effective at than a person. - Hexawise Lets Software Testers Spend More Time Focused on Important Testing Issues - 3 Strategies to Maximize Effectiveness of Your Tests

By: John Hunter on Jul 2, 2013

Categories: Software Testing, Testing Strategies