Justin posted this to the Hexawise Twitter feed

cave-question

It sparked some interesting and sometimes humorous discussion.

cave-exploration

The parallels to software testing are easy to see. In order to learn useful information, we need to understand the purpose of the visit to the cave (or what we are seeking to learn in software testing). Normally the most insight can be gained by adjusting what you seek to learn based on what you find. As George Box said:

Always remember the process of discovery is iterative. The results of each stage of investigation generating new questions to be answered during the next.

Some "software testing" amounts to software checking: confirming that a specific known result is as we expect. This is important for confirming that the software continues to work as expected as we make changes to the code (due to the complexity of software, unintended consequences of changes could lead to problems if we didn't check). This can, as described in the tweet, take a form such as:

0:00 Step 1 - Turn on flashlight
(pre-determined steps all the way to)
4:59 Step N - Exit

But checking is only a portion of what needs to be done. Exploratory testing relies on the brain, experience and insight of a software tester to learn about the state of the software; it seeks to go beyond what pre-defined checking can illuminate. In order to explore you need to understand the aim of the mission (what is important to learn), and you need to be flexible enough to adjust based on your understanding of that mission and the details you discover as you take your journey through the cave or through the software.

Exploratory software testing will begin with ideas of which areas you wish to gain an understanding of, but it allows quite a bit of flexibility in the path that learning takes. The explorer will adjust based on what they learn and may well find themselves following paths they had not thought of prior to starting their exploration.

 

Related: Maximizing Software Tester Value by Letting Them Spend More Time Thinking - Rapid Software Testing Overview - Software Testers Are Test Pilots - What is Exploratory Testing? by James Bach

By: John Hunter on Apr 8, 2014

Categories: Exploratory Testing, Scripted Software Testing, Software Testing, Testing Checklists

Some of those using Hexawise use Gherkin as their testing framework. Gherkin is based on a given [a], when [b] --> then [c] format. The idea is that this helps make communication clear and ensures business rules are understood properly. Portions of this post may be a bit confusing for new Hexawise users; links are provided for more details on various topics. But if you don't need to create output for Gherkin and you are confused, you can just skip this post.

A simple Gherkin scenario: Making an ATM withdrawal

Given a regular account
  And the account was originally opened at Goliath National
  And the account has a balance of $500 
When using a Goliath National Bank ATM 
  And making a withdrawal of $200 
Then the withdrawal should be handled appropriately 

Hexawise users want to be able to specify the parameters (used in given and when statements) and then import the set of Hexawise generated test cases into a Gherkin style output.

In this example we will use the Hexawise sample test plan (Gherkin example), which you can access in your Hexawise account.

Below, I'll get into how to export the Hexawise-created test plans so they can be used to create Gherkin data tables (we do this ourselves at Hexawise).

In the then field we default to an expected value of "the withdrawal should be handled appropriately." This is something that may benefit from some explanation.

If we want to provide exact details about what happens for every variation of parameter values, the expected results for each test script have to be manually created. That creates a great deal of work with very little value. And it is expensive to maintain for the long term, as each of those results has to be updated every time the behavior changes. So, in general, it is best to use a "behaves as expected" default value and then provide extra details when worthwhile.

For some people, this way of thinking can be a bit difficult to take in at first and they have to keep reminding themselves how to best use Hexawise to improve efficiency and effectiveness.

enter-default-expected-value

To enter the default expected value, mouse over the final step in the auto scripts screen. When you mouse over that step you will see the "Add Expected Results" link. Click that and add your expected result text.

expected-value-entry

The expected value entered on the last step with no conditions (the when drop-down box is blank) will be the default value used for the export (and therefore the one imported into Gherkin).

In those cases when providing special notes to testers is deemed worth the extra effort, Hexawise has 2 ways of doing this. If a special expected value exists for the particular conditions in an individual test case, then that special expected value content will be exported (and therefore used for Gherkin).

Conditional expected results can be entered using the auto scripts feature.

Or we can use the requirements feature when we want to require a specific set of parameter values to be tested. If we choose 2-way coverage (the default, pairwise coverage), every pair of parameter values will be tested at least once.

But if we wanted a specific set of, say, 3 exact parameter values ([account type] = VIP, [withdrawal ATM] = bank-owned ATM, [withdrawal amount] = $600), then we need to include that as a requirement. Each required test script added also includes the option to include an expected result. The sample plan includes a required test case with those parameters and an expected result of "The normal limit of $400 is raised to $600 in the special case of a VIP account using a Goliath National Bank owned ATM."

So, the most effective way to use Hexawise to create a pairwise (or higher) test plan for Gherkin data tables is to have the then case be similar to "behaves as expected," and, when special expected result details are needed, to use the auto script or requirements features to include them. Doing so will result in the expected result entered for that special case being the value used for then in the Gherkin table.
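For reference, the end result of this approach is typically a Gherkin Scenario Outline whose Examples table holds the Hexawise-generated combinations. The sketch below is hypothetical (the column names are taken from the sample plan's parameters; the exact wording of your steps and expected results will differ):

```gherkin
Scenario Outline: Making an ATM withdrawal
  Given a <account type> account opened at Goliath National
  When making a withdrawal of <withdrawal amount> at a <withdrawal ATM>
  Then <expected result>

  Examples:
    | account type | withdrawal ATM | withdrawal amount | expected result                                |
    | regular      | bank-owned ATM | $200              | the withdrawal should be handled appropriately |
    | VIP          | bank-owned ATM | $600              | the normal limit of $400 is raised to $600     |
```

Most rows carry the default expected result; only the special cases (entered via auto scripts or requirements) get their own text.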

When you click the auto script button the tests are generated; you can download them using the export icon.

autoscripts-export

Then select the option to download as a CSV file.

script-export-options

You will download a zip file that you can then unzip to get 2 folders with various files. The file you want to use for this is the combinations.txt file in the csv directory.

The Ruby code we use to convert the commas to the pipes (|) used for Gherkin is:

#!/usr/bin/env ruby
require 'csv'

# Read the Hexawise export and emit each test as a pipe-delimited Gherkin table row.
tests = CSV.read("combinations.csv")
table = []
tests.each do |test|
  # Drop the first column (the test number) and join the remaining values with pipes.
  table << "| " + test[1..-1].join(" | ") + " |\n"
end
IO.write("gherkin.txt", table.join)

Of course, you can use whatever conversion method you wish; this is just what we use. See this explanation for a few more details on the process.

Now you have your Gherkin file to use however you wish. And as the code is changed over time (perhaps adding parameter value options, new parameters, etc.) you can just regenerate the test plan and export it. Then convert it and the updated Gherkin test plan is available.

 

Related: Create a Risk-based Testing Plan With Extra Coverage on Higher Priority Areas - Hexawise Tip: Using Value Expansions and Value Pairs to Handle Dependent Values - Designing a Test Plan with Dependent Parameter Values

By: John Hunter on Mar 27, 2014

Categories: Hexawise test case generating tool, Hexawise tips, Scripted Software Testing, Software Testing Efficiency, Testing Strategies

The test cases in a given test plan should be sufficiently similar and should not have wildly divergent paths depending on the value of parameters in a test case.

When you do find that your test case flow diverges too much, you often want to break your test plan into a few different test plans, so that you have a plan for each different kind of pass through the system.

A similar approach is to decrease the scope of your test plan a bit so that you end up with test cases that are all similar within the plan.

Lastly, let's say the flows aren't wildly divergent, but only slightly so. As a silly example let's say you were testing a recipe that varied based on the fruit selected.

Fruit: Apple, Grape, Orange, Banana

And then you wanted a step for how the peeling was done.

Peeling: By hand, By manual peeling tool, By automated peeler
Peeler type: Hand crank, Battery powered, AC powered

Now... our testing flow here has some divergence. Grapes and Apples don't get peeled in this recipe, so they never enter that flow. And Bananas are always peeled by hand so they only get a part of that flow. If this was just the tip of the iceberg of the divergence, we should create a test plan for Grapes and Apples and a different one for Oranges and Bananas.

But if this is the entire extent of the divergent flow, then we want to take advantage of N/A values and married and invalid pairs.

We update our parameter values for the peeling flow to include N/A.

Peeling: By hand, By manual peeling tool, By automated peeler, N/A
Peeler type: Hand crank, Battery powered, AC powered, N/A

We marry Grape and Apple (unidirectionally) to the two N/A's so they don't participate in the peeling flow. We marry Banana (unidirectionally) to "By hand" and the 2nd N/A so it has a partial and circumscribed pass through the peeling flow.

Lastly, using invalid pairs, we don't allow Orange to be paired with either N/A.
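To see what those constraints leave behind, here is a small plain-Ruby sketch (purely illustrative; this is not how Hexawise represents married or invalid pairs internally). It generates the full cartesian product and filters it with the recipe's rules: Grapes and Apples are never peeled, Bananas are always peeled by hand, and Oranges are always peeled.

```ruby
fruits   = ["Apple", "Grape", "Orange", "Banana"]
peelings = ["By hand", "By manual peeling tool", "By automated peeler", "N/A"]
peelers  = ["Hand crank", "Battery powered", "AC powered", "N/A"]

# Full cartesian product, filtered down to the combinations the constraints allow.
valid = fruits.product(peelings, peelers).select do |fruit, peeling, peeler|
  case fruit
  when "Apple", "Grape"  # married to both N/A's: never enter the peeling flow
    peeling == "N/A" && peeler == "N/A"
  when "Banana"          # married to "By hand" and the peeler-type N/A
    peeling == "By hand" && peeler == "N/A"
  when "Orange"          # invalid pairs: neither N/A is allowed
    peeling != "N/A" && peeler != "N/A"
  end
end
```

Apple, Grape, and Banana each survive with exactly one peeling combination, while Orange keeps the full 3 x 3 peeling/peeler grid, which is the "slight flow variation" the constraints were meant to express.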

That's how a slight flow variation can be accommodated. Please comment with any questions about any of these approaches to your problem.

 

Related: 3 Strategies to Maximize Effectiveness of Your Tests - How Not to Design Pairwise Software Tests - How to Model and Test CRUD Functionality

By: Sean Johnson on May 21, 2013

Categories: Hexawise tips, Scripted Software Testing, Software Testing, Testing Strategies

84 percent coverage in 20 tests

Hexawise test coverage graph showing 83.9% coverage in just 20 tests

 

Among the many benefits Hexawise provides is creating a test plan that maximizes test coverage with each new scenario tested. The graph above shows that after just 20 tests, 83.9% of the test combinations have been tested. Read more about this in our case study of a mortgage application software test plan. Just 48 test combinations are needed to test every valid pair (3.7 million possible test combinations exist in this case). If you are lost now, this video may help.

The coverage achieved by the first few tests in the plan will be quite high (and the graph line will point up sharply) then the slope will decrease in the middle of the plan (because each new test will tend to test fewer net new pairs of values for the first time) and then at the end of the plan the line will flatten out quite a lot (because by the end, relatively few pairs of values will be tested together for the first time).

One of the benefits Hexawise provides is making that slope as steep as possible. The steeper the slope, the more efficient your test plan is. If you repeat the same pairs (and triples, and so on) while passing up the chance to test untested ones, you will have to create and run far more tests than if you intelligently create a test plan. With many interactions to test, it is far too complex to manually derive an intelligent test plan. A combinatorial testing tool, like Hexawise, that maximizes test plan efficiency is needed.

For any set of test inputs, there is a finite number of pairs of values that could be tested together (and that can be quite a large number). The coverage chart answers, after each test: what percentage of the total number of pairs (or triples, etc.) that could be tested together have been tested together so far?
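The percentage behind that chart is simple to compute. Here is a tiny Ruby sketch with made-up parameters (this only tracks the running pair coverage of a given test order; Hexawise's actual algorithm also chooses which tests to generate and in what order):

```ruby
require 'set'

# Hypothetical parameter domains, for illustration only.
params = [
  %w[Apple Banana],    # fruit
  %w[Chrome Firefox],  # browser
  %w[Visa MC],         # card
]

# Every pair of parameters contributes |domain_a| * |domain_b| possible value pairs.
total_pairs = params.combination(2).sum { |a, b| a.size * b.size }  # 12 here

# The pairs one test covers, tagged with parameter indices so equal
# values in different parameters stay distinct.
def pairs_in(test)
  test.each_with_index.map { |value, i| [i, value] }.combination(2).to_a
end

tests = [
  %w[Apple Chrome Visa],
  %w[Apple Firefox MC],
  %w[Banana Chrome MC],
  %w[Banana Firefox Visa],
]

covered = Set.new
tests.each_with_index do |test, i|
  covered.merge(pairs_in(test))
  puts format("After test %d: %.1f%% of pairs covered",
              i + 1, 100.0 * covered.size / total_pairs)
end
# Prints 25.0%, 50.0%, 75.0%, 100.0% - each test covers only new pairs,
# so these 4 tests reach full pairwise coverage of the 2*2*2 = 8 combinations.
```

With well-chosen tests the early percentages climb steeply, exactly the front-loaded curve described above; a poorly ordered or redundant plan covers the same pairs repeatedly and the curve flattens much sooner.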

The Hexawise algorithms help testers find as many defects as possible in as few tests as possible. At each step of each test case, the algorithm chooses a test condition that will maximize the number of pairs covered for the first time in that test case (or the maximum number of triplets or quadruplets, etc., based on the thoroughness setting defined by the user). Allpairs (AKA pairwise) is a well known and easy to understand test design strategy. Hexawise lets users create pairwise sets of tests that cover every pair, but it also allows test designers to generate far more thorough sets of tests (3-way to 6-way coverage). This lets users "turn up the coverage dial" and generate tests that cover every single possible triplet of test inputs together at least once (or every 4-way, 5-way, or 6-way combination).

Note that the coverage ratio Hexawise shows is based on the factors entered as items to be tested: it is not a code coverage percentage. Hexawise sorts the test plan to front-load the coverage of value tuples, not the coverage of code paths. Coverage of code paths ultimately depends on how good a job the test designer did at extracting the relevant parameters and values of the system under test. You would expect some loose correlation between coverage of identified tuples and coverage of code paths in most typical systems.

If you want to learn more about these concepts, I would recommend Scott Sehlhorst's articles on pairwise and combinatorial test design. They are some of the clearest introductory articles about pairwise and combinatorial testing that I have seen. They also contain some interesting data points related to the correlation between 2-way / allpairs / pairwise / n-way coverage (in Hexawise) and the white box metrics of branch coverage, block coverage and code coverage (not measurable by Hexawise).

In Software testing series: Pairwise testing, for example, Scott includes these data points:

 

  • We measured the coverage of combinatorial design test sets for 10 Unix commands: basename, cb, comm, crypt, sleep, sort, touch, tty, uniq, and wc... The pairwise tests gave over 90 percent block coverage.

 

  • Our initial trial of this was on a subset of Nortel's internal e-mail system where we were able to cover 97% of branches with less than 100 valid and invalid test cases, as opposed to 27 trillion exhaustive test cases.

 

  • A set of 29 pair-wise... tests gave 90% block coverage for the UNIX sort command. We also compared pair-wise testing with random input testing and found that pair-wise testing gave better coverage.

 

Related: Why isn't Software Testing Performed as Efficiently and Effectively as it could be? - Video Highlight Reel of Hexawise – a pairwise testing tool and combinatorial testing tool - Combinatorial Testing, The Quadrant of Massive Efficiency Gains

Specific guidance on how to view the percentage of coverage graph for the test plan in Hexawise:

 

When working on your test plan in Hexawise, to make the checklist visible, click on the two downward arrows shown in the image:

How-To Progress Checklists-2 inline

Then you'll want to open up the "Advanced" list. So you might need to click here:

Advanced How-To Progress Checklist inline

Then the detailed explanation will begin when you click on "Analyze Tests"

Decreasing Marginal Returns inline

 

This post is adapted (and some new content added) from comments posted by Justin Hunter and Sean Johnson.

By: John Hunter on Feb 3, 2012

Categories: Combinatorial Software Testing, Combinatorial Testing, Efficiency, Multi-variate Testing, Pairwise Software Testing, Pairwise Testing, Scripted Software Testing, Software Testing, Software Testing Efficiency

a1ValueOfChecklists-1pdfpage46of95

 

I highly recommend this presentation by Cem Kaner (available here as a pdf download of slides). It is provocative, funny, and insightful. In it, Cem Kaner makes a strong case for using checklists (and mercilessly derides many aspects of using completely scripted tests). Cem Kaner, as I suspect most people reading this already know, is one of the leading lights of software testing education. He is a professor of computer sciences at Florida Institute of Technology and has contributed enormously to software testing education by writing Testing Computer Software ("the best selling software testing book of all time"), founding the [Center for Software Testing Education & Research](http://www.testingeducation.org/BBST/), and making an excellent free course on Black Box Software Testing available online.

Here are a couple of my favorite slides from the presentation.

 

a2ValueOfChecklists-1pdfpage82of95

 

a1ValueOfChecklists-2pdfpage83of95

 

My own belief is that the presentation is very good and makes the points it wants to quite well. If I have a minor quibble with it, it is that in doing such a good job at laying out the case for checklists and against scripted testing, the presentation - almost by definition/design - does not go into as much detail as I would personally like to see about a topic that I think is extremely important and not written about enough; namely, how practitioners should use an approach that blends the advantages of scripted tests (that can generate some of the huge efficiency benefits of combinatorial testing methods for example) and checklist-based Exploratory Testing (which have the advantages pointed out so well in the presentation). A "both / and" option is not only possible; it is desirable.


Credit for bringing this presentation to my attention: Michael Bolton ([the testing expert](http://www.developsense.com/blog.html), of course, not the singer) posted a link to this presentation. Thanks again, Michael. Your enthusiastic recommendation to pick up boxed sets of the BBC show Connections was also excellent; the presenter of Connections is like a slightly tipsy genius with ADHD who possesses an incredible grasp of history, an encyclopedic knowledge of quirky scientific developments, and a gift for story-telling. I like how your mind works.

By: Justin Hunter on Nov 4, 2009

Categories: Scripted Software Testing, Software Testing Efficiency, Software Testing Presentations, Testing Case Studies, Testing Checklists, Testing Strategies