Begin with the “Goldilocks Rule” in mind to identify how much detail is appropriate.


If your tests cover a large scope, as in a set of end-to-end tests of a process, focus first on entering only the most important elements of that process into Hexawise. If your tests cover a small scope, as in tests that focus on a few items on a single screen, you will want to include more detail in Hexawise. Learn more about the Goldilocks Rule.

 

Imagine explaining your System Under Test to someone’s mother. Start with 5 things that may change in each test

Suggestion: do not start this process with detailed Requirements or Technical Specifications. Instead, start with your basic common sense description of some things that would change from test to test.

  • If you were explaining the application to someone’s mother, how would you explain what it does in 2 minutes?
  • What kinds of things would be important to vary from test to test?
    • Hardware and software configurations?
    • User types?
    • Different actions that a user might take?
  • Identify 5 things that change from test to test and turn those 5 things into your first Parameters (a sketch of what this first pass might look like follows this list).
    • How might those things change? Add one or more Values for each Parameter.
    • At this point general descriptions might be fine (e.g., “SUVs” or “Economy cars” rather than “Toyota Corolla”).
    • Remember that, where possible, you should avoid creating long lists of values.
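For example, here is a minimal sketch of such a first pass, written as a Ruby hash; the feature (a car-rental booking flow) and all parameter and value names are invented for illustration:

# Five starting Parameters, each with a handful of broad Values
parameters = {
  "Vehicle Class"   => ["Economy car", "SUV", "Luxury sedan"],
  "Customer Type"   => ["New user", "Returning user"],
  "Browser"         => ["Chrome", "Firefox", "Safari"],
  "Payment Method"  => ["Credit card", "Voucher"],
  "Pickup Location" => ["Airport", "Downtown"]
}

Notice that the values are broad categories rather than specific models or account numbers; details can be added later if the tests turn out to need them.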

 

Create a draft set of tests, assess obvious big gaps in the tests, and start filling them.

What obvious types of scenarios are missing? Add parameters and values as necessary to fill those big, obvious holes.

 

Create tests again, assess whether you’re covering necessary business rules and requirements.

If your tests are not yet testing a business rule or Requirement that you want to test:

  • Consider adding a Parameter or Value
  • Consider adding a specific combination of values to be tested together in the requirements tab

 

Reduce scope if the test plan is too complex

  • Consider cutting the scope of the plan (e.g., create two different plans with largely similar parameters – one for regular users and one for special users – instead of one big plan which tries to “do it all.”)
  • Consider changing the way you are describing “hard coded” values. Instead of “iPhone 4S with International roaming” (which might not be a valid option after the first part of the draft test case suggests a transaction for a phone for corporate customers from a Northern location responding to the special holiday offer), consider using descriptive parameters and values along the lines of “from the phone options available at this point in the test case, select an option that meets as many of these conditions as you can…”

 

Consider adding additional details into your plan from other sources.

Other sources for test ideas could be:

J Michael Hunter’s “You Are Not Done Yet”
Elisabeth Hendrickson’s Test Heuristics Cheat Sheet
Hans Buwalda’s Soap Opera Testing

By: John Hunter on Jun 17, 2014

Categories: Combinatorial Software Testing, Software Testing

The Hexawise Software Testing blog carnival focuses on sharing interesting and useful blogposts related to software testing.

 

  • The Zen of Application Test Suites by Curtis “Ovid” Poe – “This document is about testing applications — it’s not about how to write tests. Application test suites require a different, more disciplined approach than library test suites. I describe common misfeatures experienced in large application test suites and follow with recommendations on best practices.”

  • The Software Tester’s Easter Egg Hunt by Ben Austin – “The testing industry will benefit greatly if more people follow Holland’s example and prioritize critical thinking over the ability to write test scripts. That said, it is equally important to recognize that scripting tools, when used in the situations they’re intended for, can be great time-savers that can enable more thoughtful, context-driven exploratory testing.”

  • “Grapefruit Juice Bugs” – A New Term for a Surprisingly Common Type of Surprising Bugs by Justin Hunter – “Like grapefruit juice’s impact on prescription drugs, software testing involves critical interactions between different parts of the system. And risks exist when these different parts interact with one another.”

  • Testing at Airbnb by Lou Kosak – “Building good habits around testing is hard, especially at an established company. One of the biggest lessons I’ve learned this year is that as long as you have a team that’s open to the idea, it’s always possible to start testing (even if you’ve got a six year old monolithic Rails app to contend with). Once you have a decent test suite in place, you can actually start refactoring your legacy code, and from there anything is possible. The trick is just to get started.”

  • Disruptive Testing – James Bach Interviewed by Rodney Urquhart – “the future I am helping to build is about systematically training up skilled testers, some of whom but not all with coding skills, so that a small number of testers can do– or coordinate to be done– all the testing that a large project might need. A good future for testing would be one with a lot fewer “testers” but each one of those testers being passionate about his craft.”

  • The role of a Test Manager in agile by Katrina Clokie – “In a cross-skilled team, the agile tester must ensure that the quality of testing is good regardless of who in the team is completing it. The tester becomes the spokesperson for collaborative testing practices, and provides coaching via peer reviews or workshops to those without a testing background.”

  • Speaking to Your Business Using Measurements by Justin Rohrman – “In my experience, no one measure did a great job of telling the story about my software ecosystem. I’ve been deceived by groups of measures, too, because I misunderstood their weaknesses. If we are so easily deceived by measurements, imagine what happens when we send them off to others who need quick, high-level information.”

 


Photo in Ranakpur India by Justin Hunter.

 

  • Shine a light by Rob Lambert – “Tours, personas and a variety of other test ideas give you a way of re-shining your light. Use these ideas to see the product in different ways, but never forget that it’s often time that is against you. And time is one of the hardest commodities to argue for during your testing phase.”

  • Using mind-mapping software as a visual test management tool by Aaron Hodder – “I like to employ a style of software testing that emphasises the personal freedom and responsibility of the individual tester to continually optimise the quality of his/her work by treating test-related learning, test design, test execution and test result interpretation as mutually supportive activities that run in parallel throughout the project. When performed by a skilled tester, this approach yields valuable and consistent results.”

  • Helpful Tips for Hiring Better Testers by Isaac Howard – “I looked at the good testers around me and tried to identify the “whys” of their success. All of them were driven to learn and capable of adapting to change. If they didn’t know a tool or a tech, they learned it. Because under the hood, testing is learning and relearning software everyday. The following are seven changes I made to my interviewing process.”

By: John Hunter on Jun 10, 2014

Categories: Software Testing

Here at Hexawise, we aim to make the process of testing easier and more efficient. One way in which we've done this is by promoting our test design tool. But we were seeing people make test plans too complicated. So we came up with an easy way to create super powerful test plans that stay simple and effective.

What did we come up with, you ask?

'Start with a verb and a noun.'

The idea of using a verb and a noun to describe the appropriate scope for a set of tests has been used by Eduardo Miranda. As he points out, if you find yourself tempted to add testing ideas (e.g., explore the help files in depth) that do not easily fit into your chosen verb and noun (e.g., "book a flight"), that can be a useful red flag; accordingly, you might want to exclude those new test ideas that "don't fit" from the test scenarios in your current scope of tests.

This strategy is useful for two main reasons.

One - It is a great jumping-off point from which to create interesting scenarios.

The tester is forced to understand and question their system under test. For some, this is a radically different idea of what their job is. We typically hear something like "You mean, I can't just 'validate the file exists'?"

Two - The 'verb and noun' strategy requires that you remain specific to one common goal.

Test plans get bloated when you start incorporating disparate ideas. This is commonly seen when testing a system that would be described as 'Apply for Loan' and you start adding in ideas to 'explore help files.' While exploring the help files will be necessary at some point, it probably will not trigger results needed for successfully testing your application process.

Now, let's explore this first reason:

You start by choosing any verb and noun.

Verb and Noun 1

Then you have to create questions to understand that verb and noun.

Newspaper questions

Then answer your questions. This is important. If you can't answer them, how could you possibly test the system?

Answer the questions

These questions and answers lend themselves quite well to use as test steps or scenario planning. Below you can see how well they imported into Hexawise.

Add those into Hexawise

Generating tests is the only thing left to do before testing.

Click on Create Tests

Hopefully you've enjoyed this exploration on making simple and effective tests using the 'Verb and Noun' process.

By: Justin Hunter on Jun 4, 2014

Categories: Combinatorial Software Testing

In general, all scripts (test cases) should have the same steps when using the Hexawise Auto-Scripts feature.

An important consideration in a Hexawise test plan is, "Can I test all these test cases with the same number of steps?" If the answer is no, then you should probably reconsider the scope of your test plan, as it's likely you're trying to include too much testing scope in a single plan.

Another rule of thumb to determine what should be included in the scope of one set of Hexawise tests is to think about a verb and a noun. "Apply for" could be your verb. "A loan" might be your noun. If you have a lot of permutations in which a person could apply for a loan, those scenarios could all fall within the scope of that set of tests. If, however, you started to think about testing the contents of help files, it may well be useful to include those "help file-related" tests in a different set of tests.

Each test case in a single Hexawise test plan will often test the same functionality, but with different permutations. For example, those permutations might include variations of environment (IE or Firefox, using mouse or keyboard, etc.), user (new user, normal user, VIP customer, admin), data (Florida, New York, under 18, over 18) and actions (used the dropdown, keyed it in manually, clicked the confirmation checkbox). The parenthetical examples I provided here come from testing end user software, but the same applies to other types of systems.

If you're testing the same scope (same verb & noun), but identifying all the possible variations, your Auto-Script steps will be the same for all test cases. What might change, of course, is what the tester should expect to happen. Should they see an error dialog or a confirmation dialog? Should the border be green or blue? Should the user get an "X" email or a "Y" email? But we'll save that for another discussion.

By: Sean Johnson on May 7, 2014

Categories: Combinatorial Testing, Hexawise tips


What Does it Mean?

One of the more common support inquiries we receive is when a Hexawise generated test case includes "No possible value" for a parameter. The first time you see this, it can be a bit unclear what it means and what you can do to address it.

A "no possible value" in a test case is telling you the test case is providing coverage for a needed pair in some other parameters, and in light of that needed pair your invalid and married pairs are then leaving then no value allowed for the parameter with "no possible value". That sounds confusing, but an example is much easier to understand.

An Example

Let's say we have a test plan with just 3 parameters, each with 2 values:

Fruit: Apple, Pear
Car: Toyota, Dodge
Dog: Collie, Mutt

And let's further suppose we have 2 invalid pairs:

If Fruit = Apple then Car cannot = Toyota
If Car = Dodge then Dog cannot = Mutt

This all seems simple, but a hidden problem lurks in this simple setup. To create 2-way coverage, Hexawise will ensure you've paired every parameter value with every other parameter value (unless a constraint says it shouldn't be paired), which in this case means that Hexawise will necessarily pair Fruit as Apple with Dog as Mutt in at least one test case, since that pairing could be the source of a bug. You probably already see the problem!

In the test case that has Fruit as Apple and Dog as Mutt we need to have a value for the Car parameter. You can't have Car as Toyota, because Apple can't be paired with Toyota, and you can't have Car as Dodge, because Mutt can't be paired with Dodge. So what value can Hexawise provide for Car in this test case? It has no value to provide, so it provides "no possible value".
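To make the conflict concrete, here is a minimal brute-force sketch in Ruby (a plain enumeration, not the Hexawise algorithm): it lists every combination allowed by the two invalid pairs, then looks for any test that contains both Apple and Mutt.

#!/usr/bin/env ruby

fruits = %w[Apple Pear]
cars   = %w[Toyota Dodge]
dogs   = %w[Collie Mutt]

# Keep only the combinations allowed by the two invalid pairs
valid = fruits.product(cars, dogs).reject do |fruit, car, dog|
  (fruit == "Apple" && car == "Toyota") || (car == "Dodge" && dog == "Mutt")
end

# Hexawise must pair Apple with Mutt somewhere; is any complete test valid?
apple_mutt = valid.select { |fruit, _car, dog| fruit == "Apple" && dog == "Mutt" }
puts apple_mutt.empty? ? "No possible value for Car when Fruit = Apple and Dog = Mutt" : apple_mutt.inspect

Running it prints the "no possible value" message: both candidate values for Car are excluded by one constraint or the other.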

That's why you can get test cases with "no possible value". Sometimes you can leave them be, sometimes you might want to introduce a "N/A" value for a parameter, and sometimes your invalid and married pairs may need a bit of adjusting. Generally, given the real context of your actual test plan, it is clear which path to take to resolve them.

Dealing With "No Possible Value"

Let's use a more realistic flight reservation example, to show how the context of your real tests indicates what you should do to resolve instances of "no possible value".

Our flight reservation example test plan has these 5 parameters:

Destination: USA, China, Australia
Class: First, Business, Economy
Reserve a Car: Yes, No
Type of Car: Luxury, Economy
Payment Method: Credit Card, Frequent Flier Miles, Upgrade Coupons

Without any constraints, we're going to get test cases that pair Reserve a Car as No with Type of Car as Luxury. This makes no sense, and even if a tester were just to ignore it, it hides the fact that the nonsensical test case may be the only test case where we attempt to pay for a Luxury car with Frequent Flier Miles. If this pairing led to a bug, we'd miss the bug if we just ignored the car portion of the generated test case, so it's much better to use constraints to eliminate having Type of Car as Luxury when Reserve a Car is No.

We have options for how to constrain this, but let's suppose we create the following 2 uni-directional married pairs:

When Type of Car = Luxury then Reserve a Car = Yes
When Type of Car = Economy then Reserve a Car = Yes

We've taken both our values for Type of Car and married them to Reserve a Car as Yes. Hexawise is still duty-bound to pair Reserve a Car as No with things like Destination as USA and Payment Method as Credit Card. For these test cases there won't be a valid value for Type of Car, so it will get "no possible value". This leads us to our first option for dealing with no possible values.

Option 1: Do nothing! There really is no possible value in that case, and you're OK with it.

If we use option 1, the "no possible value" is intentional. Option 1 is the simplest, but not always our best option. Not every "no possible value" is intentional; many, in fact, are unintentional, the result of inconsistent or missing constraint logic. A "no possible value" therefore tends to be a "quality smell" in a test plan. Meaning... you're not sure something is rotten when you see it, but it sure smells like something might be rotten. So that brings us to another option for handling intentional no possible values.

Option 2: Introduce a "Not Applicable" value.

We can add a third value to Type of Car.

Type of Car: Luxury, Economy, N/A

Now, whenever Reserve a Car is No there will be just 1 value left for Type of Car, N/A, so all those cases of "no possible value" will become "N/A". Problem fixed? Almost, but not quite! Hexawise is now duty-bound to pair Type of Car as N/A with Reserve a Car as Yes. To avoid this new nonsense, we add an invalid pair:

When Reserve a Car = Yes then Type of Car cannot = N/A

Option 1 and option 2 work when you discover the source of the "no possible value" is intentional. But what about when you discover it's not intentional? In that case, the "quality smell" has led us to something that truly is rotten!

Let's suppose we have Class as Business only on international, not domestic flights. Easy enough. Let's add an invalid pair to reflect this business rule.

When Destination = USA then Class cannot = Business

Let's also suppose that an Upgrade Coupon can only be used for Business class, not Economy or First Class. Also easy. Let's add a uni-directional married pair for this:

When Payment Method = Upgrade Coupon then Class = Business

Have you spotted the trouble yet? Hexawise is duty-bound to pair Destination as USA with Payment Method as Upgrade Coupon, since there could be a bug caused by that pairing, and if there is, you want to be sure to find it. In the test case that has this pairing, Hexawise is going to have "no possible value" for Class!

Option 3: Correct your constraint logic.

In this case, our constraint logic is incomplete. We need an invalid pair:

When Destination = USA then Payment Method cannot = Upgrade Coupon

This relieves Hexawise of its duty to pair Destination as USA with Payment Method as Upgrade Coupon and eliminates the "no possible value".

Mysterious "No Possible Value"

For cases of an unintentional "no possible value", where you need to use option 3 to correct an inconsistency in your constraint logic, it's not always clear where the inconsistency is. In fact, it can be downright mysterious. The more constrained your plan is, the harder it can be to find the root cause.

My best advice for tracking down the cause of a "no possible value" in complicated cases is to stay patient and stick with it. Practice makes you better at it. Here are some additional tips:

  • Look at each test case with "no possible values" in isolation, don't try to analyze more than 1 at once
  • Start with the test case with the fewest "no possible values" and fix the issues your analysis turns up, then regenerate your tests and repeat until you have none left
  • You need pencil and paper! Write out the test case that you're analyzing on paper, then do the analysis from the "Define Inputs" page where it's easiest to see all the possible values for each parameter
  • Take advantage of the hover highlighting of applicable value pairs in the Define Inputs screen. Collapse every section in the left panel except the Paired Values (constraints) so you can see as many value pairings at one time as possible
  • If you get completely stuck, reach out to a more experienced colleague or ask Hexawise for help

Doing this analysis can be tricky, so having Hexawise do the analysis for you instead is a great idea. We'd like to have a tooltip with an explanation for the no possible value. Since it can be tricky for us humans to determine why a particular no possible value exists in complicated cases, the algorithm to do the same is also tricky to get right, but we're working toward having a tooltip that explains the cause of the "no possible value", and we are going to be very excited when this feature is ready. We might accidentally create a self-aware AI that enslaves the humans in the process though. If we do, don't say we didn't warn you.

In addition to an automated explanation of where the "no possible value" came from, we also know that prevention is better than a cure, so we're working to identify these cases as soon as the constraints are entered so we can prompt you in the moment to correct the issue. We already do this for some of the simpler causes of "no possible value", you may have been prompted by Hexawise already. We're working on Hexawise being able to detect more complicated causes so we can prevent more of these cases early.

In the end, always keep in mind that every "no possible value" is simply a conflict between a parameter value pairing Hexawise is duty-bound to provide to ensure you get 100% coverage, and the constraints you've applied to the plan in the form of married pairs and invalid pairs. With that in mind, have fun "no possible value" hunting.


By: Sean Johnson on May 1, 2014

Categories: Combinatorial Testing, Constraints, Hexawise test case generating tool, Hexawise tips

Justin posted this to the Hexawise Twitter feed

cave-question

It sparked some interesting and sometimes humorous discussion.

cave-exploration

The parallels to software testing are easy to see. In order to learn useful information we need to understand what the purpose of the visit to the cave is (or what we are seeking to learn in software testing). Normally the most insight can be gained by adjusting what you seek to learn based on what you find. As George Box said:

Always remember the process of discovery is iterative. The results of each stage of investigation generate new questions to be answered during the next.

Some "software testing" amounts to software checking or confirming that a specific known result is as we expect. This is important for confirming that the software continues to work as expected as we make changes to the code (due to the complexity of software unintended consequences of changes could lead to problems if we didn't check). This can, as described in the tweet, take a form such as 0:00 Step 1- Turn on flashlight with pre-determined steps all the way to 4:59 Step N- Exit But checking is only a portion of what needs to be done. Exploratory testing relies on the brain, experience and insight of a software tester to learn about the state of the software; exploratory testing seeks to go beyond what pre-defined checking can illuminate. In order to explore you need to understand the aim of the mission (what is important to learn) and you need to be flexible as you learn to adjust based on your understanding of the mission and the details you learn as you take your journey through the cave or through the software. Exploratory software testing will begin with ideas of what areas you wish to gain an understanding of, but it will provide quite a bit of flexibility for the path that learning takes. The explorer will adjust based on what they learn and may well find themselves following paths they had not thought of prior to starting their exploration.

 

Related: Maximizing Software Tester Value by Letting Them Spend More Time Thinking - Rapid Software Testing Overview - Software Testers Are Test Pilots - What is Exploratory Testing? by James Bach

By: John Hunter on Apr 8, 2014

Categories: Exploratory Testing, Scripted Software Testing, Software Testing, Testing Checklists

Some of those using Hexawise use Gherkin as their testing framework. Gherkin is based on a given [a], when [b] --> then [c] format. The idea is that this helps make communication clear and makes sure business rules are understood properly. Portions of this post may be a bit confusing for new Hexawise users; links are provided for more details on various topics. But if you don't need to create output for Gherkin and you are confused, you can just skip this post.

A simple Gherkin scenario: Making an ATM withdrawal

Given a regular account
  And the account was originally opened at Goliath National
  And the account has a balance of $500 
When using a Goliath National Bank 
  And making a withdrawal of $200 
Then the withdrawal should be handled appropriately 

Hexawise users want to be able to specify the parameters (used in given and when statements) and then import the set of Hexawise-generated test cases into a Gherkin-style output.

In this example we will use the Hexawise sample test plan (Gherkin example), which you can access in your Hexawise account.

I'll get into how to export Hexawise-created test plans so they can be used to create Gherkin data tables below (we do this ourselves at Hexawise).

In the then field we default to an expected value of "the withdrawal should be handled appropriately." This is something that may benefit from some explanation.

If we want to provide exact details on what happens for every variation of parameter values, each of those expected results has to be manually created. That creates a great deal of work that has very little value, and it is an expensive way to manage for the long term, as each of those details has to be updated every time the system changes. So in general using a "behaves as expected" default value is best, and then providing extra details when worthwhile.

For some people, this way of thinking can be a bit difficult to take in at first and they have to keep reminding themselves how to best use Hexawise to improve efficiency and effectiveness.

enter-default-expected-value

To enter the default expected value, mouse over the final step in the auto scripts screen. When you mouse over that step you will see the "Add Expected Results" link. Click that and add your expected result text.

expected-value-entry

The expected value entered on the last step with no conditions (the when drop-down box is blank) will be the default value used for the export (and therefore the one imported into Gherkin).

In those cases when providing special notes to the tester is deemed worth the extra effort, Hexawise has 2 ways of doing this. In the event a special expected value exists for the particular conditions in the individual test case, the special expected value content will be exported (and therefore used for Gherkin).

Conditional expected results can be entered using the auto scripts feature.

Or we can use the requirements feature when we want to require a specific set of parameter values to be tested. If we choose 2-way coverage (the default, pairwise coverage), every pair of parameter values will be tested at least once.

But if we want a specific set of, say, 3 exact parameter values ([account type] = VIP, [withdrawal ATM] = bank-owned ATM, [withdrawal amount] = $600), then we need to include that as a requirement. Each required test script added also includes the option to include an expected result. The sample plan includes a required test case with those parameters and an expected result of "The normal limit of $400 is raised to $600 in the special case of a VIP account using a Goliath National Bank owned ATM."

So, the most effective way to use Hexawise to create a pairwise (or higher) test plan for Gherkin data tables is to have the then case be similar to "behaves as expected," and, when special expected result details are worthwhile, to use the auto script or requirements features to include them. Doing so will result in the expected result entered for that special case being the value used in the Gherkin table for then.
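Putting the pieces together, the exported table plugs into a Gherkin scenario outline. Here is a sketch using the ATM example; the column names are inferred from the scenario above and would depend on your plan's actual parameters:

Scenario Outline: Making an ATM withdrawal
  Given a <account type> account
    And the account was originally opened at <original bank>
    And the account has a balance of <balance>
  When using a <withdrawal ATM> ATM
    And making a withdrawal of <withdrawal amount>
  Then <expected result>

  Examples:
    | account type | original bank    | balance | withdrawal ATM              | withdrawal amount | expected result                                |
    | regular      | Goliath National | $500    | Goliath National Bank       | $200              | the withdrawal should be handled appropriately |
    | VIP          | Goliath National | $500    | Goliath National Bank owned | $600              | The normal limit of $400 is raised to $600 in the special case of a VIP account using a Goliath National Bank owned ATM |

The first row reflects the default expected result; the second comes from the required test case described above.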

When you click the auto script button the tests are generated; you can download them using the export icon.

autoscripts-export

Then select the option to download as a csv file.

script-export-options

You will download a zip file that you can then unzip to get 2 folders with various files. The file you want to use for this is the combinations.txt file in the csv directory.

The Ruby code we use to convert the commas to the pipes (|) used for Gherkin is:

#!/usr/bin/env ruby
require 'csv'

# Read the exported combinations file (adjust the name/path to match your export)
tests = CSV.read("combinations.csv")

table = []
tests.each do |test|
  # Drop the first column (the test case number) and join the rest with pipes
  table << "| " + test[1..-1].join(" | ") + " |\n"
end

IO.write("gherkin.txt", table.join)
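For the simple scenario above, each row of the resulting gherkin.txt would look something like this (the column order follows your plan's parameter order):

| regular | Goliath National | $500 | Goliath National Bank | $200 |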

Of course, you can use whatever method of converting the format you wish; this is just what we use. See this explanation for a few more details on the process.

Now you have your Gherkin file to use however you wish. And as the code is changed over time (perhaps adding parameter value options, new parameters, etc.) you can just regenerate the test plan and export it. Then convert it and the updated Gherkin test plan is available.

 

Related: Create a Risk-based Testing Plan With Extra Coverage on Higher Priority Areas - Hexawise Tip: Using Value Expansions and Value Pairs to Handle Dependent Values - Designing a Test Plan with Dependent Parameter Values

By: John Hunter on Mar 27, 2014

Categories: Hexawise test case generating tool, Hexawise tips, Scripted Software Testing, Software Testing Efficiency, Testing Strategies

Coining a New Term

I'm coining a new term today, "grapefruit juice bugs."

My inspiration for this term is a blog post in the New York Times that David Pogue wrote. I was fascinated by the post and it got me thinking about a particular kind of bug in software that is more common than most people may realize. You could say that these bugs are surprisingly common. In fact, if you wanted to be more precise, you could even say that this term applies to a specific type of "surprisingly common type of surprising bugs." Let me explain.

There's something about the chemical makeup of grapefruit juice that makes it interact with our biology and a large number of different drugs in ways which result in dangerous conditions. For example, certain drugs lose their effectiveness dramatically when interacting with grapefruit juice, which can have life-threatening consequences. Other times, the interactions with grapefruit juice can dramatically increase a drug's potency. This can result in "safe doses" becoming very unsafe.

Grapefruit Is a Culprit in More Drug Reactions

The 42-year-old was barely responding when her husband brought her to the emergency room. Her heart rate was slowing, and her blood pressure was falling. Doctors had to insert a breathing tube, and then a pacemaker, to revive her.

They were mystified: The patient’s husband said she suffered from migraines and was taking a blood pressure drug called verapamil to help prevent the headaches. But blood tests showed she had an alarming amount of the drug in her system, five times the safe level.

Did she overdose? Was she trying to commit suicide? It was only after she recovered that doctors were able to piece the story together.

“The culprit was grapefruit juice,” said Dr. Unni Pillai, a nephrologist in St. Louis, Mo. ...

The previous week, she had been subsisting mainly on grapefruit juice. Then she took verapamil, one of dozens of drugs whose potency is dramatically increased if taken with grapefruit. In her case, the interaction was life-threatening.

Last month, Dr. David Bailey, a Canadian researcher who first described this interaction more than two decades ago, released an updated list of medications affected by grapefruit. There are now 85 such drugs on the market, he noted, including common cholesterol-lowering drugs, new anticancer agents, and some synthetic opiates and psychiatric drugs, as well as certain immunosuppressant medications taken by organ transplant patients, some AIDS medications, and some birth control pills and estrogen treatments. ... Under normal circumstances, the drugs are metabolized in the gastrointestinal tract, and relatively little is absorbed, because an enzyme in the gut called CYP3A4 deactivates them. But grapefruit contains natural chemicals called furanocoumarins, that inhibit the enzyme, and without it the gut absorbs much more of a drug and blood levels rise dramatically.

For example, someone taking simvastatin (brand name Zocor) who also drinks a small 200-milliliter, or 6.7 ounces, glass of grapefruit juice once a day for three days could see blood levels of the drug triple, increasing the risk for rhabdomyolysis, a breakdown of muscle that can cause kidney damage.

 

So what do interactions between grapefruit juice and drugs have to do with software testing?

Like grapefruit juice's impact on prescription drugs, software testing involves critical interactions between different parts of the system. And risks exist when these different parts interact with one another. This is true whether you're talking about "large parts" interacting in System Testing or "small parts" interacting in Unit Testing.

Interactions between things are a very rich source of bugs in software. As anyone who has heard the infernal phrase "works on my machine" can tell you, software features and functions often work perfectly fine in many usage scenarios, hardware and software configurations, etc. - only to fail to work in ever-so-slightly different situations.

 

The difference between plain old every-day "Dual-Mode Faults" and "Grapefruit Juice Bugs"

A dual-mode fault occurs whenever two test inputs must both be present to trigger a defect. Most software testers start encountering them quite frequently within days of starting their jobs. Some examples:

  • This "buy" button works fine. Except when the customer is a "new user." (First, action = "click on the buy button" and Second, customer = "new user")

  • Transaction prices for share purchases are calculated correctly. Except when denominated in Japanese Yen. (First, Action = "purchase shares" and Second, Currency = "Japanese Yen")

While all grapefruit juice bugs are dual-mode faults, not all dual-mode faults are Grapefruit Juice Bugs:

  • Grapefruit juice bugs have got to have a little of the element of surprise in them. When you explain them to a developer, their first reaction should be "Huh? How is that even possible?" or at least "Hmmm... That's odd. Let me investigate."

  • Anything along the lines of "This feature usually works, except in IE6, when..." is almost definitely not a grapefruit juice bug. Problematic interactions with IE6 are an incredibly common type of dual-mode fault, not a surprising one.

Whenever you hear "works on my machine" replies to your bug reports, and it takes a while for the issue to be replicated, odds are pretty good that a grapefruit juice bug might be involved.

Here's an example of an especially surprising grapefruit juice bug. This excerpt is from Apple's online help files, posted after users of the original iPad complained about problems with Wi-Fi connectivity. Certain screen brightness settings were causing problems with the Wi-Fi signals. I'm not even going to begin to guess how one could have anything to do with the other.

(Apple help file excerpt: iPad Wi-Fi problems affected by screen brightness settings)

How to identify grapefruit juice bugs during your testing?

What is a tester to do when faced with more potential grapefruit juice bugs than they can handle using traditional methods?

If you're a software tester trying to do your best to determine whether a feature or function in your System Under Test will work "on everyone's machine," you've got a nightmare on your hands. Really nasty combinatorial explosions arise when you consider all of the possible combinations that would be required to test multiple hardware options, multiple software options, multiple usage scenarios, multiple test data inputs (and multiple combinations of the test data itself), multiple ways in which users enter data, and all of the rest of the "stuff that could vary" when people use your application. If you take the time to think expansively about the possible variations in a medium-sized application, quadrillions of possible tests often result (15 variables with 10 options each, for example, already allow 10^15 - a quadrillion - distinct combinations).

While not eating grapefruit and not drinking grapefruit juice might be wise if you are taking drugs, there is rarely, if ever, such an easy method for eliminating the possibility of negative results due to software interactions. Refusing to support IE 6 in order to avoid the disproportionate number of grapefruit juice-like problematic interactions associated with IE6 would be as close as you could come in the world of software.

Design of Experiments-based test design methods can help testers come to grips with this challenge. Orthogonal array software testing (often referred to as OATS or simply OA testing) is a test design strategy that allows us to efficiently detect bugs created by interactions within the system. Orthogonal array software testing is based on the principles of multifactor designed experiments as first explored by Sir R.A. Fisher.

Design of Experiments-based test design methods are very closely related to pairwise testing (AKA allpairs testing, all pairs testing, and pairwise-testing). Any of these test design strategies will allow a software tester to quickly generate a set of tests that includes tests for every single pair of test inputs.

This approach to test design often has multiple advantages, including faster test creation, more varied test scenarios, 100% coverage of all potential dual-mode faults (including hard-to-predict grapefruit juice bugs), and often a smaller resulting set of tests that will be quicker to execute. Having said that, it is by no means a magical silver bullet. This approach to test design requires test designers with above average analytical abilities to identify the appropriate Parameters and Values for their system under test; this is sometimes easier said than done because it requires a new mindset from test designers.

Software testers can take solace that the challenges of software testing, while significant, are simple when compared to trying to understand the effects of drug interactions in people.

Combinatorial testing can look at bugs created by the interaction between multiple (3, 4, 5, 6...) variables. So if there was a bug that didn't get triggered just by using Chrome on Windows, but would get triggered if you also tried to replace an existing photo in your profile with a new one (test idea number 3), then pairwise testing might not catch it. Pairwise test design would create a set of tests that would include at least one test for each of these pairs:

  • Chrome & Windows and

  • Chrome & replace photo and

  • Windows & replace photo, but...

A set of pairwise tests might fail to test the specific combination of all three of those test inputs in the same test. With combinatorial test design approaches, you could create test plans with 100% coverage for 3-way interactions and be sure that all 3-way (or 4-way) interactions are covered. When you create sets of 3-way tests, 4-way tests, 5-way tests, and 6-way tests though, you'll quickly discover that the number of tests required starts to balloon, as the sketch below illustrates.
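To see how fast the required coverage grows, here is a small Ruby sketch (the 10-parameter, 4-value plan is a made-up example) that counts how many distinct t-way combinations of values must each appear in at least one test as the interaction strength rises:

#!/usr/bin/env ruby

# Number of t-way value combinations for a plan with `params` parameters,
# each having `values` values: C(params, t) parameter subsets, each
# contributing values**t combinations of values.
def combos_to_cover(params, values, t)
  choose = (1..t).reduce(1) { |acc, i| acc * (params - i + 1) / i }
  choose * values**t
end

(2..6).each do |t|
  puts "#{t}-way: #{combos_to_cover(10, 4, t)} combinations to cover"
end

The number of generated tests stays far below these counts, since every test covers many combinations at once, but it grows roughly with values**t, which is why 5-way and 6-way plans get large.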

Hexawise allows you to create test plans with the coverage interactions you desire. This allows you to create sets of tests from 2-way all the way up to phenomenally-thorough 6-way sets of tests. In fact, it even lets you generate clever sets of risk-based tests that will, say, prioritize comprehensive 4-way coverage on 4 sets of Parameter Values while ensuring only pairwise coverage of the other, lower-priority, interactions in your system under test. Hexawise also lets you create mixed-strength test plans, so if you have certain factors that you are very concerned about and want to cover more possible interactions among them, you can set the interaction levels for those at a higher level.

 

Related: Hexawise Tip: Using Value Expansions and Value Pairs to Handle Dependent Values - Maximize Test Coverage Efficiency And Minimize the Number of Tests Needed - How to Model and Test CRUD Functionality - 25 Great Quotes for Software Testers

By: Justin Hunter on Feb 11, 2014

Categories: Bugs, Combinatorial Testing, Design of Experiments, Multi-variate Testing, Pairwise Software Testing, Software Testing, Testing Strategies

Hexawise has had another great year and we owe it to you, our users. Thank you! As a result of your input and word-of-mouth recommendations, in 2013, the Hexawise test design tool:

  • Added powerful new features,

  • Became even easier to use,

  • Introduced lots of new practical instructional content, and

  • Doubled usage again.

If you haven’t visited Hexawise lately, please login now to see all the improvements we've made (register for free).

Ease-of-Use Enhancements

Instructional Guides for Hexawise Features
We’ve added illustrated step-by-step instructions describing how to use Hexawise features.

Find them at help.hexawise.com. For our advanced features, like creating risk-based test plans, auto-generating complete scripts, and using Value Expansions, we’ve gone beyond “how?” to explain “why?” you would want to use these features.

Practical Test Design Tips
Want to see tips and tricks for creating unusually powerful tests? Want to learn about common mistakes (and how to avoid them)? Want to understand whether pairwise test design really works in practice? These topics and more can now be found at training.hexawise.com.

Frog-Powered Self-Paced Learning

frogs next

Want to become a Hexawise guru? Listen to the frogs. If you complete the achievements recommended by your friendly amphibian guides, you will level up from a Novice to Practitioner to Expert to Guru.

frogs become expert

You’ll complete two kinds of achievements on your way to guru-ness. To complete some achievements you only need to use certain Hexawise features. Completing the other achievements requires learning important test design concepts and demonstrating that you understand them. The frogs, ever-ready to guide you towards test design mastery, will greet you immediately upon logging into your account.

Powerful New Features

Recently-added features that will make you a more powerful and speedy test designer include:

Coverage of Specific High-Priority Scenarios
You can now force specific scenarios to appear in the tests you generate using the Requirements feature.

Requirements Traceability
Requirements traceability is easier to manage than ever with the Requirements feature.

Generation of Detailed Test Scripts
The Auto-Scripting feature allows you to automatically transform sets of optimized test conditions into test scripts that contain detailed instructions in complete sentences.

Auto-Population of Expected Results in Test Scripts
If you want to, you can even automatically generate rules-based Expected Results to appear as part of your test steps by using the Expected Results feature.

To find out more about these features Hexawise added in 2013, please check out these cool slides: "Powerful New Hexawise Features".

 

Public Recognition and Rapid Growth

Kind Words
As a five-year old company working as hard as we can to make Hexawise the best damn tool it can be, hearing input from you keeps us motivated and headed in the right direction. Once a week or so, we hear users say nice things about our tool. Here are some of the nice things you guys said about Hexawise this past year:

 

“Working coaching session with customer today. Huge data/config matrix was making them weep. Stunned silence when I showed them @Hexawise :)”

-Jim Holmes (@aJimHolmes)

 

“That would be @Hexawise & combinatorial testing to the rescue once again. #Thanks”

-Vernon Richards (@TesterFromLeic)

 

“Freaking awesome visualisation of test data coverage. Kind courtesy of @Hexawise at Moolya!”

-Moolya Testing (@moolyatesting)

 

“Using @Hexawise combinatorial scenarios for e-commerce basket conditions. Team suitably impressed by speed and breadth of analysis. #Win”

-Simon Knight (@sjpknight)

 

“Just discovered Hexawise today, brilliant tool for creating test cases based on coverage of many variables.”

-Stephen Blower (@badbud65)

 

“This changes everything.”

-Dan Caseley (@Fishbowler)

 

Using Hexawise is one of the highest ROI software development practices in the world.

-Results, paraphrased, of independent study by industry expert Capers Jones and colleagues.

 

Rapid Growth
Throughout 2013, Hexawise continued to be piloted and adopted at new companies week after week. Hexawise is currently being used to generate tests at:

More than 100 Fortune 500 firms
More than 2,000 smaller firms

Hexawise office

New offices
Having moved into our new offices in October, the Hexawise team now gets together to do all our best stuff at this swanky new location in Chapel Hill, North Carolina.

What's Next?

Constant Improvements
We keep a public running tally of all of our enhancements. As you can see, we’re making improvements at a rate of more than once a week.

Want us to Add a New Feature?
If you would like an additional feature, please let us know. We listen. There’s not much that makes us happier than finding ways to improve Hexawise.

Please Tell Your Friends
Our growth has been almost purely due to word-of-mouth recommendations from users like you. So if you find Hexawise to be helpful, please help spread the word. We appreciate it more than you know.

You can even let your friends in on a bit of a testing industry secret: while company-wide Hexawise licenses start at $50,000 per year, we allow the first five users to use fully-featured Hexawise accounts at no cost!

Thank you for all of your help and support this past year. And have a great 2014!

By: Justin Hunter on Jan 28, 2014

Categories: Hexawise test case generating tool, Recommended Tool

A common mistake software companies make is creating products where features built for advanced users overwhelm and confuse average users. At Hexawise we have focused on creating a great experience for average and new users while providing advanced users powerful options.

How to Avoid a Common Product Mistake Many Teams Make by Mark Suster

The single biggest mistake most product teams make is building technology for what they believe the user would want rather than what the actual end-user needs. … My philosophy is simple. You design your product for the non-technologist. The “Normal.”

Give people fewer options, fewer choices to make. Give them the bare essentials. Design products for your mom. … power users will always find the features they need in your product even if they’re hidden away. Give them a configure button somewhere. From there give them options to soup up the product and add the complexity that goes with newfound power.

Make sure you read his full post and subscribe to his blog; the posts are clearly written, pragmatic, and insightful.

Our experiences at Hexawise match the points the post makes. We've designed our web-based tool with Jason Fried/37signals-inspired "KISS" (Keep It Simple Stupid) design principles in mind. Our interesting debates about how to add (or whether to add) new features have often been based on the exact tensions his post describes: "Power users want new features" vs. "... but users love our tool precisely because it's easy-to-use and it doesn't have 'feature bloat'."

 

We've experimented with the suggestion the post raises (e.g., rather than say "no" to every advanced user request, we build those features in hidden places for advanced users without distracting our regular users). Results have been good so far.

The Bulk add/bulk edit feature in Hexawise is an example of a powerful feature that is implemented in a way that doesn't interfere with the ease of use for those that don't take advantage of this feature.

For us, there are few things more important to our tool's success in the marketplace than our ability to get the balance right between the "uncluttered simplicity that everyone wants" and the "powerful capabilities that advanced users want."

There are natural tensions between the two opposing design goals. Sean Johnson (Hexawise CTO) is the strongest advocate for simplicity. John and Justin (Hexawise's CEO) love simplicity and understand the importance of simplicity for usability, but also find themselves pushing for advanced features.

I am a strong believer in simplicity with the option for power users to access advanced features. Turning this concept into practice isn't easy. Thankfully Sean has the mindset and skill to make sure we don't sacrifice simplicity and usability while providing power features to power users at Hexawise. I was also lucky enough to have another amazing co-worker, Elliot Laster, at a previous employer who provided this valuable role. One of the tricks to making this work is hiring a great person with the ability to make it a reality (it requires deep technical understanding, an understanding of usability principles and a deep connection to real users, all of which Sean and Elliot have to an amazing degree).

Getting the balance right is one of the most important things a software company can do. We've tried really hard to get it right. We're probably overdue for formal usability testing of basic functionality. Reading blogs like Mark's is useful for new ideas and also for the push to do what you know you should do but seem to keep putting off.

 

By: John Hunter and Justin Hunter on Jan 21, 2014

Categories: Hexawise test case generating tool, Software Development