We've been experimenting with a live customer support feature in our tool lately. We're rolling out live chat support on a beta basis to see how useful our users find it.

The motivations for creating it were two-fold. The first was the Mayday button that Amazon recently announced for its Kindle Fire tablets. How cool is that, right? Live support available on demand any time you want it! Ingenious.


 

Our second motivation for building a live chat support feature into our tool is that - while software test designers consistently tell us that Hexawise's features are really easy to use - new users often run into test design questions as they start using the tool, such as: "How should I be thinking about different ways of defining equivalence classes?" or "Given what I'm trying to test in my system, how much detail is too much in this context?" We wanted to be available instantly to collaborate with them and help answer those questions in real time. We're obsessive about customer service, and having our expert test designers a click away from every user every time they encounter a question is just too good an opportunity to pass up.

Early indications are extremely promising. Users are telling us the service is amazingly helpful. And while we worried that we might feel stretched thin by a flood of user questions, that hasn't happened at all: interactions have come in at manageable volumes, and we've found them to be really positive. Many have taught us ways to improve our tool or to make certain test design concepts easier for Hexawise users to understand. Often, customers click a "call me" button to talk through questions live by phone rather than typing back and forth. I'm glad this next conversation happened over keyboards.

This conversation happened about 45 minutes ago. Everyone at Hexawise headquarters is still smiling broadly. It made all of our days and stands apart from the rest. Enjoy!

(17:15:25) Visitor Hi

(17:15:40) Sean Johnson Hey

(17:15:44) Sean Johnson What's up?

(17:15:51) Visitor Hi Sean!

(17:15:57) Visitor Hey, I created 308 test cases

(17:16:02) Sean Johnson k

(17:16:02) Visitor out of a possible 18 trillion

(17:16:07) Sean Johnson nice!

(17:16:11) Visitor I consolidated all my user stories in 6 sprints total

(17:16:13) Visitor to 1 test set

(17:16:14) Visitor which is fine

(17:16:28) Visitor but I noticed from test case # 100 plus to 308

(17:16:37) Visitor most of my test cases are now 'any value'

(17:16:48) Visitor I was wondering if there's an option to force hexawise to pick a value for me

(17:16:52) Visitor but I don't there is

(17:16:56) Visitor but that would be a good enhancement

(17:17:05) Sean Johnson ha! are you spying on me?

(17:17:09) Visitor for 'any value' you can have the app just pick a random one

(17:17:10) Visitor LOL

(17:17:11) Visitor Nope

(17:17:14) Sean Johnson seriously… that's what I'm working on right now

(17:17:19) Visitor NO WAY!

(17:17:20) Sean Johnson what are the odds?

(17:17:24) Visitor O M G

(17:17:26) Sean Johnson yes way

(17:17:33) Visitor I've been meaning to provide that feedback 2 weeks ago

(17:17:39) Visitor but never took the time!

(17:17:46) Visitor I WOULD LOVE TO HAVE THAT FEATURE!

(17:17:49) Visitor Oh

(17:17:51) Visitor in the mean time

(17:17:56) Visitor my testers workaround

(17:18:00) Visitor is to print the test plan

(17:18:17) Visitor and just pick the values randomly from the value expansion list and input parameter list

(17:18:26) Visitor AWESOME Sean!

(17:18:31) Visitor Well let me know when it's available

(17:18:41) Sean Johnson that's really crazy

(17:18:41) Sean Johnson well… I guess I'm working on the right thing!

(17:18:42) Sean Johnson Will tomorrow be soon enough for you?

(17:18:42) Sean Johnson :-)

(17:18:43) Sean Johnson for now… it's going to be hidden

(17:18:43) Sean Johnson you'll add ?full=true

(17:18:43) Sean Johnson to the end of the URL

(17:18:57) Sean Johnson and that'll force Hexawise to fill in the any_values

(17:19:03) Visitor that's AWESOME!

(17:19:06) Visitor I will do the workaround

(17:19:09) Visitor OMG, you made my day!

(17:19:13) Visitor THANKS A TON!

(17:19:15) Visitor :-)

(17:19:18) Sean Johnson I'll send you note this evening or in the morning when it's live on [your company's Hexawise instance]

(17:19:23) Visitor I am so happy

(17:19:24) Visitor LOL

(17:19:28) Visitor Thanks so much

(17:19:32) Sean Johnson thanks for chatting! made my day to know I picked the right next priority.

(17:19:32) Visitor this will make my life easier

(17:19:38) Visitor Oh yeah, totally

(17:19:38) Sean Johnson excellent.

(17:19:58) Visitor I honestly think I'm not the only one who will appreciate this enhancement

(17:20:01) Visitor you guys are the best!

(17:20:03) Visitor Thanks so much

(17:20:05) Sean Johnson :-) we try

(17:20:15) Visitor You all do an amazing job

(17:20:21) Visitor this tool is the best

(17:20:21) Sean Johnson look for an email from me shortly

(17:20:26) Visitor will do

(17:20:27) Visitor thanks!

(17:20:33) Sean Johnson thanks!

We've been working hard for the past ~5 years building and continuously improving Hexawise so that it will be a tool software test designers find extremely useful and - equally importantly - enjoyable to use. It is hard to put into words how satisfying it is to see an interaction like this one.

By: Justin Hunter on Dec 4, 2013

Categories: Uncategorized

I came across Elisabeth Hendrickson's "Test Heuristics Cheat Sheet" yesterday and developed some pairwise (AKA 2-way combinatorial) test cases using many of the good ideas contained in it. I highly recommend it; send it (or a link to this blog post) to everyone on your QA team.

As an indication of how well Hendrickson's Test Heuristics Cheat Sheet works for uncovering defects, consider what happened when I used it:

  • I wanted to create a set of pairwise tests that could be broadly applicable for testing thousands of different applications, so I incorporated many ideas from the Test Heuristics Cheat Sheet (a rough sketch of what such checklist-derived inputs can look like appears after this list).

  • I intend to use those inputs to test our test design tool, Hexawise.

  • The way Hexawise works is that users enter "things they want to test" on the first of three screens, the "Define Inputs" screen, then click on "Create Tests." Hexawise then uses a scientific approach to maximize coverage of the combinations of all the "stuff to be tested" in the fewest possible number of tests. This approach is based on more than 40 years of Design of Experiments lessons and includes pairwise / AllPairs methods as well as more thorough 3-way, 4-way, 5-way, and 6-way coverage.

  • Ironically, even before starting to execute the test conditions suggested by Hexawise, I discovered that the special characters that I had input into the "Define Inputs" screen (which I took from the Test Heuristics Cheat Sheet) triggered a previously unidentified defect in Hexawise itself.

  • The fact that it was triggered so quickly in an application that has been live for a year and used thousands of times is a strong indication that using checklists and cheat sheets can be a great way to efficiently find defects.
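To make this concrete, here is a rough sketch - with hypothetical parameter names and values, not Hexawise's actual input format - of how cheat-sheet ideas become the kind of concrete values you might enter on a "Define Inputs" screen, and why exhaustively testing even a toy example quickly gets out of hand:

```python
# A rough sketch (hypothetical names and values, not Hexawise's input format)
# of turning test heuristics cheat sheet ideas into concrete test inputs.
from math import prod

# Each parameter's values borrow common cheat-sheet ideas: empty strings,
# very long strings, special characters, awkward dates, boundary numbers.
parameters = {
    "Name field": ["", "a" * 255, "O'Brien", "<script>alert(1)</script>", "普通话"],
    "Quantity": ["-1", "0", "1", "999999999", "3.14159"],
    "Date": ["02/29/2011", "12/31/1999", "01/19/2038", ""],
    "Browser": ["IE8", "Firefox", "Chrome", "Safari"],
}

# Even this toy example has hundreds of possible combinations; real systems
# have far more parameters and values, which is why exhaustive testing is
# impractical and a test design tool selects a small, high-coverage subset.
print(prod(len(values) for values in parameters.values()), "exhaustive combinations")
```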

Why is using checklists to guide your testing often such an efficient and effective way of finding defects? Here's my top ten list:

  1. The "bad ideas" have already been weeded out.
    1. The ideas on the list have found enough defects to make the author of the checklist think there is value in testing the particular idea.
    2. If you've got a checklist or "cheat sheet" put together by someone as thoughtful and experienced as the Bachs, Bolton, and Hendrickson, you're getting a highly-condensed executive summary version of many of their valuable insights.
    3. All testers go through many, many, "I wonder what would happen if we did this or considered that?" scenarios.
    4. The checklists referenced above represent expertise culled from thousands of testing projects.

  2. Checklists are directly actionable. You can apply them in almost no time at all.
  3. They work well. See Cem Kaner's slides on the Value of Checklists (11 MB PDF file).
  4. They can easily evolve into some of your most powerful test artifacts.
    • Start with the lists above. See if each of the test ideas triggers defects in your Systems Under Test.
    • Find a lot of defects from certain test ideas? Create your own checklist of the ideas that worked and iterate on it over time. Consider expanding upon the checklist items and concepts that do bear fruit.
    • Don't ever find defects from certain test ideas? Consider dropping those items from your checklists (or put tests for those ideas at the back of your lists and only include them if you have extra time).

  5. Checklists include useful, defect-triggering ideas that you may not have thought of on your own.
  6. They're free.
    • No software or books to buy.
    • No courses or conferences to attend.

  7. Using checklists mitigates the risk that you will forget to test for things that you know you should be testing for (but could well forget in any specific instance).
    • As humans, we're naturally forgetful despite our best efforts.
    • Checklists are widely used with good results by doctors, lawyers, pilots, software testers, and people going to grocery stores to minimize the effects of these shortcomings.

  8. Software testing checklists are an efficient way to communicate actionable information.
  9. Software testing checklists are widely applicable to all kinds of software testing.
    • Checklists can be used in creating Unit Tests, Assembly Tests, Product Tests, System Tests, Functional Tests, Load Tests, Performance Tests, User Acceptance Tests, etc.
    • Checklists can be used by Exploratory Testers and "script-everything-in-advance" test-case-centric testers.
    • Checklists can be used in Agile projects as well as Waterfall projects.

  10. Software testing checklists can be easily used in pairwise and combinatorial testing.
    • Using elements from the checklists in a pairwise test has the added benefit that you will not only test for every one of the testing ideas on the checklist, but also test every idea on the checklist **in combination with** every other test idea on the checklist in at least one test case, as the sketch below illustrates.
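Here is a minimal sketch of what that property means in practice. The parameters, values, and test cases below are illustrative only (not the output of any particular tool); the helper simply reports which pairs of checklist-derived values have not yet appeared together in any test:

```python
# A minimal sketch of the "every idea in combination with every other idea"
# property from point 10. Parameters, values, and tests are illustrative.
from itertools import combinations

tests = [
    {"Name": "", "Quantity": "-1", "Date": "02/29/2011"},
    {"Name": "O'Brien", "Quantity": "0", "Date": "12/31/1999"},
    {"Name": "", "Quantity": "0", "Date": "02/29/2011"},
    # ... a complete pairwise plan would continue until no pairs are missing
]

def uncovered_pairs(tests):
    """Return parameter-value pairs that never appear together in any test."""
    values = {}
    for test in tests:
        for param, value in test.items():
            values.setdefault(param, set()).add(value)
    missing = []
    for (p1, vals1), (p2, vals2) in combinations(values.items(), 2):
        for v1 in vals1:
            for v2 in vals2:
                if not any(t[p1] == v1 and t[p2] == v2 for t in tests):
                    missing.append(((p1, v1), (p2, v2)))
    return missing

# Any pairs printed here still need a test case that combines them.
print(uncovered_pairs(tests))
```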

By: Justin Hunter on May 3, 2012

Categories: Checklists, Software Testing, Testing Checklists, Uncategorized

Request for ideas: What are good examples of interactive online training (including exercises & quizzes) for how to use web apps?

We continue to look for ways to improve the ability of our users to add value to their organizations. We are always in the process of improving the software tools we provide.

The value proposition for using pairwise and combinatorial tools is huge. The biggest roadblocks to adoption we see are people not being aware of the advantages and not being sure how to proceed once they do see them.

We designed Hexawise to be very easy to use. But even so, the concepts behind combinatorial testing take some people a bit of time to operationalize. To help with this we offer on-site and off-site personal training.

We are looking to create some online training to ease the transition into using Hexawise's combinatorial software testing tools most effectively. We would love to hear about examples of interactive online training that people have found useful.

 

Related: Why isn't Software Testing Performed as Efficiently and Effectively as it could be? - In Praise of Data-Driven Management (AKA "Why You Should be Skeptical of HiPPOs")

By: John Hunter on Apr 24, 2012

Categories: Uncategorized

Combinatorial Software Test Design - Beyond Pairwise Testing

 

I put this presentation together to explain combinatorial software test design methods in an accessible manner. I hope you enjoy it and that, if you do, you'll consider using this approach to create test cases for your next testing project (whether you choose our Hexawise test case generator or some other test design tool).

 

Where I'm Coming From

As those of you who read my posts, read my articles, and/or have attended my testing conference presentations know, I am a passionate proponent of approaches to software test design that maximize variation from test case to test case and minimize repetition. It's not much of an exaggeration to say I hardly write or talk publicly about any other software testing-related topic. My own consistent experiences and formal studies indicate that pairwise, orthogonal array-based, and combinatorial test design approaches often lead to a doubling of tester productivity (as measured in defects found per tester hour) compared to the far more prevalent industry practice of selecting and documenting test cases by hand. How is it possible that this approach generates such a dramatic increase in productivity? What is so different between manually selected test cases and pairwise or combinatorial test cases? And why isn't this test design technique far more broadly adopted than it is?

 

A Common Challenge to Understanding: Complicated, Wonky Explanation

My suspicion is that a significant reason combinatorial software testing methods are not much more widely adopted is that many of the articles describing them are simply too complex and/or too abstract for many testers to understand and apply. Such articles say things like:

A. Mathematical Model

 

A pairwise test suite is a t-way interaction test suite where t = 2. A t-way interaction test suite is a mathematical structure, called a covering array.

Definition 1 A covering array, CA(N; t, k, |v|), is an N × k array from a set, v, of values (symbols) such that every N × t subarray contains all tuples of size t (t-tuples) from the |v| values at least once [8].

The strength of a covering array is t, which defines, for example, 2-way (pairwise) or 3-way interaction test suite. The k columns of this array are called factors, where each factor has |v| values. In general, most software systems do not have the same number of values for each factor. A more general structure can be defined that allows variability of |v|.

Definition 2 A mixed level covering array, MCA (N; t, k, (|v1|,|v2|,..., |vk|)), is an N × k array on |v| values, where

|v| = |v1| + |v2| + ... + |vk| (the sum of |vi| for i = 1 to k), with the following properties: (1) Each column i (1 ≤ i ≤ k) contains only elements from a set Si of size |vi|. (2) The rows of each N × t subarray cover all t-tuples of values from the t columns at least once.

  • "Construct Pairwise Test Suites Based on the Bak-Sneppen Model of Biological Evolution" World Academy of Science, Engineering and Technology 59 2009 - Jianjun Yuan, Changjun Jiang

 

If you're a typical software tester, even one motivated to try new methods to improve your skills, you could be forgiven for not mustering up the enthusiasm to read such articles. The relevance, the power, and the applicability of combinatorial testing - not to mention that this test design method can often double your software testing efficiency and increase the thoroughness of your testing - all tend to get lost in the abstract, academic, wonky explanations typically used to describe it. Unfortunately for pragmatic, action-oriented software testing practitioners, many of the readily accessible articles on pairwise and combinatorial testing tend to be on the wonky end of the spectrum; an exception to that general rule is the set of good, practitioner-oriented introductory articles available at combinatorialtesting.com.

 

A Different Approach to Explaining Combinatorial Testing and Pairwise Testing

In the photograph-rich, numbers-light presentation embedded above, I've tried to explain what combinatorial testing is all about without the wonkiness. The benefits of structured variation and combinatorial test design are, in my view, wildly under-appreciated. They include the following extremely important benefits:

  • Less repetition from test case to test case

    • In the context of discussing testing's "pesticide paradox," James Bach, I believe, used the analogy that following in someone's footsteps is a very good way to survive traversing a minefield but a generally lousy way to find software defects efficiently.
    • Maximizing variation from test case to test case, as a general rule, is an absolutely spectacular way to find defects quickly.
    • There are thousands, if not trillions, of relevant combinations to select from when identifying test cases to execute; computer algorithms can solve the problem of "how can maximum variation be achieved?" far better than human brains can (see the sketch after this list).
  • More coverage of combinations of test inputs

    • Most of the time, since awareness of pairwise and combinatorial testing methods remains low in the software testing community, combining all possible pairs of values in at least one test case is not even a conscious goal of testers.
    • Even if this were a goal of their test design strategy, testers would have a tremendous challenge in trying to achieve such a goal: with hundreds, thousands or tens of thousands of targeted combinations to cover, losing track of a significant number of them and/or forgetting to include them in software tests is virtually a foregone conclusion unless a test case generator is used.
    • More thorough coverage leads to more defects being found.
  • Efficiency (Testers can "turn the coverage dial" to achieve maximum efficiency with a minimal number of tests)

    • The efficiency and effectiveness benefits of pairwise testing have been demonstrated in testing projects in every major industry.
    • I wanted to prominently include the message that testers using test case generators have the option to dramatically increase the thoroughness of the tests they generate, because it is a topic that often gets ignored in case studies and introductions to pairwise testing.
  • Thoroughness - (Testers can also "turn the coverage dial" to achieve maximum thoroughness if that is their goal)

    • Too often, testers view pairwise as a technique that focuses on a very small number of curiously strong tests; that is only part of the story.
    • This can lead to the false impression that combinatorial testing methods are inappropriate where high levels of testing thoroughness are required.
    • You can create very different sets of tests that are as thorough as possible (given your understanding of what you are testing) no matter whether you have one hour to execute tests or one month.
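To illustrate the point about algorithms, here is a minimal, hypothetical sketch of greedy pairwise (2-way) test selection in Python. It is not how Hexawise works internally - real tools use considerably more sophisticated algorithms and also support the higher-strength (3-way to 6-way) coverage mentioned above - but it shows why a computer can achieve maximum variation and full pairwise coverage with far fewer tests than exhaustive testing requires:

```python
# A hypothetical greedy sketch of pairwise (2-way) test selection.
# Not Hexawise's actual algorithm; just an illustration of the idea.
from itertools import combinations, product
from math import prod

parameters = {
    "Browser": ["IE8", "Firefox", "Chrome", "Safari"],
    "OS": ["Windows XP", "Windows 7", "OS X"],
    "User type": ["guest", "member", "admin"],
    "Payment": ["credit card", "PayPal", "invoice"],
}
names = list(parameters)

def pairs_in(test):
    """All pairs of (parameter, value) assignments that one test covers."""
    return {((p1, test[p1]), (p2, test[p2])) for p1, p2 in combinations(names, 2)}

# Every pair of values that must appear together in at least one test.
uncovered = {
    ((p1, v1), (p2, v2))
    for p1, p2 in combinations(names, 2)
    for v1 in parameters[p1]
    for v2 in parameters[p2]
}

tests = []
while uncovered:
    # Greedily pick the candidate test that covers the most uncovered pairs.
    best = max(
        (dict(zip(names, values)) for values in product(*parameters.values())),
        key=lambda t: len(pairs_in(t) & uncovered),
    )
    tests.append(best)
    uncovered -= pairs_in(best)

print("Exhaustive combinations:", prod(len(v) for v in parameters.values()))  # 108
print("Greedy pairwise tests:  ", len(tests))  # typically a dozen or so
```

Even in this toy example, a dozen or so well-chosen tests cover every pair of values that the 108 exhaustive tests would cover, and that gap widens dramatically as parameters and values are added.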

 

Other Recommended Sources of Information on Pairwise and Combinatorial Testing:

By: Justin Hunter on Oct 7, 2010

Categories: Combinatorial Software Testing, Combinatorial Testing, Design of Experiments, Hexawise test case generating tool, Multi-variate Testing, Pairwise Software Testing, Pairwise Testing, Recommended Tool, Testing Strategies, Uncategorized

Context is Important in Usability Testing

As Adam Goucher recently pointed out, it is important to keep in mind WHY you are testing. Testing teams working on similar projects will have different priorities that will impact how much time they test, what they look for, and what test design methods they use. (Kaner and Bach provide some great specific examples that underscore this point here). In short, the context in which you're testing should impact how you test.

The same maxim holds true when you're conducting usability testing. Considering the context is important here as well - both the context of the users of the application and the context of the application itself vis-à-vis other similar products. Important considerations include:

  1. What problem does the application solve for the user?

  2. What does the application you're testing aspire to be as compared to competing applications?

  3. Who is the target audience of the application? What differentiating features does the application have?

  4. What is the "personality" of the application?

  5. What benefits and values do specific segments of target users prioritize?

These questions are all important when you analyze a web site with an eye on usability. I would recommend combining a "checklist" approach (e.g., Jakob Nielsen's well-known Ten Usability Heuristics) with an approach that takes context-specific considerations (such as the 5 questions listed above) into account.

 

The Context of a User Group I'm Familiar with: the Hexawise Team

As of the end of June 2010, our website leaves a great deal to be desired, to say the least. Hexawise.com consists mainly of a single landing page with anemic content that we threw together a year ago, thinking we'd "turn it into a real site" when we got around to it. We then proceeded to focus all of our development efforts on the Hexawise tool itself as opposed to our website (which we've let fester). Apologies if you've visited our site wanting to know more about what our test design tool does and how it complements test management tools. To date, we haven't provided as much information as we should have.

We've kicked off a project now to right this wrong. To do so, we're drafting new content and organizing our thoughts about how to present it to visitors. Our needs are relatively simple. We want to create a set of simple wireframes that will allow us to quickly experiment with a few design options and gather feedback from friends and target users. For us, ease of use is key: being able to use a tool quickly, without needing to read through a user guide, is critical. We also value a tool that makes it easy to collaborate with one another.

With that as background, what follows are some quick comments on a couple of wireframing tools I've recently explored in the context of our preferences and values. Wireframing is the practice of creating a skeletal visual interface for software. It is used for prototyping and soliciting early user/client feedback, and it comes before the more time-consuming phases of design. Two popular wireframe creation tools are Balsamiq and HotGloo. Both are Flash applications; Balsamiq is a desktop app, while HotGloo is a SaaS tool used over the internet.

 

Balsamiq and HotGloo

The first thing that strikes me about Balsamiq is the rich library of UX elements, neatly organized and accessible by category or through a quick-add search box. Everything works as it should: the drag, drop, click, and type interface follows the principle of least astonishment. Fortunately, ease of use doesn't preclude speed: modifying the content and structure of UX elements is text-based rather than form-based, blending a touch of UNIX command-line efficiency into an otherwise graphical interface. UNIX and IRC users will feel right at home.

HotGloo is a very promising wireframing tool. The team has clearly taken a page from the 37 Signals product development playbook: they've made a tool with a smaller set of features that is very intuitive to use, and they've avoided the risk of "feature bloat" by having fewer bells and whistles. Where I think they add value: as a SaaS tool, HotGloo is exceptionally good at allowing multiple members of a team to collaborate on iterative designs. Whereas Balsamiq uses traditional files, HotGloo is accessible from anywhere. HotGloo enables multiple users to chat and view mockups in real time (only one user can make changes at a time). Feedback is very easy to give, and I found their support to be exceptionally responsive.

HotGloo is easy to learn, but my designer was frustrated by how much time he had to spend tweaking little things (like changing the names and links of a tabbed window element). The element controller pop-ups got in the way of work, and he found himself frequently dragging them away. HotGloo also takes a more minimalist approach than Balsamiq with respect to UX element features. Whether this is a strength or a weakness is a matter of personal preference. The 37 Signals camp (which I am highly sympathetic to) argues that it is often preferable to have fewer, easier-to-use features, since the vast majority of users will not want or need too many bells and whistles. Our designer felt that Balsamiq's feature set fit his needs better. As a "meddlesome manager" who wants to provide regular input into the content for version 2.0 of our site, feature-richness is less important to me than collaborative ability.

 

Usability Considerations I Shared with the HotGloo Team

[Screenshots of two of the usability suggestions I shared with the HotGloo team]

 

Balsamiq


 

Balsamiq has a couple of usability features that make it fun to use. A case in point is how you insert an image. Balsamiq gives you three choices, the third of which is a really nice touch: you can (1) upload a file, (2) use a photo on the web, or (3) perform a Flickr search right there and then, without ever leaving the comfort of the Balsamiq window. In my book, that kind of thoughtful workflow integration is what makes a good product great.

 

"Postscript" - Good Karma and an Open Invitation


 

As a post-script of sorts: after sending 5 UX suggestions (including the 2 above) to the HotGloo team last week, I received 5 outstanding UX suggestions for our Hexawise tool this week - out of the blue - from Janesh Kodikara, a new Hexawise user based in Sri Lanka. In addition, the HotGloo team provided 5 excellent UX suggestions for improving our tool as well. Taken together, they are some of the best suggestions we've had to date. If anyone reading this is willing to share usability suggestions with us, I can assure you we're extremely interested in hearing your ideas.

By: Justin Hunter on Jul 5, 2010

Categories: Context-Driven Testing, Pairwise Software Testing, Uncategorized, User Experience, User Interface

There are good reasons James Bach is so well known among the testing community and constantly invited to give keynote presentations around the globe at software testing conferences. He's passionate about testing and educating testers; he's a gifted, energetic, and entertaining speaker with a great sense of humor; and he takes joy in rattling his saber and attacking well-established institutions and schools of thought that he disagrees with. He doesn't take kindly to people who make inflated claims of benefits that would materialize "if only you'd perform testing in XYZ way or with ABC tool" given that (a) he can always seem to find exceptions to such claims, (b) he doesn't shy away from confrontation, and (c) he (rightly, in my view) thinks that such benefits statements tend to discount the importance of critical thinking skills being used by testers and other important context-specific considerations.

Leave it up to James to create a list of 13 questions that would be great to ask the next software testing tool vendor who shows up to pitch his problem-solving product. In his blog post titled "The Essence of Heuristics," he posed this exact set of questions in a slightly different context, but as a software testing tool vendor myself, they really hit home. They are:

 

  1. Do they teach you how to tell if it’s working?
  2. Do they teach you how to tell if it’s going wrong?
  3. Do they teach you heuristics for stopping?
  4. Do they teach you heuristics for knowing when to apply it?
  5. Do they compare it to alternative heuristics?
  6. Do they show you why it works?
  7. Do they help you understand when it probably works best?
  8. Do they help you know how to re-design it, if needed?
  9. Do they let you own it?
  10. Do they ask you to practice it?
  11. Do they tell stories about how it has failed?
  12. Do they listen to you when you question or challenge it?
  13. Do they praise you for questioning and challenging it?

 

[Side note: Apparently I wasn't the only one who thought of Hexawise and pairwise / combinatorial test design approaches when seeing these 13 questions. I was amused that, just after drafting this post, I saw Jared Quinert's (@xflibble's) tweet:]

[Screenshot of @xflibble's tweet]

Where do I come down on each of James' 13 questions with respect to the people I talk to about our test design tool, Hexawise, and the types and size of benefits it typically delivers? Quite simply: "Yes" to all 13. I enjoy talking about exactly the kinds of questions James raised in his list. In fact, when I sought out James to ask him questions at a conference in Boston earlier this year, it was because I wanted his perspective on many of the points above, particularly #11 (hearing stories about how James has seen pairwise and combinatorial approaches to test design fail) and #7 (hearing his views on where it works best and where it would be difficult to apply). I'll save my specific answers for another post - I am serious about wanting to share my thoughts on them; time constraints are holding me back today - but I gave a speech at the ASQ World Conference on Quality Improvement in St. Louis last week that addressed many, though not all, of James' questions.

I'm not your typical software tool vendor. Basically, my natural instincts are all wrong for sales: I agree with the premise that "a fool with a tool is still a fool"; when talking to target clients and/or potential partners, I'm inclined to point out deficiencies, limitations, and various things that could go wrong; I'm more of an introvert than an extrovert; etc. Not exactly the typical characteristics of a successful salesman... Having said that, I believe we've built a very good tool that enables dramatic efficiency and thoroughness benefits in many testing situations, but our tool - along with the pairwise and combinatorial test design approaches it supports - has its limitations. It is primarily by talking to software testers about their positive and negative experiences that our company is able to improve our tool, enhance our training, and provide honest, pragmatic guidance to users about where and how to use our tool (and where and how not to).

Tool vendors who defend their tools (and/or the approaches by which their tools help users solve problems) as magical, silver-bullet solutions are being both foolish and dishonest. Tool vendors who choose not to engage in serious, honest, and open discussions with users about the challenges users face when applying their tools in different situations are being short-sighted. From my own experience, I can say that talking about the 13 topics raised by James has been invaluable.

By: Justin Hunter on Jun 1, 2010

Categories: Combinatorial Testing, Design of Experiments, Hexawise test case generating tool, Pairwise Testing, Software Testing, Software Testing Efficiency, Uncategorized

Dave Whalen posted a good piece here asserting that software testing, done well, requires a blend of "Science" and "Art". I recommend it. (He also has a good post about testing databases here).

 


 

He includes the statement below, which I agree with. If you are a software tester and have any doubts about whether these methods work (pairwise software testing in particular), I would encourage you to conduct a pilot project on your own and measure the results achieved with and without the technique applied.

 

From the scientific side, testing can include a number of proven techniques such as equivalency class testing, boundary value analysis, pair-wise testing, etc. These techniques, if used properly, can reduce test times and focus on finding the bugs where they tend to hang out - much like a porch light on a summer night.
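As a concrete illustration of the first two techniques Dave mentions - a sketch of my own with made-up requirements, not something from his post - consider a "quantity" field that is documented to accept whole numbers from 1 to 100:

```python
# A small sketch (made-up requirements) of equivalence class testing and
# boundary value analysis for a field that should accept whole numbers 1 to 100.
LOWER, UPPER = 1, 100

# Equivalence classes: one representative value per class is usually enough.
equivalence_classes = {
    "valid quantity": 50,          # any value in [1, 100]
    "below the valid range": -5,   # any value < 1
    "above the valid range": 250,  # any value > 100
    "non-numeric input": "abc",
}

# Boundary values: defects tend to cluster at the edges of the valid range.
boundary_values = [LOWER - 1, LOWER, LOWER + 1, UPPER - 1, UPPER, UPPER + 1]

def is_valid_quantity(value):
    """Toy stand-in for the behavior under test."""
    return isinstance(value, int) and LOWER <= value <= UPPER

for label, value in equivalence_classes.items():
    print(label, value, is_valid_quantity(value))
for value in boundary_values:
    print("boundary", value, is_valid_quantity(value))
```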

 

My response to Dave's post, included below, is not especially profound or even well-written, but, hey, I'm in a hurry in the pre-Thanksgiving rush and the topic hit close to home so I couldn't resist jotting a little something. Enjoy. Please let me know your thoughts / reactions if you have any.

 

Dave,

Very well said!

I wholeheartedly, enthusiastically agree with your premise. I also wish that more people saw things the same way.

My father co-wrote Statistics for Experimenters, which describes the "art and science" within the Design of Experiments ("DoE") field of applied statistics. Well-run manufacturing companies use DoE techniques in their manufacturing processes. Many companies, such as Toyota, see them as an absolutely fundamental part of their processes (yet unfortunately, software testers, who could use DoE techniques such as pairwise and other forms of combinatorial testing, are often ignorant about how to use them properly, and the software testing industry as a whole dramatically under-utilizes such techniques... but I digress).

I brought the book up because it opens with a good example relevant to the points you made. To win at the game of 20 Questions, it is useful to know "the science" of game theory and DoE: choose questions so that there is a 50/50 chance that the answer will be Yes. (Each answer then eliminates about half of the remaining possibilities, so 20 well-chosen questions can distinguish roughly 2^20 - about a million - candidates.) Someone who knows this technique, all else being equal, will win more often because of their "scientific" approach than someone who doesn't. And yet... other stuff (subject matter expertise in this example, or subject matter expertise and "artistic" Exploratory Testing in your example) is indispensable as well.

You can't truly excel at either 20 Questions or software testing unless you have a good mix of "science" (governed by mathematical principles, proven methods of DoE, etc.) and "art" (governed by experience, instincts, and subject matter expertise).

By: Justin Hunter on Nov 24, 2009

Categories: Combinatorial Testing, Efficiency, Pairwise Testing, Software Testing, Uncategorized

There are some phrases in English that, as often as not, come off sounding obligatory and/or insincere. The phrase "I'm honored..." comes to mind (particularly if someone is accepting an award in front of a room full of people).

Be that as it may, I genuinely felt honored last night, and again today, by a couple of comments James Bach made about me, including these:

 

[Screenshot: James Bach's tweets about Hexawise results, Oct 23, 2009]

 

Here's the quick background: (1) James knows much more about software testing than I do, and I respect his views a lot. (2) He has a reputation for not suffering fools gladly and for bluntly telling people he doesn't respect them if he doesn't respect the content of their views. (3) In addition to his extremely broad expertise on "testing in general," James, like Michael Bolton, knows a lot about pairwise and combinatorial testing methods and how to use them. (4) I firmly (and passionately) believe that pairwise and combinatorial testing methods are (a) dramatically under-appreciated and (b) dramatically under-utilized. (5) James has published a very good, well-reasoned article about some of the limitations of pairwise testing methods that I wanted to talk to him about. (6) I co-wrote an article that IEEE Computer recently published about combinatorial testing that I wanted to discuss with him. (7) James and I have been at the STP Conference in Boston over the past few days. (8) I reached out and asked to meet at the conference to talk about pairwise and combinatorial testing methods and to share my findings: in the dozens of projects I've been involved with that compared testers' efficiency and effectiveness, I've routinely seen defects found per tester hour more than double. (9) I was interested in his insights into where these methods are most applicable, where they are least applicable, and what his experiences have been in teaching combinatorial testing methods to students.

In short, frankly, my goals in meeting with him were to: (a) meet someone new, interesting, and knowledgeable, learn as much as I could from his experiences, and benefit from his impressive critical thinking and questioning nature; and (b) avoid tripping up with sloppy reasoning (while unapologetically expressing the reasons I feel combinatorial testing methods are dramatically under-appreciated by the software testing community) in front of someone who (i) can smell BS a mile away and (ii) doesn't suffer fools gladly.

I learned a lot, heard some fantastic war stories, and heard excellent counter-examples that disproved a couple of the generalizations I was making (but didn't dampen my unshaken assertion that combinatorial testing methods are wildly under-utilized by the software testing community). I thoroughly enjoyed the experience. Moving forward, as a result of our meeting, I will go through an exercise that will make me more effective: carefully thinking through and enumerating all of the assumptions behind statements like "I've measured the effectiveness of testers dozens of times - trying to control external variables as much as reasonably possible - and I'm consistently seeing more than twice as many defects per tester hour when testers adopt pairwise/combinatorial testing methods."

His compliment last night was private, so I won't share it, but it ranks among my all-time favorite compliments. I'm honored. Thanks, James.

By: Justin Hunter on Oct 23, 2009

Categories: Combinatorial Testing, Design of Experiments, Efficiency, Interesting People , Pairwise Testing, Software Testing, Software Testing Efficiency, Testing Case Studies, Uncategorized

I have just created the first video overview of the Hexawise test case generator. Please take a look and let me know your thoughts (either with an email or a comment below).

 

Introduction to Hexawise Pairwise Testing Tool / Combinatorial Testing Tool

 

I'll refine and hopefully improve it over time, but I wanted to share it at this point to get feedback. Is the pace of the video too slow? Does it have too much detail about pairwise coverage? Does the fact that I've got a dull, Midwestern, nasal monotone mean I should have someone with a more animated and melodious "voice made for radio" do the voice-over?

Thanks in advance for your feedback!

By: Justin Hunter on Oct 20, 2009

Categories: Uncategorized