84 percent coverage in 20 tests

[Image: Hexawise test coverage graph showing 83.9% coverage in just 20 tests]

 

Among the many benefits Hexawise provides is creating a test plan that maximizes test coverage with each new scenario tested. The graph above shows that after just 20 tests, 83.9% of the test combinations have been tested. Read more about this in our case study of a mortgage application software test plan. Just 48 test combinations are needed to test for every valid pair (3.7 million possible test combinations exist in this case). If you are lost now, this video may help.

The coverage achieved by the first few tests in the plan will be quite high (and the graph line will climb sharply). The slope will then decrease in the middle of the plan, because each new test will tend to test fewer net new pairs of values for the first time. At the end of the plan the line will flatten out quite a lot, because by then relatively few pairs of values will be tested together for the first time.

One of the benefits Hexawise provides is making that slope as steep as possible. The steeper the slope, the more efficient your test plan is. If you keep re-testing the same pairs and triples (and so on) while passing up the chance to test untested pairs and triples, you will have to create and run far more tests than if you intelligently create a test plan. With many interactions to test, it is far too complex to derive an intelligent test plan manually. A combinatorial testing tool, like Hexawise, that maximizes test plan efficiency is needed.

For any set of test inputs, there is a finite number of pairs of values that could be tested together (and that can be quite a large number). The coverage chart answers, after each test: what percentage of the total number of pairs (or triples, etc.) that could be tested together have been tested together so far?
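The arithmetic behind that chart is easy to sketch. Here is a minimal Python illustration (the parameter counts are hypothetical, and this is not Hexawise's internal code) of counting every pair of values that could be tested together and expressing progress as a percentage:

```python
from itertools import combinations
from math import prod

# Hypothetical plan: four parameters with 3, 3, 4, and 2 values.
value_counts = [3, 3, 4, 2]

print(prod(value_counts))          # 72 exhaustive test combinations

# Total distinct value pairs that could ever be tested together:
# one pair of values for every pair of parameters.
total_pairs = sum(a * b for a, b in combinations(value_counts, 2))
print(total_pairs)                 # 53

# If the first tests in a plan have covered, say, 45 of those pairs:
print(f"{45 / total_pairs:.1%}")   # 84.9% -- what the coverage chart plots
```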

The Hexawise algorithms achieve objectives that help testers find as many defects as possible in as few tests as possible. At each step of each test case, the algorithm chooses a test condition that will maximize the number of pairs covered for the first time in that test case (or the maximum number of triplets or quadruplets, etc., based on the thoroughness setting defined by the user). Allpairs (AKA pairwise) is a well-known and easy-to-understand test design strategy. Hexawise lets users create pairwise sets of tests that cover every pair, and it also allows test designers to generate far more thorough sets of tests (3-way to 6-way coverage). This allows users to "turn up the coverage dial" and generate tests that cover every single possible triplet of test inputs together at least once (or every 4-way, 5-way, or 6-way combination).
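To make that concrete, here is a toy greedy selection sketch in Python. It illustrates only the general idea (at each step, pick whichever candidate test covers the most not-yet-covered pairs); it is not Hexawise's actual algorithm, and the parameters are invented:

```python
from itertools import combinations, product

# Invented parameters, for illustration only.
parameters = {
    "browser": ["Chrome", "Firefox", "IE"],
    "os": ["Windows", "macOS", "Linux"],
    "account": ["new", "existing"],
}
names = list(parameters)

def pairs_of(test):
    """Every (parameter, value) pair that a single test covers."""
    return set(combinations(zip(names, test), 2))

# Candidate tests: every full combination (fine for a toy example).
candidates = list(product(*parameters.values()))
uncovered = set().union(*(pairs_of(t) for t in candidates))  # all 21 pairs

plan = []
while uncovered:
    # Greedy step: take the test covering the most still-uncovered pairs.
    best = max(candidates, key=lambda t: len(pairs_of(t) & uncovered))
    plan.append(best)
    uncovered -= pairs_of(best)

print(len(plan), "tests instead of", len(candidates), "exhaustive tests")
```

Even on this toy plan the greedy loop covers all 21 pairs in roughly half of the 18 exhaustive tests, and the savings grow dramatically as parameters are added.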

Note that the coverage ratio Hexawise shows is based on the factors entered as items to be tested: it is not a code coverage percentage. Hexawise sorts the test plan to front-load the coverage of value tuples (pairs, triples, etc.), not the coverage of code paths. Coverage of code paths ultimately depends on how good a job the test designer did at extracting the relevant parameters and values of the system under test. You would expect some loose correlation between coverage of identified tuples and coverage of code paths in most typical systems.

If you want to learn more about these concepts, I would recommend Scott Sehlhorst's articles on pairwise and combinatorial test design. They are some of the clearest introductory articles about pairwise and combinatorial testing that I have seen. They also contain some interesting data points on the correlation between 2-way / allpairs / pairwise / n-way coverage (in Hexawise) and the white-box metrics of branch coverage, block coverage and code coverage (not measurable by Hexawise).

In Software testing series: Pairwise testing, for example, Scott includes these data points:

 

  • We measured the coverage of combinatorial design test sets for 10 Unix commands: basename, cb, comm, crypt, sleep, sort, touch, tty, uniq, and wc... The pairwise tests gave over 90 percent block coverage.

 

  • Our initial trial of this was on a subset of Nortel's internal e-mail system where we were able to cover 97% of branches with less than 100 valid and invalid test cases, as opposed to 27 trillion exhaustive test cases.

 

  • A set of 29 pair-wise... tests gave 90% block coverage for the UNIX sort command. We also compared pair-wise testing with random input testing and found that pair-wise testing gave better coverage.

 

Related: Why isn't Software Testing Performed as Efficiently and Effectively as it could be? - Video Highlight Reel of Hexawise – a pairwise testing tool and combinatorial testing tool - Combinatorial Testing, The Quadrant of Massive Efficiency Gains

Specific guidance on how to view the percentage of coverage graph for the test plan in Hexawise:

 

When working on your test plan in Hexawise, to make the checklist visible, click on the two downward arrows shown in the image:

[Image: How-To Progress Checklists]

Then you'll want to open up the "Advanced" list. So you might need to click here:

[Image: Advanced How-To Progress Checklist]

Then the detailed explanation will begin when you click on "Analyze Tests":

[Image: Decreasing Marginal Returns]

 

This post is adapted (and some new content added) from comments posted by Justin Hunter and Sean Johnson.

By: John Hunter on Feb 3, 2012

Categories: Combinatorial Software Testing, Combinatorial Testing, Efficiency, Multi-variate Testing, Pairwise Software Testing, Pairwise Testing, Scripted Software Testing, Software Testing, Software Testing Efficiency

[Chart: job listings for software testers vs. developers]

 

This chart shows that the number of job listings posted for testers has remained essentially flat over the last six years. During this same period, the number of job listings for developers has increased almost 50%. If your company's hiring has mirrored this chart's trends, and your software testers are increasingly feeling that there's not enough time in the day to test everything that's being thrown at them, your testers might have a point.

By: Justin Hunter on Feb 8, 2011

Categories: Software Testing

Combinatorial Software Test Design - Beyond Pairwise Testing

 

I put this together to explain combinatorial software test design methods in an accessible manner. I hope you enjoy it and that, if you do, you'll consider trying it to create test cases for your next testing project (whether you choose our Hexawise test case generator or some other test design tool).

 

Where I'm Coming From

As those of you who read my posts and articles and/or have attended my testing conference presentations know, I am a passionate proponent of these approaches to software test design that maximize variation from test case to test case and minimize repetition. It's not much of an exaggeration to say I hardly write or talk publicly about any other software testing-related topics. My own consistent experiences and formal studies indicate that pairwise, orthogonal array-based, and combinatorial test design approaches often lead to a doubling of tester productivity (as measured in defects found per tester hour) compared to the far more prevalent practice in the software testing industry of selecting and documenting test cases by hand. How is it possible that this approach generates such a dramatic increase in productivity? What is so different between the manually selected test cases and the pairwise or combinatorial test cases? Why isn't this test design technique far more broadly adopted than it is?

 

A Common Challenge to Understanding: Complicated, Wonky Explanation

My suspicion is that a significant reason combinatorial software testing methods are not much more widely adopted is that many of the articles describing them are simply too complex and/or too abstract for many testers to understand and apply. Such articles say things like:

A. Mathematical Model

 

A pairwise test suite is a t-way interaction test suite where t = 2. A t-way interaction test suite is a mathematical structure, called a covering array.

Definition 1 A covering array, CA(N; t, k, |v|), is an N × k array from a set, v, of values (symbols) such that every N × t subarray contains all tuples of size t (t-tuples) from the |v| values at least once [8].

The strength of a covering array is t, which defines, for example, 2-way (pairwise) or 3-way interaction test suite. The k columns of this array are called factors, where each factor has |v| values. In general, most software systems do not have the same number of values for each factor. A more general structure can be defined that allows variability of |v|.

Definition 2 A mixed level covering array, MCA(N; t, k, (|v1|, |v2|, ..., |vk|)), is an N × k array on |v| values, where |v| = |v1| + |v2| + ... + |vk|, with the following properties: (1) Each column i (1 ≤ i ≤ k) contains only elements from a set Si of size |vi|. (2) The rows of each N × t subarray cover all t-tuples of values from the t columns at least once.

  • "Construct Pairwise Test Suites Based on the Bak-Sneppen Model of Biological Evolution" World Academy of Science, Engineering and Technology 59 2009 - Jianjun Yuan, Changjun Jiang

 

If you're a typical software tester, even one motivated to try new methods to improve your skills, you could be forgiven for not mustering up the enthusiasm to read such articles. The relevance, the power, and the applicability of combinatorial testing - not to mention that this test design method can often double your software testing efficiency and increase the thoroughness of your software testing - all tend to get lost in the abstract, academic, wonky explanations that are typically used to describe combinatorial testing. Unfortunately for pragmatic, action-oriented software testing practitioners, many of the readily accessible articles on pairwise testing and combinatorial testing tend to be on the wonky end of the spectrum; exceptions to that general rule are the good, practitioner-oriented introductory articles available at combinatorialtesting.com.

 

A Different Approach to Explaining Combinatorial Testing and Pairwise Testing

In the photograph-rich, numbers-light presentation embedded above, I've tried to explain what combinatorial testing is all about without the wonky-ness. The benefits from structured variation and from using combinatorial test design are, in my view, wildly under-appreciated. The approach has the following extremely important benefits:

  • Less repetition from test case to test case

    • In the context of discussing testing's "pesticide paradox," James Bach, I believe, used the analogy that following in someone's footsteps is a very good way to survive traversing a minefield but a generally lousy way to find software defects efficiently.
    • Maximizing variation from test case to test case, as a general rule, is an absolutely spectacular way to find defects quickly.
    • There are thousands, if not trillions, of relevant combinations to select from when identifying test cases to execute; computer algorithms will be able to solve the problem of "how can maximum variation be achieved?" far better than human brains can.
  • More coverage of combinations of test inputs

    • Most of the time, since awareness of pairwise and combinatorial testing methods remains low in the software testing community, combining all possible pairs of values in at least one test case is not even a conscious goal of testers.
    • Even if this were a goal of their test design strategy, testers would have a tremendous challenge in trying to achieve such a goal: with hundreds, thousands or tens of thousands of targeted combinations to cover, losing track of a significant number of them and/or forgetting to include them in software tests is virtually a foregone conclusion unless a test case generator is used.
    • More thorough coverage leads to more defects being found.
  • Efficiency (Testers can "turn the coverage dial" to achieve maximum efficiency with a minimal number of tests)

    • The efficiency and effectiveness benefits of pairwise testing have been demonstrated in testing projects in every major industry.
    • I wanted to prominently include the message that testers using test case generators have the option to dramatically increase the thoroughness of the tests they generate, because it is a topic that often gets ignored in introductions to pairwise testing and in case studies.
  • Thoroughness - (Testers can also "turn the coverage dial" to achieve maximum thoroughness if that is their goal; a back-of-envelope sketch of how the coverage targets grow with the dial setting follows this list)

    • Too often, testers view pairwise as a technique that focuses on a very small number of curiously strong tests; that is only part of the story.
    • This can lead to the false impression that combinatorial testing methods are inappropriate where high levels of testing thoroughness are required.
    • You can create very different sets of tests that are as thorough as possible (given your understanding of what you are testing) whether you have one hour to execute tests or one month.
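Here is the back-of-envelope sketch promised above: for a hypothetical system of ten 4-valued parameters (not a Hexawise example), it counts the t-way combinations a plan must cover at each setting of the coverage dial, along with a simple counting lower bound on how many tests that takes:

```python
from itertools import combinations
from math import comb, prod

# Hypothetical system: ten parameters with 4 values each.
value_counts = [4] * 10

print(f"exhaustive tests: {prod(value_counts):,}")  # 1,048,576

for t in range(2, 7):
    # Number of distinct t-way value combinations a plan must cover...
    targets = sum(prod(c) for c in combinations(value_counts, t))
    # ...and a lower bound on tests: each test covers comb(10, t)
    # of those targets at once.
    lower = targets // comb(len(value_counts), t)
    print(f"{t}-way: {targets:,} combinations to cover, >= {lower:,} tests")
```

Real generated plans land somewhat above these lower bounds, but the shape of the trade-off is the point: moving the dial from 2-way (at least 16 tests here) to 6-way (at least 4,096) buys enormous thoroughness while still staying far below the million-plus exhaustive possibilities.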

 

Other Recommended Sources of Information on Pairwise and Combinatorial Testing:

By: Justin Hunter on Oct 7, 2010

Categories: Combinatorial Software Testing, Combinatorial Testing, Design of Experiments, Hexawise test case generating tool, Multi-variate Testing, Pairwise Software Testing, Pairwise Testing, Recommended Tool, Testing Strategies, Uncategorized

I can be too verbose with some of my posts. This will be quick.

I recommend that you read this. It is the Exploratory Testing Dynamics document, a tightly condensed list of useful testing heuristics authored by three of the most thoughtful and experienced software testers alive today: James Bach, Michael Bolton, and Jon Bach. Using it will help you improve your software testing capabilities. http://www.satisfice.com/blog/wp-content/uploads/2009/10/et-dynamics22.pdf

Told you it'd be brief. Go. Now. Read.

By: Justin Hunter on Jul 13, 2010

Categories: Exploratory Testing, Software Testing

Context is Important in Usability Testing

As Adam Goucher recently pointed out, it is important to keep in mind WHY you are testing. Testing teams working on similar projects will have different priorities that will impact how much time they spend testing, what they look for, and what test design methods they use. (Kaner and Bach provide some great specific examples that underscore this point here.) In short, the context in which you're testing should impact how you test.

The same maxim holds true when you're conducting usability testing. Considering the context is important as well: both the context of the users of the application and the context of the application itself vis-à-vis other similar products. Important considerations can include:

  1. What problem does the application solve for the user?

  2. What does the application you're testing aspire to be as compared to competing applications?

  3. Who is the target audience of the application? What differentiating features does the application have?

  4. What is the "personality" of the application?

  5. What benefits and values do specific segments of target users prioritize?

These questions are all important when you analyze a web site with an eye on usability. I would recommend combining both a "checklist" approach (e.g., Jakob Nielsen's well-known Ten Usability Heuristics) with an approach that takes context-specific considerations (such as the 5 questions listed above) into account.

 

The Context of a User Group I'm Familiar with: the Hexawise Team

As of the end of June, 2010, our website leaves a great deal to be desired, to say the least. Hexawise.com consists mainly of a single landing page with anemic content that we threw together a year ago thinking that we'd "turn it into a real site" when we got around to it. We then proceeded to focus all of our development efforts on the Hexawise tool itself as opposed to our website (which we've let fester). Apologies if you've visited our site and wanted to know more details about what our test design tool does and how it complements test management tools. To date, we haven't provided as much information as we should have.

We've kicked off a project now to right this wrong. To do so, we're drafting new content and organizing our thoughts about how to present it to visitors. Our needs are relatively simple. We want to create a set of simple wireframes that will allow us to quickly experiment with a few design options and gather feedback from friends and target users. For us, ease of use is key: being able to use the tool quickly, without needing to read through a user guide, is critical. We also value a tool's ability to make it easy to collaborate with one another.

With that as background, what follows are some quick comments on a couple of wireframing tools I've recently explored in the context of our preferences and values. Wireframing is the practice of creating a skeletal visual interface for software. It is used for prototyping and soliciting early user/client feedback, and it comes before the more time-consuming phases of design. Two popular wireframe creation tools are Balsamiq and HotGloo. Both are Flash applications. Balsamiq is a desktop app; HotGloo is a SaaS tool used over the internet.

 

Balsamiq and HotGloo

The first thing that strikes me about Balsamiq is the rich library of UX elements neatly organized and accessible by category or through a quick-add search box. Everything works as it should: the drag, drop, click and type interface follows the principle of least astonishment. Fortunately, ease of use doesn't preclude speed: modifying the content and structure of UX elements is text-based rather than form-based, blending a touch of UNIX command-line efficiency into an otherwise graphical interface. UNIX and IRC users will feel right at home.

HotGloo is a very promising wireframing tool. They have clearly taken a page from the 37signals product development playbook: they have made a tool with a smaller set of features that is very intuitive to use, and they've avoided the risk of "feature bloat" by having fewer bells and whistles. Where I think they add value: as a SaaS tool, HotGloo is exceptionally good at allowing multiple members of a team to collaborate on iterative designs. Whereas Balsamiq uses traditional files, HotGloo is accessible from anywhere. HotGloo enables multiple users to chat and view mockups in real time (only one user can make changes at a time). Feedback is very easy to give, and I found their support to be exceptionally responsive.

HotGloo is easy to learn, but my designer felt frustrated by how much time he had to spend tweaking little things (like changing the names and links of a tabbed window element). The element controller pop-ups got in the way of work and he found himself frequently dragging them away. HotGloo also takes a more minimalist approach to UX elements and features than Balsamiq. Whether this is a strength or a weakness is a matter of personal preference. The 37signals camp (which I am highly sympathetic to) argues that it is often preferable to have fewer, easier-to-use features, since the vast majority of users will not want or need too many bells and whistles. Our designer felt that Balsamiq's feature set fit his needs better. As a "meddlesome manager" who wants to provide regular input into the content for version 2.0 of our site, feature-richness is less important to me than collaborative ability.

 

Usability Considerations I Shared with the HotGloo Team

[Screenshots: the two UX suggestions shared with the HotGloo team]

 

Balsamiq

[Screenshot]

 

Balsamiq has a couple of usability features that make it fun to use. A case in point is how you insert an image. Balsamiq gives you three choices, the third of which is a really nice touch: you can (1) upload a file, (2) use a photo on the web, or (3) perform a Flickr search right there and then without ever leaving the comfort of the Balsamiq window. In my book, that kind of thoughtful workflow integration is what makes a good product great.

 

"Postscript" - Good Karma and an Open Invitation

[Screenshots]

 

As a post-script of sorts, after sending 5 UX suggestions (including the 2 above) to the HotGloo team last week, I received 5 outstanding UX suggestions for our Hexawise tool this week - out of the blue - from Janesh Kodikara, a new Hexawise user based in Sri Lanka. In addition, the HotGloo team provided 5 excellent UX suggestions for improving our tool as well. Taken together, they are some of the best suggestions we've had to date. If anyone reading this would be willing to share your usability suggestions with us, I can assure you, we're extremely interested in hearing your ideas.

By: Justin Hunter on Jul 5, 2010

Categories: Context-Driven Testing, Pairwise Software Testing, Uncategorized, User Experience, User Interface

A friend passed me this set of recent tweets from Wil Shipley, a Mac developer with 11,743 followers on Twitter as of today. Wil recently encountered the familiar problem of what to do when you've got more software tests to run than you can realistically execute.

 

[Screenshot: Wil Shipley's tweets]

I love that. Who can't relate?

Now if only there were a good, quick way to reduce the number of tests from over a billion to a smaller, much more manageable set of tests that were "Altoid-like" in their curious strength. :) I rarely use this blog for shameless plugs of our test case generating tool, but I can't help myself here. The opening is just too inviting. So here goes:

 

"Wil,

There's an app for that... See www.hexawise.com for Hexawise, a "pairwise software test case generating tool on steroids." It eats problems like the one you encountered for breakfast. Hexawise winnows bazillions of possible test cases down in the blink of an eye to small, manageable sets of test cases that are carefully constructed to maximize coverage in the smallest amount of tests, with flexibility to adjust the solutions based upon the execution time you have available. In addition to generating pairwise testing solutions, Hexawise also generates more thorough applied statistics-based "combinatorial software testing" solutions that include tests for, say, all possible 6-way combinations of test inputs.

Where your Mac cops an attitude and tells you "Bitch, I ain't even allocating 1 billion integers to hold your results" and showers you with taunting derisive sneers, head-waggling and snaps all carefully choreographed to let you know where you stand, Hexawise, in contrast, would helpfully tell you: "Only 1 billion total possibilities to select tests from? Pfft! Child's play. Want to start testing the 100 or so most powerful tests? Want to execute an extremely thorough set of 10,000 tests? Want to select a thoroughness setting in the middle? Your wish is my command, sir. You tell me approximately how many tests you want to run and the test inputs you want to include, and I'll calculate the most powerful set of tests you can execute (based on proven applied statistics-based Design of Experiments methods) before you can say "I'm Wil Shipley and I like my TED Conference swag."

More info at:
http://hexawise.tv/intro/
or
https://hexawise.com/Hexawise_Introduction.pdf
free trials at:
http://hexawise.com/signup

By: Justin Hunter on Jun 23, 2010

Categories: Combinatorial Software Testing, Combinatorial Testing, Interesting People , Pairwise Software Testing, Pairwise Testing, Recommended Tool, Software Testing

There are good reasons James Bach is so well known among the testing community and constantly invited to give keynote presentations around the globe at software testing conferences. He's passionate about testing and educating testers; he's a gifted, energetic, and entertaining speaker with a great sense of humor; and he takes joy in rattling his saber and attacking well-established institutions and schools of thought that he disagrees with. He doesn't take kindly to people who make inflated claims of benefits that would materialize "if only you'd perform testing in XYZ way or with ABC tool" given that (a) he can always seem to find exceptions to such claims, (b) he doesn't shy away from confrontation, and (c) he (rightly, in my view) thinks that such benefits statements tend to discount the importance of critical thinking skills being used by testers and other important context-specific considerations.

Leave it up to James to create a list of 13 questions that would be great to ask the next software testing tool vendor who shows up to pitch his problem-solving product. In his blog post titled "The Essence of Heuristics," he posed this exact set of questions in a slightly different context, but as a software testing tool vendor myself, I found they really hit home. They are:

 

  1. Do they teach you how to tell if it’s working?
  2. Do they teach you how to tell if it’s going wrong?
  3. Do they teach you heuristics for stopping?
  4. Do they teach you heuristics for knowing when to apply it?
  5. Do they compare it to alternative heuristics?
  6. Do they show you why it works?
  7. Do they help you understand when it probably works best?
  8. Do they help you know how to re-design it, if needed?
  9. Do they let you own it?
  10. Do they ask you to practice it?
  11. Do they tell stories about how it has failed?
  12. Do they listen to you when you question or challenge it?
  13. Do they praise you for questioning and challenging it?

 

[Side note: Apparently I wasn't the only one who thought of Hexawise and pairwise / combinatorial test design approaches when they saw these 13 questions. I was amused that after I drafted this post, I saw Jared Quinert's / @xflibble's tweet just now:]

[Screenshot: Jared Quinert's tweet]

Where do I come down on each of James' 13 questions with respect to people I talk to about our test design tool, Hexawise, and the types and size of benefits it typically delivers? Quite simply, "Yes" to all 13. I enjoy talking about exactly the kinds of questions that James raised in his list. In fact, when I sought out James to ask him questions at a conference in Boston earlier this year, it was because I wanted his perspective on many of the points above, particularly #11 (hearing stories about how James has seen pairwise and combinatorial approaches to test design fail) and #7 (hearing his views on where it works best and where it would be difficult to apply). I'll save my specific answers for another post, but I am serious about wanting to share my thoughts on them; time constraints are holding me back today. I gave a speech at the ASQ World Conference on Quality Improvement in St. Louis last week, though, that addressed many, but not all, of James' questions.

I'm not your typical software tool vendor. Basically, my natural instincts are all wrong for sales. I agree with the premise that "a fool with a tool is still a fool"; when talking to target clients and/or potential partners, I'm inclined to point out deficiencies, limitations, and various things that could go wrong; I'm more of an introvert than an extrovert; etc. Not exactly the typical characteristics of a successful salesman... Having said that, I believe we've built a very good tool that enables dramatic efficiency and thoroughness benefits in many testing situations, but our tool, along with the pairwise and combinatorial test design approaches it enables, has its limitations. It is primarily by talking to software testers about their positive and negative experiences that our company is able to improve our tool, enhance our training, and provide honest, pragmatic guidance to users about where and how to use our tool (and where and how not to).

Tool vendors who defend their tools (and/or the approaches by which their tools help users solve problems) as magical, silver-bullet solutions are being both foolish and dishonest. Tool vendors who choose not to engage in serious, honest and open discussions with users about the challenges users face when applying their tools in different situations are being short-sighted. From my own experience, I can say that talking about the 13 topics raised by James has been invaluable.

By: Justin Hunter on Jun 1, 2010

Categories: Combinatorial Testing, Design of Experiments, Hexawise test case generating tool, Pairwise Testing, Software Testing, Software Testing Efficiency, Uncategorized

While I wouldn't describe myself as a blanket "Microsoft hater," I have developed a no-holds-barred hatred of IE6. Our Hexawise test generation tool doesn't support it (and we have no plans to unless the surprisingly high number of our target clients in financial services firms, government agencies, and buggy-whip manufacturers who are requesting IE6 support makes it too financially painful for us to continue to stand on principle).

As an unapologetic IE6-hater, it made me smile to watch and listen to Scott Ward sing his witty and original song "IE is being mean to me." Enjoy.

 

 

A tip of the hat to Chris McMahon, an impressively knowledgeable software tester who played bass professionally in bands for years, for alerting me to this gem.

By: Justin Hunter on May 14, 2010

Categories: Browser Testing, Product Management

Luis Fernández, an associate professor at Universidad de Alcalá, is conducting a survey of software testers to gather data on questions such as "Why isn't software testing conducted as efficiently and effectively as it should be?" and "What factors lead to software testing being 'under-appreciated' as a potential career path?"

His survey (as of March 2010) is available here: http://www.cc.uah.es/encuestas/index.php?sid=28392&lang=en

Personally, I agree that the following two issues (identified in his survey) are significant causes of inefficiency in software testing:

1) "People tend to execute testing in an uncontrolled manner until the total expenditure of resources in the belief that if we test a lot, in the end, we will cover or control all the system." (Or, at least, given the relatively undisciplined test case selection methods prevalent in the industry, my experience in analyzing manually selected test scenarios is that testers generally believe (a) they are covering a higher proportion of an application's possible combinations than they actually are and (b) they underestimate the amount of time that is spent during test execution unproductively repeating steps that they have previously tested)

2) "Many managers did not receive appropriate training on software testing so they do not appreciate its interest or potential for efficiency and quality."

It is unfortunate, but true, that many testing managers do not have any background whatsoever in combinatorial testing methods that (a) dramatically reduce the amount of time it takes to select and document test cases, and (b) simultaneously improve test execution efficiency when applied correctly. See, for example, this

See also: This slideshow on efficient and effective test design

Please consider taking Fernández's short survey. It takes only 5-10 minutes to complete.

By: Justin Hunter on Mar 1, 2010

Categories: Combinatorial Testing, Efficiency, Software Testing, Software Testing Efficiency

An unusually hectic work schedule has been keeping me hopping lately. I returned this weekend from a great two-week trip to the UK in which I visited with 5 testing teams using our Hexawise tool to design test cases for applications used in two banks, a consulting and systems integration firm, a grocery store chain, and a telecoms company.

Every product manager worth his or her salt will tell you it is a good idea to go meet with customers, listen to them, and watch them as they use your application. Even though everyone I know agrees with this, I find it difficult to make it happen as regularly as I would like. This trip provided me with a reminder of how valuable in-depth customer interactions can be. The two weeks of on-site visits with testing teams proved to be a great way to: (a) reconnect with customers, (b) get actionable input about what users like / don't like about our tool, (c) identify new ways we can continue to refine our tool, and even (d) understand a couple of unexpected ways teams are using it.

Bret Pettichord's tweets on "What is Agile?" / "not Agile?" prompted me to write this quick post. I like them a lot.

 

[Screenshot: Bret Pettichord's "What is Agile?" tweets]

 

When we first created our Hexawise tool, we followed the 4 steps Bret lays out in his description of "What is Agile?" My experience in the UK over the last two weeks was the start of one of many "Repeat" cycles.

I admire people who can succinctly summarize wisdom into bite-sized quips the way Bret did with his two tweets. Another guy who excels at creating sound bites is James Carville. Love him or hate him, he has that skill in spades. When I watched the documentary "The War Room," I felt like I was watching the master of the sound bite in his element. Me? I'm more of a rambling, meandering, verbose communicator. I've just taken 332 words and a screenshot of Bret's tweets when all I set out to do in starting this post was to share Bret's 32 words with you.

By: Justin Hunter on Feb 25, 2010

Categories: Agile, Combinatorial Testing, Hexawise test case generating tool