An unusually hectic work schedule has been keeping me hopping lately. I returned this weekend from a great two-week trip to the UK in which I visited with five testing teams using our Hexawise tool to design test cases for applications being used in two banks, a consulting and systems integration firm, a grocery store chain, and a telecoms company.

Every product manager worth his or her salt will tell you it is a good idea to go meet with customers, listen to them, and watch them as they use your application. Even though everyone I know agrees with this, I find it difficult to make happen as regularly as I would like. This trip provided me with a reminder of how valuable in-depth customer interactions can be. The two weeks of on-site visits with testing teams proved to be a great way to: (a) reconnect with customers, (b) get actionable input about what users like and don't like about our tool, (c) identify new ways we can continue to refine our tool, and even (d) understand a couple of unexpected ways teams are using it.

Bret Pettichord's tweets on "What is Agile?" / "not Agile?" prompted me to write this quick post. I like them a lot.

 

[Screenshot: Bret Pettichord's "What is Agile?" / "not Agile?" tweets]

 

When we first created our Hexawise tool, we followed the 4 steps Bret lays out in his description of "What is Agile?" My experience in the UK over the last two weeks was the start of one of many "Repeat" cycles.

I admire people who can succinctly summarize wisdom into bite-sized quips like Bret achieved with his two tweets. Another guy who excels at creating sound-bites is James Carville. Love him or hate him, he has that skill in spades. When I watched the movie "The War Room," I felt like I was watching the "master of the sound-bite" in his element. Me? I'm more of a rambling, meandering, verbose communicator. I've just taken 332 words and a screenshot of Bret's tweets when all I set out to do in starting to write this post was to share Bret's 32 words with you.

By: Justin Hunter on Feb 25, 2010

Categories: Agile, Combinatorial Testing, Hexawise test case generating tool


 

All the quotes below are from the inside cover of Statistics for Experimenters, written by George Box, Stuart Hunter, and William G. Hunter (my late father). The Design of Experiments methods described in the book (namely, the science of finding out as much information as possible in as few experiments as possible) were the inspiration behind our software test case generating tool. In paging through the book again today, I found it striking (but not surprising) how many of these quotes are directly relevant to efficient and effective software testing (and to efficient and effective test case design strategies in particular):

  • "Discovering the unexpected is more important than confirming the known." - George Box

  • "All models are wrong; some models are useful." - George Box

  • "Don't fall in love with a model."

  • How, with a minimum of effort, can you discover what does what to what? Which factors do what to which responses?

  • "Anyone who has never made a mistake has never tried anything new." - Albert Einstein

  • "Seek computer programs that allow you to do the thinking."

  • "A computer should make both calculations and graphs. Both sorts of output should be studied; each will contribute to understanding." - F. J. Anscombe

  • "The best time to plan an experiment is after you've done it." - R. A. Fisher

  • "Sometimes the only thing you can do with a poorly designed experiment is to try to find out what it died of." - R. A. Fisher

  • The experimenter who believes that only one factor at a time should be varied, is amply provided for by using a factorial experiment.

  • Only in exceptional circumstances do you need or should you attempt to answer all the questions with one experiment.

  • "The business of life is to endeavor to find out what you don't know from what you do; that's what I called 'guessing what was on the other side of the hill.'" - Duke of Wellington

  • "To find out what happens when you change something, it is necessary to change it."

  • "An engineer who does not know experimental design is not an engineer." - Comment made by to one of the authors by an executive of the Toyota Motor Company

  • "Among those factors to be considered there will usually be the vital few and the trivial many." - J. M. Juran

  • "The most exciting phrase to hear in science, the one that heralds discoveries, is not 'Eureka!' but 'Now that's funny...'" - Isaac Asimov

  • "Not everything that can be counted counts and not everything that counts can be counted." - Albert Einstein

  • "You can see a lot by just looking." - Yogi Berra

  • "Few things are less common than common sense."

  • "Criteria must be reconsidered at every stage of an investigation."

  • "With sequential assembly, designs can be built up so that the complexity of the design matches that of the problem."

  • "A factorial design makes every observation do double (multiple) duty." - Jack Couden

Where a quote is not attributed, I'm assuming it is from one of the authors. The best known of the quotes above, "All models are wrong; some models are useful," is widely attributed to George Box in particular, which is accurate. Although I forgot to confirm that suspicion with him when I saw him over Christmas break, I suspect most of the unattributed quotes are from George (as opposed to from Stu or my dad). George is 90 now and still off-the-charts smart and funny, and he is probably the best storyteller I've met in my life. If he were younger and on Twitter, he'd be one of those guys who churned out highly retweetable chestnuts again and again. [Update: George Box died in 2013.]

 

Related thoughts

As you know if you've read my blog before, I am a strong proponent of taking the Design of Experiments principles laid out in this book and applying them in the field of software testing to improve the efficiency and effectiveness of software test case design (e.g., by using pairwise software testing, orthogonal array software testing, and/or combinatorial software testing techniques). In fact, I decided to create my company's test case generating tool, called Hexawise, after using Design of Experiments-based test design methods on a couple dozen projects during my time at Accenture and measuring dramatic improvements in tester productivity (as well as dramatic reductions in the amount of time it took to identify and document test cases). We saw these improvements in every single pilot project in which we used these methods to identify tests.

My goal, in continuing to improve our Hexawise test case generating tool, is to help make the efficiency-enhancing Design of Experiments methods embodied in the book accessible to "regular" software testers and more broadly adopted throughout the software testing field. Some days, it feels like a shame that the approaches from the Design of Experiments field (extremely well-known and broadly used in manufacturing industries across the globe, in research and development labs of all kinds, and in product development projects in chemicals, pharmaceuticals, and a wide variety of other fields) have not made much of an inroad into software testing. The irony is that it is hard to think of a field in which it is easier, quicker, or more immediately obvious to prove that dramatic benefits result from adopting Design of Experiments methods than software testing. All it takes is for a testing team to decide to run a simple proof-of-concept pilot. It could be for as little as a half-day's testing activity for one tester. Create a set of pairwise tests with Hexawise or another tool like James Bach's AllPairs tool. Have one tester execute the tests suggested by the test case generating tool. Have the other tester(s) test the same application in parallel. Measure four things (a small sketch of tallying them follows this list):

  1. How long did it take to create the pairwise / DoE-based test cases?

  2. How many defects were found per hour by the tester(s) who executed the "business as usual" test cases?

  3. How many defects were found per hour by the tester who executed the pairwise / DoE-based tests?

  4. How many defects were identified overall by each plan's tests?
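
Tallying the results is simple arithmetic. Here is a minimal sketch of how the four measurements might be recorded and compared; every figure in it is a hypothetical placeholder rather than data from any actual pilot.

```python
# Minimal sketch for tallying the four pilot measurements.
# Every number below is a hypothetical placeholder, not data from a real pilot.

def defects_per_hour(defects_found: int, hours_spent: float) -> float:
    """Efficiency: defects found per tester hour."""
    return defects_found / hours_spent

pairwise_design_hours = 1.5          # 1. time to create the pairwise / DoE-based tests
bau_defects, bau_hours = 4, 4.0      # 2. "business as usual" tester's results
doe_defects, doe_hours = 9, 4.0      # 3. pairwise / DoE-based tester's results

print(f"Design time (pairwise/DoE): {pairwise_design_hours} hours")
print(f"BAU efficiency: {defects_per_hour(bau_defects, bau_hours):.2f} defects/hour")
print(f"DoE efficiency: {defects_per_hour(doe_defects, doe_hours):.2f} defects/hour")

# 4. Overall thoroughness: total defects identified by each set of tests
print(f"Total defects found - BAU: {bau_defects}, pairwise/DoE: {doe_defects}")
```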

These four simple measurements will typically demonstrate dramatic improvements in:

  • Speed of test case identification and documentation

  • Efficiency in defects found per hour

As well as consistent improvements to:

  • Overall thoroughness of testing.

 

A Suggestion: Experiment / Learn / Get the Data / Let the Efficiency and Effectiveness Findings Guide You

I would be thrilled if this blog post gave you the motivation to explore this testing approach and measure the results. Whether you've used similar-sounding techniques before or have never heard of DoE-based software testing methods, whether you're a software testing newbie or a grizzled veteran, I suspect the experience of running a structured proof of concept pilot (and seeing the dramatic benefits I'm confident you'll see) could be a watershed moment in your testing career. Try it! If you're interested in conducting a pilot, I'd be happy to help get you started, and if you'd be willing to share the results of your pilot publicly, I'd be able to provide ongoing advice and test plan review. Send me an email or leave a comment.

To the grizzled and skeptical veterans (and yes, Mr. Shrini Kulkarni / @shrinik who tweeted "@Hexawise With all due respect. I can't credit any technique the superpower of 2X defect finding capability. sumthng else must be goingon" before you actually conducted a proof of concept using Design of Experiments-based testing methods and analyzed your findings, I'm lookin' at you), I would (re)quote Sophocles: "One must try by doing the thing; for though you think you know it, you have no certainty until you try." For newer testers, eager to expand your testing knowledge (and perhaps gain an enormous amount of credibility by taking the initiative, while you're at it), I'd (re)quote Cole Porter: "Experiment and you'll see!"

I'd welcome your comments and questions. If you're feeling, "Sounds too good to be true, but heck, I can secure a tester for half a day to run some of these DoE-based / pairwise tests and gather some data to see whether or not it leads to a step-change improvement in efficiency and effectiveness of our testing," and you're wondering how you'd get started, I'd be happy to help you out and do so at no cost to you. All I'd ask is that you share your findings with the world (e.g., on your blog, or by letting me use your data as the firms did with their findings in the "Combinatorial Software Testing" article below).

 


By: Justin Hunter on Jan 27, 2010

Categories: Combinatorial Testing, Design of Experiments, Hexawise test case generating tool, Multi-variate Testing, Software Testing

I responded to a recent blog post written by Gareth Bowles today and was struck - again - that a defect that must have been seen >10 million times by now has still not been corrected. When anyone responds to a blog post on Blogger.com, the stat counter says "1 comments" instead of correctly stating "1 comment." What's up with that?

 

[Image: Lands' End, via Wikipedia]

 

The clothing company Lands' End (with the apostrophe erroneously after the s instead of before it) has a bizarre but somewhat logical explanation for why they have printed their grammatical-mistake-laden brand name on millions of pieces of clothing. According to one version of the story I have heard, they printed their first brochures with the typo and couldn't afford to get it changed. I also remember reading a more detailed explanation in a catalog in the late '80s to the effect that by the time the company's management realized their mistake and tried to get trademark protection on "Land's End," they discovered that another firm already had trademarked rights to that name. Quick internet searches can't verify that, so perhaps my memory is just playing tricks on me. But I digress. Here's the defect I wanted to highlight with this post:

 

[Screenshot: Blogger.com comment counter reading "1 comments"]

 

For Blogger.com to leave the extra "s" in has me stumped for several reasons. First, this defect has been seen by a ton of people: according to Alexa's site tracking, Blogger.com is the world's 7th most popular site. Second, Blogger.com is owned by Google (among the most competent, quality-oriented IT wizards on the planet), and no trademark protection is preventing the correction. Third, it would seem to be such an easy thing to fix. Fourth, other sites (like WordPress) don't make the same mistake. Fifth, it doesn't seem like a "style preference" issue (like spelling traveled with one "L" or two); it seems to me like a pretty clear case of a mistake. It would be a mistake to say "one cars," "one computers," or "one pedantic grammarians"; similarly, it is a mistake to say "one comments." What gives? Anyone have any ideas?
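
And just to underline how small the fix would be: here is a minimal sketch, in Python rather than whatever Blogger actually runs, of the kind of one-line pluralization logic involved (the function name is mine, purely for illustration).

```python
def comment_label(count: int) -> str:
    """Return a grammatically correct counter, e.g. '1 comment' or '2 comments'."""
    return f"{count} comment{'' if count == 1 else 's'}"

assert comment_label(0) == "0 comments"
assert comment_label(1) == "1 comment"
assert comment_label(2) == "2 comments"
```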

For anyone wondering where the ">10 million times" figure came from, it is pure conjecture on my part. If anyone has a reasoned way to refute or confirm it or propose a better estimate, I'm all ears.

By: Justin Hunter on Dec 9, 2009

Categories: Inexplicable Mysteries, Software Testing

Dave Whalen posted a good piece here asserting that software testing, done well, requires a blend of "Science" and "Art". I recommend it. (He also has a good post about testing databases here).

 


 

He includes the statement below, which I agree with. If you are a software tester and have any doubts about whether these methods work (pairwise software testing in particular), I would encourage you to conduct a pilot project on your own and measure the results achieved with and without the technique applied.

 

From the scientific side, testing can include a number of proven techniques such as equivalency class testing, boundary value analysis, pair-wise testing, etc. These techniques, if used properly, can reduce test times and focus on finding the bugs where they tend to hang out - much like a porch light on a summer night.

 

My response to Dave's post, included below, is not especially profound or even well-written, but, hey, I'm in a hurry in the pre-Thanksgiving rush and the topic hit close to home so I couldn't resist jotting a little something. Enjoy. Please let me know your thoughts / reactions if you have any.

 

Dave,

Very well said!

I wholeheartedly, enthusiastically agree with your premise. I also wish that more people saw things the same way.

My father co-wrote Statistics for Experimenters, which describes the “art and science” within the Design of Experiments ("DoE") field of applied statistics. Well-run manufacturing companies use DoE techniques in their manufacturing processes. Many companies, such as Toyota, see them as an absolutely fundamental part of their processes (yet unfortunately, software testers, who could use DoE techniques such as pairwise and other forms of combinatorial testing, are often ignorant about how to use them properly, and the software testing industry as a whole dramatically under-utilizes such techniques…. but I digress).

I brought the book up because it opens with a good example relevant to the points you made. To win at the game of 20 Questions, it is useful to know “the science” of game theory and DoE: choose questions so that there is a 50/50 chance that the answer will be Yes. Someone who knows this technique, all else being equal, will win more often because of their “scientific” approach than someone who doesn’t know it. And yet… other stuff (subject matter expertise in this example, or subject matter expertise and “artistic” Exploratory Testing in your example) is indispensable as well.

You can’t truly excel at either 20 Questions or software testing unless you have a good mix of “science” (governed by mathematical principles, proven methods of DoE, etc.) and “art” (governed by experience, instincts, and subject matter expertise).
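
As an aside for readers who haven't seen the arithmetic behind that 50/50 rule: each yes/no question that splits the remaining possibilities in half doubles the number of items you can distinguish, so 20 well-chosen questions can separate over a million possibilities. A quick back-of-the-envelope sketch:

```python
import math

# Each yes/no question that splits the remaining possibilities 50/50
# eliminates half of them, so n such questions can distinguish 2**n items.
questions = 20
print(f"{questions} well-chosen questions distinguish {2**questions:,} possibilities")

# Conversely, pinning down one item out of N takes about log2(N) good questions.
candidates = 1_000_000
print(f"Identifying 1 of {candidates:,} items takes ~{math.ceil(math.log2(candidates))} questions")
```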

By: Justin Hunter on Nov 24, 2009

Categories: Combinatorial Testing, Efficiency, Pairwise Testing, Software Testing, Uncategorized

[Slide 46 of 95 from Cem Kaner's "The Value of Checklists" presentation]

 

I highly recommend this presentation by Cem Kaner (available here as a PDF download of the slides). It is provocative, funny, and insightful. In it, Cem Kaner makes a strong case for using checklists (and mercilessly derides many aspects of using completely scripted tests). Cem Kaner, as I suspect most people reading this already know, is one of the leading lights of software testing education. He is a professor of computer science at Florida Institute of Technology and has contributed enormously to software testing education by writing Testing Computer Software, "the best selling software testing book of all time," founding the Center for Software Testing Education & Research, and making an excellent free course on [Black Box Software Testing](http://www.testingeducation.org/BBST/) available online.

Here are a couple of my favorite slides from the presentation.

 

[Slide 82 of 95 from Cem Kaner's "The Value of Checklists" presentation]

 

[Slide 83 of 95 from Cem Kaner's "The Value of Checklists" presentation]

 

My own belief is that the presentation is very good and makes the points it wants to make quite well. If I have a minor quibble with it, it is that in doing such a good job of laying out the case for checklists and against scripted testing, the presentation - almost by definition/design - does not go into as much detail as I would personally like about a topic that I think is extremely important and not written about enough: namely, how practitioners can blend the advantages of scripted tests (which can deliver some of the huge efficiency benefits of combinatorial testing methods, for example) with checklist-based Exploratory Testing (which has the advantages pointed out so well in the presentation). A "both / and" option is not only possible; it is desirable.


Credit for bringing this presentation to my attention goes to Michael Bolton ([the testing expert](http://www.developsense.com/blog.html) - of course, not the singer, as the famous "Office Space" scene reminds us), who posted a link to it. Thanks again, Michael. Your enthusiastic recommendation to pick up boxed sets of the BBC show Connections was also excellent; the presenter of Connections is like a slightly tipsy genius with ADHD who possesses an incredible grasp of history, an encyclopedic knowledge of quirky scientific developments, and a gift for storytelling. I like how your mind works.

By: Justin Hunter on Nov 4, 2009

Categories: Scripted Software Testing, Software Testing Efficiency, Software Testing Presentations, Testing Case Studies, Testing Checklists, Testing Strategies

On October 6th, I informally launched testing.stackexchange.com as "the stackoverflow.com for Software Testing" without much hoopla. So far, less than a month later, with no advertising other than word of mouth, the initial results are very promising. We've had approximately:

  • 70 new users join as members and contributors
  • 50 software testing questions
  • 160 answers to those questions
  • 2,200 views of the questions and answers

The most important development is not reflected in the numbers above. More important, by far, than the number of participants who have joined is the quality of the people who are contributing. Members of the forum include some prominent experts: Jason Huggins (creator of Selenium and cofounder of Sauce Labs), Alan Page and Bj Rollison (of "How We Test Software at Microsoft" fame), Michael Bolton (the testing expert, not the singer), Fred Beringer, Elisabeth Hendrickson, Joe Strazzere, Adam Goucher, Simon Morley, Rob Lambert, Scott Sehlhorst, and others. Given the high quality of the people the site has attracted, the quality of the answers delivered has been quite high. Perhaps the quality is also above average because people answering know that their answers will be analyzed by thoughtful testers and voted up (or down) based on how good they are. In short, testers are asking good questions and getting them answered, which is why I created the site in the first place. I'm cautiously optimistic about the future.

Members so far include:

 

[Screenshot: members of testing.stackexchange.com]

 

The most viewed questions so far include:

 

[Screenshot: the most viewed questions on testing.stackexchange.com]

 

The most recent questions being asked and answered are:

 

[Screenshot: recent questions on testing.stackexchange.com]

 

I'd like to extend special thanks to Alan Page (who likes the idea so much that he has volunteered to join me as a co-manager/moderator of the site), to Shmuel Gershon, Jason from NC, and Joe Strazzere for being particularly active, and to Alan Page, Corey Goldberg, Shmuel Gershon, and Konstantin for helping to get the word out about the forum through their blog posts telling the world about testing.stackexchange.com. Without their combined help, we'd be nowhere. With their help and support, we're building a place where software testers can seek and receive high-quality, peer-reviewed answers to their testing questions.

Please help us succeed by spreading the word, asking a few questions, answering a few, and voting on the best answers.

Thanks everyone!

By: Justin Hunter on Nov 2, 2009

Categories: Interesting People , Software Testing

There are some phrases in English that, as often as not, come off sounding obligatory and/or insincere. The phrase "I'm honored..." comes to mind (particularly if someone is accepting an award in front of a room full of people).

Be that as it may, I genuinely felt honored last night and again today by a couple of comments James Bach made about me, including these:

 

[Screenshot: James Bach's tweets about Hexawise, Oct 23, 2009]

 

Here's the quick background: (1) James knows much more about software testing than I do, and I respect his views a lot. (2) He has a reputation for not suffering fools gladly and for pretty bluntly telling people he doesn't respect them if he doesn't respect the content of their views. (3) In addition to his extremely broad expertise on "testing in general," James, like Michael Bolton, knows a lot about pairwise and combinatorial testing methods and how to use them. (4) I firmly (and passionately) believe that pairwise and combinatorial testing methods are (a) dramatically under-appreciated, and (b) dramatically under-utilized. (5) James has published a very good and well-reasoned article about some of the limitations of pairwise testing methods that I wanted to talk to him about. (6) I co-wrote an article that IEEE Computer recently published about combinatorial testing that I wanted to discuss with him. (7) James and I have been at the STP Conference in Boston over the past few days. (8) I reached out to him and asked to meet at the conference to talk about pairwise and combinatorial testing methods and to share my findings that, in the dozens of projects I've been involved with that have compared testers' efficiency and effectiveness, I've routinely seen defects found per tester hour more than double. (9) I was interested in getting his insights into where these methods are most applicable, where they are least applicable, and what his experiences have been in teaching combinatorial testing methods to students.

In short, frankly, my goals in meeting with him were to: (a) meet someone new, interesting, and knowledgeable, learn as much as I could from his experiences, his impressive critical thinking, and his questioning nature, and (b) avoid tripping up with sloppy reasoning (when unapologetically expressing the reasons I feel combinatorial testing methods are dramatically under-appreciated by the software testing community) in front of someone who (i) can smell BS a mile away, and (ii) doesn't suffer fools gladly.

I learned a lot, heard some fantastic war stories, and heard excellent counter-examples that disproved a couple of the generalizations I was making (but didn't dampen my unshaken assertion that combinatorial testing methods are wildly under-utilized by the software testing community). I thoroughly enjoyed the experience. Moving forward, as a result of our meeting, I will go through an exercise that will make me more effective: carefully thinking through and enumerating all of the assumptions behind statements like "I've measured the effectiveness of testers dozens of times - trying to control external variables as much as reasonably possible - and I'm consistently seeing more than twice as many defects per tester hour when testers adopt pairwise/combinatorial testing methods."

His compliment last night was private, so I won't share it, but it ranks among my all-time favorite compliments I've ever received. I'm honored. Thanks, James.

By: Justin Hunter on Oct 23, 2009

Categories: Combinatorial Testing, Design of Experiments, Efficiency, Interesting People , Pairwise Testing, Software Testing, Software Testing Efficiency, Testing Case Studies, Uncategorized

I have just created the first video overview of the Hexawise test case generator. Please take a look and let me know your thoughts (either with an email or a comment below).

 

[Video: Introduction to Hexawise Pairwise Testing Tool / Combinatorial Testing Tool]

 

I'll refine and hopefully improve it over time, but I wanted to share it at this point. I'd welcome feedback. Is the pace of the video too slow? Does it have too much detail about pairwise coverage? Does the fact that I've got a dull, nasal, Midwestern monotone mean I should have someone with a more animated and melodious "voice made for radio" do the voice-over?

Thanks in advance for your feedback!

By: Justin Hunter on Oct 20, 2009

Categories: Uncategorized

[Photo: Matthew Heusser]

 

Matthew Heusser, an accomplished tester, frequent blogger, insightful contributor to the Context-Driven Testing mailing list, and a testing expert whose opinion I respect a lot, has just published a very thought-provoking blog post that highlights an important issue surrounding "PowerPointy" consultants in the testing industry who have relatively weak real-world testing chops. It's called "[The Fishing Maturity Model](http://blogs.stpcollaborative.com/matt/2009/10/08/the-fishing-maturity-model/)."

Matthew argues that testers are well-advised to be skeptical of self-described testing experts who claim to "have the answer" - particularly when such "experts" haven't actually rolled their sleeves up and done software testing themselves. In reading his article, I found it quite thought-provoking, particularly because it hit close to home: while I'm by no means a testing expert in the broader sense of the term, I do consider myself to know enough about combinatorial test design strategies applicable to software testing to be able to help most testing teams become demonstrably more efficient and effective... and yet, my actual hands-on testing experiences are admittedly quite limited. If I'm not one of the guys he's (justifiably) skewering with his funny and well-reasoned post (and he assures me I'm not; see below), a tester could certainly be forgiven for mistaking me for one based on my past experiences.

Matthew's Five Levels of the Fishing Maturity Model (based, not so loosely of course, on the Testing Maturity Model, not to mention CMM and CMMi)...

 

The five levels of the fishing maturity model:

1 – Ad-hoc. Fishing is an improvised process.

2 – Planned. The location and timing of our ships is planned. With a knowledge of how we did for the past two weeks, knowing we will go to the same places, we can predict our shrimp intake.

3 – Managed. If we can take the shrimp fishing process and create standard processes – how fast to drive the boat, and how deep to let out the nets, how quickly, etc., we can improve our estimates over time, more importantly.

4 – Measured. We track our results over time – to know exactly how many pounds of shrimp are delivered at what time with what processes.

5 – Optimizing. At level 5, we experiment with different techniques, to see what gathers more shrimp and what does not. This leads us to continual improvement.

 

Sounds good, right? Why, with a little work, this would make a decent 1-hour conference presentation. We could write a little book, create a certification, start running conferences …

 

And the rub...

The problem: I’ve never fished with nets in my entire life. In fact, the last time I fished with a pole, I was ten years old at Webelo’s camp.

I posted the following response, based on my personal experiences. Words in [brackets] are Matthew's responses to me.

Matthew,

Excellent post, as usual. [I'm glad you like it. Thank you.]

You raise very good points. Testers (and other IT executives) should be leery of snake oil salesmen and use their judgment about “experts” who lack practical hands-on experience. While I completely agree with this point, I offer up my own experiences as a “counter-example” to the problem you pointed out here.

3-4 years ago, while I was working at a management consulting and IT company (with a personal background as an entrepreneur, lawyer, and management consultant – and not in software testing), I began to recommend to any software testers who would listen that they start using a different approach to how they designed their test cases. Specifically, I was recommending that testers begin using applied statistics-based methods* designed to maximize coverage in a small number of tests rather than continuing to manually select test cases and rely on SMEs to identify combinations of values that should be tested together. You could say I was recommending that they adopt what I consider to be (in many contexts) a “more mature” test design process.

The reaction I got from many teams was, as you say, “this whole thing smells fishy to me” (or some more polite version of the rebuttal “Why in the world should I, with my years of experience in software testing, listen to you – a non-software tester?”). Here’s the thing: when teams did use the applied statistics-based testing methods I recommended, they consistently saw large reductions in how long it took them to identify and document tests (often 30-40%), and they often saw huge leaps in productivity (e.g., often finding more than twice as many defects per tester hour). In each proof of concept pilot, we measured these carefully by having two separate teams – one using “business as usual” methods, the other using pairwise or orthogonal array-based test design strategies – test the same application. Those dramatic results led to my decision to create [Hexawise](http://www.hexawise.com/users/new), a software test design tool. [Point Taken ...]

My closing thoughts related to your post boil down to:

  1. I agree with your comment – “There are a lot of bogus ideas in software development.”

  2. I agree that testers shouldn’t accept fancy PowerPointed ideas like “this new, improved method/model/tool will solve all your problems.”

  3. I agree that testers should be especially skeptical when the person presenting those PowerPointed slides hasn’t rolled up their sleeves for years as a software testing practitioner.

Even so…

  1. Some consultants who lack software testing experience actually are capable of making valuable recommendations to software testers about how they can improve their efficiency and effectiveness. It would be a mistake to write them off as charlatans because of their lack of software testing experience. [I agree with the sentiment that sometimes, people out of the field can provide insight. I even hinted at that with the comment that at least, Forrest should listen, then use his discernment on what to use. I'm not entirely ready to, as the expression goes, throw the baby out with the bathwater.]

  2. Following the “bogus ideas” link above takes readers to your quote that: “When someone tells you that your organization has to do something ‘to stay competitive,’ but he or she can’t provide any direct link to sales, revenue, reduced expenses, or some other kind of money, be leery.” I enthusiastically agree. In the software testing community, in my view, we do not focus enough on gathering real data** about which approaches work (or -ideally- in what contexts they work). A more data-driven management approach would help everyone understand what methods and approaches deliver real, tangible benefits in a wide variety of contexts vs. those methods and approaches that look good on paper but fall short in real-world implementations. [Hey man, you can back up your statements with evidence, and you're not afraid to roll up your sleeves and enter an argument. I may not always agree with you, but you're exactly the kind of person I want to surround myself with, to keep each other sharp. Thank you for the thoughtful and well reasoned comment.]

-Justin

 

Company – http://www.hexawise.com
Blog – http://hexawise.wordpress.com
Forum – http://testing.stackexchange.com

 

*I use the term “applied statistics-based testing” to incorporate pairwise, orthogonal array-based, and more comprehensive combinatorial test design methods such as n-wise testing (which can capture, for example, all possible valid combinations of values for every set of six parameters).

**Here is an article I co-wrote which provides some solid data that applied statistics-based testing methods can more than double the number of defects found per tester hour (and simultaneously result in finding more defects) as compared to testing that relies on "business as usual" methods during the test case identification phase.
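
For readers who have never seen what pairwise selection looks like in practice, here is a rough, self-contained sketch of the idea using a naive greedy algorithm. It is not Hexawise's algorithm, and the parameters and values are hypothetical; it simply illustrates how a small fraction of the full Cartesian product can still cover every pair of values.

```python
from itertools import combinations, product

# A rough, self-contained greedy sketch of pairwise (2-wise) test selection.
# This is NOT Hexawise's algorithm; the parameters and values are hypothetical.
parameters = {
    "browser":  ["IE", "Firefox", "Chrome", "Safari"],
    "os":       ["Windows", "OS X", "Linux"],
    "language": ["English", "French", "Spanish"],
    "account":  ["new", "existing"],
}

names = list(parameters)
all_tests = [dict(zip(names, values)) for values in product(*parameters.values())]

def pairs_in(test):
    """The set of (parameter, value) pairs that a single test case covers."""
    return {frozenset({(a, test[a]), (b, test[b])}) for a, b in combinations(names, 2)}

uncovered = set().union(*(pairs_in(t) for t in all_tests))
selected = []
while uncovered:
    # Greedily pick the candidate test that covers the most still-uncovered pairs.
    best = max(all_tests, key=lambda t: len(pairs_in(t) & uncovered))
    selected.append(best)
    uncovered -= pairs_in(best)

print(f"Exhaustive test cases: {len(all_tests)}")   # 4 x 3 x 3 x 2 = 72
print(f"Pairwise test cases:   {len(selected)}")    # a small fraction (roughly 12-16)
```

Real pairwise and n-wise tools use far smarter algorithms and handle constraints between values (invalid pairs), but even this naive pass shows the order-of-magnitude reduction in test count while guaranteeing that every pair of values appears together in at least one test.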

By: Justin Hunter on Oct 12, 2009

Categories: CMM, Combinatorial Testing, Pairwise Testing, Software Testing, Software Testing Efficiency, Testing Maturity Model

Today I've released a beta version of testing.stackexchange.com, which is a "stackoverflow.com for software testers." I would appreciate your help in contributing content and/or getting the word out. Stackoverflow has become an extraordinarily useful forum for software developers to ask difficult, practical questions and get quick, actionable, peer-reviewed responses from software developers around the globe. While there are some software testing questions on stackoverflow itself, the questions are mostly software developer-centric. There's no reason why we can't create a very similar forum geared primarily towards the software testing community. So who's with me? Please show your support by posting a question, sharing an answer, or voting on existing answers at [testing.stackexchange.com](http://testing.stackexchange.com/).

If you share my belief in the significant potential benefit to the software testing community that would result from a mature, well-trafficked site with a rich collection of peer-reviewed questions on software testing, and you would be interested in helping out beyond posting periodic questions and/or answers to the site, please post a reply here or contact me through LinkedIn. I'd love to brainstorm ideas and work with like-minded people to get this forum created for the software testing community. As of now, the odds are against testing.stackexchange.com growing to obtain the critical mass it needs (particularly since I'm busy day-to-day building my software testing tool company); a small number of active collaborators would improve the odds dramatically.

I first found out about stackoverflow.com through my brother's blog here.

 

Joel Spolsky's video is fantastic. He set out to crack the code on:

  • How can you get a useful exchange of information between experts that results in very good questions and answers being actively shared by participants?

  • How can the community encourage visitors to the site to actively participate and share their expertise?

  • How can the site generate a critical mass and utilize Google to drive traffic to the site to make it self-sustaining?

  • How can users (who might not otherwise be able to tell which are the best answers from among multiple answers) tell which answers are in fact the best?

 

In my view, he has succeeded on all of the above counts, which is truly impressive. We're using the identical strategies (and Spolsky's technology) at testing.stackexchange.com. The way Spolsky lays out his vision is impressive. He logically progresses through a graveyard of multiple Q & A sites that have devolved into largely useless forums where inane questions are asked and dubious answers are shared. He then shares how he and his collaborators adjusted the model for Stackoverflow to maximize the value to participants. Their self-described strategy amounts to taking the best ideas they could from multiple different sites and putting them together in stackoverflow (and "using Google as our landing page" as a way to build traffic).

Thank you in advance for helping to get the word out.

 

By: Justin Hunter on Oct 6, 2009

Categories: Software Testing