This interview with Mike Bland is part of our series of “Testing Smarter with…” interviews. Our goal with these interviews is to highlight insights and experiences as told by many of the software testing field’s leading thinkers.

Mike Bland aims to produce a culture of transparency, autonomy, and collaboration, in which “Instigators” are inspired and encouraged to make creative use of existing systems to drive improvement throughout an organization. The ultimate goal of such efforts is to make the right thing the easy thing. He's followed this path since 2005, when he helped drive adoption of automated testing throughout Google as part of the Testing Grouplet, the Test Mercenaries, and the Fixit Grouplet.


Mike Bland

This post includes highlights from our full interview with Mike Bland. The full interview is long and packed with great thoughts.

Personal Background

Hexawise: If you could write a letter and send it back in time to yourself when you were first getting into software testing, what advice would you include in it?

Mike: When I first started practicing automated testing and had a lot of success with it, I couldn’t understand why people on my team wouldn’t adopt it despite its “obvious” benefits. One of the biggest things that experience, reading, and reflection have afforded me is the perspective to realize now that different people adopt change differently, at different rates and for different reasons, and that you’ve got to create the space for everyone to adapt accordingly.

As I say in my most recent presentation, “The Rainbow of Death”, metrics and arguments are far from sufficient to inspire action in either the skeptical or the powerless, and the greater challenge is to create the cultural space necessary for lasting change.

Oh, and you’ve got to repeat yourself and say the same thing different ways multiple times—a lot.

Hexawise: What change management lessons did you learn while driving adoption of test automation methods at Google between 2005 and 2010? Which of those lessons were applicable when you were involved in the recent U.S. federal government effort to bring talented tech people into government to introduce new ways of working with technology? Which of those lessons were not?

Mike: The top objective is to make the right thing the easy thing. Once people have the knowledge and power to do the right thing the right way, they won’t require regulation, manipulation, or coercion—doing things any other way will cease to make any sense.

Most of what I learned in terms of specific approaches to supplying the necessary knowledge and power has come from trying different things and seeing what sticks—and I’m still working to make sense of why certain things stuck, years after the fact. The most important insight, as mentioned earlier, is that different people adopt change differently, for different reasons, and as a result of different stimuli. Geoffrey A. Moore’s Crossing the Chasm was the biggest eye-opener in this regard. Then, years later, when I saw fellow ex-Googler Albert Wong present his “Framework for Helping” to describe his first experience in the U.S. Digital Service, I instantly saw it snapping in place across the chasm, describing how the Innovators and Early Adopters from Moore’s model—who I like to call “Instigators”—need to fulfill an array of functions in order to connect with and empower the Early Majority on the other side of the chasm. Of course, through the filter of my own twisted sense of humor, I thought “Rainbow of Death” might make the model stick in people’s brains a little better.

So these models helped provide context for why the specific things the Testing Grouplet did worked; and how, despite the fact that there were many scattered, parallel efforts underway, they ultimately served to reinforce one another, rather than creating confusion and chaos. Of course, Google’s open communication channels and the Testing Grouplet’s shared vision—which emerged two years into our five-year run—helped keep everything aligned. The point being, don’t wait for the clear vision and perfect plan up-front—start doing things and pay attention to what’s working, and why, and develop your plans as you go. That’s just good Agile practice, isn’t it?

And did I mention that you have to reiterate things you’ve already said in different language over and over—like, a lot?

Every lesson applied, in that the real lessons were about human nature, not technology. Google disabused me of the notion that one metric, one tool, or one method of persuasion would suffice to change an entire population's behavior. In other words, there's no silver bullet.

See the full interview for useful details.

The top objective is to make the right thing the easy thing. Once people have the knowledge and power to do the right thing the right way, they won’t require regulation, manipulation, or coercion—doing things any other way will cease to make any sense.

Hexawise: Describe a testing experience you are especially proud of. What discovery did you make while testing and how did you share this information so improvements could be made to the software?

Mike: Probably like many folks, I remember my first time the most vividly. Immediately on the heels of a death march—when my team barely got a steaming pile of other people’s code to meet a critical spec by a harsh deadline and very nearly would’ve killed one another were it not for Strongbad’s Emails to keep us one hair’s breadth away from going completely insane—we got some time and freedom to try to make the program faster.

I’d gotten the idea that we needed to rewrite a particular subsystem to take advantage of data we weren’t even using, and at about the same time, I happened to read an issue of the C/C++ Users Journal that had an article on using CppUnitLite, I believe. Unit testing sounded like a neat idea, so I practiced it at the same time I started rewriting this subsystem from scratch.

In the end, my new subsystem was rock-solid and improved performance by a factor of 18. When a couple of bugs came up, I diagnosed and fixed them very, very quickly, when the norm was on the order of weeks or months. It totally transformed our relationship with our client.

See the full interview for additional useful details.

Hexawise: In watching your videos and reading your content online, your ideas resonate with those of W. Edwards Deming, Russell Ackoff and Peter Senge from management, culture change and systems thinking perspectives. Who are your greatest influences in this area?

Mike: I’m a little ashamed to admit I haven’t read any of their stuff, or at least not much. Certainly what little I’ve gleaned of Deming resonates with my experience. I’ve begun reading Senge’s The Fifth Discipline, and while the introduction resonated very clearly, I’ve not yet read further. Ackoff is a new name for me (and thanks for the tip!). That said, it is gratifying when I do read an established author and find that, yep, more learned minds than mine have clearly articulated widely-accepted concepts that I’ve only figured out due to trial, error, and intuition.

In fact, one of the things I’m trying to do moving forward is to go back through the literature and connect it to the experiences I’ve had—not just for my own validation, but to reassure my audience and clients that the things I’ve done and the things I recommend aren’t all crazy talk. I’ve got Geoffrey A. Moore’s Crossing the Chasm model combined with Albert Wong’s model to form the Rainbow of Death, which comprises the core of my narrative now; and I’ve also recently added a very high-level view of Kurt Lewin’s theory of social change, which someone only recently suggested to me.

Views on Software Testing

Hexawise: In your online presentation, Making the Right Thing the Easy Thing, you note: [Use] "amplifying feedback loops to make sure knowledge is shared where needed as quickly and clearly as possible." How do you suggest this idea be applied by those involved with software testing?

Mike: Heh, that’s a paraphrase of the Second Way of DevOps (out of Three), originally articulated by Gene Kim. Clearly, the more you can automate testable cases, making them easy and fast to run, the better. People need to know that there are different kinds of automated tests for different levels of the software—they need to learn how to do the right thing the right way. Once developers in particular have gotten some traction with writing automated tests, then folks performing manual or system-level automated testing won’t waste their time catching (and re-catching!) bugs the developers could’ve easily caught, and can focus on truly pushing the limits of the software—reporting not just on whether it meets functional and nonfunctional requirements, but on the overall quality of the product.

In other words, a healthy balance of automated testing and manual testing plays to the strengths of all the humans and machines involved. When you’ve got optimal resource utilization happening, you eliminate a lot of both physical (in terms of slowness) and human friction, and a feeling of true partnership can take hold. Testers aren’t just the people reminding you that your code isn’t perfect—you’ve already reminded yourself of that through your own automated tests!—they’re the ones helping you make it even better.

It’s not about defects; it’s about feedback and collaboration. If you arrange incentives to produce an adversarial relationship between team members, e.g. if developers are incentivized to minimize defects and testers are incentivized to report defects, then that’s a house divided against itself.

Hexawise: What do you wish more developers, business analysts, and project managers understood about software testing?

Mike: Oh my. For one, it’s not about defects; it’s about feedback and collaboration. If you arrange incentives to produce an adversarial relationship between team members, e.g. if developers are incentivized to minimize defects and testers are incentivized to report defects, then that’s a house divided against itself. Some people think a degree of competition and/or adversarialism is a good thing, but when it comes to producing a product as a team—i.e. achieving a mission—you should keep it to a minimum in favor of fostering a spirit of collaboration.

Collaboration doesn’t mean blind consensus; it means communicating honestly in an environment in which we feel safe to do so, in which we share criticism in a spirit of mutual self-interest, not cutthroat competition.

One test type does not fit all. First, in terms of automated tests, unit testing can find a truly large number of errors, very quickly and cheaply, and tends to encourage better code quality (i.e. readability, maintainability, extensibility) overall. Integration tests can shake out errors and ambiguities between component contracts. High-level, developer-written system tests (as opposed to more extensive system tests developed by a dedicated tester) can quickly affirm that the entire product is in a buildable, runnable state. All of this “white box” testing by the developers is essential to giving the testers as high-quality a product as possible, so they can apply their “black box” techniques to push the product to its limits, rather than waste time alerting developers to defects they could’ve much more quickly, easily, and cheaply discovered themselves.

To this last point, I like to point to the examples of goto fail and Heartbleed. So many Internet “experts” threw up their hands and claimed that bugs like these were “too hard to test”. In both cases, after 2.5 years out of the industry (another story), I spent an evening diving into code I’d never seen before and wrote a test to reproduce each bug and validate its fix. After that, some liked to say, “Oh well, lots of other tools and techniques could’ve found these bugs.”

My claim isn’t that automated testing would’ve been the only way; my claim is that the discipline of automated testing likely would’ve prevented these bugs from ever existing even before writing a single test. With goto fail, the offending block of code was copied and pasted throughout the file six times! It was just that one of the six contained the errant “goto fail” line. But as I demonstrated with my version of the “fix”, extracting a common function and testing that six ways from Sunday likely would’ve avoided the problem entirely. In the case of Heartbleed, it was a failure to validate that an input buffer was actually as long as the user-supplied length indicated. Testing that kind of corner condition is unit testing 101, and the kind of thing you become more sensitive to every time you write a line of code once you’re in the habit of testing.

Hence, as difficult as it would’ve been for manual testing to discover these errors, and as long as it took for them to get shaken out months or years after their widespread deployment—Heartbleed via third-party fuzz testing, goto fail who knows how—both very, very likely could’ve been stopped dead in their tracks (or never would’ve existed!) if the developers were in the everyday habit of unit testing their code.
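
To make that corner condition concrete, here is a minimal, hypothetical Python sketch (not the actual OpenSSL C code) of the kind of "unit testing 101" check Mike describes: a toy heartbeat parser whose record format, function name, and test cases are invented for illustration, with one test covering the case where a record's claimed length exceeds the bytes actually supplied.

```python
# A toy heartbeat record: 2-byte big-endian length field followed by a payload.
# Invented for illustration; not the real TLS heartbeat format or OpenSSL code.
import struct
import unittest


def parse_heartbeat(record: bytes) -> bytes:
    """Return the payload of a record, refusing overlong length claims."""
    if len(record) < 2:
        raise ValueError("record too short to contain a length field")
    (claimed_length,) = struct.unpack(">H", record[:2])
    payload = record[2:]
    if claimed_length > len(payload):
        # The Heartbleed-class failure is trusting claimed_length here and
        # reading past the end of the buffer; reject the record instead.
        raise ValueError("claimed length exceeds supplied payload")
    return payload[:claimed_length]


class HeartbeatBoundsTest(unittest.TestCase):
    def test_honest_length_returns_payload(self):
        self.assertEqual(parse_heartbeat(b"\x00\x03abc"), b"abc")

    def test_overlong_claim_is_rejected(self):
        # The corner condition: the stated length is larger than the data
        # actually provided.
        with self.assertRaises(ValueError):
            parse_heartbeat(b"\x00\x40abc")


if __name__ == "__main__":
    unittest.main()
```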

Hexawise: Our CTO, Sean Johnson, shared your memorably-named "Rainbow of Death" presentation with our management team. We absolutely loved it. In your presentation, you describe a series of concrete, practical steps you and your colleagues at Google took over the course of 5+ years to overcome resistance to change, educate teams, and successfully achieve broad adoption of automated testing efforts at Google across many teams, including lots of teams that were initially very change resistant. Can you please describe for our readers 2 or 3 noteworthy aspects of that change management journey?

Mike: What I hope the Rainbow of Death model, in combination with Geoffrey A. Moore’s Crossing the Chasm model, makes apparent is that different people adopt change differently. There are many needs to be met by and for many different people, and the chances of figuring out the perfect plan to execute before taking any action are practically zero. After all, don’t the Agile and DevOps models that are all the rage comprise tools and practices for adapting to change, for performing experiments and adjusting course based on feedback? Organizational change is no different, yet many people remain conditioned to expect waterfall-like solutions to their social problems.

image showing items supporting change in the organization


Also, I mention in the talk that “The problem you want to solve may not be the problem you have to solve first.” In our case, we wanted to solve the problem of developers not writing enough automated tests. But first, we had to solve two other problems. The first was that people back then had very little exposure to or experience with automated testing, leading to the “My code is too hard to test” excuse, because they had no idea how to test it, or how to write testable code to begin with.

The second problem was that the tools at the time couldn’t keep up with the growth of the company, its products, and its code base. It was growing ever more painful to write any code to begin with, yet delivery pressure was intense and Imposter Syndrome was rampant—on top of the fear of admitting your code might contain flaws, how could you make any time to learn how to write automated tests to begin with? Hence the “I don’t have time to test” excuse.

So we couldn’t just say “Testing is good! Yay testing! Please write moar testz!”

I think this mix of perspective, empathy, creativity, collaboration, tenacity, and patience is crucial to changing not only tech organizations, but society at large. I hope to put this notion, and the Rainbow of Death model in particular, to the test continuously throughout the remainder of my career.

My advice to both developers and testers is to identify the priorities, the social structures and dynamics at play in the organization. How can you work with these structures and dynamics instead of against them—or do you need to create a culture of open communication and collaboration in parallel with (or even before) communicating the testing message?

Industry Observations / Industry Trends

Hexawise: Large companies often discount the importance of software testing. What advice do you have for software testers to help their organizations understand the importance of expecting more from the software testing efforts in the organization?

Mike: Sadly, there’s no one message that works for every company, every culture, everywhere. It’s up to the Instigators in each environment to take the timeless principles I believe are essential—that testing is about feedback and collaboration, that different types of tests all catch different and important bugs, that developers and testers have different and mutually-reinforcing roles to play—and find the right cultural hooks to hang those messages on. In the case of Google, it took the Testing Grouplet five years to figure this out and implement it successfully, and it took an array of parallel efforts across multiple groups to saturate the culture with the message, not just one magical tool or technique or team to bind them all.

So my advice to both developers and testers is to identify the priorities, the social structures and dynamics at play in the organization. How can you work with these structures and dynamics instead of against them—or do you need to create a culture of open communication and collaboration in parallel with (or even before) communicating the testing message? This is the punchline of my Rainbow of Death presentation: The problem you want to solve may not be the problem you have to solve first, and the Standard Narrative from which all the problems emerged will not produce any solutions—though it may provide the keys necessary to unlock effective solutions.

At Google, it was the Testing Grouplet’s Test Certified program and all the other education and tooling efforts supporting it that provided the right hook—after two years of experimentation and reflection! But don’t focus on what Test Certified comprised: focus on why that approach worked for us, and see if that reflection inspires an approach that will work for your company.

In addition to that, it probably wouldn’t hurt to remind anyone who’ll listen of goto fail and Heartbleed, and how basic unit testing practices and the coding habits they encourage could’ve prevented these potentially catastrophic defects from even being written in the first place.

Hexawise: The story of your team's journey is fantastic. We highly recommend it to IT organizations embarking on any large improvement effort. Thank you for sharing it. We've recently started using elements of your approach to help our clients successfully adopt test optimization approaches at scale in their organizations.

Mike: That’s incredibly gratifying to hear! Please keep me in the loop of how well it’s working for you and your clients. Just as the impact of the Testing Grouplet’s efforts was far greater than the sum of any individual part, and as my Rainbow of Death presentation benefited enormously from the input of trusted fellow Instigators to help illustrate it, I’m sure there are many more insights and improvements waiting to emerge from the model once more people have applied it further and wider than I ever could on my own!

Hexawise: Do you believe the DevOps movement is resulting in better software testing within organizations? Do you see any other trends that software testers could leverage to promote improved application of software testing practices?

See the full interview for Mike’s response.

Staying Current / Learning

Hexawise: What software testing-related books would you recommend should be on a tester’s bookshelf?

Mike: Sadly, I’m not current enough to make any solid recommendations. In my career, I’ve moved more into the culture change space than being a 100% in-the-trenches practitioner. That said, I certainly spent years of quality time with my old “library” of programming and algorithms books, and was fortunate to be part of a culture that itself generated a broad swath of automated testing knowledge. Though I don’t keep up with the details of the latest developments, that time spent internalizing the core principles has served me very well throughout my career.

That said, I’m sure that great books exist, and people dedicated to the craft would do themselves a great service by discovering them and spending years plumbing their depths, rather than trying to read every book on the subject forevermore. That’s the model that worked for me, at least; but perhaps a more voracious reading regimen suits you better. Everybody’s different.

See the full interview for more questions and answers.

Profile

Mike aims to produce a culture of transparency, autonomy, and collaboration, in which “Instigators” are inspired and encouraged to make creative use of existing systems to drive improvement throughout an organization. The ultimate goal of such efforts is to make the right thing the easy thing. He's followed this path since 2005, when he helped drive adoption of automated testing throughout Google as part of the Testing Grouplet, the Test Mercenaries, and the Fixit Grouplet. He was instrumental in the execution of Test Certified and Testing on the Toilet, and the four company-wide Fixits he organized led to the development and rollout of the Test Automation Platform. His account of Google’s automated testing adoption also appears as a case study in The DevOps Handbook by Gene Kim et al.

He also served as a member of the Websearch Infrastructure team, which practiced DevOps before he was aware it had a name. Frequently working in concert with other indexing infrastructure teams, he also worked closely with Release Engineers and Site Reliability Engineers to package, release, deploy, and monitor multiple indexing services.

Most recently he served as Practice Director at 18F, a technology team within the U.S. General Services Administration, where he personally launched and drove several initiatives to increase 18F’s capability as a learning organization, including the Pages platform, the Guides series, and the Handbook.


This post includes highlights from our full interview with Mike Bland. The full interview is long and packed with great thoughts.

By: John Hunter on Mar 27, 2017

Categories: Interview, Software Testing, Testing Smarter with..., Testing Strategies

This interview with Alan Page is part of our series of “Testing Smarter with…” interviews. Our goal with these interviews is to highlight insights and experiences as told by many of the software testing field’s leading thinkers.

Alan Page has been a software tester (among other roles) since 1993 and is currently the Director of Quality for Services at Unity. Alan spent over twenty years at Microsoft working on a variety of operating systems and applications in nearly every Microsoft division.


Alan Page

Personal Background

Hexawise: Looking back on over 20 years in software testing at Microsoft, can you describe a testing experience you are especially proud of?

Alan: There are a lot of great experiences to choose from - but I'm probably most proud of my experiences on the Xbox One team. My role was as much about building a community of testers across the Xbox ecosystem (console, services, game development) as it was about testing and test strategy for the console. We had a great team of testers, including a handful of us with a lot of testing experience under our belts. We worked really well together, leveraged each other's strengths, and delivered a product that has had nearly zero quality issues since the day of release.

Hexawise: What did you enjoy about working in such a large software company, Microsoft, for so long?

Alan: The only way I survived at Microsoft so long was that I could change jobs within the company and get new experiences and challenges whenever I felt I needed them. I love to take on new challenges and find things that are hard to do - and I always had that opportunity.

The biggest thing I look for in testers is a passion and ability to learn. I've interviewed hundreds of testers... The testers who really impress me are those who love to learn - not just about testing, but about many different things. Critical thinking and problem solving are also quite important.

Hexawise: What new challenges are you looking forward to tackling in your new role at Unity?

Alan: I'm looking forward to any and all challenges I can find. Specifically, I want to build a services testing community at Unity. Building community is something I feel really strongly about, and from what I've seen so far, I think there are a lot of opportunities for Unity testers to learn a lot from each other while we play our part in the coming growth of Unity services.

Views on Software Testing

Hexawise: What thoughts do you have on involving testers in A/B and multivariate testing? That stretches the bounds of how many people categorize testers, but it could be a good use of the skills and knowledge some testers possess.

Alan: My approach to experimentation (A/B testing) is that there are three roles needed. Someone needs to design the experiment. A product owner / program manager often plays this role and will attempt to figure out the variation (or variations) for the experiment as well as how to measure the business or customer value of both the control (original implementation) and treatment / experiment.

The second role is the implementer - this is typically a developer or designer depending on the nature of the experiment. Implementing an experiment is really no different than implementing any other product functionality.

The final role is the analyst role - someone needs to look at the data from the experiment, as well as related data, and attempt to determine whether the experiment is a success (i.e. the new idea / treatment is an improvement) or not. I've seen a lot of testers be successful in this third role, and statistics, and data science in general, are great skills for testers to learn as they expand their skill set.
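
As a rough illustration of that analyst role, the sketch below runs a plain two-proportion z-test on invented conversion counts for a control and a treatment. The numbers, the function, and the metric are all made up; a real analysis would involve much more than a single significance check.

```python
# Compare conversion rates for a control and a treatment using a standard
# two-proportion z-test. All counts here are invented for illustration.
import math


def two_proportion_z(successes_a, total_a, successes_b, total_b):
    p_a = successes_a / total_a
    p_b = successes_b / total_b
    pooled = (successes_a + successes_b) / (total_a + total_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the normal approximation.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p_a, p_b, z, p_value


control = (480, 10_000)    # conversions, visitors on the original implementation
treatment = (540, 10_000)  # conversions, visitors on the new variation

p_a, p_b, z, p = two_proportion_z(*control, *treatment)
print(f"control {p_a:.1%} vs treatment {p_b:.1%}, z={z:.2f}, p={p:.3f}")
```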

image of book cover for How We Test Software at Microsoft

Hexawise: During your 20 years at Microsoft you have been involved in hiring many software testers. What do you look for when choosing software testers? What suggestions do you have for those looking to advance in their software testing careers?

Alan: The biggest thing I look for in testers is a passion and ability to learn. I've interviewed hundreds of testers, including many who came from top universities with advanced degrees who just weren't excited about learning. For them, maybe learning was a means to an end, but not something they were passionate about.

The testers who really impress me are those who love to learn - not just about testing, but about many different things. Critical thinking and problem solving are also quite important.

As far as suggestions go, keep building your tool box. As long as you're willing to try new things, you'll always be able to find challenging, fun work. As soon as you think you know it all, you will be stuck in your career.

Combinatorial testing is actually pretty useful in game testing. For example, consider a role-playing game with six races, ten character classes, four different factions, plus a choice for gender. That's 480 unique combinations to test! Fortunately, this has been proven to be an area where isolating pairs (or triples) of variations makes testing possible, while still finding critical bugs.

Hexawise: Do you have a favorite example of a combinatorial bug and how it illuminated a challenge with software testing?

Alan: I was only indirectly involved in discovering this one, but the ridiculously complex font-picker dialog from the Office apps was a mess to test - but it was tested extensively (at least according to the person in charge of testing it). A colleague of mine showed them the all-pairs technique, and they used it to massively decrease their test suite - and found a few bugs that had existed for years.

Hexawise: It seems to me that testing games would have significant challenges not found in testing fairly straightforward business applications. Could you share some strategies for coping with those challenges?

Alan: Combinatorial testing is actually pretty useful in game testing. For example, consider a role-playing game with six races, ten character classes, four different factions, plus a choice for gender. That's 480 unique combinations to test! Fortunately, this has been proven to be an area where isolating pairs (or triples) of variations makes testing possible, while still finding critical bugs.
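
For readers curious about the arithmetic, the hypothetical sketch below enumerates the 480 exhaustive combinations Alan mentions (6 x 10 x 4 x 2) and then uses a naive greedy pass (not the algorithm any particular pairwise tool uses) to build a much smaller suite that still covers every pair of parameter values.

```python
# Naive pairwise (all-pairs) reduction for the role-playing game example.
# Parameter values are invented placeholders.
from itertools import combinations, product

parameters = {
    "race": [f"race{i}" for i in range(1, 7)],        # 6 races
    "class": [f"class{i}" for i in range(1, 11)],     # 10 character classes
    "faction": [f"faction{i}" for i in range(1, 5)],  # 4 factions
    "gender": ["female", "male"],                     # 2 choices
}

names = list(parameters)
all_tests = [dict(zip(names, values)) for values in product(*parameters.values())]
print(len(all_tests))  # 6 * 10 * 4 * 2 = 480 exhaustive combinations


def pairs_of(test):
    # Every pair of (parameter, value) assignments exercised by one test.
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}


uncovered = set().union(*(pairs_of(t) for t in all_tests))
suite = []
while uncovered:
    # Greedily take the combination that covers the most still-uncovered pairs.
    best = max(all_tests, key=lambda t: len(pairs_of(t) & uncovered))
    suite.append(best)
    uncovered -= pairs_of(best)

# Far fewer than 480; the floor here is 60, the number of race/class pairs.
print(len(suite))
```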

Beyond that, testing games requires a lot of human eyeballs and critical thinking to ensure gameplay makes sense, objects are in the right places, etc. I've never seen a case where automating gameplay, for example, has been successful. I have, however, seen some really innovative tools written by testers to help make game testing much easier, and much more effective.

Hexawise: That sounds fascinating, could you describe one or more of those tools?

Alan: The games test organization at Microsoft wrote a few pretty remarkable tools. One linked the bug tracking system to a "teleportation" system in the game under test, including a protocol to communicate between a Windows PC and an Xbox console.

This enabled a cool two-way system, where a tester could log a bug directly from the game, and the world coordinates of where the bug occurred were stored automatically in the bug report. Then, during triage or debugging, a developer / designer could click on a link in the bug report, and automatically transport to the exact place on the map where the issue was occurring.

Hexawise: What type of efforts to automate gameplay came close to providing useful feedback? What makes automating gameplay for testing purposes ineffective?

Alan: I would never automate gameplay. There are too many human elements needed for a game to be successful - it has to be fun to play - ideally right at that point between too challenging, and too easy. If the game has a story line, a tester needs to experience that story line, evaluate it, and use it as a reference when evaluating gameplay.

Tools to evaluate CPU load or frame rate during gameplay are usually a much better investment than trying to simulate gameplay via automation.

That said, there are some "what if" scenarios in games that may reveal interesting bugs. For example, simulating a user action (e.g. adding and removing an item) thousands of times may reveal memory leaks in the application. It really comes down to test design and being smart about choosing what makes sense to automate (and what doesn't make sense).
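
As a sketch of that last idea, the hypothetical example below repeats an add/remove action thousands of times and uses Python's tracemalloc module to watch for steadily growing memory. The Inventory class and the one-megabyte threshold are invented stand-ins for whatever component and tolerance a real game team would choose.

```python
# Repeat a single user action many times and check that memory use stays flat.
# Inventory is an invented stand-in for the real component under test.
import tracemalloc


class Inventory:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)

    def remove(self, item):
        self.items.remove(item)


def test_add_remove_does_not_leak(iterations=10_000):
    inventory = Inventory()
    tracemalloc.start()
    baseline, _ = tracemalloc.get_traced_memory()
    for i in range(iterations):
        inventory.add(f"item-{i}")
        inventory.remove(f"item-{i}")
    current, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    # After thousands of add/remove cycles, allocation should stay near the
    # baseline; a steadily climbing figure would point at a leak.
    growth = current - baseline
    assert growth < 1_000_000, f"memory grew by {growth} bytes"


test_add_remove_does_not_leak()
```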

Hexawise: What is one thing you believe about software testing that many smart testers disagree with?

Alan: I don't believe there's any value from distinguishing "checks" from tests. I know a lot of people really like the distinction, but I don't see the value. Of course, I recognize that some testing is purely binary validation, but this is a(nother) example of where choosing more exact words in order to discern meaning leads to more confusion and weird looks than it benefits the craft of testing.

Industry Observations / Industry Trends

Hexawise: Do you believe testing is becoming (or will become) more integrated with the software development process? And how do you see the scope of testing extending all the way from understanding customer needs to reviewing the actual customer experience in order to drive testing efforts at an organization?

Alan: I believe testing has become more integrated into software development. In fact, I believe that testing must be integrated into software development. Long ship cycles are over for most organizations, and test-last approaches, or approaches that throw code over to test to find all the bugs, are horribly inefficient. The role of a tester is to provide testing expertise to the rest of the team and accelerate the entire team's ability to ship high-quality software. Full integration is mandatory for this to occur.

It's also important for testers to use data from customer usage to help them develop new tests and prioritize existing bugs. We've all had conversations before about whether or not the really cool bug we found was "anything a customer would ever do" - but with sufficient diagnostic data in our applications or services, we can use data to prove exactly how many customers could (or would) hit the issue. We can also use data to discover unexpected usage patterns from customers, and use that knowledge to explore new tests and create new test ideas.

There's an important shift in product development that many companies have recognized. They've moved from "let's make something we think is awesome and that you'll love" - i.e. we-make-it-you-take-it to "we want to understand what you need so we can make you happy". This shift cannot happen without understanding (and wallowing in) customer data.

Hexawise: In your blog post, Watch out for the HiPPO, you stated: "What I’ve discovered is that no matter how strongly someone feels they “know what’s best for the customer”, without data, they’re almost always wrong." What advice do you have for testers for learning what customers actually care about? To me, this points out one of the challenges software testers (and everyone else actually) face, which is the extent to which they are dependent on the management system they work within. Clayton Christensen's Theory of Jobs to Be Done is very relevant to this topic in my opinion. But many software testers would have difficulty achieving this level of customer understanding without a management system already very consistent with this idea.

Alan: Honestly, I don't know how a product can be successful without analysis and understanding of how customers use the product. The idea of a tester "pretending to be the customer" based purely on their tester-intuition is an incomplete solution to the software quality problem. If you hear, "No customer would ever do that", you can either argue based on your intuition, or go look at the data and prove yourself right (or wrong).

There's an important shift in product development that many companies have recognized. They've moved from "let's make something we think is awesome and that you'll love" - i.e. we-make-it-you-take-it to "we want to understand what you need so we can make you happy". This shift cannot happen without understanding (and wallowing in) customer data. Testers may not (and probably won't) create this system, but any product team interested in remaining in business should have a system for collecting and analyzing how customers use their product.

Hexawise: I agree, this shift is extremely important. Do you have an example of how using such a deep understanding of users influenced testing, or how it was used to shift the software development focus? Since software testing is meant to help make sure the company provides software that users want, it certainly seems important to make sure software testers understand what users want (and don't want).

Alan: First off, I prefer to think of the role of test as accelerating the achievement of shipping quality software; which is similar to making sure the company provides software customers want, but (IMO), is a more focused goal.

As far as examples go...how many do you want? Here are a few to ponder:

  • We tracked how long people ran our app. 45% ran it continuously - all the time. This is the way we ran the app internally. But another 45% ran it for 10 minutes or less. We had a lot of tests that tried to mimic a week of usage and look for memory leaks and other weirdness, but after seeing the data, we ended up doing a lot more testing (and improving) of start up and shut down scenarios.
  • We once canceled a feature after seeing that the data told us that it was barely used. In this case, we didn't add the tracking data until late (a mistake on our part). Ideally, we'd discover something like this earlier, and then redesign (rather than cancel).
  • We saw that a brand new feature was being used a lot by our early adopters. This was pretty exciting...but as testers, we always validate our findings. It turned out, after a bit more investigation, that the command following usage of the brand new feature was almost always undo. Users were trying the feature... but they didn't like what it was doing.

Staying Current / Learning

Hexawise: What advice do you have for people attending software conferences so that they can get more out of the experience?

Alan: Number one bit of advice is to talk to people. In fact, I could argue that you get negative value (when balanced against the time investment) if you show up and only attend talks. If there's a talk you like in particular, get to know the speaker (even us introverts are highly approachable - especially if you bring beer). Make connections, talk about work, talk about challenges you have, and things you're proud of. Take advantage of the fact that there are a whole lot of people around you that you can learn from (or who can learn from you).

Additionally, if you're in a multi-track conference, feel free to jump from talk to talk if you're not getting what you need - or if it just happens that two talks that interest you are happening at the same time.

Hexawise: How do you stay current on improvements in software testing practices; or how would you suggest testers stay current?

Alan: I read a lot of testing blog posts (I use feedly for aggregation). I subscribe to at least fifty software development and test related blogs. I skim and discard liberally, but I usually find an idea or two every week that encourages me to dig deeper.

Biggest tip I have, however, is to know how essential learning is to your success. Learning is as important to the success of a knowledge worker as food is to human life. Anyone who thinks they're an "expert" and that learning isn't important anymore is someone on a career death spiral.

Profile

image of book cover for The A Word by Alan Page

Alan has been a software tester (among other roles) since 1993 and is currently the Director of Quality for Services at Unity. Alan spent over twenty years at Microsoft working on a variety of operating systems and applications in nearly every Microsoft division. Alan blogs at angryweasel.com, hosts a podcast (w/ Brent Jensen) at angryweasel.com/ABTesting, and on occasion, he speaks at industry testing and software engineering conferences.

Links:

Books: How We Test Software at Microsoft, The A Word

Blog: Tooth of the Weasel

Podcast: AB Testing

Twitter: @alanpage


Related posts: Testing Smarter with Dorothy Graham - Testing Smarter with James Bach

By: John Hunter on Mar 16, 2017

Categories: Combinatorial Software Testing, Customer Success, Software Development, Software Testing, Testing Smarter with..., Interview

This interview with Dorothy Graham is part of our series of “Testing Smarter with…” interviews. Our goal with these interviews is to highlight insights and experiences as told by many of the software testing field’s leading thinkers.

Dorothy Graham has been in software testing for over 40 years, and is co-author of 4 books: Software Inspection, Software Test Automation, Foundations of Software Testing and Experiences of Test Automation.


Dorothy Graham

Personal Background

Hexawise: Do your friends and relatives understand what you do? How do you explain it to them?

Dot: Some of them do! Sometimes I try to explain a bit, but others are just simply mystified that I get invited to speak all over the world!

Hexawise: If you could write a letter and send it back in time to yourself when you were first getting into software testing, what advice would you include in it?

Dot: I would tell myself to go for it. When I started in testing it was definitely not a popular or highly-regarded area, so I accidentally made the right career choice (or maybe Someone up there planned my career much better than I could have).

Hexawise: Which person or people have had the greatest influence on your understanding and practice of software testing?

Dot: Tom Gilb was (and is) a big influence, although in software engineering in general rather than testing. The early testing books (Glenford Myers, Bill Hetzel, Cem Kaner, Boris Beizer) were very influential in helping me to understand testing. I attended Bill Hetzel’s course when I was just beginning to specialize in testing, and he was a big influence on my view of testing back then. Many discussions about testing with friends in the Specialist Group In Software Testing (SIGIST) and with colleagues at Grove, especially Mark Fewster, also shaped my thinking.

Hexawise: Failures can often lead to interesting lessons learned. Do you have any noteworthy failure stories that you’d be willing to share?

Dot: I was able to give a talk about a failure story in Inspection, where a very successful Inspection initiative had been “killed off” by a management re-organisation that failed to realise what they had killed. There were a lot of lessons about communicating the value of Inspection (which also apply to testing and test automation). The most interesting presentation was when I gave it at the company where it had happened a number of years before; one (only one) person realized it was them.

One fallacy is in thinking that the automation does only what is done by manual tests. There are often many things that are impossible or infeasible in manual testing that can easily be done automatically.

Hexawise: Describe a testing experience you are especially proud of. What discovery did you make while testing and how did you share this information so improvements could be made to the software?

Dot: One of the most interesting experiences I had was when I was doing a training course in Software Inspection for General Motors in Michigan, and in the afternoon, when people were working on their own documents, several people left the room quite abruptly. I later found out that they had gone to stop the assembly line! They were Inspecting either braking systems or cruise control systems.

When I was working at Ferranti on Police Command and Control Systems, I discovered that the operating system’s timing routines were not working correctly. This was critical for our system, as we had to produce reminders in 2 minutes and 5 minutes if certain actions had not been taken on critical incidents. I wrote a test program that clearly highlighted the problems in the OS and sent it to the OS developers, wondering what their reaction would be. In fact they were delighted and found it very useful (and they did fix the bugs). It did make me wonder why they hadn’t thought to test it themselves though.

Views on Software Testing

Hexawise: In That's No Reason to Automate, you state "Testing and automation are different and distinct activities." Could you elaborate on that distinction?

Dot: By “automation”, I am referring to tools that execute tests that they have been set up to run. Execution of tests is one part of testing but testing is far more than just running tests. First, what is it that needs to be tested? Then how best can that be tested - what specific examples would make good test cases? Then how are these tests constructed so they can be run? Then they are run, and afterwards, what was the outcome from the test compared to what was expected, i.e. did the software being tested do what it should have done? This is why tools don’t replace testers - they don’t do testing, they just run stuff.


Diagram from That's No Reason to Automate

Hexawise: Test automation allows for great efficiency. But there are limits to what can be effectively automated. How do you suggest testers determine what to automate and what to devote testers' precious time toward investigating personally?

Dot: There are two fallacies when people say “automate all the manual tests” (which is unfortunately quite common). The first one is that all manual tests should be automated - this is not true. Some tests that are now manual would be good candidates for automation, but other tests should always be manual. For example, “do these colours look nice?” or “is this what users really do?” or even a Captcha, which should only work if a person, not a machine, is filling in a form. Think about it - if your automated test can do the Captcha, then the Captcha is broken.

Automating the manual tests “as is” is not the best way to automate, since manual tests are optimized for human testers. Machines are different to people, and tests optimized for automation may be quite different. And finally, a test that takes an inordinate amount of time to automate and isn’t needed often is not worth automating.

The other fallacy is in thinking that the automation does only what is done by manual tests. There are often many things that are impossible or infeasible in manual testing that can easily be done automatically. For example if you are testing manually, you see what is on the screen, but you don’t know if all the GUI (graphical user interface) objects are in the correct state - this could be checked by a tool. There are also types of testing that can only be done automatically, such as Monkey testing or High Volume Automated Testing.

Hexawise: Can you describe a view or opinion about software testing that you have that many or most smart experienced software testers probably disagree with? What led you to this belief?

Dot: One view that I feel strongly about, and many testers seem to disagree with this, is that I don’t think that all testers should be able to write code, or, what seems to be the prevailing view, that the only good testers are testers who can code. I have no objection to people who do want to do both, and I think this is great, especially in Agile teams. But what I object to is the idea that this has to apply to all testers.

There are many testers who came into testing perhaps through the business side, or who got into testing because they didn’t want to be developers. They have great skills in testing, and don’t want to learn to code. Not only would they not enjoy it, they probably wouldn’t be very good at it either! As Hans Buwalda says, “you lose a good tester and gain a poor programmer.” The ultimate effect of this attitude is that testing skills are being devalued, and I don’t think that is right or good for our industry!

The best way to have fewer bugs in the software is not to put them in in the first place. I would like to see testers become bug prevention advisors! And I think this is what happens on good teams.

Hexawise: What do you wish more developers, business analysts, and project managers understood about software testing?

Dot: The limitations of testing. Testing is only sampling, and can find useful problems (bugs) but there aren’t enough resources or time in the universe to “test everything”.

The best way to have fewer bugs in the software is not to put them in in the first place. I would like to see testers become bug prevention advisors! And I think this is what happens on good teams.

Hexawise: I agree that becoming bug prevention advisors is a very powerful concept. Software development organizations should be seeking to improve the entire system (don't create bugs, then try to catch them and then fix them - stop creating them). I recently discussed these ideas in Software Code Reviews from a Deming Perspective.

Dot: I had a quick look at the blog and like it - especially the emphasis on it being a learning experience. It seems to be viewing “inspection” in the manufacturing sense as looking at what comes off the line at the end. In our book “Software Inspection” (which is actually a review process), it is very much an up-front process. In fact applying inspection/reviews to things like requirements, contracts, architecture design etc is often more valuable than reviewing code.

I also fully agree with selecting what to look at - in our book we recommend a sampling process to identify the types of serious bug, then try to remove all others of that type - that way you can actually remove more bugs than you found.

Hexawise: In what ways would you see "becoming bug prevention advisors" happening? Done in the wrong way it can make people defensive about others advising them how to do their jobs in order to avoid creating bugs. What advice do you have for how software testers can make this happen and do it well?

Dot: I see the essential mind-set of the tester as asking “What could go wrong?” or “What if it isn’t?” [as it is supposed to be]. If you have an Agile team that includes a tester, then those questions are asked perhaps as part of pair working - that way the potential problems are identified and thought about before or as the code is written. The best way is close collaboration with developers - developers who respect and value the tester’s perspective.

Industry Observations / Industry Trends

Hexawise: Your latest book, Experiences of Test Automation, includes an essay on Exploratory Test Automation. How are better tools making the ideas in this essay more applicable now than they were previously?

Dot: Actually this is now called High-Volume Automated Testing, which is a much better name for it. HiVAT uses massive amounts of test inputs, with automated partial oracles. The tests are checking that the results are reasonable, not exact results, and any that are outside those bounds are reported to humans to investigate. Harry Robinson described using this technique to test Bing - quite a large thing to test!
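
A rough, hypothetical sketch of the HiVAT idea follows: generate a large volume of inputs, run them through the system under test, and check each result against a partial oracle (broad reasonableness bounds rather than exact expected values), flagging anything outside the bounds for a human to investigate. The estimate_shipping_cost function and the bounds are invented placeholders, not any real system.

```python
# High-volume testing with a partial oracle: check that results are
# "reasonable" rather than exact, and queue outliers for human review.
import random


def estimate_shipping_cost(weight_kg: float, distance_km: float) -> float:
    # Placeholder for the real system under test.
    return 2.0 + 0.5 * weight_kg + 0.01 * distance_km


def reasonable(weight_kg, distance_km, cost):
    # Partial oracle: we don't know the exact expected value, only bounds
    # that any sane answer must respect.
    return 0 < cost < 2.0 + weight_kg * 5 + distance_km * 0.1


suspicious = []
for _ in range(100_000):
    weight = random.uniform(0.01, 500)
    distance = random.uniform(1, 20_000)
    cost = estimate_shipping_cost(weight, distance)
    if not reasonable(weight, distance, cost):
        suspicious.append((weight, distance, cost))

print(f"{len(suspicious)} of 100,000 results fell outside the bounds")
```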

Hexawise: When looking to automate tests one thing people sometimes overlook is that given the new process it may well be wise to add more tests than existed before. If all test cases were manually completed that list of cases would naturally have been limited by the cost of repeatedly manually checking so many test cases in regression testing. If you automate the tests it may well be wise to expand the breadth of variations in order to catch bugs caused by the interaction of various parameter values. What are your thoughts on this idea?

Dot: Nice analogy - I like the term “grapefruit juice bugs”. Using some of the combinatorial techniques is a good way to cover more combinations in a reasonably rigorous way. Automation can help to run more tests (provided that expected results are available for them) and may be a good way to implement the combination tests, using pair-wise and/or orthogonal arrays.

Hexawise: In your experience, how big of a problem is automation decay (automated tests not being maintained properly)? How do you recommend companies avoid this pitfall?

Dot: This seems to be the most common way that test automation efforts get abandoned (and tools become shelfware), particularly for system-level test automation. It can be avoided by having a well-structured testware architecture in the first place, where maintenance concerns are thought about beforehand, using good programming practices for the automation code. Although fixing this later is possible, preventing it in the first place is far better.

Hexawise: Large companies often discount the importance of software testing. What advice do you have for software testers to help their organizations understand the importance of software testing efforts in the organization?

Dot: Unfortunately often the best way to get an organisation to appreciate testing is to have a small disaster, which adequate testing could have prevented. I’m not exactly suggesting that people should do this, but testing, like insurance, is best appreciated by those who suffer the consequences of not having had it. Perhaps just try to make people aware of the risks they are taking (pretend headlines in the paper about failed systems?)

Staying Current / Learning

Hexawise: I see that you’ll be presenting at the StarEast conference in May. What could you share with us about what you’ll be talking about?

Dot: I have two tutorials on test automation. One is aimed mainly at managers who want to ensure that a new (or renewed) automation effort gets off “on the right foot”. There are a number of things that should be done right at the start to give a much better chance of lasting success in automation.

My other tutorial is about Test Automation Patterns (using the wiki). Here we will look at common problems on the technical side (rather than management issues), and how people have solved these problems - ideas that you can apply. This is also a code-free tutorial - we are talking about generic technical issues and patterns, whatever test execution tool you use.

Hexawise: What advice do you have for people attending software conferences so that they can get more out of the experience?

Dot: Think about what you want to find out about before you come, and look through the programme to decide which presentations will be most relevant and helpful for you. It’s very frustrating to realise at the end of the conference that you missed the one presentation that would have given you the best value, so do your homework first!

But also realise that the conference is more than just attending sessions. Do talk to other delegates - find out what problems you might share (especially if you are in the same session). You may want to skip a session now and then to have a more thorough look around the Expo, or to have a deeper conversation with someone you have met or one of the speakers. The friends you make at the conference can be your “support group” if you keep in touch afterwards.

I don’t think that all testers should be able to write code, or, what seems to be the prevailing view, that the only good testers are testers who can code. I have no objection to people who do want to do both, and I think this is great, especially in Agile teams. But what I object to is the idea that this has to apply to all testers.

Hexawise: How do you stay current on improvements in software testing practices; or how would you suggest testers stay current?

Dot: I am fortunate to be invited to quite a few conferences and events. I find them stimulating and I always learn new things. There are also many blogs, webinars and online resources to help us all try to keep up to some extent.

Hexawise: What software testing-related books would you recommend should be on a tester’s bookshelf?

Dot: Mine, of course! ;-) I do hope that testers have a bookshelf! For testers, I recommend Lee Copeland’s book “A Practitioner’s Guide to Software Test Design” as a great introduction to many techniques. For a deep understanding, get Cem Kaner’s book “The Domain Testing Workbook”. I also like “Lessons Learned in Software Testing” by Kaner, Bach and Pettichord, and “Perfect Software and other illusions about testing” by the great Gerry Weinberg. But there are many other books about testing too!

Hexawise: What are you working on now?

Dot: I am working on a wiki called TestAutomationPatterns.org with Seretta Gamba. This is a free resource for system level automation, with lists of different issues or problems and resolving patterns to help address them. There are four categories: Process, Management, Design and Execution. There is a “Diagnostic” to help people identify their most pressing issue, which then leads them to the patterns to help. We are looking for people to contribute their own short experiences to the issues or patterns.

Dorothy Graham has been in software testing for over 40 years, and is co-author of 4 books: Software Inspection, Software Test Automation, Foundations of Software Testing and Experiences of Test Automation, and currently helps develop TestAutomationPatterns.org.

Dot has been on the boards of conferences and publications in software testing, including programme chair for EuroStar (twice). She was a founder member of the ISEB Software Testing Board and helped develop the first ISTQB Foundation Syllabus. She has attended Star conferences since the first one in 1992. She was awarded the European Excellence Award in Software Testing in 1999 and the first ISTQB Excellence Award in 2012.

Website

Twitter: @DorothyGraham

By: John Hunter on Mar 10, 2017

Categories: Software Testing, Testing Smarter with..., Interview

We are excited to announce an ongoing partnership with Datalex to improve software test efficiency and accuracy. Datalex has achieved significant benefits in software quality assurance and speed-to-market through their use of Hexawise. Some of these benefits include:

  • Greater than 65 percent reduction in the Datalex test suite
  • Clearer understanding of test coverage
  • Higher confidence in the thoroughness of software tests
  • Complete and consistently formatted tests that are simple to automate

An airline company’s regression suite typically contains thousands of test cases. Hexawise is used by Datalex to optimize these test cases, leading to fewer tests as well as greater overall testing coverage. Hexawise also provides Datalex with a complete understanding of exactly what has been tested after each and every test, allowing them to make fact-based decisions about how much testing is enough on each project.

Hexawise has been fundamental in improving the way we approach our Test Design, Test Coverage and Test Execution at Datalex... My team love using Hexawise given its intuitive interface and its ability to provide a risk-based approach to coverage which gives them more confidence during release sign-off.

Screen shot 2016 10 04 at 12.58.53 pm

Áine Sherry

Global Test Manager at Datalex

“As a senior Engineer in a highly innovative company, I find Hexawise crucial in regards to achieving excellent coverage with a fraction of the time and effort. Hexawise will also facilitate us to scale onwards and upwards as we continue to innovate and grow,” – Dean Richardson, Software Test Engineer at Datalex.

By eliminating duplicative tests and optimizing the test coverage of each test case Hexawise provides great time savings in the test execution phase. Hexawise can generate fewer test scenarios compared to what testers would create on their own and those test cases provide more test coverage. Time savings in test execution come about simply because it takes less time to execute fewer tests.

Related: How to Pack More Coverage Into Fewer Software Tests - Large Benefits = Happy Hexawise Clients and Happy Colleagues

By: John Hunter on Nov 10, 2016

Categories: Business Case, Customer Success, Testing Case Studies

Performance testing examines how the software performs (normally "how fast") in various situations.

Performance testing does not just result in one value. You normally performance test various aspects of the software in differing conditions to learn about the overall performance characteristics. It can well be that certain changes will improve the performance results for some conditions (say a powerful laptop with a fiber connection) and greatly degrade the performance for other use cases. And often the software can be coded to attempt to provide different solutions under different conditions.

All this makes performance testing complex. But trying to over-simplify performance testing removes much of its value.

Another form of performance testing is done on sub-components of a system to determine which solutions may be best. These are often server-based concerns that don't depend on individual user conditions but can be affected by other factors: for example, under normal usage option 1 provides great performance, but under a larger load option 1 slows down a great deal and option 2 is better.
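As a minimal sketch of that kind of sub-component comparison (the two options and the load sizes below are hypothetical stand-ins, not real implementations), Ruby's built-in Benchmark module can time each option at a "normal" and a "larger" load:

require 'benchmark'

# Hypothetical stand-ins for two competing implementations of the same operation.
option_1 = ->(items) { items.sort }
option_2 = ->(items) { items.sort_by { |i| i } }

[1_000, 500_000].each do |load_size|   # "normal usage" vs "larger load"
  data = Array.new(load_size) { rand(load_size) }
  puts "Load: #{load_size} items"
  Benchmark.bm(10) do |bm|
    bm.report('option 1') { option_1.call(data) }
    bm.report('option 2') { option_2.call(data) }
  end
end

The point is not the specific numbers, but that the option that is faster at one load level may not be the faster option at another.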

Focusing on these tests of sub-components runs the risk of sub-optimization, where optimizing individual sub-components results in less than optimal overall performance. Performance testing sub-components is important, but what is most important is testing the performance of the overall system. Performance testing should always place a priority on overall system performance and not fall into the trap of creating a system with components that perform well individually but do not work well together when combined.

Load testing, stress testing and configuration testing are all part of performance testing.

Continue reading about performance testing in the Hexawise software testing glossary.

By: John Hunter on Sep 27, 2016

Categories: User Experience, Testing Strategies, Software Testing

Hexawise has been driven by the vision to provide software testers more effective coverage in fewer tests. The Hexawise combinatorial software testing application allows software testers to achieve both of these seemingly contradictory goals.

Too Many Tests

See some previous posts where we have explored how Hexawise achieves more coverage with fewer tests:

By: John Hunter on Sep 14, 2016

Categories: Hexawise test case generating tool, Hexawise

Software testing concepts help us compartmentalize the complexity that we face in testing software. Breaking the testing domain into various areas (such as usability testing, performance testing, functional testing, etc.) helps us organize and focus our efforts.

But those concepts are constructs that often have fuzzy boundaries. What matters isn't where we place certain software testing efforts. What matters is helping create software that users find worthwhile and hopefully enjoyable.

One of the frustrations I have faced in using internet-based software in the last few years is that it often seems to be tested without considering that some users will not have fiber connections (and might have high-latency connections). I am not certain latency (perhaps combined with lower bandwidth) is the issue, but I have often found websites either literally unusable or practically unusable (way too frustrating to use).

It might be that the user experience I face (on the poorly performing sites) is just as bad for all users, but my guess is that it is a decent user experience on the fiber connections the managers have when they decide this is an acceptable solution. It is a usability issue, but it is also a performance issue in my opinion.

It is certainly possible to test performance results on powerful laptops with great internet connections and get good performance results for web applications that will provide bad performance results on smart phones via wifi or less than ideal cell connections. This failure to understand the real user conditions is a significant problem and an area of testing that should be improved.

I consider this an interaction between performance testing and user-experience testing (I use "user-experience" to distinguish it from "usability testing", since I can test aspects of the user experience without users testing the software). The page may load in under 1 second on a laptop with a fiber connection, but that isn't the only measure of performance. What about your users that are connecting via a wifi connection with high latency? What if the performance in that case is that it takes 8 seconds to load and your various interactive features either barely work or don't work at all given the high latency?
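As a rough back-of-the-envelope sketch (the request counts, latency figures, and payload sizes below are made-up illustrations, not measurements of any real site), it is easy to see why latency often dominates perceived load time:

# Crude estimate: load time ~= round trips x latency + payload transfer time.
def estimated_load_seconds(round_trips:, latency_s:, payload_mb:, mbps:)
  (round_trips * latency_s) + (payload_mb * 8.0 / mbps)
end

# Hypothetical page: 30 sequential-ish round trips, 2 MB of assets.
puts estimated_load_seconds(round_trips: 30, latency_s: 0.01, payload_mb: 2, mbps: 100) # fiber: ~0.5 seconds
puts estimated_load_seconds(round_trips: 30, latency_s: 0.25, payload_mb: 2, mbps: 5)   # high-latency mobile: ~10.7 seconds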

In some cases ignoring the performance for some users may be OK. But if you care about a system that delivers fast load times to users, you need to consider the performance not just for a subset of users but also how it performs for users overall. The extent to which you prioritize various use cases will depend on your specific situation.

I have a large bias for keeping the basic experience very good for all users. If I add fancy features that are useful I do not like to accept meaningful degradation to any user's experience - graceful degradation is very important to me. That is less important to many of the sites that I use, unfortunately. What priority you place on it is a decision that impacts your software development and software testing process.

Hexawise attempts to add features that are useful while at the same time paying close attention to making sure we don't make things worse for users that don't care about the new feature. Making sure the interface remains clear and easy to use is very important to us. It is also a challenge to keep usability high when you have powerful and fairly complex software. It is very easy to slip and degrade the user's experience. Sean Johnson does a great job making sure we avoid doing that.

Maintaining the responsiveness of Hexawise is a huge effort on our part given the heavy computation required in generating tests in large test case scenarios.

You also have to realize where you cannot be all things to all people. Using Hexawise on a smart phone is just not going to be a great experience. Hexawise is just not suited to that use case at all and therefore we wouldn't test such a use case.

For important performance characteristics it may well be that you should create a separate Hexawise test plan to test the performance under several different conditions (relating to latency, bandwidth and perhaps phone operating system). It could be done within one test plan, but it seems to me that separate test plans would be more effective most of the time. It may well be that you have a primary test plan to cover many functional aspects and a much smaller test plan just to check that several things work fine in a high-latency, smartphone use case.

Within that plan you may well want to test various parameter values for certain parameters, for example:

  • operating system: iOS, Android 7, Android 6, Android 5
  • latency: ...

Of course, what should be tested depends on the software being tested. If none of the items above matter in your case they shouldn't be used. If you are concerned about a large user base you may well be concerned about performance on various Android versions since the upgrade cycle to new versions is so slow (while most iOS users are on the latest version fairly quickly).

If latency has a big impact on performance then including a parameter on latency would be worthwhile and testing various parameter values for it could be sensible (maybe high, medium and low). And the same with testing various levels of bandwidth (again, depending on your situation).
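As a rough illustration of the math involved (the parameters and values below are hypothetical, borrowed from the discussion above, and the pairwise figure is only a lower bound, not the output of any particular tool), a short sketch shows why exhaustive combinations grow so much faster than what a pairwise plan needs:

# Hypothetical parameters and values from the discussion above.
parameters = {
  'operating system' => ['iOS', 'Android 7', 'Android 6', 'Android 5'],
  'latency'          => ['high', 'medium', 'low'],
  'bandwidth'        => ['high', 'medium', 'low']
}

sizes = parameters.values.map(&:size).sort.reverse

# Testing every combination of every value grows multiplicatively.
puts "Exhaustive combinations: #{sizes.inject(:*)}"   # 4 * 3 * 3 = 36

# A pairwise plan needs at least as many tests as the product of the two largest
# parameters, since each test covers exactly one value pair from those two.
puts "Pairwise lower bound: #{sizes[0] * sizes[1]}"   # 4 * 3 = 12

The gap widens dramatically as more parameters (such as device model or connection type) are added, which is what makes a pairwise test generator worthwhile.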

My view is always very user focused so the way I naturally think is relating pretty much everything I do to how it impacts the end user's experience.

Related: 10 Questions to Ask When Designing Software Tests - Don’t Do What Your Users Say - Software Testers Are Test Pilots

By: John Hunter on Jul 26, 2016

Categories: User Experience, Testing Strategies, Software Testing

Usability testing is the practice of having actual users try the software. Outcomes include data on the tasks given to the users to complete (successful completion, time to complete, etc.), comments the users make, and expert evaluation of their use of the software (noticing, for example, that none of the users follow the intended path to complete a task, or that many users looked for a different way to complete a task and, failing to find one, eventually found another way to succeed).

Usability testing involves systematic evaluation of real people using the software. This can be done in a testing lab where an expert can watch the user, but this is expensive. Remote monitoring (watching the screen of the user, communicating with the user via voice, and viewing a webcam showing the user) is also commonly used.

In these settings the user will be given specific tasks to complete and the testing expert will watch what the user does. The expert will also ask the user questions about what they found difficult or confusing (in addition to what they liked) about the software.

The concept of usability testing is to get feedback from real users. In the event you can't test with the real users of a system, it is important to consider whether you are fairly accurately representing that population with your usability testers. If the users of the system are fairly unsophisticated and you use usability testers who are very computer savvy, they may well not provide good feedback (as their use of the software may be very different from that of the actual users).

"Usability testing" does not encompass experts evaluating the software based on known usability best practices and common problems. This form of expert knowledge of wise usability practices is important but it is not considered part of "usability testing."

Find more exploration of software testing terms in the Hexawise software testing glossary.

Related: Usability Testing Demystified - Why You Only Need to Test with 5 Users (this is not a complete answer, but it does provide insight into the value of quick testing to run during the development of the software) - Streamlining Usability Testing by Avoiding the Lab - Quick and Dirty Remote User Testing - 4 ways to combat usability testing avoidance

By: John Hunter on Jul 4, 2016

Categories: User Experience, User Interface, Testing Strategies, Software Testing

Recently we added the revisions feature as an enhancement to Hexawise-created test plans. This allows you to easily revert to a previous version of a test plan for whatever reason you wish. We provide a list of each revision with its date, and all you have to do is click a button to revert to that version. We also give you the option to copy that plan (in case you want to view that plan, but also want to keep all the updates you have made since then).

Now when you are editing a test plan in Hexawise you will see a revisions link on the top of the screen (see image):

revisions-button

Note: revisions are available in editable test plans; the revisions link is not available in uneditable test plans (such as the sample plans). I saved an editable copy of the plan into my private plans.

In the example in this post, I am using the Hexawise sample plan "F) Flight Reservations" which you can view in your own account.

revisions-note

One of the notes in this sample plan says we should also test for whether Java is enabled or not. So I added a new parameter for Java and included enabled and not-enabled as parameter values.

At a later date, if we wanted to go to a previous version all we have to do is click the revisions link (see top image) and then we get a list of all the revisions:

revisions-action

Mouseover the revision we want to use and we can make a copy of that version or we can revert to the version we desire.

The revisions feature makes it very easy to get back to a previous version of your test plan. This has been quite a popular feature addition. I hope you enjoy it. We are constantly seeking to improve based on feedback from our users. If you have comments and suggestions please share them with us.

Related: My plan has a lot of constraints in it. Should I split it into separate plans? - Whaddya Mean "No Possible Value"? - Customer Delight

By: John Hunter on May 31, 2016

Categories: Hexawise, Hexawise test case generating tool, Hexawise tips

At Hexawise we aim to improve the way software is tested. Achieving that aim requires not only providing our clients with a wonderful software tool (which our customers say we’re succeeding at) but also a commitment from the users of our tool to adopt new ways of thinking about software testing.

We have written previously about the importance of the values of Bill Hunter (our founder's father) to Hexawise. That has led us to constantly focus on how to maximize the benefits our customers gain using Hexawise. This focus has led us to realize that our customers who take advantage of the high-touch training services and ongoing expert test design support on demand that we offer often realize unusually large benefits and roll out usage of Hexawise more quickly and broadly than our customers who acquire licenses to Hexawise and simply try to “get the tool and make it available to the team.”

We are now looking for someone to take on the challenge of helping our clients succeed. The principles behind our decision to put so much focus on helping our customers succeed are obvious to those who understand the thinking of Bill Hunter, W. Edwards Deming, Russell Ackoff, etc., but they may seem a bit odd to others. The focus of this senior-level position really is to help our customers improve their software testing results. It isn't just a happy-sounding title that has no bearing on what the job actually entails.

The person holding this position will report to the CEO and work with other executives at Hexawise who all share a commitment to delighting our customers and improving the practice of software testing.

Hexawise is an innovative SaaS firm focused on helping large companies use smarter approaches to test their enterprise software systems. Teams using Hexawise get to market faster with higher quality products. We are the world’s leading firm in our niche market and have a growing client base of highly satisfied customers. Since we launched in 2009, we have grown both revenues and profits every year. Hexawise is changing the way that large companies test software. More than 100 Fortune 500 companies and hundreds of other smaller firms use our industry leading software.

Join our journey to transform how companies test their software systems.

Hexawise office

Description: VP of Customer Success

In the Weeks Prior to a Sale Closing

  • Partner with sales representatives to conduct virtual technical presentations and demonstrations of our Hexawise test design solution.

  • Clearly explain the benefits and limitations of combinatorial test design to potential customers using language and concepts relevant to their context by drawing upon your own “been there, done that” experiences of having successfully introduced combinatorial test design methods in multiple similar situations.

  • Identify and assess business and technical requirements, and position Hexawise solutions accordingly.

Immediately Upon a New Sale Closing

  • Assess a new client’s existing testing-related processes, tools, and methods (as well as their organizational structure) in order to provide the client with customized, actionable recommendations about how they can best incorporate Hexawise.

  • Collaborate with client stakeholders to proactively identify potential barriers to successful adoption and put plans in place to mitigate / overcome such barriers.

  • Provide remote, instructor-led training sessions via webinars.

  • Provide multi-day onsite instructor-led training sessions that cover basic software test design concepts (such as Equivalence Class Partitioning, the definition of pairwise testing coverage, etc.) as well as how to use the specific features of Hexawise.

  • Include industry-specific and customer-specific customized training modules and hands-on test design exercises to help make the sessions relevant to the testers and BA’s who attend the training sessions.

  • Collaborate with new users and help them iterate, improve, and finalize their first few sets of Hexawise-generated software tests.

  • Set rollout and adoption success criteria with clients and put plans in place to help them achieve their goals.

Months After a New Sale Closing

  • Continue to engage with customers onsite and virtually to understand their needs, answer their test design questions, and help them achieve large benefits from test optimization.

  • Monitor usage statistics of Hexawise clients and proactively reach out to clients, as appropriate, to provide proactive assistance at the first sign that they might be facing any potential adoption/rollout challenges.

  • Collaborate with stakeholders and end users at our clients to identify opportunities to improve the features and capabilities of Hexawise and then collaborate with our development team to share that feedback and implement improvements.

Required Skills and Experience

We are looking for a highly experienced combinatorial test design expert with outstanding analytical and communication skills to provide these high-touch on-boarding services and to partner with our sales team in engaging prospective clients.

Education and Experience

  • Bachelor’s or technical university degree.

  • Deep experience successfully introducing combinatorial test design methods on multiple different kinds of projects to several different groups of testers.

  • Experience setting rollout and adoption success criteria with multiple teams and putting plans in place to achieve them.

  • Minimum 5 years in software testing, preferably at an IT consulting firm or large financial services firm.

Knowledge and Skills

  • Ability to present and demonstrate capabilities of the Hexawise tool, and the additional services we provide to our clients beyond our tool.
  • Exhibit excellent communication and presentation skills, including questioning techniques.
  • Demonstrate passion regarding consulting with customers.
  • Understand how IT and enterprise software is used to address the business and technical needs of customers.
  • Demonstrate hands-on level skills with relevant and/or related software technology domains.
  • Communicate the value of products and solutions in terms of financial return and impact on customer business goals.
  • Possess a solid level of industry acumen; keeping current with software testing trends and able to converse with customers at a detailed level on pertinent issues and challenges.
  • Represent Hexawise knowledgeably, based on a solid understanding of Hexawise’s business direction, portfolio and capabilities.
  • Understand the competitive landscape for Hexawise and position Hexawise effectively.
  • A cover letter that describes who you are, what you've done, and why you want to join Hexawise.
  • Ability to work and learn independently and as part of a team
  • Desire to work in a fast-paced, challenging start-up environment

Why join Hexawise?

Salary + bonus; medical and dental; 401(k) plan; free parking and a very slick Chapel Hill office! The opportunity to work with a fast-growing, innovative technology company that is changing the way software is tested.

Key Benefits:

Salary: Negotiable, but minimum of $100,000 + commissions based upon client license renewals
Benefits: Health and dental included, 401(k) plan
Travel: Average of no more than 2-3 days onsite per week
Location: Chapel Hill, NC*

*Working from our offices would be highly preferable. We might consider remote working arrangements for an exceptional candidate based in the US.

Apply for the VP of Customer Success position at Hexawise.

By: John Hunter on May 12, 2016

Categories: Hexawise, Career, Software Testing, Lean, Customer Success, Agile

It has been quite a long time since we last posted a roundup of great content on software testing from around the web.

  • Mistakes We Make in Testing by Michael Sowers - "Not being involved in development at the beginning and reviewing and providing feedback on the artifacts, such as user stories, requirements, and so forth. Not collaborating and developing a positive relationship with the development team members..."
  • Changing the conversation about change in testing by Katrina Clokie - "I'm planning to stop talking about bugs, time and money. Instead I'm going to start framing my reasoning in terms of corporate image, increasing quality awareness among all disciplines, and improving end-user satisfaction."
  • How to Pack More Coverage Into Fewer Software Tests by Justin Hunter - "There are simply too many interactions for a tester to keep track of on their own. As a result, manually-selected test sets almost always fail to test for a rather large number of potentially important interactions."
  • Building Quality by Alan Page - "your best shot at creating a product that customers crave is to get quantitative and qualitative feedback directly from your users... Get it in their hands, listen to them, and make it better."
  • Dr. StrangeCareer or: How I Learned to Stop Worrying and Love the Software Testing Industry by Keith Klain - "Testing is hard. Doing it right is very hard. An ambiguous, unchartered journey into a sea of bias and experimentation, but as the line from the movie goes, 'the hard is what makes it great'."
  • Exploratory Testing 3.0 by James Bach and Michael Bolton - "Because we now define all testing as exploratory. Our definition of testing is now this: 'Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes: questioning, study, modeling, observation and inference, output checking, etc.'"
  • Coverage Is Not Strongly Correlated With Test Suite Effectiveness by Laura Inozemtseva and Reid Holmes - "Our results suggest that coverage, while useful for identifying under-tested parts of a program, should not be used as a quality target because it is not a good indicator of test suite effectiveness. "
  • How Software Testers Can Teach, Share and Play with Others by Justin Rohrman - Software testers "bring a varied skill set to software development organizations — math, experiment design, modeling, and critical thinking."
  • Who Killed My Battery: Analyzing Mobile Browser Energy Consumption - " dynamic Javascript requests can greatly increase the cost of rendering the page... by modifying scripts on the Wikipedia mobile site we reduced by 30% the energy needed to download and render Wikipedia pages with no change to the user experience."

By: John Hunter on Apr 27, 2016

Categories: Roundup, Software Testing

Begin with the “Goldilocks Rule” in mind to identify how much detail is appropriate.

unnamed

If your tests cover a large scope, as in a set of end-to-end tests of a process, you focus first on entering only the most important elements of that process into Hexawise. If your tests cover a small scope, as in tests that focus on a few items on a single screen, the amount of detail you will want to include in Hexawise is higher. Learn more about the Goldilocks Rule.

 

Imagine explaining your System Under Test to someone’s mother. Start with 5 things that may change in each test

Suggestion: do not start this process with detailed Requirements or Technical Specifications. Instead, start with your basic common sense description of some things that would change from test to test.

  • If you were explaining the application to someone’s mother, how would you explain what it does in 2 minutes?
  • What kinds of things would be important to vary from test to test?
    • Hardware and software configurations?
    • User types?
    • Different actions that a user might take?
  • Identify 5 things that change from test to test and turn those 5 things into your first Parameters (see the sketch after this list).
    • How might those things change? Add one or more Values for each Parameter.
    • At this point general descriptions might be fine (e.g., SUVs or Economy cars rather than a specific Toyota Corolla).
    • Remember that, where possible, you should avoid creating long lists of values.
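As a minimal sketch of what those first Parameters and Values might look like (the names and values here are entirely hypothetical, not taken from any real plan), a simple model could be:

# Hypothetical first draft of a System Under Test model: five things that
# change from test to test, each with a handful of broad values.
parameters = {
  'Hardware'      => ['Laptop', 'Tablet', 'Smartphone'],
  'Browser'       => ['Chrome', 'Firefox', 'Safari'],
  'User type'     => ['New user', 'Returning user', 'Admin'],
  'Vehicle class' => ['Economy', 'SUV', 'Luxury'],   # general descriptions, not specific models
  'Payment'       => ['Credit card', 'Voucher']
}

parameters.each { |name, values| puts "#{name}: #{values.join(', ')}" }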

 

Create a draft set of tests, assess obvious big gaps in the tests, and start filling them.

What obvious types of scenarios are missing? Add parameters and values as necessary to fill those big, obvious holes.

 

Create tests again, assess whether you’re covering necessary business rules and requirements.

If your tests are not yet testing a business rule or Requirement that you want to test:

  • Consider adding a Parameter or Value
  • Consider adding a specific combination of values to be tested together in the requirements tab

 

Reduce scope if the test plan is too complex

  • Consider cutting the scope of the plan (e.g., create two different plans with largely similar parameters – one for regular users and one for special users – instead of one big plan which tries to “do it all.”)
  • Consider changing the way you are describing “hard coded” values. Instead of “iPhone 4S with International roaming” (which might not be a valid option after the first part of the draft test case suggests a transaction for a phone for corporate customers from a Northern location responding to the special holiday offer…), consider using descriptive parameters and values along the lines of “from the options of phones available at this point in the test case, select an option that meets as many of these conditions as you can…”

 

Consider adding additional details into your plan from other sources.

Other sources for test ideas could be:

J Michael Hunter’s “You Are Not Done Yet”
Elisabeth Hendrickson’s Test Heuristics Cheat Sheet
Hans Buwalda’s Soap Opera Testing

By: John Hunter on Jun 17, 2014

Categories: Combinatorial Software Testing, Software Testing

The Hexawise Software Testing blog carnival focuses on sharing interesting and useful blogposts related to software testing.

 

  • The Zen of Application Test Suites by Curtis “Ovid” Poe – “This document is about testing applications — it’s not about how to write tests. Application test suites require a different, more disciplined approach than library test suites. I describe common misfeatures experienced in large application test suites and follow with recommendations on best practices.”

  • The Software Tester’s Easter Egg Hunt by Ben Austin – “The testing industry will benefit greatly if more people follow Holland’s example and prioritize critical thinking over the ability to write test scripts. That said, it is equally important to recognize that scripting tools, when used in the situations they’re intended for, can be great time-savers that can enable more thoughtful, context-driven exploratory testing.”

  • “Grapefruit Juice Bugs” – A New Term for a Surprisingly Common Type of Surprising Bugs by Justin Hunter – “Like grapefruit juice’s impact on prescription drugs, software testing involves critical interactions between different parts of the system. And risks exist when these different parts interact with one another.”

  • Testing at Airbnb by Lou Kosak – “Building good habits around testing is hard, especially at an established company. One of the biggest lessons I’ve learned this year is that as long as you have a team that’s open to the idea, it’s always possible to start testing (even if you’ve got a six year old monolithic Rails app to contend with). Once you have a decent test suite in place, you can actually start refactoring your legacy code, and from there anything is possible. The trick is just to get started.”

  • Disruptive Testing – James Bach Interviewed by Rodney Urquhart – “the future I am helping to build is about systematically training up skilled testers, some of whom but not all with coding skills, so that a small number of testers can do– or coordinate to be done– all the testing that a large project might need. A good future for testing would be one with a lot fewer “testers” but each one of those testers being passionate about his craft.”

  • The role of a Test Manager in agile by Katrina Clokie – “In a cross-skilled team, the agile tester must ensure that the quality of testing is good regardless of who in the team is completing it. The tester becomes the spokesperson for collaborative testing practices, and provides coaching via peer reviews or workshops to those without a testing background.”

  • Speaking to Your Business Using Measurements by Justin Rohrman – “In my experience, no one measure did a great job of telling the story about my software ecosystem. I’ve been deceived by groups of measures, too, because I misunderstood their weaknesses. If we are so easily deceived by measurements, imagine what happens when we send them off to others who need quick, high-level information.”

 

unnamed

Photo in Ranakpur India by Justin Hunter.

 

  • Shine a light by Rob Lambert – “Tours, personas and a variety of other test ideas give you a way of re-shining your light. Use these ideas to see the product in different ways, but never forget that it’s often time that is against you. And time is one of the hardest commodities to argue for during your testing phase.”

  • Using mind-mapping software as a visual test management tool by Aaron Hodder – “I like to employ a style of software testing that emphasises the personal freedom and responsibility of the individual tester to continually optimise the quality of his/her work by treating test-related learning, test design, test execution and test result interpretation as mutually supportive activities that run in parallel throughout the project. When performed by a skilled tester, this approach yields valuable and consistent results.”

  • Helpful Tips for Hiring Better Testers by Isaac Howard – “I looked at the good testers around me and tried to identify the “whys” of their success. All of them were driven to learn and capable of adapting to change. If they didn’t know a tool or a tech, they learned it. Because under the hood, testing is learning and relearning software everyday. The following are seven changes I made to my interviewing process.”

By: John Hunter on Jun 10, 2014

Categories: Software Testing

Justin posted this to the Hexawise Twitter feed

cave-question

It sparked some interesting and sometimes humorous discussion.

cave-exploration

The parallels to software testing are easy to see. In order to learn useful information we need to understand what the purpose of the visit to the cave is (or what we are seeking to learn in software testing). Normally the most insight can be gained by adjusting what you seek to learn based on what you find. As George Box said

Always remember the process of discovery is iterative. The results of each stage of investigation generating new questions to be answered during the next.

Some "software testing" amounts to software checking: confirming that a specific known result is as we expect. This is important for confirming that the software continues to work as expected as we make changes to the code (due to the complexity of software, unintended consequences of changes could lead to problems if we didn't check). This can, as described in the tweet, take the form of pre-determined steps all the way from "0:00 Step 1 - Turn on flashlight" to "4:59 Step N - Exit". But checking is only a portion of what needs to be done.

Exploratory testing relies on the brain, experience and insight of a software tester to learn about the state of the software; exploratory testing seeks to go beyond what pre-defined checking can illuminate. In order to explore you need to understand the aim of the mission (what is important to learn) and you need to be flexible, adjusting based on your understanding of the mission and the details you learn as you take your journey through the cave or through the software. Exploratory software testing will begin with ideas of what areas you wish to gain an understanding of, but it will provide quite a bit of flexibility for the path that learning takes. The explorer will adjust based on what they learn and may well find themselves following paths they had not thought of prior to starting their exploration.

 

Related: Maximizing Software Tester Value by Letting Them Spend More Time Thinking - Rapid Software Testing Overview - Software Testers Are Test Pilots - What is Exploratory Testing? by James Bach

By: John Hunter on Apr 8, 2014

Categories: Exploratory Testing, Scripted Software Testing, Software Testing, Testing Checklists

Some of those using Hexawise use Gherkin as their testing framework. Gherkin is based on a given [a], when [b] --> then [c] format. The idea is that this helps make communication clear and makes sure business rules are understood properly. Portions of this post may be a bit confusing for new Hexawise users; links are provided for more details on various topics. But if you don't need to create output for Gherkin and you are confused, you can just skip this post.

A simple Gherkin scenario: Making an ATM withdrawal

Given a regular account
  And the account was originally opened at Goliath National
  And the account has a balance of $500 
When using a Goliath National Bank ATM
  And making a withdrawal of $200 
Then the withdrawal should be handled appropriately 

Hexawise users want to be able to specify the parameters (used in given and when statements) and then import the set of Hexawise generated test cases into a Gherkin style output.

In this example we will use Hexawise sample test plan (Gherkin example), which you can access in your Hexawise account.

I'll get into how to export the Hexawise created test plans so they can be used to create Gherkin data tables below (we do this ourselves at Hexawise).

In the then field we default to an expected value of "the withdrawal should be handled appropriately." This is something that may benefit from some explanation.

If we want to provide exact details of what happens for every variation of parameter values, the expected results for each test script have to be manually created. That creates a great deal of work that has very little value. And it is an expensive way to manage for the long term, as each of those expected results has to be updated every time the related behavior changes. So in general using a "behaves as expected" default value is best, and then providing extra details when worthwhile.

For some people, this way of thinking can be a bit difficult to take in at first and they have to keep reminding themselves how to best use Hexawise to improve efficiency and effectiveness.

enter-default-expected-value

To enter the default expected value mouse-over the final step in the auto scripts screen. When you mouse over that step you will see the "Add Expected Results" link. Click that and add your expected result text.

expected-value-entry

The expected value entered on the last step with no conditions (the when drop-down box is blank) will be the default value used for the export (and therefore the one imported into Gherkin).

In those cases where providing special notes to testers is deemed worth the extra effort, Hexawise has 2 ways of doing this. In the event a special expected value exists for the particular conditions in an individual test case, that special expected value content will be exported (and therefore used for Gherkin).

Conditional expected results can be entered using the auto scripts feature.

Or we can use the requirements feature when we want to require a specific set of parameter values to be tested. If we choose 2-way coverage (the default, pairwise coverage) every pair of parameter values will be tested at least once.

But if we wanted a specific set of, say, 3 exact parameter values ([account type] = VIP, [withdrawal ATM] = bank-owned ATM, [withdrawal amount] = $600) then we need to include that as a requirement. Each required test script added also includes the option to include an expected result. The sample plan includes a required test case with those parameters and an expected result of "The normal limit of $400 is raised to $600 in the special case of a VIP account using a Goliath National Bank owned ATM."

So the most effective way to use Hexawise to create a pairwise (or higher strength) test plan for Gherkin data tables is to have the then case be similar to "behaves as expected," and, when there is a need for special expected result details, to use the auto script or requirements features to include those details. Doing so will result in the expected result entered for that special case being the value used in the Gherkin table for then.

When you click the auto script button the tests are generated, and you can download them using the export icon.

autoscripts-export

Then select the option to download as a csv file.

script-export-options

You will download a zip file that you can then unzip to get 2 folders with various files. The file you want to use for this is the combinations.txt file in the csv directory.

The Ruby code we use to convert the comma-separated values into the pipe-delimited format (|) used for Gherkin is:

#!/usr/bin/env ruby
require 'csv'

# Read the exported combinations file (adjust the filename/path to match your export).
tests = CSV.read("combinations.csv")

# Turn each test row into a pipe-delimited Gherkin table row, skipping the first column.
table = []
tests.each do |test|
  table << "| " + test[1..-1].join(" | ") + " |\n"
end

IO.write("gherkin.txt", table.join)

Of course, you can use whatever method to convert the format you wish, this is just what we use. See this explanation for a few more details on the process.
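For illustration, the converted rows can be pasted under the Examples header of a Gherkin Scenario Outline (you supply the header row naming your parameters, since the script emits only data rows). The column headers and values below are hypothetical, loosely following the ATM example above; your rows will match the parameters in your own plan:

Scenario Outline: ATM withdrawals
  Given a <account type> account
    And the account has a balance of <balance>
  When using a <withdrawal ATM>
    And making a withdrawal of <withdrawal amount>
  Then <expected result>

  Examples:
    | account type | balance | withdrawal ATM            | withdrawal amount | expected result                                |
    | regular      | $500    | Goliath National Bank ATM | $200              | the withdrawal should be handled appropriately |
    | VIP          | $500    | bank-owned ATM            | $600              | the normal limit of $400 is raised to $600     |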

Now you have your Gherkin file to use however you wish. And as the code is changed over time (perhaps adding parameter value options, new parameters, etc.) you can just regenerate the test plan and export it. Then convert it and the updated Gherkin test plan is available.

 

Related: Create a Risk-based Testing Plan With Extra Coverage on Higher Priority Areas - Hexawise Tip: Using Value Expansions and Value Pairs to Handle Dependent Values - Designing a Test Plan with Dependent Parameter Values

By: John Hunter on Mar 27, 2014

Categories: Hexawise test case generating tool, Hexawise tips, Scripted Software Testing, Software Testing Efficiency, Testing Strategies

The process used to hire employees is inefficient in general and even more inefficient for knowledge work. Justin Hunter, Hexawise CEO, posted the following tweet:

The labor market is highly inefficient for software testers. Many excellent testers are undervalued relative to average testers. Agree?

 

The tweet sparked quite a few responses:

inefficient-job-market-tweet

I think there are several reasons for why the job market is inefficient in general, and for why it is even more inefficient for software testing than for most jobs.

 

  • Often, how companies go about hiring people is less about finding the best people for the organization and more about following a process that the organization has created. Without intending to, people can become more concerned about following procedural rules than about finding the best people.

  • The hiring process is often created much like software checking: a bunch of simple things to check - not because doing so is actually useful but because simple procedural checks are easy to verify. So organizations require a college degree (and maybe even require a specific major). And they will use keywords to select or reject applicants. Or require certification or experience with a specific tool. Often the checklist used to disqualify people contains items that might be useful indicators but shouldn't be used as barriers; however, it is really easy for people that don't understand the work to apply the rules in the checklist to filter the list of applicants.

  • It is very hard to hire software testers well when those doing the hiring don't understand the role software testing should play. Most organizations don't understand, so they hire for software checkers. They, of course, don't value people that could provide much more value (software testers that go far beyond checking). The weakness of hiring without understanding the work is common for knowledge work positions, and it is likely even more problematic for software testing because the role is even more poorly understood than most knowledge work.

 

And there are plenty more reasons for the inefficient market.

Here are few ideas that can help improve the process:

  • Spend time to understand and document what your organization seeks to gain from new hires.

  • Deemphasize HR's role in the talent evaluation process and eliminate dysfunctional processes that HR may have instituted. Talent evaluation should be done by people that understand the work that needs to get done. HR can be useful in providing guidance on legal and company-decided policies for hiring. Don't have people that can't evaluate the difference between great testers and good testers decide who should be hired or what salary is acceptable. Incidentally, years of experience, certifications, degrees, past salary and most anything else HR departments routinely use are often not correlated to the value a potential employee brings.

  • A wonderful idea, though a bit of a challenge in most organizations, is to use job auditions. Have the people actually do the job to figure out if they can do what you need or not (work on a project for a couple weeks, for example). This has become more common in the last 10 years but is still rare.

  • I also believe you are better off hiring for somewhat loose job descriptions, if possible, and then adjusting the job to who you hire. That way you can maximize the benefit to the organization based on the people you have. At Hexawise, for example, most of the people we hire have strengths in more than one "job description" area. Developers with strong UI skills, for instance, are encouraged to make regular contributions in both areas.

  • Creating a rewarding work environment helps (this is a long-term process). One of the challenges in getting great people is they are often not interested in working for dysfunctional organizations. If you build up a strong testing reputation, great testers will seek out opportunities to work for you, and when you approach great testers they will be more likely to listen. This also reduces turnover, and while that may not seem to relate to the hiring process, it does (one reason we hire so poorly is we don't have time to do it right, which is partly because we have to do so much of it).

  • Having employees participate in user groups and attending conferences can help your organization network in the testing community. And this can help when you need to hire. But if your organization isn't a great one for testers to work in, they may well leave for more attractive organizations. The "solution" to this risk is not to stunt the development of your staff, but to improve the work environment so testers want to work for your organization.

 

Great quote from Dee Hock, founder of Visa:

Hire and promote first on the basis of integrity; second, motivation; third, capacity; fourth, understanding; fifth, knowledge; and last and least, experience. Without integrity, motivation is dangerous; without motivation, capacity is impotent; without capacity, understanding is limited; without understanding, knowledge is meaningless; without knowledge, experience is blind. Experience is easy to provide and quickly put to good use by people with all the other qualities.

Please share your thoughts and suggestions on how to improve the hiring process.

 

Related: Finding, and Keeping, Good IT People - Improving the Recruitment Process - Six Tips for Your Software Testing Career - Understanding How to Manage Geeks - People: Team Members or Costs - Scores of Workers at Amazon are Deputized to Vet Job Candidates and Ensure Cultural Fit

By: John Hunter on Jan 14, 2014

Categories: Checklists, Software Testing, Career

The Hexawise Software Testing blog carnival focuses on sharing interesting and useful blog posts related to software testing.

 

  • Using mind-mapping software as a visual test management tool by Aaron Hodder - "I want to be able to give and receive as much information as I can in the limited amount of time I have and communicate it in a way that is respectful of others' time and resources. These are my values and what I think constitutes responsible testing."

  • Healthcare.gov and the Tyranny of the Innocents by James Bach - "Management created the conditions whereby this project was 'delivered' in a non-working state. Not like the Dreamliner. The 787 had some serious glitches, and Boeing needs to shape that up. What I’m talking about is boarding an aircraft for a long trip only to be told by the captain 'Well, folks it looks like we will be stuck here at the gate for a little while. Maintenance needs to install our wings and engines. I don’t know much about aircraft building, but I promise we will be flying by November 30th. Have some pretzels while you wait.'"

 

jungle-bridge

Rope bridge in the jungle by Justin Hunter

 

  • Software Testers Are Test Pilots by John Hunter - "Software testers should be test pilots. Too many people think software testing is the pre-flight checklist an airline pilot uses."

  • Where to begin? by Katrina Clokie - "Then you need to grab your Product Owner and anyone else with an interest in testing (perhaps architect, project manager or business analyst, dependent on your team). I'm not sure what your environment is like, usually I'd book an hour meeting to do this, print out my mind map on an A3 page and take it in to a meeting room with sticky notes and pens. First tackle anything that you've left a question mark next to, so that you've fleshed out the entire model, then get them to prioritise their top 5 things that they want you to test based on everything that you could do."

  • Being a Software Tester in Scrum by Dave McNulla - "Pairing on development and testing strengthens both team members. With people crossing disciplines, they improve understanding of the product, the code, and what other stakeholders find important."

  • Stop Writing Code You Can’t Yet Test by Dennis Stevens - "The goal is not to write code faster. The goal is to produce valuable, working, tested, remediated code faster. The most expensive thing developers can do is write code that doesn’t produce something needed by anyone (product, learning, etc). The second most expensive thing developers can do is write code that can’t be tested right away."

  • Is Healthcare.gov security now fixed? by Ben Simo - "I am very happy that the most egregious issue was immediately fixed. Others issues remain. The vulnerabilities I've listed above are defects that should not make it to production. It doesn't take a security expert or “super hacker” to exploit these vulnerabilities. This is basic web security. Most of these are the kinds of issues that competent web developers try to avoid; and in the rare case that they are created, are usually found by competent testers."

  • Embracing Chaos Testing Helps Create Near-Perfect Clouds - "Chaos Monkey works on the simple premise that if we need to design for high availability, we should design for failure. To design for failure, there should be ways to simulate failures as they would happen in real-world situations. This is exactly what a Chaos Monkey helps achieve in a cloud setup.
    Netflix recently made the source code of Chaos Monkey (and other Simian Army services) open source and announced that more such monkeys will be made available to the community."

  • Bugs in UK Post Office System had Dire Consequences - "A vocal minority of sub-postmasters have claimed for years that they were wrongly accused of theft after their Post Office computers apparently notified them of shortages that sometimes amounted to tens of thousands of pounds. They were forced to pay in the missing amounts themselves, lost their contracts and in some cases went to jail. Second Sight said the Post Office's initial investigation failed at first to identify the root cause of the problems. The report says more help should have been given to sub-postmasters, who had no way of defending themselves."

  • Traceability Matrix: Myth and Tricks by Adam Howard - "And this is where we get to the crux of the problem with traceability matrices. They are too simplistic a representation of an impossibly complex thing. They reduce testing to a series of one to one relationships between intangible ideas. They allow you to place a number against testing. A percentage complete figure. What they do not do is convey the story of the testing."

  • Six Tips for Your Software Testing Career by John Hunter - "Read what software testing experts have written. It’s surprising how few software testers have read books and articles about software testing. Here are some authors (of books, articles and blogs) that I've found particularly useful..."

By: John Hunter on Dec 16, 2013

Categories: Software Testing

We have created a new site to highlight Hexawise videos on combinatorial, pairwise + orthogonal array software testing. We have posted videos on a variety of software testing topics including: selecting appropriate test inputs for pairwise and combinatorial software test design, how to select the proper inputs to create a pairwise test plan, using value expansions for values in the same equivalence classes.

Here is a video with an introduction to Hexawise:

 

 

Subscribe to the Hexawise TV blog. And if you haven't subscribed to the RSS feed for the main Hexawise blog, do so now.

By: John Hunter on Nov 20, 2013

Categories: Combinatorial Testing, Hexawise test case generating tool, Multi-variate Testing, Pairwise Testing, Software Testing Presentations, Testing Case Studies, Testing Strategies, Training, Hexawise tips

Software testers should be test pilots. Too many people think software testing is the pre-flight checklist an airline pilot uses.

 

The checklists airline pilots use before each flight are critical. Checklists are extremely valuable tools that help assure steps in a process are followed. Checklists are valuable in many professions. The Checklist – If something so simple can transform intensive care, what else can it do? by Atul Gawande

 

Sick people are phenomenally more various than airplanes. A study of forty-one thousand trauma patients—just trauma patients—found that they had 1,224 different injury-related diagnoses in 32,261 unique combinations for teams to attend to. That’s like having 32,261 kinds of airplane to land. Mapping out the proper steps for each is not possible, and physicians have been skeptical that a piece of paper with a bunch of little boxes would improve matters much. In 2001, though, a critical-care specialist at Johns Hopkins Hospital named Peter Pronovost decided to give it a try. … Pronovost and his colleagues monitored what happened for a year afterward. The results were so dramatic that they weren’t sure whether to believe them: the ten-day line-infection rate went from eleven per cent to zero. So they followed patients for fifteen more months. Only two line infections occurred during the entire period. They calculated that, in this one hospital, the checklist had prevented forty-three infections and eight deaths, and saved two million dollars in costs.

 

Checklists are extremely useful in software development. And using checklist-type automated tests is a valuable part of maintaining and developing software. But those pass-fail tests are equivalent to checklists - they provide a standardized way to check that planned checks pass. They are not equivalent to thoughtful testing by a software testing professional.
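As a minimal sketch of what checklist-style "checking" looks like in code (the function and expected values here are hypothetical stand-ins, not real application code), each check mechanically verifies a pre-decided answer:

require 'minitest/autorun'

# A trivial stand-in for real application code.
def withdrawal_allowed?(balance, amount)
  amount <= balance
end

# Checklist-style "checking": the expected outcomes are decided in advance
# and verified mechanically; no judgment is applied while the checks run.
class WithdrawalCheck < Minitest::Test
  def test_withdrawal_within_balance_is_allowed
    assert withdrawal_allowed?(500, 200)
  end

  def test_withdrawal_over_balance_is_rejected
    refute withdrawal_allowed?(500, 600)
  end
end

Checks like these are valuable, but they only confirm what we already thought to ask; testing, as described below, goes looking for what we did not think to ask.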

I have been learning about software testing for the last few years. This distinction between testing and checking software was not one I had before. Reading experts in the field, especially James Bach and Michael Bolton, is where I learned about this idea.

 

Testing is the process of evaluating a product by learning about it through experimentation, which includes to some degree: questioning, study, modeling, observation and inference.

(A test is an instance of testing.)

Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.

 

I think this is a valuable distinction to understand when looking to produce reliable and useful software. Both are necessary. Both are done too little in practice. But testing (as defined above) is especially underused - in the last 5 years checking has been increasing significantly, which is good. But now we really need to focus on software testing - thoughtful experimenting.

 

Related: Mistake Proofing the Deployment of Software Code - Improving Software Development with Automated Tests - Rapid Software Testing Overview Webcast by James Bach - Checklists in Software Development

By: John Hunter on Nov 13, 2013

Categories: Checklists, Software Testing, Testing Checklists, Testing Strategies

Hexawise allows you to adjust testing coverage to focus more thorough coverage on selected, high-priority areas. Mixed strength test plans allow you to select different levels of coverage for different parameters.

Increasing from pairwise to "trips" (3-way) coverage increases the size of the test plan so that bugs that are the result of 3 parameters interacting can be found. That is a good thing. But the tradeoff is that it requires more tests to catch those interactions.

The mixed-strength option that Hexawise provides allows you to select a higher coverage level for some parameters in your test plan. That lets you balance increased test thoroughness against the workload created by additional tests.
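As a rough sketch of why raising the interaction strength adds tests (the parameter sizes below are hypothetical, and the counts are coverage targets, not the number of tests a generator will produce), the number of value combinations that must each appear in at least one test grows quickly from 2-way to 3-way:

# Hypothetical plan: eight parameters with these numbers of values each.
value_counts = [4, 3, 3, 3, 2, 2, 2, 2]

# Count the distinct n-way value combinations that n-way coverage must include.
def combinations_to_cover(value_counts, n)
  value_counts.combination(n).sum { |sizes| sizes.inject(:*) }
end

puts combinations_to_cover(value_counts, 2)  # 191 value pairs for pairwise coverage
puts combinations_to_cover(value_counts, 3)  # 983 value triples for 3-way coverage

Each generated test covers many of these combinations at once, but the larger target is why 3-way plans contain more tests; mixed-strength plans limit the higher target to the parameters that matter most.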

 

hexawise-mixed-strength

 

See our help section for more details on how to create a risk-based testing plan that focuses more coverage on higher priority areas.

As that example shows, Hexawise allows you to focus additional thoroughness on the 3 highest priority parameters with just 120 tests while also providing full pairwise coverage on all factors. Mixed strength test plans are a great tool to provide extra benefit to your test plans.

 

Related: How Not to Design Pairwise Software Tests - How to Model and Test CRUD Functionality - Designing a Test Plan with Dependent Parameter Values

By: John Hunter on Nov 6, 2013

Categories: Combinatorial Testing, Efficiency, Hexawise test case generating tool, Hexawise tips, Software Testing Efficiency, Testing Strategies

When we built our test case design tool, we thought:

  1. When used thoughtfully, this test design approach is powerful.

  2. The kinds of things going on "underneath the covers," like the optimization algorithms and combinatorial coverage, risk confusing and alienating potential users before they get started.

  3. Designing an easy-to-use interface is absolutely critical. (We kept the "delighter" in our MVP release.)

  4. As we have continued to evolve, we've kept focusing on design. And, having learned that customers don't find the tool itself hard to use but do sometimes find the test design principles confusing, we've focused on adding self-paced training modules into the tool (in addition to training and help web resources and Hexawise TV tutorials).

 


View of the screen showing a user's progress through the integrated learning modules in Hexawise

 

Our original hope was that if we focused on design (and iterated based on user feedback), Hexawise would "sell itself," and that users would tell their friends and drive adoption through word of mouth. That's exactly what we've seen.

We did learn that we needed to include more help getting started with the test design principles needed to successfully create tests. We knew that was a bit of a hurdle, but it proved to be more of a hurdle than we anticipated, so we have put more resources into helping users get over that initial resistance. And we have increased the assistance we give clients on how to think differently about creating test plans.

A recent interview with Andy Budd takes an interesting look at the role of design in startups.

Andy Budd: Design is often one of the few competitive advantages that people have. So I think it’s important to have design at the heart of the process if you want to cross that chasm and have a hugely successful product. ... Des Traynor: If a product is bought, it means that users are so attracted to it, that they’ll literally chase it down and queue up for it. If it’s sold, it means that there are people on commission trying to force it into customer’s hands. And I find design can often be the key difference between those two types of models for business.

Andy: There’s a lot of logic in spending less money on marketing and sales, and more money on creating a brilliantly delightful product. If you do something that people really, really want, they will tell their friends.

 

As W. Edwards Deming said in Out of the Crisis:

Profit in business comes from repeat customers, customers that boast about your product and service, and that bring friends with them

 

The benefits of delighting customers so much that they help promote your product are enormous. This result is about the best one any business can hope for. And great design is critical in creating a user experience that delights users so much they want to share it with their friends and colleagues.

The article also discusses one of the difficult decision points for a software startup. The minimum viable product (MVP) is a great idea to help test out what customers actually want (not just what they say they want). The MVP is a very useful concept (pushed into many people's consciousness by the popularity of lean startup and agile software development).

An MVP is really about testing the marketplace. The aim is to get a working product in people's hands and to learn from what happens. If the user experience is a big part of what you are offering (and normally it should be), a poor user experience (UX) on an MVP is not a great way to learn about the market for what you are thinking of offering.

In my opinion, adding features to existing software can be tested in an MVP way with less concern for UX than completely new software, though I imagine some would want great UX for this case too. My experience is that users who appreciate your product can understand the rough UX in a mock up and separate the usefulness from the somewhat awkward UX. This is especially true, for example, with internal software applications where the developers can directly interact with the users (and you can select people you know who can separate the awkward temporary UX from the usefulness of a new feature).

 

Related: The two weeks of on-site visits with testing teams proved to be great way to: (a) reconnect with customers, (b) get actionable input about what users like / don’t like about our tool, (c) identify new ways we can continue to refine our tool, and even (d) understand a couple unexpected ways teams are using it. - Automatically Generating Expected Results for Tests in Hexawise - Pairwise and Combinatorial Software Testing in Agile Projects

By: John Hunter on Sep 17, 2013

Categories: Experimenting

Software testing is becoming more critical as the proportion of our economic activity that is dependent on software (and often very complex software) continues to grow. There seems to be a slowly growing understanding of the importance of software testing (though it still remains an under-appreciated area). My belief is that skilled software testers will be appreciated more with time and that the opportunities for expert software testers will expand.

Here are a few career tips for software testers.

 

  • Read what software testing experts have written. It's surprising how few software testers have read books and articles about software testing. Here are some authors (of books, articles and blogs) that I've found particularly useful. Feel free to suggest other software testing authors in the comments.

James Bach

Cem Kaner

Lisa Crispin

Michael Bolton

Lanette Creamer

Jerry Weinberg

Keith Klain

James Whittaker - posts while at Google and his posts with his new employer, Microsoft

Pradeep Soundararajan

Elisabeth Hendrickson

Ajay Balamurugadas

Matt Heusser

our own, Justin Hunter

 

  • Join the community. You'll learn a lot as a lurker, and even more if you interact with others. Software Testing Club is one good option. Again, following those experts (listed above) is useful; engaging with them on their blogs and on Twitter is better. The software testing community is open and welcoming. In addition, see: 29 Testers to Follow on Twitter and Top 20 Software Testing Tweeps. Interacting with other testers who truly care about experimenting, learning, and sharing lessons learned is energizing.

 

  • Develop your communication skills. Communication is as critical to a career in software testing as it is to most professional careers. Your success will be determined by how well you communicate with others. Four critical relationships are with: software developers, your supervisors, users of the software and other software testers. Unfortunately, in many organizations, managers and corporate structures restrict your communication with some of these groups. You may have to work with what you are allowed, but if you don't have frequent, direct communication with those four groups, you won't be able to be as effective.
    Work on developing your communication skills. Given the nature of software testing, two particular types of communication - describing problems ("what is it?") and explaining how significant a problem is ("why should we fix it?") - are much more common than in many other fields. Learning how to communicate these things clearly and without making people defensive is of extra importance for software testers.
    A great deal of your communication will be in writing, and developing your ability to communicate clearly will prove valuable to your continued growth in your career.
    Writing your own blog is a great way to further your career. You will learn a great deal by writing about software testing. You will also develop your writing ability merely by writing more. And you will create a personal brand that will grow your network of contacts. It also provides a way for those possibly interested in hiring you to learn more. We all know hiring is often done outside the formal job announcement and job application process. By making a name for yourself in the software testing community you will increase the chances of being recruited for jobs.

 

  • Do what you love. Your career will be much more rewarding if you find something you love to do. You will most likely have parts of your job you could do without, but finding something you have a passion for makes all the difference in the world. If you don't have a passion for software testing, you are likely better off finding something you are passionate about and pursuing a career in that field.

Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven’t found it yet, keep looking. Don’t settle.

Steve Jobs

 

  • Practice and learn new techniques and ideas. James Bach provides such opportunities. Also, the Hexawise tool lets you easily and quickly try out different scenarios and learn by doing. We also provide guided training, help and webcasts explaining software testing concepts and how to apply them using Hexawise. We include a pathway that guides you through the process of learning about software testing, with learning modules, practical experience and tests along the way to make sure you learn what is important. And once you reach the highest level and become a Hexawise guru, we offer a community experience to allow all those reaching that level to share their experience and learn from each other.

 

  • Go to good software testing conferences. I've heard great things about CAST (by the Association for Software Testing), Test Bash, and Let's Test in particular. Going to conferences is energizing because while you're there, you're surrounded by the minority of people in this industry who are passionate about software testing. Justin, Hexawise founder and CEO, will be presenting at CAST next week, August 26-28 in Madison, Wisconsin.

 

Related: Software Testing and Career Advice by Keith Klain - Looking at the Empirical Evidence for Using Pairwise and Combinatorial Software Testing - Maximizing Software Tester Value by Letting Them Spend More Time Thinking

By: John Hunter on Aug 22, 2013

Categories: Software Testing

As I looked at our administrative dashboard for Hexawise.com today I was struck by how diverse our users are. I was looking at the most active users in the last week and the top 11 users were from 8 different countries: USA, France, Norway, India, Israel, Spain, Thailand and Canada. The only country with more than 1 was the USA with 2 users from Florida and 1 from Wisconsin. Brazil, Belgium and the Russian Federation were also represented in the top 25.

If you look at the top 25 users in the last month, in addition to the countries above (except Belgium) 3 more countries are represented: China, Netherlands and Malaysia.


Visitors to the Hexawise web site in the last month by country.

 

Looking at our web site demographics the top countries of our visitors were: United States, India, Philippines, Australia, Brazil, United Kingdom, Israel, Malaysia, Italy and Netherlands. In the last month we have had visitors from 84 countries.

It is exciting to see the widespread use of Hexawise across the globe. The feedback on the upgrades included in Hexawise 2.0 has been very positive. We continue to get more and more users, which makes us happy: we believe we have created a valuable tool for software testers and it is exciting to get confirmation from users. Please share your experiences with us; knowing what you like is helpful, and we have made numerous enhancements based on user feedback.

 

Related: Empirical Evidence for Using Pairwise and Combinatorial Software Testing - Hexawise TV (webcasts on Hexawise and wise software testing practices) - Training material on Hexawise and software testing principles

By: John Hunter on Jul 17, 2013

Categories: Hexawise test case generating tool

The Hexawise Software Testing blog carnival focuses on sharing interesting and useful blog posts related to software testing.

 

  • T-Shaped Testers and their role in a team by Rob Lambert - "I believe that testers, actually – anyone, can contribute a lot more to the business than their standard role traditionally dictates. The tester’s critical and skeptical thinking can be used earlier in the process. Their other skills can be used to solve other problems within the business. Their role can stretch to include other aspects that intrigue them and keep them interested."

  • Testing triangles, pyramids and circles, and UAT by Allan Kelly - "Thus: UAT and Beta testing can only truly be performed by USERS (yes I am shouting). If you have professional testers performing it then it is in effect a form of System Testing.
    This also means UAT/Beta cannot be automated because it is about getting real life users to user the software and get their feedback. If users delegate the task to a machine then it is some other form of testing."

 


Photo by Justin Hunter, taken in South Africa.

 

  • Software Testing in Distributed Teams by Lisa Crispin - "Remote pairing is a great way to promote constant communication among multiple offices and to keep people working from home 'in the loop'.
    I often remote pair with a developer to specify test scenarios, do exploratory testing, or write automated test scripts. We use screen-sharing software that allows either person to control the mouse and keyboard. Video chat is best, but if bandwidth is a problem, audio will do. Make sure everyone has good quality headphones and microphone, and camera if possible."

  • Seven Kinds of Testers by James Bach - "I propose that there are at least seven different types of testers: administrative tester, technical tester, analytical tester, social tester, empathic tester, user, and developer. As I explain each type, I want you to understand this: These types are patterns, not prisons. They are clusters of heuristics; or in some cases, roles. Your style or situation may fit more than one of these patterns."

  • Which is Better, Orthogonal Array or Pairwise Software Testing? by John Hunter and Justin Hunter - "After more study I have concluded that: Pairwise is more efficient and effective than orthogonal arrays for software testing. Orthogonal Arrays are more efficient and effective for manufacturing, and agriculture, and advertising, and many other settings."

  • Experience Report: Pairing with a Programmer by Erik Brickarp - "We have different investigation methods. The programmer did low level investigations really well adding debug printouts, investigating code etc. while I did high level investigations really well checking general patterns, running additional scenarios etc. Not only did this make us avoid getting stuck by changing 'method' but also, my high level investigations benefited from his low level additions and vice versa."

  • I decided to evolve a faster test case by Ben Tilly - "I first wrote a script to run the program twice, and report how many things it found wrong on the second run. I wrote a second script that would take a record set, sample half of it randomly, and then run the first script.
    I wrote a third script that would take all records, run 4 copies of my test program, and then save the smaller record set that best demonstrated the bug. Wash, rinse, and repeat..."

  • Tacit and Explicit Knowledge and Exploratory Testing by John Stevenson - "It is time we started to recognise that testing is a tacit activity and requires testers to think both creativity and critically."

  • Tear Down the Wall by Alan Page - "There will always be a place for people who know testing to be valuable contributors to software development – but perhaps it’s time for all testing titles to go away?"

By: John Hunter on Jul 10, 2013

Categories: Software Testing

Here is a wonderful webcast that provides a very quick, and informative, overview of rapid software testing.

Software testing is when a person is winding around a space searching that space for important information.

 

James Bach starts by providing a definition of software testing to set the proper thinking for the overview.

Rapid software testing is a set of heuristics [and a set of skills]. Heuristics live at the border of explicit and tacit knowledge... Heuristics solve problems when they are under the control of a skilled human... It takes skill to use the heuristics effectively - to solve the problems of testing. Rapid software testing focuses on the tester... Tacit skills are developed through practice.

 

Automated software tests are useful but limited. In the context of rapid software testing only a human tester can do software testing (automated checks are defined as "software checking"). See his blog post: Testing and Checking Refined.

 

Related: People are better at figuring out interesting ideas to test. Generating a highly efficient, maximally varied, minimally repetitive set of tests based on a given set of test inputs is something computer algorithms are more effective at than a person. - Hexawise Lets Software Testers Spend More Time Focused on Important Testing Issues - 3 Strategies to Maximize Effectiveness of Your Tests

By: John Hunter on Jul 2, 2013

Categories: Software Testing, Testing Strategies

We have created the Hexawise Software Testing Glossary. The purpose is to provide some background on both general software testing terms and some Hexawise-specific details.

While we continue to develop, expand and improve the Hexawise software itself, we also realize that users' success with Hexawise rests not only on the tool itself but on the knowledge of those using the tool. The software testing glossary adds to the list of offerings we provide to help users. Other sources of help include sample test plans, training material (on software testing in general and Hexawise in particular), detailed help on specific topics, webcast tutorials on using Hexawise and this blog.

In Hexawise 2.0 we integrated the idea of building your expertise directly into the Hexawise application with specific actions, reading and practicums that guide users from novice to practitioner to expert to guru.


View for a Hexawise user that is currently a practitioner and is on their way to becoming an expert.

 

We aim to provide software services, and educational material, that help software testers focus their energy, insight, knowledge and time where it is most valuable, while Hexawise automates some tasks and performs others that are essentially impossible for people to do manually (such as creating the most effective test plan, with pairwise or better coverage, given the parameters and values determined by the tester).

Our help, training and webcast resources have been found to be very useful by Hexawise users. We hope the software testing glossary will also prove to be of value to Hexawise users.

 

Related: Maximizing Software Tester Value by Letting Them Spend More Time Thinking - Hexawise Tip: Using Value Expansions and Value Pairs to Handle Dependent Values - Maximize Test Coverage Efficiency And Minimize the Number of Tests Needed

By: John Hunter on Jun 24, 2013

Categories: Hexawise test case generating tool, Hexawise tips, Software Testing

The Expected Result feature in Hexawise provides 3 benefits in the software testing process. Specifically, it...

  • Saves you time documenting test scripts.

  • Makes it less likely for you to make a mistake when documenting Expected Results for your tests.

  • Makes maintaining sets of tests easy from release to release and/or in the face of requirements changes.

There are two different places you can insert auto-generated Expected Results into your tests: the "Auto-script" screen, and the "Requirements" screen. So which place should you use?

It may depend upon how many values are required to trigger the Expected Result you're documenting.

If you want to create an Expected Result that can be triggered by 0, 1, or 2 values, you can safely add your Expected Result using the Auto-script screen. If you're creating an Expected Result that can only be triggered when 3 or more specific values appear in the same test case, then - at least if you're creating a set of 2-way tests - you will probably want to add that particular Expected Result to the Requirements screen, not on the Auto-script screen. Why is that?

It's because if you use the Expected Result feature on the Auto-script screen, it is like telling the test generation engine "if you happen to see a test that matches the values I provided, then show this Expected Result text to the tester." This automates the process so the tester will be provided clear instructions on what to look for in cases that might otherwise be less clear to the tester. If the only way to trigger the Expected Result is for 3 or more specific values to appear together, however, a 2-way test plan may never happen to include that combination, so the Expected Result might never be shown.

Using the requirements feature, in contrast, is like telling the test generation engine "make sure that the specific combination of values I'm specifying here definitely appears together in the set of tests at least one time (and, if I tell you that there's an expected result associated with that scenario, please be sure to include it)." The requirements feature is used when you not only want to provide an expected result for a specific scenario but also want to require that the specific scenario be included in the generated test plan.
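The following sketch illustrates the difference in behavior described above. It is a conceptual illustration only, not Hexawise's implementation or API; the parameter names and values are hypothetical.

```python
def matches(test, trigger):
    return all(test.get(param) == value for param, value in trigger.items())

def autoscript_expected_result(tests, trigger, expected_text):
    """Auto-script style: annotate any test that happens to contain the trigger."""
    for test in tests:
        if matches(test, trigger):
            test["expected_result"] = expected_text

def requirement(tests, trigger, expected_text):
    """Requirements style: force the combination into the plan, then annotate it."""
    if not any(matches(test, trigger) for test in tests):
        # Simplified: a real tool would also fill in the remaining parameters.
        tests.append(dict(trigger))
    autoscript_expected_result(tests, trigger, expected_text)

# A 2-way plan fragment in which no single test contains all three trigger values.
tests = [
    {"Role": "Student", "Plan": "Gold", "Region": "EU"},
    {"Role": "Staff", "Plan": "Gold", "Region": "US"},
]
trigger = {"Role": "Student", "Plan": "Gold", "Region": "US"}  # needs 3 values together
autoscript_expected_result(tests, trigger, "Discount applied")  # may never fire
requirement(tests, trigger, "Discount applied")                 # always covered
```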

See our help page for more details: How to save test documentation time by automatically generating Expected Results in test scripts.

 

Related: How to Model and Test CRUD Functionality - 3 Strategies to Maximize Effectiveness of Your Tests

By: John Hunter on Jun 4, 2013

Categories: Hexawise test case generating tool, Hexawise tips, Software Testing

The Hexawise Software Testing carnival focuses on sharing interesting and useful blog posts related to software testing.

 

  • Testing and Checking Refined by James Bach and Michael Bolton - "a robust role for tools in testing must be embraced. As we work toward a future of skilled, powerful, and efficient testing, this requires a careful attention to both the human side and the mechanical side of the testing equation. Tools can help us in many ways far beyond the automation of checks. But in this, they necessarily play a supporting role to skilled humans; and the unskilled use of tools may have terrible consequences."

  • Bugs Spread Disease by Elisabeth Hendrickson - "Cancel all the bug triage meetings; spend the time instead preventing and fixing defects. Test early and often to find the bugs earlier. Fix bugs as soon as they’re identified."

 


Hagia Sophia in Istanbul, Turkey by Justin Hunter

 

  • Becoming a World-Class Tester by Ilari Henrik Aegerter - "World-class testing means following the urge to get to the bottom of things, not giving up until you have experienced enough value for yourself, thinking more expansively about the role of a tester, and thinking non-traditionally about what skills are required to thrive in the role."

  • Test Coverage Triage by Parimala Hariprasad - "My experience shows that a mind map based test design works great at this stage. Business teams will be thrilled to have a Visual Walkthrough of tests and provide inputs quickly. As a participant observer on several dozen IT projects, I have found out that testers’ personally walking them through the tests works really well."

  • What Does it Take to Change the Software Testing Industry? Courage! by Keith Klain - "According to Mark Twain, courage is not the absence of fear – but the mastery of it. There are people working in software testing all over the globe who are questioning long standing ways of working - some for the first time. Get yourself energized and get involved."

  • Explaining Exploratory Testing Relies On Good Notes. by Rob Lambert - "Being able to do good exploratory testing is one thing, being able to explain this testing (and the insights it brings) to the people that matter is a different set of skills. I believe many testers are ignoring and neglecting the communication side of their skills, and it’s a shame because it may be directly affecting the opportunities they have to do exploratory testing in the first place."

  • Human-Computer Cooperation by John Hunter - "people are better at figuring out interesting ideas to test. Once those are identified, those test conditions and other test ideas need to be combined together and put into tests. Generating a highly efficient, maximally varied, minimally repetitive set of tests based on a given set of test inputs is something computer algorithms are more effective at than a person."

  • Software testing can be sexy, too by Ole Lensmar - "What Klain, who serves on the board of the AST, and the others would like to do is not only bring those skills back home but increase the availability and accessibility of this kind of training and job opportunity. Ideally, colleges and universities would start offering majors in Software Testing so we can set young people on a path toward testing as a career."

  • Yet another future of testing post (YAFOTP) by Alan Page - "I think we’ll always need people to look at big end-to-end scenarios and determine how non-functional attributes (e.g. performance, privacy, usability, reliability, etc.) contribute to the user experience. Some of this evaluation will come from manually walking through scenarios), but there will be plenty of need for programmatic measurement and analysis as well..."

By: John Hunter on Apr 18, 2013

Categories: Software Testing

We are proud of the enhancements about to be delivered in version 2.0 of Hexawise (our software test plan generation solution). Hexawise is a software test design tool that helps teams test their software faster (by decreasing the time it takes to design and document tests and decreasing the amount of time it takes to execute tests) and more thoroughly (by scientifically packing as much coverage into each test as possible). We provide it as a software-as-a-service solution. Three of the most important enhancements in our dramatically improved, soon-to-be-released version are:

 

  • Requirements (forcing certain specific combinations to appear in the test plan)

  • Adding Expected Results, which saves time in test case documentation

  • Better auto-scripting, which also saves time in test case documentation

 

 

The embedded slide presentation above provides a graphic illustration of these features.

At the "Required" screen (short for "Requirements"), you will be able to add specific combinations of test conditions/test inputs to the tests that Hexawise generates. Tracing requirements to specific test scripts can be challenging, particularly as requirements change and sets of regression tests age. You'll find this feature helps make requirements traceability easier and less error-prone.

You can add Expected Results to the generated test scripts. This provides text stating exactly what the result should be, so that someone reviewing the result of the test can verify whether the expected result conditions were actually met. If Hexawise is the first test generation tool you've used, you might take this for granted and think that this is just how the world should be.

If you've used other test generation tools before finding Hexawise though, you might feel compelled to publicly declare your love of Hexawise and/or send gifts to the engineers and designers at Hexawise who created this great feature. We believe it's a feature unique to Hexawise, and it should save you huge amounts of time when you document your test scripts.

Auto-scripting was much appreciated by users, and we have enhanced this feature significantly in Hexawise 2.0. Our help system includes a thorough review of this (and many other features of Hexawise): Auto-Script test cases to quickly include detailed written tester instructions.

 

Related: How do I save test documentation time by automatically generating Expected Results in test scripts? - Pairwise and Combinatorial Software Testing in Agile Projects - Hexawise Tip: Using Value Expansions and Value Pairs to Handle Dependent Values - Maximizing Software Tester Value by Letting Them Spend More Time Thinking

By: John Hunter on Mar 21, 2013

Categories: Combinatorial Testing, Hexawise test case generating tool, Hexawise tips

Attempting to assess the relative benefits of more than 200 software development practices is not for the faint of heart. Context-specific considerations run the risk of confounding the conclusions at every turn. Even so, Capers Jones, a software development expert with dozens of years of experience and nearly twenty books related to software development to his credit, recently attempted the task. He's literally devoted decades of his career to assessing such things for clients. We're quite pleased with how using Hexawise fared in the analysis.

Scoring and Evaluating Software Methods, Practices and Results by Capers Jones (Vice President and CTO, Namcook Analytics) provides some great ideas on software project management. The article is based on Software Engineering Best Practices, with some new data taken from The Economics of Software Quality (two of the books Capers Jones has authored).

Software development, maintenance, and software management have dozens of methodologies and hundreds of tools available that are beneficial. In addition, there are quite a few methods and practices that have been shown to be harmful, based on depositions and court documents in litigation for software project failures.

In order to evaluate the effectiveness or harm of these numerous and disparate factors, a simple scoring method has been developed. The scoring method runs from +10 for maximum benefits to -10 for maximum harm.

The scoring method is based on quality and productivity improvements or losses compared to a mid-point. The mid point is traditional waterfall development carried out by projects at about level 1 on the Software Engineering Institute capability maturity model (CMMI) using low-level programming languages. Methods and practices that improve on this mid point are assigned positive scores, while methods and practices that show declines are assigned negative scores.

The data for the scoring comes from observations among about 150 Fortune 500 companies, some 50 smaller companies, and 30 government organizations. Negative scores also include data from 15 lawsuits.

 

The article provides guidance, based on the results achieved by many, and varied, organizations with respect to software projects.

finding and fixing bugs is overall the most expensive activity in software development. Quality leads and productivity follows. Attempts to improve productivity without improving quality first are not effective.

 

This is an extremely important point for business managers to understand. Those involved in software development professionally don't find this surprising. But business people often greatly underestimate the costs of maintaining and updating software. The costs of bugs introduced by fairly minor feature requests to a system that doesn't have good software test coverage or test plans often create far more trouble than business managers expect.

This is especially true because there is a high correlation between software applications that have poor software testing processes (including poor test coverage and poor or completely missing test plans) and those applications that were designed without long term maintenance in mind. Both deficiencies result from decisions made to minimize initial development costs and time. They both show a lack of appreciation for wise software engineering practices and software application project management.

The article discusses a complicating factor in assessing the most effective software development practices: the extremely wide differences in software engineering scope. Projects range from simple applications one software developer can create in a short period of time to massive applications requiring thousands of developer-years of effort.

In order to be considered a “best practice” a method or tool has to have some quantitative proof that it actually provides value in terms of quality improvement, productivity improvement, maintainability improvement, or some other tangible factors.

Looking at the situation from the other end, there are also methods, practices, and social issues have demonstrated that they are harmful and should always be avoided. ... Although the author’s book Software Engineering Best Practices dealt with methods and practices by size and by type, it might be of interest to show the complete range of factors ranked in descending order, with the ones having the widest and most convincing proof of usefulness at the top of the list. Table 2 lists a total of 220 methodologies, practices, and social issues that have an impact on software applications and projects.

The average scores shown in table 2 are actually based on the composite average of six separate evaluations:

  1. Small applications < 1000 function points

  2. Medium applications between 1000 and 10,000 function points

  3. Large applications > 10,000 function points

  4. Information technology and web applications

  5. Commercial, systems, and embedded applications

  6. Government and military applications

The data for the scoring comes from observations among about 150 Fortune 500 companies, some 50 smaller companies, and 30 government organizations and around 13,000 total projects. Negative scores also include data from 15 lawsuits.

The scoring method does not have high precision and the placement is somewhat subjective.

Top 10 tools and practices listed in the article:

| Practice | Score |
| --- | --- |
| 1. Reusability (> 85% zero-defect materials) | 9.65 |
| 2. Requirements patterns - InteGreat | 9.50 |
| 3. Defect potentials < 3.00 per function point | 9.35 |
| 4. Requirements modeling (T-VEC) | 9.33 |
| 5. Defect removal efficiency > 95% | 9.32 |
| 6. Personal Software Process (PSP) | 9.25 |
| 7. Team Software Process (TSP) | 9.18 |
| 8. Automated static analysis - code | 9.17 |
| 8. Mathematical test case design (Hexawise) | 9.17 |
| 10. Inspections (code) | 9.15 |

 

We are obviously thrilled that Hexawise is listed. We have seen the value our customers have achieved using mathematically based combinatorial software test plans (see several Hexawise case studies). It is great to see that value recognized in comparison to other software development practices and judged to be of such high benefit to software development projects.

The article makes clear that the importance of the results lies not in "the precision of the rankings, which are somewhat subjective, but in the ability of the simple scoring method to show the overall sweep of many disparate topics using a single scale."

The methodology behind the results shown in the article can be used to evaluate your organization's software development practice and determine opportunities for improvement. But, as stated above, software projects cover a huge range of scopes. The specific software project needs will drive which practices are most critical to achieving success for a specific project. The list in the article, of what practices have provided huge value and what practices have resulted great harm, is a very helpful resource but project managers and software developers and testers need to apply their judgement to the information the article provides in order to achieve success.

A leading company will deploy methods that, when summed, total to more than 250 and average more than 5.5. Lagging organizations and lagging projects will sum to less than 100 and average below 4.0.

 

The use of Hexawise has been growing; that has helped increase the number of software projects using best practices (those that score 9 or higher). However, as the article states, there is still quite a need for improvement.

From data and observations on the usage patterns of software methods and practices, it is distressing to note that practices in the harmful or worst set are actually found on about 65% of U.S. Software projects as noted when doing assessments. Conversely, best practices that score 9 or higher have only been noted on about 14% of U.S. Software projects. It is no wonder that failures far outnumber successes for large software applications!

 

A score of 9 to 10 for a practice means that the practice results in a 20-30% improvement in the quality and productivity of software projects.

Conclusion: while your individual mileage may vary, this report provides further evidence that using Hexawise really does lead to large, measurable improvements in efficiency and effectiveness.

We are very proud of the success of Hexawise thus far; as a new year starts we see huge potential to help many organizations improve their software development efforts.

The article includes a list of references and suggested readings that is valuable. Included in that list are:

DeMarco, Tom; Controlling Software Projects, 1986, 296 pages.

Gilb, Tom and Graham, Dorothy; Software Inspections, 1994, 496 pages.

Jones, Capers; Applied Software Measurement, 3rd edition, 2008, 662 pages.

McConnell, Steve; Code Complete, 2nd edition, 2004, 960 pages. (The article references the 1st edition.)

 

Related: Maximizing Software Tester Value by Letting Them Spend More Time Thinking - A Powerful Software Test Design Approach - 3 Strategies to Maximize Effectiveness of Your Tests

By: John Hunter on Mar 18, 2013

Categories: Software Testing, Software Testing Efficiency, Testing Case Studies, Testing Strategies

The video makes the case that the value to be gained from human-computer cooperation is being ignored far too often. A focus on maximizing results by improving our ability to cooperate with computers is worthwhile.

What this means in practice is people taking more responsibility for using computers as tools to accomplish what is needed. This already happens a great deal, but often in an unexamined way, and so the current methods leave a great deal of room for improvement. We rarely focus on how to enhance the cooperation; we mainly see the software as one separate part of the process and a person's contribution as another separate part. Focusing on computers (and software) as tools used by people to accomplish objectives is helpful.

Viewing software as a tool used to achieve an aim mirrors the idea of a company viewing their products and services as solving specific problems for customers. The tool is valuable in how it is used by people - not in some abstract way (say technical specifications).

Weaknesses in how people use a product, service or software often come from focusing on how it is "supposed" to be used rather than on the way people will really use it. By understanding that the process that matters is one of a person and the computer together adding value, we can create more effective software applications.

People often try to design software solutions that remove the need for humans to be involved. For complex problems, though, it is often much more effective to design solutions where people take advantage of computer tools to achieve results. People should use computers to automate the things that make sense to automate, keep track of data, and make calculations, thus leaving themselves free to use their superior insight, vision, intuition, and flexibility in making judgements.

Hexawise is built to take advantage of this type of cooperation. Even though it is a "test design tool," Hexawise doesn't take the lead role in designing the tests. Humans do. Humans do the things that they're better than computers at, such as (a) thinking up clever test ideas and test inputs, and (b) identifying, from dozens of possible parameters, which are the ones that are most important to vary in order to achieve potentially interesting interactions from test to test. Computer algorithms aren't nearly as good as humans at such tasks. Computers, though, will run circles around any human who tries to construct a set of tests such that (a) the variation between each test is as different as possible, (b) the wasteful repetition of combinations of values that appear together in different tests is minimized, (c) gaps in coverage are minimized (by, e.g., ensuring that every single pair or every single 3-way combination of tests appears in at least one test case), and (d) all of the above objectives are achieved in the fewest possible test cases. Computer algorithms eat those kind of challenges for breakfast. And complete them without error. In seconds.

Said a different way, people are better at figuring out interesting ideas to test. Once those are identified, those test conditions and other test ideas need to be combined together and put into tests. Generating a highly efficient, maximally varied, minimally repetitive set of tests based on a given set of test inputs is something computer algorithms are more effective at than a person.
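As a rough illustration of the kind of combinatorial optimization being described, here is a naive greedy algorithm that builds a small set of tests covering every 2-way combination of values at least once. This is only a sketch: Hexawise's actual algorithms are more sophisticated, and the parameter names below are made up.

```python
from itertools import combinations, product

def greedy_pairwise(parameters):
    names = list(parameters)
    # Every pair of (parameter, value) assignments that must appear in some test.
    uncovered = {((a, va), (b, vb))
                 for a, b in combinations(names, 2)
                 for va in parameters[a] for vb in parameters[b]}
    tests = []
    while uncovered:
        # Greedily pick the candidate test covering the most still-uncovered pairs.
        best = max(
            (dict(zip(names, values)) for values in product(*parameters.values())),
            key=lambda t: sum(((a, t[a]), (b, t[b])) in uncovered
                              for a, b in combinations(names, 2)),
        )
        tests.append(best)
        for a, b in combinations(names, 2):
            uncovered.discard(((a, best[a]), (b, best[b])))
    return tests

plan = {"Browser": ["Chrome", "Firefox", "Safari"],
        "OS": ["Windows", "macOS", "Linux"],
        "User": ["New", "Returning"]}
print(len(greedy_pairwise(plan)), "tests cover every pair")  # far fewer than the 18 exhaustive tests
```

Even this toy version does quickly and reliably what a person would find tedious and error-prone; choosing which parameters and values belong in the plan remains the human's job.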

Hexawise is not intended to eliminate the need for software testing experts. Hexawise is designed to allow software testing experts to focus on what they do well, and to make everything else easy (creating test plans based on the experts' inputs, etc.). This allows software testing experts to spend their time thinking, by removing time consuming tasks. Hexawise also creates test plan coverage that is simply beyond the ability of people to create no matter how much time they were given. And software testing experts provide the inputs that the Hexawise software could not create in any amount of time.

Hexawise is designed to optimize the ease of cooperation. We spend a great deal of time optimizing the software to make it most useful for people. The design decisions made in creating a software application are very different if the users are meant to thoughtfully interact with the application.

We see Hexawise as an extension of the software tester. We seek to optimize how a person can use Hexawise to create the most value. The measure is how much more effective the testing solutions are, not just how Hexawise performs in isolation from the user.

A great deal of our time has been spent on how to help software testing experts use Hexawise most effectively. These efforts often take the form of many, many small improvements that create an experience more like cooperation between two parties with different strengths, instead of the more typical user experience of being forced to do whatever the software demands.

 

Related: Gamers Use Foldit to Solve Enzyme Configuration in 3 Weeks That Stumped Scientists for Over a Decade - Ten Reasons to Use Software Testing Checklists and Cheat Sheets

By: John Hunter on Mar 12, 2013

Categories: Testing Strategies

This is the second edition of our carnival that focuses on finding interesting and useful blog posts related to software testing.

 

  • Testing != test execution by Jeff Fry - "We often talk about testing as if it’s only test execution, yet often the most interesting, challenging, skill-intensive aspects of testing are in creating a mental model that helps us understand the problem space, designing tests to quickly and effectively answer key questions, analyzing what specifically the problem is, and communicating it effectively."

  • Test Mercenaries by Mike Bland - "good testing practice goes a long way towards finding and killing a lot of bugs before they can grow more expensive, and possibly multiply. The bugs that manage to pass through a healthy testing regimen are usually only the really interesting ones. Few things are less worthy of our intellectual prowess than debugging a production crash to find a head-slappingly stupid bug that a straightforward unit or integration test could’ve caught."

 


Yellow Chameleon in South Africa by Justin Hunter

 

  • The Oracle Problem and the Teaching of Software Testing by Cem Kaner - "I’ve been emphasizing the oracle problem in my testing courses for about a dozen years. I see this as one of the defining problems of software testing: one of the reasons that skilled testing is a complex cognitive activity rather than a routine activity. Most of the time, I start my courses with a survey of the fundamental challenges of software testing, including an extended discussion of the oracle problem."

  • The new V-Model by Kim Ming Leung - "We design by specifying the measurements before coding but not writing the test before coding. We write code and invite user feedback before writing automated testing for these measurements. Code quality is still guaranteed because first, measurement is the design and second, we code with testing in mind (i.e. write testable code)."

  • Maximizing Software Tester Value by Letting Them Spend More Time Thinking by John Hunter - "Hexawise also lets the software tester easily tune the test coverage based on what is most important. Certain factors can be emphasized, others can be de-emphasised. Knowledge is needed to decide what factors are most important, but after that designing a test plan based on that knowledge shouldn’t take up staff time, good software can take care of that time consuming and difficult task."

  • Improving the State of your Testing Team: Part One – Values by Keith Klain - "Typically, the first thing out of my test teams mouths when asked “how can we improve the state of testing here”, usually relates to something that other people should do. Very few people or teams take an introspective based approach to improvement, or state their management values, but the ones that do, typically have great success."

  • Interview and Book Review: How Google Tests Software by Craig Smith - "stop treating development and test as separate disciplines. Testing and development go hand in hand. Code a little and test what you built. Then code some more and test some more. Test isn’t a separate practice; it’s part and parcel of the development process itself. Quality is not equal to test. Quality is achieved by putting development and testing into a blender and mixing them until one is indistinguishable from the other."

  • Introduction to test strategy (review of Rikard Edgren's presentation) by Mauri Edo - "Rikard referred to a test strategy as the outcome of the gathered and shared information plus our own thinking processes, encouraging us to find strategies for our context, learning to understand what is important for us, in our situation."

  • Experience report EuroSTAR Testlab 2012 by Martin Jansson - "The testlab is very much alike our every day life as testers. We prepare and plan for many things, but when reality hits us our preparations might be in vain. Therefore it is important in being prepared for the unknown and the unexpected. Working with the testlab is a great way of practising that."

 

Related: Software Testing Carnival #1 - Hexawise TV blog

By: John Hunter on Jan 21, 2013

Categories: Software Testing

Testing sucks by James Whittaker

Bet that got your attention. It's true, but let me qualify it: Running test cases over and over in the hope that bugs will manifest sucks. It’s boring, uncreative work and since half the world thinks that is all testing is about, it is no great wonder few people covet testing positions. Testing is either too tedious and repetitive or it’s downright too hard. Either way, who would want to stay in such a position? ... For the hard parts of the testing process like deciding what to test and determining test completeness, user scenarios and so forth we have another creative and interesting task. Testers who spend time categorizing tests and developing strategy (the interesting part) are more focused on testing and thus spend less time running tests (the boring part). ... So all the managers out there need to ask themselves what they've done lately to make their testers more creative. If you don't have an answer, then testing isn't the only thing that sucks.

One of the great benefits of Hexawise is that it takes care of figuring out the best test plan to provide the coverage that is needed. The software test planner needs to use their knowledge, experience and creativity to determine what factors and parameters are critical to test. Then Hexawise generates a test plan that provides maximum coverage with the fewest possible tests. If people try to manually create test plans that address the interactions between the factors to be tested, it is not only extremely time consuming and not very fun, it is essentially impossible to do well.

Some things are just so complex, or so effectively handled with well designed software, that people cannot compete. Designing software test plan coverage is one of those areas.

Hexawise also lets the software tester easily tune the test coverage based on what is most important. Certain factors can be emphasized, others can be de-emphasised. Knowledge is needed to decide what factors are most important, but after that designing a test plan based on that knowledge shouldn't take up staff time, good software can take care of that time consuming and difficult task.

Another nice feature included with Hexawise is that detailed tester instructions are generated automatically. And you can easily provide customized text to assure the test instructions and the expected outcomes are clear and complete.

Hexawise greatly reduces the number of tests that need to be run by creating powerful test plans that provide more coverage with fewer tests. This, again, frees up tester time to focus on value added activities.

Allowing testers to focus on adding value is a key aim of ours. We strive to automate what we can and allow testers to apply their knowledge, experience and creativity to helping create great software. Hexawise grew out of the work of George Box, William Hunter (the founder's father) and W. Edwards Deming, who sought to use statistical tools to free people to focus on creative tasks. For example read: Managing Our Way to Economic Success, Two Untapped Resources by William G. Hunter - "Two resources, largely untapped in American organizations, are potential information and employee creativity."

Hexawise includes sample test plans that let you see the benefits above in action. Sign up today to try it out (free trial).

 

Related: Cem Kaner: Testing Checklists = Good / Testing Scripts = Bad? - A Faster Way to Enter Test Inputs – the “Bulk Add” Option - Practical Ways to Respect People

By: John Hunter on Nov 11, 2012

Categories: Context-Driven Testing, Testing Strategies

I have started focusing on the Hexawise blog recently. For a good part of this year we have not had much activity on the blog, but we plan to make it more active going forward. As part of that effort, we are starting a Software Testing Blog Carnival to highlight posts on software testing that I find interesting and think others will too. Enjoy.

 

 


Water buffalo, Timbavati Game Preserve, South Africa by Justin Hunter

 

  • Bugs Spread Disease by Elisabeth Hendrickson - "It wasn’t the bugs that killed us directly. Rather, the bugs became a pervasive infection. They hampered us, slowing down our productivity. Eventually we were paralyzed, unable to make even tiny changes safely or quickly."

  • A Sticky Situation by Michael Bolton - on using agile, kanban style, system for managing the software testing workload: "The whiteboard was the center of an active discussion between programmers and project managers about the project status. After the meeting, the whiteboard and the notes on it remained as a kind of information radiator. 'I suddenly realized that if they could do that, I could too,' Paul said. He began by dividing the whiteboard into three columns: To Be Done, Work in Progress, and Done."

  • A Quick Testing Challenge by Alan Page and a response, the Angry Weasel Challenge, a presentation on figuring out why TheApp.exe won't load and how to make it load by solving the problem.

  • 3 Strategies to Maximize Effectiveness of Your Tests by John Hunter - "Use the MECE principle. The MECE principles states you should define your values in a way that makes each of them “Mutually Exclusive” from the others in the list (no subsets should represent any other subsets, no overlaps) and “Collectively Exhaustive” as a group (the set of all subsets, taken together, should fully encompass all items, no gaps)."

  • Musings on Test Design by Alan Page - "The point is, and this is worth repeating so often, is that thinking of test automation and “human” testing as separate activities is one of the biggest sins I see in software testing today. It’s not only ineffective, but I think this sort of thinking is a detriment to the advancement of software testing."

  • Creating a Shared Understanding of Testing Culture on a Social Coding Site by Leif Singer - "those learning testing in this environment write tests to make sure that a program behaves a certain way, and forget that they might also need to test how it should not behave (e.g. on invalid input)."

  • Testing in Scrum with a Waterfall Interaction by Davide Noaro - "Testing each user story separately is, for me, the basis of the Agile process, even in an interaction with a Waterfall-at-end scenario like the one described. Integrating testing into the process itself is something we should do for any software development process, not only in Agile or Scrum."

By: John Hunter on Oct 31, 2012

Categories: Software Testing

Hexawise includes an array of sample plans when a new user account is created. These provide concrete examples of how to categorize items when creating combinatorial test plans (also called pairwise test plans, orthogonal array test plans, etc.). Once you [sign in to your Hexawise account](http://hexawise.com/) (or set up a new, free, account), looking at this [sample test plan](https://app.hexawise.com/share/HT3UG7M8), which is similar to the situation raised in the question that follows, might be useful.

Within your Hexawise account you can copy the sample test plans that you are provided with and then make adjustments to them. This lets you quickly see what effects changes you make have on real test plans. And it also lets you see how easy it is to adjust as changes in priorities are made, or gaps are found in the existing test plan.

 

A Hexawise user sent us the following question.

What is the recommended approach to configuring parameter with one or more values?

I have two parameters which are related.

If Parameter 1 = Yes, Parameter 2 allows the user to select one or more values out of a list of 25 - most of which are not equivalent.

For Parameter 2, is the recommended approach to handle this to create separate parameters each with a yes/no value? i.e. create one parameter for each non-equivalent value, and one parameter for the equivalent values. Then link each of these as a married pair to Parameter 1.

I'm open to suggestions as to alternatives.

Here's the screen in question. Parameter 1 = "Pilot", Parameter 2 = checkboxes for types of plans.

[Screenshot of the screen in question]

Great question.

I would recommend that you use different parameters for each option (e.g., "Scheduled Commercial" as a parameter with "Selected, Not Selected" as your Values associated with it).

Also, I'd recommend following these 3 strategies to maximize the effectiveness of your tests.

First, consider using adjusted weightings. You may find it useful to weight certain values multiple times, e.g., have 4 values such as "Select, Do Not Select, Do Not Select, Do Not Select" to create 3 times as many tests with "Do Not Select" as "Select."

Second, use the MECE principle. The MECE principle states you should define your Values in a way that makes each of them "Mutually Exclusive" from the others in the list (no subsets should represent any other subsets, no overlaps) and "Collectively Exhaustive" as a group (the set of all subsets, taken together, should fully encompass all items, no gaps).

Third, avoid "ands" in your value names. As a general rule it is unwise to define values like "Old and Male" or "Young and Female", etc. A better strategy is to break those ideas into two separate Parameters, like so:

First Parameter = "Age" --- Values for "Age" = Old / Young

Second Parameter = "Gender" --- Values for "Gender" = "Male / Female"
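Here is a sketch of the resulting model, written as plain data to visualize the advice above. The plan-type names other than "Scheduled Commercial" are hypothetical stand-ins for the 25 checkbox options, and this is only an illustration, not a Hexawise file format.

```python
model = {
    "Pilot": ["Yes", "No"],

    # Each non-equivalent checkbox becomes its own parameter.
    "Scheduled Commercial": ["Selected", "Not Selected"],
    "Charter": ["Selected", "Not Selected"],
    "Cargo": ["Selected", "Not Selected"],

    # Adjusted weighting: repeating a value makes it appear in more tests,
    # here roughly three "Not Selected" tests for every "Selected" test.
    "Medevac": ["Selected", "Not Selected", "Not Selected", "Not Selected"],

    # MECE, and no "ands": one idea per parameter.
    "Age": ["Old", "Young"],
    "Gender": ["Male", "Female"],
}
```

Each of the checkbox parameters would then be tied to Pilot = Yes with value pairs (married pairs), as the question suggests.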

 

Related: Efficient and Effective Test Design - Context-Driven Usability Considerations, and Wireframing - Why isn't Software Testing Performed as Efficiently and Effectively as it could be?

By: John Hunter on Oct 25, 2012

Categories: Efficiency, Hexawise tips, Pairwise Software Testing, Software Testing, Software Testing Efficiency, Testing Strategies

When we met with Hexawise users in India, we noticed that the page load response times for Hexawise users there were sometimes significantly slower than in the United States due to sporadic internet connectivity issues. One area that troubled us in particular was the extra few seconds it could take to enter test inputs into plans.

We are constantly looking for ways to make our software test case generating tool better and came up with a solution.

[Image: Hexawise bulk add instructions, step 1]

[Image: Hexawise bulk add instructions, step 2]

Even with a great internet connection this feature lets you be more productive, so if you have not tried out the bulk add feature, give it a try today.

Hexawise: More Coverage. Fewer Tests


Related: Hexawise Tip: Using Value Expansions and Value Pairs to Handle Dependent Values - Maximize Test Coverage Efficiency And Minimize the Number of Tests Needed - 13 Great Questions To Ask Software Testing Tool Vendors

By: John Hunter on Oct 23, 2012

Categories: Hexawise test case generating tool, Hexawise tips, Software Testing

It's common to have a test plan where the possible values of one parameter depend on the value of another parameter. There are several ways to represent this scenario in Hexawise: some involve using value expansions (when the values are equivalent) and others do not (when they are not).

Using Value Expansions in Hexawise

The general rule of thumb is that value expansions are for setting up equivalence classes. The key word there is equivalence: the expectations of the system should be the same for every value listed in a particular value expansion.

Let's consider a real world example involving a classification parameter with a value that is dependent on the value of a role parameter:

Inputs
Role: Student, Staff
Classification: Freshman, Sophomore, Junior, Senior, Adjunct, Assistant, Professor, Administrator

So if the Role parameter has a value of Student, then the Classification parameter must have a value of Freshman, Sophomore, Junior or Senior, but if the Role parameter has a value of Staff, then the Classification parameter must have a value of Adjunct, Assistant, Professor or Administrator.

Using value expansions might be a good option in this case. You could set up your inputs, value expansions, and value pairs this way:

Inputs
Role: Student, Staff
Classification: Student Classification, Staff Classification

Value Expansion
Student Classification: Freshman, Sophomore, Junior, Senior
Staff Classification: Adjunct, Assistant, Professor, Administrator

Value Pairs
When Role=Student Always Classification=Student Classification
When Role=Staff Always Classification=Staff Classification

You would use this approach if there were no important differences in the business logic or expected behavior of the system when the different expansions of the value were used. If Freshman versus Sophomore is an important label for users to enter and see, but the system under test doesn't change its behavior based on which value is selected, then those expansions are equivalent and don't need to be tested individually for how they might interact with other parts of the system and create bugs. If this equivalence holds, you will greatly simplify things for yourself and create fewer tests that are just as powerful by using value expansions.

In the scenario that would support using value expansions, the system might have different behavior for a Junior versus an Adjunct Professor, but not for a Freshman versus a Senior. A Freshman and a Senior are always equivalent in the system, so they can be combined in a value expansion.
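
As a rough illustration of why this matters, here is a minimal sketch in plain Python (not Hexawise) that counts the valid combinations in the Role/Classification example. The "Term" parameter is a hypothetical extra parameter added just so the classifications have something to pair with; the point is that collapsing equivalent values behind a value expansion shrinks the space of values the tool has to pair up.

```python
# Hypothetical extra parameter so the classifications have something to pair with.
terms = ["Fall", "Spring", "Summer"]

full_model = {  # every classification listed individually
    "Student": ["Freshman", "Sophomore", "Junior", "Senior"],
    "Staff":   ["Adjunct", "Assistant", "Professor", "Administrator"],
}
collapsed_model = {  # value expansions collapse each equivalent group to one value
    "Student": ["Student Classification"],
    "Staff":   ["Staff Classification"],
}

def count_valid_combinations(role_to_classifications):
    """Count the (Role, Classification, Term) combinations allowed by the married pairs."""
    return sum(len(classifications) * len(terms)
               for classifications in role_to_classifications.values())

print(count_valid_combinations(full_model))       # 24 valid combinations
print(count_valid_combinations(collapsed_model))  # 6 valid combinations
```

Fewer distinct values means fewer pairs for the tool to cover, which is where the "fewer tests that are just as powerful" claim comes from; the trade-off, as the next paragraph explains, is that this only works when the collapsed values really are equivalent.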

However, if the expectations are not the same, then a value expansion should not be used. For example, let's suppose this hypothetical system has business logic giving priority class scheduling to Seniors and only last-available scheduling priority to Administrators. In this case, using value expansions as described above would probably be a mistake. Why? Because a Sophomore and a Senior aren't treated the same way by the system, yet Hexawise considers all the expansions of the Student Classification value to be equivalent. As long as some test has paired a value expansion of the Student Classification value with the Overbooked value of a Class Status parameter (an additional parameter in this hypothetical plan), Hexawise won't insist on pairing all the other expansions of the Student Classification value with Class Status = Overbooked in other tests. You could therefore miss a bug that only occurs when a Senior signs up for an overbooked class.

"One to many" or "multi-valued" married pair model

If the system under test does not treat the values as equivalent, and has requirements and business logic that behave differently depending on the value, then using value expansions to signal equivalency to Hexawise when there isn't any is probably a mistake.

So what would you do in that case?

We've decided that it might be nice to be able to set up your inputs and value pairs like this:

Inputs
Role: Student, Staff
Classification: Freshman, Sophomore, Junior, Senior, Adjunct, Assistant, Professor, Administrator

Value Pairs
When Role=Student Always Classification=Freshman, Sophomore, Junior, or Senior
When Role=Staff Always Classification=Adjunct, Assistant, Professor, or Administrator

Unfortunately, this kind of "one to many" or "multi-valued" value pair is something we've only recently realized would be very helpful. It is on the drawing board for Hexawise in the intermediate future, but it is not a feature of Hexawise today. In the meantime, you could model it with three parameters:

Inputs
Role: Student, Staff
Student Classification: Freshman, Sophomore, Junior, Senior, N/A
Staff Classification: Adjunct, Assistant, Professor, Administrator, N/A

Value Pairs
When Role=Student Always Staff Classification=N/A
When Role=Staff Always Student Classification=N/A
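
Here is a minimal sketch of that three-parameter workaround expressed as a plain-Python validity filter (illustrative only, not Hexawise's married-pairs feature). It enumerates the full cross product and keeps only the rows allowed by the two value-pair rules above; the same kind of filter would express the "Never" rules in the partial-expansion model shown next.

```python
from itertools import product

roles = ["Student", "Staff"]
student_classifications = ["Freshman", "Sophomore", "Junior", "Senior", "N/A"]
staff_classifications = ["Adjunct", "Assistant", "Professor", "Administrator", "N/A"]

def satisfies_value_pairs(role, student_classification, staff_classification):
    # When Role=Student Always Staff Classification=N/A
    if role == "Student" and staff_classification != "N/A":
        return False
    # When Role=Staff Always Student Classification=N/A
    if role == "Staff" and student_classification != "N/A":
        return False
    return True

valid_rows = [row
              for row in product(roles, student_classifications, staff_classifications)
              if satisfies_value_pairs(*row)]
print(len(valid_rows))  # 10 of the 50 raw combinations survive the two rules
```

In a real plan you would likely layer on further rules in the same style (for example, forbidding N/A for the classification that does match the role), but the two rules above are the ones stated in the model.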

Another modeling option to consider, if there is only special logic for Administrator and for Seniors, but the rest of the values we've been discussing are equivalent, is to use value expansions for just the equivalent values:

Inputs
Role: Student, Staff
Classification: Underclassman, Senior, Professor, Administrator

Value Expansions
Underclassman: Freshman, Sophomore, Junior
Professor: Adjunct, Assistant, Full

Value Pairs
When Role=Student Never Classification=Professor
When Role=Student Never Classification=Administrator
When Role=Staff Never Classification=Underclassman
When Role=Staff Never Classification=Senior

I hope this helps you understand the role of value expansions in Hexawise, when to use them (in cases of equivalency) and when to avoid them, and how value pairs and value expansions can be used together to handle dependent parameter values. Value expansions are a powerful tool to help you decrease the number of tests you need to execute, so take advantage of them, and if you have any questions, just let us know!

By: John Hunter on Apr 26, 2012

Categories: Combinatorial Software Testing, Combinatorial Testing, Hexawise tips, Pairwise Testing, Software Testing, Testing Strategies

Request for ideas: what are good examples of interactive online training (including exercises and quizzes) for how to use web apps?

We continue to look for ways to improve the ability of our users to add value to their organizations. We are always in the process of improving the software tools we provide.

The value proposition for using pairwise and combinatorial tools is huge. The biggest roadblocks to adoption that we see are not being aware of the advantages and not being sure how to proceed once the advantages are seen.

We designed Hexawise to be very easy to use. Even so, the concepts behind combinatorial testing take some people a bit of time to operationalize. To help with this, we offer on-site and off-site personal training.

We are looking to create some online training to ease the transition into using Hexawise's combinatorial software testing tools most effectively. We would love to know what good examples of interactive online training people have found useful.

 

Related: Why isn't Software Testing Performed as Efficiently and Effectively as it could be? - In Praise of Data-Driven Management (AKA "Why You Should be Skeptical of HiPPO's")

By: John Hunter on Apr 24, 2012

Categories: Uncategorized

84 percent coverage in 20 tests

[Image: Hexawise test coverage graph showing 83.9% coverage in just 20 tests]

Among the many benefits Hexawise provides is creating a test plan that maximizes test coverage with each new scenario tested. The graph above shows that after just 20 tests, 83.9% of the targeted test combinations have been covered. Read more about this in our case study of a mortgage application software test plan. Just 48 tests are needed to cover every valid pair of values (3.7 million possible test combinations exist in this case). If you are lost now, this video may help.

The coverage achieved by the first few tests in the plan will be quite high (and the graph line will climb sharply), then the slope will decrease in the middle of the plan (because each new test will tend to cover fewer net new pairs of values for the first time), and then at the end of the plan the line will flatten out (because by that point relatively few pairs of values remain to be tested together for the first time).

One of the benefits Hexawise provides is making that slope as steep as possible. The steeper the slope, the more efficient your test plan is. If you keep repeating the same pairs (and triples, and so on) while passing up chances to cover untested pairs and triples, you will have to create and run far more tests than if you create the test plan intelligently. With many interactions to test, it is far too complex to derive an intelligent test plan manually; a combinatorial testing tool like Hexawise that maximizes test plan efficiency is needed.

For any set of test inputs, there is a finite number of pairs of values that could be tested together (and that can be quite a large number). The coverage chart answers, after each test: what percentage of the total number of pairs (or triples, etc.) that could be tested together have been tested together so far?
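
To make that question concrete, here is a small stdlib-Python sketch (illustrative only, not Hexawise's implementation) that computes the cumulative percentage of value pairs covered after each test in a plan.

```python
from itertools import combinations

def pair_coverage_curve(parameters, tests):
    """Cumulative % of all possible value pairs covered after each test.

    `parameters` is a list of value lists; each test supplies one value
    per parameter, in the same order.
    """
    all_pairs = {((i, a), (j, b))
                 for (i, vi), (j, vj) in combinations(enumerate(parameters), 2)
                 for a in vi for b in vj}
    covered, curve = set(), []
    for test in tests:
        covered.update(combinations(enumerate(test), 2))
        curve.append(100.0 * len(covered) / len(all_pairs))
    return curve

# Tiny illustration: 3 two-value parameters and 2 hand-written tests.
params = [["A", "B"], ["X", "Y"], ["1", "2"]]
print(pair_coverage_curve(params, [["A", "X", "1"], ["B", "Y", "2"]]))  # [25.0, 50.0]
```

Plotting that curve test by test produces the same shape as the graph above: steep at first, flattening as fewer uncovered pairs remain.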

The Hexawise algorithms achieve the following objective, which helps testers find as many defects as possible in as few tests as possible: in each and every step of each and every test case, the algorithm chooses a test condition that will maximize the number of pairs that can be covered for the first time in that test case (or the maximum number of triplets or quadruplets, etc., based on the thoroughness setting defined by the user). Allpairs (AKA pairwise) is a well known and easy to understand test design strategy. Hexawise lets users create pairwise sets of tests that cover every pair, and it also allows test designers to generate far more thorough sets of tests (3-way to 6-way coverage). This lets users "turn up the coverage dial" and generate tests that cover every single possible triplet of test inputs together at least once (or every 4-way, 5-way, or 6-way combination).
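
For a sense of what "turning up the coverage dial" does to the size of the problem, here is a quick stdlib-Python sketch (again illustrative, not Hexawise's algorithm) that counts how many distinct t-way value combinations exist for a hypothetical plan; that count is the denominator the coverage chart works against at each thoroughness setting.

```python
from itertools import combinations
from math import prod

def total_t_way_tuples(value_counts, t):
    """Number of distinct t-way value combinations across the parameters."""
    return sum(prod(group) for group in combinations(value_counts, t))

# Hypothetical plan: 10 parameters with 4 values each.
counts = [4] * 10
for t in range(2, 7):
    print(t, total_t_way_tuples(counts, t))
```

The jump from 2-way to 6-way tuples is steep, which is why higher thoroughness settings require considerably more tests.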

Note that the coverage ratio Hexawise shows is based on the factors entered as items to be tested: it is not a code coverage percentage. Hexawise sorts the test plan to front-load coverage of the value tuples, not coverage of code paths. Coverage of code paths ultimately depends on how good a job the test designer did at extracting the relevant parameters and values of the system under test. You would expect some loose correlation between coverage of the identified tuples and coverage of code paths in most typical systems.

If you want to learn more about these concepts, I would recommend Scott Sehlhorst's articles on pairwise and combinatorial test design. They are some of the clearest introductory articles about pairwise and combinatorial testing that I have seen. They also contain some interesting data points on the correlation between 2-way / allpairs / pairwise / n-way coverage (in Hexawise) and the white-box metrics of branch coverage, block coverage, and code coverage (not measurable by Hexawise).

In Software testing series: Pairwise testing, for example, Scott includes these data points:

  • We measured the coverage of combinatorial design test sets for 10 Unix commands: basename, cb, comm, crypt, sleep, sort, touch, tty, uniq, and wc... The pairwise tests gave over 90 percent block coverage.

  • Our initial trial of this was on a subset of Nortel’s internal e-mail system where we were able to cover 97% of branches with less than 100 valid and invalid test cases, as opposed to 27 trillion exhaustive test cases.

  • A set of 29 pair-wise... tests gave 90% block coverage for the UNIX sort command. We also compared pair-wise testing with random input testing and found that pair-wise testing gave better coverage.

Related: Why isn't Software Testing Performed as Efficiently and Effectively as it could be? - Video Highlight Reel of Hexawise – a pairwise testing tool and combinatorial testing tool - Combinatorial Testing, The Quadrant of Massive Efficiency Gains

Specific guidance on how to view the coverage percentage graph for your test plan in Hexawise:

When working on your test plan in Hexawise, to make the checklist visible, click on the two downward arrows shown in the image:

[Image: How-To Progress Checklists screenshot]

Then you'll want to open up the "Advanced" list. So you might need to click here:

[Image: Advanced How-To Progress Checklist screenshot]

Then the detailed explanation will begin when you click on "Analyze Tests":

[Image: Decreasing Marginal Returns coverage graph]

This post is adapted (and some new content added) from comments posted by Justin Hunter and Sean Johnson.

By: John Hunter on Feb 3, 2012

Categories: Combinatorial Software Testing, Combinatorial Testing, Efficiency, Multi-variate Testing, Pairwise Software Testing, Pairwise Testing, Scripted Software Testing, Software Testing, Software Testing Efficiency