We are excited to announce an ongoing partnership with Datalex to improve software test efficiency and accuracy. Datalex has achieved extreme benefits in software quality assurance and speed-to-market through their use of Hexawise. Some of these benefits include:

  • Greater than 65 percent reduction in the size of the Datalex test suite
  • Clearer understanding of test coverage
  • Higher confidence in the thoroughness of software tests
  • Complete and consistently formatted tests that are simple to automate

An airline company’s regression suite typically contains thousands of test cases. Hexawise is used by Datalex to optimize these test cases, leading to fewer tests as well as greater overall testing coverage. Hexawise also provides Datalex with a complete understanding of exactly what has been tested after each and every test, allowing them to make fact-based decisions about how much testing is enough on each project.

“Hexawise has been fundamental in improving the way we approach our Test Design, Test Coverage and Test Execution at Datalex... My team love using Hexawise given its intuitive interface and its ability to provide a risk-based approach to coverage which gives them more confidence during release sign-off.”


Áine Sherry

Global Test Manager at Datalex

“As a senior Engineer in a highly innovative company, I find Hexawise crucial in regards to achieving excellent coverage with a fraction of the time and effort. Hexawise will also facilitate us to scale onwards and upwards as we continue to innovate and grow,” – Dean Richardson, Software Test Engineer at Datalex.

By eliminating duplicative tests and optimizing the test coverage of each test case, Hexawise provides large time savings in the test execution phase. Hexawise generates fewer test scenarios than testers would create on their own, and those test cases provide more test coverage. The time savings in test execution come about simply because executing fewer tests takes less time.

Related: How to Pack More Coverage Into Fewer Software Tests - Large Benefits = Happy Hexawise Clients and Happy Colleagues

By: John Hunter on Nov 10, 2016

Categories: Business Case, Customer Success, Testing Case Studies

Performance testing examines how the software performs (normally "how fast") in various situations.

Performance testing does not just result in one value. You normally performance test various aspects of the software in differing conditions to learn about the overall performance characteristics. It can well be that certain changes will improve the performance results for some conditions (say a powerful laptop with a fiber connection) and greatly degrade the performance for other use cases. And often the software can be coded to attempt to provide different solutions under different conditions.

All this makes performance testing complex. But trying to over-simplify performance testing removes much of its value.

Another form of performance testing is done on sub-components of a system to determine which solutions may be best. These are often server-side issues; they likely don't depend on individual user conditions, but they can be affected by other factors. For example, under normal usage option 1 provides great performance, but under heavier load option 1 slows down a great deal and option 2 is better.

Focusing on these tests of sub-components runs the risk of sub-optimization, where optimizing individual sub-components results in less than optimal overall performance. Performance testing sub-components is important, but what matters most is testing the performance of the overall system. Performance testing should always place a priority on overall system performance and not fall into the trap of creating a system with components that perform well individually but do not work well together when combined.
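The crossover described above, where one option wins under light load and loses under heavy load, can be made concrete with a toy benchmark. Both options here are hypothetical stand-ins (a linear scan versus a closed-form calculation), not taken from any real system:

```python
import time

def option_1(n_items):
    # Hypothetical option 1: linear scan -- fine under light load,
    # degrades as the load (input size) grows.
    return sum(i * i for i in range(n_items))

def option_2(n_items):
    # Hypothetical option 2: closed-form sum of squares -- roughly
    # constant work regardless of load.
    n = n_items - 1
    return n * (n + 1) * (2 * n + 1) // 6

def best_time(fn, n, repeats=5):
    """Best wall-clock time over several runs, in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(n)
        best = min(best, time.perf_counter() - start)
    return best

for load in (100, 1_000_000):  # light load vs heavy load
    print(f"load={load}: option_1={best_time(option_1, load):.6f}s "
          f"option_2={best_time(option_2, load):.6f}s")
```

Under the light load the two options are, on most machines, indistinguishable; under the heavy load option 2 wins decisively. A performance test run under only one condition would miss the difference entirely.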

Load testing, stress testing and configuration testing are all part of performance testing.

Continue reading about performance testing in the Hexawise software testing glossary.

By: John Hunter on Sep 27, 2016

Categories: User Experience, Testing Strategies, Software Testing

Hexawise has been driven by the vision of providing software testers more effective coverage in fewer tests. The Hexawise combinatorial software testing application allows software testers to achieve both of these seemingly contradictory goals.

Too Many Tests

See some previous posts where we have explored how Hexawise achieves more coverage with fewer tests:

By: John Hunter on Sep 14, 2016

Categories: Hexawise test case generating tool, Hexawise

Software testing concepts help us compartmentalize the complexity that we face in testing software. Breaking the testing domain into various areas (such as usability testing, performance testing, functional testing, etc.) helps us organize and focus our efforts.

But those concepts are constructs that often have fuzzy boundaries. What matters isn't where we place certain software testing efforts. What matters is helping create software that users find worthwhile and, hopefully, enjoyable.

One of the frustrations I have faced in using internet-based software in the last few years is that it often seems to be tested without considering that some users will not have fiber connections (and might have high-latency connections). I am not certain latency (perhaps combined with lower bandwidth) is the issue, but I have often found websites either actually unusable or effectively unusable (way too frustrating to use).

It might be that the user experience I face (on the poorly performing sites) is as bad for all users, but my guess is it is a decent user experience on the fiber connections the managers have when they decide this is an OK solution. It is a usability issue, but in my opinion it is also a performance issue.

It is certainly possible to test performance results on powerful laptops with great internet connections and get good performance results for web applications that will provide bad performance results on smart phones via wifi or less than ideal cell connections. This failure to understand the real user conditions is a significant problem and an area of testing that should be improved.

I consider this an interaction between performance testing and user-experience testing (I use "user-experience" to distinguish it from "usability testing", since I can test aspects of the user experience without users testing the software). The page may load in under 1 second on a laptop with a fiber connection, but that isn't the only measure of performance. What about your users connecting via a wifi connection with high latency? What if, in that case, the page takes 8 seconds to load and your various interactive features barely work or won't work at all given the high latency?
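The gap between those two experiences is largely arithmetic: each sequential request pays the full round-trip latency. A back-of-the-envelope sketch (the request count and round-trip times below are assumed, illustrative numbers, not measurements):

```python
def estimated_load_time(base_seconds, round_trips, rtt_seconds):
    """Back-of-the-envelope page load estimate: fixed server/render
    time plus one network round trip per sequential request
    (assumes no parallelism -- a worst case)."""
    return base_seconds + round_trips * rtt_seconds

# Assumed, illustrative numbers -- measure your own application.
fiber = estimated_load_time(0.3, 20, 0.01)   # 20 requests at 10 ms RTT
mobile = estimated_load_time(0.3, 20, 0.40)  # same page at 400 ms RTT
print(f"fiber: {fiber:.1f}s, high-latency mobile: {mobile:.1f}s")
```

With these assumed numbers the same page costs 0.5 seconds on fiber and 8.3 seconds on a high-latency connection, purely from the latency multiplier.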

In some cases ignoring the performance for some users may be OK. But if you care about a system that delivers fast load times to users you need to consider the performance not just for a subset of users but consider how it performs for users overall. The extent you will prioritize various use cases will depend on your specific situation.

I have a large bias for keeping the basic experience very good for all users. If I add fancy features that are useful, I do not like to accept meaningful degradation to any user's experience - graceful degradation is very important to me. That is less important to many of the sites that I use, unfortunately. What priority you place on it is a decision that impacts your software development and software testing process.

Hexawise attempts to add features that are useful while at the same time paying close attention to making sure we don't make things worse for users that don't care about the new feature. Making sure the interface remains clear and easy to use is very important to us. It is also a challenge when you have powerful and fairly complex software to keep the usability high. It is very easy to slip and degrade the user's experience. Sean Johnson does a great job making sure we avoid doing that.

Maintaining the responsiveness of Hexawise is a huge effort on our part given the heavy computation required in generating tests in large test case scenarios.

You also have to realize where you cannot be all things to all people. Using Hexawise on a smart phone is just not going to be a great experience. Hexawise is just not suited to that use case at all and therefore we wouldn't test such a use case.

For important performance characteristics it may well be that you should create a separate Hexawise test plan to test performance under several different conditions (relating to latency, bandwidth, and perhaps phone operating system). It could be done within a single test plan, but it seems to me that separate test plans would be more effective most of the time. It may well be that you have a primary test plan to cover many functional aspects and a much smaller test plan just to check that several things work fine in a high-latency, smart-phone use case.

Within that plan you may well want to test various values for certain parameters, for example:

operating system: iOS, Android 7, Android 6, Android 5

latency: ...

Of course, what should be tested depends on the software being tested. If none of the items above matter in your case they shouldn't be used. If you are concerned about a large user base you may well be concerned about performance on various Android versions since the upgrade cycle to new versions is so slow (while most iOS users are on the latest version fairly quickly).

If latency has a big impact on performance then including a parameter on latency would be worthwhile and testing various parameter values for it could be sensible (maybe high, medium and low). And the same with testing various levels of bandwidth (again, depending on your situation).
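One way to see why such a plan stays small is to generate it combinatorially. The sketch below uses a simple greedy pairwise algorithm over hypothetical parameter values like those above; Hexawise's actual test generation is more sophisticated, so treat this only as an illustration of the idea:

```python
from itertools import combinations, product

# Hypothetical parameters for a latency-aware plan (adapt to your app).
parameters = {
    "os": ["iOS", "Android 7", "Android 6", "Android 5"],
    "latency": ["high", "medium", "low"],
    "bandwidth": ["high", "medium", "low"],
}

def pairwise_tests(params):
    """Greedy pairwise cover: repeatedly pick the candidate row that
    covers the most not-yet-covered value pairs, until every pair of
    values from every two parameters appears in some test."""
    names = list(params)
    k = len(names)
    uncovered = set()
    for i, j in combinations(range(k), 2):
        for va, vb in product(params[names[i]], params[names[j]]):
            uncovered.add((i, va, j, vb))
    candidates = list(product(*params.values()))
    rows = []
    while uncovered:
        best = max(candidates, key=lambda row: sum(
            (i, row[i], j, row[j]) in uncovered
            for i, j in combinations(range(k), 2)))
        rows.append(best)
        for i, j in combinations(range(k), 2):
            uncovered.discard((i, best[i], j, best[j]))
    return rows

tests = pairwise_tests(parameters)
print(len(list(product(*parameters.values()))), "exhaustive tests")
print(len(tests), "pairwise tests")
```

For these 4 × 3 × 3 parameters, exhaustive testing needs 36 tests, while every pair of values is covered in roughly a dozen; the gap widens dramatically as parameters are added.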

My view is always very user focused so the way I naturally think is relating pretty much everything I do to how it impacts the end user's experience.

Related: 10 Questions to Ask When Designing Software Tests - Don’t Do What Your Users Say - Software Testers Are Test Pilots

By: John Hunter on Jul 26, 2016

Categories: User Experience, Testing Strategies, Software Testing

Usability testing is the practice of having actual users try the software. Outcomes include data on the tasks users were given to complete (successful completion, time to complete, etc.), comments the users make, and expert evaluation of their use of the software (noticing, for example, that none of the users follow the intended path to complete a task, or that many users looked for a different way to complete a task and, failing to find it, eventually found another way to succeed).

Usability testing involves systematic evaluation of real people using the software. This can be done in a testing lab where an expert can watch the user, but that is expensive. Remote monitoring (watching the user's screen, communicating with the user by voice, and viewing a webcam showing the user) is also commonly used.

In these settings the user is given specific tasks to complete and the testing expert watches what the user does. The expert will also ask the user questions about what they found difficult and confusing (in addition to what they liked) about the software.
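The task results gathered in such sessions reduce to a few simple metrics. A minimal sketch, using invented session data purely for illustration:

```python
from statistics import median

# Hypothetical session results: (user, completed_task, seconds_to_complete)
sessions = [
    ("u1", True, 42.0),
    ("u2", True, 65.5),
    ("u3", False, None),   # gave up
    ("u4", True, 38.2),
    ("u5", False, None),   # could not find any way to finish
]

completed = [s for s in sessions if s[1]]
success_rate = len(completed) / len(sessions)
median_time = median(t for _, _, t in completed)
print(f"success rate: {success_rate:.0%}, median time: {median_time:.1f}s")
```

Numbers like these summarize what happened; the expert's observations of why users struggled are what make the sessions actionable.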

The point of usability testing is to get feedback from real users. If you can't test with the real users of a system, it is important to consider whether your usability testers fairly represent that population. If the users of the system are fairly unsophisticated and your usability testers are very computer savvy, the testers may well not provide good feedback (as their use of the software may be very different from that of the actual users).

"Usability testing" does not encompass experts evaluating the software against known usability best practices and common problems. That expert knowledge of sound usability practices is important, but it is not considered part of "usability testing."

Find more exploration of software testing terms in the Hexawise software testing glossary.

Related: Usability Testing Demystified - Why You Only Need to Test with 5 Users (this is not a complete answer, but it does provide insight into the value of quick testing during development) - Streamlining Usability Testing by Avoiding the Lab - Quick and Dirty Remote User Testing - 4 ways to combat usability testing avoidance

By: John Hunter on Jul 4, 2016

Categories: User Experience, User Interface, Testing Strategies, Software Testing

Recently we added a revisions feature to Hexawise test plans. This allows you to easily revert to a previous version of a test plan for whatever reason you wish. We provide a list of each revision with its date, and all you have to do is click a button to revert to that version. We also give you the option to copy that plan (in case you want to view it but ​also​ want to ​keep all​ the updates you have made since then).

Now when you are editing a test plan in Hexawise you will see a revisions link on the top of the screen (see image):

revisions-button

Note: revisions are available in editable test plans; the revisions link is not available in uneditable test plans (such as the sample plans). For this post, I saved an editable copy of the sample plan into my private plans.

In the example in this post, I am using the Hexawise sample plan "F) Flight Reservations" which you can view in your own account.

revisions-note

One of the notes in this sample plan says we should also test whether Java is enabled or not. So I added a new parameter for Java and included enabled and not-enabled as parameter values.

At a later date, if we wanted to go to a previous version all we have to do is click the revisions link (see top image) and then we get a list of all the revisions:

revisions-action

Mouse over the revision we want to use and we can make a copy of that version or revert to it.

Revisions make it very easy to get back to a previous version of your test plan. This has been quite a popular feature addition, and I hope you enjoy it. We are constantly seeking to improve based on feedback from our users. If you have comments and suggestions, please share them with us.

Related: My plan has a lot of constraints in it. Should I split it into separate plans? - Whaddya Mean "No Possible Value"? - Customer Delight

By: John Hunter on May 31, 2016

Categories: Hexawise, Hexawise test case generating tool, Hexawise tips

At Hexawise we aim to improve the way software is tested. Achieving that aim requires not only providing our clients with a wonderful software tool (which our customers say we’re succeeding at) but also a commitment from the users of our tool to adopt new ways of thinking about software testing.

We have written previously about the importance of the values of Bill Hunter (our founder's father) to Hexawise. That has led us to focus constantly on how to maximize the benefits our customers gain using Hexawise. This focus has led us to realize that customers who take advantage of the high-touch training services and on-demand expert test design support we offer often realize unusually large benefits and roll out Hexawise more quickly and broadly than customers who acquire licenses and simply try to “get the tool and make it available to the team.”

We are now looking for someone to take on the challenge of helping our clients succeed. The principles behind our decision to put so much focus on helping our customers succeed are obvious to those who understand the thinking of Bill Hunter, W. Edwards Deming, Russell Ackoff, etc., but they may seem a bit odd to others. The focus of this senior-level position really is to help our customers improve their software testing results. It isn't just a happy-sounding title that has no bearing on what the job actually entails.

The person holding this position will report to the CEO and work with other executives at Hexawise who all share a commitment to delighting our customers and improving the practice of software testing.

Hexawise is an innovative SaaS firm focused on helping large companies use smarter approaches to test their enterprise software systems. Teams using Hexawise get to market faster with higher quality products. We are the world’s leading firm in our niche market and have a growing client base of highly satisfied customers. Since we launched in 2009, we have grown both revenues and profits every year. Hexawise is changing the way that large companies test software. More than 100 Fortune 500 companies and hundreds of other smaller firms use our industry leading software.

Join our journey to transform how companies test their software systems.

Hexawise office

Description: VP of Customer Success

In the Weeks Prior to a Sale Closing

  • Partner with sales representatives to conduct virtual technical presentations and demonstrations of our Hexawise test design solution.

  • Clearly explain the benefits and limitations of combinatorial test design to potential customers using language and concepts relevant to their context by drawing upon your own “been there, done that” experiences of having successfully introduced combinatorial test design methods in multiple similar situations.

  • Identify and assess business and technical requirements, and position Hexawise solutions accordingly.

Immediately Upon a New Sale Closing

  • Assess a new client’s existing testing-related processes, tools, and methods (as well as their organizational structure) in order to provide the client with customized, actionable recommendations about how they can best incorporate Hexawise.

  • Collaborate with client stakeholders to proactively identify potential barriers to successful adoption and put plans in place to mitigate / overcome such barriers.

  • Provide remote, instructor-led training sessions via webinars.

  • Provide multi-day onsite instructor-led training sessions that: cover basic software test design concepts (such as Equivalence Class Partitioning, the definition of Pairwise-Testing coverage, etc.) as well as how to use the specific features of Hexawise.

  • Include industry-specific and customer-specific customized training modules and hands-on test design exercises to help make the sessions relevant to the testers and BA’s who attend the training sessions.

  • Collaborate with new users and help them iterate, improve, and finalize their first few sets of Hexawise-generated software tests.

  • Set rollout and adoption success criteria with clients and put plans in place to help them achieve their goals.

Months After a New Sale Closing

  • Continue to engage with customers onsite and virtually to understand their needs, answer their test design questions, and help them achieve large benefits from test optimization.

  • Monitor usage statistics of Hexawise clients and reach out proactively, as appropriate, at the first sign that they might be facing any potential adoption/rollout challenges.

  • Collaborate with stakeholders and end users at our clients to identify opportunities to improve the features and capabilities of Hexawise and then collaborate with our development team to share that feedback and implement improvements.

Required Skills and Experience

We are looking for a highly experienced combinatorial test design expert with outstanding analytical and communication skills to provide these high-touch onboarding services and to partner with our sales team in engaging prospective clients.

Education and Experience

  • Bachelor’s or technical university degree.

  • Deep experience successfully introducing combinatorial test design methods on multiple different kinds of projects to several different groups of testers.

  • Experience setting rollout and adoption success criteria with multiple teams and putting plans in place to achieve them.

  • Minimum 5 years in software testing, preferably at an IT consulting firm or large financial services firm.

Knowledge and Skills

  • Ability to present and demonstrate capabilities of the Hexawise tool, and the additional services we provide to our clients beyond our tool.
  • Exhibit excellent communication and presentation skills, including questioning techniques.
  • Demonstrate passion regarding consulting with customers.
  • Understand how IT and enterprise software is used to address the business and technical needs of customers.
  • Demonstrate hands-on level skills with relevant and/or related software technology domains.
  • Communicate the value of products and solutions in terms of financial return and impact on customer business goals.
  • Possess a solid level of industry acumen; keeping current with software testing trends and able to converse with customers at a detailed level on pertinent issues and challenges.
  • Represent Hexawise knowledgeably, based on a solid understanding of Hexawise’s business direction, portfolio and capabilities.
  • Understand the competitive landscape for Hexawise and position Hexawise effectively.
  • A cover letter that describes who you are, what you've done, and why you want to join Hexawise.
  • Ability to work and learn independently and as part of a team
  • Desire to work in a fast-paced, challenging start-up environment

Why join Hexawise?

Salary + bonus; medical and dental; 401(k) plan; free parking and a very slick Chapel Hill office! The opportunity to work with a fast-growing, innovative technology company that is changing the way software is tested.

Key Benefits:

Salary: Negotiable, but minimum of $100,000 + commissions based upon client license renewals
Benefits: Health and dental included, 401(k) plan
Travel: Average of no more than 2-3 days onsite per week
Location: Chapel Hill, NC*

*Working from our offices would be highly preferable. We might consider remote working arrangements for an exceptional candidate based in the US.

Apply for the VP of Customer Success position at Hexawise.

By: John Hunter on May 12, 2016

Categories: Hexawise, Career, Software Testing, Lean, Customer Success, Agile

It has been quite a long time since we last posted a roundup of great content on software testing from around the web.

  • Mistakes We Make in Testing by Michael Sowers - "Not being involved in development at the beginning and reviewing and providing feedback on the artifacts, such as user stories, requirements, and so forth. Not collaborating and developing a positive relationship with the development team members..."
  • Changing the conversation about change in testing by Katrina Clokie - "I'm planning to stop talking about bugs, time and money. Instead I'm going to start framing my reasoning in terms of corporate image, increasing quality awareness among all disciplines, and improving end-user satisfaction."
  • How to Pack More Coverage Into Fewer Software Tests by Justin Hunter - "There are simply too many interactions for a tester to keep track of on their own. As a result, manually-selected test sets almost always fail to test for a rather large number of potentially important interactions."
  • Building Quality by Alan Page - "your best shot at creating a product that customers crave is to get quantitative and qualitative feedback directly from your users... Get it in their hands, listen to them, and make it better."
  • Dr. StrangeCareer or: How I Learned to Stop Worrying and Love the Software Testing Industry by Keith Klain - "Testing is hard. Doing it right is very hard. An ambiguous, unchartered journey into a sea of bias and experimentation, but as the line from the movie goes, 'the hard is what makes it great'."
  • Exploratory Testing 3.0 by James Bach and Michael Bolton - "Because we now define all testing as exploratory. Our definition of testing is now this: 'Testing is the process of evaluating a product by learning about it through exploration and experimentation, which includes: questioning, study, modeling, observation and inference, output checking, etc.'"
  • Coverage Is Not Strongly Correlated With Test Suite Effectiveness by Laura Inozemtseva and Reid Holmes - "Our results suggest that coverage, while useful for identifying under-tested parts of a program, should not be used as a quality target because it is not a good indicator of test suite effectiveness. "
  • How Software Testers Can Teach, Share and Play with Others by Justin Rohrman - Software testers "bring a varied skill set to software development organizations — math, experiment design, modeling, and critical thinking."
  • Who Killed My Battery: Analyzing Mobile Browser Energy Consumption - " dynamic Javascript requests can greatly increase the cost of rendering the page... by modifying scripts on the Wikipedia mobile site we reduced by 30% the energy needed to download and render Wikipedia pages with no change to the user experience."

By: John Hunter on Apr 27, 2016

Categories: Roundup, Software Testing

Begin with the “Goldilocks Rule” in mind to identify how much detail is appropriate.


If your tests cover a large scope, as in a set of end-to-end tests of a process, focus first on entering only the most important elements of that process into Hexawise. If your tests cover a small scope, as in tests that focus on a few items on a single screen, you will want to include a higher level of detail in Hexawise. Learn more about the Goldilocks Rule.


Imagine explaining your System Under Test to someone’s mother. Start with 5 things that may change in each test

Suggestion: do not start this process with detailed Requirements or Technical Specifications. Instead, start with your basic common sense description of some things that would change from test to test.

  • If you were explaining the application to someone’s mother, how would you explain what it does in 2 minutes?
  • What kinds of things would be important to vary from test to test?
    • Hardware and software configurations?
    • User types?
    • Different actions that a user might take?
  • Identify 5 things that change from test to test and turn those 5 things into your first Parameters.
    • How might those things change? Add one or more Values for each Parameter.
    • At this point general descriptions might be fine (e.g., “SUVs” or “economy cars” rather than “Toyota Corolla”).
    • Remember that, where possible, you should avoid creating long lists of values.
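As a sketch, such a first pass might be captured as a simple map from parameter names to value lists (the application and values below are invented for illustration). Even five small parameters multiply quickly, which is why generating a covering subset of tests matters:

```python
from math import prod

# A hypothetical first pass for a car-rental application: five things
# that change from test to test, each with a few broad values
# (general descriptions, not exhaustive lists).
first_parameters = {
    "user type": ["new customer", "returning customer"],
    "device": ["desktop", "phone"],
    "car class": ["economy", "SUV", "luxury"],
    "payment": ["credit card", "voucher"],
    "insurance": ["declined", "accepted"],
}

total = prod(len(values) for values in first_parameters.values())
print(total, "exhaustive combinations from just these five parameters")
```

Here just five small parameters already yield 48 exhaustive combinations; adding parameters or lengthening value lists grows that number multiplicatively.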


Create a draft set of tests, assess the obvious big gaps, and start filling them.

What obvious types of scenarios are missing? Add parameters and values as necessary to fill those big, obvious holes.


Create tests again, assess whether you’re covering necessary business rules and requirements.

If your tests are not yet testing a business rule or Requirement that you want to test:

  • Consider adding a Parameter or Value
  • Consider adding a specific combination of values to be tested together in the requirements tab


Reduce scope if the test plan is too complex

  • Consider cutting the scope of the plan (e.g., create two different plans with largely similar parameters – one for regular users and one for special users – instead of one big plan which tries to “do it all.”)
  • Consider changing the way you describe “hard coded” values. Instead of “iPhone 4S with International roaming” (which might not be a valid option after the first part of the draft test case suggests a transaction for a corporate customer's phone from a Northern location responding to the special holiday offer…), consider using descriptive parameters and values along the lines of “from the options of phones available at this point in the test case, select an option that meets as many of these conditions as you can…”


Consider adding additional details into your plan from other sources.

Other sources for test ideas could be:

  • J Michael Hunter’s “You Are Not Done Yet”
  • Elisabeth Hendrickson’s Test Heuristics Cheat Sheet
  • Hans Buwalda’s Soap Opera Testing

By: John Hunter on Jun 17, 2014

Categories: Combinatorial Software Testing, Software Testing

The Hexawise Software Testing blog carnival focuses on sharing interesting and useful blogposts related to software testing.


  • The Zen of Application Test Suites by Curtis “Ovid” Poe – “This document is about testing applications — it’s not about how to write tests. Application test suites require a different, more disciplined approach than library test suites. I describe common misfeatures experienced in large application test suites and follow with recommendations on best practices.”

  • The Software Tester’s Easter Egg Hunt by Ben Austin – “The testing industry will benefit greatly if more people follow Holland’s example and prioritize critical thinking over the ability to write test scripts. That said, it is equally important to recognize that scripting tools, when used in the situations they’re intended for, can be great time-savers that can enable more thoughtful, context-driven exploratory testing.”

  • “Grapefruit Juice Bugs” – A New Term for a Surprisingly Common Type of Surprising Bugs by Justin Hunter – “Like grapefruit juice’s impact on prescription drugs, software testing involves critical interactions between different parts of the system. And risks exist when these different parts interact with one another.”

  • Testing at Airbnb by Lou Kosak – “Building good habits around testing is hard, especially at an established company. One of the biggest lessons I’ve learned this year is that as long as you have a team that’s open to the idea, it’s always possible to start testing (even if you’ve got a six year old monolithic Rails app to contend with). Once you have a decent test suite in place, you can actually start refactoring your legacy code, and from there anything is possible. The trick is just to get started.”

  • Disruptive Testing – James Bach Interviewed by Rodney Urquhart – “the future I am helping to build is about systematically training up skilled testers, some of whom but not all with coding skills, so that a small number of testers can do– or coordinate to be done– all the testing that a large project might need. A good future for testing would be one with a lot fewer “testers” but each one of those testers being passionate about his craft.”

  • The role of a Test Manager in agile by Katrina Clokie – “In a cross-skilled team, the agile tester must ensure that the quality of testing is good regardless of who in the team is completing it. The tester becomes the spokesperson for collaborative testing practices, and provides coaching via peer reviews or workshops to those without a testing background.”

  • Speaking to Your Business Using Measurements by Justin Rohrman – “In my experience, no one measure did a great job of telling the story about my software ecosystem. I’ve been deceived by groups of measures, too, because I misunderstood their weaknesses. If we are so easily deceived by measurements, imagine what happens when we send them off to others who need quick, high-level information.”


unnamed

Photo in Ranakpur India by Justin Hunter.


  • Shine a light by Rob Lambert – “Tours, personas and a variety of other test ideas give you a way of re-shining your light. Use these ideas to see the product in different ways, but never forget that it’s often time that is against you. And time is one of the hardest commodities to argue for during your testing phase.”

  • Using mind-mapping software as a visual test management tool by Aaron Hodder – “I like to employ a style of software testing that emphasises the personal freedom and responsibility of the individual tester to continually optimise the quality of his/her work by treating test-related learning, test design, test execution and test result interpretation as mutually supportive activities that run in parallel throughout the project. When performed by a skilled tester, this approach yields valuable and consistent results.”

  • Helpful Tips for Hiring Better Testers by Isaac Howard – “I looked at the good testers around me and tried to identify the “whys” of their success. All of them were driven to learn and capable of adapting to change. If they didn’t know a tool or a tech, they learned it. Because under the hood, testing is learning and relearning software everyday. The following are seven changes I made to my interviewing process.”

By: John Hunter on Jun 10, 2014

Categories: Software Testing

Justin posted this to the Hexawise Twitter feed

cave-question

It sparked some interesting and sometimes humorous discussion.

cave-exploration

The parallels to software testing are easy to see. In order to learn useful information we need to understand what the purpose of the visit to the cave is (or what we are seeking to learn in software testing). Normally the most insight can be gained by adjusting what you seek to learn based on what you find. As George Box said:

Always remember the process of discovery is iterative. The results of each stage of investigation generating new questions to be answered during the next.

Some "software testing" amounts to software checking: confirming that a specific known result is as we expect. This is important for confirming that the software continues to work as expected as we make changes to the code (due to the complexity of software, unintended consequences of changes could lead to problems if we didn't check). This can, as described in the tweet, take a form such as "0:00 Step 1: Turn on flashlight," with pre-determined steps all the way to "4:59 Step N: Exit."

But checking is only a portion of what needs to be done. Exploratory testing relies on the brain, experience and insight of a software tester to learn about the state of the software; exploratory testing seeks to go beyond what pre-defined checking can illuminate. In order to explore you need to understand the aim of the mission (what is important to learn), and you need to be flexible, adjusting as you learn based on your understanding of the mission and the details you discover as you take your journey through the cave or through the software.

Exploratory software testing will begin with ideas of what areas you wish to gain an understanding of, but it will provide quite a bit of flexibility in the path that learning takes. The explorer will adjust based on what they learn and may well find themselves following paths they had not thought of prior to starting their exploration.

 

Related: Maximizing Software Tester Value by Letting Them Spend More Time Thinking - Rapid Software Testing Overview - Software Testers Are Test Pilots - What is Exploratory Testing? by James Bach

By: John Hunter on Apr 8, 2014

Categories: Exploratory Testing, Scripted Software Testing, Software Testing, Testing Checklists

Some of those using Hexawise use Gherkin as their testing framework. Gherkin is based on a given [a], when [b], then [c] format. The idea is that this helps make communication clear and ensures business rules are understood properly. Portions of this post may be a bit confusing for new Hexawise users; links are provided for more details on various topics. But if you don't need to create output for Gherkin, you can simply skip this post.

A simple Gherkin scenario: Making an ATM withdrawal

Given a regular account
  And the account was originally opened at Goliath National
  And the account has a balance of $500 
When using a Goliath National Bank 
  And making a withdrawal of $200 
Then the withdrawal should be handled appropriately 

Hexawise users want to be able to specify the parameters (used in given and when statements) and then import the set of Hexawise-generated test cases into a Gherkin-style output.
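The target shape is a Gherkin Scenario Outline, where each generated test case becomes one row of the Examples table. A sketch of what that can look like for the ATM scenario (the parameter names echo the sample plan; the exact wording and rows are illustrative):

```gherkin
Scenario Outline: Making an ATM withdrawal
  Given a <account type> account
    And the account has a balance of <balance>
  When using a <withdrawal ATM>
    And making a withdrawal of <withdrawal amount>
  Then the withdrawal should be handled appropriately

  Examples:
    | account type | balance | withdrawal ATM       | withdrawal amount |
    | regular      | $500    | Goliath National ATM | $200              |
    | VIP          | $500    | bank-owned ATM       | $600              |
```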

In this example we will use the Hexawise sample test plan (Gherkin example), which you can access in your Hexawise account.

Below, I'll get into how to export Hexawise-created test plans so they can be used to create Gherkin data tables (we do this ourselves at Hexawise).

In the then field we default to an expected value of "the withdrawal should be handled appropriately." This is something that may benefit from some explanation.

If we wanted to provide exact details on what happens for every variation of parameter values, each of those test scripts would have to be manually created. That creates a great deal of work that has very little value. And it is an expensive way to manage for the long term, as each of those expected results has to be updated every time the related behavior changes. So in general, using a "behaves as expected" default value is best, and then providing extra details when worthwhile.

For some people, this way of thinking can be a bit difficult to take in at first and they have to keep reminding themselves how to best use Hexawise to improve efficiency and effectiveness.

enter-default-expected-value

To enter the default expected value, mouse over the final step in the Auto Scripts screen. When you mouse over that step you will see the "Add Expected Results" link. Click that and add your expected result text.

expected-value-entry

The expected value entered on the last step with no conditions (the when drop-down box is blank) will be the default value used for the export (and therefore the one imported into Gherkin).

In those cases when providing special notes to testers is deemed worth the extra effort, Hexawise has two ways of doing this. If a special expected value exists for the particular conditions in an individual test case, then the special expected value content will be exported (and therefore used for Gherkin).

Conditional expected results can be entered using the auto scripts feature.

Or we can use the requirements feature when we want to require a specific set of parameter values to be tested. If we choose 2-way coverage (the default, pairwise coverage), every pair of parameter values will be tested at least once.

But if we want a specific set of, say, 3 exact parameter values ([account type] = VIP, [withdrawal ATM] = bank-owned ATM, [withdrawal amount] = $600), then we need to include that as a requirement. Each required test script added also includes the option to include an expected result. The sample plan includes a required test case with those parameters and an expected result of "The normal limit of $400 is raised to $600 in the special case of a VIP account using a Goliath National Bank owned ATM."

So, the most effective way to use Hexawise to create a pairwise (or higher strength) test plan for Gherkin data tables is to have the then case be similar to "behaves as expected," and, when special expected result details are needed, to use the auto script or requirements features to include those details. Doing so will result in the expected result entered for that special case being the value used for then in the Gherkin table.

When you click the auto script button the tests are generated; you can download them using the export icon.

autoscripts-export

Then select the option to download as a CSV file.

script-export-options

You will download a zip file that you can then unzip to get 2 folders with various files. The file you want to use for this is the combinations.txt file in the csv directory.

The Ruby code we use to convert the comma-separated values to the pipe-delimited (|) format used for Gherkin is:

#!/usr/bin/env ruby
require 'csv'

# Read the Hexawise export (one test case per row, comma-separated).
tests = CSV.read("combinations.txt")

table = []
tests.each do |test|
  # Drop the first column (the test number) and wrap the remaining
  # parameter values in Gherkin's pipe-delimited table format.
  table << "| " + test[1..-1].join(" | ") + " |\n"
end

IO.write("gherkin.txt", table.join)

Of course, you can use whatever method you wish to convert the format; this is just what we use. See this explanation for a few more details on the process.
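To see the core transformation in isolation, here is a small self-contained sketch run on made-up in-memory data (the rows here are invented, not real Hexawise output):

```ruby
require 'csv'

# Hypothetical export snippet: first column is the test number,
# the remaining columns are parameter values.
csv_text = "1,regular,Goliath National ATM,$500,$200\n" \
           "2,VIP,bank-owned ATM,$500,$600\n"

rows = CSV.parse(csv_text)

# Drop the test number and wrap each row in Gherkin's pipe format.
table = rows.map { |test| "| " + test[1..-1].join(" | ") + " |" }

puts table
# First row becomes: | regular | Goliath National ATM | $500 | $200 |
```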

Now you have your Gherkin file to use however you wish. And as the code is changed over time (perhaps adding parameter value options, new parameters, etc.) you can just regenerate the test plan, export it, and convert it again; the updated Gherkin test plan is then available.

 

Related: Create a Risk-based Testing Plan With Extra Coverage on Higher Priority Areas - Hexawise Tip: Using Value Expansions and Value Pairs to Handle Dependent Values - Designing a Test Plan with Dependent Parameter Values

By: John Hunter on Mar 27, 2014

Categories: Hexawise test case generating tool, Hexawise tips, Scripted Software Testing, Software Testing Efficiency, Testing Strategies

The process used to hire employees is inefficient in general and even more inefficient for knowledge work. Justin Hunter, Hexawise CEO, posted the following tweet:

The labor market is highly inefficient for software testers. Many excellent testers are undervalued relative to average testers. Agree?

 

The tweet sparked quite a few responses:

inefficient-job-market-tweet

I think there are several reasons for why the job market is inefficient in general, and for why it is even more inefficient for software testing than for most jobs.

 

  • Often, how companies go about hiring people is less about finding the best people for the organization and more about following a process that the organization has created. Without intending to, people can become more concerned about following procedural rules than about finding the best people.

  • The hiring process is often created much like software checking: a bunch of simple things to check - not because doing so is actually useful but because simple procedural checks are easy to verify. So organizations require a college degree (and maybe even require a specific major). And they will use keywords to select or reject applicants. Or require certification or experience with a specific tool. Often the checklist used to disqualify people contains items that might be useful signals but shouldn't be used as barriers; yet it is really easy for people that don't understand the work to apply the rules in the checklist to filter the list of applicants.

  • It is very hard to hire software testers well when those doing the hiring don't understand the role software testing should play. Most organizations don't understand, so they hire software checkers. They, of course, don't value people that could provide much more value (software testers that go far beyond checking). Hiring without understanding the work is a common weakness for knowledge work positions, and it is likely even more problematic for software testing because what testers should be doing is understood even less well than most knowledge work.

 

And there are plenty more reasons for the inefficient market.

Here are few ideas that can help improve the process:

  • Spend time to understand and document what your organization seeks to gain from new hires.

  • Deemphasize HR's role in the talent evaluation process and eliminate dysfunctional processes that HR may have instituted. Talent evaluation should be done by people that understand the work that needs to get done. HR can be useful in providing guidance on legal and company-decided policies for hiring. Don't have people that can't evaluate the difference between great testers and good testers decide who should be hired or what salary is acceptable. Incidentally, years of experience, certifications, degrees, past salary and most anything else HR departments routinely use are often not correlated to the value a potential employee brings.

  • A wonderful idea, though a bit of a challenge in most organizations, is to use job auditions. Have the people actually do the job to figure out if they can do what you need or not (work on a project for a couple weeks, for example). This has become more common in the last 10 years but is still rare.

  • I also believe you are better off hiring for somewhat loose job descriptions, if possible, and then adjusting the job to who you hire. That way you can maximize the benefit to the organization based on the people you have. At Hexawise, for example, most of the people we hire have strengths in more than one "job description" area. Developers with strong UI skills, for instance, are encouraged to make regular contributions in both areas.

  • Creating a rewarding work environment helps (this is a long term process). One of the challenges in getting great people is they are often not interested in working for dysfunctional organizations. If you build up a strong testing reputation great testers will seek out opportunities to work for you and when you approach great testers they will be more likely to listen. This also reduces turnover, and while that may not seem to relate to the hiring process, it does (one reason we hire so poorly is we don't have time to do it right, which is partly because we have to do so much of it).

  • Having employees participate in user groups and attending conferences can help your organization network in the testing community. And this can help when you need to hire. But if your organization isn't a great one for testers to work in, they may well leave for more attractive organizations. The "solution" to this risk is not to stunt the development of your staff, but to improve the work environment so testers want to work for your organization.

 

Great quote from Dee Hock, founder of Visa:

Hire and promote first on the basis of integrity; second, motivation; third, capacity; fourth, understanding; fifth, knowledge; and last and least, experience. Without integrity, motivation is dangerous; without motivation, capacity is impotent; without capacity, understanding is limited; without understanding, knowledge is meaningless; without knowledge, experience is blind. Experience is easy to provide and quickly put to good use by people with all the other qualities.

Please share your thoughts and suggestions on how to improve the hiring process.

 

Related: Finding, and Keeping, Good IT People - Improving the Recruitment Process - Six Tips for Your Software Testing Career - Understanding How to Manage Geeks - People: Team Members or Costs - Scores of Workers at Amazon are Deputized to Vet Job Candidates and Ensure Cultural Fit

By: John Hunter on Jan 14, 2014

Categories: Checklists, Software Testing, Career

The Hexawise Software Testing blog carnival focuses on sharing interesting and useful blog posts related to software testing.

 

  • Using mind-mapping software as a visual test management tool by Aaron Hodder - "I want to be able to give and receive as much information as I can in the limited amount of time I have and communicate it in a way that is respectful of others' time and resources. These are my values and what I think constitutes responsible testing."

  • Healthcare.gov and the Tyranny of the Innocents by James Bach - "Management created the conditions whereby this project was 'delivered' in a non-working state. Not like the Dreamliner. The 787 had some serious glitches, and Boeing needs to shape that up. What I’m talking about is boarding an aircraft for a long trip only to be told by the captain 'Well, folks it looks like we will be stuck here at the gate for a little while. Maintenance needs to install our wings and engines. I don’t know much about aircraft building, but I promise we will be flying by November 30th. Have some pretzels while you wait.'"

 

jungle-bridge

Rope bridge in the jungle by Justin Hunter

 

  • Software Testers Are Test Pilots by John Hunter - "Software testers should be test pilots. Too many people think software testing is the pre-flight checklist an airline pilot uses."

  • Where to begin? by Katrina Clokie - "Then you need to grab your Product Owner and anyone else with an interest in testing (perhaps architect, project manager or business analyst, dependent on your team). I'm not sure what your environment is like, usually I'd book an hour meeting to do this, print out my mind map on an A3 page and take it in to a meeting room with sticky notes and pens. First tackle anything that you've left a question mark next to, so that you've fleshed out the entire model, then get them to prioritise their top 5 things that they want you to test based on everything that you could do."

  • Being a Software Tester in Scrum by Dave McNulla - "Pairing on development and testing strengthens both team members. With people crossing disciplines, they improve understanding of the product, the code, and what other stakeholders find important."

  • Stop Writing Code You Can’t Yet Test by Dennis Stevens - "The goal is not to write code faster. The goal is to produce valuable, working, tested, remediated code faster. The most expensive thing developers can do is write code that doesn’t produce something needed by anyone (product, learning, etc). The second most expensive thing developers can do is write code that can’t be tested right away."

  • Is Healthcare.gov security now fixed? by Ben Simo - "I am very happy that the most egregious issue was immediately fixed. Others issues remain. The vulnerabilities I've listed above are defects that should not make it to production. It doesn't take a security expert or “super hacker” to exploit these vulnerabilities. This is basic web security. Most of these are the kinds of issues that competent web developers try to avoid; and in the rare case that they are created, are usually found by competent testers."

  • Embracing Chaos Testing Helps Create Near-Perfect Clouds - "Chaos Monkey works on the simple premise that if we need to design for high availability, we should design for failure. To design for failure, there should be ways to simulate failures as they would happen in real-world situations. This is exactly what a Chaos Monkey helps achieve in a cloud setup.
    Netflix recently made the source code of Chaos Monkey (and other Simian Army services) open source and announced that more such monkeys will be made available to the community."

  • Bugs in UK Post Office System had Dire Consequences - "A vocal minority of sub-postmasters have claimed for years that they were wrongly accused of theft after their Post Office computers apparently notified them of shortages that sometimes amounted to tens of thousands of pounds. They were forced to pay in the missing amounts themselves, lost their contracts and in some cases went to jail. Second Sight said the Post Office's initial investigation failed at first to identify the root cause of the problems. The report says more help should have been given to sub-postmasters, who had no way of defending themselves."

  • Traceability Matrix: Myth and Tricks by Adam Howard - "And this is where we get to the crux of the problem with traceability matrices. They are too simplistic a representation of an impossibly complex thing. They reduce testing to a series of one to one relationships between intangible ideas. They allow you to place a number against testing. A percentage complete figure. What they do not do is convey the story of the testing."

  • Six Tips for Your Software Testing Career by John Hunter - "Read what software testing experts have written. It’s surprising how few software testers have read books and articles about software testing. Here are some authors (of books, articles and blogs) that I've found particularly useful..."

By: John Hunter on Dec 16, 2013

Categories: Software Testing

We have created a new site to highlight Hexawise videos on combinatorial, pairwise and orthogonal array software testing. We have posted videos on a variety of software testing topics including: selecting appropriate test inputs for pairwise and combinatorial software test design, how to select the proper inputs to create a pairwise test plan, and using value expansions for values in the same equivalence classes.

Here is a video with an introduction to Hexawise:

 

 

Subscribe to the Hexawise TV blog. And if you haven't subscribed to the RSS feed for the main Hexawise blog, do so now.

By: John Hunter on Nov 20, 2013

Categories: Combinatorial Testing, Hexawise test case generating tool, Multi-variate Testing, Pairwise Testing, Software Testing Presentations, Testing Case Studies, Testing Strategies, Training, Hexawise tips

Software testers should be test pilots. Too many people think software testing is the pre-flight checklist an airline pilot uses.

 

The checklists airline pilots use before each flight are critical. Checklists are extremely valuable tools that help assure steps in a process are followed. Checklists are valuable in many professions. The Checklist – If something so simple can transform intensive care, what else can it do? by Atul Gawande

 

Sick people are phenomenally more various than airplanes. A study of forty-one thousand trauma patients—just trauma patients—found that they had 1,224 different injury-related diagnoses in 32,261 unique combinations for teams to attend to. That’s like having 32,261 kinds of airplane to land. Mapping out the proper steps for each is not possible, and physicians have been skeptical that a piece of paper with a bunch of little boxes would improve matters much. In 2001, though, a critical-care specialist at Johns Hopkins Hospital named Peter Pronovost decided to give it a try. … Pronovost and his colleagues monitored what happened for a year afterward. The results were so dramatic that they weren’t sure whether to believe them: the ten-day line-infection rate went from eleven per cent to zero. So they followed patients for fifteen more months. Only two line infections occurred during the entire period. They calculated that, in this one hospital, the checklist had prevented forty-three infections and eight deaths, and saved two million dollars in costs.

 

Checklists are extremely useful in software development. And using checklist-type automated tests is a valuable part of maintaining and developing software. But those pass-fail tests are equivalent to checklists - they provide a standardized way to check that planned checks pass. They are not equivalent to thoughtful testing by a software testing professional.

I have been learning about software testing for the last few years. This distinction between testing and checking software was not one I had before. Reading experts in the field, especially James Bach and Michael Bolton is where I learned about this idea.

 

Testing is the process of evaluating a product by learning about it through experimentation, which includes to some degree: questioning, study, modeling, observation and inference.

(A test is an instance of testing.)

Checking is the process of making evaluations by applying algorithmic decision rules to specific observations of a product.

 

I think this is a valuable distinction to understand when looking to produce reliable and useful software. Both are necessary. Both are done too little in practice. But testing (as defined above) is especially underused - in the last 5 years checking has been increasing significantly, which is good. But now we really need to focus on software testing - thoughtful experimenting.
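To make the distinction concrete, here is a minimal Ruby sketch of a "check" in the sense defined above: an algorithmic decision rule applied to a specific observation of the product (the rule, names and values are invented for illustration):

```ruby
# A "check": an algorithmic pass/fail rule applied to a specific
# observation. The rule and values here are purely illustrative.
def login_check(observed_status)
  observed_status == 200  # passes only when the expected result occurs
end

# A checking run can confirm known expectations...
puts login_check(200)  # the check passes
puts login_check(500)  # the check fails

# ...but it cannot ask new questions. Deciding which observations
# matter, and what a surprising result means, is testing.
```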

 

Related: Mistake Proofing the Deployment of Software Code - Improving Software Development with Automated Tests - Rapid Software Testing Overview Webcast by James Bach - Checklists in Software Development

By: John Hunter on Nov 13, 2013

Categories: Checklists, Software Testing, Testing Checklists, Testing Strategies

Hexawise allows you to adjust testing coverage to focus more thorough coverage on selected, high-priority areas. Mixed strength test plans allow you to select different levels of coverage for different parameters.

Increasing from pairwise to "trips" (3-way) coverage expands the test plan so that bugs that result from 3 parameters interacting can be found. That is a good thing. But the tradeoff is that it requires more tests to catch the interactions.
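To see why higher-strength coverage requires more tests, one can count the distinct interaction combinations that must each appear at least once. A Ruby sketch with made-up parameter value counts (not drawn from any real plan):

```ruby
# Hypothetical plan: four parameters with 3, 2, 4 and 2 values each.
value_counts = [3, 2, 4, 2]

# Number of distinct n-way value combinations that a plan of
# strength n must cover at least once.
def interactions(counts, n)
  counts.combination(n).sum { |combo| combo.inject(:*) }
end

puts interactions(value_counts, 2)  # 44 pairs to cover
puts interactions(value_counts, 3)  # 76 triples to cover
```

More combinations to cover generally means more generated tests, which is the tradeoff mixed-strength plans let you manage.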

The mixed-strength option that Hexawise provides allows you to select a higher coverage level for some parameters in your test plan. That lets you control the balance between increased test thoroughness and the workload created by additional tests.

 

hexawise-mixed-strength

 

See our help section for more details on how to create a risk-based testing plan that focuses more coverage on higher priority areas.

As that example shows, Hexawise allows you to focus additional thoroughness on the 3 highest priority parameters with just 120 tests while also providing full pairwise coverage on all factors. Mixed strength test plans are a great tool to provide extra benefit to your test plans.

 

Related: How Not to Design Pairwise Software Tests - How to Model and Test CRUD Functionality - Designing a Test Plan with Dependent Parameter Values

By: John Hunter on Nov 6, 2013

Categories: Combinatorial Testing, Efficiency, Hexawise test case generating tool, Hexawise tips, Software Testing Efficiency, Testing Strategies

When we built our test case design tool, we thought:

  1. When used thoughtfully, this test design approach is powerful.

  2. The kinds of things going on "underneath the covers," like the optimization algorithms and combinatorial coverage, risk confusing and alienating potential users before they get started.

  3. Designing an easy-to-use interface is absolutely critical. (We kept the "delighter" in our MVP release.)

  4. As we have continued to evolve, we've kept focusing on design. And, having learned that customers don't find the tool itself hard to use but do sometimes find the test design principles confusing, we've focused on adding self-paced training modules into the tool (in addition to training and help web resources and Hexawise TV tutorials).

 

hexawise-levels

View of the screen showing a user's progress through the integrated learning modules in Hexawise

 

Our original hope was that if we focused on design (and iterated it based on user feedback), Hexawise would "sell itself," and that users would tell their friends and drive adoption through word of mouth. That's exactly what we've seen.

We did learn that we needed to include more help getting started with the test design principles needed to successfully create tests. We knew that was a bit of a hurdle, but we have seen it was more of a hurdle than we anticipated, so we have put more resources into helping users get over that initial resistance. And we have increased the assistance we give clients on how to think differently about creating test plans.

A recent interview with Andy Budd takes an interesting look at the role of design in startups.

Andy Budd: Design is often one of the few competitive advantages that people have. So I think it’s important to have design at the heart of the process if you want to cross that chasm and have a hugely successful product. ... Des Traynor: If a product is bought, it means that users are so attracted to it, that they’ll literally chase it down and queue up for it. If it’s sold, it means that there are people on commission trying to force it into customer’s hands. And I find design can often be the key difference between those two types of models for business.

Andy: There’s a lot of logic in spending less money on marketing and sales, and more money on creating a brilliantly delightful product. If you do something that people really, really want, they will tell their friends.

 

As W. Edwards Deming said in Out of the Crisis:

Profit in business comes from repeat customers, customers that boast about your product and service, and that bring friends with them

 

The benefits of delighting customers so much that they help promote your product are enormous. This result is about the best one any business can hope for. And great design is critical in creating a user experience that delights users so much they want to share it with their friends and colleagues.

The article also discusses one of the difficult decision points for a software startup. The minimum viable product (MVP) is a great idea to help test out what customers actually want (not just what they say they want). The MVP is a very useful concept (pushed into many people's consciousness by the popularity of lean startup and agile software development).

MVP is really about testing the marketplace. The aim is to get a working product in people's hands and to learn from what happens. If the user experience is a big part of what you are offering (and normally it should be) a poor user experience (Ux) on a MVP is not a great way to learn about the market for what you are thinking of offering.

In my opinion, adding features to existing software can be tested in a MVP way with less concern for Ux than completely new software, but I imagine some would want the great Ux for this case too. My experience is that users that appreciate your product can understand the rough Ux in a mock up and separate the usefulness from the somewhat awkward Ux. This is especially true for example with internal software applications where the developers can directly interact with the users (and you can select people you know who can separate the awkward temporary Ux from the usefulness of a new feature).

 

Related: The two weeks of on-site visits with testing teams proved to be a great way to: (a) reconnect with customers, (b) get actionable input about what users like / don’t like about our tool, (c) identify new ways we can continue to refine our tool, and even (d) understand a couple unexpected ways teams are using it. - Automatically Generating Expected Results for Tests in Hexawise - Pairwise and Combinatorial Software Testing in Agile Projects

By: John Hunter on Sep 17, 2013

Categories: Experimenting

Software testing is becoming more critical as the proportion of our economic activity that is dependent on software (and often very complex software) continues to grow. There seems to be a slowly growing understanding of the importance of software testing (though it still remains an under-appreciated area). My belief is that skilled software testers will be appreciated more with time and that the opportunities for expert software testers will expand.

Here are a few career tips for software testers.

 

  • Read what software testing experts have written. It's surprising how few software testers have read books and articles about software testing. Here are some authors (of books, articles and blogs) that I've found particularly useful. Feel free to suggest other software testing authors in the comments.

James Bach

Cem Kaner

Lisa Crispin

Michael Bolton

Lanette Creamer

Jerry Weinberg

Keith Klain

James Whittaker - posts while at Google and his posts with his new employer, Microsoft

Pradeep Soundararajan

Elisabeth Hendrickson

Ajay Balamurugadas

Matt Heusser

our own Justin Hunter

 

  • Join the community. You'll learn a lot as a lurker, and even more if you interact with others. Software Testing Club is one good option. Again, following those experts (listed above) is useful; engaging with them on their blogs and on Twitter is better. The software testing community is open and welcoming. In addition, see: 29 Testers to Follow on Twitter and Top 20 Software Testing Tweeps. Interacting with other testers who truly care about experimenting, learning, and sharing lessons learned is energizing.

 

  • Develop your communication skills Communication is critical to a career in software testing as it is to most professional careers. Your success will be determined by how well you communicate with others. Four critical relationship are with: software developers, your supervisors, users of the software and other software testers.Unfortunately in many organizations, managers and corporate structures restrict your communication with some of these groups. You may have to work with what you are allowed, but if you don't have frequent, direct communication with those four groups, you won't be able to be as effective.
    Work on developing your communication skills. Given the nature of software testing two particular types of communication - describing problems ("what is it?") and explaining how significant problem is ("why should we fix it?") - are much more common than in many other fields. Learning how to communicate these things clearly and without making people defensive is of extra importance for software testers.
    A great deal of your communication will be in writing and developing your ability to communicate clearly will prove valuable to your continued growth in your career.
    Writing your own blog is a great way to further your career. You will learn a great deal by writing about software testing. You will also develop your writing ability merely by writing more. And you will create a personal brand that will grow your network of contacts. It also provides a way for those possibly interested in hiring you to learn more. We all know hiring is often done outside the formal job announcement and application process. By making a name for yourself in the software testing community you will increase the chances of being recruited for jobs.

 

  • Do what you love. Your career will be much more rewarding if you find something you love to do. You will most likely have parts of your job you could do without, but finding something you have a passion for makes all the difference in the world. If you don't have a passion for software testing, you are likely better off finding something you are passionate about and pursuing a career in that field.

Your work is going to fill a large part of your life, and the only way to be truly satisfied is to do what you believe is great work. And the only way to do great work is to love what you do. If you haven’t found it yet, keep looking. Don’t settle.

Steve Jobs

 

  • Practice and learn new techniques and ideas. James Bach provides such opportunities. The Hexawise tool also lets you easily and quickly try out different scenarios and learn by doing. We also provide guided training, help and webcasts explaining software testing concepts and how to apply them using Hexawise. We include a pathway that guides you through the process of learning about software testing, with learning modules, practical experience and tests along the way to make sure you learn what is important. And once you reach the highest level and become a Hexawise guru, we offer a community experience that allows everyone reaching that level to share their experience and learn from each other.

 

  • Go to good software testing conferences. I've heard great things about CAST (by the Association for Software Testing), Test Bash, and Let's Test in particular. Going to conferences is energizing because while you're there, you're surrounded by the minority of people in this industry who are passionate about software testing. Justin, Hexawise founder and CEO, will be presenting at CAST next week, August 26-28 in Madison, Wisconsin.

 

Related: Software Testing and Career Advice by Keith Klain - Looking at the Empirical Evidence for Using Pairwise and Combinatorial Software Testing - Maximizing Software Tester Value by Letting Them Spend More Time Thinking

By: John Hunter on Aug 22, 2013

Categories: Software Testing

As I looked at our administrative dashboard for Hexawise.com today I was struck by how diverse our users are. I was looking at the most active users in the last week and the top 11 users were from 8 different countries: USA, France, Norway, India, Israel, Spain, Thailand and Canada. The only country with more than 1 was the USA with 2 users from Florida and 1 from Wisconsin. Brazil, Belgium and the Russian Federation were also represented in the top 25.

If you look at the top 25 users in the last month, in addition to the countries above (except Belgium) 3 more countries are represented: China, Netherlands and Malaysia.

hexawise-visitors

Visitors to the Hexawise web site in the last month by country.

 

Looking at our web site demographics the top countries of our visitors were: United States, India, Philippines, Australia, Brazil, United Kingdom, Israel, Malaysia, Italy and Netherlands. In the last month we have had visitors from 84 countries.

It is exciting to see the widespread use of Hexawise across the globe. The feedback on the upgrades included in Hexawise 2.0 has been very positive. We continue to get more and more users, which makes us happy: we believe we have created a valuable tool for software testers and it is exciting to get confirmation from users. Please share your experiences with us; knowing what you like is helpful, and we have made numerous enhancements based on user feedback.

 

Related: Empirical Evidence for Using Pairwise and Combinatorial Software Testing - Hexawise TV (webcasts on Hexawise and wise software testing practices) - Training material on Hexawise and software testing principles

By: John Hunter on Jul 17, 2013

Categories: Hexawise test case generating tool

The Hexawise Software Testing blog carnival focuses on sharing interesting and useful blog posts related to software testing.

 

  • T-Shaped Testers and their role in a team by Rob Lambert - "I believe that testers, actually – anyone, can contribute a lot more to the business than their standard role traditionally dictates. The tester’s critical and skeptical thinking can be used earlier in the process. Their other skills can be used to solve other problems within the business. Their role can stretch to include other aspects that intrigue them and keep them interested."

  • Testing triangles, pyramids and circles, and UAT by Allan Kelly - "Thus: UAT and Beta testing can only truly be performed by USERS (yes I am shouting). If you have professional testers performing it then it is in effect a form of System Testing.
    This also means UAT/Beta cannot be automated because it is about getting real life users to user the software and get their feedback. If users delegate the task to a machine then it is some other form of testing."

 

flower-justin

Photo by Justin Hunter, taken in South Africa.

 

  • Software Testing in Distributed Teams by Lisa Crispin - "Remote pairing is a great way to promote constant communication among multiple offices and to keep people working from home 'in the loop'.
    I often remote pair with a developer to specify test scenarios, do exploratory testing, or write automated test scripts. We use screen-sharing software that allows either person to control the mouse and keyboard. Video chat is best, but if bandwidth is a problem, audio will do. Make sure everyone has good quality headphones and microphone, and camera if possible."

  • Seven Kinds of Testers by James Bach - "I propose that there are at least seven different types of testers: administrative tester, technical tester, analytical tester, social tester, empathic tester, user, and developer. As I explain each type, I want you to understand this: These types are patterns, not prisons. They are clusters of heuristics; or in some cases, roles. Your style or situation may fit more than one of these patterns."

  • Which is Better, Orthogonal Array or Pairwise Software Testing? by John Hunter and Justin Hunter - "After more study I have concluded that: Pairwise is more efficient and effective than orthogonal arrays for software testing. Orthogonal Arrays are more efficient and effective for manufacturing, and agriculture, and advertising, and many other settings."

  • Experience Report: Pairing with a Programmer by Erik Brickarp - "We have different investigation methods. The programmer did low level investigations really well adding debug printouts, investigating code etc. while I did high level investigations really well checking general patterns, running additional scenarios etc. Not only did this make us avoid getting stuck by changing 'method' but also, my high level investigations benefited from his low level additions and vice versa."

  • I decided to evolve a faster test case by Ben Tilly - "I first wrote a script to run the program twice, and report how many things it found wrong on the second run. I wrote a second script that would take a record set, sample half of it randomly, and then run the first script.
    I wrote a third script that would take all records, run 4 copies of my test program, and then save the smaller record set that best demonstrated the bug. Wash, rinse, and repeat..."

  • Tacit and Explicit Knowledge and Exploratory Testing by John Stevenson - "It is time we started to recognise that testing is a tacit activity and requires testers to think both creativity and critically."

  • Tear Down the Wall by Alan Page - "There will always be a place for people who know testing to be valuable contributors to software development – but perhaps it’s time for all testing titles to go away?"

By: John Hunter on Jul 10, 2013

Categories: Software Testing

Here is a wonderful webcast that provides a very quick, and informative, overview of rapid software testing.

Software testing is when a person is winding around a space searching that space for important information.

 

James Bach starts by providing a definition of software testing to set the proper thinking for the overview.

Rapid software testing is a set of heuristics [and a set of skills]. Heuristics live at the border of explicit and tacit knowledge... Heuristics solve problems when they are under the control of a skilled human... It takes skill to use the heuristics effectively - to solve the problems of testing. Rapid software testing focuses on the tester... Tacit skills are developed through practice.

 

Automated software tests are useful but limited. In the context of rapid software testing only a human tester can do software testing (automated checks are defined as "software checking"). See his blog post: Testing and Checking Refined.

 

Related: People are better at figuring out interesting ideas to test. Generating a highly efficient, maximally varied, minimally repetitive set of tests based on a given set of test inputs is something computer algorithms are more effective at than a person. - Hexawise Lets Software Testers Spend More Time Focused on Important Testing Issues - 3 Strategies to Maximize Effectiveness of Your Tests

By: John Hunter on Jul 2, 2013

Categories: Software Testing, Testing Strategies

We have created the Hexawise Software Testing Glossary. The purpose is to provide some background on both general software testing terms and some Hexawise-specific details.

While we continue to develop, expand and improve the Hexawise software itself, we also realize that users' success with Hexawise rests not only on the tool itself but also on the knowledge of those using it. The software testing glossary adds to the list of offerings we provide to help users. Other sources of help include sample test plans, training material (on software testing in general and Hexawise in particular), detailed help on specific topics, webcast tutorials on using Hexawise and this blog.

In Hexawise 2.0 we integrated the idea of building your expertise directly into the Hexawise application with specific actions, reading and practicums that guide users from novice to practitioner to expert to guru.

achievements

View for a Hexawise user who is currently a practitioner and is on the way to becoming an expert.

 

We aim to provide software services, and educational material, that help software testers focus their energy, insight, knowledge and time where they are most valuable, while Hexawise automates some tasks and performs others that are essentially impossible for people to do manually (such as creating the most effective test plan, with pairwise or better coverage, given the parameters and values determined by the tester).

Our help, training and webcast resources have been found to be very useful by Hexawise users. We hope the software testing glossary will also prove to be of value to Hexawise users.

 

Related: Maximizing Software Tester Value by Letting Them Spend More Time Thinking - Hexawise Tip: Using Value Expansions and Value Pairs to Handle Dependent Values - Maximize Test Coverage Efficiency And Minimize the Number of Tests Needed

By: John Hunter on Jun 24, 2013

Categories: Hexawise test case generating tool, Hexawise tips, Software Testing

The Expected Result feature in Hexawise provides 3 benefits in the software testing process. Specifically, it...

  • Saves you time documenting test scripts.

  • Makes it less likely for you to make a mistake when documenting Expected Results for your tests.

  • Makes maintaining sets of tests easy from release to release and/or in the face of requirements changes.

There are two different places you can insert auto-generated Expected Results into your tests: the "Auto-script" screen, and the "Requirements" screen. So which place should you use?

It may depend upon how many values are required to trigger the Expected Result you're documenting.

If you want to create an Expected Result that can be triggered by 0, 1, or 2 values, you can safely add your Expected Result using the Auto-script screen. If you're creating an Expected Result that can only be triggered when 3 or more specific values appear in the same test case, then - at least if you're creating a set of 2-way tests - you will probably want to add that particular Expected Result to the Requirements screen, not on the Auto-script screen. Why is that?

It's because using the Expected Result feature on the Auto-script screen is like telling the test generation engine: "if you happen to see a test that matches the values I provided, then show this Expected Result text to the tester." This automates the process so the tester is given clear instructions on what to look for in cases that might otherwise be unclear. If the only way to trigger the Expected Result is a combination of 3 or more specific values, however, a 2-way test plan is not guaranteed to include any test in which all of those values appear together - so the Expected Result might never be shown to the tester.

Using the Requirements feature, in contrast, is like telling the test generation engine: "make sure that the specific combination of values I'm specifying here definitely appears together in the set of tests at least one time (and, if I tell you that there's an expected result associated with that scenario, please be sure to include it)." The Requirements feature is used when you not only want to provide an expected result for a specific scenario but also want to require that the scenario be included in the generated test plan.
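To make the difference concrete, here is a hypothetical sketch in plain Python (this is not the Hexawise API; the parameter names, values and the `annotate` helper are all invented for illustration). It models an Auto-script Expected Result as a pattern match over an already generated pairwise suite, and shows why a 3-value trigger can silently never fire in a 2-way plan:

```python
# Hypothetical sketch, NOT the Hexawise API: an Auto-script Expected
# Result behaves like a pattern match over the generated tests.

def annotate(tests, trigger, expected):
    """Attach `expected` to every test whose values include the whole
    `trigger` combination - i.e. 'if you happen to see a matching
    test, show this Expected Result text to the tester'."""
    for test in tests:
        if all(test.get(k) == v for k, v in trigger.items()):
            test["expected_result"] = expected
    return tests

# A tiny 2-way (pairwise) suite over three invented parameters.
# Every PAIR of values appears in some test, but the specific
# TRIPLE admin + Chrome + paid never does.
suite = [
    {"user": "admin", "browser": "Chrome",  "plan": "free"},
    {"user": "admin", "browser": "Firefox", "plan": "paid"},
    {"user": "guest", "browser": "Chrome",  "plan": "paid"},
    {"user": "guest", "browser": "Firefox", "plan": "free"},
]

# A 2-value trigger fires: one test pairs admin with Chrome.
annotate(suite, {"user": "admin", "browser": "Chrome"},
         "See admin dashboard")

# A 3-value trigger matches nothing: no single test combines
# admin + Chrome + paid, so this rule never fires - which is why
# such combinations belong on the Requirements screen instead.
annotate(suite, {"user": "admin", "browser": "Chrome", "plan": "paid"},
         "See upgrade banner")
```

Seeding the 3-value combination through the Requirements screen would force a test containing all three values into the plan, so the second rule would then have a test to attach to.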

See our help page for more details: How to save test documentation time by automatically generating Expected Results in test scripts.

 

Related: How to Model and Test CRUD Functionality - 3 Strategies to Maximize Effectiveness of Your Tests

By: John Hunter on Jun 4, 2013

Categories: Hexawise test case generating tool, Hexawise tips, Software Testing

The Hexawise Software Testing blog carnival focuses on sharing interesting and useful blog posts related to software testing.

 

  • Testing and Checking Refined by James Bach and Michael Bolton - "a robust role for tools in testing must be embraced. As we work toward a future of skilled, powerful, and efficient testing, this requires a careful attention to both the human side and the mechanical side of the testing equation. Tools can help us in many ways far beyond the automation of checks. But in this, they necessarily play a supporting role to skilled humans; and the unskilled use of tools may have terrible consequences."

  • Bugs Spread Disease by Elisabeth Hendrickson - "Cancel all the bug triage meetings; spend the time instead preventing and fixing defects. Test early and often to find the bugs earlier. Fix bugs as soon as they’re identified."

 

haija-sophia-istanbul

Hagia Sophia in Istanbul, Turkey, by Justin Hunter

 

  • Becoming a World-Class Tester by Ilari Henrik Aegerter - "World-class testing means following the urge to get to the bottom of things, not giving up until you have experienced enough value for yourself, thinking more expansively about the role of a tester, and thinking non-traditionally about what skills are required to thrive in the role."

  • Test Coverage Triage by Parimala Hariprasad - "My experience shows that a mind map based test design works great at this stage. Business teams will be thrilled to have a Visual Walkthrough of tests and provide inputs quickly. As a participant observer on several dozen IT projects, I have found out that testers’ personally walking them through the tests works really well."

  • What Does it Take to Change the Software Testing Industry? Courage! by Keith Klain - "According to Mark Twain, courage is not the absence of fear – but the mastery of it. There are people working in software testing all over the globe who are questioning long standing ways of working - some for the first time. Get yourself energized and get involved."

  • Explaining Exploratory Testing Relies On Good Notes. by Rob Lambert - "Being able to do good exploratory testing is one thing, being able to explain this testing (and the insights it brings) to the people that matter is a different set of skills. I believe many testers are ignoring and neglecting the communication side of their skills, and it’s a shame because it may be directly affecting the opportunities they have to do exploratory testing in the first place."

  • Human-Computer Cooperation by John Hunter - "people are better at figuring out interesting ideas to test. Once those are identified, those test conditions and other test ideas need to be combined together and put into tests. Generating a highly efficient, maximally varied, minimally repetitive set of tests based on a given set of test inputs is something computer algorithms are more effective at than a person."

  • Software testing can be sexy, too by Ole Lensmar - "What Klain, who serves on the board of the AST, and the others would like to do is not only bring those skills back home but increase the availability and accessibility of this kind of training and job opportunity. Ideally, colleges and universities would start offering majors in Software Testing so we can set young people on a path toward testing as a career."

  • Yet another future of testing post (YAFOTP) by Alan Page - "I think we’ll always need people to look at big end-to-end scenarios and determine how non-functional attributes (e.g. performance, privacy, usability, reliability, etc.) contribute to the user experience. Some of this evaluation will come from manually walking through scenarios), but there will be plenty of need for programmatic measurement and analysis as well..."

By: John Hunter on Apr 18, 2013

Categories: Software Testing

We are proud of the enhancements about to be delivered in version 2.0 of Hexawise, our software test plan generation solution. Hexawise is a software test design tool that helps teams test their software faster (by decreasing the time it takes to design and document tests and the time it takes to execute them) and more thoroughly (by scientifically packing as much coverage into each test as possible). We provide it as a software-as-a-service solution. Three of the most important enhancements in our dramatically improved, soon-to-be-released version are:

 

  • Requirements (forcing certain specific combinations to appear in the test plan)

  • Adding Expected Results, which saves time in test case documentation

  • Better auto-scripting, which also saves time in test case documentation

 

 

The embedded slide presentation above provides a graphic illustration of these features.

At the "Required" screen (short for "Requirements"), you will be able to add specific combinations of test conditions/test inputs to the tests that Hexawise generates. Tracing requirements to specific test scripts can be challenging, particularly as requirements change and sets of regression tests age. You'll find this feature helps make requirements traceability easier and less error-prone.

You can add Expected Results to the generated test scripts. Each test then states exactly what the result should be, so that someone reviewing the outcome of the test can verify whether the expected conditions were actually met. If Hexawise is the first test generation tool you've used, you might take this for granted and think that this is just how the world should be.

If you've used other test generation tools before finding Hexawise, though, you might feel compelled to publicly declare your love of Hexawise and/or send gifts to the engineers and designers at Hexawise who created this great feature. We believe it's a feature unique to Hexawise that should save you huge amounts of time when you document your test scripts.

Auto-scripting was much appreciated by users, and we have enhanced this feature significantly in Hexawise 2.0. Our help system includes a thorough review of it (and many other features of Hexawise): Auto-Script test cases to quickly include detailed written tester instructions.

 

Related: How do I save test documentation time by automatically generating Expected Results in test scripts? - Pairwise and Combinatorial Software Testing in Agile Projects - Hexawise Tip: Using Value Expansions and Value Pairs to Handle Dependent Values - Maximizing Software Tester Value by Letting Them Spend More Time Thinking

By: John Hunter on Mar 21, 2013

Categories: Combinatorial Testing, Hexawise test case generating tool, Hexawise tips

Attempting to assess the relative benefits of more than 200 software development practices is not for the faint of heart. Context-specific considerations run the risk of confounding the conclusions at every turn. Even so, Capers Jones, a software development expert with dozens of years of experience and nearly twenty books related to software development to his credit, recently attempted the task. He's literally devoted decades of his career to assessing such things for clients. We're quite pleased with how using Hexawise fared in the analysis.

Scoring and Evaluating Software Methods, Practices and Results by Capers Jones (Vice President and CTO, Namcook Analytics) provides some great ideas on software project management. The article is based on Software Engineering Best Practices, with some new data taken from The Economics of Software Quality (two of the books Capers Jones has authored).

Software development, maintenance, and software management have dozens of methodologies and hundreds of tools available that are beneficial. In addition, there are quite a few methods and practices that have been shown to be harmful, based on depositions and court documents in litigation for software project failures.

In order to evaluate the effectiveness or harm of these numerous and disparate factors, a simple scoring method has been developed. The scoring method runs from +10 for maximum benefits to -10 for maximum harm.

The scoring method is based on quality and productivity improvements or losses compared to a mid-point. The mid point is traditional waterfall development carried out by projects at about level 1 on the Software Engineering Institute capability maturity model (CMMI) using low-level programming languages. Methods and practices that improve on this mid point are assigned positive scores, while methods and practices that show declines are assigned negative scores.

The data for the scoring comes from observations among about 150 Fortune 500 companies, some 50 smaller companies, and 30 government organizations. Negative scores also include data from 15 lawsuits.

 

The article provides guidance, based on the results achieved by many, and varied, organizations with respect to software projects.

finding and fixing bugs is overall the most expensive activity in software development. Quality leads and productivity follows. Attempts to improve productivity without improving quality first are not effective.

 

This is an extremely important point for business managers to understand. Those involved in software development professionally don't find this surprising. But business people often greatly underestimate the costs of maintaining and updating software. The costs of bugs introduced by fairly minor feature requests to a system that doesn't have good software test coverage or test plans often create far more trouble than business managers expect.

This is especially true because there is a high correlation between software applications that have poor software testing processes (including poor test coverage and poor or completely missing test plans) and those applications that were designed without long-term maintenance in mind. Both deficiencies result from decisions made to minimize initial development costs and time. Both show a lack of appreciation for wise software engineering practices and software project management.

The article discusses a complicating factor in assessing the most effective software development practices: the extremely wide differences in software engineering scope. Projects range from simple applications one software developer can create in a short period of time to massive applications requiring thousands of developer-years of effort.

In order to be considered a “best practice” a method or tool has to have some quantitative proof that it actually provides value in terms of quality improvement, productivity improvement, maintainability improvement, or some other tangible factors.

Looking at the situation from the other end, there are also methods, practices, and social issues have demonstrated that they are harmful and should always be avoided. ... Although the author’s book Software Engineering Best Practices dealt with methods and practices by size and by type, it might be of interest to show the complete range of factors ranked in descending order, with the ones having the widest and most convincing proof of usefulness at the top of the list. Table 2 lists a total of 220 methodologies, practices, and social issues that have an impact on software applications and projects.

The average scores shown in table 2 are actually based on the composite average of six separate evaluations:

  1. Small applications < 1000 function points

  2. Medium applications between 1000 and 10,000 function points

  3. Large applications > 10,000 function points

  4. Information technology and web applications

  5. Commercial, systems, and embedded applications

  6. Government and military applications

The data for the scoring comes from observations among about 150 Fortune 500 companies, some 50 smaller companies, and 30 government organizations and around 13,000 total projects. Negative scores also include data from 15 lawsuits.

The scoring method does not have high precision and the placement is somewhat subjective.

Top 10 tools and practices listed in the article:

Practice Score
1. Reusability (> 85% zero-defect materials) 9.65
2. Requirements patterns - InteGreat 9.50
3. Defect potentials < 3.00 per function point 9.35
4. Requirements modeling (T-VEC) 9.33
5. Defect removal efficiency > 95% 9.32
6. Personal Software Process (PSP) 9.25
7. Team Software Process (TSP) 9.18
8. Automated static analysis - code 9.17
8. Mathematical test case design (Hexawise) 9.17
10. Inspections (code) 9.15

 

We are obviously thrilled that Hexawise is listed. We have seen the value our customers have achieved using mathematically based combinatorial software test plans (see several Hexawise case studies). It is great to see that value recognized in comparison to other software development practices and judged to be of such high value to software development projects.

The article makes clear that the importance of the results lies not in "the precision of the rankings, which are somewhat subjective, but in the ability of the simple scoring method to show the overall sweep of many disparate topics using a single scale."

The methodology behind the results shown in the article can be used to evaluate your organization's software development practices and determine opportunities for improvement. But, as stated above, software projects cover a huge range of scopes. The specific needs of a software project will drive which practices are most critical to achieving success on that project. The article's list of which practices have provided huge value and which have resulted in great harm is a very helpful resource, but project managers, software developers and testers need to apply their judgement to the information the article provides in order to achieve success.

A leading company will deploy methods that, when summed, total to more than 250 and average more than 5.5. Lagging organizations and lagging projects will sum to less than 100 and average below 4.0.

 

The use of Hexawise has been growing, and that has helped increase the number of software projects using best practices (those that score 9 or higher); however, as the article states, there is still quite a need for improvement.

From data and observations on the usage patterns of software methods and practices, it is distressing to note that practices in the harmful or worst set are actually found on about 65% of U.S. Software projects as noted when doing assessments. Conversely, best practices that score 9 or higher have only been noted on about 14% of U.S. Software projects. It is no wonder that failures far outnumber successes for large software applications!

 

A score of 9 to 10 for a practice means that the practice results in a 20-30% improvement in the quality and productivity of software projects.

Conclusion: while your individual mileage may vary, this report provides further evidence that using Hexawise really does lead to large, measurable improvements in efficiency and effectiveness.

We are very proud of the success of Hexawise thus far; as a new year starts we see huge potential to help many organizations improve their software development efforts.

The article includes a list of references and suggested readings that is valuable. Included in that list are:

DeMarco, Tom; Controlling Software Projects, 1986, 296 pages.

Gilb, Tom and Graham, Dorothy; Software Inspections, 1994, 496 pages.

Jones, Capers; Applied Software Measurement, 3rd edition, 2008, 662 pages.

McConnell, Steve; Code Complete, 2nd edition, 2004, 960 pages. (I'm linking to the 2nd edition; the article references the 1st edition.)

 

Related: Maximizing Software Tester Value by Letting Them Spend More Time Thinking - A Powerful Software Test Design Approach - 3 Strategies to Maximize Effectiveness of Your Tests

By: John Hunter on Mar 18, 2013

Categories: Software Testing, Software Testing Efficiency, Testing Case Studies, Testing Strategies

The video makes the case that the value to be gained from human-computer cooperation is being ignored far too often. A focus on maximizing results by improving the ability of people and computers to cooperate is worthwhile.

What this means in practice is people taking more responsibility for using computers as tools to accomplish what is needed. This already happens a great deal, but in a way that goes unexamined, so the current methods leave a great deal of room for improvement. We rarely focus on how to enhance the cooperation; we mainly see the software as one separate part of the process and a person's contribution as another. Focusing on computers (and software) as tools used by people to accomplish objectives is helpful.

Viewing software as a tool used to achieve an aim mirrors the idea of a company viewing their products and services as solving specific problems for customers. The tool is valuable in how it is used by people - not in some abstract way (say technical specifications).

Weaknesses in how people use the product, service or software are often weaknesses in focusing on the way people will really use it versus how it is "supposed" to be used. By understanding the process that matters is one of a person and the computer together adding value, we can create more effective software applications.

People often try to design software solutions that remove the need for humans to be involved. For complex problems, though, it is often much more effective to design solutions where people take advantage of computer tools to achieve results. People should use computers to automate the things that make sense to automate, keep track of data, and make calculations, thus leaving themselves free to use their superior insight, vision, intuition, and flexibility in making judgements.

Hexawise is built to take advantage of this type of cooperation. Even though it is a "test design tool," Hexawise doesn't take the lead role in designing the tests. Humans do. Humans do the things that they're better than computers at, such as (a) thinking up clever test ideas and test inputs, and (b) identifying, from dozens of possible parameters, which are the ones that are most important to vary in order to achieve potentially interesting interactions from test to test. Computer algorithms aren't nearly as good as humans at such tasks. Computers, though, will run circles around any human who tries to construct a set of tests such that (a) the variation between each test is as different as possible, (b) the wasteful repetition of combinations of values that appear together in different tests is minimized, (c) gaps in coverage are minimized (by, e.g., ensuring that every single pair or every single 3-way combination of values appears in at least one test case), and (d) all of the above objectives are achieved in the fewest possible test cases. Computer algorithms eat those kinds of challenges for breakfast. And complete them without error. In seconds.

Said a different way, people are better at figuring out interesting ideas to test. Once those are identified, those test conditions and other test ideas need to be combined together and put into tests. Generating a highly efficient, maximally varied, minimally repetitive set of tests based on a given set of test inputs is something computer algorithms are more effective at than a person.
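This division of labor can be illustrated with a toy pair-covering algorithm. The sketch below uses a naive greedy approach, not Hexawise's actual algorithm, and the parameter names are made up for illustration: a human supplies the parameters and values, and the code combines them into a small suite that still covers every pair.

```python
from itertools import combinations, product

# Hypothetical test inputs a tester might identify (illustrative names only).
parameters = {
    "Browser": ["Chrome", "Firefox", "Safari"],
    "OS": ["Windows", "macOS"],
    "Account": ["Free", "Paid"],
}

def all_pairs(params):
    """Every 2-way combination of values from two different parameters."""
    pairs = set()
    for (p1, vals1), (p2, vals2) in combinations(params.items(), 2):
        for v1, v2 in product(vals1, vals2):
            pairs.add(frozenset([(p1, v1), (p2, v2)]))
    return pairs

def pairs_in_test(test):
    """The 2-way combinations covered by one complete test case."""
    return {frozenset(c) for c in combinations(test.items(), 2)}

def greedy_pairwise(params):
    """Greedily pick the candidate test covering the most uncovered pairs."""
    uncovered = all_pairs(params)
    candidates = [dict(zip(params, combo)) for combo in product(*params.values())]
    suite = []
    while uncovered:
        best = max(candidates, key=lambda t: len(pairs_in_test(t) & uncovered))
        suite.append(best)
        uncovered -= pairs_in_test(best)
    return suite

suite = greedy_pairwise(parameters)
print(len(list(product(*parameters.values()))))  # 12 exhaustive tests
print(len(suite))  # the pairwise suite is smaller, yet covers every pair
```

A production tool does much better than this greedy sketch (and supports 3-way through 6-way coverage, weighting and constraints), but the shape of the computation is the same: humans choose the inputs, the algorithm arranges them.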

Hexawise is not intended to eliminate the need for software testing experts. It is designed to let those experts focus on what they do well while making everything else easy (creating test plans based on the experts' inputs, etc.). Removing time-consuming tasks lets software testing experts spend their time thinking. Hexawise also creates test plan coverage that is simply beyond the ability of people to create, no matter how much time they are given. And, in turn, software testing experts provide the inputs that the Hexawise software could not create in any amount of time.

Hexawise is designed to optimize the ease of cooperation. We spend a great deal of time optimizing the software to make it most useful for people. The design decisions made in creating a software application are very different if the users are meant to thoughtfully interact with the application.

We see Hexawise as an extension of the software tester. We seek to optimize how a person can use Hexawise to create the most value. The measure is how much more effective the testing solutions are, not just how Hexawise performs in isolation from the user.

A great deal of our time has been spent on helping software testing experts use Hexawise most effectively. These efforts often take the form of many small improvements that together create an experience more like cooperation between two parties with different strengths than the more typical user experience of being forced to do whatever the software demands.

 

Related: Gamers Use Foldit to Solve Enzyme Configuration in 3 Weeks That Stumped Scientists for Over a Decade - Ten Reasons to Use Software Testing Checklists and Cheat Sheets

By: John Hunter on Mar 12, 2013

Categories: Testing Strategies

This is the second edition of our carnival that focuses on finding interesting and useful blog posts related to software testing.

 

  • Testing != test execution by Jeff Fry - "We often talk about testing as if it’s only test execution, yet often the most interesting, challenging, skill-intensive aspects of testing are in creating a mental model that helps us understand the problem space, designing tests to quickly and effectively answer key questions, analyzing what specifically the problem is, and communicating it effectively."

  • Test Mercenaries by Mike Bland - "good testing practice goes a long way towards finding and killing a lot of bugs before they can grow more expensive, and possibly multiply. The bugs that manage to pass through a healthy testing regimen are usually only the really interesting ones. Few things are less worthy of our intellectual prowess than debugging a production crash to find a head-slappingly stupid bug that a straightforward unit or integration test could’ve caught."

 


Yellow Chameleon in South Africa by Justin Hunter

 

  • The Oracle Problem and the Teaching of Software Testing by Cem Kaner - "I’ve been emphasizing the oracle problem in my testing courses for about a dozen years. I see this as one of the defining problems of software testing: one of the reasons that skilled testing is a complex cognitive activity rather than a routine activity. Most of the time, I start my courses with a survey of the fundamental challenges of software testing, including an extended discussion of the oracle problem."

  • The new V-Model by Kim Ming Leung - "We design by specifying the measurements before coding but not writing the test before coding. We write code and invite user feedback before writing automated testing for these measurements. Code quality is still guaranteed because first, measurement is the design and second, we code with testing in mind (i.e. write testable code)."

  • Maximizing Software Tester Value by Letting Them Spend More Time Thinking by John Hunter - "Hexawise also lets the software tester easily tune the test coverage based on what is most important. Certain factors can be emphasized, others can be de-emphasised. Knowledge is needed to decide what factors are most important, but after that designing a test plan based on that knowledge shouldn’t take up staff time, good software can take care of that time consuming and difficult task."

  • Improving the State of your Testing Team: Part One – Values by Keith Klain - "Typically, the first thing out of my test teams' mouths when asked “how can we improve the state of testing here” usually relates to something that other people should do. Very few people or teams take an introspective approach to improvement, or state their management values, but the ones that do typically have great success."

  • Interview and Book Review: How Google Tests Software by Craig Smith - "stop treating development and test as separate disciplines. Testing and development go hand in hand. Code a little and test what you built. Then code some more and test some more. Test isn’t a separate practice; it’s part and parcel of the development process itself. Quality is not equal to test. Quality is achieved by putting development and testing into a blender and mixing them until one is indistinguishable from the other."

  • Introduction to test strategy (review of Rikard Edgren's presentation) by Mauri Edo - "Rikard referred to a test strategy as the outcome of the gathered and shared information plus our own thinking processes, encouraging us to find strategies for our context, learning to understand what is important for us, in our situation."

  • Experience report EuroSTAR Testlab 2012 by Martin Jansson - "The testlab is very much alike our every day life as testers. We prepare and plan for many things, but when reality hits us our preparations might be in vain. Therefore it is important in being prepared for the unknown and the unexpected. Working with the testlab is a great way of practising that."

 

Related: Software Testing Carnival #1 - Hexawise TV blog

By: John Hunter on Jan 21, 2013

Categories: Software Testing

Testing sucks by James Whittaker

Bet that got your attention. It's true, but let me qualify it: Running test cases over and over in the hope that bugs will manifest sucks. It’s boring, uncreative work and since half the world thinks that is all testing is about, it is no great wonder few people covet testing positions. Testing is either too tedious and repetitive or it’s downright too hard. Either way, who would want to stay in such a position? ... For the hard parts of the testing process like deciding what to test and determining test completeness, user scenarios and so forth we have another creative and interesting task. Testers who spend time categorizing tests and developing strategy (the interesting part) are more focused on testing and thus spend less time running tests (the boring part). ... So all the managers out there need to ask themselves what they've done lately to make their testers more creative. If you don't have an answer, then testing isn't the only thing that sucks.

One of the great benefits of Hexawise is that it takes care of figuring out the best test plan to provide the coverage that is needed. The software test planner uses their knowledge, experience and creativity to determine what factors and parameters are critical to test. Then Hexawise generates a test plan that provides maximum coverage with the fewest possible tests. Manually creating test plans that address the interactions between factors is not only extremely time consuming and not much fun, it is essentially impossible to do well.

Some things are just so complex, or so effectively handled with well designed software, that people cannot compete. Designing software test plan coverage is one of those areas.

Hexawise also lets the software tester easily tune the test coverage based on what is most important. Certain factors can be emphasized, others can be de-emphasised. Knowledge is needed to decide what factors are most important, but after that designing a test plan based on that knowledge shouldn't take up staff time, good software can take care of that time consuming and difficult task.

Another nice feature included with Hexawise is the automated generation of detailed tester instructions. You can easily provide customized text to ensure the test instructions and the expected outcomes are clear and complete.

Hexawise greatly reduces the number of tests that need to be run by creating powerful test plans that provide more coverage with fewer tests. This, again, frees up tester time to focus on value-added activities.

Allowing testers to focus on adding value is a key aim of ours. We strive to automate what we can and allow testers to apply their knowledge, experience and creativity to helping create great software. Hexawise grew out of the work of George Box, William Hunter (the founder's father) and W. Edwards Deming, who sought to use statistical tools to free people to focus on creative tasks. For example, read Managing Our Way to Economic Success: Two Untapped Resources by William G. Hunter - "Two resources, largely untapped in American organizations, are potential information and employee creativity."

Hexawise includes sample test plans that let you see the benefits above in action. Sign up today to try it out (free trial).

 

Related: Cem Kaner: Testing Checklists = Good / Testing Scripts = Bad? - A Faster Way to Enter Test Inputs – the “Bulk Add” Option - Practical Ways to Respect People

By: John Hunter on Nov 11, 2012

Categories: Context-Driven Testing, Testing Strategies

I have started focusing on the Hexawise blog recently. For a good part of this year we have not had much activity on the blog, but we plan to make it more active going forward. As part of that effort we are starting a Software Testing Blog Carnival to highlight posts on software testing that I find interesting and think others will too. Enjoy.

 

 


Water buffalo, Timbavati Game Preserve, South Africa by Justin Hunter

 

  • Bugs Spread Disease by Elisabeth Hendrickson - "It wasn’t the bugs that killed us directly. Rather, the bugs became a pervasive infection. They hampered us, slowing down our productivity. Eventually we were paralyzed, unable to make even tiny changes safely or quickly."

  • A Sticky Situation by Michael Bolton - on using agile, kanban style, system for managing the software testing workload: "The whiteboard was the center of an active discussion between programmers and project managers about the project status. After the meeting, the whiteboard and the notes on it remained as a kind of information radiator. 'I suddenly realized that if they could do that, I could too,' Paul said. He began by dividing the whiteboard into three columns: To Be Done, Work in Progress, and Done."

  • A Quick Testing Challenge by Alan Page, and a response, Angry Weasel Challenge: figure out why TheApp.exe won't load, and get it to load by solving the problem.

  • 3 Strategies to Maximize Effectiveness of Your Tests by John Hunter - "Use the MECE principle. The MECE principles states you should define your values in a way that makes each of them “Mutually Exclusive” from the others in the list (no subsets should represent any other subsets, no overlaps) and “Collectively Exhaustive” as a group (the set of all subsets, taken together, should fully encompass all items, no gaps)."

  • Musings on Test Design by Alan Page - "The point is, and this is worth repeating so often, is that thinking of test automation and “human” testing as separate activities is one of the biggest sins I see in software testing today. It’s not only ineffective, but I think this sort of thinking is a detriment to the advancement of software testing."

  • Creating a Shared Understanding of Testing Culture on a Social Coding Site by Leif Singer - "those learning testing in this environment write tests to make sure that a program behaves a certain way, and forget that they might also need to test how it should not behave (e.g. on invalid input)."

  • Testing in Scrum with a Waterfall Interaction by Davide Noaro - "Testing each user story separately is, for me, the basis of the Agile process, even in an interaction with a Waterfall-at-end scenario like the one described. Integrating testing into the process itself is something we should do for any software development process, not only in Agile or Scrum."

By: John Hunter on Oct 31, 2012

Categories: Software Testing

Hexawise includes an array of sample plans when a new user account is created. These provide concrete examples of how to categorize items when creating combinatorial test plans (also called pairwise test plans, orthogonal array test plans, etc.). Once you [sign in to your Hexawise account](http://hexawise.com/) (or set up a new, free account), looking at this [sample test plan](https://app.hexawise.com/share/HT3UG7M8) (which is similar to the situation raised in the question that follows) might be useful.

Within your Hexawise account you can copy the sample test plans that you are provided with and then make adjustments to them. This lets you quickly see what effects changes you make have on real test plans. And it also lets you see how easy it is to adjust as changes in priorities are made, or gaps are found in the existing test plan.

 

A Hexawise user sent us the following question.

What is the recommended approach to configuring a parameter with one or more values?

I have two parameters which are related.

If Parameter 1 = Yes, Parameter 2 allows the user to select one or more values out of a list of 25 - most of which are not equivalent.

For Parameter 2, is the recommended approach to handle this to create separate parameters each with a yes/no value? i.e. create one parameter for each non-equivalent value, and one parameter for the equivalent values. Then link each of these as a married pair to Parameter 1.

I'm open to suggestions as to alternatives.

Here's the screen in question. Parameter 1 = "Pilot", Parameter 2 = checkboxes for types of plans.

aviation question inline

Great question.

I would recommend that you use different parameters for each option (e.g., "Scheduled Commercial" as a parameter with "Selected, Not Selected" as your Values associated with it).

Also, I'd recommend following these 3 strategies to maximize the effectiveness of your tests.

First, consider using adjusted weightings. You may find it useful to weight certain values multiple times, e.g., have 4 values such as "Select, Do Not Select, Do Not Select, Do Not Select" to create 3 times as many tests with "Do Not Select" as "Select."
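This duplication trick can be seen in miniature: repeating a value in the list makes it proportionally more frequent when values are drawn. The Python sketch below is purely illustrative of the frequency effect, not of Hexawise's actual weighting mechanism:

```python
import random

random.seed(0)  # fixed seed so the result is reproducible

# Repeating "Do Not Select" three times makes it three times as likely
# to be chosen when values are sampled uniformly from the list.
pilot_values = ["Select", "Do Not Select", "Do Not Select", "Do Not Select"]

draws = [random.choice(pilot_values) for _ in range(10000)]
ratio = draws.count("Do Not Select") / draws.count("Select")
print(round(ratio, 1))  # roughly 3.0
```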

Second, use the MECE principle. The MECE principle states you should define your Values in a way that makes each of them "Mutually Exclusive" from the others in the list (no subsets should represent any other subsets, no overlaps) and "Collectively Exhaustive" as a group (the set of all subsets, taken together, should fully encompass all items, no gaps).

Third, avoid "ands" in your value names. As a general rule it is unwise to define values like "Old and Male" or "Young and Female", etc. A better strategy is to break those ideas into two separate Parameters, like so:

First Parameter = "Age" --- Values for "Age" = Old / Young

Second Parameter = "Gender" --- Values for "Gender" = "Male / Female"

 

Related: Efficient and Effective Test Design - Context-Driven Usability Considerations, and Wireframing - Why isn't Software Testing Performed as Efficiently and Effectively as it could be?

By: John Hunter on Oct 25, 2012

Categories: Efficiency, Hexawise tips, Pairwise Software Testing, Software Testing, Software Testing Efficiency, Testing Strategies

When we met with Hexawise users in India, we noticed that the page load response times for Hexawise users there were sometimes significantly slower than in the United States due to sporadic internet connectivity issues. One area that troubled us in particular was the extra few seconds it could take to enter test inputs into plans.

We are constantly looking for ways to make our software test case generating tool better and came up with a solution.

Hexawise Bulk Add Instructions 1 inline

Hexawise Bulk Add Instructions 2 inline

 

Even with a great internet connection this feature lets you be more productive, so if you have not tried out the bulk add feature, give it a try today.

Hexawise: More Coverage. Fewer Tests

 

Related: Hexawise Tip: Using Value Expansions and Value Pairs to Handle Dependent Values - Maximize Test Coverage Efficiency And Minimize the Number of Tests Needed - 13 Great Questions To Ask Software Testing Tool Vendors

By: John Hunter on Oct 23, 2012

Categories: Hexawise test case generating tool, Hexawise tips, Software Testing

It's common to have a test plan where the possible values of one parameter depend on the value of another parameter. There are many options for how you can represent this scenario in Hexawise, some options that involve using value expansions (when there is equivalence) and other options that do not use value expansions (when there is not equivalence).

Using Value Expansions in Hexawise

The general rule of thumb for value expansions is that they are for setting up equivalence classes. The key there being the equivalence. The expectations of the system should be the same for every value listed in that particular value expansion.

Let's consider a real world example involving a classification parameter with a value that is dependent on the value of a role parameter:

Inputs
Role: Student, Staff
Classification: Freshman, Sophomore, Junior, Senior, Adjunct, Assistant, Professor, Administrator

So if the Role parameter has a value of Student, then the Classification parameter must have a value of Freshman, Sophomore, Junior or Senior, but if the Role parameter has a value of Staff, then the Classification parameter must have a value of Adjunct, Assistant, Professor or Administrator.

Using value expansions in this case might be a good option. You could setup your inputs, value expansions and value pairs this way:

Inputs
Role: Student, Staff
Classification: Student Classification, Staff Classification

Value Expansion
Student Classification: Freshman, Sophomore, Junior, Senior
Staff Classification: Adjunct, Assistant, Professor, Administrator

Value Pairs
When Role=Student Always Classification=Student Classification
When Role=Staff Always Classification=Staff Classification

You would use this approach if there were no important differences in the business logic or expected behavior of the system when the different expansions of the value were used. If Freshman versus Sophomore is an important label for the users to be able to enter and see, but the system under test doesn't change its behavior based on which value is selected, then those expansions of the value are equivalent and don't need to be tested individually for how they might interact with other parts of the system and create bugs. If this equivalence scenario is true, then you will greatly simplify things for yourself and create fewer tests that are just as powerful by using value expansions.

In the scenario that would support using value expansions, the system might have different behavior for a Junior versus an Adjunct Professor, but not for a Freshman versus a Senior. A Freshman and a Senior are always equivalent in the system, so they can be combined in a value expansion.

However, if the expectations are not the same, then a value expansion should not be used. For example, let's suppose this hypothetical system has business logic giving priority class scheduling to Seniors and only last available scheduling priority to Administrators. In this case, using value expansions as described above would probably be a mistake. Why? Because a Sophomore and a Senior aren't treated the same way by the system, yet Hexawise considers all the expansions of the Student Classification value as equivalent. As long as you've got a test that has paired a value expansion of the Student Classification value with the Overbooked value of the Class Status parameter, then Hexawise won't insist on pairing all the other value expansions for the Student Classification value with Class Status = Overbooked in other tests. You could therefore miss a bug that only occurs when a Senior signs up for an overbooked class.
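The equivalence-class idea behind value expansions can be sketched in a few lines of Python. This is an illustration of the concept only, not of how Hexawise works internally; the parameter and value names follow the example above:

```python
import random

random.seed(1)

# Value expansions: each label stands for an equivalence class of values.
value_expansions = {
    "Student Classification": ["Freshman", "Sophomore", "Junior", "Senior"],
    "Staff Classification": ["Adjunct", "Assistant", "Professor", "Administrator"],
}

# Value pairs ("When Role=X Always Classification=Y") as a simple mapping.
value_pairs = {"Student": "Student Classification", "Staff": "Staff Classification"}

def expand(test):
    """Replace an equivalence-class label with any concrete member.

    Because the members are equivalent by assumption, any pick yields an
    equally powerful test; pairing each member separately with other
    parameters would add tests without adding coverage.
    """
    concrete = dict(test)
    label = concrete["Classification"]
    if label in value_expansions:
        concrete["Classification"] = random.choice(value_expansions[label])
    return concrete

abstract_test = {"Role": "Student", "Classification": value_pairs["Student"]}
print(expand(abstract_test))  # e.g. Role=Student with some student classification
```

The caveat in the text is exactly the assumption baked into `expand`: if a Senior behaves differently from a Sophomore, the members are not interchangeable and the expansion should not be used.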

"One to many" or "multi-valued" married pair model

If the system under test does not consider the values to be equivalent, and has requirements and business logic to behave differently, then using value expansions to signal equivalency to Hexawise when there isn't equivalency is probably a mistake.

So what would you do in that case?

We've decided that it might be nice to be able to set up your inputs and value pairs like this:

Inputs
Role: Student, Staff
Classification: Freshman, Sophomore, Junior, Senior, Adjunct, Assistant, Professor, Administrator

Value Pairs
When Role=Student Always Classification=Freshman, Sophomore, Junior, or Senior
When Role=Staff Always Classification=Adjunct, Assistant, Professor, or Administrator

Unfortunately, this kind of a "one to many" or "multi-valued" value pair is something we've only recently realized would be very helpful, and is something we have on the drawing board for Hexawise in the intermediate future, but is not a feature of Hexawise today. In the meantime, you could model it with three parameters:

Inputs
Role: Student, Staff
Student Classification: Freshman, Sophomore, Junior, Senior, N/A
Staff Classification: Adjunct, Assistant, Professor, Administrator, N/A

Value Pairs
When Role=Student Always Staff Classification=N/A
When Role=Staff Always Student Classification=N/A
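This three-parameter workaround can be checked mechanically. The Python sketch below (illustrative only) enumerates all raw combinations and filters them with the two "Always" constraints above:

```python
from itertools import product

# The three-parameter model: each role gets its own classification
# parameter, with "N/A" for the role it doesn't apply to.
parameters = {
    "Role": ["Student", "Staff"],
    "Student Classification": ["Freshman", "Sophomore", "Junior", "Senior", "N/A"],
    "Staff Classification": ["Adjunct", "Assistant", "Professor", "Administrator", "N/A"],
}

def valid(test):
    """Apply the two 'Always' value pairs as filters."""
    if test["Role"] == "Student" and test["Staff Classification"] != "N/A":
        return False
    if test["Role"] == "Staff" and test["Student Classification"] != "N/A":
        return False
    return True

candidates = [dict(zip(parameters, combo)) for combo in product(*parameters.values())]
valid_tests = [t for t in candidates if valid(t)]
print(len(candidates), len(valid_tests))  # 50 raw combinations, 10 valid ones
```

A fuller model would also add "Never" pairs to forbid N/A in the classification parameter that does match the role.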

Another modeling option to consider, if there is only special logic for Administrator and for Seniors, but the rest of the values we've been discussing are equivalent, is to use value expansions for just the equivalent values:

Inputs
Role: Student, Staff
Classification: Underclassman, Senior, Professor, Administrator

Value Expansions
Underclassman: Freshman, Sophomore, Junior
Professor: Adjunct, Assistant, Full

Value Pairs
When Role=Student Never Classification=Professor
When Role=Student Never Classification=Administrator
When Role=Staff Never Classification=Underclassman
When Role=Staff Never Classification=Senior

I hope this helps you understand the role of value expansions in Hexawise: when to use them (in cases of equivalency), when to avoid them, and how value pairs and value expansions can be used together to handle dependent parameter values. Value Expansions are a powerful tool to help you decrease the number of tests you need to execute, so take advantage of them, and if you have any questions, just let us know!

By: John Hunter on Apr 26, 2012

Categories: Combinatorial Software Testing, Combinatorial Testing, Hexawise tips, Pairwise Testing, Software Testing, Testing Strategies

Request for ideas: What are good examples of good interactive online training (including exercises & quizzes) for how to use web apps?

We continue to look for ways to improve the ability of our users to add value to their organizations. We are always in the process of improving the software tools we provide.

The value proposition for using pairwise and combinatorial tools is huge. The biggest roadblocks to adoption we see are people not being aware of the advantages, and not being sure how to proceed once the advantages are seen.

We designed Hexawise to be very easy to use. But even so, the concepts behind combinatorial testing take some people a bit of time to operationalize. To help with this we offer on-site and off-site personal training.

We are looking to create some online training to ease the transition into using Hexawise's combinatorial software testing tools most effectively. We would love to know what good examples of interactive online training people have found useful.

 

Related: Why isn't Software Testing Performed as Efficiently and Effectively as it could be? - In Praise of Data-Driven Management (AKA "Why You Should be Skeptical of HiPPO's")

By: John Hunter on Apr 24, 2012

Categories: Uncategorized


Hexawise test coverage graph showing 83.9% coverage in just 20 tests

 

Among the many benefits Hexawise provides is creating a test plan that maximizes test coverage with each new scenario tested. The graph above shows that after just 20 tests, 83.9% of the test combinations have been covered. Read more about this in our case study of a mortgage application software test plan. Just 48 tests are needed to cover every valid pair (3.7 million possible test combinations exist in this case). If you are lost now, this video may help.

The coverage achieved by the first few tests in the plan will be quite high (and the graph line will point up sharply) then the slope will decrease in the middle of the plan (because each new test will tend to test fewer net new pairs of values for the first time) and then at the end of the plan the line will flatten out quite a lot (because by the end, relatively few pairs of values will be tested together for the first time).

One of the benefits Hexawise provides is making that slope as steep as possible. The steeper the slope, the more efficient your test plan is. If you repeat the same pairs and triples while leaving other pairs and triples untested, you will have to create and run far more tests than if you intelligently create a test plan. With many interactions to test, it is far too complex to manually derive an intelligent test plan. A combinatorial testing tool like Hexawise, one that maximizes test plan efficiency, is needed.

For any set of test inputs, there is a finite number of pairs of values that could be tested together (it can be quite a large number). The coverage chart answers, after each test: what percentage of the total number of pairs (or triples, etc.) that could be tested together have been tested together so far?
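That curve is straightforward to compute for a small example. The Python sketch below uses made-up parameters and a hand-written six-test suite (not Hexawise output) to show the cumulative percentage of pairs covered after each test:

```python
from itertools import combinations, product

# Illustrative parameters (invented for this sketch).
parameters = {
    "Browser": ["Chrome", "Firefox"],
    "OS": ["Windows", "macOS"],
    "Account": ["Free", "Paid"],
}

def pairs_in_test(test):
    """The 2-way value combinations covered by one test case."""
    return {frozenset(c) for c in combinations(test.items(), 2)}

# Every pair of values (from different parameters) that could be tested.
total_pairs = set()
for (p1, v1s), (p2, v2s) in combinations(parameters.items(), 2):
    for v1, v2 in product(v1s, v2s):
        total_pairs.add(frozenset([(p1, v1), (p2, v2)]))

suite = [
    {"Browser": "Chrome", "OS": "Windows", "Account": "Free"},
    {"Browser": "Firefox", "OS": "macOS", "Account": "Paid"},
    {"Browser": "Chrome", "OS": "macOS", "Account": "Free"},
    {"Browser": "Firefox", "OS": "Windows", "Account": "Paid"},
    {"Browser": "Chrome", "OS": "Windows", "Account": "Paid"},
    {"Browser": "Firefox", "OS": "macOS", "Account": "Free"},
]

covered = set()
curve = []
for test in suite:
    covered |= pairs_in_test(test)
    curve.append(round(100 * len(covered) / len(total_pairs), 1))
print(curve)  # [25.0, 50.0, 66.7, 83.3, 91.7, 100.0]
```

Notice the shape: steep at first, then flattening as later tests find fewer new pairs, exactly the diminishing-returns curve described above.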

The Hexawise algorithms help testers find as many defects as possible in as few tests as possible. In each step of each test case, the algorithm chooses a test condition that will maximize the number of pairs that can be covered for the first time in that test case (or the maximum number of triplets or quadruplets, etc., based on the thoroughness setting defined by the user). Allpairs (AKA pairwise) is a well known and easy to understand test design strategy. Hexawise lets users create pairwise sets of tests that cover every pair, and it also allows test designers to generate far more thorough sets of tests (3-way to 6-way coverage). This allows users to "turn up the coverage dial" and generate tests that cover every single possible triplet of test inputs together at least once (or every 4-way, 5-way or 6-way combination).
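Turning up the coverage dial means generalizing pairs to t-way tuples. A small sketch (illustrative only, with invented parameter names) counting how many 2-way and 3-way combinations a single four-parameter test case covers:

```python
from itertools import combinations

def t_way_tuples(test, t):
    """All t-way value combinations covered by one test case."""
    return {frozenset(c) for c in combinations(test.items(), t)}

# One test case over four hypothetical parameters.
test = {"Browser": "Chrome", "OS": "Windows", "Account": "Free", "Locale": "en"}
print(len(t_way_tuples(test, 2)))  # 6 pairs   (4 choose 2)
print(len(t_way_tuples(test, 3)))  # 4 triples (4 choose 3)
```

Because the number of tuples to cover grows quickly with t, higher-order coverage needs more tests, which is why the thoroughness setting is a dial rather than always set to maximum.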

Note that the coverage ratio Hexawise shows is based on the factors entered as items to be tested; it is not a code coverage percentage. Hexawise sorts the test plan to front-load the coverage of the value pairs, not the coverage of code paths. Coverage of code paths ultimately depends on how good a job the test designer did at extracting the relevant parameters and values of the system under test. You would expect some loose correlation between coverage of identified pairs and coverage of code paths in most typical systems.

If you want to learn more about these concepts, I recommend Scott Sehlhorst's articles on pairwise and combinatorial test design. They are some of the clearest introductory articles about pairwise and combinatorial testing that I have seen. They also contain some interesting data points on the correlation between 2-way / allpairs / pairwise / n-way coverage (in Hexawise) and the white box metrics of branch coverage, block coverage and code coverage (not measurable by Hexawise).

In Software testing series: Pairwise testing, for example, Scott includes these data points:

 

  • We measured the coverage of combinatorial design test sets for 10 Unix commands: basename, cb, comm, crypt, sleep, sort, touch, tty, uniq, and wc... The pairwise tests gave over 90 percent block coverage.

 

  • Our initial trial of this was on a subset of Nortel’s internal e-mail system where we were able to cover 97% of branches with less than 100 valid and invalid testcases, as opposed to 27 trillion exhaustive testcases.

 

  • A set of 29 pair-wise... tests gave 90% block coverage for the UNIX sort command. We also compared pair-wise testing with random input testing and found that pair-wise testing gave better coverage.

 

Related: Why isn't Software Testing Performed as Efficiently and Effectively as it could be? - Video Highlight Reel of Hexawise – a pairwise testing tool and combinatorial testing tool - Combinatorial Testing, The Quadrant of Massive Efficiency Gains

Specific guidance on how to view the percentage of coverage graph for the test plan in Hexawise:

 

When working on your test plan in Hexawise, to make the checklist visible, click on the two downward arrows shown in the image:

How-To Progress Checklists-2 inline

Then you'll want to open up the "Advanced" list. So you might need to click here:

Advanced How-To Progress Checklist inline

Then the detailed explanation will begin when you click on "Analyze Tests"

Decreasing Marginal Returns inline

 

This post is adapted (and some new content added) from comments posted by Justin Hunter and Sean Johnson.

By: John Hunter on Feb 3, 2012

Categories: Combinatorial Software Testing, Combinatorial Testing, Efficiency, Multi-variate Testing, Pairwise Software Testing, Pairwise Testing, Scripted Software Testing, Software Testing, Software Testing Efficiency