Performance testing examines how the software performs (normally "how fast") in various situations.

Performance testing does not produce just one value. You normally performance test various aspects of the software under differing conditions to learn about the overall performance characteristics. It may well be that certain changes will improve the performance results under some conditions (say, a powerful laptop with a fiber connection) and greatly degrade the performance for other use cases. And often the software can be coded to attempt to provide different solutions under different conditions.

All this makes performance testing complex. But trying to over-simplify performance testing removes much of its value.

Another form of performance testing is done on sub-components of a system to determine which solutions may be best. These are often server-side issues. They likely don't depend on individual user conditions but can be affected by other factors: under normal usage option 1 may provide great performance, but under heavier load option 1 slows down a great deal and option 2 is better.

Focusing on these tests of sub-components runs the risk of sub-optimization, where optimizing individual sub-components results in less than optimal overall performance. Performance testing sub-components is important, but what matters most is testing the performance of the overall system. Performance testing should always place a priority on overall system performance and not fall into the trap of creating a system with components that perform well individually but do not work well together when combined.

Load testing, stress testing and configuration testing are all part of performance testing.
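To make that kind of comparison concrete, here is a minimal sketch in Python of the load comparison described above. It is not Hexawise functionality; option_1 and option_2 are placeholder functions standing in for the sub-components being compared, and the timings are purely illustrative.

```python
# Minimal load-comparison sketch. option_1 and option_2 are placeholders for
# the sub-components under test; replace them with real calls to your system.
import time
from concurrent.futures import ThreadPoolExecutor
from statistics import mean

def option_1():
    time.sleep(0.01)  # stand-in for a call that is fast when uncontended

def option_2():
    time.sleep(0.02)  # stand-in for a slower but steadier alternative

def timed_call(fn):
    start = time.perf_counter()
    fn()
    return time.perf_counter() - start

def average_latency(fn, concurrent_users, calls_per_user=20):
    """Run fn from many threads at once and report the mean call latency."""
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        futures = [pool.submit(timed_call, fn)
                   for _ in range(concurrent_users * calls_per_user)]
        return mean(f.result() for f in futures)

for users in (1, 10, 100):
    print(f"{users:>3} users | option 1: {average_latency(option_1, users):.4f}s"
          f" | option 2: {average_latency(option_2, users):.4f}s")
```

Run against real sub-components, a table like this makes it obvious when one option degrades under load while another holds steady.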

Continue reading about performance testing in the Hexawise software testing glossary.

By: John Hunter on Sep 27, 2016

Categories: User Experience, Testing Strategies, Software Testing

Software testing concepts help us compartmentalize the complexity that we face in testing software. Breaking the testing domain into various areas (such as usability testing, performance testing, functional testing, etc.) helps us organize and focus our efforts.

But those concepts are constructs that often have fuzzy boundaries. What matters isn't where we should place certain software testing efforts. What matters is helping create software that users find worthwhile and hopefully enjoyable.

One of the frustrations I have faced in using internet-based software in the last few years is that it often seems to be tested without considering that some users will not have fiber connections (and might have high-latency connections). I am not certain latency (combined maybe with lower bandwidth) is the issue, but I have often found websites either actually physically unusable or mentally unusable (way too frustrating to use).

It might be that the user experience I face (on the poorly performing sites) is as bad for all users, but my guess is that it is a decent user experience on the fiber connections the managers have when they decide this is an OK solution. It is a usability issue, but it is also a performance issue in my opinion.

It is certainly possible to test performance results on powerful laptops with great internet connections and get good performance results for web applications that will provide bad performance results on smart phones via wifi or less than ideal cell connections. This failure to understand the real user conditions is a significant problem and an area of testing that should be improved.

I consider this an interaction between performance testing and user-experience testing (I use "user-experience" to distinguish it from "usability testing", since I can test aspects of the user experience without users testing the software). The page may load in under 1 second on a laptop with a fiber connection, but that isn't the only measure of performance. What about your users that are connecting via a wifi connection with high latency? What if the performance in that case is that it takes 8 seconds to load and your various interactive features either barely work or won't work at all given the high latency?
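One way to see this gap during testing is to measure page load time under emulated network conditions. The following is a rough sketch, assuming Selenium with Chrome (which exposes Chrome's network-conditions emulation); the URL and the latency/bandwidth figures are placeholders, not measurements from any real site.

```python
# Rough sketch: compare load time on a fast connection vs. an emulated
# high-latency, low-bandwidth connection. Requires Selenium and Chrome.
from selenium import webdriver

def load_time_ms(driver, url):
    """Navigate and read the browser's own navigation timing, in milliseconds."""
    driver.get(url)
    return driver.execute_script(
        "const t = performance.timing;"
        "return t.loadEventEnd - t.navigationStart;")

driver = webdriver.Chrome()
url = "https://example.com"  # placeholder URL

baseline = load_time_ms(driver, url)

# Emulate a poor wifi/cell connection: 400 ms extra latency, ~750 kbit/s each way.
driver.set_network_conditions(
    offline=False,
    latency=400,                          # additional latency in milliseconds
    download_throughput=750 * 1024 // 8,  # bytes per second
    upload_throughput=750 * 1024 // 8)

throttled = load_time_ms(driver, url)
driver.quit()

print(f"fast connection: {baseline} ms, throttled: {throttled} ms")
```

If the throttled number is several times the baseline, the interactive features deserve the same scrutiny under those conditions.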

In some cases ignoring the performance for some users may be OK. But if you care about a system that delivers fast load times, you need to consider performance not just for a subset of users but for users overall. The extent to which you prioritize various use cases will depend on your specific situation.

I have a large bias for keeping the basic experience very good for all users. If I add fancy features that are useful, I do not like to accept meaningful degradation to any user's experience - graceful degradation is very important to me. That is less important to many of the sites that I use, unfortunately. What priority you place on it is a decision that impacts your software development and software testing process.

Hexawise attempts to add features that are useful while at the same time paying close attention to making sure we don't make things worse for users who don't care about the new feature. Making sure the interface remains clear and easy to use is very important to us. It is also a challenge, when you have powerful and fairly complex software, to keep the usability high. It is very easy to slip and degrade the user's experience. Sean Johnson does a great job making sure we avoid doing that.

Maintaining the responsiveness of Hexawise is a huge effort on our part given the heavy computation required in generating tests in large test case scenarios.

You also have to realize that you cannot be all things to all people. Using Hexawise on a smart phone is just not going to be a great experience. Hexawise is just not suited to that use case at all, and therefore we wouldn't test such a use case.

For important performance characteristics it may well be that you should create a separate Hexawise test plan to test the performance under several different conditions (relating to latency, bandwidth and perhaps phone operating system). It could be done within a single test plan, but it seems to me more likely that separate test plans would be more effective most of the time. It may well be that you have the primary test plan cover many functional aspects and a much smaller test plan just to check that several things work fine in a high-latency, smart-phone use case.

Within that plan you may well want to test various values for certain parameters, for example:

operating system: iOS, Android 7, Android 6, Android 5

latency: ...

Of course, what should be tested depends on the software being tested. If none of the items above matter in your case they shouldn't be used. If you are concerned about a large user base you may well be concerned about performance on various Android versions since the upgrade cycle to new versions is so slow (while most iOS users are on the latest version fairly quickly).

If latency has a big impact on performance then including a parameter on latency would be worthwhile and testing various parameter values for it could be sensible (maybe high, medium and low). And the same with testing various levels of bandwidth (again, depending on your situation).
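As an illustration of how such a plan might combine these parameters, here is a small sketch using the open-source allpairspy library (shown only to illustrate pairwise combinations in code; it is not Hexawise, and the parameter values are just the examples from the discussion above).

```python
# Pairwise combinations of example parameters (pip install allpairspy).
from allpairspy import AllPairs

parameters = [
    ["iOS", "Android 7", "Android 6", "Android 5"],  # operating system
    ["high", "medium", "low"],                       # latency
    ["high", "medium", "low"],                       # bandwidth
]

# Exhaustive testing would need 4 * 3 * 3 = 36 combinations; a pairwise set
# covers every pair of values in far fewer tests.
for i, (os_version, latency, bandwidth) in enumerate(AllPairs(parameters), 1):
    print(f"test {i}: {os_version}, latency={latency}, bandwidth={bandwidth}")
```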

My view is always very user focused so the way I naturally think is relating pretty much everything I do to how it impacts the end user's experience.

Related: 10 Questions to Ask When Designing Software Tests - Don’t Do What Your Users Say - Software Testers Are Test Pilots

By: John Hunter on Jul 26, 2016

Categories: User Experience, Testing Strategies, Software Testing

Usability testing is the practice of having actual users try the software. Outcomes include data on the tasks given to the users to complete (successful completion, time to complete, etc.), comments the users make, and expert evaluation of their use of the software (noticing, for example, that none of the users follow the intended path to complete a task, or that many users looked for a different way to complete a task and, failing to find it, eventually found a way to succeed).

Usability testing involves systematic evaluation of real people using the software. This can be done in a testing lab where an expert can watch the user, but this is expensive. Remote monitoring (watching the screen of the user, communicating by voice, and viewing a webcam showing the user) is also commonly used.

In these settings the user will be given specific tasks to complete and the testing expert will watch what the user does. The expert will also ask the user questions about what they found difficult and confusing (in addition to what they liked) about the software.

The concept of usability testing is to get feedback from real users. If you can't test with the actual users of a system, it is important to consider whether your usability testers fairly accurately represent that population. If the users of the system are fairly unsophisticated and your usability testers are very computer savvy, the testers may well not provide good feedback (as their use of the software may be very different from that of the actual users).

"Usability testing" does not encompass experts evaluating the software based on known usability best practices and common problems. This form of expert knowledge of wise usability practices is important but it is not considered part of "usability testing."

Find more exploration of software testing terms in the Hexawise software testing glossary.

Related: Usability Testing Demystified - Why You Only Need to Test with 5 Users (this is not a complete answer, but it does provide insight into the value of quick testing to run during the development of the software) - Streamlining Usability Testing by Avoiding the Lab - Quick and Dirty Remote User Testing - 4 ways to combat usability testing avoidance

By: John Hunter on Jul 4, 2016

Categories: User Experience, User Interface, Testing Strategies, Software Testing

Few things make us happier at Hexawise than hearing reports from clients about how much Hexawise is helping them improve their software testing efforts.

 


A recent conversation with our newest international banking client is a case in point. Carrie, a senior testing manager who is leading adoption efforts at the bank, reported the following impressive benefits:

 

Project 1

Without Hexawise: estimated testing effort for the project = 8.5 man months

With Hexawise: estimated testing effort for the project = 1.5 man months

Savings = > 80%


Project 2

Without Hexawise (i.e., the immediately prior release): defects found during testing = 67%; defects found in UAT = 33%

With Hexawise (most recent release): defects found during testing = 98.5%; defects found in UAT = 1.5%

Defect Removal Effectiveness Improvement = Stunning


A few interesting things are worth pointing out as context behind these results.

 

First, let's be clear: these are significantly higher than normal Hexawise-generated benefits. We're not suggesting that every project will see benefits this large. They won't. "Your mileage may vary." The 80% reduction in testing time is not unheard of but is definitely larger than most teams tend to see. Similarly, the massive improvement to defect removal efficiency is larger than typically occurs. These case studies using Hexawise's pairwise testing methods and/or combinatorial testing methods and/or Orthogonal Array (OATS) testing methods show more typical benefits.

 

Second, the test designers involved in these two projects are significantly more talented and skilled than "average" software testers working at banks. The test designers' skill has a lot to do with the unusually large successes they achieved in these projects. We know how talented they are because we worked closely with them during a 4-day onsite instructor-led test design training program we led. And we have a good sense of the "average" test design skills possessed by software testers because we regularly conduct software test design training sessions at hundreds of companies around the world. During the hands-on interactive test design exercises in our face-to-face training sessions with the bank's software testers, several of their test designers demonstrated exceptional analytical thinking and problem solving skills.

 

Third, as we try to do with all of our new clients, our test design experts have actively kept in touch with the bank's test designers since the initial onsite training took place. We have been answering their test design questions as they have arisen, offering to review their draft tests, and jumping on ad hoc screen sharing sessions to explain and demonstrate how to use Hexawise test design features. This helps our clients maximize the value they obtain from using Hexawise and helps us stay closely attuned to real-world testing challenges so that we can continuously improve our tool and fine-tune our software test design training messages.

 

If you're using Hexawise and have experiences to share with us, whether good or bad, we would love to hear about them. We're here to help. As corny as it sounds, helping clients succeed is a huge part of what motivates us at Hexawise. Please contact us at: support@hexawise.com

By: Justin Hunter on Sep 10, 2015

Categories: User Experience, Pairwise Testing, Combinatorial Testing, Business Case

As the CEO of a small but quickly-growing SaaS (Software as a Service) firm that often doubles software tester productivity, I can attest that Fortune 500 firms I'm talking to are way less "anti-SaaS" than they were just 12 and 24 months ago. Business is booming. More than 100 Fortune 500 firms currently have testers using our tool to design their software tests.

It doesn't take Nostradamus to predict that news stories talking about how "SaaS solutions are innovative and beating out 'traditional' software" will become more and more rare. Increasingly, SaaS solutions, with data stored remotely "in the cloud" by hosting providers like Amazon Web Services, are receiving mainstream acceptance.

The situation we're in now reminds me of when I helped launch Asia's first internet-based stock brokerage firm in 1996/97. It was "big news!" that generated coverage from CNN, Time, a front page article on the South China Morning Post business section, etc. Every reporter we talked to focused a lot of their attention on the potentially grave security risks of this new way of trading stocks. Today, trillions of dollars worth of online trade executions later, a Hong Kong brokerage firm offering its customers the ability to trade stocks online wouldn't be worthy of a mention in a neighborhood newspaper. It's just accepted as the way things are done.

We're quickly heading that way with SaaS solutions too.

 

Related: Looking at the Empirical Evidence for Using Pairwise and Combinatorial Software Testing - A Fun Presentation on a Powerful Software Test Design Approach - What Software Testers Can Learn from the Game of 20 Questions

By: Justin Hunter on May 28, 2013

Categories: User Experience

Context is Important in Usability Testing

As Adam Goucher recently pointed out, it is important to keep in mind WHY you are testing. Testing teams working on similar projects will have different priorities that will impact how much time they test, what they look for, and what test design methods they use. (Kaner and Bach provide some great specific examples that underscore this point here). In short, the context in which you're testing should impact how you test.

The same maxim holds true when you're conducting usability testing. Considering the context is important as well: both the context of the users of the application and the context of the application itself vis-a-vis other similar products. Important considerations can include:

  1. What problem does the application solve for the user?

  2. What does the application you're testing aspire to be as compared to competing applications?

  3. Who is the target audience of the application? What differentiating features does the application have?

  4. What is the "personality" of the application?

  5. What benefits and values do specific segments of target users prioritize?

These questions are all important when you analyze a web site with an eye on usability. I would recommend combining a "checklist" approach (e.g., Jakob Nielsen's well-known Ten Usability Heuristics) with an approach that takes context-specific considerations (such as the 5 questions listed above) into account.

 

The Context of a User Group I'm Familiar with: the Hexawise Team

As of the end of June, 2010, our website leaves a great deal to be desired, to say the least. Hexawise.com consists mainly of a single landing page with anemic content that we threw together a year ago, thinking that we'd "turn it into a real site" when we got around to it. We then proceeded to focus all of our development efforts on the Hexawise tool itself as opposed to our website (which we've let fester). Apologies if you've visited our site and wanted to know more details about what our test design tool does and how it complements test management tools. To date, we haven't provided as much information as we should have.

We've kicked off a project now to right this wrong. To do so, we're drafting up new content and organizing our thoughts about how to present it to visitors. Our needs are relatively simple. We want to create a set of simple wireframes that will allow us to quickly experiment with a few design options and gather feedback from friends and target users. For us, ease of use is key: being able to use the tool quickly, without needing to read through a user guide, is critical. We also value a tool's ability to make it easy to collaborate with one another.

With that as background, what follows are some quick comments on a couple of wireframing tools I've recently explored in the context of our preferences and values. Wireframing is the practice of creating a skeletal visual interface for software. It is used for prototyping and soliciting early user/client feedback, and it comes before the more time-consuming phases of design. Two popular wireframe creation tools are Balsamiq and HotGloo. Both are Flash applications. Balsamiq is a desktop app; HotGloo is a SaaS tool used over the internet.

 

Balsamiq and HotGloo

The first thing that strikes me about Balsamiq is the rich library of UX elements, neatly organized and accessible by category or through a quick-add search box. Everything works as it should: the drag, drop, click and type interface follows the principle of least astonishment. Fortunately, ease of use doesn't preclude speed: modifying the content and structure of UX elements is text-based rather than form-based, blending a touch of UNIX command line efficiency into an otherwise graphical interface. UNIX and IRC users will feel right at home.

HotGloo is a very promising wireframing tool. They have clearly taken a page from the 37 Signals product development playbook. They have made a tool with a smaller set of features that is very intuitive to use. They've avoided the potential risk of "feature bloat" by having fewer bells and whistles. Where I think they add value: as a SaaS tool, HotGloo is exceptionally good at allowing multiple members on a team to collaborate on iterative designs. Whereas Balsamiq uses traditional files, HotGloo is accessible from anywhere. HotGloo enables multiple users to chat and view mockups in real time. Only one user can make changes at a time. Feedback is very easy to give and I found their support to be exceptionally responsive.

HotGloo is easy to learn the first time, but my designer felt frustrated by how much time he had to spend tweaking little things (like changing the names and links of a tabbed window element). The element controller pop-ups got in the way of work and he found himself frequently dragging them away. HotGloo also takes a more minimalist approach to UX elements than Balsamiq with respect to features. Whether this is a strength or a weakness is a matter of personal preference. The 37 Signals camp (which I am highly sympathetic to) argues that it is often preferable to have fewer, easier-to-use features since the vast majority of users will not want or need too many bells and whistles. Our designer felt that Balsamiq's feature set fit his needs better. As a "meddlesome manager" who wants to provide regular input into the content for version 2.0 of our site, feature-richness is less important to me than the collaborative ability.

 

Usability Considerations I Shared with the HotGloo Team

(screenshots: two of the UX suggestions shared with the HotGloo team)

 

Balsamiq

(screenshot of Balsamiq)

 

Balsamiq has a couple of usability features that make it fun to use. A case in point is how you insert an image. Balsamiq gives you three choices, the third of which is a really nice touch: you can 1. upload a file, 2. use a photo on the web, or 3. perform a Flickr search right there and then, without ever leaving the comfort of the Balsamiq window. In my book, that kind of thoughtful workflow integration is what makes a good product great.

 

"Postscript" - Good Karma and an Open Invitation


 

As a post-script of sorts, after sending 5 UX suggestions (including the 2 above) to the HotGloo team last week, I received 5 outstanding UX suggestions for our Hexawise tool this week - out of the blue - from Janesh Kodikara, a new Hexawise user based in Sri Lanka. In addition, the HotGloo team provided 5 excellent UX suggestions for improving our tool as well. Taken together, they are some of the best suggestions we've had to date. If anyone reading this would be willing to share your usability suggestions with us, I can assure you, we're extremely interested in hearing your ideas.

By: Justin Hunter on Jul 5, 2010

Categories: Context-Driven Testing, Pairwise Software Testing, Uncategorized, User Experience, User Interface