This interview with Alan Page is part of our series of “Testing Smarter with…” interviews. Our goal with these interviews is to highlight insights and experiences as told by many of the software testing field’s leading thinkers.

Alan Page has been a software tester (among other roles) since 1993 and is currently the Director of Quality for Services at Unity. Alan spent over twenty years at Microsoft working on a variety of operating systems and applications in nearly every Microsoft division.


Alan Page

Personal Background

Hexawise: Looking back on over 20 years in software testing at Microsoft can you describe a testing experience you are especially proud of?

Alan: There are a lot of great experiences to choose from - but I'm probably most proud of my experiences on the Xbox One team. My role was as much about building a community of testers across the Xbox ecosystem (console, services, game development) as it was about testing and test strategy for the console. We had a great team of testers, including a handful of us with a lot of testing experience under our belts. We worked really well together, leveraged each other's strengths, and delivered a product that has had nearly zero quality issues since the day of release.

Hexawise: What did you enjoy about working in such a large software company, Microsoft, for so long?

Alan: The only way I survived at Microsoft so long was that I could change jobs within the company and get new experiences and challenges whenever I felt I needed them. I love to take on new challenges and find things that are hard to do - and I always had that opportunity.


Hexawise: What new challenges are you looking forward to tackling in your new role at Unity?

Alan: I'm looking forward to any and all challenges I can find. Specifically, I want to build a services testing community at Unity. Building community is something I feel really strongly about, and from what I've seen so far, I think there are a lot of opportunities for Unity testers to learn a lot from each other while we play our part in the coming growth of Unity services.

Views on Software Testing

Hexawise: What thoughts do you have on involving testers in A/B and multivariate testing? That stretches the bounds of how many people categorize testers, but it could be a good use of the skills and knowledge some testers possess.

Alan: My approach to experimentation (A/B testing) is that there are three roles needed. Someone needs to design the experiment. A product owner / program manager often plays this role and will attempt to figure out the variation (or variations) for the experiment as well as how to measure the business or customer value of both the control (original implementation) and treatment / experiment.

The second role is the implementer - this is typically a developer or designer depending on the nature of the experiment. Implementing an experiment is really no different than implementing any other product functionality.

The final role is the analyst role - someone needs to look at the data from the experiment, as well as related data, and attempt to prove whether the experiment is a success (i.e. the new idea / treatment is an improvement) or not. I've seen a lot of testers be successful in this third role, and statistics, and data science in general, are great skills for testers to learn as they expand their skill set.
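Alan doesn't prescribe a particular technique here, but to make the analyst role concrete, here is a minimal sketch of one common check an analyst might run on experiment results: a two-proportion z-test comparing control and treatment conversion rates. The function and the data values below are made up for illustration.

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Compare control (A) and treatment (B) conversion rates using the
    usual pooled-proportion normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled rate under the null hypothesis that A and B convert equally.
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: 10,000 visitors per arm, 4.0% vs 4.6% conversion.
z, p = two_proportion_ztest(400, 10_000, 460, 10_000)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```

A real analysis would also look at related data (guardrail metrics, user segments) as Alan notes, but this kind of statistical reasoning is the core of the role.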

image of book cover for How We Test Software at Microsoft

Hexawise: During your 20 years at Microsoft you were involved in hiring many software testers. What do you look for when choosing software testers? What suggestions do you have for those looking to advance in their software testing career?

Alan: The biggest thing I look for in testers is a passion and ability to learn. I've interviewed hundreds of testers, including many who came from top universities with advanced degrees who just weren't excited about learning. For them, maybe learning was a means to an end, but not something they were passionate about.

The testers who really impress me are those who love to learn - not just about testing, but about many different things. Critical thinking and problem solving are also quite important.

As far as suggestions go, keep building your tool box. As long as you're willing to try new things, you'll always be able to find challenging, fun work. As soon as you think you know it all, you will be stuck in your career.


Hexawise: Do you have a favorite example of a combinatorial bug and how it illuminated a challenge with software testing?

Alan: I was only indirectly involved in discovering this one, but the ridiculously complex font-picker dialog from the Office apps was a mess to test, though it was tested extensively (at least according to the person in charge of testing it). A colleague of mine showed them the all-pairs technique, and they used it to massively decrease their test suite - and found a few bugs that had existed for years.

Hexawise: It seems to me that testing games would have significant challenges not found in testing fairly straightforward business applications. Could you share some strategies for coping with those challenges?

Alan: Combinatorial testing is actually pretty useful in game testing. For example, consider a role-playing game with six races, ten character classes, four different factions, plus a choice for gender. That's 480 unique combinations to test! Fortunately, this has been proven to be an area where isolating pairs (or triples) of variations makes testing possible, while still finding critical bugs.
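As a concrete illustration of the arithmetic in Alan's example, here is a short sketch. The parameter values are invented, and the pairwise suite is generated with the open-source allpairspy package (one of several pairwise tools; Hexawise performs this kind of generation as a service):

```python
from itertools import product

from allpairspy import AllPairs  # third-party: pip install allpairspy

# Hypothetical character-creation parameters matching the example above.
races = ["Human", "Elf", "Dwarf", "Orc", "Gnome", "Troll"]
classes = ["Warrior", "Mage", "Rogue", "Cleric", "Ranger",
           "Paladin", "Druid", "Monk", "Bard", "Warlock"]
factions = ["North", "South", "East", "West"]
genders = ["Female", "Male"]

# Exhaustive testing: 6 * 10 * 4 * 2 = 480 combinations.
exhaustive = list(product(races, classes, factions, genders))
print(len(exhaustive))  # 480

# Pairwise testing: every pair of values still appears together in some
# test, but the suite shrinks to roughly 60 tests (10 * 6 is the floor).
pairwise_suite = list(AllPairs([races, classes, factions, genders]))
print(len(pairwise_suite))
for row in pairwise_suite[:3]:
    print(row)
```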

Beyond that, testing games requires a lot of human eyeballs and critical thinking to ensure gameplay makes sense, objects are in the right places, etc. I've never seen a case where automating gameplay, for example, has been successful. I have, however, seen some really innovative tools written by testers to help make game testing much easier, and much more effective.

Hexawise: That sounds fascinating, could you describe one or more of those tools?

Alan: The games test organization at Microsoft wrote a few pretty remarkable tools. One linked the bug tracking system to a "teleportation" system in the game under test, including a protocol to communicate between a Windows PC and an Xbox console.

This enabled a cool two-way system, where a tester could log a bug directly from the game, and the world coordinates of where the bug occurred were stored automatically in the bug report. Then, during triage or debugging, a developer / designer could click on a link in the bug report, and automatically transport to the exact place on the map where the issue was occurring.

Hexawise: What type of efforts to automate gameplay came close to providing useful feedback? What makes automating gameplay for testing purposes ineffective?

Alan: I would never automate gameplay. There are too many human elements needed for a game to be successful - it has to be fun to play - ideally right at that point between too challenging and too easy. If the game has a story line, a tester needs to experience that story line, evaluate it, and use it as a reference when evaluating gameplay.

Tools to evaluate CPU load or frame rate during gameplay are usually a much better investment than trying to simulate gameplay via automation.

That said, there are some "what if" scenarios in games that may provide interesting bugs. For example, simulating a user action (e.g. adding and removing an item) thousands of times may reveal memory leaks in the application. It really comes down to test design and being smart about choosing what makes sense to automate (and what doesn't make sense).
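As a hedged sketch of that kind of automation, the loop below repeats a stand-in add-and-remove action thousands of times while sampling Python's tracemalloc counters; the action itself is hypothetical, but the pattern (repeat, sample, watch for growth) is the point:

```python
import tracemalloc

def add_and_remove_item(inventory):
    # Hypothetical stand-in for the real user action under test.
    inventory.append({"id": len(inventory), "payload": "x" * 1024})
    inventory.pop()

inventory = []
tracemalloc.start()
baseline, _ = tracemalloc.get_traced_memory()

# Repeat the action thousands of times, sampling memory as we go.
for i in range(10_000):
    add_and_remove_item(inventory)
    if i % 1_000 == 0:
        current, peak = tracemalloc.get_traced_memory()
        print(f"iteration {i}: delta={current - baseline} bytes, peak={peak}")

tracemalloc.stop()
# A 'delta' figure that climbs steadily across iterations suggests a leak.
```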

Hexawise: What is one thing you believe about software testing that many smart testers disagree with?

Alan: I don't believe there's any value from distinguishing "checks" from tests. I know a lot of people really like the distinction, but I don't see the value. Of course, I recognize that some testing is purely binary validation, but this is a(nother) example of where choosing more exact words in order to discern meaning leads to more confusion and weird looks than it benefits the craft of testing.

Industry Observations / Industry Trends

Hexawise: Do you believe testing is becoming (or will become) more integrated with the software development process? And how do you see the scope of testing extending all the way from understanding customer needs to reviewing the actual customer experience in order to drive the testing efforts at an organization?

Alan: I believe testing has become more integrated into software development. In fact, I believe that testing must be integrated into software development. Long ship cycles are over for most organizations, and test-last approaches, or approaches that throw code to test to find-all-the-bugs are horribly inefficient. The role of a tester is to provide testing expertise to the rest of the team and accelerate the entire team's ability to ship high-quality software. Full integration is mandatory for this to occur.

It's also important for testers to use data from customer usage to help them develop new tests and prioritize existing bugs. We've all had conversations before about whether or not the really cool bug we found was "anything a customer would ever do" - but with sufficient diagnostic data in our applications or services, we can use data to prove exactly how many customers could (or would) hit the issue. We can also use data to discover unexpected usage patterns from customers, and use that knowledge to explore new tests and create new test ideas.


Hexawise: In your blog post, Watch out for the HiPPO, you stated: "What I’ve discovered is that no matter how strongly someone feels they “know what’s best for the customer”, without data, they’re almost always wrong." What advice do you have for testers for learning what customers actually care about? To me, this points out one of the challenges software testers (and everyone else actually) faces, which is the extent to which they are dependent on the management system they work within. Clayton Christensen's Theory of Jobs to Be Done is very relevant to this topic in my opinion. But many software testers would have difficulty achieving this level of customer understanding without a management system already very consistent with this idea.

Alan: Honestly, I don't know how a product can be successful without analysis and understanding of how customers use the product. The idea of a tester "pretending to be the customer" based purely on their tester-intuition is an incomplete solution to the software quality problem. If you hear, "No customer would ever do that", you can either argue based on your intuition, or go look at the data and prove yourself right (or wrong).

There's an important shift in product development that many companies have recognized. They've moved from "let's make something we think is awesome and that you'll love" - i.e. we-make-it-you-take-it to "we want to understand what you need so we can make you happy". This shift cannot happen without understanding (and wallowing in) customer data. Testers may not (and probably won't) create this system, but any product team interested in remaining in business should have a system for collecting and analyzing how customers use their product.

Hexawise: I agree, this shift is extremely important. Do you have an example of how using such a deep understanding of users influenced testing, or how it was used to shift the software development focus? Since software testing is meant to help make sure the company provides software that users want, it certainly seems important to make sure software testers understand what users want (and don't want).

Alan: First off, I prefer to think of the role of test as accelerating the achievement of shipping quality software, which is similar to making sure the company provides software customers want but (IMO) is a more focused goal.

As far as examples go...how many do you want? Here are a few to ponder:

  • We tracked how long people ran our app. 45% ran it continuously - all the time. This is the way we ran the app internally. But another 45% ran it for 10 minutes or less. We had a lot of tests that tried to mimic a week of usage and look for memory leaks and other weirdness, but after seeing the data, we ended up doing a lot more testing (and improving) of start up and shut down scenarios.
  • We once canceled a feature after seeing that the data told us that it was barely used. In this case, we didn't add the tracking data until late (a mistake on our part). Ideally, we'd discover something like this earlier, and then redesign (rather than cancel).
  • We saw that a brand new feature was being used a lot by our early adopters. This was pretty exciting...but as testers, we always validate our findings. It turned out, after a bit more investigation, that the command following usage of the brand new feature was almost always undo. Users were trying the feature... but they didn't like what it was doing.

Staying Current / Learning

Hexawise: What advice do you have for people attending software conferences so that they can get more out of the experience?

Alan: Number one bit of advice is to talk to people. In fact, I could argue that you get negative value (when balanced against the time investment) if you show up and only attend talks. If there's a talk you like in particular, get to know the speaker (even us introverts are highly approachable - especially if you bring beer). Make connections, talk about work, talk about challenges you have, and things you're proud of. Take advantage of the fact that there are a whole lot of people around you that you can learn from (or who can learn from you).

Additionally, if you're in a multi-track conference, feel free to jump from talk to talk if you're not getting what you need - or if it just happens that two talks that interest you are happening at the same time.

Hexawise: How do you stay current on improvements in software testing practices; or how would you suggest testers stay current?

Alan: I read a lot of testing blog posts (I use feedly for aggregation). I subscribe to at least fifty software development and test related blogs. I skim and discard liberally, but I usually find an idea or two every week that encourages me to dig deeper.

Biggest tip I have, however, is to know how essential learning is to your success. Learning is as important to the success of a knowledge worker as food is to human life. Anyone who thinks they're an "expert" and that learning isn't important anymore is someone on a career death spiral.

Profile

image of book cover for The A Word by Alan Page

Alan has been a software tester (among other roles) since 1993 and is currently the Director of Quality for Services at Unity. Alan spent over twenty years at Microsoft working on a variety of operating systems and applications in nearly every Microsoft division. Alan blogs at angryweasel.com, hosts a podcast (w/ Brent Jensen) at angryweasel.com/ABTesting, and on occasion, he speaks at industry testing and software engineering conferences.

Links

Books: How We Test Software at Microsoft, The A Word

Blog: Tooth of the Weasel

Podcast: AB Testing

Twitter: @alanpage


Related posts: Testing Smarter with Dorothy Graham - Testing Smarter with James Bach

By: John Hunter on Mar 16, 2017

Categories: Combinatorial Software Testing, Customer Success, Software Development, Software Testing, Testing Smarter with..., Interview

This interview with Dorothy Graham is part of our series of “Testing Smarter with…” interviews. Our goal with these interviews is to highlight insights and experiences as told by many of the software testing field’s leading thinkers.

Dorothy Graham has been in software testing for over 40 years, and is co-author of 4 books: Software Inspection, Software Test Automation, Foundations of Software Testing and Experiences of Test Automation.


Dorothy Graham

Personal Background

Hexawise: Do your friends and relatives understand what you do? How do you explain it to them?

Dot: Some of them do! Sometimes I try to explain a bit, but others are just simply mystified that I get invited to speak all over the world!

Hexawise: If you could write a letter and send it back in time to yourself when you were first getting into software testing, what advice would you include in it?

Dot: I would tell myself to go for it. When I started in testing it was definitely not a popular or highly-regarded area, so I accidentally made the right career choice (or maybe Someone up there planned my career much better than I could have).

Hexawise: Which person or people have had the greatest influence on your understanding and practice of software testing?

Dot: Tom Gilb was (and is) a big influence, although in software engineering in general rather than testing. The early testing books (Glenford Myers, Bill Hetzel, Cem Kaner, Boris Beizer) were very influential in helping me to understand testing. I attended Bill Hetzel’s course when I was just beginning to specialize in testing, and he was a big influence on my view of testing back then. Many discussions about testing with friends in the Specialist Group In Software Testing (SIGIST), and with colleagues at Grove, especially Mark Fewster, also shaped my thinking.

Hexawise: Failures can often lead to interesting lessons learned. Do you have any noteworthy failure stories that you’d be willing to share?

Dot: I was able to give a talk about a failure story in Inspection, where a very successful Inspection initiative had been “killed off” by a management re-organisation that failed to realise what they had killed. There were a lot of lessons about communicating the value of Inspection (which also apply to testing and test automation). The most interesting presentation was when I gave it at the company where it had happened a number of years before; one (only one) person realized it was them.


Hexawise: Describe a testing experience you are especially proud of. What discovery did you make while testing and how did you share this information so improvements could be made to the software?

Dot: One of the most interesting experiences I had was when I was doing a training course in Software Inspection for General Motors in Michigan, and in the afternoon, when people were working on their own documents, several people left the room quite abruptly. I later found out that they had gone to stop the assembly line! They were Inspecting either braking systems or cruise control systems.

When I was working at Ferranti on Police Command and Control Systems, I discovered that the operating system’s timing routines were not working correctly. This was critical for our system, as we had to produce reminders in 2 minutes and 5 minutes if certain actions had not been taken on critical incidents. I wrote a test program that clearly highlighted the problems in the OS and sent it to the OS developers, wondering what their reaction would be. In fact they were delighted and found it very useful (and they did fix the bugs). It did make me wonder why they hadn’t thought to test it themselves though.

Views on Software Testing

Hexawise: In That's No Reason to Automate, you state "Testing and automation are different and distinct activities." Could you elaborate on that distinction?

Dot: By “automation”, I am referring to tools that execute tests that they have been set up to run. Execution of tests is one part of testing, but testing is far more than just running tests. First, what is it that needs to be tested? Then how best can that be tested - what specific examples would make good test cases? Then how are these tests constructed to be run? Then they are run, and afterwards, what was the outcome from the test compared to what was expected, i.e. did the software being tested do what it should have done? This is why tools don’t replace testers - they don’t do testing, they just run stuff.


Diagram from That's No Reason to Automate

Hexawise: Test automation allows for great efficiency. But there are limits to what can be effectively automated. How do you suggest testers determine what to automate and what to devote testers' precious time toward investigating personally?

Dot: There are two fallacies when people say “automate all the manual tests” (which is unfortunately quite common). The first one is that all manual tests should be automated - this is not true. Some tests that are now manual would be good candidates for automation, but other tests should always be manual. For example, “do these colours look nice?” or “is this what users really do?” or even “Captcha”, which should only work if a person, not a machine, is filling in a form. Think about it - if your automated test can do the Captcha, then the Captcha is broken.

Automating the manual tests “as is" is not the best way to automate, since manual tests are optimized for human testers. Machines are different to people, and tests optimized for automation may be quite different. And finally, a test that takes an inordinate amount of time to automate is not worth automating if the test isn’t needed often.

The other fallacy is in thinking that the automation does only what is done by manual tests. There are often many things that are impossible or infeasible in manual testing that can easily be done automatically. For example if you are testing manually, you see what is on the screen, but you don’t know if all the GUI (graphical user interface) objects are in the correct state - this could be checked by a tool. There are also types of testing that can only be done automatically, such as Monkey testing or High Volume Automated Testing.

Hexawise: Can you describe a view or opinion about software testing that you have that many or most smart experienced software testers probably disagree with? What led you to this belief?

Dot: One view that I feel strongly about, and many testers seem to disagree with this, is that I don’t think that all testers should be able to write code, or, what seems to be the prevailing view, that the only good testers are testers who can code. I have no objection to people who do want to do both, and I think this is great, especially in Agile teams. But what I object to is the idea that this has to apply to all testers.

There are many testers who came into testing perhaps through the business side, or who got into testing because they didn’t want to be developers. They have great skills in testing, and don’t want to learn to code. Not only would they not enjoy it, they probably wouldn’t be very good at it either! As Hans Buwalda says, “you lose a good tester and gain a poor programmer.” The ultimate effect of this attitude is that testing skills are being devalued, and I don’t think that is right or good for our industry!


Hexawise: What do you wish more developers, business analysts, and project managers understood about software testing?

Dot: The limitations of testing. Testing is only sampling, and can find useful problems (bugs) but there aren’t enough resources or time in the universe to “test everything”.

The best way to have fewer bugs in the software is not to put them in in the first place. I would like to see testers become bug prevention advisors! And I think this is what happens on good teams.

Hexawise: I agree that becoming bug prevention advisors is a very powerful concept. Software development organizations should be seeking to improve the entire system (don't create bugs, then try to catch them and then fix them - stop creating them). I recently discussed these ideas in Software Code Reviews from a Deming Perspective.

Dot: I had a quick look at the blog and like it - especially the emphasis on it being a learning experience. It seems to be viewing “inspection” in the manufacturing sense as looking at what comes off the line at the end. In our book “Software Inspection” (which is actually a review process), it is very much an up-front process. In fact applying inspection/reviews to things like requirements, contracts, architecture design etc is often more valuable than reviewing code.

I also fully agree with selecting what to look at - in our book we recommend a sampling process to identify the types of serious bug, then try to remove all others of that type - that way you can actually remove more bugs than you found.

Hexawise: In what ways would you see "becoming bug prevention advisors" happening? Done in the wrong way it can make people defensive about others advising them how to do their jobs in order to avoid creating bugs. What advice do you have for how software testers can make this happen and do it well?

Dot: I see the essential mind-set of the tester as asking “what could go wrong?” or “what if it isn’t” [as it is supposed to be]. If you have an Agile team that includes a tester, then those questions are asked perhaps as part of pair working - that way the potential problems are identified and thought about before or as the code is written. The best way is close collaboration with developers - developers who respect and value the tester’s perspective.

Industry Observations / Industry Trends

Hexawise: Your latest book, Experiences of Test Automation, includes an essay on Exploratory Test Automation. How are better tools making the ideas in this essay more applicable now than they were previously?

Dot: Actually this is now called High-Volume Automated Testing, which is a much better name for it. HiVAT uses massive amounts of test inputs, with automated partial oracles. The tests are checking that the results are reasonable, not exact results, and any that are outside those bounds are reported to humans to investigate. Harry Robinson described using this technique to test Bing - quite a large thing to test!
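As a rough sketch of the HiVAT pattern Dot describes, assuming a hypothetical function under test and invented invariants: generate a massive volume of inputs, and use a partial oracle that only checks that each result is reasonable, flagging outliers for a human to investigate:

```python
import random

def reverse_words(text):
    """Hypothetical function under test: reverse the word order of a string."""
    return " ".join(reversed(text.split()))

def looks_reasonable(original, result):
    """Partial oracle: we don't precompute the exact expected output,
    but any correct result must preserve these invariants."""
    return (sorted(result.split()) == sorted(original.split())  # same words
            and len(result) == len(original))                   # same length

random.seed(42)
vocabulary = ["alpha", "beta", "gamma", "delta", "epsilon"]
suspicious = []

# Fire a large volume of generated inputs at the function.
for _ in range(100_000):
    text = " ".join(random.choices(vocabulary, k=random.randint(1, 20)))
    result = reverse_words(text)
    if not looks_reasonable(text, result):
        suspicious.append((text, result))

print(f"{len(suspicious)} suspicious results for a human to investigate")
```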

Hexawise: When looking to automate tests one thing people sometimes overlook is that given the new process it may well be wise to add more tests than existed before. If all test cases were manually completed, that list of cases would naturally have been limited by the cost of repeatedly manually checking so many test cases in regression testing. If you automate the tests it may well be wise to expand the breadth of variations in order to catch bugs caused by the interaction of various parameter values. What are your thoughts on this idea?

Dot: Nice analogy - I like the term “grapefruit juice bugs”. Using some of the combinatorial techniques is a good way to cover more combinations in a reasonably rigorous way. Automation can help to run more tests (provided that expected results are available for them) and may be a good way to implement the combination tests, using pair-wise and/or orthogonal arrays.

Hexawise: In your experience how big of a problem is automation decay (automated tests not being maintained properly)? How do recommend companies avoid this pitfall?

Dot: This seems to be the most common way that test automation efforts get abandoned (and tools become shelfware), particularly for system-level test automation. It can be avoided by having a well-structured testware architecture in the first place, where maintenance concerns are thought about beforehand, using good programming practices for the automation code. Although fixing this later is possible, preventing it in the first place is far better.

Hexawise: Large companies often discount the importance of software testing. What advice do you have for software testers to help their organizations understand the importance of software testing efforts in the organization?

Dot: Unfortunately often the best way to get an organisation to appreciate testing is to have a small disaster, which adequate testing could have prevented. I’m not exactly suggesting that people should do this, but testing, like insurance, is best appreciated by those who suffer the consequences of not having had it. Perhaps just try to make people aware of the risks they are taking (pretend headlines in the paper about failed systems?)

Staying Current / Learning

Hexawise: I see that you’ll be presenting at the StarEast conference in May. What could you share with us about what you’ll be talking about?

Dot: I have two tutorials on test automation. One is aimed mainly at managers who want to ensure that a new (or renewed) automation effort gets off “on the right foot”. There are a number of things that should be done right at the start to give a much better chance of lasting success in automation.

My other tutorial is about Test Automation Patterns (using the wiki). Here we will look at common problems on the technical side (rather than management issues), and how people have solved these problems - ideas that you can apply. This is also a code-free tutorial - we are talking about generic technical issues and patterns, whatever test execution tool you use.

Hexawise: What advice do you have for people attending software conferences so that they can get more out of the experience?

Dot: Think about what you want to find out about before you come, and look through the programme to decide which presentations will be most relevant and helpful for you. It’s very frustrating to realise at the end of the conference that you missed the one presentation that would have given you the best value, so do your homework first!

But also realise that the conference is more than just attending sessions. Do talk to other delegates - find out what problems you might share (especially if you are in the same session). You may want to skip a session now and then to have a more thorough look around the Expo, or to have a deeper conversation with someone you have met or one of the speakers. The friends you make at the conference can be your “support group” if you keep in touch afterwards.


Hexawise: How do you stay current on improvements in software testing practices; or how would you suggest testers stay current?

Dot: I am fortunate to be invited to quite a few conferences and events. I find them stimulating and I always learn new things. There are also many blogs, webinars and online resources to help us all try to keep up to some extent.

Hexawise: What software testing-related books would you recommend should be on a tester’s bookshelf?

Dot: Mine, of course! ;-) I do hope that testers have a bookshelf! For testers, I recommend Lee Copeland’s book “A Practitioner’s Guide to Software Test Design” as a great introduction to many techniques. For a deep understanding, get Cem Kaner’s book “The Domain Testing Workbook”. I also like “Lessons Learned in Software Testing” by Kaner, Bach and Pettichord, and “Perfect Software and other illusions about testing” by the great Gerry Weinberg. But there are many other books about testing too!

Hexawise: What are you working on now?

Dot: I am working on a wiki called TestAutomationPatterns.org with Seretta Gamba. This is a free resource for system level automation, with lists of different issues or problems and resolving patterns to help address them. There are four categories: Process, Management, Design and Execution. There is a “Diagnostic” to help people identify their most pressing issue, which then leads them to the patterns to help. We are looking for people to contribute their own short experiences to the issues or patterns.

Dorothy Graham has been in software testing for over 40 years, and is co-author of 4 books: Software Inspection, Software Test Automation, Foundations of Software Testing and Experiences of Test Automation, and currently helps develop TestAutomationPatterns.org.

Dot has been on the boards of conferences and publications in software testing, including programme chair for EuroStar (twice). She was a founder member of the ISEB Software Testing Board and helped develop the first ISTQB Foundation Syllabus. She has attended Star conferences since the first one in 1992. She was awarded the European Excellence Award in Software Testing in 1999 and the first ISTQB Excellence Award in 2012.

Website

Twitter: @DorothyGraham

By: John Hunter on Mar 10, 2017

Categories: Software Testing, Testing Smarter with..., Interview

This interview with James Bach is the first in our series of “Testing Smarter with…” interviews. Our goal with these interviews is to highlight insights and experiences as told by many of the software testing field’s leading thinkers.

James Bach, one of the most well-known and controversial leaders in the software testing community, challenges himself and others to continually develop their software testing approaches. James believes that excellent testing is a craft that requires many skills and ongoing practice and focus to develop and maintain those skills. The skills of testing include general systems analysis and critical thinking, but also social skills. In some sense, any child can test. But children and other amateurs cannot test systematically, nor can they provide professional self-assessment and reporting on the testing they do.

photo of James Bach
James Bach, Founder and CEO of Satisfice Inc

Personal Background

Hexawise: What one or two software testing-related experiences have you found to be most personally satisfying in your career?

James Bach: In 2002, Microsoft complained to a federal judge that I hadn’t given it a power cord. Yes, an ordinary power cord of the kind you can pull out of the back of any standard desktop computer. Yes, to a federal judge. No, I am not making this up. Yes, I also thought it was bizarre-- bizarre but kind of satisfying.

This happened during the Microsoft Remedies Trial, wherein nine American states were suing Microsoft and the government because they wanted a tougher punishment for Microsoft after it lost its big antitrust case. The states hired me as an expert witness to find out if Microsoft was telling the truth when it claimed it was “technically infeasible” to remove IE and the Windows Media Player. I gathered a team and went to work. I soon discovered that it was possible to remove these things-- using only public information and Microsoft’s own helpful tech support people to set up my testing to prove it. (The tech support people did not realize the purpose of my questions, and cheerily gave me all that I needed to know.)

When I revealed my results, Microsoft demanded that I turn over all the materials necessary to reproduce my results. So I gave them one of my test systems. At the last moment, I pulled out the power cord, thinking that I was causing some low-level techie 30 seconds of annoyance. But the next day Microsoft was in court acting like I had withheld the Golden Power Cord of Truth.

It was satisfying because Microsoft never even attempted to prove me wrong on the facts. They used lawyer tricks to stop my truth bombs, instead. Way to go, Bill Gates.

Another satisfying moment was watching my son find a catastrophic bug in a life-critical piece of medical equipment. He didn’t ask for a spec or a test case specification. He used video-gamer techniques to confuse the system until it overrode its own safety features and melted itself inside the simulated patient. Yeah. Melted. That was a $3000 piece of equipment he ruined. I’ve rarely been prouder of myself for having such foresight as to create a son like him: he is such a good test tool.


Hexawise: Failures can often lead to interesting lessons learned. Do you have any noteworthy failure stories that you’d be willing to share?

James: How about the time I tried to set up my corporate server? I got it all working. Then I moved it from the conference room to the server room. I couldn’t get it to come online after that. For 12 hours I worked on it, all through the night. At long last I relented to my brother’s suggestion-- that we move it back to the conference room. I had refused to do that because it didn’t make any sense. The conference room simply connected to the server room. How would adding an extraneous variable like that change anything? But, zoom, we were back online.

After a moment of “wha??” the solution flashed into my mind: we must have two feeds to the Internet instead of one. It turned out that the conference room was patched into the open net, but the other port I had been using in the server room was routed through a firewall which in turn connected to the net. Nobody told me this during the buildout of my office space. It was a completely missing possibility in my mind.

So what did I learn? I learned about the importance of de-focusing, which includes trying apparently silly things to solve problems. I’m more open to that now.

Here is another interesting failure. I recently wrote a report involving the calculation of percentages. A non-technical person (a lawyer I worked with) checked my math and found it to be wrong. In fact, every one of her calculations was wrong. But in the process of refuting her claims, I discovered a different error in one of my own numbers. So, isn’t that interesting? Even if a critique of your work is incorrect, it could still be a useful stimulant to help you find your own problems.

Hexawise: What kinds of activities do you enjoy when you’re not at work?

James: I run a business, so I feel like I’m always at work. But I guess I do take little bits of time off each day. What do I do? I daydream. I read science news. I solve math and logic puzzles. I try to walk each day. And I watch videos with my wife. We binge on English television series, mostly.

Hexawise: Describe a testing experience you are especially proud of. What discovery did you make while testing and how did you share this information so improvements could be made to the software?

James: Well, many of those things I now use as testing exercises for my students, so I don’t want to spoil them. But, hmm, here’s one. I was given one day to break into an invoicing system for a large pharmaceutical company. I found three ways to do it. One of the methods I used was to get one of the sales engineers to sit with me while I tested. I asked him to demo the system to me and then he hung around while I tried to break in. The first time I broke in (using a traversal attack if you follow such things) I didn’t even know I had done it until the sales engineer said “hey you aren’t supposed to see that data.” Good thing he was there, huh? So part of testing can be charming people into helping you, and you never know what that help will bring.

Views on Software Testing

Hexawise: Some of the thought-provoking ideas you and Michael Bolton have come up with, like the important distinction between Testing vs. Checking have received a great deal of attention within the community. Other intellectual contributions to the community you have made are not as well known but are arguably equally important and insightful. One such contribution that comes to mind which really resonates with us at Hexawise is the exploratory-scripted (or formality) continuum you and your brother Jon described.

image of software testing formality continuum

Do you have one or two intellectual contributions to the community that you wish were more widely known?

James: I wish that more people understood the folly, the sheer silliness, of counting test cases and calculating pass rates.

I don’t care if you have 80 test cases or 8 million of them. That number tells me nothing about you or your testing. It tells me nothing by itself, and it tells me nothing in conjunction with other information (except in rare cases not worth talking about). It’s like telling me that you have broken your day into 27 tasks, or 1,353 tasks, or whatever. Just stop. Instead of fake science smoke rings, tell me what you actually did. Here’s a simple suggestion: instead of giving me a number, give me a list: a list of test ideas, test cases, test activities, bugs, features, people… I can do something with a list. But if you give me a number I just have to say “show me the things you are counting.”

Hexawise: Can you describe a view or opinion about software testing that you have changed your mind about in the last few years? What caused you to change your mind?

James: I used to think it was useful to talk about exploratory testing. But now I think it’s more helpful to say that all testing is exploratory. To say “exploratory testing” is the same as saying “testing.” Instead, I speak of how testing can be more or less formal, but it is always informal to some degree or else it ceases to be testing.

Also, in the last few years I have concluded that term “test automation” is toxic and should be avoided. It deposits a little poison in the mind whenever it is uttered. An angel loses its wings every time someone calls himself an “automated tester.” I changed my mind on both things as the result of ongoing attempts to teach students and hearing their questions and seeing where they get confused. That, and deep conversations with Michael Bolton.

Hexawise: How do/would you test very complex systems such as genetic algorithm systems and evolutionary systems? How do you test systems when we don't understand how they work? It seems kind of like medical differential diagnosis: poke, observe, learn, hypothesize, poke again. Or is there a better way?

James: I test them using social science methods. That, after all, is how scientists attempt to test their theories about social life. That means an emphasis on qualitative analysis, but bringing in statistical methods whenever applicable.

I agree that the medical world is a good example of where statistical methods and heuristic approaches are also needed. In testing complex things, some of what you need to do includes:

  • You must use time to your advantage-- observing systems over time the way primatologists observe chimps in the wild.
  • You must use Grounded Theory, beginning with immersion and observation, until patterns begin to reveal themselves.
  • You must focus on testability, creating an environment where you can control and observe more of what is there.
  • You must pay attention to clues. Many, many clues. Stop looking for simplistic “test cases” that will “prove” that the software works.
  • You must become expert at data wrangling, since these systems usually involve huge amounts of data.
  • Let other people help you.
  • Forge partnerships with users.


Hexawise: It is clear from your writings and frequent presentations, that you feel passionately that the software testing community would greatly benefit if far more testers embraced Exploratory Testing. It’s a deeply held conviction. What particular testing practice(s) do you most wish the software testing community would embrace?

James: Testing is exploratory. So, people who don’t “embrace exploratory testing” are, to me, not even testers. They are fact checkers, maybe. I think that’s not good enough.

I wish more testers were mathematically inclined. You must see this, too, at Hexawise-- the widespread math-phobia in our field. I want to talk about Karnaugh maps and the value of de Bruijn sequences. But I have to keep that stuff out of my classes or I will freak most people out. It’s not that I am a mathematics expert. I’m just an enthusiast who wants to be held to a higher standard. But even my dalliances in Bayesian belief nets sound like high elf incantations to most testers. Mathematical disabilities, in general, make our craft prey to quackery and fraud of all kinds.

At the same time, I want to be inclusive. Mathophobes have a lot to offer and I don’t want them to think I don’t welcome them. But must they necessarily be the majority of testers? I guess I’m saying I want a cure for mathophobia, please.
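For readers curious why a tester might care about de Bruijn sequences (this example is ours, not James's): a de Bruijn sequence B(k, n) packs every length-n string over a k-symbol alphabet into one stream, so a single generated input can exercise, say, every 4-digit code a PIN parser might see. The sketch below uses the standard Lyndon-word construction.

```python
def de_bruijn(alphabet, n):
    """Return the de Bruijn sequence B(k, n) over the given alphabet:
    a string in which every length-n combination occurs exactly once
    (cyclically). Standard Lyndon-word construction."""
    k = len(alphabet)
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return "".join(alphabet[i] for i in seq)

s = de_bruijn("0123456789", 4)
s += s[:3]  # unroll the cycle so wrap-around substrings appear too
# 10,003 characters now cover all 10,000 four-digit codes.
print(len(s), "0000" in s, "1234" in s, "9999" in s)
```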

Hexawise: What do you wish more developers, business analysts, and project managers understood about software testing?

James: I wish they understood that it benefits from specialization. When software people get heart disease they don’t limit themselves to a GP, do they? If they need surgery they don’t insist on being operated on by a rotating team of generic medical people who took a three day class in “Agile medicine” do they?

I get to be a specialist tester mainly because I do it on my own time. My clients are paying me, usually, for training, not for doing testing. So I usually test in my “free time” to sharpen my skills. I recently did have a wonderful and lucrative testing-related gig (because this particular project knew that it needed the best tester and analyst it could possibly get and became convinced I was the Chosen One), but those gigs are few and far between for someone like me.

Hexawise: When individual companies hire you for consulting engagements, how would you describe what it is that you usually seek to provide to them?

James: If it’s a training gig then my objective is to show them what testing can be, show them a path to get there, and encourage them to walk that path. A lot of that is about removing the obstacles to moving along. Chief among those obstacles is lack of confidence. So I do a lot of pep talking. Another obstacle is the very primitive, mechanical way that people think about testing. I have to replace that with systems thinking.

If it’s a testing gig then my objective is usually to provide deep, exemplary testing, that is transparent to my client. I want them to feel that they see their product in a beautiful focus. The danger I am always in when I test is that I will get too deep (and therefore be too slow and expensive). But for me, deep testing is the most fun, so it’s a constant struggle to hold myself back from using my most penetrating methods and tools.

Industry Observations / Industry Trends

Hexawise: As Artificial Intelligence increases in capability (for example, the strides made by Watson), do you foresee an increase in the capabilities of computer checking? I am thinking not of an elimination of the difference between human-led testing and what can be done without people, but of the extent to which you see the possibility for AI to do a progressively better job of checking.

James: I foresee a collapse of critical thinking about these complex systems, followed by some sort of disaster, followed by a new realization of the risks of surrendering human judgment to a machine. I foresee that this will be an ongoing cycle. This collapse of critical thinking will lead to more shallow testing and perfunctory checking, presented as if it were deep. For an example of what I mean, see this old computer commercial:

Pay attention starting at 2:05. Oh look, the computer is assuring us that it has no errors. Everyone relax! Its “electronic brain” can be trusted!

Hexawise: Do you have any predictions about how large an impact Artificial Intelligence and Machine Learning will have on software testing in the next 5-10 years?

James: I don’t think it will have any impact on testing as such, except inasmuch as many people (not skeptical testers, but people who might otherwise hire testers) will trust black boxes when they should be challenging them.

I suppose as machine learning becomes more available to the masses, someone might try to train one to recognize bad output of some kind. That’s a sort of test tool. But it would only apply to well-established kinds of badness.

If you think about it, anti-spam systems are a sort of test system. Machine learning is used in spam filters. So, I guess testing is already using machine learning in that sense. But I don’t see the average tester applying machine learning methods to testing. I don’t see a developer doing that, either. It’s too involved and complicated; too narrow in application.

I hear that people at Google are going to “put coders out of business” with a system that writes code based on people just talking. You know what that’s called? A compiler. They are inventing a high level compiler. Now the people who talk will be called developers and will have to learn to talk properly, because it will emerge that normal people can’t say what they actually want.

Hexawise: Have you seen a particularly effective process where the software testing team was integrated into the feedback from a deployed software application (getting feedback from users on problems, exploring issues the software noted as possible bugs...)? What was so effective about that instance?

James: Not really. What I see is developers ignoring feedback. It’s too overwhelming. I suspect there are people who are really good at doing that. But I haven’t run across any.

One of the things that has happened with DevOps is a de-emphasis on testing and more of an emphasis on overall risk management. That’s a valid strategy, of course, but it has interesting blind spots. Whenever I hear a developer speak about wanting feedback from users I immediately think about how abusive and incompetent most users are about reporting problems. No, my developer friends, you really don’t want to read all those Internet comments on your software. You will be demoralized. But testers? We love reading that stuff. It’s our wheelhouse. We get clues and then we can reproduce the problems and make them sensible for the devs.

My brother, at eBay, with his testing mentality, loves going over the user feedback and bringing it to the teams there. But he will tell you it’s a constant struggle to get the attention of the dev teams.


Hexawise: Often one of the major roadblocks to software testers is their own management. Do you think this is a fair statement? Do you have suggestions for how testers can attempt to improve the situation? My background is strongly influenced by W. Edwards Deming, so I have a tendency to look at the organization as a system and see room to improve the management system. It seems to me often the biggest gains are not possible if we keep departments separate (software development, software testing, marketing, customer service...). We can make improvements in software testing even if it is largely seen as separate from the organization, but in doing so we miss much greater potential improvement.

James: The collapse of the test management industry is a terrible problem. It’s getting harder to find any kind of test manager out there. Do they even exist in Silicon Valley any more or have they all been hunted down by parasitic wasps who lay “scrum master” eggs in their living carcasses?

People who seem to know little about management or testing tell me that test managers are not needed. Okay, that means a whole lot of things that test managers do will not get done. This includes: providing a protected place for testers to work, free of harassment; negotiating for testability; negotiating for resources; assuring that schedules are reasonable; assuring that testing gets the respect it requires in order to attract and keep talented people; assuring that deadlines are met; explaining testing to management; assuring that testers are properly trained. When those things aren’t happening, testers tend to become more zombie-like and reactive (I’m not speaking of those fast zombies); or they become cheerleaders for the devs, instead of critical thinkers.

Staying Current / Learning

Hexawise: How do you stay current on improvements in software testing practices?

James: I’m not convinced there are improvements in testing practices in the absence of improvements in the thinking and social systems that drive practices-- and those things don’t improve much, as I’m sure you’ve noticed. Seems to me that the current nonsense in our craft is very similar to the old nonsense. Maybe some of the buzzwords have changed, but not much else.

The landscape of testing has definitely changed. Agile and Lean have aggressively colonized a lot of the testing space. Since most testers are young people (and test management has been eviscerated) they are easy pickings for the Agile Universalists (the people who think that we don’t need testers because we can just all test whenever we feel like it).

What this means is that testing remains rather primitive wherever I go (with a few interesting exceptions, driven inevitably by a single enlightened Elrond-like or Galadriel-like manager, who always seems to disappear off into the Grey Havens within a couple of years of me meeting him or her).

How I become aware of new and interesting ideas is through my community. For instance, a student told me about Karnaugh maps the other day and now I am trying to find a use for them.

Hexawise: How would you suggest testers stay current?

James: I don’t think currency is a thing in testing, except with respect to learning about certain emerging technologies and buzzwords.

The bigger thing in testing is to push us forward, which not enough testers are trying to do. Don’t worry about currency, worry about whether you truly understand testing, and keep working on that study.

Read widely about science. Get ideas from that. And play with the ideas. For instance, I read on Hacker News about 350,000 free images being released by the Metropolitan Museum of Art. I decided to experiment with turning that into a practical resource for test data. This led to playing with data wrangling and image analysis tools.

Also, attend a conference. Don’t bother to go to the talks, though. Most of the talks are full of fluff. Instead, find people and talk to them. Compare notes, make friends. Go to the testing lab. Or host a little conference. Invite testers to a small gathering where you can share experience reports.

Hexawise: What software testing-related books would you recommend should be on a tester’s bookshelf?

James:

Profile

James Bach has authored two books and consulted and presented on software testing worldwide. It is difficult to put into words how unique and insightful James is. In order to get a feel, we suggest listening to his presentations yourself and reading his excellent blog.

Some Career Highlights

Links

By: John Hunter and Justin Hunter on Mar 2, 2017

Categories: Interesting People , Software Testing, Testing Strategies, Interview, Testing Smarter with...

I read a great post by Aleksis Tulonen, Brainstorming Test Ideas With Developers, and wanted to share it with you.

The ideas in the post provide a very efficient way to:

  1. increase the robustness and thoroughness of testing
  2. prioritize among different testing ideas (both from a relative importance standpoint and from a relative timing perspective)
  3. surface new testing ideas that would not otherwise be considered
  4. provide some useful risk mitigation against the "all models are wrong; some models are useful" problem

I'd suggest also scheduling a debrief with the same testers and developers a few weeks or months after the code has been deployed.

At that time, take a look at photos of the pre-testing Post-it notes and a list of defects that slipped past testing, and ask:

  1. What extra tests would we need to have added to find these defects?
  2. What specific inputs did we forget to include?
  3. Would techniques such as pairwise testing have been beneficial?
  4. What areas did we spend a lot of effort on that did not result in learning much?
  5. What lessons learned should we incorporate for next time?

The idea of deliberately examining your software development and testing practices will be familiar to those using agile retrospectives. The power of continually improving the development practices used within the organization is hard to appreciate, but it is immense. The gains compound over time, so the initial benefits are only a glimpse of what can be achieved by continuing to iterate and improve.

Related: Pairwise and Combinatorial Software Testing in Agile Projects - Applying Toyota Kata to Agile Retrospectives - Create a Risk-based Testing Plan With Extra Coverage on Higher Priority Areas

By: John Hunter and Justin Hunter on Dec 15, 2016

Categories: Agile, Software Testing, Testing Strategies

We are excited to announce an ongoing partnership with Datalex to improve software test efficiency and accuracy. Datalex has achieved significant benefits in software quality assurance and speed-to-market through their use of Hexawise, including:

  • Greater than 65 percent reduction in the Datalex test suite
  • Clearer understanding of test coverage
  • Higher confidence in the thoroughness of software tests
  • Complete and consistently formatted tests that are simple to automate

An airline company’s regression suite typically contains thousands of test cases. Hexawise is used by Datalex to optimize these test cases, leading to fewer tests as well as greater overall testing coverage. Hexawise also provides Datalex with a complete understanding of exactly what has been tested after each and every test, allowing them to make fact-based decisions about how much testing is enough on each project.

Hexawise has been fundamental in improving the way we approach our Test Design, Test Coverage and Test Execution at Datalex... My team love using Hexawise given its intuitive interface and its ability to provide a risk-based approach to coverage, which gives them more confidence during release sign-off.


Áine Sherry

Global Test Manager at Datalex

“As a senior engineer in a highly innovative company, I find Hexawise crucial in regards to achieving excellent coverage with a fraction of the time and effort. Hexawise will also facilitate us to scale onwards and upwards as we continue to innovate and grow,” – Dean Richardson, Software Test Engineer at Datalex.

By eliminating duplicative tests and optimizing the test coverage of each test case, Hexawise provides great time savings in the test execution phase. Hexawise can generate fewer test scenarios than testers would create on their own, while providing more test coverage. Time savings in test execution come about simply because it takes less time to execute fewer tests.
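
To make the "fewer tests, more coverage" idea concrete, here is a minimal sketch of greedy pairwise test generation in Python. This is not Hexawise's algorithm - just an illustration of why covering every pair of parameter values requires far fewer tests than covering every combination. The parameters and values are invented for the example.

```python
from itertools import combinations, product

def pairwise_suite(parameters):
    """Greedy pairwise generation: repeatedly pick the candidate test
    that covers the most not-yet-covered parameter-value pairs."""
    names = list(parameters)
    values = [parameters[n] for n in names]

    # Every pair of values (from two different parameters) to cover.
    uncovered = set()
    for i, j in combinations(range(len(names)), 2):
        for va, vb in product(values[i], values[j]):
            uncovered.add(((i, va), (j, vb)))

    def pairs_in(test):
        return {((i, test[i]), (j, test[j]))
                for i, j in combinations(range(len(test)), 2)}

    # Enumerating every candidate is fine for a toy example but would
    # not scale to real-world test plans.
    candidates = list(product(*values))
    suite = []
    while uncovered:
        best = max(candidates, key=lambda t: len(pairs_in(t) & uncovered))
        uncovered -= pairs_in(best)
        suite.append(dict(zip(names, best)))
    return suite

params = {
    "browser": ["Chrome", "Firefox", "Safari", "Edge"],
    "os": ["Windows", "macOS", "Linux"],
    "payment": ["card", "voucher", "points"],
    "currency": ["USD", "EUR", "GBP"],
}
tests = pairwise_suite(params)
print(f"{len(tests)} pairwise tests vs {4 * 3 * 3 * 3} exhaustive combinations")
```

For the four parameters above, exhaustive testing needs 108 tests, while the greedy suite covers every pair of values in far fewer (typically around a dozen).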

Related: How to Pack More Coverage Into Fewer Software Tests - Large Benefits = Happy Hexawise Clients and Happy Colleagues

By: John Hunter on Nov 10, 2016

Categories: Business Case, Customer Success, Testing Case Studies

Performance testing examines how the software performs (normally "how fast") in various situations.

Performance testing does not just result in one value. You normally performance test various aspects of the software in differing conditions to learn about the overall performance characteristics. It can well be that certain changes will improve the performance results for some conditions (say a powerful laptop with a fiber connection) and greatly degrade the performance for other use cases. And often the software can be coded to attempt to provide different solutions under different conditions.

All this makes performance testing complex. But trying to over-simplify performance testing removes much of its value.

Another form of performance testing is done on sub-components of a system to determine which solutions may be best. These are often server-based issues that likely don't depend on individual user conditions but can be impacted by other things. For example, under normal usage option 1 may provide great performance, but under heavier load option 1 slows down a great deal and option 2 is better.
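
As an illustration of why a single number misleads, here is a minimal sketch in Python that measures median and 95th-percentile latency for two hypothetical implementations at several load levels. The two handlers are invented stand-ins: "option 1" is fast alone but serializes on a shared lock, while "option 2" is slightly slower alone but contention-free.

```python
import statistics
import threading
import time
from concurrent.futures import ThreadPoolExecutor

def measure(handler, concurrent_users, requests_per_user=50):
    """Run the handler under the given concurrency; return the median
    and 95th-percentile latency in milliseconds."""
    latencies = []
    def one_user():
        for _ in range(requests_per_user):
            start = time.perf_counter()
            handler()
            latencies.append((time.perf_counter() - start) * 1000)
    with ThreadPoolExecutor(max_workers=concurrent_users) as pool:
        for _ in range(concurrent_users):
            pool.submit(one_user)
    latencies.sort()
    return statistics.median(latencies), latencies[int(len(latencies) * 0.95)]

# Hypothetical stand-ins for "option 1" and "option 2".
shared_lock = threading.Lock()

def option_1():   # fast on its own, but serializes on a shared lock
    with shared_lock:
        time.sleep(0.001)

def option_2():   # slightly slower on its own, no shared state
    time.sleep(0.002)

for users in (1, 10, 50):
    for name, handler in (("option 1", option_1), ("option 2", option_2)):
        p50, p95 = measure(handler, users)
        print(f"{users:3} users  {name}: p50={p50:7.1f} ms  p95={p95:7.1f} ms")
```

With a single user, option 1 looks better; at 50 concurrent users its 95th-percentile latency balloons while option 2 barely changes - exactly the kind of crossover a single headline number hides.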

Focusing on these tests of sub-components runs the risk of sub-optimization, where optimizing individual sub-components results in less than optimal overall performance. Performance testing sub-components is important, but what matters most is testing the performance of the overall system. Performance testing should always place a priority on overall system performance and not fall into the trap of creating a system with components that perform well individually but do not work well together when combined.

Load testing, stress testing and configuration testing are all part of performance testing.

Continue reading about performance testing in the Hexawise software testing glossary.

By: John Hunter on Sep 27, 2016

Categories: User Experience, Testing Strategies, Software Testing

Hexawise has been driven by the vision of providing software testers more effective coverage in fewer tests. The Hexawise combinatorial software testing application allows software testers to achieve both of these seemingly contradictory goals.

Too Many Tests

See some previous posts where we have explored how Hexawise achieves more coverage with fewer tests:

By: John Hunter on Sep 14, 2016

Categories: Hexawise test case generating tool, Hexawise

Software testing concepts help us compartmentalize the complexity that we face in testing software. Breaking the testing domain into various areas (such as usability testing, performance testing, functional testing, etc.) helps us organize and focus our efforts.

But those concepts are constructs that often have fuzzy boundaries. What matters isn't where we place certain software testing efforts. What matters is helping create software that users find worthwhile and, hopefully, enjoyable.

One of the frustrations I have faced in using internet-based software in the last few years is that it often seems to be tested without considering that some users will not have fiber connections (and might have high-latency connections). I am not certain latency (maybe combined with lower bandwidth) is the issue, but I have often found websites either literally unusable or practically unusable (way too frustrating to use).

It might be that the user experience I face (on the poorly performing sites) is as bad for all users, but my guess is that it is a decent experience on the fiber connections the managers have when they decide this is an OK solution. It is a usability issue, but it is also a performance issue in my opinion.

It is certainly possible to test performance on powerful laptops with great internet connections and get good results for web applications that will perform badly on smart phones using wifi or less-than-ideal cell connections. This failure to understand real user conditions is a significant problem and an area of testing that should be improved.

I consider this an interaction between performance testing and user-experience testing (I use "user-experience" to distinguish it from "usability testing", since I can test aspects of the user experience without users testing the software). The page may load in under 1 second on a laptop with a fiber connection, but that isn't the only measure of performance. What about your users who are connecting via a wifi connection with high latency? What if, in that case, the page takes 8 seconds to load and your various interactive features either barely work or won't work at all given the high latency?

In some cases ignoring the performance for some users may be OK. But if you care about a system that delivers fast load times, you need to consider performance not just for a subset of users but for users overall. The extent to which you prioritize various use cases will depend on your specific situation.

I have a large bias for keeping the basic experience very good for all users. If I add fancy features that are useful, I do not like to accept meaningful degradation to any user's experience - graceful degradation is very important to me. That is less important to many of the sites that I use, unfortunately. What priority you place on it is a decision that impacts your software development and software testing process.

Hexawise attempts to add features that are useful while paying close attention to making sure we don't make things worse for users who don't care about the new feature. Making sure the interface remains clear and easy to use is very important to us. It is also a challenge, when you have powerful and fairly complex software, to keep the usability high. It is very easy to slip and degrade the user's experience. Sean Johnson does a great job making sure we avoid doing that.

Maintaining the responsiveness of Hexawise is a huge effort on our part, given the heavy computation required to generate tests for large test plans.

You also have to recognize where you cannot be all things to all people. Using Hexawise on a smart phone is just not going to be a great experience. Hexawise is simply not suited to that use case, and therefore we wouldn't test it.

For important performance characteristics it may well be that you should create a separate Hexawise test plan to test the performance under several different conditions (relating to latency, bandwidth and perhaps phone operating system). It could be done within one test plan, but it seems to me that separate test plans would be more effective most of the time. It may well be that you have the primary test plan to cover many functional aspects and a much smaller test plan just to check that several things work fine in a high-latency, smart phone use case.

Within that plan you may well want to test out various parameter values for certain parameters:

  • operating system: iOS, Android 7, Android 6, Android 5
  • latency: ...

Of course, what should be tested depends on the software being tested. If none of the items above matter in your case, they shouldn't be used. If you are concerned about a large user base you may well be concerned about performance on various Android versions, since the upgrade cycle to new versions is so slow (while most iOS users are on the latest version fairly quickly).

If latency has a big impact on performance then including a parameter for latency would be worthwhile, and testing various values for it could be sensible (maybe high, medium and low). The same goes for testing various levels of bandwidth (again, depending on your situation).
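
Here is a minimal sketch of what such a plan could look like as a parameterized test, using pytest. Everything in it is a placeholder: load_page_under() is a hypothetical stub standing in for driving the app under simulated network conditions, and the parameter values and the 3-second budget are illustrative only.

```python
import random
from itertools import product

import pytest

# Hypothetical parameter values for a small device/network test plan.
OPERATING_SYSTEMS = ["iOS", "Android 7", "Android 6", "Android 5"]
LATENCIES = ["low", "medium", "high"]
BANDWIDTHS = ["high", "medium", "low"]

def load_page_under(os_name, latency, bandwidth):
    """Hypothetical stand-in: a real plan would drive the app under
    simulated network conditions and return the load time in seconds.
    Here it just returns a fake measurement so the sketch runs."""
    return random.uniform(0.5, 2.5)

@pytest.mark.parametrize(
    "os_name,latency,bandwidth",
    list(product(OPERATING_SYSTEMS, LATENCIES, BANDWIDTHS)),
)
def test_page_load_budget(os_name, latency, bandwidth):
    elapsed = load_page_under(os_name, latency, bandwidth)
    assert elapsed < 3.0  # illustrative budget, not a recommendation
```

With four operating systems and three values each for latency and bandwidth, exhaustive combinations already produce 36 tests; a pairwise subset (as sketched earlier) would cover every pair of conditions in far fewer.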

My view is always very user-focused, so I naturally relate pretty much everything I do to how it impacts the end user's experience.

Related: 10 Questions to Ask When Designing Software Tests - Don’t Do What Your Users Say - Software Testers Are Test Pilots

By: John Hunter on Jul 26, 2016

Categories: User Experience, Testing Strategies, Software Testing

Usability testing is the practice of having actual users try the software. Outcomes include data on the tasks the users are given to complete (successful completion, time to complete, etc.), comments the users make, and expert evaluation of their use of the software (noticing, for example, that none of the users follow the intended path to complete a task, or that many users looked for a different way to complete a task and, failing to find it, eventually found a way to succeed).

Usability testing involves systematic evaluation of real people using the software. This can be done in a testing lab where an expert can watch the user, but that is expensive. Remote monitoring (watching the user's screen, communicating with the user by voice, and viewing a webcam showing the user) is also commonly used.

In these settings the user will be given specific tasks to complete and the testing expert will watch what the user does. The expert will also ask the user questions about what they found difficult or confusing (in addition to what they liked) about the software.

The concept of usability testing is to get feedback from real users. In the event you can't test with the real users of a system, it is important to consider whether your usability testers accurately represent that population. If the users of the system are fairly unsophisticated and you use usability testers who are very computer savvy, they may well not provide good feedback (as their use of the software may be very different from that of the actual users).

"Usability testing" does not encompass experts evaluating the software based on known usability best practices and common problems. This form of expert knowledge of wise usability practices is important but it is not considered part of "usability testing."

Find more exploration of software testing terms in the Hexawise software testing glossary.

Related: Usability Testing Demystified - Why You Only Need to Test with 5 Users (not a complete answer, but it does provide insight into the value of quick testing during development) - Streamlining Usability Testing by Avoiding the Lab - Quick and Dirty Remote User Testing - 4 ways to combat usability testing avoidance

By: John Hunter on Jul 4, 2016

Categories: User Experience, User Interface, Testing Strategies, Software Testing