There is now a set of requirement manipulation links (add, edit, delete) at the right-hand side of the requirement definition. This is handy for plans with many parameters, where you often end up scrolled all the way to the right in the requirements panel.
There is now a spinner that reduces the likelihood of adding a requirement twice by clicking Add while the first add is still in progress.
There is now a message in this case showing which requirement was satisfied by the test, rather than a blank.
If you use Hexawise in multiple tabs, it's possible for the export dialog to be open while you do something in another tab that clears the cached generated tests. This would result in an error when trying to export from the dialog. Hexawise is now smart enough to detect this rare situation and adjust.
The same security fix gone awry.
An overly draconian security fix gone awry.
Most of the panels you see in the Hexawise interface are collapsible and expandable, and the current state is preserved so the interface stays the way you last used it for that particular plan. There was one small exception: the value pair panel auto-expands as soon as you add a value pair (invalid pair or married pair), and in that one case the expanded state wasn't sticky. So when you came back to the define inputs page for that plan, the panel would be collapsed again.
This is a classic pairwise defect. Value pair panel collapse state is sticky? Check! Value pair panel collapse state sticky when the panel auto-expands due to a value pair addition? Broken!
Broken no more.
When generating mixed-strength tests, you can't select fewer than n parameters to receive n-way coverage. For example, it doesn't make sense to ask for 3-way coverage on just 2 parameters. Hexawise has a user message explaining this, but the message was covered up (literally, in this case) in the user interface by a regression introduced when we changed how test generation results are retrieved. This issue is fixed.
Thanks to Geordie for pointing out the problem.
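To make the rule concrete, here is a minimal sketch of that validation, assuming parameters assigned strength of at least n count toward the n-way group (the function and its messages are ours for illustration, not Hexawise's actual code):

    def validate_mixed_strengths(strengths):
        """strengths maps each parameter name to its requested n-way strength."""
        errors = []
        for n in sorted(set(strengths.values())):
            eligible = [p for p, s in strengths.items() if s >= n]
            if len(eligible) < n:
                errors.append(
                    f"{n}-way coverage needs at least {n} parameters; "
                    f"only {len(eligible)} qualify."
                )
        return errors

    # e.g. validate_mixed_strengths({"Browser": 3, "OS": 3})
    # -> ["3-way coverage needs at least 3 parameters; only 2 qualify."]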
Mixed-strength tests start out with all parameters set to 2-way strength, but a mixed-strength run was still computed for this case even if plain 2-way tests had already been generated. Hexawise now re-uses cached 2-way (or 3-way, or n-way) tests whenever a mixed-strength request assigns the same n-way strength to every parameter, rather than recomputing them as mixed-strength.
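A minimal sketch of that cache check; the key format shown is purely illustrative, not Hexawise's actual scheme:

    def cache_key_for_request(strengths):
        """Normalize a generation request to a cache key.

        If every parameter in a mixed-strength request has the same strength n,
        the result is identical to a plain n-way run, so any cached n-way tests
        can be reused instead of being recomputed.
        """
        distinct = set(strengths.values())
        if len(distinct) == 1:
            return f"{distinct.pop()}-way"  # matches the plain n-way cache entry
        return "mixed:" + ",".join(f"{p}={s}" for p, s in sorted(strengths.items()))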
The export type has to be represented somewhere, since you can export both types at the same time into a single zip file. The export type now appears in the directory name rather than the file name, so the file name better describes the tests inside (the file name is the test plan name plus the strength of the tests, assuming the export includes generated tests).
OPML importing is much more robust to the variety of ways a mind mapping or outlining tool might export the test plan inputs as OPML. Automated regression tests are now in place on this to boot! QA FTW!
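As a rough illustration of what "robust to variations" means here, a tolerant OPML reader might look something like this; the convention that top-level outlines are parameters and their children are values is our assumption for the sketch, not necessarily how Hexawise's importer maps things:

    import xml.etree.ElementTree as ET

    def parameters_from_opml(opml_text):
        """Read parameter names and values from an OPML outline."""
        def label(node):
            # Different tools put the label in 'text' or 'title'.
            return (node.get("text") or node.get("title") or "").strip()

        body = ET.fromstring(opml_text).find("body")
        if body is None:
            return {}
        parameters = {}
        for param in body.findall("outline"):
            if label(param):
                parameters[label(param)] = [
                    label(v) for v in param.findall("outline") if label(v)
                ]
        return parameters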
Did you know that when adding a new parameter you can ignore the instructions to enter the values one per line and instead enter them comma-delimited? Probably not, but you can, and this interacted badly with our range parsing code in some cases. Entering a range like this worked fine:
1,000-10,000
But entering a range like this (with whitespace around the dash) did not:
1,000 - 10,000
It resulted in three values, "1", "000 - 10", and "000", rather than a single value range.
To trigger this defect, then, you needed a parameter add with a single parameter value that happened to be a comma-delimited range and that happened to contain whitespace. That's somewhere between a 3-way and a 4-way fault, depending on exactly how you would have modeled the tests.
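Here is a minimal sketch of the idea behind the fix, assuming a simple regex-based check; it is not Hexawise's actual parser:

    import re

    # A numeric range: two numbers with optional thousands separators, joined
    # by a dash, with optional whitespace around the dash.
    RANGE = re.compile(r"^\s*\d{1,3}(?:,\d{3})*\s*-\s*\d{1,3}(?:,\d{3})*\s*$")

    def split_values(line):
        """Split one line of parameter-value input into values."""
        if RANGE.match(line):
            # "1,000 - 10,000" stays a single range value.
            return [line.strip()]
        # Otherwise commas delimit values, as before.
        return [v.strip() for v in line.split(",") if v.strip()]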
Hexawise creates value pairs on your behalf in some circumstances, when you create constraints that imply other logically necessary constraints. There were two cases where Hexawise would complain about not being able to create these because they already existed. Of course, their already existing is just fine, and Hexawise shouldn't have complained.
An overly draconian security fix gone awry.
See above.
It's massive in the same way the Sun is massive.
Did we mention it's massive?
Accessing Analyze Tests with a certain sequence of user interactions would cause Analyze Tests to stick at 0% complete. (Sequencing issues are an interesting form of pairwise defect to consider including in your tests.)
Adding and editing expected results in Auto-Scripts is now more robust: unsaved edits are detected when you navigate away before completing them, consistent with how editing Auto-Script steps behaves.
The text area for defining test inputs (parameter add, parameter edit, bulk add, and bulk edit) now sizes itself to match the top panel, giving you more room to edit your parameter values.
The "export a mind map" achievement has a clearer description and a link to the help file about exporting.
In some cases, and at some screen resolutions, the two panels on the "Your Achievements" page would be inconsistently sized.
There were some cases (involving a mix of comma-delimited and non-comma-delimited entries) where value ranges would not expand into individual values. There were also cases where the intended distribution of expanded values (lowest value in the range, highest value in the range, a random value in the range, then repeat) was not used.
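A minimal sketch of that distribution, assuming an inclusive integer range (our illustration of the described behavior, not Hexawise's implementation):

    import random

    def expand_range(low, high, count):
        """Pick `count` values from the range: lowest, highest, random, repeat."""
        values = []
        for i in range(count):
            if i % 3 == 0:
                values.append(low)
            elif i % 3 == 1:
                values.append(high)
            else:
                values.append(random.randint(low, high))
        return values

    # e.g. expand_range(1000, 10000, 6)
    # -> [1000, 10000, <random>, 1000, 10000, <random>]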
The UI is now more consistent about showing a "View" or "Discard" option when you quit editing steps before completing them. The same handling is now provided for editing expected results as well.
Unsaved expected result edits are now saved (after a prompt) when navigating away from the auto-scripts page.
For large sets of tests that are already generated and cached but take some time to render for display, the rest of the page now renders immediately, with a waiting indicator shown only for the portion of the page that is waiting on the rendered tests.