In the previous implementation of the Xray export dialog, the Test Name field only leveraged the global variables (like Model or Feature Name) and the Description field was automatically populated in the background.
This change allows users to specify both fields using not only global variables but also model variables. It applies to both Manual and Automate scripting types.
The caveat: previously, the Description for Automate included the entire content of the Feature tab without any user interaction or choice. If that behavior needs to be preserved, users would need to manually copy the content from the Automate tab into the Description field during export.
This was a pain point specific to system integration and end-to-end testing. Under the previous functionality, suppose we have a Guidewire integration model: the Policy Center section has 10 parameters resulting in 20 scenarios, while the Billing Center validation needs only 3 parameters. We couldn't write a separate script block like this:
Scenario: Billing Center validation
Given I navigate to policy summary in BC
When I check that P1 is <P1> and P2 is <P2>
And On the Amounts screen, premium is <Premium>
Then Validation is successful
Such a block would export only the number of unique combinations of P1, P2, and Premium, which is very often fewer than 20. Such mismatches caused issues during transitions to test management systems for execution and reporting, and forced users to write very long scripts that were harder to analyze for feedback. Also, different parts of the e2e flow could require different execution methods (manual vs. automated).
This change adds the @enumerate_all tag: when specified above the Scenario or Scenario Outline name, it ensures that the number of exported scripts matches the number of rows on the Scenarios screen, regardless of how many parameters that particular script block references.
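To illustrate the counting difference, here is a minimal Python sketch. The row data and the de-duplication logic are illustrative assumptions, not Hexawise internals:

```python
# Illustration: why a sub-script that references only P1, P2, and Premium
# can export fewer scripts than the total number of scenario rows.
rows = [
    ("P1-a", "P2-x", "100"),
    ("P1-a", "P2-x", "100"),  # duplicate combination for these 3 parameters
    ("P1-b", "P2-y", "200"),
]

# Default behavior: one script per unique combination of referenced parameters.
default_export = list(dict.fromkeys(rows))  # order-preserving de-duplication

# With @enumerate_all: one script per scenario row, duplicates included.
enumerate_all_export = list(rows)

print(len(default_export))        # 2
print(len(enumerate_all_export))  # 3
```

With 20 rows but only, say, 12 unique (P1, P2, Premium) combinations, the default export would produce 12 scripts; @enumerate_all keeps it at 20.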
Old hierarchy: Project (optional) -> Model
New hierarchy: Project (optional) -> Folder (optional) -> Model (for the avoidance of doubt, you can still put the models directly into the project)
So you should have more flexibility in terms of grouping your models. Folders can be created via the "New Test Model" dialog, "Edit Model" one, or Project level view (icon with a "+" sign).
Folder names automatically include the Project Name as part of the naming convention. Folders under the same project cannot have identical names.
If you select the Project, you will see all the models from all the folders on the right. If you select the Folder, you will see only the models in that folder.
If you delete a folder, it doesn't delete the models in it, it just removes them from the folder and puts them into the underlying project.
When accessing a Test Model that has Notes associated with it, you now see a notification pop-up indicating that Notes exist. The pop-up includes a “don’t notify me again” checkbox if you don't want to see it again when accessing that test model going forward.
URLs are now clickable inside any given Test Model's Notes.
The dialog that appears when a user creates constraints contradicting forced interactions (or vice versa) has been updated to reflect the most recent terminology and represent the involved elements (forced interaction ID and constraint value/type) more precisely.
In the past, selecting different preview and export coverage strengths for Automate-related outputs resulted in the generated file matching the preview choice, not the export one. That could cause confusion since the selection in the Export dialog is the "final" one from the user workflow standpoint. This was adjusted to make the generated file use the export strength selection.
Resolved an issue where the warning dialog for 1-way computation strength could automatically redirect a user back to the previously generated n-way (without giving a chance to accept the warning).
Resolved an issue where the validation of model naming uniqueness was case-insensitive.
I.e., a user could create a model with a name that matched another existing model (both private, or both in the same project), with the only difference being the case ("TestModel" vs "testmodel"). No error was presented, but the model was not created either. The duplicate-name error is now properly shown to the user and the "Create" button is disabled.
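A minimal sketch of case-insensitive duplicate detection, the kind of check the validation now surfaces. This is illustrative only; `is_duplicate` is a hypothetical helper, not the actual Hexawise code:

```python
# Existing model names in the same scope (same project, or same private area).
existing = ["TestModel", "Billing"]

def is_duplicate(name: str, existing_names: list) -> bool:
    # casefold() handles case-insensitive comparison more robustly than lower()
    return name.casefold() in {n.casefold() for n in existing_names}

print(is_duplicate("testmodel", existing))  # True -> show error, disable "Create"
print(is_duplicate("NewModel", existing))   # False -> allow creation
```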
As part of the closer collaboration and consistency across Idera's toolset, we have changed the UI color scheme.
All references to "Test Plans" became "Test Models", and the dropdown next to the plan name (with Revisions, etc.) was streamlined to remove redundancy with the left vertical menu, but no functional changes otherwise.
Documentation and Certification screenshots will lag a bit from the update perspective, but we'll try to bring them up to speed asap.
If you notice any issues, please report them to support.
We have added the "Onboarding Checklist" article to the Getting Started section of our documentation center. Since we removed the previous leveling system, it can be challenging to figure out the best first steps in the Hexawise learning process. The new article hopefully addresses that.
Administrators configuring Xray exports can now flag the configuration as targeting manual auto-scripts, Automate scripts, or both (which is the default).
When exporting, the scripts offered for export will match the flag on the selected configuration.
The most important element of this improvement is that it is now easier to have manual auto-scripts and Automate scripts in the same test plan and export each of them to Xray using different configurations.
This issue has been resolved.
This defect is a good example of a pairwise defect caused by variation in the user's activity. This type of parameter isn't often modeled, but should be as it is the source of numerous real-world defects. Study defects in production in your own system to get a feel for this type of variation so you can model it effectively.
Specially named parameters in the test plan can be used to represent folders/hierarchy in the Tosca Commander export. These parameters should have just one value so as not to affect the generated scenarios, and should use a naming scheme that starts with a "#" (e.g. "# Folder Name"). The number of "#" characters indicates the depth of the folder (e.g. "### This is 3 deep").
Previously, this scheme only allowed folder depth to stay the same or get deeper, never returning to a shallower depth. It now works more flexibly, and you can specify any depth after any other (for both folder names and individual parameters). For example:
# Folder A: blank (column A in export)
Param 1: value 1, value 2, value 3 (column B)
Param 2: value 1, value 2 (column B)
## Folder A Deeper Child: blank (column B)
Param 3: value 1, value 2, value 3, value 4 (column C)
## Param 4: value 1, value 2, value 3 (column B)
# Folder B: blank (column A)
### Param 5: value 1, value 2, value 3, value 4 (column C)
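The depth convention above can be sketched as a small parser. This is an illustrative, hypothetical helper for the "#" naming scheme, not the Tosca Commander exporter itself:

```python
import re

def folder_depth(param_name: str) -> int:
    """Return folder depth from leading '#' characters; 0 means a plain parameter."""
    match = re.match(r"^(#+)\s", param_name)
    return len(match.group(1)) if match else 0

print(folder_depth("# Folder A"))                # 1 -> column A
print(folder_depth("## Folder A Deeper Child"))  # 2 -> column B
print(folder_depth("### Param 5"))               # 3 -> column C
print(folder_depth("Param 1"))                   # 0 -> plain parameter
```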
In a specific case where a test plan had a skip constraint skipping just a single parameter, copying the test plan produced a copy whose skip constraint skipped to the end of the plan rather than just that single parameter. This pairwise defect has been fixed.
After navigating to "Scenarios" in a test plan, the most recent scenarios are retrieved and displayed; if there aren't any, generation of a 2-way set of scenarios is initiated. If you didn't want those 2-way scenarios (e.g. you wanted 3-way or mixed-strength scenarios instead) and cancelled the operation, there was no way to initiate the generation of a different set of scenarios in the UI. Now there is.
In a test plan with forced interactions, there was an error when previewing the final generated scenario in the manual auto-scripts UI. This was either a pairwise or 3-way defect depending on how you frame it. It is fixed.
In some cases commas coming from value expansions were not properly escaped during a CSV export. This is fixed.
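For reference, proper CSV escaping wraps any value containing a comma in quotes, which is the behavior now applied to expanded values. A minimal sketch using Python's standard csv module (illustrative, not the Hexawise exporter):

```python
import csv
import io

# csv.writer's default QUOTE_MINIMAL quoting wraps fields that contain
# commas in double quotes, so the delimiter is not misinterpreted.
buffer = io.StringIO()
writer = csv.writer(buffer)
writer.writerow(["Scenario 1", "red, green, blue", "enabled"])

print(buffer.getvalue().strip())  # Scenario 1,"red, green, blue",enabled
```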
Some users may have encountered an issue exporting certain test plans to CSV format. This has been resolved.
If you clicked the "what kind of file" link on the new plan dialog, there was a stray click registered which opened the browser's file open dialog. This has been resolved.
The table that is used to select the scenario being previewed when writing manual auto-scripts has been updated to the new table style and functionality.
We have made a couple of adjustments to the scenario generation processes that increase speed for large plans and reduce the impact on overall performance.
When you delete a parameter with value expansions you should receive a warning that the value expansions will be deleted as well. This warning was temporarily missing.
Artificial Intelligence and Machine Learning (AI/ML) is a rapidly growing area for test design and planning, so we have added an article explaining how to apply Hexawise in this area: How to perform testing of AI/ML-based systems with Hexawise
Seeing the scenario count in Hexawise for the first time can be eye-opening for multiple reasons, so we have added an article explaining how you can evaluate & adjust that metric: Why "test case count" can be a misleading metric in model-based testing (and what to do about it)