Hexawise recent updates

February 10, 2026
NEW
TestRail Test Dataset Export

You can now export directly to TestRail's CSV dataset import format, connecting your optimized scenario generation with TestRail Enterprise's data-driven testing in a single step. You'll find the new "TestRail Test Dataset" option under "Export & API" in your test models.

February 1, 2026
NEW
AI Parameter Reuse

AI-assisted parameter generation now draws from your team's "Reusable Parameter Library". When you use "Generate Parameters" from the Parameters screen, the AI considers your organization and project-level reusable parameters and suggests applicable ones for reuse in your model. This helps you build test plans faster while staying consistent with shared definitions.

January 29, 2026
IMPROVED
Value Expansion Support in Bulk Edit

The new <... , ...> syntax for parameter value expansions is now supported in the "Bulk" edit tab of the "Parameters" screen.

Example:

Operating System[Windows <Windows 10, Windows 11>, OSX, Linux, iOS, Android]

January 21, 2026
NEW
Reusable Parameter Library

Parameters often apply across multiple models, whether within a project, a system under test, or a domain. The new Reusable Parameter Library lets you save a parameter once and make it available to anyone in your project or organization, ensuring consistent definitions across models.

Saving a parameter for reuse: Click a parameter's name in the Parameters screen to open it for editing, then click "Save for Reuse" at the bottom of the dialog. You'll choose whether the parameter applies to the current project or across your organization. Learn more about creating reusable parameters.

Managing reusable parameters: Open the "Reusable Parameter Library" from any model's drop-down menu. From here you can browse all available reusable parameters with their metadata and push updates to every model that uses them. Learn more about managing reusable parameters.

Adding reusable parameters to a model: Click "Reuse Parameters" in the Parameters screen, or select the bold auto-complete suggestions when creating a new parameter. Learn more about adding reusable parameters to your models.

January 14, 2026
IMPROVED
Updating Forced Interactions Creates Model Revisions

Edits to a model are versioned as revisions, accessible from the model's "Revisions" dropdown. Previously, changes to forced interactions weren't tracked; now each forced interaction edit creates a revision.

NEW
Model Change Impact Analysis - Minimize Scenario Drift with Baselines

A baseline locks a set of generated scenarios as a reference point. When you change your model, the system preserves your baseline scenarios as much as possible while still achieving the required coverage: new parameter values are integrated into existing rows, scenarios with removed values are updated with valid alternatives, and additional scenarios are generated only as needed.

Using a baseline will result in less optimal scenarios. Set a baseline only if you have a meaningful reason to preserve specific scenarios — for example, you've invested in determining expected outcomes, built non-trivial automation, need ongoing execution traceability, or have formal approval of these scenarios.

Setting a baseline: Any scenarios generated after this feature's release can be baselined. Click "Establish Baseline" in the header of the scenarios table on the "Scenarios" screen.

Comparing changes: Use the "Compare to Baseline" toggle in the scenarios table header to see exactly what's changed. For a summary of all model changes, click "View Impact Report" in the baseline warning banner, or click "Download Full Impact Report" for a detailed Excel spreadsheet.

Learn more about minimizing scenario drift with baselines.

REMOVED
Freeze Scenarios

"Freeze Scenarios" has been replaced by the new "Establish Baseline" action on the "Scenarios" screen.

Baselines improve on frozen scenarios in several ways:

  • They can be established retroactively on any prior set of generated scenarios, even after model changes.
  • They can be enabled, disabled, and changed to explore the impact of different reference points.
  • They preserve parameter value expansions and ranged value selections in newly generated scenarios.
  • They offer full visibility into what changed and why through the "Compare to Baseline" and "View Impact Report" options.

Existing models with frozen scenarios continue to function as before. Going forward, use baselines anywhere you would have used frozen scenarios.

Learn more about minimizing scenario drift with baselines.

December 3, 2025
NEW
Cumulative notes for Sept – Nov

Major new feature - Reusable Steps

To improve the efficiency and consistency of your scripting efforts, you can now save elements of the Manual tab (individual steps or multi-step blocks) or the Automate tab (individual steps), then insert them into other scripts.

You can then manage all your reusable steps in a dedicated library for Manual or Automate, accessible from the model properties menu (downward arrow next to the model name).

You can find more information under the Usage button on the Scripts screen inside the tool or in the Scripts documentation.

Also

  • Bulk operations on models. In a view like “My Test Models”, you can now select multiple models via the checkboxes on the left then perform bulk operations – “copy”, “move”, “delete”.

  • On the Scenarios screen, the row with parameter names is now frozen in the view for all generation options (so you can still see it after you scroll down to review the data).

  • Panels in Scripts -> Automate can now be collapsed to create more space for editing or review (via icons in the top right, next to “Usage”).

  • When you open a non-private model, you can now see the “breadcrumbs” next to the model name, showing its Project and/or Folder membership. You can click on them to quickly navigate back to that Project/Folder.

Lastly

Both Foundations and Professional courses have been updated for the new UI and recent features (yes, finally).

July 10, 2025
NEW
New Release - API Endpoints

Introduction

The focus of our tool is test optimization, which means you move the Hexawise artifacts to either test management or test automation solutions at the end of your workflow. Furthermore, given the iterative nature of our model-based approach, the same artifact may need to be re-exported quite a few times over the course of its lifecycle.

This article describes our next step in unlocking more API capabilities and taking export versatility to a higher level. The goal is to make the connection to external tools more streamlined, particularly in automation-focused and CI/CD environments.

How It Works

For each export type available on our Export screen, you get an extra option called “API Endpoint”. It allows you to publish the optimized output in the specified format to a unique URL, which can then be used in HTTP REST “GET”-type API calls from other solutions.

Note: this implementation allows other tools to "pull" the output from Hexawise. For future releases, we are evaluating a more flexible API connection with a select roster of external tools, so that you could also "push" the output to them.

June 26, 2025
IMPROVED
AI Guidance Update

Released UX and functional improvements to the AI Guidance. All users who have the feature enabled need to accept the formal AI Supplemental Terms before engaging with it further. We have also updated the documentation portal, including both the walkthrough article and the dedicated AI Data Usage Policy article.

May 15, 2025
NEW
AI Guidance Beta Launch

We are excited to announce that the beta of our AI Guidance feature (powered by Sembi iQ) is live.

The goal of this improvement is to accelerate the user journey towards the initial draft of parameters and values, which is the crucial step in the modeling process. It can be helpful not only for users who are relatively new to the tool and our approach, but also for those working with numerous lengthy sources of information.

If you would like to participate, please coordinate with your stakeholders to make sure the AI usage is approved, then reach out to support@hexawise.com with the “Hexawise AI beta opt-in” request.

Key Benefits:

Speed - reduce hours of manual review and data entry, and leave the prompt engineering to us.

Versatility - upload a wide range of text-based documentation formats without extensive data cleaning or reformatting.

Nativeness - leverage dedicated and streamlined UI/UX as well as AI’s awareness of test design techniques like boundary testing and equivalence classes.

To use AI Guidance, you must provide two essential pieces of context for the AI to work with:

  • A high-level testing goal of at least 10 words that clearly articulates the scope and objectives of your modeling effort.

  • Relevant documentation about the system under test in supported formats, with each file not exceeding 50 MB.

Scope: 5 AI actions on the Parameters screen

1. Suggest initial parameters: In a new, empty model, click the "Generate Parameters" button located below "Add Parameter" and "See Example Model" to create a set of initial parameters.

2. Additional parameters for an existing model: Click the "Generate Parameters" button at the end of your current parameter list to expand your model with additional parameters.

3. Generate values for a new parameter: When creating a new parameter with uncertain values, use the “Generate parameter variations” link in the "New Parameter" dialog to suggest appropriate values.

4. Expand parameter values: Hover over any existing parameter name and select "Suggest more parameter values" to increase the selection of test values.

5. Reduce parameter values: Hover over any existing parameter name and select "Suggest fewer parameter values" to streamline and focus the test scope.

We hope you enjoy it!

May 14, 2025
IMPROVED
New "Scenarios" option on the Export screen

Previously, you could export just the data table (without any scripts) using the "Save As" dropdown on the Scenarios screen. Now, the same format options (CSV, JSON, Gherkin Data Table, etc.) are also available under the "Scenarios" tile on the Export screen.

This not only consolidates your choices in one place, but also establishes the foundation for the future API work.

May 13, 2025
IMPROVED
Manual Scripts Rework

We are excited to announce the rework of the Manual Scripting experience.

With this release, you can create multiple script templates per model (similar to Automate) and use the Condition field to specify when certain steps do or do not apply. That significantly improves the flexibility of your modeling, especially when it comes to rule-heavy, branching flows. The look and feel have also been modernized.

Please note that the “Relax” feature is now enjoying well-deserved retirement (along with “Start” and “Finish” sections). However, there is a “Simple Parameters Script” option on the Export screen that performs a similar function, and you can now clone steps.

You can find more details from the “Usage” section now available on the Manual Scripts screen, as well as from the updated documentation in the knowledge center (will be published shortly).

August 22, 2024
NEW
New Tutorial - Tips for Starting Points

A new tutorial, "Tips for Starting Points", discussing modeling from acceptance criteria, flow diagrams, and state transition tables, has been added to the knowledge base.

July 29, 2024
NEW
Release - Optimized Compute Option

“Optimized” compute option is now live.

As a summary,

  • it won't affect the existing computations automatically. There will be a dropdown on the Scenarios screen to allow the user to select optimized, at which point they will need to fill in a text field with the word “optimize” and confirm it. This is intentional friction to ensure people don’t just try to do optimized computes by default all the time.

  • it will run the computation on the same engine 50 times from a single click, selecting the outcome with the fewest tests.

  • switching between standard and optimized resets the cache - i.e. if you calculate standard, then switch to optimized, then back to standard, a new data table will be generated.

  • staying with the same compute option does not reset the cache - i.e. if you calculate optimized, switch to another screen, make no other cache-resetting edits, and switch back to optimized, the same table will be retrieved.

  • the recommended flow/best practice is to utilize standard computation while the model is undergoing edits, then launch the optimized version when you are ready to export.

August 10, 2023
NEW
Advanced N-way Constraints released

One of our features, Constraints, can be used to exclude invalid or irrelevant combinations of parameter values from the Scenarios table. Advanced N-way constraints (where N can be 2 or higher) make it possible to create dependencies across any number of parameters. This should be a significant time saver when defining complex rules and should expand your ability to apply our tool to highly conditional workflows.

Please refer to the dedicated "Advanced N-way Constraints" article in the documentation for the feature overview.

July 18, 2023
IMPROVED
Parameterizable Test Name and Description for the Xray export option

In the previous implementation of the Xray export dialog, the Test Name field only leveraged the global variables (like Model or Feature Name) and the Description field was automatically populated in the background.

This change allows users to specify both fields leveraging not only global, but also model variables. It applies to both Manual and Automate scripting types.

The caveat: previously, the Description for Automate included the entire content of the Feature tab without any user interaction or choice. If that behavior needs to be preserved, users would need to manually copy the content from the Automate tab into the Description field during export.

January 25, 2023
NEW
Added an Automate script tag which will allow the number of exported scripts to match the number of rows on Scenarios (regardless of the parameters involved in the script block)

This was a pain point specific to system integration and end-to-end testing. For example, say we have a Guidewire integration model: the Policy Center section has 10 parameters that result in 20 scenarios, and the Billing Center validation needs only 3 parameters. Previously, we couldn't write a separate script block like this:

Scenario: Billing Center validation

Given I navigate to policy summary in BC
When I check that P1 is <P1> and P2 is <P2>
And on the Amounts screen, premium is <Premium>
Then validation is successful

This was because it would export only the number of unique combinations of P1, P2, and Premium, which is very often fewer than 20. Such mismatches caused issues during transitions to test management systems for execution and reporting, forcing users to write very long scripts that were harder to analyze for feedback. Also, different parts of the e2e flow could require different execution methods (manual vs. automated).

This change adds the @enumerate_all tag. When specified above the Scenario or Scenario Outline name, it ensures that the number of exported scripts matches the number of rows on the Scenarios screen, regardless of the number of parameters involved in that particular script block.
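Applied to the Billing Center block from the example above, the tag sits on the line directly above the scenario name:

```gherkin
@enumerate_all
Scenario: Billing Center validation
  Given I navigate to policy summary in BC
  When I check that P1 is <P1> and P2 is <P2>
  And on the Amounts screen, premium is <Premium>
  Then validation is successful
```

With the tag in place, the export produces one script per row on the Scenarios screen (20 in the example above) rather than one per unique combination of P1, P2, and Premium.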

January 24, 2023
NEW
Added 1 more level to the asset hierarchy

Old hierarchy: Project (optional) -> Model

New hierarchy: Project (optional) -> Folder (optional) -> Model (for the avoidance of doubt, you can still put the models directly into the project)

So you should have more flexibility in terms of grouping your models. Folders can be created via the "New Test Model" dialog, the "Edit Model" dialog, or the Project-level view (icon with a "+" sign).

The naming convention for folders always includes the project name automatically. Folders under the same project cannot have identical names.

If you select the Project, you will see all the models from all the folders on the right. If you select the Folder, you will see only the models in that folder.

If you delete a folder, it doesn't delete the models in it; it just removes them from the folder and puts them into the underlying project.

December 28, 2022
NEW
Added Notes pop-up window

When accessing a Test Model that has Notes associated with it, you are now shown a pop-up notification that Notes exist. The pop-up includes a "don't notify me again" checkbox if you don't want to see it again when accessing that test model in the future.

IMPROVED
Clickable URLs are now allowed in Notes

URLs are now clickable inside any given Test Model's Notes.

December 14, 2022
FIXED
Export now takes into account the strength selection from the export dialog, not the preview one.

In the past, selecting different preview and export coverage strengths for Automate-related outputs resulted in the generated file matching the preview choice, not the export one. That could cause confusion since the selection in the Export dialog is the "final" one from the user workflow standpoint. This was adjusted to make the generated file use the export strength selection.

FIXED
Resolved inconsistencies in the message about conflict handling between Forced Interactions and Constraints.

The dialog that appears when a user creates constraints contradicting forced interactions (or vice versa) has been updated to reflect the most recent terminology and represent the involved elements (forced interaction ID and constraint value/type) more precisely.

December 13, 2022
FIXED
Error when generating 1-way sets of scenarios

Resolved an issue where the warning dialog for 1-way computation strength could automatically redirect a user back to the previously generated n-way (without giving a chance to accept the warning).

FIXED
"Silent" error when using duplicate but case-sensitive model names

Resolved an issue where the validation of model naming uniqueness was case-insensitive.

That is, a user could create a model whose name matched another existing model (both private, or both in the same project), with the only difference being the case ("TestModel" vs "testmodel").

No error was presented, but the model was not created either. The duplicate naming error will now be properly shown to the user and the "Create" button will be disabled.