Create Test Instructions with the help of AI


Written by Uliana Kruk
Updated today

Our AI-infused Test Setup Assistant is now available in our test creation wizard. This feature grants you secure access to OpenAI's ChatGPT and is designed to help you set up tests from scratch as well as improve existing test instructions.

All you need to do is type a short prompt (e.g. "Please test the complete process of adding items to the cart...", "We would like to check the responsiveness of our start page...") on the New Exploratory Test page, and our Test Setup Assistant will fill out most of the test setup form for you:

You can also use the AI buttons next to the individual test instruction fields to:

  • either help you craft test instructions from scratch (if the respective field is empty);

  • or help you enhance the existing test instructions in the respective field.

HOW TO ACTIVATE THIS FEATURE

Before its first use, users are required to confirm the usage of AI. This is a one-time action that activates AI features and tools for the entire customer dashboard. This confirmation can be done directly in the Gen-AI popup on the New Exploratory Test page:

or directly on the Test Setup page:

You can also grant this consent on your profile page under 'AI Integration':

Additional preconditions apply to activate the feature:

  • If you would like to utilize the AI Test Setup Assistant from the New Exploratory Test page, your prompt must contain a minimum of 12 characters.

  • If you would like to activate AI buttons on the Test Setup page, you need to select at least one feature.
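
For illustration, the prompt-length precondition amounts to a simple check like the one below (a hypothetical sketch; the product performs this validation itself, and whether surrounding whitespace counts is not documented):

```python
MIN_PROMPT_LENGTH = 12  # minimum characters required by the assistant

def prompt_is_valid(prompt: str) -> bool:
    """Return True if the prompt meets the 12-character minimum."""
    return len(prompt) >= MIN_PROMPT_LENGTH
```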

HOW TO USE THIS FEATURE AND WHICH DATA WILL BE SENT TO THE MODEL

On the New Exploratory Test page, you can type in a short prompt (i.e. a request) for the creation of a functional test cycle. Examples include, but are not limited to:

  • Please test the complete process of adding items to the cart.

  • Users should be able to navigate through different sections of the app easily and intuitively.

  • We have updated our CMS and would like to test the search bar to ensure it returns relevant products.

  • A new landing page was deployed and we would like to check its responsiveness.

  • We would like to ensure that account-related functionalities such as signing up, logging in, password resets, and profile updates work correctly.

  • Product images, descriptions, and details should be displayed correctly and consistently across the site.

Apart from your prompt, the following product-specific information is sent to the assistant to ensure relevant results:

  • Feature titles and their descriptions that belong to the respective product/section

  • Test environment titles in the given product/section

  • Information on the latest 5 tests in the given product/section:
    * Test title
    * Title of the test environment(s)
    * Feature titles and activated bug types per feature
    * Test instructions (Goal of this test, Out of Scope, Additional Requirements)

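As a rough illustration, the context above could be assembled into a payload like the following (a hypothetical sketch; the field names and wire format used by the assistant are internal and not documented):

```python
# Hypothetical sketch of the context sent alongside the user's prompt.
# Field names are illustrative only; the real payload format is internal.

def build_context(prompt, features, environments, recent_tests):
    """Assemble the product-specific context described above."""
    return {
        "prompt": prompt,
        "features": [  # titles and descriptions in the product/section
            {"title": f["title"], "description": f["description"]}
            for f in features
        ],
        "environments": environments,  # test environment titles
        "recent_tests": recent_tests[:5],  # latest 5 tests only
    }

context = build_context(
    prompt="Please test the complete process of adding items to the cart.",
    features=[{"title": "Cart", "description": "Shopping cart flows"}],
    environments=["Staging", "Production"],
    recent_tests=[{
        "title": "Checkout smoke test",
        "environments": ["Staging"],
        "features": {"Cart": ["Functional"]},  # activated bug types per feature
        "instructions": {"goal": "...", "out_of_scope": "...",
                         "additional_requirements": "..."},
    }],
)
```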
Once the assistant generates an output based on the information above, a new test scenario is created with some pre-selected options and pre-filled text fields. All pre-filled fields and pre-selected options will be marked with an 'AI badge'.

The assistant pre-fills/pre-selects the following fields:

  • Test title

  • From the list of test environments, the assistant will pre-select the most relevant one (this field might remain empty if there are no relevant test environments in the given product/section)

  • From the list of existing features, the assistant will pre-select the relevant ones or create a new one (see below)

  • For every feature, the assistant will pre-select the relevant bug types (user stories are out of scope)

  • Goal of this test

  • Out of scope

  • Additional requirements


The created test scenario can then be further edited and finalized.

New Feature Creation

If the AI Test Setup Assistant determines that an essential feature is missing from the list of existing features, it creates a new one with a suitable title and description. The feature is created in the respective product, is pre-selected for the test cycle, and is temporarily marked with an 'AI badge' next to its title.


Please note: We strongly recommend reviewing the features created by the assistant and finalizing them on the Features & User Stories page.


Improving Test Instructions

In addition, you can further improve AI-generated test instructions or use our assistant to create test instructions from scratch directly on the Test Setup page.

You will find the AI-buttons next to the individual test instruction fields:


Once you click on any of the three buttons and then select 'Create/Improve Instructions', the following happens:

  • If there are any test instructions in the respective field, the assistant is going to improve those.

  • If a respective test instruction field is empty, the assistant is going to create test instructions from scratch.

In both cases, the following data from the test scenario is going to be provided to the assistant:

  • test type

  • test title (if specified)

  • whether orders/bookings are allowed (as specified for the test environment)

  • feature title(s) (without descriptions)

  • bug report language

  • any existing input in the given and the other test instruction fields: Goal of this test, Out of Scope, and Additional Requirements (if available)

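The create-vs-improve behavior described above can be sketched as follows (hypothetical; the function and parameter names are illustrative stand-ins, not the product's actual API):

```python
# Hypothetical sketch of the create-vs-improve decision for one
# test instruction field. call_assistant is a stand-in for the
# internal model call, not a real API.

def create_or_improve(field_text, scenario_context, call_assistant):
    """If the field is empty, create instructions from scratch;
    otherwise ask the assistant to improve the existing text."""
    if field_text and field_text.strip():
        return call_assistant(task="improve", text=field_text,
                              context=scenario_context)
    return call_assistant(task="create", context=scenario_context)
```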
The instructions for the three test instruction fields are generated/improved separately, with the respective buttons.

You can also make further edits to test instructions generated by the assistant and send the new version back to the model for additional improvements.

CURRENT LIMITATIONS

  • Please note that user stories, usability tests, and test setup in German are not supported by this feature yet.

  • Device selection and the scheduler cannot be pre-filled automatically by the assistant yet.

  • Please pay attention to formatting; you can use the AI buttons and our rich-text editor to improve it.

BEST PRACTICES AND RECOMMENDATIONS

  • It is crucial to review the AI-generated setup and test instructions before test submission, removing or expanding them as necessary to ensure clarity and relevance.

  • Likewise, it is important to review the features created by the assistant. The 'How to find' field is always empty in such cases and requires your input. You can edit and finalize features on the Features & User Stories page.

  • Given the volume of instructions our testers process daily, we recommend avoiding redundancy and repetition unless it's to emphasize crucial points.

  • Providing more details in the initial prompt will help the model generate more relevant output.

  • We recommend using our WYSIWYG editor to organize and highlight important information.

OPT OUT

You can opt out at any time on your profile page under 'AI Integration'.

Note that this change will apply to all users of your Test IO dashboard. Even after opting out, the feature will remain visible (but deactivated). Should you wish to use it again in the future, you will need to opt back in.

Further information on AI data collection can be found on your profile page under 'AI Integration'. Please note that data sent to the assistant will not be used to train or improve OpenAI models.

Should you have any questions, require assistance, or wish to provide feedback, please feel free to contact your dedicated CSM or our team at [email protected].
