What is the Unit Test Feature?

The Unit Test feature is a custom-built, one-stop solution that automatically generates, executes, and debugs test cases for an API based on its specification, ensuring the API meets its design and functionality requirements. Under the hood, the feature abstracts and automatically configures test suites, test cases, assertions, and more, so you can focus on building your application instead of manually testing your APIs. Additionally, you can now define your API specifications using curl commands, Postman collections, or OpenAPI Specifications.


Preconditions

  1. The API specifications are documented and accessible.
  2. The API specifications are detailed and structured enough to enable automated test generation. They can be provided in several formats, including curl commands, Postman collections, or OpenAPI Specifications (see the sketch after this list).
  3. The user has access to ratl.ai with permissions to generate and run tests.
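
For reference, the snippet below is a minimal, hypothetical OpenAPI 3.0 specification describing a single `GET /users/{id}` endpoint. The server URL, path, and schema are illustrative placeholders only and are not tied to any particular API.

```yaml
# Hypothetical OpenAPI 3.0 spec for illustration only; replace with your own API's details.
openapi: 3.0.3
info:
  title: Example Users API
  version: 1.0.0
servers:
  - url: https://api.example.com/v1
paths:
  /users/{id}:
    get:
      summary: Fetch a single user by ID
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The requested user
          content:
            application/json:
              schema:
                type: object
                properties:
                  id:
                    type: integer
                  name:
                    type: string
        "404":
          description: User not found
```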


Step-by-step Guide

Upload collection

  • The user navigates to the Unit Test feature page and clicks the "Add Collection" button.
  • The user can provide the API specification details in any of the following ways:
      • Copy and paste a curl command (an example appears after this list).
      • Upload a Postman collection file.
      • Upload an OpenAPI Specification (JSON or YAML format).
  • Once the details are provided, the user clicks the Generate button.
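
As an example of the first option, the command below is a hypothetical curl request for the `GET /users/{id}` endpoint sketched earlier; the URL, headers, and token are placeholders, not values required by ratl.ai.

```bash
# Hypothetical request used only to illustrate the kind of curl command that can be pasted in.
curl -X GET "https://api.example.com/v1/users/42" \
  -H "Accept: application/json" \
  -H "Authorization: Bearer <your-token>"
```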

Generate Test Cases

  • ratl.ai automatically generates test cases based on the provided API specifications.
  • The system displays a summary of the generated test suites for review.
  • Test case generation typically takes between 2 and 5 minutes.

Review and Modify Test Cases

  • The user reviews the test cases. Each test case includes intended inputs, expected outputs, and test conditions (a sketch of this structure follows this list).
  • The user modifies or adds test cases as needed to cover additional scenarios or to refine existing cases.
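
To make that structure concrete, the sketch below shows what a single test case conceptually pairs together. The field names and values are hypothetical, chosen for illustration, and do not reflect ratl.ai's internal schema.

```yaml
# Hypothetical test case structure for illustration only; not ratl.ai's actual format.
test_case:
  name: "GET /users/{id} returns the requested user"
  inputs:
    method: GET
    path: /users/42
    headers:
      Accept: application/json
  expected_outputs:
    status_code: 200
    body_contains:
      id: 42
  conditions:
    max_response_time_ms: 1000   # example latency threshold for the assertion
```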

Test Execution

  • The user initiates the test execution process.
  • ratl.ai runs the test cases, tracking each test’s execution status, response times, and outcomes.

Results Analysis

  • Upon completion, ratl.ai displays the results, highlighting successful tests and failures.
  • The user analyzes the results, identifying tests that did not meet expectations.
  • The user can export the results at both the project level and the suite level for further analysis and reporting purposes.

Debug and Fix

  • For failed tests, the user investigates potential causes using logs and error messages provided by ratl.ai.
  • The user makes necessary corrections to the API or adjusts the test cases to address the failures.

Re-run Tests

  • The user re-runs the tests to validate the changes.
  • ratl.ai provides an updated report for the re-run, confirming fixes or identifying any remaining issues.

Postconditions

  1. The API is thoroughly tested against its specifications.
  2. All discrepancies between the API's behavior and its specifications are identified and addressed.
  3. Detailed reports are generated and can be exported at both the project level and the suite level.