NL test agent

The Natural Language test agent lets you test our natural language APIs (/interact & /explain) directly from the Studio.

Provide unstructured data and a query, then answer any questions the reasoning engine asks. You will then receive a result, complete with a natural language explanation of the evidence tree.
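For illustration, a request to /interact might pair a query with the unstructured data like this. The field names ("query", "data") are assumptions for the sketch, not the documented Rainbird schema:

```python
import json

# Hypothetical /interact request body. The field names ("query", "data")
# are illustrative assumptions, not the documented Rainbird schema.
payload = {
    "query": "Is this claim eligible for fast-track approval?",
    "data": (
        "Claim #1042 was filed on 2024-03-02 by a policyholder with "
        "no prior claims. The damage estimate is $1,800."
    ),
}

# Serialise to JSON as it would be sent in the request body.
body = json.dumps(payload)
print(body)
```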

All features delivered by Rainbird Labs are beta. They may contain bugs, are subject to change and are not covered by our platform SLAs. Your feedback can help shape development.

Getting Started

Within the Rainbird Studio, click Natural Language then enter a query to test your knowledge graph.

The menu allows you to turn on debug mode to view the extracted queries and facts, and to view evidence explanations when you receive a result.

A new session can be started with the refresh button.

Best Practice

  1. Test against the knowledge graph directly first

    1. Make sure the knowledge graph is able to provide results using structured data, by using Quick Query first.

  2. Clear naming conventions in your knowledge graph, aligned to the expected input data

    1. To optimise the knowledge graph for natural language, use clear and descriptive naming for your concepts, relationships and concept instances. These do not have to match the input data exactly, but the more closely aligned they are, the better the performance will be.

  3. Consider using evidence text to improve explanations

    1. Adding evidence text can act as a clearer explanation at a rule-level, which our LLM can use when creating a summarisation of the entire evidence tree.

Test scenarios

There are several integration patterns when using the natural language APIs. Your chosen pattern will have a bearing on how you configure your knowledge graph and test it via the NL test agent.

Here are some examples:

Chat agent

Your end-users interact with the knowledge graph through a chat agent, answering its questions directly.

  • The knowledge graph should be configured to support an interactive mode, with questions switched on for relevant relationships and question text written with the end-user in mind.

  • Testing via the NL Test Agent for this scenario would often start with a query only, followed by answering the questions that are returned before a result is reached.
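The interactive pattern above can be sketched as a loop: start with a query only, answer each returned question, and stop when a result arrives. The `interact()` callable and the request/response shapes below are assumptions standing in for real /interact calls:

```python
# Sketch of the chat-agent pattern. interact() stands in for a real call
# to the /interact endpoint; the field names ("query", "question",
# "answer", "result") are illustrative assumptions.
def run_interactive_session(interact, query, answer_question):
    response = interact({"query": query})
    # Keep answering returned questions until the engine has a result.
    while "question" in response:
        answer = answer_question(response["question"])
        response = interact({"answer": answer})
    return response["result"]

# Stubbed engine that asks one question before producing a result.
def fake_interact(request):
    if "query" in request:
        return {"question": "What country is the claimant resident in?"}
    return {"result": "Eligible", "explanation": "Resident of a covered country."}

result = run_interactive_session(
    fake_interact, "Is the claimant eligible?", lambda question: "UK"
)
print(result)  # -> Eligible
```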

Traditional workflow

You are embedding Rainbird-powered decisions into an autonomous workflow. The process may be triggered by an event, collect data to inject into Rainbird, run a query, and take action on the result.

  • The knowledge graph should have questions turned off on all relationships.

  • Testing via the NL Test Agent can be done by entering a query and your sample data set into the chat box to validate fact extraction and the result.
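With questions turned off, the workflow pattern reduces to a single call that carries both the query and the collected data. The `interact()` callable and field names below are assumptions for illustration:

```python
# Sketch of the autonomous-workflow pattern: query and collected data
# are sent together in one call, and a result is expected immediately.
# The interact() signature and field names are illustrative assumptions.
def run_workflow_decision(interact, query, collected_data):
    response = interact({"query": query, "data": collected_data})
    # With no questions enabled, a single call should return a result.
    if "result" not in response:
        raise RuntimeError("Expected a result; check that questions are off")
    return response["result"]

# Stubbed engine returning a result in one shot.
def fake_interact(request):
    return {"result": "Approve"}

decision = run_workflow_decision(
    fake_interact,
    "Should this invoice be approved?",
    "Invoice INV-88 for $420 from an approved supplier.",
)
print(decision)  # -> Approve
```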

Agentic solutions

You have an agentic solution in which a team of AI agents works together to accomplish a task. Where this involves a decision that needs precision and auditability, one of the agents can be responsible for calling Rainbird as a tool.

This solution could integrate other systems to obtain data in advance of a query, or it could use questions from Rainbird as a prompt to retrieve information from other systems or an end-user.

  • The knowledge graph should be configured with questions turned on for relevant relationships. Consider whether the question text is intended for an end-user, or to instruct an AI agent to retrieve the required data.

  • Testing via the NL Test Agent can be done by mimicking the expected queries and data structure provided by the agentic solution. Some fine-tuning could be performed within the graph or within the AI agent prompts to align the language and ensure the best outcomes.
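The agentic pattern can be sketched as a tool wrapper in which each returned question is routed to another system (or an end-user) rather than answered by the calling agent itself. The `interact()` callable, the `retrieve()` lookup, and the response shapes are all illustrative assumptions:

```python
# Sketch of the agentic pattern: an agent exposes Rainbird as a tool and
# treats each returned question as an instruction to fetch data from
# another system. Request/response shapes are illustrative assumptions.
def rainbird_tool(interact, query, retrieve):
    response = interact({"query": query})
    while "question" in response:
        # The question text tells the agent which data to retrieve.
        response = interact({"answer": retrieve(response["question"])})
    return response["result"]

# Stub: one question answered from a stubbed CRM lookup before a result.
def fake_interact(request):
    if "query" in request:
        return {"question": "Retrieve the customer's account tier from the CRM."}
    return {"result": "Escalate to human review"}

outcome = rainbird_tool(
    fake_interact,
    "How should this complaint be handled?",
    lambda question: "Gold",
)
print(outcome)  # -> Escalate to human review
```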
