/interact
/interact provides a natural language interface for reasoning over your knowledge graph, allowing you to provide unstructured data and queries while receiving deterministic results.
This enables you to embed deterministic, explainable decisions into:
User-facing chatbots
Agentic AI solutions
Retrieval Augmented Generation (RAG) solutions
Fully autonomous workflows
Custom software integrations
All features delivered by Rainbird Labs are beta. They may contain bugs, are subject to change and are not covered by our platform SLAs. Your feedback can help shape development.
Getting Started
/interact is in open beta and details can be accessed from our API documentation. From there you can download the OpenAPI specification to import into a tool such as Postman. Once you have API access you can do the following:
Establish a session with your chosen knowledge map through the /start endpoint.
Call /interact with a user prompt containing a question (the query) and/or data from which to extract facts.
/interact will respond with a question, a result or an error.
If you receive a question, send your answer to /interact (using the same SessionID). Repeat until a decision is returned.
If you receive a result, you can make additional queries within the same session or use the FactID to get an explanation.
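The steps above can be sketched as a simple client-side loop. This is an illustrative sketch, not the official SDK: the transport is injected as a callable so the loop itself stays testable, and the payload field names (other than SessionID and responseType, which are documented here) are assumptions.

```python
def run_query(send, session_id, prompt, answer_for):
    """Drive a /interact query to completion.

    send       -- callable posting a payload to /interact and returning the JSON
                  response (e.g. a thin wrapper around your HTTP client)
    answer_for -- callable mapping a question prompt to the user's answer
    """
    # Initial call: the user prompt may contain a query and/or facts to extract.
    # Field names here are illustrative assumptions, not the documented schema.
    response = send({"SessionID": session_id, "prompt": prompt})

    # Keep answering questions until a result (or error) comes back.
    while response["responseType"] == "question":
        answer = answer_for(response["question"]["prompt"])
        response = send({"SessionID": session_id, "prompt": answer})

    if response["responseType"] == "error":
        raise RuntimeError(response.get("error", "unknown /interact error"))
    return response  # responseType == "result"
```

Because `send` is injected, the same loop works against the real API, a mock for tests, or a middleware layer that adds authentication.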
For details of how to test this functionality within the Studio, see our page on the Natural Language test agent.
/interact response guide
Given a Session ID, a set of data and a query, you will receive a response containing either a question, a result or an error. For simplicity, this is indicated in the responseType field.
The question object uses our standard questionResponse schema, also used in our Core endpoints. The prompt contains the question to present to a user or AI agent.
Additional properties can be used to control the experience, but their use is not mandatory:
dataType: the data expected in the response, which will be string, number, date or truth.
plural: when this is true, multiple answers are allowed. If multiple answers are received when this is false, an error will be returned.
concepts: this array provides a list of concept instances as possible answers (relevant only when the expected answer is a string). These could be presented to the user as options, summarised or ignored.
canAdd: when this is true, any string will be accepted. When false, only known instances listed in the concepts array above will be accepted. If an undefined instance is provided, an error will be returned.
allowUnknown: when this is true, the data is optional and can be skipped (e.g. you could return a userPrompt of "I don't know"). When false, the data is mandatory and we will continue to respond with the same question.
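The properties above can be enforced client-side before an answer is sent back, avoiding a round trip that would only return an error. A minimal sketch follows; the shape of `answers` (a list of strings) and of each entry in `concepts` (a dict with a `name` key) are assumptions for illustration.

```python
def validate_answers(question, answers):
    """Return None if answers satisfy the question's constraints, else an error string.

    question -- dict with the plural/concepts/canAdd/allowUnknown properties
                described in the response guide
    answers  -- list of answer strings (assumed shape)
    """
    if not answers:
        # allowUnknown: true means the data is optional and can be skipped.
        return None if question.get("allowUnknown") else "an answer is required"
    if len(answers) > 1 and not question.get("plural"):
        return "multiple answers given but plural is false"
    known = {c["name"] for c in question.get("concepts", [])}
    if known and not question.get("canAdd", True):
        # canAdd: false means only the listed concept instances are accepted.
        unknown = [a for a in answers if a not in known]
        if unknown:
            return f"unknown instances: {unknown}"
    return None
```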
In addition to this, the response also contains:
query - details of the query in progress
facts - a list of facts extracted from the request data, which provides a real-time ability to validate the accuracy of extracted facts
metadata - API and request metadata
Best Practice
To optimise the performance of /interact, it is recommended that the language used in the knowledge graph is reviewed against samples of the unstructured input.
A useful exercise is to consider whether someone with limited knowledge of the domain, given a sample input, could successfully map the data to the knowledge graph.
Relationships you want to query should be given names similar to the questions that will be asked. For example, a query of Is Sarah eligible for a remortgage? is unlikely to succeed when the relationship is called overall result. We are testing functionality to tag relationships as queryable and to assign example questions to improve query detection, although there is no set release date.
It is recommended to avoid 1st-person queries such as Am I eligible for a remortgage?, as the subject extracted for the query will be I. This can work if the questions in the knowledge graph are designed with this in mind and it is understood that the evidence tree will not reference the subject by name/ID.
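One way to follow this guidance is to have middleware rewrite 1st-person queries to name the subject before calling /interact. This is a naive, hypothetical sketch using simple substitutions; a production version would need a fuller grammar or an LLM pass.

```python
import re

# Ordered rewrite rules: most specific first. These patterns are illustrative
# and will not cover every English construction.
FIRST_PERSON = [
    (re.compile(r"\bAm I\b", re.IGNORECASE), "Is {name}"),
    (re.compile(r"\bmy\b", re.IGNORECASE), "{name}'s"),
    (re.compile(r"\bI\b"), "{name}"),
]

def to_third_person(query, name):
    """Rewrite a 1st-person query so the subject is named explicitly."""
    for pattern, replacement in FIRST_PERSON:
        query = pattern.sub(replacement.format(name=name), query)
    return query
```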
Integration Patterns
These design patterns highlight different approaches to using /interact:
Depending on the end solution, a layer of middleware may be required to identify when to call Rainbird in the context of a message (e.g. a user message in a 3rd-party agent) and which knowledge graph is most appropriate (if you have multiple).
Within a session, you can also mix and match our core API and NL endpoints to combine structured queries, fact injection or question responses with unstructured queries or fact injection.
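The middleware routing decision described above can be as simple as keyword overlap. The graph IDs and keyword lists below are hypothetical; an intent classifier would be a more robust choice in production.

```python
import re

# Hypothetical mapping from knowledge graph ID to trigger keywords.
KNOWLEDGE_GRAPHS = {
    "mortgage-eligibility": {"remortgage", "mortgage", "eligible"},
    "expenses-policy": {"expense", "claim", "receipt"},
}

def route(message):
    """Return the best-matching knowledge graph ID, or None to skip Rainbird."""
    words = set(re.findall(r"\w+", message.lower()))
    best, best_overlap = None, 0
    for graph_id, keywords in KNOWLEDGE_GRAPHS.items():
        overlap = len(words & keywords)
        if overlap > best_overlap:
            best, best_overlap = graph_id, overlap
    return best
```

Returning None lets the surrounding agent handle small talk or off-topic messages itself rather than opening a session.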
Data Accuracy and Quality Control
Our integration of knowledge graphs with LLMs provides several built-in safeguards to ensure data quality:
Knowledge Graph Controls
Validates data against defined value ranges and relationships
Processes facts within established business rules
Returns null results for unrecognised values, preventing incorrect decisions
Validation Tools
Review extracted facts in real-time via /interact
Trace decision paths through evidence trees
Verify data against your defined model
These controls work together to maintain accuracy, and we will continue improving our approaches. However, whilst the risk is reduced, it is important to be aware that inaccurate facts are still possible within the constraints of the knowledge graph.
Important Information
Data Security
/interact operates using third-party AI services (OpenAI). Please ensure shared information is suitable for external processing.
Any request made via /interact shares the request and a subset of the knowledge graph with the LLM provider.
System Parameters
One active query per session. Additional queries detected whilst a query is in progress will be ignored. Only when a query completes (a result is provided) can further queries be made in the same session.
Facts cannot be updated, therefore user requests to change previous statements cannot be fulfilled. A new session must be established.
Token limits apply. For large content and/or large knowledge maps, limits may be reached and not all facts extracted.
Service may be interrupted during peak periods.
Varying behaviour: LLMs are not deterministic, so you will likely see different responses when testing the same content.