How does Rainbird work?
You can create completely bespoke AI models in Rainbird, but a Rainbird model works very differently to the more widely known AI models based on Machine Learning (ML).
We use a unique blend of technologies to enable businesses to replicate their experts' decision-making in models created by the experts themselves, providing complete transparency for every decision made.
We will break these down, but first it is useful to introduce the field of Knowledge Representation & Reasoning (KR&R):
Knowledge Representation & Reasoning is a field in artificial intelligence that deals with how knowledge about the world can be represented in a computer-friendly way, and how computers can use this information to solve complex problems.
Knowledge Representation (KR) involves the creation of formal structures that represent knowledge about the world, objects, events, or situations. These structures can include logic, rules, ontologies, and semantic networks.
Reasoning (R) is the process of deriving logical conclusions from a set of facts. In AI, this often involves algorithms that can infer new knowledge from the knowledge that has been encoded.
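For illustration only (this is plain Python with hypothetical names, not Rainbird's modelling language), knowledge could be encoded as subject-relationship-object facts, with reasoning expressed as a rule that derives new facts from existing ones:

```python
# Illustrative sketch: knowledge as subject-relationship-object facts,
# plus one rule that infers new knowledge from them.

facts = {
    ("Alice", "works for", "Acme Ltd"),
    ("Acme Ltd", "based in", "United Kingdom"),
}

def infer_employment_country(facts):
    """If a person works for a company and that company is based in a
    country, conclude that the person is employed in that country."""
    derived = set()
    for person, relation, company in facts:
        if relation != "works for":
            continue
        for subject, relation2, country in facts:
            if subject == company and relation2 == "based in":
                derived.add((person, "employed in", country))
    return derived

print(infer_employment_country(facts))
# -> {('Alice', 'employed in', 'United Kingdom')}
```

The point is not the code itself but the separation it shows: the facts are the representation, and the rule is the reasoning that produces knowledge nobody typed in directly.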
KR&R is fundamental in AI because it enables machines to mimic human-like understanding and reasoning, allowing them to interact more naturally with users and handle complex, real-world scenarios.
In Rainbird, knowledge is represented as an extended knowledge graph, or what we call a 'Knowledge Map'.
A knowledge graph is a way of storing and organising information that allows for a more intuitive understanding and retrieval of data. It represents a network of real-world entities (such as objects, events, situations, or concepts) and the relationships between them.
In the Rainbird Studio, a graph can be created that represents the data an expert would need to make decisions, and the relationships between these data.
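As a rough sketch (again plain Python rather than the Rainbird Studio itself, with hypothetical concept and relationship names), such a graph might describe the concepts an expert reasons about and the relationships that connect them:

```python
# Illustrative only: concepts and relationship types an expert might need
# for a simple supplier-risk decision.

concepts = ["Person", "Company", "Country", "Risk Rating"]

relationships = [
    {"name": "works for",       "subject": "Person",  "object": "Company"},
    {"name": "based in",        "subject": "Company", "object": "Country"},
    {"name": "has risk rating", "subject": "Company", "object": "Risk Rating"},
]

# Individual facts then attach to this structure, e.g.
facts = [("Acme Ltd", "based in", "United Kingdom")]
```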
These knowledge graphs are then extended with additional layers of information and functionality (a brief sketch follows this list), including:
Enhanced reasoning capabilities: users can create rules on the graph to infer new data based on other data, just as experts can read a set of data, infer new data and arrive at a judgement.
Dynamic and real-time updates: users can connect their graph to real-time data to ensure decisions reflect the latest information.
Personalisation and context-awareness: users can design their graph to adapt to individual user contexts, so personalised responses can be provided.
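A minimal sketch of these extensions, assuming a hypothetical supplier-risk example (the function names and data source below are illustrative, not part of the platform):

```python
# Illustrative only: a rule on the graph that infers new data ("has risk
# rating") from other data ("based in"), with the country fetched from a
# real-time data connection and the result specific to the company asked about.

HIGH_RISK_COUNTRIES = {"Examplestan"}  # hypothetical reference data

def fetch_country(company):
    """Stand-in for a live data connection, e.g. a company-registry API."""
    registry = {"Acme Ltd": "United Kingdom"}
    return registry.get(company)

def infer_risk_rating(company):
    """Rule: a company's risk rating follows from where it is based."""
    country = fetch_country(company)   # dynamic, real-time data
    if country is None:
        return None                    # data missing: no conclusion from this rule
    return "high" if country in HIGH_RISK_COUNTRIES else "standard"

print(infer_risk_rating("Acme Ltd"))   # -> 'standard' (a context-specific result)
```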
These extended knowledge graphs act as a model of your knowledge. Importantly, these models are written in plain English (or Mandarin, or Swahili: whichever language is meaningful to you). Unlike machine-learnt models, these expert-made models can therefore be easily read, interpreted and modified, giving you the trust and confidence that they will provide high-quality, consistent decisions.
We also have tools to accelerate knowledge representation by building extended knowledge graphs from documentation or data. But this is covered elsewhere.
Reasoning is the process of deriving a logical conclusion from a set of facts.
This is where a user can ask for a decision from Rainbird (a 'query') and our reasoning engine will use the knowledge and logic encoded in the extended knowledge graph to acquire dynamic/real-time data and infer new data in order to arrive at a decision.
Unlike with decision trees, the path to making this decision does not need to be explicitly defined for every scenario, which quickly becomes unmanageable. At runtime, the reasoning engine dynamically works out the best path to a decision, including the data it needs and where to obtain it from. It will also try alternative paths if data is missing and other routes to a decision exist.
The decision is also made in the context of the entity it concerns (e.g. a company, an account, a person), as the engine will have collated and considered data unique to them. It is therefore highly personalised.
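A simplified sketch of this goal-driven behaviour (an illustration of the general idea, not Rainbird's actual engine): each rule that could conclude the goal is tried in turn, the data it needs is gathered from what is already known, from the user, or from a connected source, and if one path cannot conclude, the next is attempted.

```python
# Illustrative goal-driven reasoning: try alternative rules ("paths") that
# can each conclude the same goal, gathering missing data on demand.
from typing import Optional

known_facts = {"country": "United Kingdom"}        # data supplied with the query

def ask_user(fact: str) -> Optional[str]:
    """Stand-in for asking the user a question at runtime."""
    answers = {"years trading": "12"}              # hypothetical answers
    return answers.get(fact)

def get_fact(fact: str) -> Optional[str]:
    """Return a fact if known, otherwise try to acquire it."""
    if fact not in known_facts:
        value = ask_user(fact)
        if value is not None:
            known_facts[fact] = value
    return known_facts.get(fact)

def rule_by_registry_score() -> Optional[str]:
    """Path 1: conclude from an external risk score, if available."""
    score = get_fact("registry risk score")        # unavailable in this example
    return None if score is None else ("high" if int(score) > 7 else "standard")

def rule_by_profile() -> Optional[str]:
    """Path 2: conclude from country and trading history."""
    country, years = get_fact("country"), get_fact("years trading")
    if country is None or years is None:
        return None
    return "standard" if country == "United Kingdom" and int(years) >= 3 else "high"

def query(rules) -> Optional[str]:
    """Try each path in turn; stop at the first that reaches a conclusion."""
    for rule in rules:
        result = rule()
        if result is not None:
            return result
    return None

print(query([rule_by_registry_score, rule_by_profile]))   # -> 'standard'
```

Here the first path fails because the external score is unavailable, so the engine falls back to the second path and still reaches a decision, asking only for the data that path actually needs.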
And because this knowledge representation was built and tested by your experts, it will provide a decision the same way they would.
Explainability in AI is about making the internal decision-making processes of AI systems transparent and understandable to humans. This is key for building trust, ensuring ethical use, and complying with legal standards where AI is involved in making significant decisions.
We've talked about how extended knowledge graphs represent an expert's knowledge, how they are written in a language you understand and are interpretable, and how the reasoning engine uses this model to collate data, reason over it and arrive at a decision.
This means we are also able to explain the full chain of reasoning used to arrive at a decision, which is transparent and understandable to humans.
We call this the evidence tree: it provides an explainable, traceable audit trail that enables Rainbird decisions to be used in regulated industries, or wherever explainability is important.
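As an illustration of the idea (the structure and field names below are assumptions, not Rainbird's actual audit format), an evidence tree records the conclusion at its root, with the facts and rule applications that support it as children:

```python
# Illustrative evidence tree: each node records what was concluded and how.

evidence_tree = {
    "conclusion": "Acme Ltd has risk rating 'standard'",
    "derived_by": "risk-rating rule (country and trading history)",
    "supported_by": [
        {"conclusion": "Acme Ltd is based in the United Kingdom",
         "derived_by": "retrieved from a connected company-registry source"},
        {"conclusion": "Acme Ltd has been trading for 12 years",
         "derived_by": "answered by the user at query time"},
    ],
}

def explain(node, depth=0):
    """Print the full chain of reasoning in a human-readable form."""
    print("  " * depth + f"- {node['conclusion']} ({node['derived_by']})")
    for child in node.get("supported_by", []):
        explain(child, depth + 1)

explain(evidence_tree)
```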
The Rainbird platform provides a unique set of capabilities that are difficult to find together in other AI or non-AI technologies.
Keen to understand how this can be used for real? Check out our example use cases for some practical implementations using the Rainbird platform.