MENTOR is a system that makes unbiased and explainable decisions based on previously verified cases. The design is centered around human-interpretable knowledge, resulting in a system that is transparent to its core.
Explainability
We believe that useful AI must be explainable. Our system gives exact explanations for its decisions. An explanation is based on the previously verified cases that were used to generate the new decision. The explanation is detailed, easily understood by non-technical users, and can be used to document the decision.
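As a minimal sketch of this idea (not MENTOR's actual implementation), a decision can be derived from the most similar verified cases, and the explanation is simply the list of cases that produced it. The case data, similarity measure, and `decide` function below are all hypothetical:

```python
from collections import Counter

# Hypothetical store of previously verified cases: (features, outcome) pairs.
VERIFIED_CASES = [
    ({"income": 45000, "debts": 2}, "approve"),
    ({"income": 20000, "debts": 5}, "reject"),
    ({"income": 50000, "debts": 1}, "approve"),
]

def similarity(a, b):
    # Toy similarity: negative sum of absolute feature differences
    # (higher means more similar).
    return -sum(abs(a[key] - b[key]) for key in a)

def decide(new_case, k=2):
    # Rank verified cases by similarity to the new case.
    ranked = sorted(
        VERIFIED_CASES,
        key=lambda case: similarity(new_case, case[0]),
        reverse=True,
    )
    support = ranked[:k]
    # The decision is the majority outcome among the supporting cases.
    decision = Counter(outcome for _, outcome in support).most_common(1)[0][0]
    # The explanation lists exactly the verified cases used for the decision.
    explanation = [f"{features} -> {outcome}" for features, outcome in support]
    return decision, explanation

decision, explanation = decide({"income": 48000, "debts": 1})
```

Because every decision traces back to named, human-verified cases, the explanation doubles as documentation of how the decision was reached.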
Knowledge as a tangible asset
MENTOR learns as it processes new cases, building a knowledge base. This knowledge base is human-readable and can be used to analyse the knowledge assets of an organisation.
Unbiased
Unlike machine-learning models, MENTOR gives users control over the decision process. Users decide exactly how the knowledge base is built and which knowledge is included in the decision making. As a result, bias is easily detected and avoided.