Use evaluators with RAG

Root Signals provides evaluators for retrieval-augmented generation (RAG) use cases, where you can pass the retrieved context as part of the evaluated content.

One such evaluator is the Truthfulness evaluator, which measures the factual consistency of the generated answer against the given context and general knowledge.

Here is an example of running the Truthfulness evaluator with the Python SDK. Pass the context that was used to produce the LLM response in the contexts parameter.

from root import RootSignals

# Connect to the Root Signals API
client = RootSignals()

# Evaluate the response against the retrieved context
result = client.evaluators.Truthfulness(
    request="What was the revenue in Q1/2023?",
    response="The revenue in the last quarter was 5.2 M USD",
    contexts=[
        "Financial statement of 2023",
        "2023 revenue and expenses...",
    ],
)
print(result.score)
# 0.5
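
In a production RAG pipeline you would typically act on the score rather than only print it. Below is a minimal sketch that gates the answer on the evaluation result; the 0.7 threshold is an illustrative assumption, not an SDK default.

from root import RootSignals

client = RootSignals()

# Illustrative threshold; tune it to your own quality bar
TRUTHFULNESS_THRESHOLD = 0.7

result = client.evaluators.Truthfulness(
    request="What was the revenue in Q1/2023?",
    response="The revenue in the last quarter was 5.2 M USD",
    contexts=[
        "Financial statement of 2023",
        "2023 revenue and expenses...",
    ],
)

if result.score < TRUTHFULNESS_THRESHOLD:
    # Hypothetical handling: retry generation, re-retrieve context,
    # or route the answer to a human reviewer
    print(f"Low truthfulness score: {result.score}")
else:
    print("Answer accepted")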
