
Use evaluators and RAG

Root Signals provides evaluators for RAG use cases, where you pass the retrieved context as part of the evaluated content.

One such evaluator is the Truthfulness evaluator, which measures the factual consistency of the generated answer against the given context and general knowledge.

Here is an example of running the Truthfulness evaluator with the Python SDK. Pass the context that was used to produce the LLM response in the contexts parameter.

from root import RootSignals

# Connect to the Root Signals API
client = RootSignals()

result = client.evaluators.Truthfulness(
    request="What was the revenue in Q1/2023?",
    response="The revenue in the last quarter was 5.2 M USD",
    contexts=[
        # Context passages that were used to produce the response
        "Financial statement of 2023",
        "2023 revenue and expenses...",
    ],
)
print(result.score)
# 0.5
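
In a RAG application you would typically run the same call on each generated answer together with the chunks your retriever returned. Below is a minimal sketch of that pattern; the hard-coded question, chunks, answer, and the 0.7 threshold are illustrative placeholders, and only the Truthfulness call mirrors the SDK usage shown above.

from root import RootSignals

client = RootSignals()

def evaluate_rag_answer(question: str, retrieved_chunks: list[str], answer: str) -> float:
    # Score the generated answer against the chunks that were used to produce it
    result = client.evaluators.Truthfulness(
        request=question,
        response=answer,
        contexts=retrieved_chunks,
    )
    return result.score

# In a real pipeline these would come from your retriever and your LLM
question = "What was the revenue in Q1/2023?"
retrieved_chunks = [
    "Q1/2023 financial summary: revenue 5.2 M USD, expenses 4.1 M USD",
]
answer = "The revenue in Q1/2023 was 5.2 M USD"

score = evaluate_rag_answer(question, retrieved_chunks, answer)
if score < 0.7:  # example threshold, tune it for your use case
    print("Low truthfulness score, flag the answer for review")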