
Langfuse

This example requires the langfuse Python SDK v3.0.0 or later.
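If the packages are not installed yet, an install command along these lines should work (assuming the SDKs are published on PyPI as langfuse, root-signals, and openai):

pip install "langfuse>=3.0.0" root-signals openai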

import os
from langfuse import observe, get_client
from openai import OpenAI
from root import RootSignals

# Initialize Langfuse client using environment variables
# LANGFUSE_SECRET_KEY, LANGFUSE_PUBLIC_KEY, LANGFUSE_HOST
langfuse = get_client()

# Initialize the OpenAI client used for generation below
# (the original snippet assumes a configured client; reads OPENAI_API_KEY)
client = OpenAI()

# Initialize RootSignals client
rs = RootSignals()

# Example prompt template (the original snippet assumes one is defined)
prompt_template = "Explain the following concept concisely: {question}"

Integration

@observe(name="explain_concept_generation")  # Name for traces in the Langfuse UI
def explain_concept(topic: str) -> tuple[str | None, str | None]:  # Returns the content and the trace_id
    # Get the trace_id for the current operation, created by @observe
    current_trace_id = langfuse.get_current_trace_id()

    prompt = prompt_template.format(question=topic)
    response_obj = client.chat.completions.create(
        messages=[{"role": "user", "content": prompt}],
        model="gpt-4",
    )
    content = response_obj.choices[0].message.content
    return content, current_trace_id
def evaluate_concept(request: str, response: str, trace_id: str) -> None:
    # Invoke a specific Root Signals judge
    result = rs.judges.run(
        judge_id="4d369224-dcfa-45e9-939d-075fa1dad99e",
        request=request,    # The input/prompt provided to the LLM
        response=response,  # The LLM's output to be evaluated
    )

    # Iterate through evaluation results and log them as Langfuse scores
    for eval_result in result.evaluator_results:
        langfuse.create_score(
            trace_id=trace_id,                  # Links score to the specific Langfuse trace
            name=eval_result.evaluator_name,    # Name of the Root Signals evaluator (e.g., "Truthfulness")
            value=eval_result.score,            # Numerical score from the evaluator
            comment=eval_result.justification,  # Explanation for the score
        )
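
Putting the two functions together, a minimal driver could look like the sketch below. The example topic and the final flush() call are illustrative additions, not part of the snippet above; flush() ensures any buffered traces and scores are sent to Langfuse before a short-lived script exits.

# Generate an answer and its Langfuse trace_id, then score it with Root Signals
topic = "Retrieval-Augmented Generation"  # example input
answer, trace_id = explain_concept(topic)

if answer is not None and trace_id is not None:
    evaluate_concept(request=topic, response=answer, trace_id=trace_id)

# Send any buffered traces and scores to Langfuse before exiting
langfuse.flush()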
       
         
    

Table: Mapping Root Signals Output to Langfuse Score Parameters

| Root Signals | Langfuse | Description in Langfuse Context |
| --- | --- | --- |
| evaluator_name | name | The name of the evaluation criterion (e.g., "Hallucination", "Conciseness"). Used for identifying and filtering scores. |
| score | value | The numerical score assigned by the Root Signals evaluator. |
| justification | comment | The textual explanation from Root Signals for the score, providing qualitative insight into the evaluation. |

Done. Now you can explore detailed traces and metrics in the Langfuse dashboard.
