Vertex AI Agent Builder

Integrate Root Signals evaluations with Google Cloud's Vertex AI Agent Builder to monitor and improve your conversational AI agents in real time.

Architecture Overview

[Vertex AI Agent Builder]
          |
          v
[Webhook (Cloud Function / Cloud Run)]
          |
          v
[Root Signals API: evaluate response]
          |
          v
[Log result / augment reply]
          |
          v
Reply to the Agent Builder user

🔧 Step-by-Step Integration

1. Set up a webhook in Vertex AI Agent Builder

  • Go to "Manage Fulfillment" in the Agent Builder UI.

  • Create a webhook (a Cloud Function, a Cloud Run service, or any HTTPS endpoint).

  • This webhook receives the request/response pairs from user interactions.
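The middleware snippets below read two fields from the webhook request body. A minimal sketch of the payload shape they assume is shown here; real Dialogflow CX webhook requests carry many more fields, and the `input` parameter name depends on how your agent is configured.

```javascript
// Hypothetical minimal shape of the webhook request body that the
// middleware examples destructure. The "input" parameter name is an
// assumption; real payloads are richer than this sketch.
const exampleWebhookBody = {
  sessionInfo: {
    parameters: { input: 'What is your refund policy?' },
  },
  fulfillmentResponse: {
    messages: [{ text: { text: ['Refunds are available within 30 days.'] } }],
  },
};

// The same field paths used by the middleware examples:
const userInput = exampleWebhookBody.sessionInfo.parameters.input;
const modelResponse =
  exampleWebhookBody.fulfillmentResponse.messages[0].text.text[0];
console.log(userInput, '->', modelResponse);
```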


2. Create a middleware endpoint (Cloud Function or Cloud Run)

This endpoint will:

  • Receive user input and the LLM response.

  • Construct an evaluator call to Root Signals API.

  • Send the result back as part of the webhook response (optional).

Option 1: Using Built-in Evaluators

// Express middleware endpoint (Node 18+, where fetch is built in)
const express = require('express');
const app = express();
app.use(express.json());

app.post('/evaluate', async (req, res) => {
  // Field paths depend on how your agent and webhook are configured
  const userInput = req.body.sessionInfo.parameters.input;
  const modelResponse = req.body.fulfillmentResponse.messages[0].text.text[0];

  // Use a built-in evaluator (e.g., Relevance)
  const evaluatorPayload = {
    request: userInput,
    response: modelResponse,
  };

  const evaluatorResult = await fetch('https://api.app.rootsignals.ai/v1/skills/evaluator/execute/YOUR_EVALUATOR_ID/', {
    method: 'POST',
    headers: {
      'Authorization': 'Api-Key YOUR_API_KEY',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(evaluatorPayload),
  });

  const result = await evaluatorResult.json();
  console.log('Evaluator Score:', result.score);

  // Return the (optionally annotated) response to Agent Builder
  res.json({
    fulfillment_response: {
      messages: [
        {
          text: {
            text: [
              `${modelResponse} (Quality score: ${result.score.toFixed(2)})`
            ]
          }
        }
      ]
    }
  });
});

app.listen(8080);

Option 2: Using Custom Judges

// Express middleware endpoint (Node 18+, where fetch is built in)
const express = require('express');
const app = express();
app.use(express.json());

app.post('/evaluate', async (req, res) => {
  // Field paths depend on how your agent and webhook are configured
  const userInput = req.body.sessionInfo.parameters.input;
  const modelResponse = req.body.fulfillmentResponse.messages[0].text.text[0];

  // Use a custom judge
  const judgePayload = {
    request: userInput,
    response: modelResponse,
  };

  const judgeResult = await fetch('https://api.app.rootsignals.ai/v1/judges/YOUR_JUDGE_ID/execute/', {
    method: 'POST',
    headers: {
      'Authorization': 'Api-Key YOUR_API_KEY',
      'Content-Type': 'application/json',
    },
    body: JSON.stringify(judgePayload),
  });

  const result = await judgeResult.json();
  console.log('Judge Results:', result.evaluator_results);

  // Return the (optionally annotated) response to Agent Builder
  res.json({
    fulfillment_response: {
      messages: [
        {
          text: {
            text: [
              `${modelResponse} (Judge results: ${JSON.stringify(result.evaluator_results)})`
            ]
          }
        }
      ]
    }
  });
});

app.listen(8080);
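A judge returns per-evaluator results rather than a single number, so interpolating the raw array into the reply (as above) is mostly useful for debugging. A small helper to collapse the results into one score is sketched below; it assumes each entry carries a numeric `score` field, which is an assumption about the response shape rather than a documented guarantee.

```javascript
// Collapse a judge's per-evaluator results into a single mean score.
// The [{ evaluator_name, score }] entry shape is an assumption made
// for illustration; verify it against the actual API response.
function meanJudgeScore(evaluatorResults) {
  if (!Array.isArray(evaluatorResults) || evaluatorResults.length === 0) {
    return null; // nothing to aggregate
  }
  const total = evaluatorResults.reduce((sum, r) => sum + r.score, 0);
  return total / evaluatorResults.length;
}

console.log(meanJudgeScore([
  { evaluator_name: 'Relevance', score: 0.5 },
  { evaluator_name: 'Clarity', score: 1.0 },
])); // → 0.75
```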

3. Configure evaluators and judges

Built-in Evaluators:

  • Use evaluators such as Relevance, Precision, Completeness, and Clarity.

  • Browse the full list of available evaluators by logging in to https://app.rootsignals.ai/

  • Examples: Relevance, Truthfulness, Safety, Professional Writing

Custom Judges:

  • Create custom judges that combine multiple evaluators; use https://scorable.rootsignals.ai/ to generate one.

  • Judges provide aggregated scoring across multiple criteria.
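A common pattern with either evaluators or judges is gating: pass the agent's reply through unchanged when its score clears a threshold, and flag it (for logging or a fallback message) when it does not. A minimal sketch follows; the threshold value and fallback text are illustrative choices, not recommendations.

```javascript
// Gate the agent's reply on an evaluation score in [0, 1].
// The 0.6 threshold is an arbitrary example value.
const SCORE_THRESHOLD = 0.6;

function gateResponse(modelResponse, score) {
  if (score >= SCORE_THRESHOLD) {
    return { text: modelResponse, flagged: false };
  }
  return {
    // Hypothetical fallback message; route flagged turns to logging
    // or human review in a real deployment.
    text: 'I want to double-check that answer; one moment please.',
    flagged: true,
  };
}

console.log(gateResponse('Refunds take 30 days.', 0.9).flagged); // → false
console.log(gateResponse('Refunds take 30 days.', 0.3).flagged); // → true
```

In the webhook handlers above, the gated `text` would replace `modelResponse` in the `fulfillment_response` messages.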
