
Integrations


Both our Skills and Evaluators can be used as custom generator LLMs in third-party frameworks, and we are committed to supporting an OpenAI ChatResponse-compatible API.
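
As an illustration, the sketch below points the standard OpenAI Python client at a Root Signals endpoint and requests a chat completion. The base URL, environment variable, and model identifier are placeholders assumed for this example; substitute the values from your own workspace.

```python
import os
from openai import OpenAI

# Point the standard OpenAI client at the Root Signals endpoint.
# The base URL and model identifier below are placeholders for illustration;
# use the endpoint and Skill/Evaluator identifier from your own workspace.
client = OpenAI(
    base_url="https://<your-rootsignals-endpoint>/v1",  # assumed endpoint
    api_key=os.environ["ROOTSIGNALS_API_KEY"],           # assumed variable name
)

completion = client.chat.completions.create(
    model="my-skill-id",  # hypothetical Skill/Evaluator identifier
    messages=[{"role": "user", "content": "Summarize our Q3 results."}],
)

print(completion.id)                          # unique identifier, usable downstream
print(completion.choices[0].message.content)  # generated response text
```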

Note, however, that additional functionality such as validation results and calibration is not available as part of OpenAI responses. If you need anything beyond simply failing on unsuccessful validation, you must implement that handling yourself.
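
One minimal pattern, shown below as a hedged sketch, is to catch the error raised by the OpenAI client and return a fallback answer; anything richer, such as inspecting validation scores or retrying with a different prompt, would require the Root Signals SDK or REST API rather than the OpenAI-compatible response. The assumption that a failed validation surfaces as an API error is illustrative, not guaranteed.

```python
from openai import APIError


def generate_with_fallback(client, model, messages,
                           fallback="I can't answer that right now."):
    """Return the completion text, or a fallback if the request is rejected.

    Assumes a failed validation surfaces as an API error from the
    OpenAI-compatible endpoint; validation details themselves are not part
    of the OpenAI response and must be fetched separately if needed.
    """
    try:
        completion = client.chat.completions.create(model=model, messages=messages)
        return completion.choices[0].message.content
    except APIError:
        return fallback
```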

Advanced use cases can reference the completion.id returned by our API as a unique identifier for downstream tasks. Please refer to the relevant section for details.
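
For example, one way to use that identifier, continuing the earlier sketch and using an illustrative SQLite table as the storage layer, is to persist completion.id next to your own request record so the execution can be looked up later for auditing:

```python
import sqlite3

# Illustrative only: store the Root Signals completion.id alongside your own
# request record so the execution can be traced later.
conn = sqlite3.connect("responses.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS responses (request_id TEXT, completion_id TEXT, answer TEXT)"
)
conn.execute(
    "INSERT INTO responses VALUES (?, ?, ?)",
    ("req-123", completion.id, completion.choices[0].message.content),
)
conn.commit()
```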
