Getting started with Root Signals

To get started, please sign up and log in to the Root Signals app. Using the SDK or the API downstream requires creating an API key.

The following instructions will get you started on the SDK path. Alternatively, you can begin building on the UI or via the REST API.

pip install root-signals

Quick test

For the best experience, we encourage you to create an account. However, if you prefer to run quick tests at this point, you can create a temporary API key at https://app.rootsignals.ai/demo-user.

Running Root Signals evaluators

Root Signals provides over 30 evaluators, or judges, that you can use to score any text against a wide range of metrics. You can attach evaluators to an existing application with just a few lines of code.

from root import RootSignals

# Just a quick test?
# You can get a temporary API key from https://app.rootsignals.ai/demo-user 
client = RootSignals(api_key="my-developer-key")
client.evaluators.Politeness(
    response="You can find the instructions from our Careers page."
)
# {score=0.7, justification='The response is st...', execution_log_id=...}
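The call returns an evaluation result; a minimal sketch of acting on it, assuming the returned object exposes the score and justification fields shown in the comment above (the 0.5 cutoff is only an example):

result = client.evaluators.Politeness(
    response="You can find the instructions from our Careers page."
)

# Flag low-scoring responses for review; the threshold is illustrative.
if result.score < 0.5:
    print("Needs a friendlier tone:", result.justification)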

Creating an integration using a skill

Root Signals offers an opinionated way to build LLM pipelines by wrapping a series of LLM operations into a Skill.

from root import RootSignals
from root.validators import Validator

client = RootSignals(api_key="my-developer-api-key")

skill = client.skills.create(
    name="sdk-demo-skill",
    intent="Simple Q&A chatbot",
    model="gpt-4",
    prompt="Answer the question in markdown {{question}}",
    validators=[
        Validator(evaluator_name="Relevance", threshold=0.8),
        Validator(evaluator_name="Precision", threshold=0.6)
        ],  
)

We can now run the skill:

response = skill.run(variables={"question": "List the states in the US"})
print(response.llm_output)

Since we added validators when creating the skill, we can retrieve the results for each execution as a dictionary:

result = skill.run(variables={"question": "List the biggest states and their largest everything in the US."})
result.validation
{
  'validator_results': [
    {
      'evaluator_name': 'Relevance',
      'evaluator_id': '97730ce7-1837-43bc-a242-ae0fd1443627',
      'threshold': 0.8,
      'is_valid': False,
      'result': 0.65,
      'status': 'finished'
    },
    {
      'evaluator_name': 'Precision',
      'evaluator_id': '99622e9d-72f1-41cf-bc63-ed4fb0ae23f7',
      'threshold': 0.6,
      'is_valid': True,
      'result': 0.75,
      'status': 'finished'
    }
  ],
  'is_valid': False
}

The result dictionary has a Boolean summary field is_valid indicating whether ALL of the validators passed their threshold values. You can implement your own validators and downstream conditional logic as necessary. To remove the check for a validator altogether, leave its threshold unset or set it to 0.
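For example, a minimal sketch of downstream gating, assuming result.validation behaves like the dictionary shown above:

result = skill.run(variables={"question": "List the states in the US"})

validation = result.validation
if not validation["is_valid"]:
    # Collect the names of validators that fell below their thresholds.
    failed = [
        v["evaluator_name"]
        for v in validation["validator_results"]
        if not v["is_valid"]
    ]
    print("Validation failed for:", ", ".join(failed))
else:
    print(result.llm_output)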

The string response is available as result.llm_output:

1. **Texas**
   - Largest state by land area in the contiguous United States.
   - Known for having large cities like Houston, Dallas, and Austin.
   - Significant oil production and large ranches.

2. **California**
   - Most populous state in the U.S.
   - Home to the largest economy in the U.S. and the world's fifth-largest economy.
   - Contains the largest number of national parks and diverse landscapes.

3. **Alaska**
   - Largest state by land area overall.
   - Known for vast wilderness, largest national parks, and significant natural resources.

4. **New York**
   - Home to New York City, the largest city in the U.S. by population.
   - Major financial and cultural hub.

5. **Florida**
   - Known for having the longest coastline in the contiguous U.S.
   - Large tourism industry, with attractions like Walt Disney World and numerous beaches.

6. **Illinois**
   - Home to Chicago, one of the largest cities in the U.S. by population and a major economic center.

Adding evaluators and alternative models to OpenAI chat

By wrapping an OpenAI chat object with a Root Signals skill, you can:

  • attach it to any of the dozens of models supported by the Root proxy

  • attach any validators to the chat

  • monitor all chat interactions immediately via the Root Signals UI

Interaction results beyond OpenAI's chat completion response can be retrieved separately. The two steps are:

  1. Create the Root Signals skill with system prompt and model selection.

  2. Pass the skill.openai_base_url as base_url and your RootSignals API key as api_key to the client.

from openai import OpenAI
from root import RootSignals
from root.validators import Validator

rs_client = RootSignals(api_key="my-developer-key")

skill = rs_client.skills.create(
    name="My chatbot",
    intent="Simple Q&A chatbot",
    system_message="You are a helpful assistant.",
    model="gpt-4-turbo",
    fallback_models=["gpt-4"],
    validators=[Validator(evaluator_name="Precision", threshold=0.8)],
)

# Start chatting with the skill
client = OpenAI(base_url=skill.openai_base_url, api_key=rs_client.api_key)
completion = client.chat.completions.create(
    model=skill.models[0],  # RS skills can have multiple models for failover
    messages=[{"role": "user", "content": "How many alligators live in Florida?"}]
)
print(completion.choices[0].message.content)

Getting validation results requires retrieving execution logs from the rs_client:

log = rs_client.execution_logs.get(log_id=completion.id)
log.validation_result_average
# 1.0
