Execution, Auditability and Versioning

Requests to any models wrapped within skill objects, and their responses, are traceable through the log objects in the Root Signals platform.

The retention period of logs is determined by your platform license. You may export logs at any time for local storage. Access to execution logs is restricted based on your user role and skill-specific access permissions.

Objectives, evaluators, skills and test datasets are strictly versioned. The version history lets you track all local changes that could affect execution.

Regarding the reproducibility of generative-model pipelines, the following general principles hold:

  • For any model, we can control the exact inputs to the model, record the responses received, and record the evaluator results of each run.

  • For open source models, we can pinpoint the exact version of the model (weights) being used, provided this is guaranteed by the model provider, or the provider is Root Signals.

  • For proprietary models whose weights are not available, we can pinpoint the version based on the version information given by the providers (such as gpt-4-turbo-2024-04-09), but we cannot guarantee that those models are, in reality, fully immutable.

  • Any LLM request with a 'temperature' parameter above 0 is not deterministic. Temperature = 0 and/or a fixed value of a 'seed' parameter usually make the result deterministic, but your mileage may vary.
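The first principle above — capturing the exact inputs, parameters, and responses of each run — can be sketched as a content-hashed run record. This is a minimal illustration only, not the Root Signals log format; the function and field names are hypothetical:

```python
import hashlib
import json

def run_record(model: str, model_version: str, prompt: str,
               params: dict, response: str):
    """Capture everything needed to audit a single model call."""
    record = {
        "model": model,
        "model_version": model_version,  # e.g. "gpt-4-turbo-2024-04-09"
        "prompt": prompt,
        "params": params,                # temperature, seed, etc.
        "response": response,
    }
    # A content hash over the canonicalized record gives a
    # tamper-evident fingerprint of the run.
    digest = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode("utf-8")
    ).hexdigest()
    return record, digest

# Determinism-leaning request settings: temperature 0 and a fixed seed.
params = {"temperature": 0, "seed": 42}
record, digest = run_record(
    "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
    "Summarize the audit policy.", params,
    "The policy requires retaining all execution logs.",
)
# Re-hashing an identical record yields the identical fingerprint,
# so any drift in a model's response to the same input is detectable.
_, digest_again = run_record(
    "gpt-4-turbo", "gpt-4-turbo-2024-04-09",
    "Summarize the audit policy.", params,
    "The policy requires retaining all execution logs.",
)
assert digest == digest_again
```

Comparing fingerprints across runs with identical inputs is one simple way to notice when a nominally pinned proprietary model has changed behind its version label.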
