LLM Clients
TrustTest provides a flexible abstraction layer for working with different LLM providers through its LLMClient interface. This architecture allows for seamless integration with various LLM services while maintaining a consistent interface for generating questions, evaluations, and other LLM-powered features.
Architecture
The core of this system is the LLMClient abstract base class, which defines two main methods:
- complete(instructions, system_prompt): For single-prompt completions
- complete_chat(messages): For multi-turn conversations
Supported Providers
TrustTest currently supports the following LLM providers:
- OpenAI
- Anthropic
- Ollama
- vLLM
- Azure OpenAI
- Deepseek
Usage Example
Embeddings Clients
TrustTest provides a flexible abstraction layer for working with different embedding providers through its EmbeddingsModel interface. This architecture allows for seamless integration with various embedding services while maintaining a consistent interface for generating vector representations of text.
Architecture
The core of this system is the EmbeddingsModel abstract base class, which defines the main method:
- embed(texts): Converts a sequence of texts into numerical vector representations
Supported Providers
TrustTest currently supports the following embedding providers:
- OpenAI
- Ollama
Usage Example
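The sketch below shows the embed(texts) contract in action. As above, the real import path is not given in this excerpt, so a stand-in base class and a deliberately trivial toy model (both assumptions) are defined inline to keep the example runnable; a real provider would return high-dimensional vectors from an embedding service.

```python
from abc import ABC, abstractmethod
from typing import Sequence


class EmbeddingsModel(ABC):
    """Stand-in mirroring the documented EmbeddingsModel interface (assumption)."""

    @abstractmethod
    def embed(self, texts: Sequence[str]) -> list[list[float]]:
        """Convert a sequence of texts into numerical vector representations."""


class CharCountEmbeddings(EmbeddingsModel):
    """Toy model: 2-dimensional vectors (text length, vowel count). Illustration only."""

    def embed(self, texts):
        return [
            [float(len(t)), float(sum(c in "aeiou" for c in t.lower()))]
            for t in texts
        ]


model: EmbeddingsModel = CharCountEmbeddings()
vectors = model.embed(["hello", "trusttest"])
print(vectors)  # one vector per input text
```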
Global Configuration
TrustTest provides a global configuration system to manage LLM and embeddings settings across your application. The configuration can be set using the set_config() function, which accepts a dictionary with settings for different components:
- evaluator: LLM settings for evaluation tasks
- question_generator: LLM settings for generating test questions
- embeddings: Settings for the embeddings model
- topic_summarizer: LLM settings for topic summarization
Each component's settings accept the following keys:
- provider: One of “openai”, “azure”, “google”, “anthropic”, “ollama” (for LLMs) or “openai”, “azure”, “google”, “ollama” (for embeddings)
- model: The specific model name for the chosen provider
- temperature: (LLMs only) Controls randomness in model outputs (0.0 to 1.0)
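A configuration call following the keys above might look like the sketch below. The import path is an assumption (not stated in this excerpt), and the model names are placeholders rather than recommendations.

```python
from trusttest import set_config  # import path is an assumption

set_config({
    "evaluator": {"provider": "openai", "model": "gpt-4o", "temperature": 0.0},
    "question_generator": {"provider": "anthropic", "model": "claude-3-5-sonnet", "temperature": 0.7},
    "embeddings": {"provider": "openai", "model": "text-embedding-3-small"},
    "topic_summarizer": {"provider": "ollama", "model": "llama3", "temperature": 0.2},
})
```

A low temperature for the evaluator keeps judgments reproducible, while a higher temperature for the question generator encourages more varied test questions.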
Implementing Custom Clients
Both LLM and Embeddings clients can be extended by implementing custom providers. The base classes provide a clear interface that you need to implement.
Custom LLM Client
To create a custom LLM client, inherit from LLMClient and implement the required methods:
Custom Embeddings Client
To create a custom embeddings client, inherit from EmbeddingsModel and implement the required method:
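A minimal sketch of a custom embeddings model follows. The HashEmbeddings class is hypothetical (a real implementation would call an embedding service), and the stand-in base class is defined inline only because the real import path is not shown in this excerpt.

```python
import hashlib
from abc import ABC, abstractmethod
from typing import Sequence


# Stand-in mirroring the documented interface (assumption); real code would
# import EmbeddingsModel from the trusttest package.
class EmbeddingsModel(ABC):
    @abstractmethod
    def embed(self, texts: Sequence[str]) -> list[list[float]]: ...


class HashEmbeddings(EmbeddingsModel):
    """Hypothetical custom model: deterministic 4-dim vectors from a text hash."""

    def embed(self, texts):
        vectors = []
        for text in texts:
            digest = hashlib.sha256(text.encode("utf-8")).digest()
            # Map the first 4 bytes of the digest to floats in [0, 1).
            vectors.append([b / 256.0 for b in digest[:4]])
        return vectors
```

Because embed(texts) is the only required method, wrapping any vectorization backend reduces to mapping a sequence of strings to a list of float vectors.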