Model Families


Machine Learning platforms

Vertex AI (by Google)

Databricks

SageMaker (by AWS)

Azure ML (by Microsoft)


Orchestration

LangChain

LlamaIndex

Semantic Kernel (by Microsoft)


Data Parsing tools


Memory DB tools


Data Validation tools


Evaluation tools


Security tools


Serving tools


LLM Metrics


User Feedback

Trubrics

Enables AI teams to collect, analyse, and manage user prompts and feedback on models.
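To illustrate the kind of record such a feedback tool manages, here is a minimal, hypothetical sketch of logging prompt/response/feedback triples to a JSONL file for later analysis. The field names and helper are assumptions for illustration, not Trubrics' SDK.

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class FeedbackRecord:
    """One user interaction plus the feedback attached to it (hypothetical schema)."""
    prompt: str
    response: str
    model: str
    score: int          # e.g. thumbs up = 1, thumbs down = -1
    comment: str = ""
    timestamp: float = 0.0

def log_feedback(record: FeedbackRecord, path: str = "feedback.jsonl") -> None:
    """Append the record as one JSON line so it can be analysed later."""
    record.timestamp = record.timestamp or time.time()
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_feedback(FeedbackRecord(
    prompt="Summarise this contract.",
    response="The contract covers ...",
    model="gpt-4o-mini",
    score=1,
    comment="Accurate summary",
))
```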


Cache (to reduce costs and speed up inference)

GPTCache

ChatGPT and other large language models (LLMs) are versatile enough to power a wide range of applications, but as an application gains traffic the cost of LLM API calls grows quickly, and the services can respond slowly under heavy load. GPTCache addresses this by building a semantic cache that stores LLM responses, so requests that are semantically similar to earlier ones can be served from the cache instead of triggering a new API call.
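To show the idea behind a semantic cache (this is a concept sketch, not GPTCache's actual API): embed each prompt, and if a new prompt is close enough to a cached one, return the stored response instead of calling the LLM. The toy bag-of-words embedding and the 0.9 threshold are placeholders for a real embedding model and a tuned similarity cutoff.

```python
import math
import re
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; a real cache would use a sentence-embedding model."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

class SemanticCache:
    def __init__(self, threshold: float = 0.9):
        self.threshold = threshold
        self.entries: list[tuple[Counter, str]] = []  # (prompt embedding, cached response)

    def get(self, prompt: str) -> str | None:
        """Return a cached response if a sufficiently similar prompt was seen before."""
        query = embed(prompt)
        for vec, response in self.entries:
            if cosine(query, vec) >= self.threshold:
                return response
        return None

    def put(self, prompt: str, response: str) -> None:
        self.entries.append((embed(prompt), response))

cache = SemanticCache()
cache.put("What is a semantic cache?", "A cache keyed by meaning rather than exact text.")
print(cache.get("what is a semantic cache"))  # cache hit: different wording, same meaning
```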


Token estimators

Tiktokenizer
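Tiktokenizer is a web playground built around OpenAI's tiktoken tokenizer; the same estimate can be made in code. A minimal sketch, where the model name is only an example:

```python
import tiktoken

def count_tokens(text: str, model: str = "gpt-4o-mini") -> int:
    """Estimate how many tokens the given model would see for this text."""
    try:
        encoding = tiktoken.encoding_for_model(model)
    except KeyError:
        # Fall back to a common encoding when the model is not in tiktoken's registry.
        encoding = tiktoken.get_encoding("cl100k_base")
    return len(encoding.encode(text))

print(count_tokens("Estimate the token count of this sentence."))
```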


Interpretability

OpenAI Explanations


Autotuned prompts / Coded prompts

DSPy (example sketch after this list)

NeuroPrompts

AutoPrompt
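As an example of the "coded prompts" approach, here is a minimal DSPy sketch in which the prompt is expressed as a typed signature and a module rather than a handwritten string. The model name is only an example, and the LM configuration shown is the DSPy 2.5+ style, so it may differ in other versions.

```python
import dspy

# Configure the underlying LM (DSPy 2.5+ style; earlier versions configure the LM
# differently). The model name here is only an example.
dspy.configure(lm=dspy.LM("openai/gpt-4o-mini"))

class AnswerQuestion(dspy.Signature):
    """Answer the question in one short sentence."""
    question: str = dspy.InputField()
    answer: str = dspy.OutputField()

# ChainOfThought builds the actual prompt (including a reasoning step) from the signature,
# so the prompt text is generated and can later be tuned by a DSPy optimizer.
qa = dspy.ChainOfThought(AnswerQuestion)
prediction = qa(question="What does a semantic cache store?")
print(prediction.answer)
```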