Chat Endpoint
Our generation services combine LLMs and RAG with accredited VetMed data overlays to ensure high-quality outputs.
VetMed LLM
Singleshot (Chat Endpoint): Provides single-turn conversational capabilities tailored for veterinary use cases. The Singleshot endpoint delivers quick, precise responses to veterinary-specific queries by integrating your preferred LLM (e.g., ChatGPT-4, Claude) with our context-aware RAG engine, ensuring accurate, reliable insights for medical decision-making.
- Core Service: Choose your preferred LLM augmented with accredited VetMed data.
- Use Cases:
  - Generate accurate medical insights on demand.
  - Provide concise and contextually relevant answers for clinical and client interactions.
Features:
- Enhanced conversational AI with RAG-driven contextual understanding.
- Dynamic integration of curated VetMed data for precision.
- Streamlined outputs for single-turn interactions tailored to veterinary workflows.
Request
Request Body
The request body should contain the following fields:
- input (string, required): The user’s query.
- model (string, optional, default=`gpt-4o`): The LLM to query against; it must be one of the supported models. If the model parameter is omitted, it defaults to `gpt-4o`.
Example Request
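The request body is JSON. A minimal sketch using the two fields described above; the query text is only a placeholder:

```json
{
  "input": "What is the recommended maintenance fluid rate for a 20 kg dog?",
  "model": "gpt-4o"
}
```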
Response
Successful Response (Relevant Input)
If the input is relevant to veterinary medicine, the API will return a JSON object with:
- status: `true`
- reason: A structured dictionary containing the answer to the query.
reason Object Structure
The reason object contains:
- Plain English Summary: A detailed summary of the response.
- Plain English Concise: A concise overview of the response.
Example Successful Response
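The field names below follow the reason structure described above; the values are placeholders, since the actual text depends on the query and the selected model:

```json
{
  "status": true,
  "reason": {
    "Plain English Summary": "A detailed, plain-English explanation answering the query.",
    "Plain English Concise": "A short overview of the same answer."
  }
}
```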
Unsuccessful Response (Irrelevant Input)
If the input is not relevant to veterinary medicine, the API will return a JSON object with:
- status: `false`
- reason: A string explaining why the input is not relevant.
Example Unsuccessful Response
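An illustrative shape only; the exact wording of the reason string may differ:

```json
{
  "status": false,
  "reason": "The input does not appear to be related to veterinary medicine."
}
```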
Error Handling
If an error occurs during processing, the API will return a JSON object with an `error` key:
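An illustrative shape only, based on the error key described above; the message text will vary with the failure:

```json
{
  "error": "A description of what went wrong while processing the request."
}
```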