Natural Language API
Introduction
The Natural Language Query API lets you interact with PromptQL directly, sending messages and receiving responses, with optional streaming. It is particularly useful for building interactive applications that need to communicate with PromptQL in real time. An example request:
```json
{
  "version": "v1",
  "promptql_api_key": "<YOUR_API_KEY_HERE>",
  "llm": {
    "provider": "hasura"
  },
  "ddn": {
    "url": "https://<PROJECT_NAME>.ddn.hasura.app/v1/sql",
    "headers": {}
  },
  "artifacts": [],
  "system_instructions": "Responde solo en espanol, por favor",
  "timezone": "America/Los_Angeles",
  "interactions": [
    {
      "user_message": {
        "text": "What can you do?"
      },
      "assistant_actions": []
    }
  ],
  "stream": false
}
```
A sample response (in Spanish, because of the `system_instructions` above):

```json
{
  "assistant_actions": [
    {
      "message": "¡Hola! Soy un asistente que puede ayudarte con varias tareas relacionadas con películas. Específicamente, puedo:\n\n1. Buscar y analizar películas de la base de datos basado en diferentes criterios como:\n - Título\n - Director\n - Año de lanzamiento\n - Género\n - Calificación IMDB\n - Actores\n - Y más\n\n2. Ayudarte a rentar películas a través del servicio Promptflix\n\n3. Crear análisis y resúmenes personalizados de películas\n\n4. Responder preguntas sobre películas específicas o tendencias generales\n\n¿Hay algo específico sobre películas que te gustaría explorar?",
      "plan": null,
      "code": null,
      "code_output": null,
      "code_error": null
    }
  ],
  "modified_artifacts": []
}
```
Query Endpoint
Send messages to PromptQL and receive responses, optionally in a streaming format.
Request
POST https://api.promptql.pro.hasura.io/query
Headers
Content-Type: application/json
Request Body
```json
{
  "version": "v1",
  "promptql_api_key": "<YOUR_API_KEY_HERE>",
  "llm": {
    "provider": "hasura"
  },
  "ddn": {
    "url": "https://<PROJECT_NAME>.ddn.hasura.app/v1/sql",
    "headers": {}
  },
  "artifacts": [],
  "system_instructions": "Optional system instructions for the LLM",
  "timezone": "America/Los_Angeles",
  "interactions": [
    {
      "user_message": {
        "text": "Your message here"
      },
      "assistant_actions": [
        {
          "message": "Previous assistant message",
          "plan": "Previous execution plan",
          "code": "Previously executed code",
          "code_output": "Previous code output",
          "code_error": "Previous error message if any"
        }
      ]
    }
  ],
  "stream": false
}
```
Request Body Fields
| Field | Type | Required | Description |
|---|---|---|---|
| `version` | string | Yes | Must be set to `"v1"` |
| `promptql_api_key` | string | Yes | PromptQL API key created from project settings |
| `llm` | object | No | Configuration for the main LLM provider |
| `ai_primitives_llm` | object | No | Configuration for the AI primitives LLM provider; if omitted, the main LLM provider is used |
| `ddn` | object | Yes | DDN configuration, including URL and headers |
| `artifacts` | array | No | List of existing artifacts to provide context |
| `system_instructions` | string | No | Custom system instructions for the LLM |
| `timezone` | string | Yes | IANA timezone for interpreting time-based queries |
| `interactions` | array | Yes | List of message interactions. Each interaction contains a `user_message` object with the user's text and an optional `assistant_actions` array of previous assistant responses (messages, plans, code, and outputs) |
| `stream` | boolean | Yes | Whether to return a streaming response |
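As a sketch of putting these fields together, the helper below builds a minimal request body and posts it to the query endpoint. It uses only the Python standard library; the function names (`build_query_body`, `send_query`) are illustrative, not part of any SDK.

```python
import json
import urllib.request

PROMPTQL_URL = "https://api.promptql.pro.hasura.io/query"

def build_query_body(api_key, ddn_url, message,
                     timezone="America/Los_Angeles", stream=False):
    """Assemble a minimal v1 request body for the query endpoint."""
    return {
        "version": "v1",
        "promptql_api_key": api_key,
        "llm": {"provider": "hasura"},
        "ddn": {"url": ddn_url, "headers": {}},
        "artifacts": [],
        "timezone": timezone,
        "interactions": [
            {"user_message": {"text": message}, "assistant_actions": []}
        ],
        "stream": stream,
    }

def send_query(body):
    """POST the body as JSON and decode the (non-streaming) response."""
    req = urllib.request.Request(
        PROMPTQL_URL,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

To continue a conversation, append each completed interaction (the user message plus the returned `assistant_actions`) to the `interactions` array of the next request.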
LLM Provider Options
The `llm` and `ai_primitives_llm` fields support the following providers:
- Hasura:

  ```json
  {
    "provider": "hasura"
  }
  ```

  You get $20 worth of free credits with Hasura's built-in provider during the trial. Reach out to us to add more credits.

- Anthropic:

  ```json
  {
    "provider": "anthropic",
    "api_key": "<your anthropic api key>"
  }
  ```

- OpenAI:

  ```json
  {
    "provider": "openai",
    "api_key": "<your openai api key>"
  }
  ```
Request DDN Auth
The `ddn.headers` field can be used to pass any auth header information through to DDN. Read more about auth with these APIs.
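As a sketch, an end-user token could be forwarded to DDN like this; the header name and token value are hypothetical placeholders, and the exact headers depend on your DDN auth configuration:

```python
# Hypothetical example: forward an end-user JWT to DDN.
# Any headers placed in ddn.headers are passed through to DDN as-is.
ddn_config = {
    "url": "https://<PROJECT_NAME>.ddn.hasura.app/v1/sql",
    "headers": {
        "Authorization": "Bearer <END_USER_JWT>"  # placeholder token
    }
}
```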
Response
The response format depends on the `stream` parameter:
Non-streaming Response (application/json)
```json
{
  "assistant_actions": [
    {
      "message": "Response message",
      "plan": "Execution plan",
      "code": "Executed code",
      "code_output": "Code output",
      "code_error": "Error message if any"
    }
  ],
  "modified_artifacts": [
    {
      "identifier": "artifact_id",
      "title": "Artifact Title",
      "artifact_type": "text|table",
      "data": "artifact_data"
    }
  ]
}
```
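As a sketch of consuming this shape, the helper below (illustrative, not part of any SDK) collects the assistant's messages and surfaces any code errors from a non-streaming response:

```python
def summarize_actions(response):
    """Collect assistant messages and code errors from a decoded
    non-streaming query response (field names as documented above)."""
    messages = []
    errors = []
    for action in response.get("assistant_actions", []):
        if action.get("message"):
            messages.append(action["message"])
        if action.get("code_error"):
            errors.append(action["code_error"])
    return messages, errors
```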
Streaming Response (text/event-stream)
The streaming response sends chunks of data in Server-Sent Events (SSE) format:
```
data: {"message": "Chunk of response message", "plan": null, "code": null, "code_output": null, "code_error": null, "type": "assistant_action_chunk", "index": 0}

data: {"type": "artifact_update_chunk", "artifact": {"identifier": "artifact_id", "title": "Artifact Title", "artifact_type": "text|table", "data": "artifact_data"}}
```
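A minimal Python sketch for consuming such a stream, assuming each event's JSON payload arrives on a single `data:` line as shown above; a production client should use a proper SSE library. The function names are illustrative:

```python
import json

def parse_sse_events(lines):
    """Yield decoded JSON payloads from the `data:` lines of an SSE stream.

    Assumes one complete JSON document per `data:` line.
    """
    for raw in lines:
        line = raw.strip()
        if line.startswith("data:"):
            yield json.loads(line[len("data:"):])

def assemble_stream(events):
    """Reassemble full assistant messages (keyed by action index) and
    collect artifact updates from a sequence of decoded chunks."""
    messages = {}
    artifacts = []
    for event in events:
        if event.get("type") == "assistant_action_chunk":
            idx = event.get("index", 0)
            messages[idx] = messages.get(idx, "") + (event.get("message") or "")
        elif event.get("type") == "artifact_update_chunk":
            artifacts.append(event["artifact"])
    return messages, artifacts
```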
Response Fields
| Field | Type | Description |
|---|---|---|
| `assistant_actions` | array | List of actions taken by the assistant |
| `modified_artifacts` | array | List of artifacts created or modified during the interaction |
Assistant Action Fields
| Field | Type | Description |
|---|---|---|
| `message` | string | Text response from the assistant |
| `plan` | string | Execution plan, if any |
| `code` | string | Code that was executed |
| `code_output` | string | Output from code execution |
| `code_error` | string | Error message if code execution failed |
Error Response
```json
{
  "type": "error_chunk",
  "error": "Error message describing what went wrong"
}
```
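A small helper for surfacing these chunks in client code; the exception type is an arbitrary choice for this sketch, not something the API prescribes:

```python
def check_event(event):
    """Raise if a decoded streaming event is an error chunk;
    otherwise return the event unchanged."""
    if event.get("type") == "error_chunk":
        raise RuntimeError(event.get("error", "unknown PromptQL error"))
    return event
```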
Notes for using the Natural Language API
- Streaming vs Non-streaming
  - Use streaming (`stream: true`) for real-time interactive applications
  - Use non-streaming (`stream: false`) for batch processing, or when you need the complete response at once
- Artifacts
  - You can pass existing artifacts to provide context for the conversation
  - Both text and table artifacts are supported
  - Artifacts can be modified or created during the interaction
- Timezone Handling
  - Always provide a timezone to ensure consistent handling of time-based queries
  - Use standard IANA timezone formats (e.g., "America/Los_Angeles", "Europe/London")
- Error Handling
  - Always check for error responses, especially in streaming mode
  - Implement appropriate retry logic for transient failures
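The retry advice above could be sketched like this in Python; the retried exception types, attempt count, and backoff schedule are illustrative choices, not something the API prescribes:

```python
import time

def with_retries(fn, attempts=3, base_delay=1.0):
    """Call fn(), retrying with exponential backoff on transient
    network errors; re-raise after the final attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except (ConnectionError, TimeoutError):
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))
```

For example, `with_retries(lambda: send_query(body))` wraps a query call so that transient connection failures are retried before the error is propagated.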