Natural Language API

Introduction

The Natural Language Query API allows you to interact with PromptQL directly, sending messages and receiving responses with support for streaming. This API is particularly useful for building interactive applications that need to communicate with PromptQL in real-time.

Example request using our quickstart:
{
  "version": "v1",
  "promptql_api_key": "<YOUR_API_KEY_HERE>",
  "llm": {
    "provider": "hasura"
  },
  "ddn": {
    "url": "https://<PROJECT_NAME>.ddn.hasura.app/v1/sql",
    "headers": {}
  },
  "artifacts": [],
  "system_instructions": "Responde solo en espanol, por favor",
  "timezone": "America/Los_Angeles",
  "interactions": [
    {
      "user_message": {
        "text": "What can you do?"
      },
      "assistant_actions": []
    }
  ],
  "stream": false
}

Example response using our quickstart:
{
  "assistant_actions": [
    {
      "message": "¡Hola! Soy un asistente que puede ayudarte con varias tareas relacionadas con películas. Específicamente, puedo:\n\n1. Buscar y analizar películas de la base de datos basado en diferentes criterios como:\n - Título\n - Director\n - Año de lanzamiento\n - Género\n - Calificación IMDB\n - Actores\n - Y más\n\n2. Ayudarte a rentar películas a través del servicio Promptflix\n\n3. Crear análisis y resúmenes personalizados de películas\n\n4. Responder preguntas sobre películas específicas o tendencias generales\n\n¿Hay algo específico sobre películas que te gustaría explorar?",
      "plan": null,
      "code": null,
      "code_output": null,
      "code_error": null
    }
  ],
  "modified_artifacts": []
}

Query Endpoint

Send messages to PromptQL and receive responses, optionally in a streaming format.

Request

POST https://api.promptql.pro.hasura.io/query

Headers

Content-Type: application/json

Request Body

{
  "version": "v1",
  "promptql_api_key": "<YOUR_API_KEY_HERE>",
  "llm": {
    "provider": "hasura"
  },
  "ddn": {
    "url": "https://<PROJECT_NAME>.ddn.hasura.app/v1/sql",
    "headers": {}
  },
  "artifacts": [],
  "system_instructions": "Optional system instructions for the LLM",
  "timezone": "America/Los_Angeles",
  "interactions": [
    {
      "user_message": {
        "text": "Your message here"
      },
      "assistant_actions": [
        {
          "message": "Previous assistant message",
          "plan": "Previous execution plan",
          "code": "Previously executed code",
          "code_output": "Previous code output",
          "code_error": "Previous error message if any"
        }
      ]
    }
  ],
  "stream": false
}

Request Body Fields

Field               | Type    | Required | Description
version             | string  | Yes      | Must be set to "v1"
promptql_api_key    | string  | Yes      | PromptQL API key created from project settings
llm                 | object  | No       | Configuration for the main LLM provider
ai_primitives_llm   | object  | No       | Configuration for the AI primitives LLM provider. If omitted, the main LLM provider is used
ddn                 | object  | Yes      | DDN configuration including URL and headers
artifacts           | array   | No       | List of existing artifacts to provide context
system_instructions | string  | No       | Custom system instructions for the LLM
timezone            | string  | Yes      | IANA timezone for interpreting time-based queries
interactions        | array   | Yes      | List of message interactions. Each interaction contains a user_message object with the user's text and an optional assistant_actions array containing previous assistant responses with their messages, plans, code executions, and outputs
stream              | boolean | Yes      | Whether to return a streaming response
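
For reference, here is a minimal sketch in Python (using the requests library) that sends the non-streaming request body above. The API key, project URL, and question are placeholders taken from the example, not working values; substitute your own project details.

import requests

PROMPTQL_URL = "https://api.promptql.pro.hasura.io/query"

# Minimal non-streaming request body; replace the placeholders with your own values.
payload = {
    "version": "v1",
    "promptql_api_key": "<YOUR_API_KEY_HERE>",
    "llm": {"provider": "hasura"},
    "ddn": {
        "url": "https://<PROJECT_NAME>.ddn.hasura.app/v1/sql",
        "headers": {},
    },
    "artifacts": [],
    "timezone": "America/Los_Angeles",
    "interactions": [
        {"user_message": {"text": "What can you do?"}, "assistant_actions": []}
    ],
    "stream": False,
}

response = requests.post(
    PROMPTQL_URL,
    headers={"Content-Type": "application/json"},
    json=payload,
)
response.raise_for_status()
result = response.json()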

LLM Provider Options

The llm and ai_primitives_llm fields support the following providers:

  1. Hasura:

{
  "provider": "hasura"
}

Free credits

You get $20 worth of free credits with Hasura's built-in provider during the trial. Reach out to us to add more credits.

  2. Anthropic:

{
  "provider": "anthropic",
  "api_key": "<your anthropic api key>"
}

  3. OpenAI:

{
  "provider": "openai",
  "api_key": "<your openai api key>"
}
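
Because the two fields are independent, you can mix providers. As a sketch, the fragment below extends the payload dict from the earlier Python example, routing the main conversation through Anthropic while AI primitives use the built-in Hasura provider; the Anthropic key is a placeholder.

# Main LLM from Anthropic, AI primitives on the built-in Hasura provider.
payload.update({
    "llm": {"provider": "anthropic", "api_key": "<your anthropic api key>"},
    # If ai_primitives_llm is omitted, the main LLM provider is reused.
    "ai_primitives_llm": {"provider": "hasura"},
})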

Request DDN Auth

The ddn.headers field can be used to pass any auth header information through to DDN. Read more about auth with these APIs.
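
As a sketch, assuming your DDN project expects a bearer token (the header name and token value depend entirely on how auth is configured for your project), you could extend the earlier payload like this:

# Hypothetical auth header; use whatever header(s) your DDN auth setup expects.
payload["ddn"]["headers"] = {"Authorization": "Bearer <DDN_ACCESS_TOKEN>"}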

Response

The response format depends on the stream parameter:

Non-streaming Response (application/json)

{
  "assistant_actions": [
    {
      "message": "Response message",
      "plan": "Execution plan",
      "code": "Executed code",
      "code_output": "Code output",
      "code_error": "Error message if any"
    }
  ],
  "modified_artifacts": [
    {
      "identifier": "artifact_id",
      "title": "Artifact Title",
      "artifact_type": "text|table",
      "data": "artifact_data"
    }
  ]
}
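
Continuing the Python sketch above, the non-streaming response can be walked like this (field names follow the shape shown above; result is the parsed JSON body):

for action in result["assistant_actions"]:
    print(action["message"])
    if action["code_error"]:
        print("Code execution failed:", action["code_error"])

for artifact in result["modified_artifacts"]:
    # artifact_type is "text" or "table"; data holds the artifact contents.
    print(artifact["identifier"], artifact["title"], artifact["artifact_type"])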

Streaming Response (text/event-stream)

The streaming response sends chunks of data in Server-Sent Events (SSE) format:

data: {
  "message": "Chunk of response message",
  "plan": null,
  "code": null,
  "code_output": null,
  "code_error": null,
  "type": "assistant_action_chunk",
  "index": 0
}

data: {
  "type": "artifact_update_chunk",
  "artifact": {
    "identifier": "artifact_id",
    "title": "Artifact Title",
    "artifact_type": "text|table",
    "data": "artifact_data"
  }
}
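
A rough sketch of consuming this stream in Python, reusing the payload dict from the earlier example with stream flipped to true, and assuming each data: field carries one complete JSON chunk per event:

import json
import requests

payload["stream"] = True
with requests.post(
    "https://api.promptql.pro.hasura.io/query",
    headers={"Content-Type": "application/json"},
    json=payload,
    stream=True,
) as resp:
    resp.raise_for_status()
    for line in resp.iter_lines(decode_unicode=True):
        if not line or not line.startswith("data:"):
            continue  # skip blank separators between events
        chunk = json.loads(line[len("data:"):].strip())
        if chunk["type"] == "assistant_action_chunk":
            print(chunk["message"] or "", end="", flush=True)
        elif chunk["type"] == "artifact_update_chunk":
            print("\n[artifact updated]", chunk["artifact"]["identifier"])
        elif chunk["type"] == "error_chunk":
            raise RuntimeError(chunk["error"])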

Response Fields

Field              | Type  | Description
assistant_actions  | array | List of actions taken by the assistant
modified_artifacts | array | List of artifacts created or modified during the interaction

Assistant Action Fields

Field       | Type   | Description
message     | string | Text response from the assistant
plan        | string | Execution plan, if any
code        | string | Code that was executed
code_output | string | Output from code execution
code_error  | string | Error message if code execution failed

Error Response

{
  "type": "error_chunk",
  "error": "Error message describing what went wrong"
}

Notes for using the Natural Language API

  1. Streaming vs Non-streaming

    • Use streaming (stream: true) for real-time interactive applications
    • Use non-streaming (stream: false) for batch processing or when you need the complete response at once
  2. Artifacts

    • You can pass existing artifacts to provide context for the conversation
    • Both text and table artifacts are supported
    • Artifacts can be modified or created during the interaction
  3. Timezone Handling

    • Always provide a timezone to ensure consistent handling of time-based queries
    • Use standard IANA timezone formats (e.g., "America/Los_Angeles", "Europe/London")
  4. Error Handling

    • Always check for error responses, especially in streaming mode
    • Implement appropriate retry logic for transient failures
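
As a minimal sketch of the retry advice above (the attempt count, timeout, and backoff values are arbitrary choices, not API requirements):

import time
import requests

def query_with_retries(payload, attempts=3):
    for attempt in range(attempts):
        try:
            resp = requests.post(
                "https://api.promptql.pro.hasura.io/query",
                headers={"Content-Type": "application/json"},
                json=payload,
                timeout=120,
            )
            # Treat 5xx responses as transient; raise_for_status covers other errors.
            if resp.status_code >= 500:
                raise requests.HTTPError(f"server error: {resp.status_code}")
            resp.raise_for_status()
            return resp.json()
        except (requests.ConnectionError, requests.Timeout, requests.HTTPError):
            if attempt == attempts - 1:
                raise
            time.sleep(2 ** attempt)  # simple exponential backoff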