Purpose and Vision

This project's goal is to let any person talk to any relational database. For example, if connected to a meal planning application, a user could say 'Create a meal plan for next week with at least two meatless meals', and our engine would create the database mutations necessary to get that done for the user. Our engine has access to the user's specific schema at runtime, so it knows whether it needs to ask follow-up questions or whether it can execute the user's command directly.

To accomplish this we use Devii's schema generation, API, and security engine so that every database has a consistent structure. This means you will need to set up a Devii account to use this API.

Quick Start

  1. Authenticate to your database via https://api.devii.io/auth

  2. Use your access token to send a POST request to https://api.nlapi.io/nlapi

fetch('https://api.nlapi.io/nlapi', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    Authorization: 'Bearer YOUR_ACCESS_TOKEN',
  },
  body: JSON.stringify({
    user_input: 'create new ingredient called pinto beans',
    // context: ['user id is 1'], // optional
    // thread_id: 34232, // use this to follow up on a conversation
  }),
})
  .then((response) => response.json())
  .then((data) => console.log(data))
  .catch((error) => console.error('Error:', error));

Documentation

Key Concepts

Messages

Messages are the foundation of the information passed to and from the NLAPI server. To send a message to the server, a developer simply sends a POST request to the /nlapi endpoint with user_input as part of the JSON object in the body. user_input is the only required field for this endpoint. You can continue a conversation by including the thread_id key in the request (see the Threads section for more info). You can also include the context key in the request (see Context for more info).

Return Messages: The API returns the latest message in the messages array of the returned thread object. The latest message is always at messages[0], and the array is ordered by creation time in descending order (latest -> oldest).

Message Object: Each message object returned will have the following keys:
content : The natural language input from the user (human) or natural language output from our models (bot)
speaker : Identifies who sent the message. Currently always bot or human
created_at : Timestamp of when the message was created [in development]
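Because the messages array is ordered newest-first, pulling the latest reply out of a response is a one-line lookup. A minimal sketch, assuming a thread object shaped like the keys documented above (the sample content is illustrative only):

```javascript
// Return the most recent message in a thread object.
// messages is ordered newest-first, so index 0 is the latest.
function latestMessage(thread) {
  if (!thread.messages || thread.messages.length === 0) return null;
  return thread.messages[0];
}

// Example thread object shaped like the documented response
const thread = {
  thread_id: 34232,
  messages: [
    { content: 'Ingredient "pinto beans" created.', speaker: 'bot' },
    { content: 'create new ingredient called pinto beans', speaker: 'human' },
  ],
};

console.log(latestMessage(thread).content); // the bot's reply
```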

Threads

Threads are simply a conversation of messages.

Using Threads: Sometimes a user will try to interact with the model without providing the information required for a valid database mutation. When this happens, the NLAPI responds in natural language with a message indicating what information it needs to complete the user's request. The developer then needs to pass the thread_id key in the next request to follow up on the conversation. The NLAPI uses the context of the whole thread to complete the request, so the user does not have to repeat information already mentioned in the thread.

New Threads: If no thread_id is provided in the request, a new thread is created and the NLAPI has no access to previous messages.
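The follow-up flow above amounts to echoing the thread_id from the previous response back in the next request. A sketch of building the follow-up payload (endpoint and headers as in the Quick Start example; the sample values are illustrative):

```javascript
// Build a payload that continues an existing conversation.
// Including thread_id gives the NLAPI the full context of the thread;
// omitting it starts a fresh thread with no prior history.
function buildFollowUp(threadId, userInput) {
  return { user_input: userInput, thread_id: threadId };
}

// Suppose the NLAPI asked "What time should the reservation be?"
// on thread 34232. The follow-up answers without repeating details:
const followUp = buildFollowUp(34232, 'make it for 7pm on Friday');

// Sent exactly like the Quick Start request:
// fetch('https://api.nlapi.io/nlapi', {
//   method: 'POST',
//   headers: {
//     'Content-Type': 'application/json',
//     Authorization: 'Bearer YOUR_ACCESS_TOKEN',
//   },
//   body: JSON.stringify(followUp),
// });
```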

Thread Expiration: [in development] After a thread expires, a user cannot add additional messages to it. Expiring threads is a security feature: if we didn't expire threads, and a user had access to something early in the thread but not later, the NLAPI might assume the access still exists and hallucinate bad database interactions. With proper Role-Based Access Control policies, your user will still not be able to perform any action they are not allowed to, but longer threads could lead to a poor user experience and more hallucinations. We may change this in the future.

Thread Object: The thread object is what is returned from every /nlapi request. Each thread object returned will have the following keys:
thread_id : Used to keep track of threads so a user can follow up on a conversation
run_id : Used for providing feedback on individual NLAPI responses
messages : The array of message objects (see Message Object for more details)
created_at : [in development] The timestamp the thread was created
expires_at : [in development] The timestamp when the thread expires. After this, no more messages are accepted and the NLAPI returns the thread object with the last message's content set to 'Error: Thread has Expired, please start a new thread.'

Context

Context is information the developer knows about the user or the current location of the request that the user and the NLAPI would not. For example, in project management software, suppose a user is on a project dashboard page and sends the payload {'user_input' : 'add a task to this project called make documentation for feature x'}. The NLAPI would not know which project the user is referencing. But because the user is on the project page, the developer could add the context key context: ["user is viewing project with id 71"] to the payload, and the NLAPI would understand that the user likely wants to add a task with a project_id of 71.
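The project-dashboard example as a concrete payload. The small helper below merges developer-known facts into any request body; the context strings themselves are free-form natural language, and the helper name is just for illustration:

```javascript
// Merge developer-known facts into a request body as context strings.
function withContext(payload, facts) {
  return { ...payload, context: facts };
}

// The user's input alone is ambiguous ("this project"), so the
// developer injects what they know from the page the user is on:
const payload = withContext(
  { user_input: 'add a task to this project called make documentation for feature x' },
  ['user is viewing project with id 71']
);
```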

How context is implemented is currently being shaped and is subject to change. Please send us your feedback on how you'd like to implement this. Email jase@jasekraft.com

Streaming

Responses can be streamed back from the NLAPI in the form of server-sent events. For more information on server-sent events, you can refer to the MDN documentation. To consume server-sent events you will need an event parser. For JavaScript, you might consider using eventsource-parser.

To request a streamed response, you must include the "options" key containing an object with the key "stream" set to true. A sample payload might look like:

{
  "user_input": "Create a new recipe called 'meatloaf'",
  "thread_id": "a1b2c3d4-5678-90ab-cdef-1234567890ab",
  "context": ["user is on the main dashboard"],
  "options": {
    "stream": true
  }
}

The events that the NLAPI sends in response arrive as Uint8Arrays. Once decoded into plain text, each has two fields, "event" and "data".

The event field describes the type of event that is being sent.

The data field contains a JSON string with information corresponding to the type of the event.

Streaming Events

status_message

Status messages relay information about the steps the NLAPI is taking behind the scenes, whether that be the initial processing of the request, making queries, etc.

event: status_message
data: {"content": "querying", "thread_id": "a1b2c3d4-5678-90ab-cdef-1234567890ab"}

message_chunk

Message chunks contain JSON strings with "content" denoting the current message token and "thread_id" containing the id of the current conversation.

event: message_chunk
data: {"content": "ketchup", "thread_id": "a1b2c3d4-5678-90ab-cdef-1234567890ab"}

close

The close event is the last event in the response. In addition to "content" and "thread_id", close events also contain a "run_id".

event: close
data: {"content": "ketchup", "thread_id": "a1b2c3d4-5678-90ab-cdef-1234567890ab", "run_id": "a1b2c3d4-5678-90ab-cdef-1234567890ab"}

error

If the NLAPI encountered an error while processing the request, it will send the error message in an error event.

event: error
data: {"content": "Error: API Connection error", "thread_id": "a1b2c3d4-5678-90ab-cdef-1234567890ab", "run_id": "a1b2c3d4-5678-90ab-cdef-1234567890ab"}
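Putting the event shapes above together: after decoding each Uint8Array chunk, the text can be split into "event:" / "data:" pairs. This is a hand-rolled sketch for illustration; a production client should use a full SSE parser such as eventsource-parser, which also handles events split across chunk boundaries:

```javascript
// Minimal parser for the decoded event text shown above.
// Each event is an "event: ..." line followed by a "data: ..." line,
// with events separated by a blank line.
function parseSseChunk(text) {
  const events = [];
  for (const block of text.split('\n\n')) {
    let type = null;
    let data = null;
    for (const line of block.split('\n')) {
      if (line.startsWith('event: ')) type = line.slice(7);
      else if (line.startsWith('data: ')) data = JSON.parse(line.slice(6));
    }
    if (type !== null && data !== null) events.push({ event: type, data });
  }
  return events;
}

// Example: parse the decoded text of a message_chunk event
const raw =
  'event: message_chunk\n' +
  'data: {"content": "ketchup", "thread_id": "a1b2c3d4-5678-90ab-cdef-1234567890ab"}\n\n';
const events = parseSseChunk(raw);
console.log(events[0].event, events[0].data.content); // message_chunk ketchup
```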

Feedback

To continue improving our models, we optionally allow users to give feedback on responses. We currently use this data internally to continually improve our models. However, if you'd prefer not to participate in the continued refinement of our data, simply reach out and we can discuss options.
Coming In Some Amount of Time: with enterprise installations of this software, the enterprise's exclusive model will continue to learn from the data in this feedback loop.

Developers can send POST requests to /feedback with the feedback object to provide feedback and improve our models. Users can only provide feedback on responses from threads they are logged into. As with requests made to /nlapi, you must include your access token in the Authorization header of your request.

Feedback Object

{
  "run_id": string, // The id of the run returned from the chat response.
  "score": 0 | 1 // Was the response good or bad? 0 for bad, 1 for good.
}
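A sketch of submitting feedback, assuming the /feedback endpoint lives on the same base URL and uses the same bearer-token auth as /nlapi (the run_id value is illustrative):

```javascript
// Build a feedback object for a run: 1 for a good response, 0 for bad.
function buildFeedback(runId, good) {
  return { run_id: runId, score: good ? 1 : 0 };
}

// run_id comes from the thread object (or the close event) of a response
const feedback = buildFeedback('a1b2c3d4-5678-90ab-cdef-1234567890ab', true);

// fetch('https://api.nlapi.io/feedback', {
//   method: 'POST',
//   headers: {
//     'Content-Type': 'application/json',
//     Authorization: 'Bearer YOUR_ACCESS_TOKEN',
//   },
//   body: JSON.stringify(feedback),
// });
```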