Large Language Models integration
This feature is available on these plans:
✔️ Starter ✔️ Growing ✔️ Scaling ✔️ Enterprise
Once you activate any LLM integration, we will create vector representations of your documents in the background.
After that, the search function gains the "Ask AI" feature, where your team or customers can ask natural-language questions answered with the context of the knowledge in your workspace.
We currently integrate with OpenAI, but we plan to integrate with Cohere and have our own fine-tuned LLM in the future.
Activating our OpenAI integration will enable you to select spaces indexed/embedded with OpenAI GPT technologies and enable the generative search function.
We know some customer data is sensitive, so we have a switch on every space's settings to enable it selectively.
On top of that, OpenAI has clarified its data usage policy for embeddings, as described here:
https://openai.com/policies/api-data-usage-policies
Extract from OpenAI's data usage policies (as of 10.05.2023):
"Starting on March 1, 2023, we are making two changes to our data usage and retention policies:
- OpenAI will not use data submitted by customers via our API to train or improve our models, unless you explicitly decide to share your data with us for this purpose.
- Any data sent through the API will be retained for abuse and misuse monitoring purposes for a maximum of 30 days, after which it will be deleted (unless otherwise required by law)."
Once activated, our OpenAI integration indexes your content in the background. To optimize the cost of creating and updating your embeddings, they will update once every 30 minutes, not as you type. Then the generative search function is enabled.
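To give a rough idea of what background indexing involves, here is a minimal sketch in Python. The chunking function and parameter names are illustrative assumptions, not our actual implementation; in production each chunk would be sent to OpenAI's Embeddings API and the resulting vectors stored for search.

```python
# Hypothetical sketch of background indexing: documents are split into
# word-bounded chunks, and each chunk would then be embedded.
def chunk_text(text: str, max_words: int = 200) -> list[str]:
    """Split a document into chunks of at most max_words words."""
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# In a real indexer, each chunk would be embedded via OpenAI's
# Embeddings API (e.g. the text-embedding-ada-002 model) and the
# vectors stored for similarity search.
chunks = chunk_text("word " * 450)
print(len(chunks))  # 3 chunks: 200 + 200 + 50 words
```

Batching updates every 30 minutes (rather than re-embedding on every keystroke) keeps the number of API calls, and therefore your OpenAI bill, low.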
How to set up & configure the OpenAI integration:
Go to your workspace settings, find the OpenAI integration, and enable it by entering your OpenAI API key.

Go to Space Settings / Large Language Models and enable the OpenAI LLM; we allow you to enable LLMs selectively on your spaces.

Hit Search (Cmd + Shift + F) when in your workspace or (Cmd + K) when on a public doc space and click on "Ask our AI" to ask a question. Since you're reading this now, why don't you do it right here? 😉

How does OpenAI integration work?
It's a feature available for internal and customer-facing docs, opt-in at the space level.
When the OpenAI integration is activated, your team or customers can use the search function to ask questions like "How do I get started with your Public API? Can you give me an example in Java or Haskell?" and get back an answer that feels human, grounded in real context about your team's work or product.
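Under the hood, this style of generative search typically embeds the question, finds the most similar document chunks by vector similarity, and passes them to the model as context. The sketch below shows the retrieval step with toy two-dimensional vectors standing in for real embeddings; the function names are illustrative assumptions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def most_relevant(question_vec: list[float],
                  chunk_vecs: list[list[float]]) -> int:
    """Return the index of the chunk most similar to the question."""
    return max(range(len(chunk_vecs)),
               key=lambda i: cosine_similarity(question_vec, chunk_vecs[i]))

# Toy vectors standing in for real embeddings.
question = [1.0, 0.0]
chunks = [[0.0, 1.0], [0.9, 0.1]]
print(most_relevant(question, chunks))  # 1: the second chunk points the same way
```

The winning chunk's text would then be included in the prompt to a chat model, which is why answers come back grounded in your workspace's knowledge rather than generic training data.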
So what are some benefits?
- faster, automated onboarding of employees and collaborators, with questions answered on the spot from your internal knowledge base and wikis;
- fewer support tickets, because users can ask the docs directly. Your CS people focus more on building & maintaining an accurate knowledge base that feeds the LLMs, and less on answering questions individually.
What is the cost?
We don't charge for the extra functionality of integrating LLMs like OpenAI GPT. Since we ask you for your API keys, you will be billed directly by OpenAI. Take a look at their pricing. We mostly use the Embeddings API.