"Ask a table a question" token usage and feature request


I'd like to quickly clarify how tokens are used by the “Ask a table a question” feature.

I'm guessing here, but is the way it works that you load the entire table into the prompt, ask GPT-4 to identify which record likely contains the answer, and then respond with the corresponding values in the answer field?

If this is the case, won't this use a lot of tokens as the dataset grows, since the entire dataset has to be read each time?
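To illustrate the concern (this is a rough sketch, not Glide's actual implementation — the per-row token count and the helper name are assumptions for illustration): if the whole table is placed in the prompt, token usage grows linearly with the number of rows.

```python
def estimate_prompt_tokens(num_rows, tokens_per_row=50, question_tokens=30):
    """Rough estimate for a 'whole table in the prompt' approach:
    every row is sent as context on every question."""
    return num_rows * tokens_per_row + question_tokens

# A 100-row table vs. a 10,000-row table — cost scales with table size.
print(estimate_prompt_tokens(100))     # small table
print(estimate_prompt_tokens(10_000))  # large table, ~100x the tokens
```

With assumed values of ~50 tokens per row, a 10,000-row table would mean roughly half a million tokens per question, which is exactly why retrieval-based approaches are attractive.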

Following on from this: I love the use case of using Glide to create a Q&A databank. Given the progress on AI features, is there any consideration of supporting embeddings, and potentially of allowing a vector database such as Pinecone as a source? This would solve the token issue and enable some interesting use cases for building on top of LLMs.
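For anyone unfamiliar with the idea: embeddings let you retrieve only the few most relevant rows and send just those to the model, instead of the whole table. Here's a toy sketch of the retrieval step — the bag-of-words "embedding" is a stand-in for a real embedding model, and the row data is made up; a real setup would use an embedding API and a vector store like Pinecone.

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words vector, standing in for a real embedding model."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical table rows — in practice these come from your Glide table.
rows = [
    "Refund policy: refunds are issued within 14 days",
    "Shipping: orders ship within 2 business days",
    "Support hours: weekdays 9am to 5pm",
]
index = [(row, embed(row)) for row in rows]  # precomputed once, not per question

def top_k(question, k=1):
    """Return the k most similar rows; only these go into the LLM prompt."""
    q = embed(question)
    ranked = sorted(index, key=lambda item: cosine(q, item[1]), reverse=True)
    return [row for row, _ in ranked[:k]]

print(top_k("when do orders ship"))
```

The key point: the prompt size now depends on `k`, not on the table size, so token cost stays roughly constant as the dataset grows.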


We’re going to build the vector features directly into Glide.


Thanks for the post — I'd really love to have vector features within Glide. In one of my apps, I'm feeding in all historical data to build a sort of chatbot, and I'd definitely love more control over that.



That's awesome news!

Hey David,

Great to hear! Any update concerning a timeline for the implementation of vector features?