Hi,
I want to quickly clarify how tokens are used by the “Ask a table a question” feature.
I am guessing here, but does it work by loading the entire table into the prompt, asking GPT-4 to identify which record likely contains the answer, and then responding with the corresponding values in the answer field?
If this is the case, won’t it use more tokens the larger your dataset is, because the model has to read the entire dataset each time?
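To make the concern concrete, here is a rough back-of-envelope sketch. It assumes (this is my guess, not confirmed) that the whole table is pasted into the prompt on every question; the per-row and overhead token counts are made-up averages for illustration.

```python
# Back-of-envelope token cost, ASSUMING the full table is sent each time.
# Both constants are illustrative assumptions, not measured values.
TOKENS_PER_ROW = 30      # assumed average tokens for one small record
PROMPT_OVERHEAD = 200    # assumed tokens for instructions + the question

def tokens_per_question(num_rows: int) -> int:
    # Every question re-reads the entire table, so cost grows linearly
    # with the number of rows.
    return PROMPT_OVERHEAD + num_rows * TOKENS_PER_ROW

for n in (100, 1_000, 10_000):
    print(n, tokens_per_question(n))
# 100 rows    ->   3,200 tokens per question
# 1,000 rows  ->  30,200 tokens per question
# 10,000 rows -> 300,200 tokens per question
```

Under these assumptions a 10,000-row table would blow past GPT-4’s context window on every single question, which is what prompted my follow-up below.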
Following on from this: I love the use case of using Glide to create a Q&A databank. Given the progress made on AI features, is there any consideration of supporting embeddings, and potentially of adding a vector database such as Pinecone as a source? This would solve the token issue and allow for some interesting use cases when it comes to building your own LLM-powered apps.
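For anyone unfamiliar with the idea, here is a minimal sketch of what embedding-based retrieval would look like. Everything here is illustrative: the `embed` function is a toy bag-of-words stand-in for a real embedding model, and none of these names are Glide’s or Pinecone’s actual APIs. The point is that each row is embedded once, and at question time only the most similar rows go into the prompt, so token cost stays roughly constant no matter how large the table gets.

```python
# Toy sketch of embedding-based retrieval (illustrative only; a real setup
# would use an embedding model and a vector database such as Pinecone).
import math

def embed(text: str) -> dict[str, float]:
    # Stand-in for a real embedding model: a normalized bag-of-words vector.
    counts: dict[str, float] = {}
    for word in text.lower().split():
        word = word.strip(".,:?!")
        counts[word] = counts.get(word, 0.0) + 1.0
    norm = math.sqrt(sum(v * v for v in counts.values())) or 1.0
    return {w: v / norm for w, v in counts.items()}

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    return sum(a[w] * b.get(w, 0.0) for w in a)

rows = [
    "Order 1001: blue widget, shipped 2023-05-01",
    "Order 1002: red gadget, pending",
    "FAQ: refunds are processed within 7 days",
]
# The index is built once, not rebuilt on every question.
index = [(row, embed(row)) for row in rows]

question = "How long do refunds take?"
q_vec = embed(question)
top = max(index, key=lambda item: cosine(q_vec, item[1]))[0]
print(top)  # -> "FAQ: refunds are processed within 7 days"
# Only `top` (plus the question) would be sent to GPT-4, instead of the
# whole table.
```

With a real vector database the similarity search happens server-side over millions of rows, which is why it would pair so naturally with larger Glide tables.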