I read all about the new Pinecone integration! It seems awesome that you can create context-aware AI agents right from the Glide frontend. Does anyone already use it, and how?
I am eager to learn about some example cases. 2025 is supposedly all about AI agents, but we struggle to find real use cases for our marketing agency.
Thank you, looking forward to your awesome creations!
That’s interesting, watched it! What I would love is for Glide to add an integrated AI agent so that you can chat with ALL your data in Glide. Something like the Notion AI Agent.
I am using the integration to upsert profiles into an index and then compare them for similarity; I can do a deeper dive once I get through to the fully implemented end result. One important limitation of the Glide integration is the lack of support for metadata (of course, if I’m wrong about this, please correct me). Because metadata cannot be included in the text-based index body, I have to use make.com integrations instead of Glide’s native implementation. I would love for Glide to enable metadata support!
Metadata is important for pre-filtering results when querying the index for vector similarity, in cases where you don’t want to return results that are obviously not what the user wanted. For example, we want to use a pre-filter to restrict similarity results to records whose metadata fits the preferences stored in the profile. If a user is looking for a specific set of circumstances when comparing values, the resulting similarity matches should fall within the confines of those preferences.
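To make it concrete, here is a rough sketch of what I mean, using the Pinecone Python SDK outside of Glide (the index name, IDs and metadata fields are just made-up examples, and the vectors are placeholders for whatever your embedding model produces):

```python
from pinecone import Pinecone

pc = Pinecone(api_key="YOUR_API_KEY")
index = pc.Index("profiles")  # hypothetical index name

# Placeholder vectors; in practice these come from your embedding model
profile_embedding = [0.0] * 1536
preference_embedding = [0.0] * 1536

# Upsert a profile with metadata attached alongside the vector itself
index.upsert(vectors=[{
    "id": "profile-123",
    "values": profile_embedding,
    "metadata": {"city": "Austin", "budget_max": 2000, "pets_ok": True},
}])

# Query with a metadata pre-filter so obviously-wrong records are excluded
# before vector similarity is even considered
results = index.query(
    vector=preference_embedding,
    top_k=5,
    filter={"city": {"$eq": "Austin"}, "budget_max": {"$lte": 2500}},
    include_metadata=True,
)
```

That `metadata` field on the upsert and the `filter` argument on the query are exactly the parts the Glide integration currently doesn’t expose.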
Note that I am using indexes, not document search or assistants (yet).
An update: Upon further investigation, the make.com Pinecone modules are pre-configured for storing vectors, not for actually performing the embeddings, which means you need to select an embedding API (a model) in addition to storing the vectors. The vectors themselves can be queried in a Make scenario, but the embedding is managed separately, which complicates the workflow compared to Glide’s integration. I filed a feature request for metadata support here:
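In other words, the Make scenario has to chain two separate services where Glide does it in one step. Expressed in Python just to make the two steps visible (using OpenAI embeddings purely as an example of the embedding API you would pick), it’s roughly:

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()                    # the embedding API is one service...
pc = Pinecone(api_key="YOUR_API_KEY")       # ...and the vector store is another
index = pc.Index("profiles")                # hypothetical index name

profile_text = "2-bedroom apartment in Austin, max budget 2000, pets allowed"

# Step 1: produce the embedding with the model you selected
emb = openai_client.embeddings.create(
    model="text-embedding-3-small",
    input=profile_text,
).data[0].embedding

# Step 2: store the resulting vector in Pinecone
index.upsert(vectors=[{"id": "profile-123", "values": emb}])
```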
I am running into a situation where vector matching is not returning scores as accurate as I’d like, so I am working on making the matching return higher-confidence results. Currently my index and my query are returning vector match scores of about 60% on average, based on a split of index and query.
Some side notes about my use case: I store attributes about an object and index them (upsert them) into Pinecone, using a contextually written template column with the values inserted as text. I then write a contextual prompt with “preferences” and run it as a query against the existing index, where records with matching attributes should return a high-confidence match based on the preferences in the query.
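A stripped-down sketch of that pattern (again outside of Glide, with placeholder text and a hypothetical `embed()` helper standing in for whichever embedding call is configured) looks something like this:

```python
from openai import OpenAI
from pinecone import Pinecone

openai_client = OpenAI()
index = Pinecone(api_key="YOUR_API_KEY").Index("profiles")  # hypothetical index name

def embed(text: str) -> list[float]:
    # Stand-in for whichever embedding model you configured (small model as an example)
    return openai_client.embeddings.create(
        model="text-embedding-3-small", input=text
    ).data[0].embedding

# What the template column produces: object attributes stitched into contextual text
object_text = "A 2-bedroom apartment in Austin with a balcony, close to downtown."
# The "preferences" prompt used as the query
preference_text = "Looking for a 2-bedroom apartment in Austin, ideally with outdoor space."

index.upsert(vectors=[{"id": "obj-1", "values": embed(object_text)}])

match = index.query(vector=embed(preference_text), top_k=1).matches[0]
print(match.score)  # this similarity score is where my ~0.6 averages show up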
One way to optimize this is to use higher-dimensional vectors, i.e. a larger embedding model that may be better able to understand the semantic context I am providing. However, switching to the text-embedding-3-large model breaks the Glide integration, because the integration limits you to the 1536 dimensions of the small embedding model, while the large one uses 3072.
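For reference, outside of Glide you can create an index sized for the larger model; a minimal sketch with the Pinecone Python SDK (index name, cloud and region are just placeholders) would be:

```python
from pinecone import Pinecone, ServerlessSpec

pc = Pinecone(api_key="YOUR_API_KEY")

# A 3072-dimension index sized for text-embedding-3-large;
# Glide's integration currently assumes the 1536 dimensions of the small model
pc.create_index(
    name="profiles-large",    # hypothetical name
    dimension=3072,
    metric="cosine",
    spec=ServerlessSpec(cloud="aws", region="us-east-1"),
)
```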
So in short, there are some limitations to doing vector matching with indexes through the Pinecone integration, but it may be good enough for some use cases. I am finding it somewhat limiting for mine, so it would be cool to see more use cases shared in webinars for Experts and in other posts here in the community.