Either way, you would have to use something like Zapier/Make to create your own assistant through OpenAI, and interact with it. It’s not the same thing as “complete chat with history”.
If it’s just the normal ChatGPT on their web version, I’m not sure there’s a difference. Please check which GPT model your Glide setup and the web version are running on.
So in your case it’s just the web version of ChatGPT?
If you compare that with “complete chat with history” action, assuming you’re using the same model on both sides, with the same custom instructions, I would not expect much difference.
I assume you mean the normal version of ChatGPT. Let’s not use the term “assistants”, since that’s the name of a different thing, as I said above.
I’ve used it in Make before, when I had more actions to run afterwards. If nothing else is going on in my flow, I can just use the baked-in OpenAI actions or Glide AI actions.
So if you are indeed talking about the Assistants API in OpenAI, it lets you use Code Interpreter, Retrieval, and Function Calling, which make it more useful for specific use cases.
I don’t know the exact underlying process of “Complete Chat with History”; my best guess is that it inserts the current conversation’s context into the input sent to OpenAI, and Glide automatically knows where to cut off the context. It’s helpful, but under the hood it’s still a normal OpenAI chat endpoint call, not an Assistants API call.
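If that guess is right, the action could be approximated like this. This is a minimal sketch only, with all names hypothetical and characters counted instead of real tokens: keep a running message list, trim the oldest turns to fit a budget, and build the payload that would go to the normal chat completions endpoint.

```python
# Hypothetical sketch of what a "chat with history" action might do under
# the hood. Not Glide's actual implementation.

def trim_history(messages, max_chars=4000):
    """Drop the oldest non-system messages until the rough size budget fits.
    (A real implementation would count tokens, not characters.)"""
    system = [m for m in messages if m["role"] == "system"]
    rest = [m for m in messages if m["role"] != "system"]
    while rest and sum(len(m["content"]) for m in system + rest) > max_chars:
        rest.pop(0)  # cut off from the start of the conversation
    return system + rest

def build_payload(history, user_message, model="gpt-3.5-turbo"):
    """Build the dict that would be POSTed to /v1/chat/completions."""
    messages = trim_history(history + [{"role": "user", "content": user_message}])
    return {"model": model, "messages": messages}
```

The custom instructions survive as the system message while the oldest user/assistant turns get dropped first, which matches the behavior where long conversations keep working but forget their earliest exchanges.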
It’s good that you pointed out that an Assistant is a separate entity in OpenAI. Great that we both understand it.
Thank you for mentioning all the details about it. My question is still about the performance.
Maybe somebody else can add something on the matter.
The thing is, I don’t need those special functions for the project I mentioned. It’s about chatting on a topic and summarizing the outcome, like deriving goals and subgoals for something you want to achieve.
If anybody has ever compared their performance (outside of projects that need special functions like Code Interpreter, Retrieval, or Function Calling), I would love to hear about your experience.
What I really need to know is whether you think it makes sense to use the Assistant through an automation instead of Glide’s “Complete Chat with History” action.
I have a project that needs to use the OpenAI Assistant via the API call (NOT using the Glide integration, and NOT a call to the general OpenAI API).
I have set up an OpenAI Assistant with documents for the project (some could be huge!), and there are some custom instructions built in. This means that every request from the Glide app would hop over to the Assistant and bring back results that have gone through the instructions and docs in the Assistant, which should be more focused.
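For anyone who hasn’t called it directly, the request hop looks roughly like this. A hedged sketch based on the openai Python SDK’s `beta.threads` namespace; the client is passed in, and `ask_assistant` is my own hypothetical wrapper, not a Glide or OpenAI function:

```python
import time

def ask_assistant(client, assistant_id: str, question: str) -> str:
    """Hypothetical wrapper around the Assistants API call sequence:
    create a thread, add the user message, start a run, poll, read reply."""
    thread = client.beta.threads.create()
    client.beta.threads.messages.create(
        thread_id=thread.id, role="user", content=question)
    run = client.beta.threads.runs.create(
        thread_id=thread.id, assistant_id=assistant_id)
    # Runs are asynchronous: poll until the Assistant has finished
    # applying its instructions and attached documents.
    while run.status in ("queued", "in_progress"):
        time.sleep(1)
        run = client.beta.threads.runs.retrieve(
            thread_id=thread.id, run_id=run.id)
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    # The newest message (the Assistant's reply) comes first.
    return messages.data[0].content[0].text.value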
I am already using Chat with History to store content into effectively an OpenAI chat thread (using the Session ID). I ‘preload’ content in when a user clicks (quickly, sending data in with gpt-3.5-turbo) as this sets up for later conversations with the content in a form of ongoing chat.
My use case? International development agencies want to scale their funded social projects - so we need to know the project, the company standard language and approach, then the specific problem, and the user’s preference for response. The raw message to send eats up most of the tokens! So I would get a very short, curtailed response. Now I do one or more Complete Chat (with History), using a common Session ID and gpt-3.5 for speed, before the user continues to do more things. This has given me dramatically better results - confirmed by the client.
In addition, to solve the cut-off responses, I am using two Glide AI columns: first a ‘text to boolean’ check (‘is this likely an incomplete answer?’), then a ‘generate text’ column (‘complete the response without duplication’), plus an if-then-else column to work out what to display.
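As a rough illustration of that column logic (purely hypothetical Python; the real check is an AI column, not a hand-written heuristic): flag a reply that ends mid-sentence as likely truncated, then decide what to display.

```python
def looks_incomplete(reply: str) -> bool:
    """Stand-in for the 'text to boolean' AI column: flag replies that
    end mid-sentence as likely cut off by the token limit."""
    reply = reply.rstrip()
    if not reply:
        return False
    return reply[-1] not in ".!?\"')]"

def display_text(reply: str, continuation: str) -> str:
    """Stand-in for the if-then-else column: stitch on the
    'complete the response' output only when the reply looked truncated."""
    return reply + " " + continuation if looks_incomplete(reply) else reply
```

An AI column handles this far more robustly than a punctuation check (a list that ends mid-item still ends with a period, for example), but the control flow is the same.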
OpenAI Assistant will give next level functionality… I hope!
I’m working with a client on a project where a user talks with an AI to shape certain results, which are then stored in a specific format.
Getting OpenAI to reply to a prompt in a certain format works best with Function Calling. Function Calling is not available through Glide’s out-of-the-box integration with OpenAI, but it is available in OpenAI Assistants.
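For context, the kind of schema Function Calling works with looks like this. A sketch with hypothetical names (`record_goals` is invented for illustration), using the `tools` format from the chat completions API:

```python
import json

# Hypothetical tool schema for forcing a structured reply (goal + subgoals),
# matching the use case of storing chat outcomes in a fixed format.
GOALS_TOOL = {
    "type": "function",
    "function": {
        "name": "record_goals",
        "description": "Store the user's goal and subgoals in a fixed format.",
        "parameters": {
            "type": "object",
            "properties": {
                "goal": {"type": "string"},
                "subgoals": {"type": "array", "items": {"type": "string"}},
            },
            "required": ["goal", "subgoals"],
        },
    },
}

def parse_tool_call(arguments_json: str) -> dict:
    """Parse the JSON arguments the model returns in its tool call,
    ready to be written back into the app's tables."""
    data = json.loads(arguments_json)
    return {"goal": data["goal"], "subgoals": list(data["subgoals"])}
```

Instead of hoping the model formats its prose correctly, the schema constrains the reply, and the parsed dict maps cleanly onto columns.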
It looks like Marco recently made an interesting video on how to use GPT Assistants with Glide. It’s on his Loqode School.
Thanks Marco @Loqode, great job!