OpenAI in Glide or AI Assistant in GPT account

Hello dear Gliders!

I’m working with a client on a project where the user talks with an AI to shape certain results, which are then stored in a specific format.

For this purpose I built a chat (using the OpenAI Complete Chat with History action).

But in the meantime my client played with a ChatGPT assistant right inside his GPT account.

He thinks it works better (no evidence) and wants to use it in Glide somehow (through Zapier).

Here is my question.

Does it make any sense?

To my understanding, a GPT assistant in a GPT account and the OpenAI chat with history are pretty much the same thing. Please correct me if I’m wrong.

Also, please share examples if you have ever done anything like that.

Does this mean a custom GPT? Or the Assistants API?

Either way, you would have to use something like Zapier/Make to create your own assistant through OpenAI, and interact with it. It’s not the same thing as “complete chat with history”.

The API is the same, and it is the same ChatGPT model.

What do you mean by “it’s not the same thing as ‘complete chat with history’”?

When you say “assistant”, that is the name of one of their API endpoints, which allows people to build chatbots that are specific to a use case.

https://platform.openai.com/docs/assistants/overview
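
To make that concrete, here’s a rough sketch of what calling that endpoint directly looks like with the OpenAI Python SDK. The assistant name, instructions, and model are just placeholders, and the Assistants API surface has changed between versions, so treat this as illustrative rather than exact:

```python
# Rough sketch of the Assistants API flow (placeholder names/instructions).
import time
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. Create an assistant with its own instructions and tools.
assistant = client.beta.assistants.create(
    name="Goal-setting coach",
    instructions="Help the user turn a vague goal into concrete subgoals.",
    model="gpt-4-turbo",
    tools=[{"type": "code_interpreter"}],
)

# 2. Conversations live in threads; you add messages and run the assistant on them.
thread = client.beta.threads.create()
client.beta.threads.messages.create(
    thread_id=thread.id, role="user", content="I want to get fit this year."
)
run = client.beta.threads.runs.create(thread_id=thread.id, assistant_id=assistant.id)

# 3. Runs are asynchronous, so poll until finished, then read the latest reply.
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(thread_id=thread.id, run_id=run.id)

messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)
```

That thread/run lifecycle is the main structural difference from a plain chat call.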

If it’s just the normal ChatGPT on their web version, I’m not sure there’s a difference. Please check which GPT model your Glide setup and the web version are running on.

Yes, you are right about the Assistant in ChatGPT. It just uses a certain instruction.

But the question is, how is it different from the assistant you build inside Glide with “complete chat with history”?

It looks like it has the same functionality.

Have you ever used a GPT Assistant with Make or Zapier instead of the GPT chat with history integrated in Glide? Would you do that?

So in your case it’s just the web version of ChatGPT?

If you compare that with the “complete chat with history” action, assuming you’re using the same model on both sides with the same custom instructions, I would not expect much difference.

I assume you mean the normal version of ChatGPT; let’s just not use the term “assistants”, since it’s the name of a different thing, as I said above.

I have used it in Make before, when I had more actions to do afterwards. If I don’t have anything else going on in my flow, I just use the baked-in OpenAI actions/Glide AI actions.

I am talking about the Assistant. I used the right term.

As I said, I am trying to understand the difference.

And how it can work better if, under the hood, it is the same GPT with an additional instruction.

Meanwhile, chat with history in Glide is also GPT with a certain instruction, and it seems to have even more settings.

Thank you for sharing that you used it when you had more actions to do.

My question is about performance.

So if you are indeed talking about the Assistants API in OpenAI, it allows you to use Code Interpreter, Retrieval, and Function calling. Those make it more useful for specific use cases.

I don’t know exactly how “Complete Chat with History” works; my best guess is that it’s just a way to insert the current conversation’s context into the input that is sent to OpenAI, and Glide automatically knows where to cut off the context. It’s helpful, but under the hood it’s still a normal OpenAI chat endpoint call, not an Assistants API call.
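
To make that contrast concrete, a plain chat-endpoint call with history is basically a growing list of messages that gets resent (and trimmed) on every request, roughly like the sketch below. This is only my guess at the pattern, not Glide’s actual implementation, and the model and instructions are placeholders:

```python
# Minimal sketch of "chat with history" against the plain chat endpoint:
# the whole (trimmed) conversation is resent on every call.
from openai import OpenAI

client = OpenAI()

history = [
    {"role": "system", "content": "You help the user define goals and subgoals."},
]

def ask(user_text: str, max_turns: int = 20) -> str:
    history.append({"role": "user", "content": user_text})
    # Crude context cutoff: keep the system prompt plus the last N messages.
    trimmed = [history[0]] + history[1:][-max_turns:]
    response = client.chat.completions.create(model="gpt-4-turbo", messages=trimmed)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("I want to launch a small online course. Where do I start?"))
```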


It’s nice that you pointed out that an assistant is a different entity inside GPT. Great that we both understand it.

Thank you for mentioning all the details about it. My question is still about the performance.

Maybe somebody else can add something on the matter.

The thing is, I don’t need those special functions for the project I mentioned. It is about chatting on a topic and summarizing the outcome, like getting goals and subgoals to achieve something you want.

If anybody has ever compared their performance (not in a project that needs special functions like Code Interpreter, Retrieval, and Function calling), I would love to hear about your experience.

What I really need to know is whether you think it makes sense to use that assistant through automation instead of the “chat with history” function in Glide.

Yeah if you’re just comparing normal “calls” to OpenAI to get an answer, I wouldn’t expect any difference.


I have a project that needs to use the OpenAI Assistant via the API call (NOT using the Glide integration, and NOT a call to the general OpenAI API).

I have set up an OpenAI Assistant with documents for the project (some could be huge!), and there are some custom instructions built in. This means that every request from the Glide app hops over to the Assistant, runs through the instructions and docs there, and comes back with (hopefully) more focused results.
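
In case it helps anyone, the document side of that setup looks roughly like the sketch below, written against the original (v1) shape of the Assistants API. The file name and instructions are placeholders, and newer versions of the API attach files through vector stores and a “file_search” tool instead, so check the current docs:

```python
# Rough sketch (v1 Assistants API shape): upload a project doc and attach it
# to an assistant so Retrieval can pull from it. Newer API versions use
# vector stores + the "file_search" tool instead of "retrieval" + file_ids.
from openai import OpenAI

client = OpenAI()

doc = client.files.create(
    file=open("project_standards.pdf", "rb"),  # placeholder document
    purpose="assistants",
)

assistant = client.beta.assistants.create(
    name="Scaling advisor",
    instructions="Answer using the agency's standard language and approach.",
    model="gpt-4-turbo",
    tools=[{"type": "retrieval"}],
    file_ids=[doc.id],
)
```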

I am already using Chat with History to store content into what is effectively an OpenAI chat thread (using the Session ID). I ‘preload’ content when a user clicks (quickly, sending data in with gpt-3.5-turbo), as this sets up later conversations with that content as part of an ongoing chat.

My use case? International development agencies want to scale their funded social projects - so we need to know the project, the company’s standard language and approach, then the specific problem, and the user’s preference for the response. The raw message to send eats up most of the tokens! So I would get a very short, curtailed response. Now I do one or more Complete Chat (with History) calls, using a common Session ID and gpt-3.5 for speed, before the user continues to do more things. This has given me dramatically better results - confirmed by the client.
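
Outside of Glide, that preload trick maps to a simple pattern against the chat endpoint: push the background context in as early messages under a shared session, then let the later user-facing calls reuse that history. Here is a hedged sketch with made-up session handling (Glide’s Session ID mechanics may differ):

```python
# Sketch of the "preload" pattern: stuff background context into a session's
# history with a fast, cheap model before the user starts asking questions.
from openai import OpenAI

client = OpenAI()
sessions: dict[str, list[dict]] = {}  # session_id -> message history

def preload(session_id: str, project_brief: str, company_style: str) -> None:
    sessions[session_id] = [
        {"role": "system", "content": "You help scale funded social projects."},
        {"role": "user", "content": f"Project brief:\n{project_brief}"},
        {"role": "user", "content": f"Company language and approach:\n{company_style}"},
    ]
    # A quick acknowledgement call bakes the context into the ongoing chat.
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=sessions[session_id]
    )
    sessions[session_id].append(
        {"role": "assistant", "content": reply.choices[0].message.content}
    )

def ask(session_id: str, question: str) -> str:
    sessions[session_id].append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo", messages=sessions[session_id]
    )
    answer = reply.choices[0].message.content
    sessions[session_id].append({"role": "assistant", "content": answer})
    return answer
```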

In addition, to solve the cut-off responses, I am using 2 Glide AI columns - the first is a ‘text to boolean’ check (‘is this likely an incomplete answer?’), then a ‘generate text’ column to ‘complete the response without duplication’, and an if-then-else column to work out what to display.
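
Outside of Glide, that two-column check could be approximated with two extra calls, a yes/no check and then a continuation, something like the sketch below. The prompts are made up, and Glide’s AI columns may work differently under the hood:

```python
# Sketch of the "is it cut off? then finish it" pattern with two extra calls.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-3.5-turbo"

def looks_incomplete(answer: str) -> bool:
    check = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": "Does this answer look cut off mid-thought? "
                       "Reply only YES or NO.\n\n" + answer,
        }],
    )
    return check.choices[0].message.content.strip().upper().startswith("YES")

def complete_if_needed(answer: str) -> str:
    if not looks_incomplete(answer):
        return answer
    continuation = client.chat.completions.create(
        model=MODEL,
        messages=[{
            "role": "user",
            "content": "Continue this answer where it stops, without repeating "
                       "anything already said:\n\n" + answer,
        }],
    )
    return answer + continuation.choices[0].message.content
```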

OpenAI Assistant will give next level functionality… I hope!


Mark, thank you very much for explaining your case!

This is interesting indeed.
