I’m encountering some issues with the OpenAI components. The response time is excessively long, and occasionally the API call fails after an extended wait. Is anyone else experiencing this problem?
If you have encountered this issue and know how to solve it, please let me know.
What actions/columns are you using, and how are you configuring them?
We’ve been using the “Send Message to ChatGPT without history” action for about 3 to 4 months, and it worked well. Recently, though, we’ve been having problems with the GPT-4 model: sometimes it works fine, but other times it doesn’t respond and shows a “User aborted request” error. We’ve tried changing the system prompt and the temperature, rephrasing the task, and even using other versions of GPT-4, but nothing has helped.
Switching to GPT-3.5 solves the problem, but we really need to use GPT-4 for our tasks.
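For context, a “User aborted request” error usually means the HTTP call was cancelled on the client side because a timeout fired before OpenAI finished responding; slower GPT-4 completions make that more likely. Below is a minimal TypeScript sketch of that pattern. The timeout value, model name, and OPENAI_API_KEY variable are illustrative assumptions, not how Glide’s action is actually configured.

```typescript
// Minimal sketch (not Glide's actual integration): a fetch call to the OpenAI
// chat completions API that is aborted if it runs longer than a fixed timeout.
// When the timer fires, the request fails with an abort error much like the
// "User aborted request" message above. Assumes Node 18+ and an OPENAI_API_KEY
// environment variable; the 60-second limit is illustrative only.
async function askGpt4(prompt: string, timeoutMs = 60_000): Promise<string> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    const response = await fetch("https://api.openai.com/v1/chat/completions", {
      method: "POST",
      signal: controller.signal, // cancels the request when the timer fires
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      },
      body: JSON.stringify({
        model: "gpt-4",
        messages: [{ role: "user", content: prompt }],
      }),
    });
    if (!response.ok) throw new Error(`OpenAI returned ${response.status}`);
    const data = await response.json();
    return data.choices[0].message.content;
  } finally {
    // An abort error surfacing from this call means the client gave up waiting,
    // not that the model rejected the request.
    clearTimeout(timer);
  }
}
```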
No one is replying to this, but I’m having the same problem. Other sites get around this by having the API stream the response one word at a time instead of returning it in one chunk. Would love it if the Glide folks could help with this…
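For anyone curious what that looks like outside of Glide, here is a rough TypeScript sketch of the streaming approach: the chat completions API accepts `stream: true` and sends tokens back as server-sent events, so the caller sees output immediately instead of waiting on one long request. The model name and environment variable are placeholders.

```typescript
// Sketch: stream a chat completion so tokens arrive incrementally instead of
// in one long-running request. Assumes Node 18+ (global fetch) and an
// OPENAI_API_KEY environment variable.
async function streamChatCompletion(prompt: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4",
      stream: true, // tokens are sent back as server-sent events while they are generated
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!response.ok || !response.body) {
    throw new Error(`OpenAI request failed: ${response.status}`);
  }

  const reader = response.body.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  let full = "";
  while (true) {
    const { done, value } = await reader.read();
    if (done || !value) break;
    buffer += decoder.decode(value, { stream: true });
    // The stream is a series of "data: {...}" lines ending with "data: [DONE]".
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // keep any partial line for the next chunk
    for (const line of lines) {
      if (!line.startsWith("data:")) continue;
      const payload = line.slice(5).trim();
      if (!payload || payload === "[DONE]") continue;
      const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
      if (delta) {
        full += delta;
        process.stdout.write(delta); // partial output appears immediately
      }
    }
  }
  return full;
}
```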
@nixkulinax @J_Woosley - Thanks for flagging. 
I’ll pass this along to the team.
In the meantime - how big are those request payloads you’re sending to ChatGPT?
Thank you!
I’ve tested it with different payloads.
The first time I encountered this issue, I moved the large-payload tasks to a 16k GPT model, but then I found that the errors continued even on actions that used no more than 200–300 tokens.
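For anyone trying to sanity-check payload size, here is a rough sketch. The ~4 characters per token figure is the usual rule of thumb for English text; an exact count needs a tokenizer library such as tiktoken, so treat this as an estimate only.

```typescript
// Rough sketch: estimate how many tokens a chat payload will consume, using
// the common ~4 characters-per-token rule of thumb for English text.
// For exact counts, use a tokenizer package such as tiktoken instead.
interface ChatMessage {
  role: "system" | "user" | "assistant";
  content: string;
}

function estimateTokens(messages: ChatMessage[]): number {
  const characters = messages.reduce((sum, m) => sum + m.content.length, 0);
  return Math.ceil(characters / 4);
}

// Example: warn before sending a payload that may be close to the model limit.
const messages: ChatMessage[] = [
  { role: "system", content: "You are a helpful assistant." },
  { role: "user", content: "Summarise this document: ..." },
];
if (estimateTokens(messages) > 3000) {
  console.warn("Payload may be near the context limit; consider trimming it.");
}
```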
The max context window is 4K tokens, so if your payload reaches that limit the request just churns and times out after 15 minutes. I have a killer app that’s now on Glide, so I would love for you guys to fix this! In the meantime, the internal AI is pretty good.
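As a stopgap, one workaround is to trim the prompt to an approximate token budget before sending it, so the request never approaches the context limit. This is only a hedged sketch reusing the rough 4-characters-per-token estimate above; the budget number is illustrative, not an official limit.

```typescript
// Sketch of a workaround: cut the prompt down to an approximate token budget
// before sending it, so the request stays well inside the context window.
// Both the budget and the ~4 characters-per-token estimate are rough assumptions.
function trimToTokenBudget(text: string, maxTokens = 3000): string {
  const maxCharacters = maxTokens * 4; // inverse of the chars-per-token heuristic
  if (text.length <= maxCharacters) return text;
  return text.slice(0, maxCharacters);
}

// Example: keep only as much of a long document as fits the budget.
const longDocument = "...very long text pulled from the app...";
const prompt = `Summarise the following:\n${trimToTokenBudget(longDocument)}`;
```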
Thank you both for the context!
The best next step is to open a ticket with our support team, if you haven’t already.
They’ll be able to dig into your app and troubleshoot in more detail than we can here in the forum.