OpenAI response cutting off early

Hi! Wondering if anyone here has run into OpenAI/GPT-3.5 responses cutting off early (around 200 words). This is happening across multiple surfaces on my platform (e.g. auto-replies generated from data input fields and chatbot-based features). When I ask the chatbot to complete its response, it continues but cuts off again if the remaining response is still too long. This started about a week ago, and I don't believe it's anything I changed on my end.

Using gpt-3.5-turbo-16k with temperature = 0.5 and frequency_penalty = 0.8, with no maximum length set.

Any ideas?
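One way to confirm whether the cutoff is a token-limit issue: check the `finish_reason` on the response. A rough sketch, assuming the dict-shaped response returned by the legacy `openai` 0.x Python SDK (the helper name `hit_token_limit` is mine, not part of the SDK):

```python
def hit_token_limit(response: dict) -> bool:
    """Return True if the completion stopped because the token limit was hit.

    `response` is a chat-completion response in dict form. A finish_reason of
    "length" means the output was cut off by max_tokens (or the context
    window) rather than ending naturally ("stop").
    """
    return response["choices"][0]["finish_reason"] == "length"

# Example with a stubbed response (no API call made here):
truncated = {"choices": [{"finish_reason": "length",
                          "message": {"role": "assistant", "content": "..."}}]}
print(hit_token_limit(truncated))  # → True
```

If you see `"length"` consistently, the responses are being truncated by the token budget rather than by the model deciding to stop.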

Have you tried increasing max_tokens to the maximum?


Just tried this, and it looks like it's working! I had max_tokens set to 2048 both when it was working and after it stopped, so I didn't think that was the issue. Hadn't thought to just max it out.
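For anyone who can't just raise max_tokens, here's a rough sketch of the "ask it to continue" loop described above, with the actual API call abstracted behind an `ask` callable (the function names here are placeholders, not part of the OpenAI SDK; `ask` would wrap something like `openai.ChatCompletion.create`):

```python
from typing import Callable

def complete_with_continuation(ask: Callable[[list], dict],
                               messages: list,
                               max_rounds: int = 3) -> str:
    """Keep asking the model to continue until finish_reason isn't "length".

    `ask` takes a list of chat messages and returns a dict-shaped
    chat-completion response. Sketch only; error handling omitted.
    """
    parts = []
    convo = list(messages)
    for _ in range(max_rounds):
        resp = ask(convo)
        choice = resp["choices"][0]
        parts.append(choice["message"]["content"])
        if choice["finish_reason"] != "length":
            break
        # Feed the partial answer back and ask the model to pick up
        # where it left off.
        convo.append(choice["message"])
        convo.append({"role": "user", "content": "Continue."})
    return "".join(parts)
```

Capping the number of rounds avoids looping forever if the model keeps hitting the limit.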



This topic was automatically closed 24 hours after the last reply. New replies are no longer allowed.