Hey folks, I’m having a weird issue with the OpenAI Text to Speech action. I’m getting the max tokens error even though the output is only roughly 1,000 tokens when I check it in the tokenizer.
Any help appreciated.
Seems like the input is the problem, not the output though. Can you check that?
Sorry, I meant to say input initially. I’ve got a script that is 1,038 tokens as the input.
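Worth noting that token counts and character counts are very different quantities, and a ~1,000-token script can easily exceed 4,000 characters. Here's a minimal sketch of that sanity check; the ~4 characters per token ratio is only a common rule of thumb for English text (use a real tokenizer such as tiktoken for accurate counts), and the sample string is just a placeholder stand-in for your script:

```python
# Rough sanity check: the tokenizer measures tokens, but some limits are
# measured in characters. The ~4 chars/token ratio is a rule-of-thumb
# estimate for English, not an exact measure.

def rough_token_estimate(text: str) -> int:
    """Very rough token estimate: ~4 characters per token for English."""
    return max(1, len(text) // 4)

script = "word " * 1038          # placeholder stand-in for a real script
print(len(script))               # character count: 5190
print(rough_token_estimate(script))  # rough token count: 1297
```

So a script that looks safely under a 4,096-token limit in the tokenizer can still be thousands of characters long.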
It just began working again today when I tested it! Something I noticed while testing is that the models never output more than roughly 1,000 tokens. The max tokens parameter really only limits values below 1,000. I also found a lot of threads on the OpenAI forum where others noticed this, saying it is almost impossible to get the models to output the advertised maximum of 4,096.
Has anyone else found this to be the case?
I’ve actually had to limit the max output to 900 tokens so that the prompt and output together are under 1,000 and TTS works every time.
EDIT: Even this doesn’t work, because the TTS limit is measured in characters, not tokens, so the input still goes over the limit.
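Since the TTS limit is on characters, one workaround is to split the script into chunks that each fit under the limit and synthesize them separately. Here's a minimal sketch, assuming the speech endpoint's documented 4,096-character input limit; `chunk_script` and `MAX_CHARS` are names I'm using for illustration, and the chunk size leaves a small safety margin:

```python
import re

MAX_CHARS = 4000  # stay under the 4096-character TTS input limit

def chunk_script(text: str, max_chars: int = MAX_CHARS) -> list[str]:
    """Greedily pack sentences into chunks no longer than max_chars."""
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        # If a single sentence is longer than the limit, hard-split it.
        while len(sentence) > max_chars:
            chunks.append(sentence[:max_chars])
            sentence = sentence[max_chars:]
        if len(current) + len(sentence) + 1 <= max_chars:
            current = f"{current} {sentence}".strip()
        else:
            chunks.append(current)
            current = sentence
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be sent to the TTS endpoint in its own request and the resulting audio concatenated afterwards.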
This is a big drawback for the entire AI system, and I just noticed it now! From what I’ve read briefly, it’s to keep costs in line for OpenAI, and it’s next to impossible to get the models to output more than 1,000 tokens.
So it only happens with the text to speech endpoint?