OpenAI integration not using uploaded photo

I have an OpenAI developer account that I use to integrate with GlideApps. I also have a ChatGPT Plus account.

When using the ChatGPT Plus account, I can upload an image into a custom GPT and it generates a new image that is a fun variation on the original image.
Good.

When I use the exact same prompt that fuels my custom GPT, but send it to OpenAI through the GlideApps integration, it ignores my uploaded image. It follows the remainder of the prompt fairly faithfully.

I have tried sending the prompt to DALL-E 3 and to GPT-4, but neither works in the OpenAI integration. I have tried both the ‘generate image’ command and the ‘complete chat’ command.

Any ideas on why it works within ChatGPT Plus but not through the OpenAI integration?

I think the ChatGPT Plus behavior is most likely unrelated to the problem you’re facing: whatever you do in Glide means interacting with OpenAI’s API, while what you’re describing is an experience with their web version, not the API.

As far as I’m aware, GPTs (or Assistants, in their API form) have not been integrated natively into Glide. You would either have to build an assistant first through the API and work with your input to start a chat with it (sketched in code after the list below), or build a fairly sophisticated flow in Make to:

  • Take the original image as an input
  • Read the image to understand its features
  • Use DALL-E 3 to generate a new version of it
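A minimal sketch of the assistant route with the openai Python SDK, assuming your account has access to Assistants with vision input (image_url content parts); the model name, instructions, and image URL here are placeholders, not what your custom GPT actually uses:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Create the assistant once, mirroring the custom GPT's instructions
# (placeholder instructions here).
assistant = client.beta.assistants.create(
    model="gpt-4o",
    instructions="Given a photo, propose a fun variation of it.",
)

# Each user request becomes a thread. Note the image goes in as a
# structured content part, not as a URL pasted into the prompt text.
thread = client.beta.threads.create(messages=[{
    "role": "user",
    "content": [
        {"type": "text", "text": "Suggest a fun variation of this photo."},
        {"type": "image_url",
         "image_url": {"url": "https://example.com/uploaded-photo.jpg"}},
    ],
}])

run = client.beta.threads.runs.create_and_poll(
    thread_id=thread.id, assistant_id=assistant.id
)
reply = client.beta.threads.messages.list(thread_id=thread.id)
print(reply.data[0].content[0].text.value)  # newest message comes first
```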

All of the above was written without knowing what functions your custom GPT actually uses.

The workflow you described is exactly what I am trying to recreate in Glide:

  • Take the original image as an input
  • Read the image to understand its features
  • Use DALL-E 3 to generate a new version of it

The part that seems to be incorrect or incomplete is that the image does not seem to be passed as part of the ‘generate image’ or ‘complete chat’ prompt. I’ve tried both. In either case, the prompt passed within the integration is built with a template column that references the attached image.
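In other words, I suspect the template column produces something like the first payload below, when vision input actually needs the second; this is a hypothetical sketch of Chat Completions messages, with the URL as a placeholder for the attached image:

```python
image_url = "https://example.com/uploaded-photo.jpg"  # placeholder

# What a template column effectively produces: the URL is plain text
# inside the prompt, so the model never fetches or "sees" the image.
prompt_as_text = [
    {"role": "user",
     "content": f"Make a fun variation of {image_url}"},
]

# What vision input actually requires: the image as a structured
# content part alongside the text instruction.
prompt_with_image = [
    {"role": "user",
     "content": [
         {"type": "text", "text": "Make a fun variation of this image."},
         {"type": "image_url", "image_url": {"url": image_url}},
     ]},
]
```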

I think what you’re trying to do there isn’t available with the native integration, because the API can’t automatically know what tools to use for the flow, unless you use the Assistants API.

If you don’t want to build a flow in Make, I think the most obvious way to try it is (see the sketch after this list):

  • Allow users to upload an image
  • Tell the integration to describe the image
  • Generate a new image combining the description of the original image and your prompt.
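A minimal sketch of those three steps with the openai Python SDK, assuming a vision-capable chat model; the model names and image URL are placeholders:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

image_url = "https://example.com/uploaded-photo.jpg"  # the user's upload

# Step 1: have a vision-capable model describe the uploaded image.
description = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in detail."},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }],
).choices[0].message.content

# Step 2: fold the description into the DALL-E 3 prompt. DALL-E 3 only
# accepts a text prompt, which is why passing a URL alone can't work.
result = client.images.generate(
    model="dall-e-3",
    prompt=f"A fun variation of the following scene: {description}",
    size="1024x1024",
)
print(result.data[0].url)
```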

Also, reading your prompt, I doubt that just saying “don’t use copyrighted images” works. Your prompt is also just referencing the image URL as plain text, and I don’t know if these models will read image URLs out of the prompt anyway.