Introduction
This integration lets users leverage OpenAI's ChatGPT models within Clay workflows. Clay inputs are sent as user prompts and context clues are supplied as system messages, and the integration generates text responses via the OpenAI Chat Completions API. This enables dynamic, AI-powered text generation based on the specific data and contextual information provided through Clay's platform.
Provider: https://openai.com
API Documentation: https://platform.openai.com/docs/guides/text-generation/chat-completions-api
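At its core, the integration builds a Chat Completions request with a system message (the context clues) and a user message (the Clay input). Below is a minimal sketch using the official `openai` Python library; the message contents are placeholders, and how Clay constructs the request internally is not documented here, so this only illustrates the shape of the underlying API call.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder values standing in for Clay's context clues and column input.
system_message = "You are a helpful assistant enriching CRM records."
user_prompt = "Summarize this company in one sentence: Acme Corp, robotics."

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[
        {"role": "system", "content": system_message},  # context clues
        {"role": "user", "content": user_prompt},        # Clay input as the last "user" message
    ],
)

print(response.choices[0].message.content)
```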
Input
| Name | Is Optional | Description | Type |
| --- | --- | --- | --- |
| | true | The role set for the system field. NOTE: this is unused by public actions and exists solely to support a function in the UI. | text |
| | true | The message text associated with the "system" role in ChatGPT. | text |
| | | The prompt message to be sent to ChatGPT, associated with the last "user" role. | longtext |
| # of Messages | | The latest index of the examples array, used to generate the dynamic fields. | text |
| | true | | dynamicFields |
| Examples | true | A list of examples. | text |
| Answer Formatting | true | The format of the answer you want from GPT (e.g., text, number). | object |
| Model | true | The ChatGPT model to run this request with. Defaults to "auto", which runs gpt-4 for small requests and gpt-3.5-turbo for larger requests. [Learn more about models here.](https://platform.openai.com/docs/models/) | text |
| Return as JSON Object (JSON Mode) | true | If enabled, the output will be a JSON object. Be very clear about the properties you want returned, and mention the word "json" somewhere in your prompt. | boolean |
| Define column outputs | true | The column outputs are used to structure the response; each field defined here is extracted to a column in your table. For example, if you add a field "isB2B" and set its type to True/False, each cell will contain an isB2B value that can be added as a column. | object |
| Creativity Level | true | The creativity level (temperature) for GPT. Higher temperatures mean more creativity, or more variance in the responses. The scale is 0-2, with a default of 1. [Read more about it here.](https://platform.openai.com/docs/api-reference/chat/create#chat-create-temperature) | text |
| Stop Sequence | true | Optionally, a piece of text at which OpenAI will stop generating tokens. By default, any whitespace is used. [Learn more.](https://help.openai.com/en/articles/5072263-how-do-i-use-stop-sequences) | text |
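To illustrate how the optional inputs above relate to the Chat Completions API, the sketch below maps Model, Creativity Level, Stop Sequence, and JSON Mode to the `model`, `temperature`, `stop`, and `response_format` request parameters, and approximates "Define column outputs" by parsing the JSON response into per-key values. The exact request Clay sends (including its "auto" model selection) is internal to the platform, so treat this mapping and all concrete values as assumptions.

```python
import json

from openai import OpenAI

client = OpenAI()

# Hypothetical values standing in for the Clay inputs described in the table above.
system_message = "You are a B2B research assistant. Respond in JSON."
user_prompt = 'Company: Acme Corp, robotics. Is this a B2B company? Return json with an "isB2B" boolean.'

response = client.chat.completions.create(
    model="gpt-3.5-turbo",                      # "Model" input ("auto" selection is Clay-side logic)
    temperature=1.0,                            # "Creativity Level" input, on OpenAI's 0-2 scale
    stop=None,                                  # "Stop Sequence" input, if provided
    response_format={"type": "json_object"},    # "Return as JSON Object (JSON Mode)" input
    messages=[
        {"role": "system", "content": system_message},  # system message
        {"role": "user", "content": user_prompt},       # prompt
    ],
)

# With JSON mode enabled, the response content is a JSON string; each top-level
# key (e.g. "isB2B") could then be mapped to its own table column, which is the
# behavior described by "Define column outputs".
columns = json.loads(response.choices[0].message.content)
print(columns.get("isB2B"))
```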