Add Activepieces integration for workflow automation

- Add Activepieces fork with SmoothSchedule custom piece
- Create integrations app with Activepieces service layer
- Add embed token endpoint for iframe integration
- Create Automations page with embedded workflow builder
- Add sidebar visibility fix for embed mode
- Add list inactive customers endpoint to Public API
- Include SmoothSchedule triggers: event created/updated/cancelled
- Include SmoothSchedule actions: create/update/cancel events, list resources/services/customers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
commit 3aa7199503
parent 9848268d34
Author: poduck
Date: 2025-12-18 22:59:37 -05:00

16292 changed files with 1284892 additions and 4708 deletions
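The embed token endpoint and Automations page named in the commit message are not part of the files excerpted below (this portion of the diff covers only the vendored Activepieces piece code), so the following is only a hedged sketch of the usual shape of such an endpoint: Activepieces embedding authenticates the builder iframe with a short-lived JWT signed by the platform's signing key. The route, claim names, environment variables, and Express framing here are illustrative assumptions, not SmoothSchedule's actual implementation.

import express from 'express';
import jwt from 'jsonwebtoken';

const app = express();

// Issue a short-lived embed token for the current user. The Automations
// page would fetch this and hand it to the Activepieces embed SDK, which
// loads the workflow builder in an iframe authenticated as that user.
app.get('/api/integrations/embed-token/', (_req, res) => {
  const token = jwt.sign(
    {
      externalUserId: 'tenant-42-user-7', // assumption: a stable per-user id
      platformId: process.env.AP_PLATFORM_ID, // assumption: claim names vary by version
    },
    process.env.AP_SIGNING_PRIVATE_KEY as string,
    { algorithm: 'RS256', expiresIn: '1h' }
  );
  res.json({ token });
});

app.listen(3000);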


@@ -0,0 +1,26 @@
{
"Powerful AI tools from Microsoft": "Mächtige KI-Tools von Microsoft",
"Endpoint": "Endpoint",
"API Key": "API-Schlüssel",
"https://<resource name>.openai.azure.com/": "https://<resource name>.openai.azure.com/",
"Use the Azure Portal to browse to your OpenAI resource and retrieve an API key": "Benutze das Azure Portal, um zu deiner OpenAI Ressource zu navigieren und einen API-Schlüssel zu erhalten",
"Ask GPT": "Frage GPT",
"Ask ChatGPT anything you want!": "Fragen Sie ChatGPT was Sie wollen!",
"Deployment Name": "Einsatzname",
"Question": "Frage",
"Temperature": "Temperatur",
"Maximum Tokens": "Maximale Token",
"Top P": "Oben P",
"Frequency penalty": "Frequenz Strafe",
"Presence penalty": "Präsenzstrafe",
"Memory Key": "Speicherschlüssel",
"Roles": "Rollen",
"The name of your model deployment.": "Der Name Ihres Model-Deployment.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Kontrolliert Zufallszufälligkeit: Die Verringerung führt zu weniger zufälligen Vervollständigungen. Je näher die Temperatur Null rückt, desto deterministischer und sich wiederholender wird.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model. Don't set the value to maximum and leave some tokens for the input. (One token is roughly 4 characters for normal English text)": "Die maximale Anzahl zu generierender Token. Anfragen können je nach Modell bis zu 2.048 oder 4.096 Tokens verwenden, die zwischen der Eingabeaufforderung und der Fertigstellung geteilt werden. Legen Sie den Wert nicht auf maximal und lassen Sie einige Token für die Eingabe. (Ein Token ist ungefähr 4 Zeichen für den normalen englischen Text)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Eine Alternative zur Probenahme mit Temperatur, genannt Nucleus Probenahme, bei der das Modell die Ergebnisse der Tokens mit der Top_p Wahrscheinlichkeitsmasse berücksichtigt. 0,1 bedeutet also nur die Token, die die obersten 10% Wahrscheinlichkeitsmasse ausmachen.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Nummer zwischen -2.0 und 2.0. Positive Werte bestrafen neue Tokens aufgrund ihrer bisherigen Häufigkeit im Text, wodurch sich die Wahrscheinlichkeit verringert, dass das Modell dieselbe Zeile wörtlich wiederholt.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Nummer zwischen -2.0 und 2.0. Positive Werte bestrafen neue Tokens je nachdem, ob sie bisher im Text erscheinen, was die Wahrscheinlichkeit erhöht, über neue Themen zu sprechen.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave ChatGPT without memory of previous messages.": "Ein Memory-Schlüssel, der den Chat-Verlauf über alle Abläufe und Ströme hinweg weitergibt. Leer lassen um ChatGPT ohne Speicher früherer Nachrichten zu verlassen.",
"Array of roles to specify more accurate response": "Rollenzuordnung, um eine genauere Antwort anzugeben"
}


@@ -0,0 +1,26 @@
{
"Powerful AI tools from Microsoft": "Herramientas de IA potentes de Microsoft",
"Endpoint": "Endpoint",
"API Key": "Clave API",
"https://<resource name>.openai.azure.com/": "https://<resource name>.openai.azure.com/",
"Use the Azure Portal to browse to your OpenAI resource and retrieve an API key": "Usa el Portal Azure para navegar a tu recurso OpenAI y recuperar una clave API",
"Ask GPT": "Preguntar GPT",
"Ask ChatGPT anything you want!": "¡Pregúntale lo que quieras!",
"Deployment Name": "Nombre de despliegue",
"Question": "Pregunta",
"Temperature": "Temperatura",
"Maximum Tokens": "Tokens máximos",
"Top P": "Top P",
"Frequency penalty": "Puntuación de frecuencia",
"Presence penalty": "Penalización de presencia",
"Memory Key": "Clave de memoria",
"Roles": "Roles",
"The name of your model deployment.": "El nombre de la implementación del modelo.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controles aleatorios: La reducción de resultados en terminaciones menos aleatorias. A medida que la temperatura se acerca a cero, el modelo se volverá determinista y repetitivo.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model. Don't set the value to maximum and leave some tokens for the input. (One token is roughly 4 characters for normal English text)": "El número máximo de tokens a generar. Las solicitudes pueden usar hasta 2,048 o 4,096 tokens compartidos entre prompt y terminación dependiendo del modelo. No establecer el valor máximo y dejar algunas fichas para la entrada. (Una ficha es aproximadamente 4 caracteres para el texto normal en inglés)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Una alternativa al muestreo con temperatura, llamado muestreo de núcleos, donde el modelo considera los resultados de los tokens con masa de probabilidad superior_p. Por lo tanto, 0,1 significa que sólo se consideran las fichas que componen la masa superior del 10% de probabilidad.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Número entre -2.0 y 2.0. Los valores positivos penalizan nuevos tokens basados en su frecuencia existente en el texto hasta ahora, lo que reduce la probabilidad del modelo de repetir la misma línea literalmente.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Número entre -2.0 y 2.0. Los valores positivos penalizan las nuevas fichas basándose en si aparecen en el texto hasta ahora, aumentando la probabilidad de que el modo hable de nuevos temas.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave ChatGPT without memory of previous messages.": "Una clave de memoria que mantendrá el historial de chat compartido a través de ejecuciones y flujos. Manténgalo vacío para dejar ChatGPT sin memoria de mensajes anteriores.",
"Array of roles to specify more accurate response": "Matriz de roles para especificar una respuesta más precisa"
}


@@ -0,0 +1,26 @@
{
"Powerful AI tools from Microsoft": "Outils puissants IA de Microsoft",
"Endpoint": "Endpoint",
"API Key": "Clé API",
"https://<resource name>.openai.azure.com/": "https://<resource name>.openai.azure.com/",
"Use the Azure Portal to browse to your OpenAI resource and retrieve an API key": "Utilisez le portail Azure pour naviguer vers votre ressource OpenAI et récupérer une clé API",
"Ask GPT": "Demander GPT",
"Ask ChatGPT anything you want!": "Demandez à ChatGPT ce que vous voulez !",
"Deployment Name": "Nom du déploiement",
"Question": "Question",
"Temperature": "Température",
"Maximum Tokens": "Maximum de jetons",
"Top P": "Top P",
"Frequency penalty": "Malus de fréquence",
"Presence penalty": "Malus de présence",
"Memory Key": "Clé de mémoire",
"Roles": "Rôles",
"The name of your model deployment.": "Le nom du déploiement de votre modèle.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Contrôle aléatoirement : La baisse des résultats est moins aléatoire, alors que la température approche de zéro, le modèle devient déterministe et répétitif.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model. Don't set the value to maximum and leave some tokens for the input. (One token is roughly 4 characters for normal English text)": "Le nombre maximum de jetons à générer. Les requêtes peuvent utiliser jusqu'à 2 048 ou 4 096 jetons partagés entre l'invite et la complétion selon le modèle. Ne pas définir la valeur au maximum et laisser des jetons pour l'entrée. (un jeton est d'environ 4 caractères pour le texte anglais normal)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Une alternative à l'échantillonnage à la température, appelée l'échantillonnage du noyau, où le modèle considère les résultats des jetons avec la masse de probabilité top_p. Ainsi, 0,1 signifie que seuls les jetons constituant la masse de probabilité la plus élevée de 10% sont pris en compte.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Numéroter entre -2.0 et 2.0. Les valeurs positives pénalisent les nouveaux jetons en fonction de leur fréquence existante dans le texte jusqu'à présent, diminuant la probabilité du modèle de répéter le verbatim de la même ligne.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Numéroter entre -2.0 et 2.0. Les valeurs positives pénalisent les nouveaux jetons en fonction du fait qu'ils apparaissent dans le texte jusqu'à présent, ce qui augmente la probabilité du mode de parler de nouveaux sujets.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave ChatGPT without memory of previous messages.": "Une clé de mémoire qui conservera l'historique des discussions partagées à travers les exécutions et les flux. Gardez-la vide pour laisser ChatGPT sans mémoire pour les messages précédents.",
"Array of roles to specify more accurate response": "Tableau de rôles pour spécifier une réponse plus précise"
}


@@ -0,0 +1,26 @@
{
"Powerful AI tools from Microsoft": "Microsoftの強力なAIツール",
"Endpoint": "Endpoint",
"API Key": "API キー",
"https://<resource name>.openai.azure.com/": "https://<resource name>.openai.azure.com/",
"Use the Azure Portal to browse to your OpenAI resource and retrieve an API key": "Azure Portal を使用して OpenAI リソースを参照し、API キーを取得します。",
"Ask GPT": "GPTに聞く",
"Ask ChatGPT anything you want!": "あなたが望むものは何でもChatGPTに聞いてください",
"Deployment Name": "配備名",
"Question": "質問",
"Temperature": "温度",
"Maximum Tokens": "最大トークン",
"Top P": "トップ P",
"Frequency penalty": "頻度ペナルティ",
"Presence penalty": "プレゼンスペナルティ",
"Memory Key": "メモリーキー",
"Roles": "ロール",
"The name of your model deployment.": "モデル展開の名前",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "温度がゼロに近づくにつれて、モデルは決定論的で反復的になります。",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model. Don't set the value to maximum and leave some tokens for the input. (One token is roughly 4 characters for normal English text)": "生成するトークンの最大数。リクエストは、モデルに応じてプロンプトと補完の間で共有される最大2,048または4,096トークンを使用できます。 値を最大値に設定せず、トークンを残して入力します。(通常の英語テキストでは1つのトークンは約4文字です)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "核サンプリングと呼ばれる温度でのサンプリングの代わりに、モデルはtop_p確率質量を持つトークンの結果を考慮します。 つまり、0.1は上位10%の確率質量からなるトークンのみを考慮することになります。",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "-2.0 から 2.0 までの数字。 正の値は、これまでのテキスト内の既存の頻度に基づいて新しいトークンを罰するため、モデルが同じ行を元に繰り返す可能性が低下します。",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "-2.0 から 2.0 までの数字。 肯定的な値は、これまでのところテキストに表示されるかどうかに基づいて新しいトークンを罰し、モードが新しいトピックについて話す可能性を高めます。",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave ChatGPT without memory of previous messages.": "実行とフロー間で共有されるチャット履歴を保持するメモリキー。以前のメッセージのメモリなしでChatGPTを残すには空白のままにしてください。",
"Array of roles to specify more accurate response": "より正確な応答を指定するロールの配列"
}


@@ -0,0 +1,26 @@
{
"Powerful AI tools from Microsoft": "Krachtige AI tools van Microsoft",
"Endpoint": "Endpoint",
"API Key": "API Sleutel",
"https://<resource name>.openai.azure.com/": "https://<resource name>.openai.azure.com/",
"Use the Azure Portal to browse to your OpenAI resource and retrieve an API key": "Gebruik het Azure Portaal om naar uw OpenAI bron te bladeren en een API sleutel op te halen",
"Ask GPT": "Vraag GPT",
"Ask ChatGPT anything you want!": "Vraag ChatGPT alles wat je maar wilt!",
"Deployment Name": "Deployment Naam",
"Question": "Vraag",
"Temperature": "Temperatuur",
"Maximum Tokens": "Maximaal aantal tokens",
"Top P": "Boven P",
"Frequency penalty": "Frequentie boete",
"Presence penalty": "Presence boete",
"Memory Key": "Geheugen Sleutel",
"Roles": "Rollen",
"The name of your model deployment.": "De naam van je model implementatie.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Bestuurt willekeurigheid: Het verlagen van de temperatuur resulteert in minder willekeurige aanvullingen. Zodra de temperatuur nul nadert, zal het model deterministisch en herhalend worden.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model. Don't set the value to maximum and leave some tokens for the input. (One token is roughly 4 characters for normal English text)": "Het maximale aantal te genereren tokens kan gebruikt worden tot 2.048 of 4.096 tokens gedeeld tussen de prompt en de voltooiing, afhankelijk van het model. Stel de waarde niet in op een maximum en laat sommige tokens voor de invoer. (Eén token is ongeveer 4 tekens voor normale Engelse tekst)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Een alternatief voor bemonstering met de temperatuur, genaamd nucleus sampling, waarbij het model de resultaten van de tokens met top_p waarschijnlijkheid ziet. 0.1 betekent dus dat alleen de tokens die de grootste massa van 10 procent vormen, worden overwogen.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Nummer tussen -2.0 en 2.0. Positieve waarden bestraffen nieuwe tokens op basis van hun bestaande frequentie in de tekst tot nu toe, waardoor de waarschijnlijkheid van het model om dezelfde lijn te herhalen afneemt.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Nummer tussen -2.0 en 2.0. Positieve waarden bestraffen nieuwe tokens op basis van de vraag of ze tot nu toe in de tekst staan, waardoor de modus meer kans maakt om over nieuwe onderwerpen te praten.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave ChatGPT without memory of previous messages.": "Een geheugensleutel waarmee de chatgeschiedenis gedeeld blijft over uitvoeringen en stromen. Houd het leeg om ChatGPT te verlaten zonder het geheugen van vorige berichten.",
"Array of roles to specify more accurate response": "Array of roles om een nauwkeuriger antwoord te geven"
}


@@ -0,0 +1,26 @@
{
"Powerful AI tools from Microsoft": "Ferramentas poderosas de IA da Microsoft",
"Endpoint": "Endpoint",
"API Key": "Chave de API",
"https://<resource name>.openai.azure.com/": "https://<resource name>.openai.azure.com/",
"Use the Azure Portal to browse to your OpenAI resource and retrieve an API key": "Use o Azure Portal para navegar até seu recurso OpenAI e recuperar uma chave de API",
"Ask GPT": "Perguntar GPT",
"Ask ChatGPT anything you want!": "Pergunte ao ChatGPT o que você quiser!",
"Deployment Name": "Nome de implantação",
"Question": "Questão",
"Temperature": "Temperatura",
"Maximum Tokens": "Máximo de Tokens",
"Top P": "Superior P",
"Frequency penalty": "Penalidade de frequência",
"Presence penalty": "Penalidade de presença",
"Memory Key": "Chave de memória",
"Roles": "Papéis",
"The name of your model deployment.": "O nome do seu modelo de implantação.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controla aleatoriedade: Diminuir resulta em menos complementos aleatórios. À medida que a temperatura se aproxima de zero, o modelo se tornará determinístico e repetitivo.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model. Don't set the value to maximum and leave some tokens for the input. (One token is roughly 4 characters for normal English text)": "O número máximo de tokens a gerar. Solicitações podem usar até 2.048 ou 4,096 tokens compartilhados entre prompt e conclusão dependendo do modelo. Não defina o valor como máximo e deixe alguns tokens para o input. (Um token é aproximadamente 4 caracteres para o texto normal em inglês)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Uma alternativa à amostragem com temperatura, chamada amostragem núcleo, onde o modelo considera os resultados dos tokens com massa de probabilidade superior (P). Portanto, 0,1 significa que apenas os tokens que incluem a massa de probabilidade superior de 10% são considerados.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Número entre -2.0 e 2.0. Valores positivos penalizam novos tokens baseados em sua frequência existente no texto até agora, diminuindo a probabilidade do modelo repetir o verbal da mesma linha.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Número entre -2.0 e 2.0. Valores positivos penalizam novos tokens baseado no fato de eles aparecerem no texto até agora, aumentando a probabilidade de o modo falar sobre novos tópicos.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave ChatGPT without memory of previous messages.": "Uma chave de memória que manterá o histórico de bate-papo compartilhado entre execuções e fluxos. Deixe em branco para deixar o ChatGPT sem memória das mensagens anteriores.",
"Array of roles to specify more accurate response": "Array de papéis para especificar uma resposta mais precisa"
}


@@ -0,0 +1,27 @@
{
"Azure OpenAI": "Лазурный OpenAI",
"Powerful AI tools from Microsoft": "Мощные инструменты ИИ от Microsoft",
"Endpoint": "Endpoint",
"API Key": "Ключ API",
"https://<resource name>.openai.azure.com/": "https://<resource name>.openai.azure.com/",
"Use the Azure Portal to browse to your OpenAI resource and retrieve an API key": "Используйте портал Azure для просмотра вашего ресурса OpenAI и получения ключа API",
"Ask GPT": "Спросить GPT",
"Ask ChatGPT anything you want!": "Спросите ChatGPT все, что хотите!",
"Deployment Name": "Название развертывания",
"Question": "Вопрос",
"Temperature": "Температура",
"Maximum Tokens": "Максимум жетонов",
"Top P": "Верхний П",
"Frequency penalty": "Периодичность штрафа",
"Presence penalty": "Штраф присутствия",
"Memory Key": "Ключ памяти",
"Roles": "Роли",
"The name of your model deployment.": "Название вашей модели внедрения.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Контролирует случайность: понижение результатов в менее случайном завершении. По мере нулевого температурного приближения модель становится детерминированной и повторяющей.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model. Don't set the value to maximum and leave some tokens for the input. (One token is roughly 4 characters for normal English text)": "Максимальное количество жетонов для генерации. Запросы могут использовать до 2,048 или 4096 жетонов, которыми обменивались между подсказками и дополнениями, в зависимости от модели. Не устанавливайте максимальное значение и оставляйте некоторые токены для ввода. (Один токен примерно 4 символа для обычного английского текста)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Альтернатива отоплению с температурой, называемой ядерным отбором, где модель рассматривает результаты жетонов с вероятностью top_p. Таким образом, 0.1 означает, что учитываются только жетоны, состоящие из массы 10% наивысшего уровня.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Номер между -2.0 и 2.0. Положительные значения наказывают новые токены на основе их существующей частоты в тексте до сих пор, уменьшая вероятность повторения одной и той же строки.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Номер между -2.0 и 2.0. Положительные значения наказывают новые токены на основании того, появляются ли они в тексте до сих пор, что повышает вероятность того, что режим будет обсуждать новые темы.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave ChatGPT without memory of previous messages.": "Ключ памяти, который хранит историю чата в разных запусках и потоках. Оставьте пустым, чтобы оставить ChatGPT без памяти предыдущих сообщений.",
"Array of roles to specify more accurate response": "Массив ролей для более точного ответа"
}


@@ -0,0 +1,26 @@
{
"Powerful AI tools from Microsoft": "Powerful AI tools from Microsoft",
"Endpoint": "Endpoint",
"API Key": "API Key",
"https://<resource name>.openai.azure.com/": "https://<resource name>.openai.azure.com/",
"Use the Azure Portal to browse to your OpenAI resource and retrieve an API key": "Use the Azure Portal to browse to your OpenAI resource and retrieve an API key",
"Ask GPT": "Ask GPT",
"Ask ChatGPT anything you want!": "Ask ChatGPT anything you want!",
"Deployment Name": "Deployment Name",
"Question": "Question",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Frequency penalty": "Frequency penalty",
"Presence penalty": "Presence penalty",
"Memory Key": "Memory Key",
"Roles": "Roles",
"The name of your model deployment.": "The name of your model deployment.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model. Don't set the value to maximum and leave some tokens for the input. (One token is roughly 4 characters for normal English text)": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model. Don't set the value to maximum and leave some tokens for the input. (One token is roughly 4 characters for normal English text)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave ChatGPT without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave ChatGPT without memory of previous messages.",
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response"
}


@@ -0,0 +1,27 @@
{
"Azure OpenAI": "Azure OpenAI",
"Powerful AI tools from Microsoft": "Powerful AI tools from Microsoft",
"Endpoint": "Endpoint",
"API Key": "API Key",
"https://<resource name>.openai.azure.com/": "https://<resource name>.openai.azure.com/",
"Use the Azure Portal to browse to your OpenAI resource and retrieve an API key": "Use the Azure Portal to browse to your OpenAI resource and retrieve an API key",
"Ask GPT": "Ask GPT",
"Ask ChatGPT anything you want!": "Ask ChatGPT anything you want!",
"Deployment Name": "Deployment Name",
"Question": "Question",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Frequency penalty": "Frequency penalty",
"Presence penalty": "Presence penalty",
"Memory Key": "Memory Key",
"Roles": "Roles",
"The name of your model deployment.": "The name of your model deployment.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model. Don't set the value to maximum and leave some tokens for the input. (One token is roughly 4 characters for normal English text)": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model. Don't set the value to maximum and leave some tokens for the input. (One token is roughly 4 characters for normal English text)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave ChatGPT without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave ChatGPT without memory of previous messages.",
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response"
}


@@ -0,0 +1,26 @@
{
"Powerful AI tools from Microsoft": "Powerful AI tools from Microsoft",
"Endpoint": "Endpoint",
"API Key": "API 密钥",
"https://<resource name>.openai.azure.com/": "https://<resource name>.openai.azure.com/",
"Use the Azure Portal to browse to your OpenAI resource and retrieve an API key": "Use the Azure Portal to browse to your OpenAI resource and retrieve an API key",
"Ask GPT": "Ask GPT",
"Ask ChatGPT anything you want!": "Ask ChatGPT anything you want!",
"Deployment Name": "部署名称",
"Question": "Question",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Frequency penalty": "Frequency penalty",
"Presence penalty": "Presence penalty",
"Memory Key": "内存键",
"Roles": "角色",
"The name of your model deployment.": "The name of your model deployment.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model. Don't set the value to maximum and leave some tokens for the input. (One token is roughly 4 characters for normal English text)": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model. Don't set the value to maximum and leave some tokens for the input. (One token is roughly 4 characters for normal English text)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave ChatGPT without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave ChatGPT without memory of previous messages.",
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response"
}


@@ -0,0 +1,35 @@
import {
createPiece,
PieceAuth,
Property,
} from '@activepieces/pieces-framework';
import { askGpt } from './lib/actions/ask-gpt';
export const azureOpenaiAuth = PieceAuth.CustomAuth({
props: {
endpoint: Property.ShortText({
displayName: 'Endpoint',
description: 'https://<resource name>.openai.azure.com/',
required: true,
}),
apiKey: PieceAuth.SecretText({
displayName: 'API Key',
description:
'Use the Azure Portal to browse to your OpenAI resource and retrieve an API key',
required: true,
}),
},
required: true,
});
export const azureOpenai = createPiece({
displayName: 'Azure OpenAI',
description: 'Powerful AI tools from Microsoft',
auth: azureOpenaiAuth,
minimumSupportedRelease: '0.36.1',
logoUrl: 'https://cdn.activepieces.com/pieces/azure-openai.png',
authors: ["MoShizzle","abuaboud"],
actions: [askGpt],
triggers: [],
});


@@ -0,0 +1,152 @@
import { azureOpenaiAuth } from '../../';
import {
Property,
StoreScope,
createAction,
} from '@activepieces/pieces-framework';
import { OpenAIClient, AzureKeyCredential } from '@azure/openai';
import { calculateMessagesTokenSize, exceedsHistoryLimit, reduceContextSize } from '../common';
import { z } from 'zod';
import { propsValidation } from '@activepieces/pieces-common';
export const askGpt = createAction({
auth: azureOpenaiAuth,
name: 'ask_gpt',
displayName: 'Ask GPT',
description: 'Ask ChatGPT anything you want!',
props: {
deploymentId: Property.ShortText({
displayName: 'Deployment Name',
description: 'The name of your model deployment.',
required: true,
}),
prompt: Property.LongText({
displayName: 'Question',
required: true,
}),
temperature: Property.Number({
displayName: 'Temperature',
required: false,
description:
'Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.',
defaultValue: 0.9,
}),
maxTokens: Property.Number({
displayName: 'Maximum Tokens',
required: true,
description:
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion depending on the model. Don't set the value to maximum and leave some tokens for the input. (One token is roughly 4 characters for normal English text)",
defaultValue: 2048,
}),
topP: Property.Number({
displayName: 'Top P',
required: false,
description:
'An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.',
defaultValue: 1,
}),
frequencyPenalty: Property.Number({
displayName: 'Frequency penalty',
required: false,
description:
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
defaultValue: 0,
}),
presencePenalty: Property.Number({
displayName: 'Presence penalty',
required: false,
description:
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
defaultValue: 0.6,
}),
memoryKey: Property.ShortText({
displayName: 'Memory Key',
description:
'A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave ChatGPT without memory of previous messages.',
required: false,
}),
roles: Property.Json({
displayName: 'Roles',
required: false,
description: 'Array of roles to specify more accurate response',
defaultValue: [
{ role: 'system', content: 'You are a helpful assistant.' },
],
}),
},
async run(context) {
const { auth, propsValue, store } = context;
await propsValidation.validateZod(propsValue, {
temperature: z.number().min(0).max(1.0).optional(),
frequencyPenalty: z.number().min(-2.0).max(2.0).optional(),
presencePenalty: z.number().min(-2.0).max(2.0).optional(),
});
const openai = new OpenAIClient(
auth.endpoint,
new AzureKeyCredential(auth.apiKey)
);
let messageHistory: any[] = [];
// If memory key is set, retrieve messages stored in history
if (propsValue.memoryKey) {
messageHistory = (await store.get(propsValue.memoryKey, StoreScope.PROJECT)) ?? [];
}
// Add user prompt to message history
messageHistory.push({
role: 'user',
content: propsValue.prompt,
});
// Add system instructions if set by user
const rolesArray = propsValue.roles ? (propsValue.roles as any) : [];
const roles = rolesArray.map((item: any) => {
const rolesEnum = ['system', 'user', 'assistant'];
if (!rolesEnum.includes(item.role)) {
throw new Error(
'The only available roles are: [system, user, assistant]'
);
}
return {
role: item.role,
content: item.content,
};
});
const completion = await openai.getChatCompletions(propsValue.deploymentId, [...roles, ...messageHistory], {
maxTokens: propsValue.maxTokens,
temperature: propsValue.temperature,
frequencyPenalty: propsValue.frequencyPenalty,
presencePenalty: propsValue.presencePenalty,
topP: propsValue.topP,
});
const responseText = completion.choices[0].message?.content ?? '';
// Add the response to message history as an assistant message, matching
// the { role, content } shape of the entries sent to the API
messageHistory = [
...messageHistory,
{ role: 'assistant', content: responseText },
];
// Check message history token size
// System limit is 32K tokens, we can probably make it bigger but this is a safe spot
const tokenLength = await calculateMessagesTokenSize(messageHistory, '');
if (propsValue.memoryKey) {
// If tokens exceed 90% system limit or 90% of model limit - maxTokens, reduce history token size
if (exceedsHistoryLimit(tokenLength, '', propsValue.maxTokens)) {
messageHistory = await reduceContextSize(
messageHistory,
'',
propsValue.maxTokens
);
}
// Store history
await store.put(propsValue.memoryKey, messageHistory, StoreScope.PROJECT);
}
return responseText;
},
});
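The memory-key logic above reduces to a read-append-write cycle against the project-scoped store: load the saved message array, append the new user prompt and the assistant reply, trim if needed, and write it back. A minimal sketch of that round trip, using a plain Map as a stand-in for the context.store that Activepieces supplies during a real run:

type ChatMessage = { role: 'system' | 'user' | 'assistant'; content: string };

// Stand-in for the project-scoped key/value store a piece run receives
const fakeStore = new Map<string, ChatMessage[]>();

// Run 1: no history yet; append the prompt and the model's reply, then save
const history: ChatMessage[] = fakeStore.get('support-chat') ?? [];
history.push({ role: 'user', content: 'What are your opening hours?' });
history.push({ role: 'assistant', content: 'We are open 9am to 5pm.' });
fakeStore.set('support-chat', history);

// Run 2: the saved exchange is replayed as context ahead of the next
// prompt, which is what gives the action its "memory" across runs
const replayed = fakeStore.get('support-chat') ?? [];
console.log(replayed.length); // 2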


@@ -0,0 +1,74 @@
import { encoding_for_model } from 'tiktoken';
export const calculateTokensFromString = (string: string, model: string) => {
try {
const encoder = encoding_for_model(model as any);
const tokens = encoder.encode(string);
encoder.free();
return tokens.length;
} catch (e) {
// Model not supported by tiktoken, every 4 chars is a token
return Math.round(string.length / 4);
}
};
export const calculateMessagesTokenSize = async (
messages: { content: string }[],
model: string
) => {
return messages.reduce(
(total, message) =>
total + calculateTokensFromString(message.content ?? '', model),
0
);
};
export const reduceContextSize = async (
messages: { content: string }[],
model: string,
maxTokens: number
): Promise<{ content: string }[]> => {
// TODO: Summarize context instead of cutoff
// Drop the oldest ~10% of messages; Math.max(1, ...) guarantees progress
// so the recursion below terminates even on very short histories
const cutoffSize = Math.max(1, Math.round(messages.length * 0.1));
const cutoffMessages = messages.slice(cutoffSize);
if (
(await calculateMessagesTokenSize(cutoffMessages, model)) >
maxTokens / 1.5
) {
// Keep trimming until the remaining history fits under the budget
return reduceContextSize(cutoffMessages, model, maxTokens);
}
return cutoffMessages;
};
export const exceedsHistoryLimit = (
tokenLength: number,
model: string,
maxTokens: number
) => {
if (
tokenLength >= tokenLimit / 1.1 ||
tokenLength >= (modelTokenLimit(model) - maxTokens) / 1.1
) {
return true;
}
return false;
};
export const tokenLimit = 32000;
export const modelTokenLimit = (model: string) => {
switch (model) {
default:
return 2048;
}
};
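A quick sketch of how these helpers compose, assuming the file is imported as ./common relative to the caller; 'gpt-35-turbo' is not a model name tiktoken recognizes, so the characters/4 fallback applies:

import {
  calculateMessagesTokenSize,
  exceedsHistoryLimit,
  reduceContextSize,
  tokenLimit,
} from './common';

async function demo() {
  const history = [
    { role: 'user', content: 'Hello!' },
    { role: 'assistant', content: 'Hi! How can I help you today?' },
  ];
  // Roughly content.length / 4 per message, since the model is unknown
  const size = await calculateMessagesTokenSize(history, 'gpt-35-turbo');
  console.log(`history is about ${size} tokens (hard cap ${tokenLimit})`);
  // Repeatedly drop the oldest ~10% of messages once a limit is near
  if (exceedsHistoryLimit(size, 'gpt-35-turbo', 2048)) {
    const trimmed = await reduceContextSize(history, 'gpt-35-turbo', 2048);
    console.log(`trimmed to ${trimmed.length} messages`);
  }
}

demo();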