Add Activepieces integration for workflow automation

- Add Activepieces fork with SmoothSchedule custom piece
- Create integrations app with Activepieces service layer
- Add embed token endpoint for iframe integration
- Create Automations page with embedded workflow builder
- Add sidebar visibility fix for embed mode
- Add list inactive customers endpoint to Public API
- Include SmoothSchedule triggers: event created/updated/cancelled
- Include SmoothSchedule actions: create/update/cancel events, list resources/services/customers
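The embed token endpoint above is what lets the Automations page authenticate the iframe'd workflow builder. As a rough illustration only (the function name, claim names, and TTL are assumptions, not SmoothSchedule's actual implementation), minting such a token amounts to signing a short-lived HS256 JWT for the external user:

```python
import base64
import hashlib
import hmac
import json
import time


def _b64url(data: bytes) -> str:
    # JWT uses unpadded base64url segments.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def make_embed_token(secret: str, external_user_id: str, ttl_seconds: int = 3600) -> str:
    """Mint a short-lived HS256 JWT for the embedded workflow builder.

    The claim names here are illustrative; Activepieces' embedding docs
    define the exact payload the real endpoint must produce.
    """
    header = {"alg": "HS256", "typ": "JWT"}
    now = int(time.time())
    payload = {"sub": external_user_id, "iat": now, "exp": now + ttl_seconds}
    signing_input = (
        f"{_b64url(json.dumps(header).encode())}."
        f"{_b64url(json.dumps(payload).encode())}"
    )
    sig = hmac.new(secret.encode(), signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{_b64url(sig)}"
```

The frontend would pass this token to the embed SDK when loading the iframe; keeping the TTL short limits the blast radius of a leaked token.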

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Author: poduck
Date: 2025-12-18 22:59:37 -05:00
Parent: 9848268d34
Commit: 3aa7199503
16292 changed files with 1284892 additions and 4708 deletions


@@ -0,0 +1,27 @@
{
"DeepSeek": "DeepSeek",
"\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.": "\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.",
"Ask Deepseek": "Ask Deepseek",
"Ask Deepseek anything you want!": "Ask Deepseek anything you want!",
"Model": "Model",
"Question": "Question",
"Frequency penalty": "Frequency penalty",
"Maximum Tokens": "Maximum Tokens",
"Presence penalty": "Presence penalty",
"Response Format": "Response Format",
"Temperature": "Temperature",
"Top P": "Top P",
"Memory Key": "Memory Key",
"Roles": "Roles",
"The model which will generate the completion.": "The model which will generate the completion.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"The maximum number of tokens to generate. Possible values are between 1 and 8192.": "The maximum number of tokens to generate. Possible values are between 1 and 8192.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
"The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself": "The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.",
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response",
"Text": "Text",
"JSON": "JSON"
}
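Locale files like these drift easily: keys go missing, and URLs or numbers embedded in the source string can get mangled in translation. A small parity check (purely illustrative, not part of this commit) compares a translation against the source English file:

```python
import re


def check_locale(source: dict, translation: dict) -> list[str]:
    """Return human-readable problems: missing/extra keys and dropped URLs."""
    problems = []
    for key in source:
        if key not in translation:
            problems.append(f"missing key: {key[:40]!r}")
            continue
        # URLs in the source string should survive translation verbatim.
        for url in re.findall(r"https?://\S+", key):
            if url not in translation[key]:
                problems.append(f"URL {url} dropped or altered in translation")
    problems += [f"extra key: {k[:40]!r}" for k in translation if k not in source]
    return problems
```

Running this over each locale file in a CI step would flag the kind of split-URL and dropped-number artifacts that machine-assisted translation pipelines tend to introduce.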


@@ -0,0 +1,26 @@
{
"\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.": "\n Folgen Sie diesen Anweisungen, um Ihren DeepSeek API-Schlüssel zu erhalten:\n\n1. Besuchen Sie die folgende Website: https://platform.deepseek.com/api_keys.\n2. Sobald Sie auf der Website sind, suchen und klicken Sie auf die Option, um Ihren DeepSeek API Key zu erhalten.",
"Ask Deepseek": "Deepseek fragen",
"Ask Deepseek anything you want!": "Fragen Sie Deepseek alles, was Sie wollen!",
"Model": "Modell",
"Question": "Frage",
"Frequency penalty": "Frequenzstrafe",
"Maximum Tokens": "Maximale Token",
"Presence penalty": "Präsenzstrafe",
"Response Format": "Antwortformat",
"Temperature": "Temperatur",
"Top P": "Top P",
"Memory Key": "Speicherschlüssel",
"Roles": "Rollen",
"The model which will generate the completion.": "Das Modell, das die Fertigstellung generiert.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Nummer zwischen -2.0 und 2.0. Positive Werte bestrafen neue Tokens aufgrund ihrer bisherigen Häufigkeit im Text, wodurch sich die Wahrscheinlichkeit verringert, dass das Modell dieselbe Zeile wörtlich wiederholt.",
"The maximum number of tokens to generate. Possible values are between 1 and 8192.": "Die maximale Anzahl zu generierender Token. Mögliche Werte liegen zwischen 1 und 8192.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Nummer zwischen -2.0 und 2.0. Positive Werte bestrafen neue Tokens je nachdem, ob sie bisher im Text erscheinen, was die Wahrscheinlichkeit erhöht, über neue Themen zu sprechen.",
"The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself": "Das Format der Antwort. WICHTIG: Wenn Sie JSON-Ausgabe verwenden, müssen Sie auch das Modell anweisen, JSON selbst zu produzieren",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.": "Kontrolliert Zufälligkeit: Eine Verringerung führt zu weniger zufälligen Vervollständigungen. Je näher die Temperatur Null rückt, desto deterministischer und repetitiver wird das Modell. Zwischen 0 und 2. Wir empfehlen in der Regel, dies oder top_p zu ändern, aber nicht beides.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.": "Eine Alternative zum Sampling mit Temperatur, genannt Nucleus Sampling, bei der das Modell die Ergebnisse der Tokens mit der top_p-Wahrscheinlichkeitsmasse berücksichtigt. 0.1 bedeutet also, dass nur die Tokens berücksichtigt werden, die die obersten 10% der Wahrscheinlichkeitsmasse ausmachen. Werte <=1. Wir empfehlen in der Regel, dies oder die Temperatur zu ändern, aber nicht beides.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.": "Ein Memory-Schlüssel, der den Chatverlauf über alle Abläufe und Ströme hinweg weitergibt. Leer lassen um Deepseek ohne Speicher früherer Nachrichten zu verlassen.",
"Array of roles to specify more accurate response": "Rollenzuordnung, um eine genauere Antwort anzugeben",
"Text": "Text",
"JSON": "JSON"
}


@@ -0,0 +1,26 @@
{
"\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.": "\n Sigue estas instrucciones para obtener tu clave API de DeepSeek:\n\n1. Visita el siguiente sitio web: https://platform.deepseek.com/api_keys.\n2. Una vez en el sitio web, localiza y haz clic en la opción para obtener tu clave API de DeepSeek.",
"Ask Deepseek": "Preguntar a Deepseek",
"Ask Deepseek anything you want!": "¡Pregúntale a Deepseek lo que quieras!",
"Model": "Modelo",
"Question": "Pregunta",
"Frequency penalty": "Penalización de frecuencia",
"Maximum Tokens": "Tokens máximos",
"Presence penalty": "Penalización de presencia",
"Response Format": "Formato de respuesta",
"Temperature": "Temperatura",
"Top P": "Top P",
"Memory Key": "Clave de memoria",
"Roles": "Roles",
"The model which will generate the completion.": "El modelo que generará la terminación.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Número entre -2.0 y 2.0. Los valores positivos penalizan nuevos tokens basados en su frecuencia existente en el texto hasta ahora, lo que reduce la probabilidad del modelo de repetir la misma línea literalmente.",
"The maximum number of tokens to generate. Possible values are between 1 and 8192.": "El número máximo de tokens a generar. Los valores posibles son entre 1 y 8192.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Número entre -2.0 y 2.0. Los valores positivos penalizan las nuevas fichas basándose en si aparecen en el texto hasta ahora, aumentando la probabilidad de que el modo hable de nuevos temas.",
"The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself": "El formato de la respuesta. IMPORTANTE: Cuando se utiliza la salida JSON, también debe dar instrucciones al modelo para producir JSON usted mismo",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.": "Controla la aleatoriedad: reducirla da lugar a finalizaciones menos aleatorias. A medida que la temperatura se acerca a cero, el modelo se volverá determinista y repetitivo. Entre 0 y 2. Generalmente recomendamos modificar esto o top_p, pero no ambos.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.": "Una alternativa al muestreo con temperatura, llamado muestreo de núcleo, donde el modelo considera los resultados de los tokens con masa de probabilidad top_p. Así que 0.1 significa que sólo se consideran los tokens que componen el 10% superior de la masa de probabilidad. Valores <=1. Generalmente recomendamos alterar esto o la temperatura, pero no ambos.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.": "Una clave de memoria que mantendrá el historial de chat compartido a través de ejecuciones y flujos. Manténgalo vacío para dejar Deepseek sin memoria de mensajes anteriores.",
"Array of roles to specify more accurate response": "Matriz de roles para especificar una respuesta más precisa",
"Text": "Texto",
"JSON": "JSON"
}


@@ -0,0 +1,26 @@
{
"\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.": "\n Suivez ces instructions pour obtenir votre clé API DeepSeek :\n\n1. Visitez le site Web suivant : https://platform.deepseek.com/api_keys.\n2. Une fois sur le site Web, localisez et cliquez sur l'option pour obtenir votre clé API DeepSeek.",
"Ask Deepseek": "Demander à Deepseek",
"Ask Deepseek anything you want!": "Demandez à Deepseek ce que vous voulez !",
"Model": "Modèle",
"Question": "Question",
"Frequency penalty": "Malus de fréquence",
"Maximum Tokens": "Maximum de jetons",
"Presence penalty": "Malus de présence",
"Response Format": "Format de réponse",
"Temperature": "Température",
"Top P": "Top P",
"Memory Key": "Clé de mémoire",
"Roles": "Rôles",
"The model which will generate the completion.": "Le modèle qui va générer la complétion.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Nombre entre -2.0 et 2.0. Les valeurs positives pénalisent les nouveaux jetons en fonction de leur fréquence existante dans le texte jusqu'à présent, diminuant la probabilité que le modèle répète la même ligne mot pour mot.",
"The maximum number of tokens to generate. Possible values are between 1 and 8192.": "Le nombre maximum de jetons à générer. Les valeurs possibles sont comprises entre 1 et 8192.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Nombre entre -2.0 et 2.0. Les valeurs positives pénalisent les nouveaux jetons selon qu'ils apparaissent dans le texte jusqu'à présent, ce qui augmente la probabilité que le modèle parle de nouveaux sujets.",
"The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself": "Le format de la réponse. IMPORTANT: Lorsque vous utilisez la sortie JSON, vous devez également indiquer au modèle de produire du JSON vous-même",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.": "Contrôle le caractère aléatoire : une valeur plus basse donne des complétions moins aléatoires. À mesure que la température approche de zéro, le modèle devient déterministe et répétitif. Entre 0 et 2. Nous recommandons généralement de modifier ceci ou top_p, mais pas les deux.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.": "Une alternative à l'échantillonnage par température, appelée échantillonnage du noyau, où le modèle considère les résultats des jetons avec la masse de probabilité top_p. Donc 0.1 signifie que seuls les jetons comprenant les 10% supérieurs de la masse de probabilité sont pris en compte. Valeurs <=1. Nous recommandons généralement de modifier ceci ou la température, mais pas les deux.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.": "Une clé mémoire qui conservera l'historique des discussions partagées à travers les exécutions et les flux. Gardez-la vide pour laisser Deepseek sans mémoire pour les messages précédents.",
"Array of roles to specify more accurate response": "Tableau de rôles pour spécifier une réponse plus précise",
"Text": "Texte",
"JSON": "JSON"
}


@@ -0,0 +1,27 @@
{
"DeepSeek": "DeepSeek",
"\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.": "\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.",
"Ask Deepseek": "Ask Deepseek",
"Ask Deepseek anything you want!": "Ask Deepseek anything you want!",
"Model": "Model",
"Question": "Question",
"Frequency penalty": "Frequency penalty",
"Maximum Tokens": "Maximum Tokens",
"Presence penalty": "Presence penalty",
"Response Format": "Response Format",
"Temperature": "Temperature",
"Top P": "Top P",
"Memory Key": "Memory Key",
"Roles": "Roles",
"The model which will generate the completion.": "The model which will generate the completion.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"The maximum number of tokens to generate. Possible values are between 1 and 8192.": "The maximum number of tokens to generate. Possible values are between 1 and 8192.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
"The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself": "The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.",
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response",
"Text": "Text",
"JSON": "JSON"
}


@@ -0,0 +1,27 @@
{
"DeepSeek": "DeepSeek",
"\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.": "\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.",
"Ask Deepseek": "Ask Deepseek",
"Ask Deepseek anything you want!": "Ask Deepseek anything you want!",
"Model": "Model",
"Question": "Question",
"Frequency penalty": "Frequency penalty",
"Maximum Tokens": "Maximum Tokens",
"Presence penalty": "Presence penalty",
"Response Format": "Response Format",
"Temperature": "Temperature",
"Top P": "Top P",
"Memory Key": "Memory Key",
"Roles": "Roles",
"The model which will generate the completion.": "The model which will generate the completion.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"The maximum number of tokens to generate. Possible values are between 1 and 8192.": "The maximum number of tokens to generate. Possible values are between 1 and 8192.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
"The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself": "The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.",
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response",
"Text": "Text",
"JSON": "JSON"
}


@@ -0,0 +1,26 @@
{
"\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.": "\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.",
"Ask Deepseek": "Deepseek に聞く",
"Ask Deepseek anything you want!": "Deepseek に何でも聞いてみてください!",
"Model": "モデル",
"Question": "質問",
"Frequency penalty": "頻度ペナルティ",
"Maximum Tokens": "最大トークン",
"Presence penalty": "プレゼンスペナルティ",
"Response Format": "応答形式",
"Temperature": "温度",
"Top P": "トップ P",
"Memory Key": "メモリーキー",
"Roles": "ロール",
"The model which will generate the completion.": "補完を生成するモデル。",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "-2.0 から 2.0 までの数字。 正の値は、これまでのテキスト内の既存の頻度に基づいて新しいトークンにペナルティを与えるため、モデルが同じ行をそのまま繰り返す可能性が低下します。",
"The maximum number of tokens to generate. Possible values are between 1 and 8192.": "生成するトークンの最大数。可能な値は1から8192の間です。",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "-2.0 から 2.0 までの数字。 肯定的な値は、これまでのところテキストに表示されるかどうかに基づいて新しいトークンを罰し、モードが新しいトピックについて話す可能性を高めます。",
"The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself": "レスポンスのフォーマット 重要: JSON 出力を使用する場合は、モデルに JSON を自分で生成するように指示する必要があります",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.": "ランダム性を制御します。低くすると、ランダムな補完が少なくなります。温度がゼロに近づくにつれて、モデルは決定論的で反復的になります。0 から 2 の間。通常、これか top_p のどちらかを変更することをお勧めしますが、両方は推奨しません。",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.": "核サンプリングと呼ばれる、温度によるサンプリングの代替手段で、モデルは top_p の確率質量を持つトークンの結果を考慮します。つまり 0.1 は、上位10%の確率質量からなるトークンのみが考慮されることを意味します。値 <=1。通常、これか温度のどちらかを変更することをお勧めしますが、両方は推奨しません。",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.": "実行やフロー間でチャット履歴を共有し続けるメモリキーです。空のままにすると、Deepseek は以前のメッセージを記憶しません。",
"Array of roles to specify more accurate response": "より正確な応答を指定するロールの配列",
"Text": "テキスト",
"JSON": "JSON"
}


@@ -0,0 +1,26 @@
{
"\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.": "\n Volg deze instructies om uw DeepSeek API Key te verkrijgen:\n\n1. Bezoek de volgende website: https://platform.deepseek.com/api_keys.\n2. Lokaliseer en klik op de optie om uw DeepSeek API-sleutel te verkrijgen.",
"Ask Deepseek": "Vraag Deepseek",
"Ask Deepseek anything you want!": "Vraag Deepseek alles wat je maar wilt!",
"Model": "Model",
"Question": "Vraag",
"Frequency penalty": "Frequentieboete",
"Maximum Tokens": "Maximaal aantal tokens",
"Presence penalty": "Aanwezigheidsboete",
"Response Format": "Antwoord formaat",
"Temperature": "Temperatuur",
"Top P": "Top P",
"Memory Key": "Geheugen Sleutel",
"Roles": "Rollen",
"The model which will generate the completion.": "Het model dat de voltooiing zal genereren.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Nummer tussen -2.0 en 2.0. Positieve waarden bestraffen nieuwe tokens op basis van hun bestaande frequentie in de tekst tot nu toe, waardoor de waarschijnlijkheid van het model om dezelfde lijn te herhalen afneemt.",
"The maximum number of tokens to generate. Possible values are between 1 and 8192.": "Het maximale aantal te genereren tokens. Mogelijke waarden liggen tussen 1 en 8192.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Nummer tussen -2.0 en 2.0. Positieve waarden bestraffen nieuwe tokens op basis van de vraag of ze tot nu toe in de tekst staan, waardoor de modus meer kans maakt om over nieuwe onderwerpen te praten.",
"The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself": "Het formaat van de reactie. BELANGRIJK: Wanneer u JSON Uitvoer gebruikt, moet u het model ook instrueren om zelf JSON te produceren",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.": "Bestuurt willekeurigheid: Het verlagen van de temperatuur resulteert in minder willekeurige aanvullingen. Zodra de temperatuur nul nadert, zal het model deterministisch en herhalend worden. Tussen 0 en 2. We raden je over het algemeen aan om deze of top_p te wijzigen, maar niet allebei.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.": "Een alternatief voor sampling met temperatuur, genaamd nucleus sampling, waarbij het model de resultaten van de tokens met top_p waarschijnlijkheidsmassa beschouwt. Dus 0.1 betekent dat alleen de tokens die de top 10% waarschijnlijkheidsmassa vormen, worden overwogen. Waarden <=1. We raden over het algemeen aan om dit of de temperatuur te wijzigen, maar niet allebei.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.": "Een geheugensleutel waarmee de chatgeschiedenis gedeeld blijft over uitvoeringen en stromen. Houd het leeg om Deepseek zonder geheugen van vorige berichten te laten.",
"Array of roles to specify more accurate response": "Array of roles om een nauwkeuriger antwoord te geven",
"Text": "Tekst",
"JSON": "JSON"
}


@@ -0,0 +1,26 @@
{
"\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.": "\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.",
"Ask Deepseek": "Pergunte ao Deepseek",
"Ask Deepseek anything you want!": "Pergunte ao Deepseek qualquer coisa que quiser!",
"Model": "Modelo",
"Question": "Questão",
"Frequency penalty": "Penalidade de frequência",
"Maximum Tokens": "Máximo de Tokens",
"Presence penalty": "Penalidade de presença",
"Response Format": "Formato de Resposta",
"Temperature": "Temperatura",
"Top P": "Top P",
"Memory Key": "Chave de memória",
"Roles": "Papéis",
"The model which will generate the completion.": "O modelo que irá gerar a conclusão.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Número entre -2.0 e 2.0. Valores positivos penalizam novos tokens com base em sua frequência existente no texto até agora, diminuindo a probabilidade de o modelo repetir a mesma linha literalmente.",
"The maximum number of tokens to generate. Possible values are between 1 and 8192.": "O número máximo de tokens a gerar. Os possíveis valores estão entre 1 e 8192.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Número entre -2.0 e 2.0. Valores positivos penalizam novos tokens baseado no fato de eles aparecerem no texto até agora, aumentando a probabilidade de o modo falar sobre novos tópicos.",
"The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself": "O formato da resposta. IMPORTANTE: ao usar a saída JSON, você também deve instruir o modelo para produzir JSON você mesmo",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.": "Controla aleatoriedade: Diminuir resulta em menos complementos aleatórios. À medida que a temperatura se aproxima de zero, o modelo se tornará determinístico e repetitivo. Entre 0 e 2. Geralmente recomendamos alterar isto ou top_p mas não ambos.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.": "Uma alternativa à amostragem com temperatura, chamada amostragem núcleo, onde o modelo considera os resultados dos tokens com massa de probabilidade superior p. Então 0. significa apenas os tokens que contenham a maior massa de probabilidade de 10% são considerados. Valores <=1. Geralmente recomendamos alterar isto ou temperatura, mas não ambos.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.": "Uma chave de memória que manterá o histórico de bate-papo compartilhado entre execuções e fluxos. Deixe vazio para deixar o Deepseek sem a memória de mensagens anteriores.",
"Array of roles to specify more accurate response": "Array de papéis para especificar uma resposta mais precisa",
"Text": "texto",
"JSON": "JSON"
}

View File

@@ -0,0 +1,27 @@
{
"DeepSeek": "DeepSeek",
"\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.": "\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.",
"Ask Deepseek": "Спрашивать глубокий поиск",
"Ask Deepseek anything you want!": "Спросите Deepseek всё, что хотите!",
"Model": "Модель",
"Question": "Вопрос",
"Frequency penalty": "Периодичность штрафа",
"Maximum Tokens": "Максимум жетонов",
"Presence penalty": "Штраф присутствия",
"Response Format": "Формат ответа",
"Temperature": "Температура",
"Top P": "Верхний П",
"Memory Key": "Ключ памяти",
"Roles": "Роли",
"The model which will generate the completion.": "Модель, которая будет генерировать завершенность.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Номер между -2.0 и 2.0. Положительные значения наказывают новые токены на основе их существующей частоты в тексте до сих пор, уменьшая вероятность повторения одной и той же строки.",
"The maximum number of tokens to generate. Possible values are between 1 and 8192.": "Максимальное количество генерируемых токенов. Возможные значения - от 1 до 8192.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Номер между -2.0 и 2.0. Положительные значения наказывают новые токены на основании того, появляются ли они в тексте до сих пор, что повышает вероятность того, что режим будет обсуждать новые темы.",
"The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself": "Формат ответа. ВАЖНО: При использовании вывода JSON, вы также должны указать модель для создания JSON",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.": "Контролирует случайность: понижение результатов в менее случайном завершении. По мере нулевого температурного приближения модель становится детерминированной и повторяющей. Между 0 и 2. Обычно мы рекомендуем изменить это или top_p, но не и то и другое.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.": "Альтернатива выборочной пробы с температурой, называемой ядерным отбором, где модель рассматривает результаты жетонов с вероятностью top_p. Так 0. означает только жетоны, состоящие из массы наилучших 10%. Рассмотрены значения <=1. Мы обычно рекомендуем изменить эту величину или температуру, но не и то и другое.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.": "Ключ памяти, который сохранит историю чата общими между запусками и потоками. Оставьте пустым, чтобы оставить Deepseek без памяти предыдущих сообщений.",
"Array of roles to specify more accurate response": "Массив ролей для более точного ответа",
"Text": "Текст",
"JSON": "JSON"
}

View File

@@ -0,0 +1,26 @@
{
"\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.": "\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.",
"Ask Deepseek": "Ask Deepseek",
"Ask Deepseek anything you want!": "Ask Deepseek anything you want!",
"Model": "Model",
"Question": "Question",
"Frequency penalty": "Frequency penalty",
"Maximum Tokens": "Maximum Tokens",
"Presence penalty": "Presence penalty",
"Response Format": "Response Format",
"Temperature": "Temperature",
"Top P": "Top P",
"Memory Key": "Memory Key",
"Roles": "Roles",
"The model which will generate the completion.": "The model which will generate the completion.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"The maximum number of tokens to generate. Possible values are between 1 and 8192.": "The maximum number of tokens to generate. Possible values are between 1 and 8192.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
"The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself": "The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.",
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response",
"Text": "Text",
"JSON": "JSON"
}

View File

@@ -0,0 +1,27 @@
{
"DeepSeek": "DeepSeek",
"\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.": "\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.",
"Ask Deepseek": "Ask Deepseek",
"Ask Deepseek anything you want!": "Ask Deepseek anything you want!",
"Model": "Model",
"Question": "Question",
"Frequency penalty": "Frequency penalty",
"Maximum Tokens": "Maximum Tokens",
"Presence penalty": "Presence penalty",
"Response Format": "Response Format",
"Temperature": "Temperature",
"Top P": "Top P",
"Memory Key": "Memory Key",
"Roles": "Roles",
"The model which will generate the completion.": "The model which will generate the completion.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"The maximum number of tokens to generate. Possible values are between 1 and 8192.": "The maximum number of tokens to generate. Possible values are between 1 and 8192.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
"The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself": "The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.",
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response",
"Text": "Text",
"JSON": "JSON"
}

View File

@@ -0,0 +1,26 @@
{
"\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.": "\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.",
"Ask Deepseek": "Ask Deepseek",
"Ask Deepseek anything you want!": "Ask Deepseek anything you want!",
"Model": "Model",
"Question": "Question",
"Frequency penalty": "Frequency penalty",
"Maximum Tokens": "Maximum Tokens",
"Presence penalty": "Presence penalty",
"Response Format": "Response Format",
"Temperature": "Temperature",
"Top P": "Top P",
"Memory Key": "内存键",
"Roles": "角色",
"The model which will generate the completion.": "The model which will generate the completion.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"The maximum number of tokens to generate. Possible values are between 1 and 8192.": "The maximum number of tokens to generate. Possible values are between 1 and 8192.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
"The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself": "The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.",
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response",
"Text": "文本",
"JSON": "JSON"
}

View File

@@ -0,0 +1,53 @@
import { createPiece, PieceAuth } from "@activepieces/pieces-framework";
import { baseUrl, unauthorizedMessage } from "./lib/common/common";
import OpenAI from 'openai';
import { askDeepseek } from "./lib/actions/ask-deepseek";
import { PieceCategory } from "@activepieces/shared";
export const deepseekAuth = PieceAuth.SecretText({
description:`
Follow these instructions to get your DeepSeek API Key:
1. Visit the following website: https://platform.deepseek.com/api_keys.
2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.`,
displayName: 'API Key',
required: true,
validate: async (auth) => {
try {
const openai = new OpenAI({
baseURL: baseUrl,
apiKey: auth.auth,
});
const models = await openai.models.list();
if (models.data.length > 0) {
return {
valid: true,
};
}
return {
valid: false,
error: unauthorizedMessage,
};
} catch (e) {
return {
valid: false,
error: unauthorizedMessage,
};
}
},
});
export const deepseek = createPiece({
displayName: "DeepSeek",
auth: deepseekAuth,
categories: [PieceCategory.ARTIFICIAL_INTELLIGENCE],
minimumSupportedRelease: '0.36.1',
logoUrl: "https://cdn.activepieces.com/pieces/deepseek.png",
authors: ["PFernandez98"],
actions: [askDeepseek],
triggers: [],
});
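The `validate` callback above accepts a key when the DeepSeek models endpoint returns at least one model. That decision can be factored into a pure helper for unit testing — a sketch, not part of the commit; `toValidationResult` is a hypothetical name, and `unauthorizedMessage` is duplicated here from `./lib/common/common` to keep the example self-contained:

```typescript
// Shape of OpenAI's `models.list()` response, reduced to what the check uses.
interface ModelList { data: { id: string }[] }
interface ValidationResult { valid: boolean; error?: string }

// Mirrors the message exported from lib/common/common.ts.
const unauthorizedMessage = 'Error Occurred: 401 \nEnsure that your API key is valid. \n';

function toValidationResult(models: ModelList): ValidationResult {
  // A key is treated as valid only when it can see at least one model.
  if (models.data.length > 0) {
    return { valid: true };
  }
  return { valid: false, error: unauthorizedMessage };
}
```

Factoring the decision out this way lets the happy and unhappy paths be tested without a network call; the real callback also maps thrown SDK errors onto the same invalid result.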

View File

@@ -0,0 +1,198 @@
import { deepseekAuth } from '../../index';
import { createAction, Property, StoreScope } from "@activepieces/pieces-framework";
import OpenAI from 'openai';
import { baseUrl } from '../common/common';
import { z } from 'zod';
import { propsValidation } from '@activepieces/pieces-common';
export const askDeepseek = createAction({
auth: deepseekAuth,
name: 'ask_deepseek',
displayName: 'Ask Deepseek',
description: 'Ask Deepseek anything you want!',
props: {
model: Property.Dropdown({
auth: deepseekAuth,
displayName: 'Model',
required: true,
description: 'The model which will generate the completion.',
refreshers: [],
defaultValue: 'deepseek-chat',
options: async ({ auth }) => {
if (!auth) {
return {
disabled: true,
placeholder: 'Enter your API key first',
options: [],
};
}
try {
const openai = new OpenAI({
baseURL: baseUrl,
apiKey: auth as string,
});
const response = await openai.models.list();
// The models endpoint lists every model the API key can access
const models = response.data;
return {
disabled: false,
options: models.map((model) => {
return {
label: model.id,
value: model.id,
};
}),
};
} catch (error) {
return {
disabled: true,
options: [],
placeholder: "Couldn't load models, API key is invalid",
};
}
},
}),
prompt: Property.LongText({
displayName: 'Question',
required: true,
}),
frequencyPenalty: Property.Number({
displayName: 'Frequency penalty',
required: false,
description:
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
defaultValue: 0,
}),
maxTokens: Property.Number({
displayName: 'Maximum Tokens',
required: true,
description:
'The maximum number of tokens to generate. Possible values are between 1 and 8192.',
defaultValue: 4096,
}),
presencePenalty: Property.Number({
displayName: 'Presence penalty',
required: false,
description:
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
defaultValue: 0,
}),
responseFormat: Property.StaticDropdown({
displayName: 'Response Format',
description:
'The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself',
required: true,
defaultValue: 'text',
options: {
options: [
{
label: 'Text',
value: 'text',
},
{
label: 'JSON',
value: 'json_object',
},
],
},
}),
temperature: Property.Number({
displayName: 'Temperature',
required: false,
description:
'Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.',
defaultValue: 1,
}),
topP: Property.Number({
displayName: 'Top P',
required: false,
description:
'An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.',
defaultValue: 1,
}),
memoryKey: Property.ShortText({
displayName: 'Memory Key',
description:
'A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.',
required: false,
}),
roles: Property.Json({
displayName: 'Roles',
required: false,
description: 'Array of roles to specify more accurate response',
defaultValue: [
{ role: 'system', content: 'You are a helpful assistant.' },
],
}),
},
async run({ auth, propsValue, store }) {
await propsValidation.validateZod(propsValue, {
temperature: z.number().min(0).max(2).optional(),
memoryKey: z.string().max(128).optional(),
});
const openai = new OpenAI({
baseURL: baseUrl,
apiKey: auth as string,
});
const {
model,
temperature,
maxTokens,
topP,
frequencyPenalty,
presencePenalty,
responseFormat,
prompt,
memoryKey,
} = propsValue;
let messageHistory: any[] = [];
// If memory key is set, retrieve messages stored in history
if (memoryKey) {
messageHistory = (await store.get(memoryKey, StoreScope.PROJECT)) ?? [];
}
// Add user prompt to message history
messageHistory.push({
role: 'user',
content: prompt,
});
// Add system instructions if set by user
const rolesArray = propsValue.roles ? (propsValue.roles as any) : [];
const roles = rolesArray.map((item: any) => {
const rolesEnum = ['system', 'user', 'assistant'];
if (!rolesEnum.includes(item.role)) {
throw new Error(
'The only available roles are: [system, user, assistant]'
);
}
return {
role: item.role,
content: item.content,
};
});
// Send prompt
const completion = await openai.chat.completions.create({
model: model,
messages: [...roles, ...messageHistory],
temperature: temperature,
max_tokens: maxTokens,
top_p: topP,
frequency_penalty: frequencyPenalty,
presence_penalty: presencePenalty,
response_format: responseFormat === "json_object" ? { "type": "json_object" } : { "type": "text" },
});
messageHistory = [...messageHistory, completion.choices[0].message];
if (memoryKey) {
await store.put(memoryKey, messageHistory, StoreScope.PROJECT);
}
return completion.choices[0].message.content;
},
});
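The run handler above checks each entry of the `roles` property against a whitelist before merging it with the stored history. That check is a small pure function and can be sketched on its own for testing — `normalizeRoles` and `ChatMessage` are hypothetical names introduced for this example, not identifiers from the commit:

```typescript
// Roles accepted by the chat completions API, as enforced in the run handler.
type ChatRole = 'system' | 'user' | 'assistant';
interface ChatMessage { role: ChatRole; content: string }

function normalizeRoles(items: { role: string; content: string }[]): ChatMessage[] {
  const allowed: ChatRole[] = ['system', 'user', 'assistant'];
  return items.map((item) => {
    // Reject anything outside the whitelist, matching the action's error text.
    if (!allowed.includes(item.role as ChatRole)) {
      throw new Error('The only available roles are: [system, user, assistant]');
    }
    // Strip any extra fields so only role and content reach the API.
    return { role: item.role as ChatRole, content: item.content };
  });
}
```

The normalized array is what gets prepended to the user's message history in the `messages` parameter of `chat.completions.create`, so a malformed role fails fast instead of producing an opaque API error.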

View File

@@ -0,0 +1,5 @@
export const baseUrl = 'https://api.deepseek.com';
export const unauthorizedMessage = `Error Occurred: 401 \n
Ensure that your API key is valid. \n`;