Add Activepieces integration for workflow automation
- Add Activepieces fork with SmoothSchedule custom piece
- Create integrations app with Activepieces service layer
- Add embed token endpoint for iframe integration
- Create Automations page with embedded workflow builder
- Add sidebar visibility fix for embed mode
- Add list inactive customers endpoint to Public API
- Include SmoothSchedule triggers: event created/updated/cancelled
- Include SmoothSchedule actions: create/update/cancel events, list resources/services/customers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
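The commit message above enumerates the SmoothSchedule piece's triggers and actions. As a rough illustration of how such a piece would be registered with the Activepieces framework used throughout this diff, a hypothetical sketch follows; the module paths, trigger/action names, logo URL, and auth shape are assumptions, not the actual SmoothSchedule implementation.

// Hypothetical sketch only: names and paths below are illustrative, not this commit's code.
import { createPiece, PieceAuth } from '@activepieces/pieces-framework';
import { eventCreated, eventUpdated, eventCancelled } from './lib/triggers'; // assumed modules
import {
  createEvent,
  updateEvent,
  cancelEvent,
  listResources,
  listServices,
  listCustomers,
} from './lib/actions'; // assumed modules

export const smoothschedule = createPiece({
  displayName: 'SmoothSchedule',
  description: 'Automate SmoothSchedule events, resources, services and customers.',
  minimumSupportedRelease: '0.30.0',
  logoUrl: 'https://example.com/smoothschedule.png', // placeholder logo
  auth: PieceAuth.SecretText({ displayName: 'API Key', required: true }),
  authors: [],
  triggers: [eventCreated, eventUpdated, eventCancelled],
  actions: [
    createEvent,
    updateEvent,
    cancelEvent,
    listResources,
    listServices,
    listCustomers,
  ],
});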
@@ -0,0 +1,41 @@
|
||||
{
|
||||
"The free, Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required.": "Der kostenlose, selbstgehostete, von der Community betriebene und lokal-zuerst. Drop-In Ersatz für OpenAI auf verbraucherfähiger Hardware. Keine GPU erforderlich.",
|
||||
"Server URL": "Server-URL",
|
||||
"Access Token": "Zugangs-Token",
|
||||
"LocalAI Instance URL": "LocalAI Instance URL",
|
||||
"LocalAI Access Token": "LokalAI-Zugangs-Token",
|
||||
"Ask LocalAI": "LocalAI fragen",
|
||||
"Custom API Call": "Eigener API-Aufruf",
|
||||
"Ask LocalAI anything you want!": "Fragen Sie LocalAI was Sie wollen!",
|
||||
"Make a custom API call to a specific endpoint": "Einen benutzerdefinierten API-Aufruf an einen bestimmten Endpunkt machen",
|
||||
"Model": "Modell",
|
||||
"Question": "Frage",
|
||||
"Temperature": "Temperatur",
|
||||
"Maximum Tokens": "Maximale Token",
|
||||
"Top P": "Oben P",
|
||||
"Frequency penalty": "Frequenz Strafe",
|
||||
"Presence penalty": "Präsenzstrafe",
|
||||
"Roles": "Rollen",
|
||||
"Method": "Methode",
|
||||
"Headers": "Kopfzeilen",
|
||||
"Query Parameters": "Abfrageparameter",
|
||||
"Body": "Körper",
|
||||
"Response is Binary ?": "Antwort ist binär?",
|
||||
"No Error on Failure": "Kein Fehler bei Fehler",
|
||||
"Timeout (in seconds)": "Timeout (in Sekunden)",
|
||||
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "Das Modell, das die Vervollständigung generiert. Einige Modelle eignen sich für Aufgaben der natürlichen Sprache, andere sind auf Code spezialisiert.",
|
||||
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Kontrolliert Zufallszufälligkeit: Die Verringerung führt zu weniger zufälligen Vervollständigungen. Je näher die Temperatur Null rückt, desto deterministischer und sich wiederholender wird.",
|
||||
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "Die maximale Anzahl der zu generierenden Token. Anfragen können bis zu 2.048 oder 4.096 Token verwenden, die zwischen Prompt und Fertigstellung geteilt werden, setzen Sie den Wert nicht auf maximal und lassen Sie einige Token für die Eingabe. Das genaue Limit variiert je nach Modell. (Ein Token ist ungefähr 4 Zeichen für den normalen englischen Text)",
|
||||
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Eine Alternative zur Probenahme mit Temperatur, genannt Nucleus Probenahme, bei der das Modell die Ergebnisse der Tokens mit der Top_p Wahrscheinlichkeitsmasse berücksichtigt. 0,1 bedeutet also nur die Token, die die obersten 10% Wahrscheinlichkeitsmasse ausmachen.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Nummer zwischen -2.0 und 2.0. Positive Werte bestrafen neue Tokens aufgrund ihrer bisherigen Häufigkeit im Text, wodurch sich die Wahrscheinlichkeit verringert, dass das Modell dieselbe Zeile wörtlich wiederholt.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Nummer zwischen -2.0 und 2.0. Positive Werte bestrafen neue Tokens je nachdem, ob sie bisher im Text erscheinen, was die Wahrscheinlichkeit erhöht, über neue Themen zu sprechen.",
|
||||
"Array of roles to specify more accurate response": "Rollenzuordnung, um eine genauere Antwort anzugeben",
|
||||
"Authorization headers are injected automatically from your connection.": "Autorisierungs-Header werden automatisch von Ihrer Verbindung injiziert.",
|
||||
"Enable for files like PDFs, images, etc..": "Aktivieren für Dateien wie PDFs, Bilder, etc..",
|
||||
"GET": "ERHALTEN",
|
||||
"POST": "POST",
|
||||
"PATCH": "PATCH",
|
||||
"PUT": "PUT",
|
||||
"DELETE": "LÖSCHEN",
|
||||
"HEAD": "HEAD"
|
||||
}
|
||||
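The locale files in this diff (German above, followed by Spanish, French, Japanese, Dutch, Portuguese, Russian, English, and Chinese) all share the same structure: each entry is keyed by the English source string and maps to its translation. A minimal, purely illustrative lookup that falls back to the source text when an entry is missing could look like the sketch below; the helper is an assumption, not part of the commit.

// Illustrative helper only; not part of this commit.
type Translations = Record<string, string>;

export function translate(source: string, translations: Translations): string {
  // Locale entries are keyed by the English source string, so a missing
  // entry simply falls back to the untranslated text.
  return translations[source] ?? source;
}

// With the German file above: translate('Access Token', deTranslations) === 'Zugangs-Token'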
@@ -0,0 +1,41 @@
|
||||
{
|
||||
"The free, Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required.": "El gratis, autoalojado, impulsado por la comunidad y local-primero. Reemplazo para OpenAI corriendo en hardware de grado de consumo. No se requiere GPU.",
|
||||
"Server URL": "URL del servidor",
|
||||
"Access Token": "Token de acceso",
|
||||
"LocalAI Instance URL": "LocalAI Instance URL",
|
||||
"LocalAI Access Token": "Ficha de acceso local",
|
||||
"Ask LocalAI": "Preguntar LocalAI",
|
||||
"Custom API Call": "Llamada API personalizada",
|
||||
"Ask LocalAI anything you want!": "Pregúntale a LocalAI lo que quieras!",
|
||||
"Make a custom API call to a specific endpoint": "Hacer una llamada API personalizada a un extremo específico",
|
||||
"Model": "Modelo",
|
||||
"Question": "Pregunta",
|
||||
"Temperature": "Temperatura",
|
||||
"Maximum Tokens": "Tokens máximos",
|
||||
"Top P": "Top P",
|
||||
"Frequency penalty": "Puntuación de frecuencia",
|
||||
"Presence penalty": "Penalización de presencia",
|
||||
"Roles": "Roles",
|
||||
"Method": "Método",
|
||||
"Headers": "Encabezados",
|
||||
"Query Parameters": "Parámetros de consulta",
|
||||
"Body": "Cuerpo",
|
||||
"Response is Binary ?": "¿Respuesta es binaria?",
|
||||
"No Error on Failure": "No hay ningún error en fallo",
|
||||
"Timeout (in seconds)": "Tiempo de espera (en segundos)",
|
||||
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "El modelo que generará la terminación. Algunos modelos son adecuados para tareas de lenguaje natural, otros se especializan en código.",
|
||||
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controles aleatorios: La reducción de resultados en terminaciones menos aleatorias. A medida que la temperatura se acerca a cero, el modelo se volverá determinista y repetitivo.",
|
||||
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "El número máximo de tokens a generar. Las solicitudes pueden usar hasta 2,048 o 4,096 tokens compartidos entre prompt y completation, no establezca el valor máximo y deje algunas fichas para la entrada. El límite exacto varía según el modelo. (Un token es de aproximadamente 4 caracteres para el texto normal en inglés)",
|
||||
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Una alternativa al muestreo con temperatura, llamado muestreo de núcleos, donde el modelo considera los resultados de los tokens con masa de probabilidad superior_p. Por lo tanto, 0,1 significa que sólo se consideran las fichas que componen la masa superior del 10% de probabilidad.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Número entre -2.0 y 2.0. Los valores positivos penalizan nuevos tokens basados en su frecuencia existente en el texto hasta ahora, lo que reduce la probabilidad del modelo de repetir la misma línea literalmente.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Número entre -2.0 y 2.0. Los valores positivos penalizan las nuevas fichas basándose en si aparecen en el texto hasta ahora, aumentando la probabilidad de que el modo hable de nuevos temas.",
|
||||
"Array of roles to specify more accurate response": "Matriz de roles para especificar una respuesta más precisa",
|
||||
"Authorization headers are injected automatically from your connection.": "Las cabeceras de autorización se inyectan automáticamente desde tu conexión.",
|
||||
"Enable for files like PDFs, images, etc..": "Activar para archivos como PDFs, imágenes, etc.",
|
||||
"GET": "RECOGER",
|
||||
"POST": "POST",
|
||||
"PATCH": "PATCH",
|
||||
"PUT": "PUT",
|
||||
"DELETE": "BORRAR",
|
||||
"HEAD": "LIMPIO"
|
||||
}
|
||||
@@ -0,0 +1,41 @@
|
||||
{
|
||||
"The free, Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required.": "La version gratuite, auto-hébergée, gérée par la communauté et localisée d'abord. Remplacement de remplacement pour OpenAI fonctionnant sur le matériel de consommation. Aucun GPU requis.",
|
||||
"Server URL": "URL du serveur",
|
||||
"Access Token": "Jeton d'accès",
|
||||
"LocalAI Instance URL": "LocalAI Instance URL",
|
||||
"LocalAI Access Token": "Jeton d'accès LocalAI",
|
||||
"Ask LocalAI": "Demander à l'IA locale",
|
||||
"Custom API Call": "Appel API personnalisé",
|
||||
"Ask LocalAI anything you want!": "Demandez à LocalAI ce que vous voulez !",
|
||||
"Make a custom API call to a specific endpoint": "Passez un appel API personnalisé à un point de terminaison spécifique",
|
||||
"Model": "Modélisation",
|
||||
"Question": "Question",
|
||||
"Temperature": "Température",
|
||||
"Maximum Tokens": "Maximum de jetons",
|
||||
"Top P": "Top P",
|
||||
"Frequency penalty": "Malus de fréquence",
|
||||
"Presence penalty": "Malus de présence",
|
||||
"Roles": "Rôles",
|
||||
"Method": "Méthode",
|
||||
"Headers": "En-têtes",
|
||||
"Query Parameters": "Paramètres de requête",
|
||||
"Body": "Corps",
|
||||
"Response is Binary ?": "La réponse est Binaire ?",
|
||||
"No Error on Failure": "Aucune erreur en cas d'échec",
|
||||
"Timeout (in seconds)": "Délai d'attente (en secondes)",
|
||||
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "Le modèle qui va générer la complétion. Certains modèles sont adaptés aux tâches de langage naturel, d'autres se spécialisent dans le code.",
|
||||
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Contrôle aléatoirement : La baisse des résultats est moins aléatoire, alors que la température approche de zéro, le modèle devient déterministe et répétitif.",
|
||||
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "Le nombre maximum de jetons à générer. Les requêtes peuvent utiliser jusqu'à 2 048 ou 4 096 jetons partagés entre l'invite et la complétion, ne pas définir la valeur au maximum et laisser des jetons pour l'entrée. La limite exacte varie selon le modèle. (un jeton est d'environ 4 caractères pour le texte anglais normal)",
|
||||
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Une alternative à l'échantillonnage à la température, appelée l'échantillonnage du noyau, où le modèle considère les résultats des jetons avec la masse de probabilité top_p. Ainsi, 0,1 signifie que seuls les jetons constituant la masse de probabilité la plus élevée de 10% sont pris en compte.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Numéroter entre -2.0 et 2.0. Les valeurs positives pénalisent les nouveaux jetons en fonction de leur fréquence existante dans le texte jusqu'à présent, diminuant la probabilité du modèle de répéter le verbatim de la même ligne.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Numéroter entre -2.0 et 2.0. Les valeurs positives pénalisent les nouveaux jetons en fonction du fait qu'ils apparaissent dans le texte jusqu'à présent, ce qui augmente la probabilité du mode de parler de nouveaux sujets.",
|
||||
"Array of roles to specify more accurate response": "Tableau de rôles pour spécifier une réponse plus précise",
|
||||
"Authorization headers are injected automatically from your connection.": "Les en-têtes d'autorisation sont injectés automatiquement à partir de votre connexion.",
|
||||
"Enable for files like PDFs, images, etc..": "Activer pour les fichiers comme les PDFs, les images, etc.",
|
||||
"GET": "OBTENIR",
|
||||
"POST": "POSTER",
|
||||
"PATCH": "PATCH",
|
||||
"PUT": "EFFACER",
|
||||
"DELETE": "SUPPRIMER",
|
||||
"HEAD": "TÊTE"
|
||||
}
|
||||
@@ -0,0 +1,41 @@
|
||||
{
|
||||
"The free, Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required.": "フリー、セルフホスティングされた、コミュニティ主導のローカルファースト。コンシューマーグレードのハードウェア上で動作する OpenAI のドロップイン交換。GPU は必要ありません。",
|
||||
"Server URL": "サーバー URL",
|
||||
"Access Token": "アクセストークン",
|
||||
"LocalAI Instance URL": "LocalAI Instance URL",
|
||||
"LocalAI Access Token": "LocalAI アクセストークン",
|
||||
"Ask LocalAI": "LocalAI に聞く",
|
||||
"Custom API Call": "カスタムAPI通話",
|
||||
"Ask LocalAI anything you want!": "あなたが望むものは何でもLocalaiに聞いてみてください!",
|
||||
"Make a custom API call to a specific endpoint": "特定のエンドポイントへのカスタム API コールを実行します。",
|
||||
"Model": "モデル",
|
||||
"Question": "質問",
|
||||
"Temperature": "温度",
|
||||
"Maximum Tokens": "最大トークン",
|
||||
"Top P": "トップ P",
|
||||
"Frequency penalty": "頻度ペナルティ",
|
||||
"Presence penalty": "プレゼンスペナルティ",
|
||||
"Roles": "ロール",
|
||||
"Method": "方法",
|
||||
"Headers": "ヘッダー",
|
||||
"Query Parameters": "クエリパラメータ",
|
||||
"Body": "本文",
|
||||
"Response is Binary ?": "応答はバイナリですか?",
|
||||
"No Error on Failure": "失敗時にエラーはありません",
|
||||
"Timeout (in seconds)": "タイムアウト(秒)",
|
||||
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "補完を生成するモデル。自然言語タスクに適しているモデルもあれば、コードに特化したモデルもあります。",
|
||||
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "温度がゼロに近づくにつれて、モデルは決定論的で反復的になります。",
|
||||
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "生成するトークンの最大数。 リクエストは、プロンプトと完了の間で共有された最大2,048または4,096トークンを使用できます。 値を最大値に設定しないで、トークンを入力してください。 正確な制限はモデルによって異なります。(通常の英文では1つのトークンが約4文字です)",
|
||||
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "核サンプリングと呼ばれる温度でのサンプリングの代わりに、モデルはtop_p確率質量を持つトークンの結果を考慮します。 つまり、0.1は上位10%の確率質量からなるトークンのみを考慮することになります。",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "-2.0 から 2.0 までの数字。 正の値は、これまでのテキスト内の既存の頻度に基づいて新しいトークンを罰するため、モデルが同じ行を元に繰り返す可能性が低下します。",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "-2.0 から 2.0 までの数字。 肯定的な値は、これまでのところテキストに表示されるかどうかに基づいて新しいトークンを罰し、モードが新しいトピックについて話す可能性を高めます。",
|
||||
"Array of roles to specify more accurate response": "より正確な応答を指定するロールの配列",
|
||||
"Authorization headers are injected automatically from your connection.": "認証ヘッダは接続から自動的に注入されます。",
|
||||
"Enable for files like PDFs, images, etc..": "PDF、画像などのファイルを有効にします。",
|
||||
"GET": "取得",
|
||||
"POST": "POST",
|
||||
"PATCH": "PATCH",
|
||||
"PUT": "PUT",
|
||||
"DELETE": "削除",
|
||||
"HEAD": "頭"
|
||||
}
|
||||
@@ -0,0 +1,41 @@
|
||||
{
|
||||
"The free, Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required.": "De gratis, Zelf-hosted, community-gedreven en lokaal eerst. Drop-in vervanging voor OpenAI draait op hardware van consumentenformaat. Geen GPU vereist.",
|
||||
"Server URL": "Server URL",
|
||||
"Access Token": "Toegangs-token",
|
||||
"LocalAI Instance URL": "LocalAI Instance URL",
|
||||
"LocalAI Access Token": "LocalAI toegangstoken",
|
||||
"Ask LocalAI": "Vraag LocalAI",
|
||||
"Custom API Call": "Custom API Call",
|
||||
"Ask LocalAI anything you want!": "Vraag LocalAI alles wat je wilt!",
|
||||
"Make a custom API call to a specific endpoint": "Maak een aangepaste API call naar een specifiek eindpunt",
|
||||
"Model": "Model",
|
||||
"Question": "Vraag",
|
||||
"Temperature": "Temperatuur",
|
||||
"Maximum Tokens": "Maximaal aantal tokens",
|
||||
"Top P": "Boven P",
|
||||
"Frequency penalty": "Frequentie boete",
|
||||
"Presence penalty": "Presence boete",
|
||||
"Roles": "Rollen",
|
||||
"Method": "Methode",
|
||||
"Headers": "Kopteksten",
|
||||
"Query Parameters": "Query parameters",
|
||||
"Body": "Lichaam",
|
||||
"Response is Binary ?": "Antwoord is binair?",
|
||||
"No Error on Failure": "Geen fout bij fout",
|
||||
"Timeout (in seconds)": "Time-out (in seconden)",
|
||||
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "Het model dat de voltooiing zal genereren. Sommige modellen zijn geschikt voor natuurlijke taaltaken, andere zijn gespecialiseerd in code.",
|
||||
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Bestuurt willekeurigheid: Het verlagen van de temperatuur resulteert in minder willekeurige aanvullingen. Zodra de temperatuur nul nadert, zal het model deterministisch en herhalend worden.",
|
||||
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "Het maximum aantal te genereren tokens Verzoeken kunnen maximaal 2.048 of 4.096 tokens gedeeld worden tussen prompt en voltooiing, stel de waarde niet in op maximum en laat sommige tokens voor de invoer. De exacte limiet varieert per model. (Eén token is grofweg 4 tekens voor normale Engelse tekst)",
|
||||
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Een alternatief voor bemonstering met de temperatuur, genaamd nucleus sampling, waarbij het model de resultaten van de tokens met top_p waarschijnlijkheid ziet. 0.1 betekent dus dat alleen de tokens die de grootste massa van 10 procent vormen, worden overwogen.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Nummer tussen -2.0 en 2.0. Positieve waarden bestraffen nieuwe tokens op basis van hun bestaande frequentie in de tekst tot nu toe, waardoor de waarschijnlijkheid van het model om dezelfde lijn te herhalen afneemt.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Nummer tussen -2.0 en 2.0. Positieve waarden bestraffen nieuwe tokens op basis van de vraag of ze tot nu toe in de tekst staan, waardoor de modus meer kans maakt om over nieuwe onderwerpen te praten.",
|
||||
"Array of roles to specify more accurate response": "Array of roles om een nauwkeuriger antwoord te geven",
|
||||
"Authorization headers are injected automatically from your connection.": "Autorisatie headers worden automatisch geïnjecteerd vanuit uw verbinding.",
|
||||
"Enable for files like PDFs, images, etc..": "Inschakelen voor bestanden zoals PDF's, afbeeldingen etc..",
|
||||
"GET": "KRIJG",
|
||||
"POST": "POSTE",
|
||||
"PATCH": "BEKIJK",
|
||||
"PUT": "PUT",
|
||||
"DELETE": "VERWIJDEREN",
|
||||
"HEAD": "HOOFD"
|
||||
}
|
||||
@@ -0,0 +1,41 @@
|
||||
{
|
||||
"The free, Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required.": "O livre, Auto-hospedado, impulsionado pela comunidade e local-primeiro. Substituição de drop-in para OpenAI rodando em hardware de nível de consumidor. Não é necessária GPU.",
|
||||
"Server URL": "URL do servidor",
|
||||
"Access Token": "Token de acesso",
|
||||
"LocalAI Instance URL": "LocalAI Instance URL",
|
||||
"LocalAI Access Token": "Token de acesso LocalAI",
|
||||
"Ask LocalAI": "Perguntar LocalAI",
|
||||
"Custom API Call": "Chamada de API personalizada",
|
||||
"Ask LocalAI anything you want!": "Pergunte ao LocalAI o que você quiser!",
|
||||
"Make a custom API call to a specific endpoint": "Faça uma chamada de API personalizada para um ponto de extremidade específico",
|
||||
"Model": "Modelo",
|
||||
"Question": "Questão",
|
||||
"Temperature": "Temperatura",
|
||||
"Maximum Tokens": "Máximo de Tokens",
|
||||
"Top P": "Superior P",
|
||||
"Frequency penalty": "Penalidade de frequência",
|
||||
"Presence penalty": "Penalidade de presença",
|
||||
"Roles": "Papéis",
|
||||
"Method": "Método",
|
||||
"Headers": "Cabeçalhos",
|
||||
"Query Parameters": "Parâmetros da consulta",
|
||||
"Body": "Conteúdo",
|
||||
"Response is Binary ?": "A resposta é binária ?",
|
||||
"No Error on Failure": "Nenhum erro no Failure",
|
||||
"Timeout (in seconds)": "Tempo limite (em segundos)",
|
||||
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "O modelo que irá gerar a conclusão. Alguns modelos são adequados para tarefas de linguagem natural, outros são especializados no código.",
|
||||
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controla aleatoriedade: Diminuir resulta em menos complementos aleatórios. À medida que a temperatura se aproxima de zero, o modelo se tornará determinístico e repetitivo.",
|
||||
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "O número máximo de tokens a gerar. Solicitações podem usar até 2.048 ou 4.096 tokens compartilhados entre prompt e conclusão, não defina o valor como máximo e deixe algumas fichas para a entrada. O limite exato varia por modelo. (Um token é aproximadamente 4 caracteres para o texto normal em inglês)",
|
||||
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Uma alternativa à amostragem com temperatura, chamada amostragem núcleo, onde o modelo considera os resultados dos tokens com massa de probabilidade superior (P). Portanto, 0,1 significa que apenas os tokens que incluem a massa de probabilidade superior de 10% são considerados.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Número entre -2.0 e 2.0. Valores positivos penalizam novos tokens baseados em sua frequência existente no texto até agora, diminuindo a probabilidade do modelo repetir o verbal da mesma linha.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Número entre -2.0 e 2.0. Valores positivos penalizam novos tokens baseado no fato de eles aparecerem no texto até agora, aumentando a probabilidade de o modo falar sobre novos tópicos.",
|
||||
"Array of roles to specify more accurate response": "Array de papéis para especificar uma resposta mais precisa",
|
||||
"Authorization headers are injected automatically from your connection.": "Os cabeçalhos de autorização são inseridos automaticamente a partir da sua conexão.",
|
||||
"Enable for files like PDFs, images, etc..": "Habilitar para arquivos como PDFs, imagens, etc..",
|
||||
"GET": "OBTER",
|
||||
"POST": "POSTAR",
|
||||
"PATCH": "COMPRAR",
|
||||
"PUT": "COLOCAR",
|
||||
"DELETE": "EXCLUIR",
|
||||
"HEAD": "CABEÇA"
|
||||
}
|
||||
@@ -0,0 +1,40 @@
|
||||
{
|
||||
"LocalAI": "Локальный AI",
|
||||
"The free, Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required.": "Свободный самохостинг, управляемый сообществом и сперва выкладывать замену OpenAI на потребительском оборудовании. Не требуется GPU.",
|
||||
"Server URL": "URL сервера",
|
||||
"Access Token": "Маркер доступа",
|
||||
"LocalAI Instance URL": "LocalAI Instance URL",
|
||||
"LocalAI Access Token": "маркер доступа LocalAI",
|
||||
"Ask LocalAI": "Спросить LocalAI",
|
||||
"Custom API Call": "Пользовательский вызов API",
|
||||
"Ask LocalAI anything you want!": "Спросите LocalAI все, что вы хотите!",
|
||||
"Make a custom API call to a specific endpoint": "Сделать пользовательский API вызов к определенной конечной точке",
|
||||
"Model": "Модель",
|
||||
"Question": "Вопрос",
|
||||
"Temperature": "Температура",
|
||||
"Maximum Tokens": "Максимум жетонов",
|
||||
"Top P": "Верхний П",
|
||||
"Frequency penalty": "Периодичность штрафа",
|
||||
"Presence penalty": "Штраф присутствия",
|
||||
"Roles": "Роли",
|
||||
"Method": "Метод",
|
||||
"Headers": "Заголовки",
|
||||
"Query Parameters": "Параметры запроса",
|
||||
"Body": "Тело",
|
||||
"No Error on Failure": "Нет ошибок при ошибке",
|
||||
"Timeout (in seconds)": "Таймаут (в секундах)",
|
||||
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "Модель, которая будет генерировать дополнение, некоторые модели подходят для естественных языковых задач, другие специализируются на программировании.",
|
||||
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Контролирует случайность: понижение результатов в менее случайном завершении. По мере нулевого температурного приближения модель становится детерминированной и повторяющей.",
|
||||
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "Максимальное количество фишек для генерации Запросы могут использовать до 2 048 или 4 096 токенов, которыми обменивались между оперативными и завершенными действиями, не устанавливайте максимальное значение и оставляйте некоторые токены для ввода. In the twentieth century, Russian is widely taught in the schools of the members of the old Warsaw Pact and in other countries of the former Soviet Union.",
|
||||
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Альтернатива отоплению с температурой, называемой ядерным отбором, где модель рассматривает результаты жетонов с вероятностью top_p. Таким образом, 0.1 означает, что учитываются только жетоны, состоящие из массы 10% наивысшего уровня.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Номер между -2.0 и 2.0. Положительные значения наказывают новые токены на основе их существующей частоты в тексте до сих пор, уменьшая вероятность повторения одной и той же строки.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Номер между -2.0 и 2.0. Положительные значения наказывают новые токены на основании того, появляются ли они в тексте до сих пор, что повышает вероятность того, что режим будет обсуждать новые темы.",
|
||||
"Array of roles to specify more accurate response": "Массив ролей для более точного ответа",
|
||||
"Authorization headers are injected automatically from your connection.": "Заголовки авторизации включаются автоматически из вашего соединения.",
|
||||
"GET": "ПОЛУЧИТЬ",
|
||||
"POST": "ПОСТ",
|
||||
"PATCH": "ПАТЧ",
|
||||
"PUT": "ПОКУПИТЬ",
|
||||
"DELETE": "УДАЛИТЬ",
|
||||
"HEAD": "HEAD"
|
||||
}
|
||||
@@ -0,0 +1,41 @@
|
||||
{
|
||||
"The free, Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required.": "The free, Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required.",
|
||||
"Server URL": "Server URL",
|
||||
"Access Token": "Access Token",
|
||||
"LocalAI Instance URL": "LocalAI Instance URL",
|
||||
"LocalAI Access Token": "LocalAI Access Token",
|
||||
"Ask LocalAI": "Ask LocalAI",
|
||||
"Custom API Call": "Custom API Call",
|
||||
"Ask LocalAI anything you want!": "Ask LocalAI anything you want!",
|
||||
"Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint",
|
||||
"Model": "Model",
|
||||
"Question": "Question",
|
||||
"Temperature": "Temperature",
|
||||
"Maximum Tokens": "Maximum Tokens",
|
||||
"Top P": "Top P",
|
||||
"Frequency penalty": "Frequency penalty",
|
||||
"Presence penalty": "Presence penalty",
|
||||
"Roles": "Roles",
|
||||
"Method": "Method",
|
||||
"Headers": "Headers",
|
||||
"Query Parameters": "Query Parameters",
|
||||
"Body": "Body",
|
||||
"Response is Binary ?": "Response is Binary ?",
|
||||
"No Error on Failure": "No Error on Failure",
|
||||
"Timeout (in seconds)": "Timeout (in seconds)",
|
||||
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.",
|
||||
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
|
||||
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)",
|
||||
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
|
||||
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response",
|
||||
"Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.",
|
||||
"Enable for files like PDFs, images, etc..": "Enable for files like PDFs, images, etc..",
|
||||
"GET": "GET",
|
||||
"POST": "POST",
|
||||
"PATCH": "PATCH",
|
||||
"PUT": "PUT",
|
||||
"DELETE": "DELETE",
|
||||
"HEAD": "HEAD"
|
||||
}
|
||||
@@ -0,0 +1,40 @@
|
||||
{
|
||||
"LocalAI": "LocalAI",
|
||||
"The free, Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required.": "The free, Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required.",
|
||||
"Server URL": "Server URL",
|
||||
"Access Token": "Access Token",
|
||||
"LocalAI Instance URL": "LocalAI Instance URL",
|
||||
"LocalAI Access Token": "LocalAI Access Token",
|
||||
"Ask LocalAI": "Ask LocalAI",
|
||||
"Custom API Call": "Custom API Call",
|
||||
"Ask LocalAI anything you want!": "Ask LocalAI anything you want!",
|
||||
"Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint",
|
||||
"Model": "Model",
|
||||
"Question": "Question",
|
||||
"Temperature": "Temperature",
|
||||
"Maximum Tokens": "Maximum Tokens",
|
||||
"Top P": "Top P",
|
||||
"Frequency penalty": "Frequency penalty",
|
||||
"Presence penalty": "Presence penalty",
|
||||
"Roles": "Roles",
|
||||
"Method": "Method",
|
||||
"Headers": "Headers",
|
||||
"Query Parameters": "Query Parameters",
|
||||
"Body": "Body",
|
||||
"No Error on Failure": "No Error on Failure",
|
||||
"Timeout (in seconds)": "Timeout (in seconds)",
|
||||
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.",
|
||||
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
|
||||
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)",
|
||||
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
|
||||
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response",
|
||||
"Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.",
|
||||
"GET": "GET",
|
||||
"POST": "POST",
|
||||
"PATCH": "PATCH",
|
||||
"PUT": "PUT",
|
||||
"DELETE": "DELETE",
|
||||
"HEAD": "HEAD"
|
||||
}
|
||||
@@ -0,0 +1,41 @@
|
||||
{
|
||||
"The free, Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required.": "The free, Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required.",
|
||||
"Server URL": "服务器 URL",
|
||||
"Access Token": "Access Token",
|
||||
"LocalAI Instance URL": "LocalAI Instance URL",
|
||||
"LocalAI Access Token": "LocalAI Access Token",
|
||||
"Ask LocalAI": "Ask LocalAI",
|
||||
"Custom API Call": "自定义 API 呼叫",
|
||||
"Ask LocalAI anything you want!": "Ask LocalAI anything you want!",
|
||||
"Make a custom API call to a specific endpoint": "将一个自定义 API 调用到一个特定的终点",
|
||||
"Model": "Model",
|
||||
"Question": "Question",
|
||||
"Temperature": "Temperature",
|
||||
"Maximum Tokens": "Maximum Tokens",
|
||||
"Top P": "Top P",
|
||||
"Frequency penalty": "Frequency penalty",
|
||||
"Presence penalty": "Presence penalty",
|
||||
"Roles": "角色",
|
||||
"Method": "方法",
|
||||
"Headers": "信头",
|
||||
"Query Parameters": "查询参数",
|
||||
"Body": "正文内容",
|
||||
"Response is Binary ?": "Response is Binary ?",
|
||||
"No Error on Failure": "失败时没有错误",
|
||||
"Timeout (in seconds)": "超时(秒)",
|
||||
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.",
|
||||
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
|
||||
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)",
|
||||
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
|
||||
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
|
||||
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response",
|
||||
"Authorization headers are injected automatically from your connection.": "授权头自动从您的连接中注入。",
|
||||
"Enable for files like PDFs, images, etc..": "Enable for files like PDFs, images, etc..",
|
||||
"GET": "获取",
|
||||
"POST": "帖子",
|
||||
"PATCH": "PATCH",
|
||||
"PUT": "弹出",
|
||||
"DELETE": "删除",
|
||||
"HEAD": "黑色"
|
||||
}
|
||||
@@ -0,0 +1,47 @@
import { createCustomApiCallAction } from '@activepieces/pieces-common';
import {
  PieceAuth,
  Property,
  createPiece,
} from '@activepieces/pieces-framework';
import { PieceCategory } from '@activepieces/shared';
import { askLocalAI } from './lib/actions/send-prompt';

export const localaiAuth = PieceAuth.CustomAuth({
  props: {
    base_url: Property.ShortText({
      displayName: 'Server URL',
      description: 'LocalAI Instance URL',
      required: true,
    }),
    access_token: Property.ShortText({
      displayName: 'Access Token',
      description: 'LocalAI Access Token',
      required: false,
    }),
  },
  required: true,
});

export const localai = createPiece({
  displayName: 'LocalAI',
  description:
    'The free, Self-hosted, community-driven and local-first. Drop-in replacement for OpenAI running on consumer-grade hardware. No GPU required.',
  minimumSupportedRelease: '0.30.0',
  logoUrl: 'https://cdn.activepieces.com/pieces/localai.jpeg',
  categories: [PieceCategory.ARTIFICIAL_INTELLIGENCE],
  auth: localaiAuth,
  actions: [
    askLocalAI,
    createCustomApiCallAction({
      // The Server URL from the connection is used as the base for raw API calls.
      baseUrl: (auth) => (auth)?.props.base_url ?? '',
      auth: localaiAuth,
      authMapping: async (auth) => ({
        Authorization: `Bearer ${auth.props.access_token || ''}`,
      }),
    }),
  ],
  authors: ['hkboujrida', 'kishanprmr', 'MoShizzle', 'abuaboud'],
  triggers: [],
});
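The connection defined by localaiAuth carries just two values: the server URL and an optional access token. They reach LocalAI's OpenAI-compatible API as a request base and a Bearer header, which is what the custom API call's authMapping and the model dropdown in the action file below do. A minimal sketch follows, assuming Node 18+ global fetch and a Server URL that already points at the API root (the piece appends '/models' directly); the helper name is illustrative.

// Sketch only; mirrors how the piece uses base_url and access_token.
async function listLocalAIModels(baseUrl: string, accessToken?: string): Promise<string[]> {
  const res = await fetch(`${baseUrl}/models`, {
    // The optional access token becomes a Bearer header, as in authMapping above.
    headers: accessToken ? { Authorization: `Bearer ${accessToken}` } : {},
  });
  if (!res.ok) {
    throw new Error(`LocalAI returned ${res.status}`);
  }
  const body = (await res.json()) as { data: { id: string }[] };
  return body.data.map((model) => model.id);
}

// e.g. listLocalAIModels('http://localhost:8080/v1')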
@@ -0,0 +1,225 @@
import {
  createAction,
  Property,
} from '@activepieces/pieces-framework';
import OpenAI from 'openai';
import {
  AuthenticationType,
  httpClient,
  HttpMethod,
} from '@activepieces/pieces-common';
import { localaiAuth } from '../..';

const billingIssueMessage = `Error Occurred: 429 \n

1. Set LocalAI API Url. \n
2. Generate a new API key (optional). \n
3. Attempt the process again. \n

For guidance, visit: https://localai.io/`;

const unauthorizedMessage = `Error Occurred: 401 \n

Ensure that your API key is valid. \n
`;

export const askLocalAI = createAction({
  auth: localaiAuth,
  name: 'ask_localai',
  displayName: 'Ask LocalAI',
  description: 'Ask LocalAI anything you want!',
  props: {
    model: Property.Dropdown({
      auth: localaiAuth,
      displayName: 'Model',
      required: true,
      description:
        'The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.',
      refreshers: [],
      defaultValue: 'gpt-3.5-turbo',
      options: async ({ auth }) => {
        if (!auth) {
          return {
            disabled: true,
            placeholder: 'Enter your api key first',
            options: [],
          };
        }
        try {
          // Query the LocalAI instance for its installed models (OpenAI-compatible /models endpoint).
          const response = await httpClient.sendRequest<{
            data: { id: string }[];
          }>({
            url: (<any>auth).base_url + '/models',
            method: HttpMethod.GET,
            authentication: {
              type: AuthenticationType.BEARER_TOKEN,
              token: (<any>auth).access_token as string,
            },
          });
          return {
            disabled: false,
            options: response.body.data.map((model) => {
              return {
                label: model.id,
                value: model.id,
              };
            }),
          };
        } catch (error) {
          return {
            disabled: true,
            options: [],
            placeholder: "Couldn't Load Models",
          };
        }
      },
    }),
    prompt: Property.LongText({
      displayName: 'Question',
      required: true,
    }),
    temperature: Property.Number({
      displayName: 'Temperature',
      required: false,
      description:
        'Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.',
    }),
    maxTokens: Property.Number({
      displayName: 'Maximum Tokens',
      required: false,
      description:
        "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)",
    }),
    topP: Property.Number({
      displayName: 'Top P',
      required: false,
      description:
        'An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.',
    }),
    frequencyPenalty: Property.Number({
      displayName: 'Frequency penalty',
      required: false,
      description:
        "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
    }),
    presencePenalty: Property.Number({
      displayName: 'Presence penalty',
      required: false,
      description:
        "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
    }),
    roles: Property.Json({
      displayName: 'Roles',
      required: false,
      description: 'Array of roles to specify more accurate response',
      defaultValue: [
        { role: 'system', content: 'You are a helpful assistant.' },
      ],
    }),
  },
  async run({ auth, propsValue }) {
    // LocalAI exposes an OpenAI-compatible API, so the official client works against it.
    const openai = new OpenAI({
      baseURL: auth.props.base_url,
      apiKey: auth.props.access_token,
    });
    let billingIssue = false;
    let unauthorized = false;
    let model = 'gpt-3.5-turbo';
    if (propsValue.model) {
      model = propsValue.model;
    }
    let temperature = 0.9;
    if (propsValue.temperature) {
      temperature = Number(propsValue.temperature);
    }
    let maxTokens = 2048;
    if (propsValue.maxTokens) {
      maxTokens = Number(propsValue.maxTokens);
    }
    let topP = 1;
    if (propsValue.topP) {
      topP = Number(propsValue.topP);
    }
    let frequencyPenalty = 0.0;
    if (propsValue.frequencyPenalty) {
      frequencyPenalty = Number(propsValue.frequencyPenalty);
    }
    let presencePenalty = 0.6;
    if (propsValue.presencePenalty) {
      presencePenalty = Number(propsValue.presencePenalty);
    }

    // Validate the optional roles array before prepending it to the conversation.
    const rolesArray = propsValue.roles
      ? (propsValue.roles as unknown as any[])
      : [];
    const roles = rolesArray.map((item) => {
      const rolesEnum = ['system', 'user', 'assistant'];
      if (!rolesEnum.includes(item.role)) {
        throw new Error(
          'The only available roles are: [system, user, assistant]'
        );
      }

      return {
        role: item.role,
        content: item.content,
      };
    });

    // Retry rate-limited (429) requests with exponential backoff.
    const maxRetries = 4;
    let retries = 0;
    let response: string | undefined;
    while (retries < maxRetries) {
      try {
        response = (
          await openai.chat.completions.create({
            model: model,
            messages: [
              ...roles,
              {
                role: 'user',
                content: propsValue['prompt'],
              },
            ],
            temperature: temperature,
            max_tokens: maxTokens,
            top_p: topP,
            frequency_penalty: frequencyPenalty,
            presence_penalty: presencePenalty,
          })
        )?.choices[0]?.message?.content?.trim();
        billingIssue = false;
        break; // Break out of the loop if the request is successful
      } catch (error: any) {
        if (error?.message?.includes('code 429')) {
          billingIssue = true;
          if (retries + 1 === maxRetries) {
            // Out of retries; surface the friendly 429 message below.
            break;
          }
          // Calculate the time delay for the next retry using exponential backoff
          const delay = Math.pow(6, retries) * 1000;
          console.log(`Retrying in ${delay} milliseconds...`);
          await sleep(delay); // Wait for the calculated delay
          retries++;
        } else {
          if (error?.message?.includes('code 401')) {
            unauthorized = true;
            // Surface the friendly 401 message below instead of the raw error.
            break;
          }
          throw error;
        }
      }
    }
    if (billingIssue) {
      throw new Error(billingIssueMessage);
    }
    if (unauthorized) {
      throw new Error(unauthorizedMessage);
    }
    return response;
  },
});

function sleep(ms: number) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}
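For reference, the optional Roles input above accepts an array of role/content objects restricted to system, user, and assistant, and the Question is always appended as the final user message. An illustrative input follows; the content strings are made up.

// Example value for the "Roles" property; content strings are illustrative.
const roles = [
  { role: 'system', content: 'You are a helpful assistant.' },
  { role: 'assistant', content: 'How can I help you today?' },
];

// The action then sends the roles plus the Question as the final user message:
const messages = [
  ...roles,
  { role: 'user', content: 'Summarize the differences between the PUT and PATCH methods.' },
];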