Add Activepieces integration for workflow automation

- Add Activepieces fork with SmoothSchedule custom piece
- Create integrations app with Activepieces service layer
- Add embed token endpoint for iframe integration
- Create Automations page with embedded workflow builder
- Add sidebar visibility fix for embed mode
- Add list inactive customers endpoint to Public API
- Include SmoothSchedule triggers: event created/updated/cancelled
- Include SmoothSchedule actions: create/update/cancel events, list resources/services/customers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
poduck
2025-12-18 22:59:37 -05:00
parent 9848268d34
commit 3aa7199503
16292 changed files with 1284892 additions and 4708 deletions


@@ -0,0 +1,32 @@
{
"OpenRouter": "OpenRouter",
"Use any AI model to generate code, text, or images via OpenRouter.ai.": "Use any AI model to generate code, text, or images via OpenRouter.ai.",
"\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n": "\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n",
"Ask LLM": "Ask LLM",
"Custom API Call": "Custom API Call",
"Ask any model supported by Open Router.": "Ask any model supported by Open Router.",
"Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint",
"Model": "Model",
"Prompt": "Prompt",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Method": "Method",
"Headers": "Headers",
"Query Parameters": "Query Parameters",
"Body": "Body",
"No Error on Failure": "No Error on Failure",
"Timeout (in seconds)": "Timeout (in seconds)",
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.",
"The prompt to send to the model.": "The prompt to send to the model.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
"Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}


@@ -0,0 +1,33 @@
{
"Use any AI model to generate code, text, or images via OpenRouter.ai.": "Verwenden Sie jedes KI-Modell um Code, Text oder Bilder über OpenRouter.ai zu generieren.",
"\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n": "\nFolgen Sie diesen Anweisungen, um Ihren OpenAI-API-Schlüssel zu erhalten:\n\n1. Besuchen Sie die folgende Website: https://openrouter.ai/keys.\n2. Sobald Sie auf der Webseite sind, klicken Sie auf einen Schlüssel erstellen.\n3. Sobald Sie einen Schlüssel erstellt haben, kopieren Sie ihn und verwenden Sie ihn für das Api-Schlüsselfeld auf der Seite.\n",
"Ask LLM": "LLM fragen",
"Custom API Call": "Eigener API-Aufruf",
"Ask any model supported by Open Router.": "Fragen Sie jedes Modell, das von Open Router unterstützt wird.",
"Make a custom API call to a specific endpoint": "Einen benutzerdefinierten API-Aufruf an einen bestimmten Endpunkt machen",
"Model": "Modell",
"Prompt": "Prompt",
"Temperature": "Temperatur",
"Maximum Tokens": "Maximale Token",
"Top P": "Top P",
"Method": "Methode",
"Headers": "Kopfzeilen",
"Query Parameters": "Abfrageparameter",
"Body": "Körper",
"Response is Binary ?": "Antwort ist binär?",
"No Error on Failure": "Kein Fehler bei Fehlschlag",
"Timeout (in seconds)": "Timeout (in Sekunden)",
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "Das Modell, das die Vervollständigung generiert. Einige Modelle eignen sich für Aufgaben der natürlichen Sprache, andere sind auf Code spezialisiert.",
"The prompt to send to the model.": "Die Eingabeaufforderung, die an das Modell gesendet wird.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Steuert die Zufälligkeit: Niedrigere Werte führen zu weniger zufälligen Vervollständigungen. Je näher die Temperatur an Null rückt, desto deterministischer und repetitiver wird das Modell.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "Die maximale Anzahl der zu generierenden Token. Anfragen können bis zu 2.048 oder 4.096 Token verwenden, die zwischen Prompt und Fertigstellung geteilt werden, setzen Sie den Wert nicht auf maximal und lassen Sie einige Token für die Eingabe. Das genaue Limit variiert je nach Modell. (Ein Token ist ungefähr 4 Zeichen für den normalen englischen Text)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Eine Alternative zur Probenahme mit Temperatur, genannt Nucleus Probenahme, bei der das Modell die Ergebnisse der Tokens mit der Top_p Wahrscheinlichkeitsmasse berücksichtigt. 0,1 bedeutet also nur die Token, die die obersten 10% Wahrscheinlichkeitsmasse ausmachen.",
"Authorization headers are injected automatically from your connection.": "Autorisierungs-Header werden automatisch von Ihrer Verbindung injiziert.",
"Enable for files like PDFs, images, etc..": "Aktivieren für Dateien wie PDFs, Bilder, etc..",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}


@@ -0,0 +1,33 @@
{
"Use any AI model to generate code, text, or images via OpenRouter.ai.": "Utilice cualquier modelo de IA para generar código, texto o imágenes a través de OpenRouter.ai.",
"\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n": "\nSigue estas instrucciones para obtener tu clave API de OpenAI:\n\n1. Visita el siguiente sitio web: https://openrouter.ai/keys.\n2. Una vez en el sitio web, haga clic en crear una clave.\n3. Una vez que hayas creado una clave, cópiala y úsalo para el campo de la clave Api en el sitio.\n",
"Ask LLM": "Preguntar LLM",
"Custom API Call": "Llamada API personalizada",
"Ask any model supported by Open Router.": "Pregunte cualquier modelo soportado por Open Router.",
"Make a custom API call to a specific endpoint": "Hacer una llamada API personalizada a un extremo específico",
"Model": "Modelo",
"Prompt": "Petición",
"Temperature": "Temperatura",
"Maximum Tokens": "Tokens máximos",
"Top P": "Top P",
"Method": "Método",
"Headers": "Encabezados",
"Query Parameters": "Parámetros de consulta",
"Body": "Cuerpo",
"Response is Binary ?": "¿Respuesta es binaria?",
"No Error on Failure": "Sin error en caso de fallo",
"Timeout (in seconds)": "Tiempo de espera (en segundos)",
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "El modelo que generará la terminación. Algunos modelos son adecuados para tareas de lenguaje natural, otros se especializan en código.",
"The prompt to send to the model.": "El prompt para enviar al modelo.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controla la aleatoriedad: valores más bajos producen terminaciones menos aleatorias. A medida que la temperatura se acerca a cero, el modelo se volverá determinista y repetitivo.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "El número máximo de tokens a generar. Las solicitudes pueden usar hasta 2,048 o 4,096 tokens compartidos entre prompt y completation, no establezca el valor máximo y deje algunas fichas para la entrada. El límite exacto varía según el modelo. (Un token es de aproximadamente 4 caracteres para el texto normal en inglés)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Una alternativa al muestreo con temperatura, llamado muestreo de núcleos, donde el modelo considera los resultados de los tokens con masa de probabilidad superior_p. Por lo tanto, 0,1 significa que sólo se consideran las fichas que componen la masa superior del 10% de probabilidad.",
"Authorization headers are injected automatically from your connection.": "Las cabeceras de autorización se inyectan automáticamente desde tu conexión.",
"Enable for files like PDFs, images, etc..": "Activar para archivos como PDFs, imágenes, etc.",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}


@@ -0,0 +1,33 @@
{
"Use any AI model to generate code, text, or images via OpenRouter.ai.": "Utilisez n'importe quel modèle IA pour générer du code, du texte ou des images via OpenRouter.ai.",
"\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n": "\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n",
"Ask LLM": "Demander à LLM",
"Custom API Call": "Appel API personnalisé",
"Ask any model supported by Open Router.": "Demandez à n'importe quel modèle pris en charge par Open Router.",
"Make a custom API call to a specific endpoint": "Passez un appel API personnalisé à un point de terminaison spécifique",
"Model": "Modèle",
"Prompt": "Prompt",
"Temperature": "Température",
"Maximum Tokens": "Maximum de jetons",
"Top P": "Top P",
"Method": "Méthode",
"Headers": "En-têtes",
"Query Parameters": "Paramètres de requête",
"Body": "Corps",
"Response is Binary ?": "La réponse est Binaire ?",
"No Error on Failure": "Aucune erreur en cas d'échec",
"Timeout (in seconds)": "Délai d'attente (en secondes)",
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "Le modèle qui va générer la complétion. Certains modèles sont adaptés aux tâches de langage naturel, d'autres se spécialisent dans le code.",
"The prompt to send to the model.": "L'invite à envoyer au modèle.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Contrôle le caractère aléatoire : des valeurs plus basses produisent des complétions moins aléatoires. À mesure que la température approche de zéro, le modèle devient déterministe et répétitif.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "Le nombre maximum de jetons à générer. Les requêtes peuvent utiliser jusqu'à 2 048 ou 4 096 jetons partagés entre l'invite et la complétion, ne pas définir la valeur au maximum et laisser des jetons pour l'entrée. La limite exacte varie selon le modèle. (un jeton est d'environ 4 caractères pour le texte anglais normal)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Une alternative à l'échantillonnage à la température, appelée l'échantillonnage du noyau, où le modèle considère les résultats des jetons avec la masse de probabilité top_p. Ainsi, 0,1 signifie que seuls les jetons constituant la masse de probabilité la plus élevée de 10% sont pris en compte.",
"Authorization headers are injected automatically from your connection.": "Les en-têtes d'autorisation sont injectés automatiquement à partir de votre connexion.",
"Enable for files like PDFs, images, etc..": "Activer pour les fichiers comme les PDFs, les images, etc.",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}


@@ -0,0 +1,32 @@
{
"OpenRouter": "OpenRouter",
"Use any AI model to generate code, text, or images via OpenRouter.ai.": "Use any AI model to generate code, text, or images via OpenRouter.ai.",
"\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n": "\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n",
"Ask LLM": "Ask LLM",
"Custom API Call": "Custom API Call",
"Ask any model supported by Open Router.": "Ask any model supported by Open Router.",
"Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint",
"Model": "Model",
"Prompt": "Prompt",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Method": "Method",
"Headers": "Headers",
"Query Parameters": "Query Parameters",
"Body": "Body",
"No Error on Failure": "No Error on Failure",
"Timeout (in seconds)": "Timeout (in seconds)",
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.",
"The prompt to send to the model.": "The prompt to send to the model.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
"Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}


@@ -0,0 +1,32 @@
{
"OpenRouter": "OpenRouter",
"Use any AI model to generate code, text, or images via OpenRouter.ai.": "Use any AI model to generate code, text, or images via OpenRouter.ai.",
"\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n": "\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n",
"Ask LLM": "Ask LLM",
"Custom API Call": "Custom API Call",
"Ask any model supported by Open Router.": "Ask any model supported by Open Router.",
"Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint",
"Model": "Model",
"Prompt": "Prompt",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Method": "Method",
"Headers": "Headers",
"Query Parameters": "Query Parameters",
"Body": "Body",
"No Error on Failure": "No Error on Failure",
"Timeout (in seconds)": "Timeout (in seconds)",
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.",
"The prompt to send to the model.": "The prompt to send to the model.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
"Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}


@@ -0,0 +1,33 @@
{
"Use any AI model to generate code, text, or images via OpenRouter.ai.": "任意のAIモデルを使用して、OpenRouter.aiを介してコード、テキスト、または画像を生成します。",
"\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n": "\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n",
"Ask LLM": "LLMに聞く",
"Custom API Call": "カスタムAPI呼び出し",
"Ask any model supported by Open Router.": "Open Routerでサポートされているモデルに問い合わせてください。",
"Make a custom API call to a specific endpoint": "特定のエンドポイントへのカスタム API コールを実行します。",
"Model": "モデル",
"Prompt": "Prompt",
"Temperature": "温度",
"Maximum Tokens": "最大トークン",
"Top P": "トップ P",
"Method": "方法",
"Headers": "ヘッダー",
"Query Parameters": "クエリパラメータ",
"Body": "本文",
"Response is Binary ?": "応答はバイナリですか?",
"No Error on Failure": "失敗時にエラーはありません",
"Timeout (in seconds)": "タイムアウト(秒)",
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "補完を生成するモデル。自然言語タスクに適しているモデルもあれば、コードに特化したモデルもあります。",
"The prompt to send to the model.": "モデルに送信するプロンプト。",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "ランダム性を制御します。値を下げると、ランダム性の低い補完が生成されます。温度がゼロに近づくにつれて、モデルは決定論的で反復的になります。",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "生成するトークンの最大数。リクエストは、プロンプトと補完の間で共有される最大2,048または4,096トークンを使用できます。値を最大値に設定せず、入力用にいくつかのトークンを残してください。正確な制限はモデルによって異なります。(通常の英文では1つのトークンが約4文字です)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "核サンプリングと呼ばれる温度でのサンプリングの代わりに、モデルはtop_p確率質量を持つトークンの結果を考慮します。 つまり、0.1は上位10%の確率質量からなるトークンのみを考慮することになります。",
"Authorization headers are injected automatically from your connection.": "認証ヘッダは接続から自動的に注入されます。",
"Enable for files like PDFs, images, etc..": "PDF、画像などのファイルを有効にします。",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}


@@ -0,0 +1,33 @@
{
"Use any AI model to generate code, text, or images via OpenRouter.ai.": "Gebruik een AI-model om code, tekst of afbeeldingen te genereren via OpenRouter.ai.",
"\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n": "\nVolg deze instructies om uw OpenAI API Key te verkrijgen:\n\n1. Bezoek de volgende website: https://openrouter.ai/keys.\n2. Eenmaal op de website, klik op een sleutel maken.\n3. Zodra u een sleutel heeft gemaakt, kopieer deze en gebruik deze voor het veld Api sleutel op de site.\n",
"Ask LLM": "Vraag LLM",
"Custom API Call": "Custom API Call",
"Ask any model supported by Open Router.": "Vraag elk model dat wordt ondersteund door Open Router.",
"Make a custom API call to a specific endpoint": "Maak een aangepaste API call naar een specifiek eindpunt",
"Model": "Model",
"Prompt": "Prompt",
"Temperature": "Temperatuur",
"Maximum Tokens": "Maximaal aantal tokens",
"Top P": "Top P",
"Method": "Methode",
"Headers": "Kopteksten",
"Query Parameters": "Query parameters",
"Body": "Lichaam",
"Response is Binary ?": "Antwoord is binair?",
"No Error on Failure": "Geen fout bij mislukking",
"Timeout (in seconds)": "Time-out (in seconden)",
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "Het model dat de voltooiing zal genereren. Sommige modellen zijn geschikt voor natuurlijke taaltaken, andere zijn gespecialiseerd in code.",
"The prompt to send to the model.": "De prompt om naar het model te sturen.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Bestuurt willekeurigheid: Het verlagen van de temperatuur resulteert in minder willekeurige aanvullingen. Zodra de temperatuur nul nadert, zal het model deterministisch en herhalend worden.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "Het maximum aantal te genereren tokens. Verzoeken kunnen tot 2.048 of 4.096 tokens gebruiken, gedeeld tussen prompt en voltooiing; stel de waarde niet in op het maximum en laat enkele tokens over voor de invoer. De exacte limiet varieert per model. (Eén token is grofweg 4 tekens voor normale Engelse tekst)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Een alternatief voor bemonstering met de temperatuur, genaamd nucleus sampling, waarbij het model de resultaten van de tokens met top_p waarschijnlijkheid ziet. 0.1 betekent dus dat alleen de tokens die de grootste massa van 10 procent vormen, worden overwogen.",
"Authorization headers are injected automatically from your connection.": "Autorisatie headers worden automatisch geïnjecteerd vanuit uw verbinding.",
"Enable for files like PDFs, images, etc..": "Inschakelen voor bestanden zoals PDF's, afbeeldingen etc..",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}


@@ -0,0 +1,33 @@
{
"Use any AI model to generate code, text, or images via OpenRouter.ai.": "Use qualquer modelo de IA para gerar código, texto ou imagens via OpenRouter.ai.",
"\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n": "\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n",
"Ask LLM": "Perguntar LLM",
"Custom API Call": "Chamada de API personalizada",
"Ask any model supported by Open Router.": "Pergunte qualquer modelo suportado pelo Open Router.",
"Make a custom API call to a specific endpoint": "Faça uma chamada de API personalizada para um ponto de extremidade específico",
"Model": "Modelo",
"Prompt": "Prompt",
"Temperature": "Temperatura",
"Maximum Tokens": "Máximo de Tokens",
"Top P": "Top P",
"Method": "Método",
"Headers": "Cabeçalhos",
"Query Parameters": "Parâmetros da consulta",
"Body": "Conteúdo",
"Response is Binary ?": "A resposta é binária ?",
"No Error on Failure": "Nenhum erro em caso de falha",
"Timeout (in seconds)": "Tempo limite (em segundos)",
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "O modelo que irá gerar a conclusão. Alguns modelos são adequados para tarefas de linguagem natural, outros são especializados no código.",
"The prompt to send to the model.": "O prompt para enviar para o modelo.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controla aleatoriedade: Diminuir resulta em menos complementos aleatórios. À medida que a temperatura se aproxima de zero, o modelo se tornará determinístico e repetitivo.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "O número máximo de tokens a gerar. Solicitações podem usar até 2.048 ou 4.096 tokens compartilhados entre prompt e conclusão, não defina o valor como máximo e deixe algumas fichas para a entrada. O limite exato varia por modelo. (Um token é aproximadamente 4 caracteres para o texto normal em inglês)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Uma alternativa à amostragem com temperatura, chamada amostragem núcleo, onde o modelo considera os resultados dos tokens com massa de probabilidade superior (P). Portanto, 0,1 significa que apenas os tokens que incluem a massa de probabilidade superior de 10% são considerados.",
"Authorization headers are injected automatically from your connection.": "Os cabeçalhos de autorização são inseridos automaticamente a partir da sua conexão.",
"Enable for files like PDFs, images, etc..": "Habilitar para arquivos como PDFs, imagens, etc..",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}


@@ -0,0 +1,32 @@
{
"OpenRouter": "OpenRouter",
"Use any AI model to generate code, text, or images via OpenRouter.ai.": "Используйте любую модель ИИ для генерации кода, текста или изображений через OpenRouter.ai.",
"\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n": "\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n",
"Ask LLM": "Спросить LLM",
"Custom API Call": "Пользовательский вызов API",
"Ask any model supported by Open Router.": "Спросите любую модель, поддерживаемую Open Router.",
"Make a custom API call to a specific endpoint": "Сделать пользовательский API вызов к определенной конечной точке",
"Model": "Модель",
"Prompt": "Prompt",
"Temperature": "Температура",
"Maximum Tokens": "Максимум жетонов",
"Top P": "Верхний П",
"Method": "Метод",
"Headers": "Заголовки",
"Query Parameters": "Параметры запроса",
"Body": "Тело",
"No Error on Failure": "Нет ошибок при ошибке",
"Timeout (in seconds)": "Таймаут (в секундах)",
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "Модель, которая будет генерировать дополнение, некоторые модели подходят для естественных языковых задач, другие специализируются на программировании.",
"The prompt to send to the model.": "Запрос на отправку модели.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Контролирует случайность: понижение результатов в менее случайном завершении. По мере нулевого температурного приближения модель становится детерминированной и повторяющей.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "Максимальное количество фишек для генерации Запросы могут использовать до 2 048 или 4 096 токенов, которыми обменивались между оперативными и завершенными действиями, не устанавливайте максимальное значение и оставляйте некоторые токены для ввода. In the twentieth century, Russian is widely taught in the schools of the members of the old Warsaw Pact and in other countries of the former Soviet Union.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Альтернатива отоплению с температурой, называемой ядерным отбором, где модель рассматривает результаты жетонов с вероятностью top_p. Таким образом, 0.1 означает, что учитываются только жетоны, состоящие из массы 10% наивысшего уровня.",
"Authorization headers are injected automatically from your connection.": "Заголовки авторизации включаются автоматически из вашего соединения.",
"GET": "ПОЛУЧИТЬ",
"POST": "ПОСТ",
"PATCH": "ПАТЧ",
"PUT": "ПОКУПИТЬ",
"DELETE": "УДАЛИТЬ",
"HEAD": "HEAD"
}

View File

@@ -0,0 +1,33 @@
{
"Use any AI model to generate code, text, or images via OpenRouter.ai.": "Use any AI model to generate code, text, or images via OpenRouter.ai.",
"\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n": "\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n",
"Ask LLM": "Ask LLM",
"Custom API Call": "Custom API Call",
"Ask any model supported by Open Router.": "Ask any model supported by Open Router.",
"Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint",
"Model": "Model",
"Prompt": "Prompt",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Method": "Method",
"Headers": "Headers",
"Query Parameters": "Query Parameters",
"Body": "Body",
"Response is Binary ?": "Response is Binary ?",
"No Error on Failure": "No Error on Failure",
"Timeout (in seconds)": "Timeout (in seconds)",
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.",
"The prompt to send to the model.": "The prompt to send to the model.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
"Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.",
"Enable for files like PDFs, images, etc..": "Enable for files like PDFs, images, etc..",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}

View File

@@ -0,0 +1,32 @@
{
"OpenRouter": "OpenRouter",
"Use any AI model to generate code, text, or images via OpenRouter.ai.": "Use any AI model to generate code, text, or images via OpenRouter.ai.",
"\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n": "\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n",
"Ask LLM": "Ask LLM",
"Custom API Call": "Custom API Call",
"Ask any model supported by Open Router.": "Ask any model supported by Open Router.",
"Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint",
"Model": "Model",
"Prompt": "Prompt",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Method": "Method",
"Headers": "Headers",
"Query Parameters": "Query Parameters",
"Body": "Body",
"No Error on Failure": "No Error on Failure",
"Timeout (in seconds)": "Timeout (in seconds)",
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.",
"The prompt to send to the model.": "The prompt to send to the model.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
"Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}

View File

@@ -0,0 +1,33 @@
{
"Use any AI model to generate code, text, or images via OpenRouter.ai.": "Use any AI model to generate code, text, or images via OpenRouter.ai.",
"\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n": "\nFollow these instructions to get your OpenAI API Key:\n\n1. Visit the following website: https://openrouter.ai/keys.\n2. Once on the website, click on create a key.\n3. Once you have created a key, copy it and use it for the Api key field on the site.\n",
"Ask LLM": "Ask LLM",
"Custom API Call": "自定义 API 呼叫",
"Ask any model supported by Open Router.": "Ask any model supported by Open Router.",
"Make a custom API call to a specific endpoint": "将一个自定义 API 调用到一个特定的终点",
"Model": "Model",
"Prompt": "Prompt",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Method": "方法",
"Headers": "信头",
"Query Parameters": "查询参数",
"Body": "正文内容",
"Response is Binary ?": "Response is Binary ?",
"No Error on Failure": "失败时没有错误",
"Timeout (in seconds)": "超时(秒)",
"The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.": "The model which will generate the completion. Some models are suitable for natural language tasks, others specialize in code.",
"The prompt to send to the model.": "The prompt to send to the model.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
"The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)": "The maximum number of tokens to generate. Requests can use up to 2,048 or 4,096 tokens shared between prompt and completion, don't set the value to maximum and leave some tokens for the input. The exact limit varies by model. (One token is roughly 4 characters for normal English text)",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
"Authorization headers are injected automatically from your connection.": "授权头自动从您的连接中注入。",
"Enable for files like PDFs, images, etc..": "Enable for files like PDFs, images, etc..",
"GET": "获取",
"POST": "帖子",
"PATCH": "PATCH",
"PUT": "弹出",
"DELETE": "删除",
"HEAD": "黑色"
}