Add Activepieces integration for workflow automation
- Add Activepieces fork with SmoothSchedule custom piece
- Create integrations app with Activepieces service layer
- Add embed token endpoint for iframe integration
- Create Automations page with embedded workflow builder
- Add sidebar visibility fix for embed mode
- Add list inactive customers endpoint to Public API
- Include SmoothSchedule triggers: event created/updated/cancelled
- Include SmoothSchedule actions: create/update/cancel events, list resources/services/customers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
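The embed token flow named in the message (backend mints a short-lived token, the Automations page hands it to the iframed Activepieces builder) can be sketched as below. This is a minimal illustration only: the endpoint name, token fields, and URL scheme are all hypothetical stand-ins, not the real integrations-app API.

```python
# Hypothetical sketch of the embed-token flow; all names are illustrative.
import time
from urllib.parse import urlencode


def make_embed_claims(user_id: str, project_id: str, ttl_seconds: int = 3600) -> dict:
    """Builds the claims a real embed-token endpoint would sign (e.g. as a JWT)."""
    now = int(time.time())
    return {
        "sub": user_id,          # which SmoothSchedule user is embedding
        "project": project_id,   # which Activepieces project the iframe may open
        "iat": now,
        "exp": now + ttl_seconds,  # short-lived: the iframe re-requests on expiry
    }


def embed_iframe_src(builder_base_url: str, signed_token: str) -> str:
    """URL the Automations page would set as the iframe's src attribute."""
    return f"{builder_base_url}/embed?{urlencode({'token': signed_token})}"
```

The Automations page would call the backend for a signed token on load, then render `<iframe src={embed_iframe_src(...)}>`; keeping the TTL short limits what a leaked URL can do.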
@@ -0,0 +1,60 @@
{
"Groq": "Groq",
"Use Groq's fast language models and audio processing capabilities.": "Use Groq's fast language models and audio processing capabilities.",
"Enter your Groq API Key": "Enter your Groq API Key",
"Ask AI": "Ask AI",
"Transcribe Audio": "Transcribe Audio",
"Translate Audio": "Translate Audio",
"Custom API Call": "Custom API Call",
"Ask Groq anything using fast language models.": "Ask Groq anything using fast language models.",
"Transcribes audio into text in the input language.": "Transcribes audio into text in the input language.",
"Translates audio into English text.": "Translates audio into English text.",
"Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint",
"Model": "Model",
"Question": "Question",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Frequency penalty": "Frequency penalty",
"Presence penalty": "Presence penalty",
"Memory Key": "Memory Key",
"Roles": "Roles",
"Audio File": "Audio File",
"Language": "Language",
"Prompt": "Prompt",
"Response Format": "Response Format",
"Method": "Method",
"Headers": "Headers",
"Query Parameters": "Query Parameters",
"Body": "Body",
"No Error on Failure": "No Error on Failure",
"Timeout (in seconds)": "Timeout (in seconds)",
"The model which will generate the completion.": "The model which will generate the completion.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
"The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.": "The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.",
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response",
"The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for transcription.": "The model to use for transcription.",
"The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.": "The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.",
"An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.": "An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.",
"The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.": "The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.",
"The format of the transcript output.": "The format of the transcript output.",
"The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for translation.": "The model to use for translation.",
"An optional text in English to guide the model's style or continue a previous audio segment.": "An optional text in English to guide the model's style or continue a previous audio segment.",
"The format of the translation output.": "The format of the translation output.",
"Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.",
"JSON": "JSON",
"Text": "Text",
"Verbose JSON": "Verbose JSON",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}
@@ -0,0 +1,61 @@
{
"Use Groq's fast language models and audio processing capabilities.": "Verwenden Sie Groqs schnelle Sprachmodelle und Audio-Verarbeitungsmöglichkeiten.",
"Enter your Groq API Key": "Geben Sie Ihren Groq API Key ein",
"Ask AI": "KI fragen",
"Transcribe Audio": "Audio transkribieren",
"Translate Audio": "Audio übersetzen",
"Custom API Call": "Eigener API-Aufruf",
"Ask Groq anything using fast language models.": "Fragen Sie Groq alles mit schnellen Sprachmodellen.",
"Transcribes audio into text in the input language.": "Transkribiert Audio in Text in der Eingabesprache.",
"Translates audio into English text.": "Übersetzt Audio in englischen Text.",
"Make a custom API call to a specific endpoint": "Einen benutzerdefinierten API-Aufruf an einen bestimmten Endpunkt machen",
"Model": "Modell",
"Question": "Frage",
"Temperature": "Temperatur",
"Maximum Tokens": "Maximale Token",
"Top P": "Top P",
"Frequency penalty": "Frequenzstrafe",
"Presence penalty": "Präsenzstrafe",
"Memory Key": "Speicherschlüssel",
"Roles": "Rollen",
"Audio File": "Audiodatei",
"Language": "Sprache",
"Prompt": "Prompt",
"Response Format": "Antwortformat",
"Method": "Methode",
"Headers": "Kopfzeilen",
"Query Parameters": "Abfrageparameter",
"Body": "Körper",
"Response is Binary ?": "Antwort ist binär?",
"No Error on Failure": "Kein Fehler bei Fehlschlag",
"Timeout (in seconds)": "Timeout (in Sekunden)",
"The model which will generate the completion.": "Das Modell, das die Vervollständigung generiert.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Steuert die Zufälligkeit: Niedrigere Werte führen zu weniger zufälligen Vervollständigungen. Je näher die Temperatur an Null rückt, desto deterministischer und repetitiver wird das Modell.",
"The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.": "Die maximale Anzahl der zu generierenden Token. Die Gesamtlänge der Eingabe-Token und generierten Token wird durch die Kontextlänge des Modells begrenzt.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Eine Alternative zum Sampling mit Temperatur, genannt Nucleus Sampling, bei der das Modell die Token mit der top_p-Wahrscheinlichkeitsmasse berücksichtigt. 0,1 bedeutet also, dass nur die Token betrachtet werden, die die obersten 10 % der Wahrscheinlichkeitsmasse ausmachen.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Zahl zwischen -2.0 und 2.0. Positive Werte bestrafen neue Token aufgrund ihrer bisherigen Häufigkeit im Text, wodurch sich die Wahrscheinlichkeit verringert, dass das Modell dieselbe Zeile wörtlich wiederholt.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.": "Zahl zwischen -2.0 und 2.0. Positive Werte bestrafen neue Token, je nachdem, ob sie bisher im Text erscheinen, was die Wahrscheinlichkeit erhöht, dass das Modell über neue Themen spricht.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.": "Ein Speicherschlüssel, der den Chatverlauf über Läufe und Flows hinweg teilt. Leer lassen, damit Groq keine früheren Nachrichten speichert.",
"Array of roles to specify more accurate response": "Liste von Rollen, um eine genauere Antwort festzulegen",
"The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "Die Audiodatei zum Transkribieren. Unterstützte Formate: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for transcription.": "Das zu verwendende Modell für die Transkription.",
"The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.": "Die Sprache des Eingabe-Audios im ISO-639-1-Format (z. B. \"en\" für Englisch). Dies verbessert Genauigkeit und Latenz.",
"An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.": "Ein optionaler Text, der den Stil des Modells anleitet oder ein vorheriges Audio-Segment fortsetzt. Der Prompt sollte mit der Audiosprache übereinstimmen.",
"The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.": "Die Sampling-Temperatur zwischen 0 und 1. Höhere Werte wie 0,8 machen die Ausgabe zufälliger, während niedrigere Werte wie 0,2 sie fokussierter und deterministischer machen.",
"The format of the transcript output.": "Das Format der Transkript-Ausgabe.",
"The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "Die zu übersetzende Audiodatei. Unterstützte Formate: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for translation.": "Das zu verwendende Modell für die Übersetzung.",
"An optional text in English to guide the model's style or continue a previous audio segment.": "Ein optionaler Text in englischer Sprache, der den Stil des Modells steuert oder ein vorheriges Audio-Segment fortsetzt.",
"The format of the translation output.": "Das Format der Übersetzungsausgabe.",
"Authorization headers are injected automatically from your connection.": "Autorisierungs-Header werden automatisch aus Ihrer Verbindung eingefügt.",
"Enable for files like PDFs, images, etc..": "Aktivieren für Dateien wie PDFs, Bilder usw.",
"JSON": "JSON",
"Text": "Text",
"Verbose JSON": "Verbose JSON",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}
@@ -0,0 +1,61 @@
{
"Use Groq's fast language models and audio processing capabilities.": "Utilice los modelos de lenguaje rápidos de Groq y sus capacidades de procesamiento de audio.",
"Enter your Groq API Key": "Introduzca su clave API de Groq",
"Ask AI": "Preguntar a la IA",
"Transcribe Audio": "Transcribir audio",
"Translate Audio": "Traducir audio",
"Custom API Call": "Llamada API personalizada",
"Ask Groq anything using fast language models.": "Pregunte a Groq cualquier cosa usando modelos de lenguaje rápidos.",
"Transcribes audio into text in the input language.": "Transcribe el audio a texto en el idioma de entrada.",
"Translates audio into English text.": "Traduce el audio a texto en inglés.",
"Make a custom API call to a specific endpoint": "Hacer una llamada API personalizada a un endpoint específico",
"Model": "Modelo",
"Question": "Pregunta",
"Temperature": "Temperatura",
"Maximum Tokens": "Tokens máximos",
"Top P": "Top P",
"Frequency penalty": "Penalización de frecuencia",
"Presence penalty": "Penalización de presencia",
"Memory Key": "Clave de memoria",
"Roles": "Roles",
"Audio File": "Archivo de audio",
"Language": "Idioma",
"Prompt": "Petición",
"Response Format": "Formato de respuesta",
"Method": "Método",
"Headers": "Encabezados",
"Query Parameters": "Parámetros de consulta",
"Body": "Cuerpo",
"Response is Binary ?": "¿La respuesta es binaria?",
"No Error on Failure": "Sin error en caso de fallo",
"Timeout (in seconds)": "Tiempo de espera (en segundos)",
"The model which will generate the completion.": "El modelo que generará la terminación.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controla la aleatoriedad: reducirla produce terminaciones menos aleatorias. A medida que la temperatura se acerca a cero, el modelo se volverá determinista y repetitivo.",
"The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.": "El número máximo de tokens a generar. La longitud total de los tokens de entrada y los tokens generados está limitada por la longitud del contexto del modelo.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Una alternativa al muestreo con temperatura, llamada muestreo de núcleo, donde el modelo considera los tokens con masa de probabilidad top_p. Por lo tanto, 0,1 significa que solo se consideran los tokens que componen el 10 % superior de la masa de probabilidad.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Número entre -2.0 y 2.0. Los valores positivos penalizan nuevos tokens según su frecuencia existente en el texto hasta ahora, lo que reduce la probabilidad de que el modelo repita la misma línea literalmente.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.": "Número entre -2.0 y 2.0. Los valores positivos penalizan los nuevos tokens según si aparecen en el texto hasta ahora, lo que aumenta la probabilidad de que el modelo hable de nuevos temas.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.": "Una clave de memoria que mantendrá el historial de chat compartido entre ejecuciones y flujos. Manténgala vacía para dejar Groq sin memoria de mensajes anteriores.",
"Array of roles to specify more accurate response": "Matriz de roles para especificar una respuesta más precisa",
"The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "El archivo de audio a transcribir. Formatos soportados: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for transcription.": "El modelo a utilizar para la transcripción.",
"The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.": "El idioma del audio de entrada en formato ISO-639-1 (p. ej., \"en\" para inglés). Esto mejorará la precisión y la latencia.",
"An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.": "Un texto opcional para guiar el estilo del modelo o continuar con un segmento de audio anterior. El indicador debe coincidir con el idioma del audio.",
"The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.": "La temperatura de muestreo, entre 0 y 1. Valores más altos como 0,8 harán que la salida sea más aleatoria, mientras que valores más bajos como 0,2 la harán más centrada y determinista.",
"The format of the transcript output.": "El formato de la salida de transcripción.",
"The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "El archivo de audio a traducir. Formatos soportados: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for translation.": "El modelo a utilizar para la traducción.",
"An optional text in English to guide the model's style or continue a previous audio segment.": "Un texto opcional en inglés para guiar el estilo del modelo o continuar con un segmento de audio anterior.",
"The format of the translation output.": "El formato de la salida de traducción.",
"Authorization headers are injected automatically from your connection.": "Las cabeceras de autorización se inyectan automáticamente desde tu conexión.",
"Enable for files like PDFs, images, etc..": "Activar para archivos como PDFs, imágenes, etc.",
"JSON": "JSON",
"Text": "Texto",
"Verbose JSON": "Verbose JSON",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}
@@ -0,0 +1,61 @@
{
"Use Groq's fast language models and audio processing capabilities.": "Utilisez les modèles de langage rapides de Groq et ses capacités de traitement audio.",
"Enter your Groq API Key": "Entrez votre clé API Groq",
"Ask AI": "Demander à l'IA",
"Transcribe Audio": "Transcrire l'audio",
"Translate Audio": "Traduire l'audio",
"Custom API Call": "Appel API personnalisé",
"Ask Groq anything using fast language models.": "Demandez n'importe quoi à Groq en utilisant des modèles de langage rapides.",
"Transcribes audio into text in the input language.": "Transcrit l'audio en texte dans la langue d'entrée.",
"Translates audio into English text.": "Traduit l'audio en texte anglais.",
"Make a custom API call to a specific endpoint": "Passez un appel API personnalisé à un point de terminaison spécifique",
"Model": "Modèle",
"Question": "Question",
"Temperature": "Température",
"Maximum Tokens": "Maximum de jetons",
"Top P": "Top P",
"Frequency penalty": "Malus de fréquence",
"Presence penalty": "Malus de présence",
"Memory Key": "Clé de mémoire",
"Roles": "Rôles",
"Audio File": "Fichier audio",
"Language": "Langue",
"Prompt": "Prompt",
"Response Format": "Format de réponse",
"Method": "Méthode",
"Headers": "En-têtes",
"Query Parameters": "Paramètres de requête",
"Body": "Corps",
"Response is Binary ?": "La réponse est binaire ?",
"No Error on Failure": "Aucune erreur en cas d'échec",
"Timeout (in seconds)": "Délai d'attente (en secondes)",
"The model which will generate the completion.": "Le modèle qui va générer la complétion.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Contrôle le caractère aléatoire : des valeurs plus basses produisent des complétions moins aléatoires. À mesure que la température approche de zéro, le modèle devient déterministe et répétitif.",
"The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.": "Le nombre maximum de jetons à générer. La longueur totale des jetons d'entrée et des jetons générés est limitée par la longueur de contexte du modèle.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Une alternative à l'échantillonnage avec température, appelée échantillonnage du noyau, où le modèle considère les jetons avec la masse de probabilité top_p. Ainsi, 0,1 signifie que seuls les jetons constituant les 10 % supérieurs de la masse de probabilité sont pris en compte.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Nombre entre -2.0 et 2.0. Les valeurs positives pénalisent les nouveaux jetons en fonction de leur fréquence existante dans le texte jusqu'à présent, diminuant la probabilité que le modèle répète la même ligne mot pour mot.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.": "Nombre entre -2.0 et 2.0. Les valeurs positives pénalisent les nouveaux jetons en fonction de leur apparition dans le texte jusqu'à présent, ce qui augmente la probabilité que le modèle parle de nouveaux sujets.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.": "Une clé de mémoire qui conservera l'historique de discussion partagé entre les exécutions et les flux. Gardez-la vide pour laisser Groq sans mémoire des messages précédents.",
"Array of roles to specify more accurate response": "Tableau de rôles pour spécifier une réponse plus précise",
"The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "Le fichier audio à transcrire. Formats supportés : flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for transcription.": "Le modèle à utiliser pour la transcription.",
"The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.": "La langue de l'audio d'entrée au format ISO-639-1 (par exemple, \"en\" pour l'anglais). Cela améliorera la précision et la latence.",
"An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.": "Un texte optionnel pour guider le style du modèle ou continuer un segment audio précédent. L'invite doit correspondre à la langue de l'audio.",
"The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.": "La température d'échantillonnage, entre 0 et 1. Des valeurs plus élevées comme 0,8 rendront la sortie plus aléatoire, tandis que des valeurs plus basses comme 0,2 la rendront plus concentrée et déterministe.",
"The format of the transcript output.": "Le format de sortie de la transcription.",
"The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "Le fichier audio à traduire. Formats supportés : flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for translation.": "Le modèle à utiliser pour la traduction.",
"An optional text in English to guide the model's style or continue a previous audio segment.": "Un texte optionnel en anglais pour guider le style du modèle ou continuer un segment audio précédent.",
"The format of the translation output.": "Le format de la sortie de traduction.",
"Authorization headers are injected automatically from your connection.": "Les en-têtes d'autorisation sont injectés automatiquement à partir de votre connexion.",
"Enable for files like PDFs, images, etc..": "Activer pour les fichiers comme les PDF, les images, etc.",
"JSON": "JSON",
"Text": "Texte",
"Verbose JSON": "Verbose JSON",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}
@@ -0,0 +1,60 @@
{
"Groq": "Groq",
"Use Groq's fast language models and audio processing capabilities.": "Use Groq's fast language models and audio processing capabilities.",
"Enter your Groq API Key": "Enter your Groq API Key",
"Ask AI": "Ask AI",
"Transcribe Audio": "Transcribe Audio",
"Translate Audio": "Translate Audio",
"Custom API Call": "Custom API Call",
"Ask Groq anything using fast language models.": "Ask Groq anything using fast language models.",
"Transcribes audio into text in the input language.": "Transcribes audio into text in the input language.",
"Translates audio into English text.": "Translates audio into English text.",
"Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint",
"Model": "Model",
"Question": "Question",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Frequency penalty": "Frequency penalty",
"Presence penalty": "Presence penalty",
"Memory Key": "Memory Key",
"Roles": "Roles",
"Audio File": "Audio File",
"Language": "Language",
"Prompt": "Prompt",
"Response Format": "Response Format",
"Method": "Method",
"Headers": "Headers",
"Query Parameters": "Query Parameters",
"Body": "Body",
"No Error on Failure": "No Error on Failure",
"Timeout (in seconds)": "Timeout (in seconds)",
"The model which will generate the completion.": "The model which will generate the completion.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
"The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.": "The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.",
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response",
"The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for transcription.": "The model to use for transcription.",
"The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.": "The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.",
"An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.": "An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.",
"The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.": "The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.",
"The format of the transcript output.": "The format of the transcript output.",
"The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for translation.": "The model to use for translation.",
"An optional text in English to guide the model's style or continue a previous audio segment.": "An optional text in English to guide the model's style or continue a previous audio segment.",
"The format of the translation output.": "The format of the translation output.",
"Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.",
"JSON": "JSON",
"Text": "Text",
"Verbose JSON": "Verbose JSON",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}
@@ -0,0 +1,60 @@
{
"Groq": "Groq",
"Use Groq's fast language models and audio processing capabilities.": "Use Groq's fast language models and audio processing capabilities.",
"Enter your Groq API Key": "Enter your Groq API Key",
"Ask AI": "Ask AI",
"Transcribe Audio": "Transcribe Audio",
"Translate Audio": "Translate Audio",
"Custom API Call": "Custom API Call",
"Ask Groq anything using fast language models.": "Ask Groq anything using fast language models.",
"Transcribes audio into text in the input language.": "Transcribes audio into text in the input language.",
"Translates audio into English text.": "Translates audio into English text.",
"Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint",
"Model": "Model",
"Question": "Question",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Frequency penalty": "Frequency penalty",
"Presence penalty": "Presence penalty",
"Memory Key": "Memory Key",
"Roles": "Roles",
"Audio File": "Audio File",
"Language": "Language",
"Prompt": "Prompt",
"Response Format": "Response Format",
"Method": "Method",
"Headers": "Headers",
"Query Parameters": "Query Parameters",
"Body": "Body",
"No Error on Failure": "No Error on Failure",
"Timeout (in seconds)": "Timeout (in seconds)",
"The model which will generate the completion.": "The model which will generate the completion.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
"The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.": "The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.",
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response",
"The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for transcription.": "The model to use for transcription.",
"The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.": "The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.",
"An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.": "An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.",
"The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.": "The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.",
"The format of the transcript output.": "The format of the transcript output.",
"The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for translation.": "The model to use for translation.",
"An optional text in English to guide the model's style or continue a previous audio segment.": "An optional text in English to guide the model's style or continue a previous audio segment.",
"The format of the translation output.": "The format of the translation output.",
"Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.",
"JSON": "JSON",
"Text": "Text",
"Verbose JSON": "Verbose JSON",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}
@@ -0,0 +1,61 @@
{
"Use Groq's fast language models and audio processing capabilities.": "Groqの高速言語モデルとオーディオ処理機能を使用します。",
"Enter your Groq API Key": "Groq APIキーを入力",
"Ask AI": "AIに聞く",
"Transcribe Audio": "音声を文字起こし",
"Translate Audio": "音声を翻訳",
"Custom API Call": "カスタムAPI呼び出し",
"Ask Groq anything using fast language models.": "高速な言語モデルを使ってGroqに何でも聞いてみてください。",
"Transcribes audio into text in the input language.": "音声を入力言語でテキストに変換します。",
"Translates audio into English text.": "音声を英語のテキストに翻訳します。",
"Make a custom API call to a specific endpoint": "特定のエンドポイントへのカスタム API コールを実行します。",
"Model": "モデル",
"Question": "質問",
"Temperature": "温度",
"Maximum Tokens": "最大トークン",
"Top P": "トップ P",
"Frequency penalty": "頻度ペナルティ",
"Presence penalty": "プレゼンスペナルティ",
"Memory Key": "メモリーキー",
"Roles": "ロール",
"Audio File": "オーディオ ファイル",
"Language": "言語",
"Prompt": "Prompt",
"Response Format": "応答形式",
"Method": "メソッド",
"Headers": "ヘッダー",
"Query Parameters": "クエリパラメータ",
"Body": "本文",
"Response is Binary ?": "応答はバイナリですか?",
"No Error on Failure": "失敗時にエラーを発生させない",
"Timeout (in seconds)": "タイムアウト(秒)",
"The model which will generate the completion.": "補完を生成するモデル。",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "ランダム性を制御します。値を下げると出力のランダム性が低くなり、温度がゼロに近づくにつれて、モデルは決定論的で反復的になります。",
"The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.": "生成するトークンの最大数。入力トークンと生成されるトークンの合計の長さは、モデルのコンテキストの長さによって制限されます。",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "温度によるサンプリングの代替手段である核サンプリングでは、モデルはtop_p確率質量を持つトークンの結果を考慮します。つまり、0.1は上位10%の確率質量からなるトークンのみを考慮することになります。",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "-2.0 から 2.0 までの数字。正の値は、これまでのテキスト内の出現頻度に基づいて新しいトークンにペナルティを課すため、モデルが同じ行をそのまま繰り返す可能性が低下します。",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.": "-2.0 から 2.0 までの数字。正の値は、これまでのテキストに出現したかどうかに基づいて新しいトークンにペナルティを課すため、モデルが新しいトピックについて話す可能性を高めます。",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.": "実行とフローをまたいでチャット履歴を共有して保持するメモリキー。空のままにすると、Groqは以前のメッセージを記憶しません。",
"Array of roles to specify more accurate response": "より正確な応答を指定するロールの配列",
"The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "文字起こしするオーディオファイル。サポートされているフォーマット: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for transcription.": "文字起こしに使用するモデル。",
"The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.": "入力音声の言語をISO-639-1形式(例:英語の場合は\"en\")で指定します。これにより精度とレイテンシが向上します。",
"An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.": "モデルのスタイルをガイドしたり、以前のオーディオセグメントを継続するための任意のテキストです。プロンプトはオーディオ言語と一致する必要があります。",
"The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.": "サンプリング温度は0から1の間です。0.8のような高い値は出力をよりランダムにし、0.2のような低い値はより集中的で決定的にします。",
"The format of the transcript output.": "文字起こし出力の形式。",
"The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "翻訳するオーディオファイル。サポートされているフォーマット: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for translation.": "翻訳に使用するモデル。",
"An optional text in English to guide the model's style or continue a previous audio segment.": "モデルのスタイルをガイドしたり、以前のオーディオセグメントを継続するための任意の英語テキスト。",
"The format of the translation output.": "翻訳出力の書式。",
"Authorization headers are injected automatically from your connection.": "認証ヘッダは接続から自動的に注入されます。",
"Enable for files like PDFs, images, etc..": "PDF、画像などのファイルの場合に有効にします。",
"JSON": "JSON",
"Text": "テキスト",
"Verbose JSON": "Verbose JSON",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}
@@ -0,0 +1,61 @@
{
"Use Groq's fast language models and audio processing capabilities.": "Gebruik Groq's snelle taalmodellen en geluidsverwerkingsmogelijkheden.",
"Enter your Groq API Key": "Voer uw Groq API-sleutel in",
"Ask AI": "Vraag het AI",
"Transcribe Audio": "Audio transcriberen",
"Translate Audio": "Audio vertalen",
"Custom API Call": "Custom API Call",
"Ask Groq anything using fast language models.": "Vraag Groq alles met behulp van snelle taalmodellen.",
"Transcribes audio into text in the input language.": "Transcribeert audio naar tekst in de invoertaal.",
"Translates audio into English text.": "Vertaalt audio naar Engelse tekst.",
"Make a custom API call to a specific endpoint": "Maak een aangepaste API call naar een specifiek eindpunt",
"Model": "Model",
"Question": "Vraag",
"Temperature": "Temperatuur",
"Maximum Tokens": "Maximaal aantal tokens",
"Top P": "Top P",
"Frequency penalty": "Frequentie boete",
"Presence penalty": "Presence boete",
"Memory Key": "Geheugen Sleutel",
"Roles": "Rollen",
"Audio File": "Audio bestand",
"Language": "Taal",
"Prompt": "Prompt",
"Response Format": "Antwoord formaat",
"Method": "Methode",
"Headers": "Kopteksten",
"Query Parameters": "Query parameters",
"Body": "Body",
"Response is Binary ?": "Antwoord is binair?",
"No Error on Failure": "Geen fout bij mislukking",
"Timeout (in seconds)": "Time-out (in seconden)",
"The model which will generate the completion.": "Het model dat de voltooiing zal genereren.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Bestuurt willekeurigheid: Het verlagen van de temperatuur resulteert in minder willekeurige aanvullingen. Zodra de temperatuur nul nadert, zal het model deterministisch en herhalend worden.",
"The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.": "Het maximale aantal tokens om te genereren. De totale lengte van de ingangstokens en gegenereerde tokens wordt beperkt door de context lengte van het model.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Een alternatief voor bemonstering met temperatuur, genaamd nucleus sampling, waarbij het model de resultaten van de tokens met top_p waarschijnlijkheidsmassa beschouwt. 0,1 betekent dus dat alleen de tokens die de bovenste 10% waarschijnlijkheidsmassa vormen, worden overwogen.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Nummer tussen -2.0 en 2.0. Positieve waarden bestraffen nieuwe tokens op basis van hun bestaande frequentie in de tekst tot nu toe, waardoor de waarschijnlijkheid van het model om dezelfde lijn te herhalen afneemt.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.": "Nummer tussen -2.0 en 2.0. Positieve waarden bestraffen nieuwe tokens op basis van de vraag of ze tot nu toe in de tekst staan, waardoor de waarschijnlijkheid van het model om over nieuwe onderwerpen te praten toeneemt.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.": "Een geheugensleutel waarmee de chatgeschiedenis gedeeld blijft over uitvoeringen en stromen. Houd deze leeg om Groq zonder geheugen van vorige berichten te verlaten.",
"Array of roles to specify more accurate response": "Reeks rollen om een nauwkeuriger antwoord te specificeren",
"The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "Het audiobestand om te transcriberen. Ondersteunde formaten: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for transcription.": "Het te gebruiken model voor transcriptie.",
"The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.": "De taal van het invoergeluid in ISO-639-1 formaat (bijv. \"en\" voor het Engels). Dit verbetert de nauwkeurigheid en latentie.",
"An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.": "Een optionele tekst om de stijl van het model te begeleiden of door te gaan met een vorig audiosegment. De prompt moet overeenkomen met de audiotaal.",
"The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.": "De sampling-temperatuur, tussen 0 en 1. Hogere waarden zoals 0,8 maken de uitvoer willekeuriger, terwijl lagere waarden zoals 0,2 deze gerichter en deterministischer maken.",
"The format of the transcript output.": "Het formaat van de transcript uitvoer.",
"The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "Het audiobestand om te vertalen. Ondersteunde formaten: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for translation.": "Het te gebruiken model voor vertaling.",
"An optional text in English to guide the model's style or continue a previous audio segment.": "Een optionele tekst in het Engels om de stijl van het model te begeleiden of door te gaan met een vorig audiosegment.",
"The format of the translation output.": "Het formaat van de vertaaluitvoer.",
"Authorization headers are injected automatically from your connection.": "Autorisatie headers worden automatisch geïnjecteerd vanuit uw verbinding.",
"Enable for files like PDFs, images, etc..": "Inschakelen voor bestanden zoals PDF's, afbeeldingen etc..",
"JSON": "JSON",
"Text": "Tekst",
"Verbose JSON": "Verbose JSON",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}
@@ -0,0 +1,61 @@
{
"Use Groq's fast language models and audio processing capabilities.": "Use modelos de linguagem rápida de Groq e recursos de processamento de áudio.",
"Enter your Groq API Key": "Digite sua chave de API Groq",
"Ask AI": "Perguntar à IA",
"Transcribe Audio": "Transcrever Áudio",
"Translate Audio": "Traduzir Áudio",
"Custom API Call": "Chamada de API personalizada",
"Ask Groq anything using fast language models.": "Pergunte a Groq qualquer coisa usando modelos de linguagem rápida.",
"Transcribes audio into text in the input language.": "Transcreve áudio em texto no idioma de entrada.",
"Translates audio into English text.": "Traduz áudio em texto em inglês.",
"Make a custom API call to a specific endpoint": "Faça uma chamada de API personalizada para um ponto de extremidade específico",
"Model": "Modelo",
"Question": "Questão",
"Temperature": "Temperatura",
"Maximum Tokens": "Máximo de Tokens",
"Top P": "Top P",
"Frequency penalty": "Penalidade de frequência",
"Presence penalty": "Penalidade de presença",
"Memory Key": "Chave de memória",
"Roles": "Papéis",
"Audio File": "Arquivo de Áudio",
"Language": "Idioma",
"Prompt": "Prompt",
"Response Format": "Formato de Resposta",
"Method": "Método",
"Headers": "Cabeçalhos",
"Query Parameters": "Parâmetros da consulta",
"Body": "Conteúdo",
"Response is Binary ?": "A resposta é binária ?",
"No Error on Failure": "Nenhum erro em caso de falha",
"Timeout (in seconds)": "Tempo limite (em segundos)",
"The model which will generate the completion.": "O modelo que irá gerar a conclusão.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controla aleatoriedade: Diminuir resulta em menos complementos aleatórios. À medida que a temperatura se aproxima de zero, o modelo se tornará determinístico e repetitivo.",
"The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.": "O número máximo de tokens a gerar. O comprimento total de tokens de entrada e tokens gerados é limitado pelo comprimento do contexto do modelo.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Uma alternativa à amostragem com temperatura, chamada amostragem núcleo, onde o modelo considera os resultados dos tokens com massa de probabilidade top_p. Portanto, 0,1 significa que apenas os tokens que incluem a massa de probabilidade superior de 10% são considerados.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Número entre -2.0 e 2.0. Valores positivos penalizam novos tokens com base em sua frequência existente no texto até agora, diminuindo a probabilidade de o modelo repetir a mesma linha literalmente.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.": "Número entre -2.0 e 2.0. Valores positivos penalizam novos tokens baseado no fato de eles aparecerem no texto até agora, aumentando a probabilidade do modelo de falar sobre novos tópicos.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.": "Uma chave de memória que manterá o histórico de bate-papo compartilhado entre execuções e fluxos. Deixe em branco para deixar Groq sem memória das mensagens anteriores.",
"Array of roles to specify more accurate response": "Array de papéis para especificar uma resposta mais precisa",
"The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "O arquivo de áudio a ser transcrito. Formatos suportados: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for transcription.": "O modelo a ser usado para transcrição.",
"The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.": "O idioma do áudio de entrada no formato ISO-639-1 (por exemplo, \"en\" para inglês). Isso vai melhorar precisão e latência.",
"An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.": "Um texto opcional para guiar o estilo do modelo ou continuar um segmento de áudio anterior. O prompt deve corresponder ao idioma do áudio.",
"The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.": "A temperatura de amostragem, entre 0 e 1. Valores mais altos como 0,8 tornarão a saída mais aleatória, enquanto valores mais baixos como 0,2 a tornarão mais focada e determinística.",
"The format of the transcript output.": "O formato da saída de transcrição.",
"The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "O arquivo de áudio para traduzir. Formatos suportados: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for translation.": "O modelo a ser usado para tradução.",
"An optional text in English to guide the model's style or continue a previous audio segment.": "Um texto opcional em Inglês para guiar o estilo do modelo ou continuar um segmento de áudio anterior.",
"The format of the translation output.": "O formato da saída de tradução.",
"Authorization headers are injected automatically from your connection.": "Os cabeçalhos de autorização são inseridos automaticamente a partir da sua conexão.",
"Enable for files like PDFs, images, etc..": "Habilitar para arquivos como PDFs, imagens, etc..",
"JSON": "JSON",
"Text": "Texto",
"Verbose JSON": "Verbose JSON",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}
@@ -0,0 +1,60 @@
{
"Groq": "Groq",
"Use Groq's fast language models and audio processing capabilities.": "Используйте быстрые языковые модели Groq и возможности обработки звука.",
"Enter your Groq API Key": "Введите ваш Groq API ключ",
"Ask AI": "Ask AI",
"Transcribe Audio": "Транскрибировать аудио",
"Translate Audio": "Перевести аудио",
"Custom API Call": "Пользовательский вызов API",
"Ask Groq anything using fast language models.": "Спросите Groq о чём угодно с использованием быстрых языковых моделей.",
"Transcribes audio into text in the input language.": "Преобразует аудио в текст на языке ввода.",
"Translates audio into English text.": "Переводит аудио в английский текст.",
"Make a custom API call to a specific endpoint": "Сделать пользовательский API вызов к определенной конечной точке",
"Model": "Модель",
"Question": "Вопрос",
"Temperature": "Температура",
"Maximum Tokens": "Максимум токенов",
"Top P": "Top P",
"Frequency penalty": "Штраф за частоту",
"Presence penalty": "Штраф присутствия",
"Memory Key": "Ключ памяти",
"Roles": "Роли",
"Audio File": "Аудио файл",
"Language": "Язык",
"Prompt": "Prompt",
"Response Format": "Формат ответа",
"Method": "Метод",
"Headers": "Заголовки",
"Query Parameters": "Параметры запроса",
"Body": "Тело",
"No Error on Failure": "Не выдавать ошибку при сбое",
"Timeout (in seconds)": "Таймаут (в секундах)",
"The model which will generate the completion.": "Модель, которая будет генерировать завершение.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Контролирует случайность: понижение температуры приводит к менее случайным завершениям. По мере приближения температуры к нулю модель становится детерминированной и повторяющейся.",
"The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.": "Максимальное количество генерируемых токенов. Общая длина входных токенов и сгенерированных токенов ограничена длиной контекста модели.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "Альтернатива сэмплированию с температурой, называемая ядерным сэмплированием, где модель рассматривает результаты токенов с вероятностной массой top_p. Таким образом, 0.1 означает, что учитываются только токены, составляющие верхние 10% вероятностной массы.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Число между -2.0 и 2.0. Положительные значения штрафуют новые токены на основе их текущей частоты в тексте, уменьшая вероятность того, что модель дословно повторит одну и ту же строку.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.": "Число между -2.0 и 2.0. Положительные значения штрафуют новые токены на основе того, появлялись ли они в тексте, повышая вероятность того, что модель заговорит о новых темах.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.": "Ключ памяти, который сохранит историю чата общими между запусками и потоками. Оставьте пустым, чтобы оставить Groq без памяти предыдущих сообщений.",
"Array of roles to specify more accurate response": "Массив ролей для более точного ответа",
"The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "Аудиофайл для транскрипции. Поддерживаемые форматы: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for transcription.": "Модель, используемая для транскрипции.",
"The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.": "Язык входного аудио в формате ISO-639-1 (например, \"en\" для английского). Это улучшит точность и задержку.",
"An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.": "Необязательный текст для руководства стилем модели или продолжения предыдущего сегмента звука. Подсказка должна соответствовать языку аудио.",
"The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.": "Температура сэмплирования от 0 до 1. Более высокие значения, такие как 0.8, сделают вывод более случайным, а более низкие значения, такие как 0.2, сделают его более сфокусированным и детерминированным.",
"The format of the transcript output.": "Формат вывода транскрипции.",
"The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "Аудиофайл для перевода. Поддерживаемые форматы: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for translation.": "Модель, используемая для перевода.",
"An optional text in English to guide the model's style or continue a previous audio segment.": "Необязательный текст на английском языке для руководства стилем модели или продолжения предыдущего звукового сегмента.",
"The format of the translation output.": "Формат вывода перевода.",
"Authorization headers are injected automatically from your connection.": "Заголовки авторизации включаются автоматически из вашего соединения.",
"JSON": "JSON",
"Text": "Текст",
"Verbose JSON": "Verbose JSON",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}
@@ -0,0 +1,61 @@
{
"Use Groq's fast language models and audio processing capabilities.": "Use Groq's fast language models and audio processing capabilities.",
"Enter your Groq API Key": "Enter your Groq API Key",
"Ask AI": "Ask AI",
"Transcribe Audio": "Transcribe Audio",
"Translate Audio": "Translate Audio",
"Custom API Call": "Custom API Call",
"Ask Groq anything using fast language models.": "Ask Groq anything using fast language models.",
"Transcribes audio into text in the input language.": "Transcribes audio into text in the input language.",
"Translates audio into English text.": "Translates audio into English text.",
"Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint",
"Model": "Model",
"Question": "Question",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Frequency penalty": "Frequency penalty",
"Presence penalty": "Presence penalty",
"Memory Key": "Memory Key",
"Roles": "Roles",
"Audio File": "Audio File",
"Language": "Language",
"Prompt": "Prompt",
"Response Format": "Response Format",
"Method": "Method",
"Headers": "Headers",
"Query Parameters": "Query Parameters",
"Body": "Body",
"Response is Binary ?": "Response is Binary ?",
"No Error on Failure": "No Error on Failure",
"Timeout (in seconds)": "Timeout (in seconds)",
"The model which will generate the completion.": "The model which will generate the completion.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
"The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.": "The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.",
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response",
"The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for transcription.": "The model to use for transcription.",
"The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.": "The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.",
"An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.": "An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.",
"The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.": "The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.",
"The format of the transcript output.": "The format of the transcript output.",
"The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for translation.": "The model to use for translation.",
"An optional text in English to guide the model's style or continue a previous audio segment.": "An optional text in English to guide the model's style or continue a previous audio segment.",
"The format of the translation output.": "The format of the translation output.",
"Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.",
"Enable for files like PDFs, images, etc..": "Enable for files like PDFs, images, etc..",
"JSON": "JSON",
"Text": "Text",
"Verbose JSON": "Verbose JSON",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}
@@ -0,0 +1,60 @@
{
"Groq": "Groq",
"Use Groq's fast language models and audio processing capabilities.": "Use Groq's fast language models and audio processing capabilities.",
"Enter your Groq API Key": "Enter your Groq API Key",
"Ask AI": "Ask AI",
"Transcribe Audio": "Transcribe Audio",
"Translate Audio": "Translate Audio",
"Custom API Call": "Custom API Call",
"Ask Groq anything using fast language models.": "Ask Groq anything using fast language models.",
"Transcribes audio into text in the input language.": "Transcribes audio into text in the input language.",
"Translates audio into English text.": "Translates audio into English text.",
"Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint",
"Model": "Model",
"Question": "Question",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Frequency penalty": "Frequency penalty",
"Presence penalty": "Presence penalty",
"Memory Key": "Memory Key",
"Roles": "Roles",
"Audio File": "Audio File",
"Language": "Language",
"Prompt": "Prompt",
"Response Format": "Response Format",
"Method": "Method",
"Headers": "Headers",
"Query Parameters": "Query Parameters",
"Body": "Body",
"No Error on Failure": "No Error on Failure",
"Timeout (in seconds)": "Timeout (in seconds)",
"The model which will generate the completion.": "The model which will generate the completion.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
"The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.": "The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.",
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response",
"The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for transcription.": "The model to use for transcription.",
"The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.": "The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.",
"An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.": "An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.",
"The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.": "The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.",
"The format of the transcript output.": "The format of the transcript output.",
"The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for translation.": "The model to use for translation.",
"An optional text in English to guide the model's style or continue a previous audio segment.": "An optional text in English to guide the model's style or continue a previous audio segment.",
"The format of the translation output.": "The format of the translation output.",
"Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.",
"JSON": "JSON",
"Text": "Text",
"Verbose JSON": "Verbose JSON",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}
@@ -0,0 +1,61 @@
{
"Use Groq's fast language models and audio processing capabilities.": "Use Groq's fast language models and audio processing capabilities.",
"Enter your Groq API Key": "Enter your Groq API Key",
"Ask AI": "询问AI",
"Transcribe Audio": "Transcribe Audio",
"Translate Audio": "Translate Audio",
"Custom API Call": "自定义 API 调用",
"Ask Groq anything using fast language models.": "Ask Groq anything using fast language models.",
"Transcribes audio into text in the input language.": "Transcribes audio into text in the input language.",
"Translates audio into English text.": "Translates audio into English text.",
"Make a custom API call to a specific endpoint": "向特定端点发起自定义 API 调用",
"Model": "Model",
"Question": "Question",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Frequency penalty": "Frequency penalty",
"Presence penalty": "Presence penalty",
"Memory Key": "内存键",
"Roles": "角色",
"Audio File": "Audio File",
"Language": "Language",
"Prompt": "Prompt",
"Response Format": "Response Format",
"Method": "方法",
"Headers": "请求头",
"Query Parameters": "查询参数",
"Body": "正文内容",
"Response is Binary ?": "Response is Binary ?",
"No Error on Failure": "失败时不报错",
"Timeout (in seconds)": "超时(秒)",
"The model which will generate the completion.": "The model which will generate the completion.",
"Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.",
"The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.": "The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.",
"An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.",
"A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.",
"Array of roles to specify more accurate response": "Array of roles to specify more accurate response",
"The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for transcription.": "The model to use for transcription.",
"The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.": "The language of the input audio in ISO-639-1 format (e.g., \"en\" for English). This will improve accuracy and latency.",
"An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.": "An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.",
"The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.": "The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.",
"The format of the transcript output.": "The format of the transcript output.",
"The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.": "The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.",
"The model to use for translation.": "The model to use for translation.",
"An optional text in English to guide the model's style or continue a previous audio segment.": "An optional text in English to guide the model's style or continue a previous audio segment.",
"The format of the translation output.": "The format of the translation output.",
"Authorization headers are injected automatically from your connection.": "授权头自动从您的连接中注入。",
"Enable for files like PDFs, images, etc..": "Enable for files like PDFs, images, etc..",
"JSON": "JSON",
"Text": "文本",
"Verbose JSON": "Verbose JSON",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}
@@ -0,0 +1,66 @@
import {
  AuthenticationType,
  HttpMethod,
  createCustomApiCallAction,
  httpClient,
} from '@activepieces/pieces-common';
import { PieceAuth, createPiece } from '@activepieces/pieces-framework';
import { PieceCategory } from '@activepieces/shared';
import { askGroq } from './lib/actions/ask-groq';
import { transcribeAudio } from './lib/actions/transcribe-audio';
import { translateAudio } from './lib/actions/translate-audio';

const baseUrl = 'https://api.groq.com/openai/v1';

export const groqAuth = PieceAuth.SecretText({
  description: 'Enter your Groq API Key',
  displayName: 'API Key',
  required: true,
  validate: async (auth) => {
    try {
      await httpClient.sendRequest<{
        data: { id: string }[];
      }>({
        url: `${baseUrl}/models`,
        method: HttpMethod.GET,
        authentication: {
          type: AuthenticationType.BEARER_TOKEN,
          token: auth.auth,
        },
      });
      return {
        valid: true,
      };
    } catch (e) {
      return {
        valid: false,
        error: 'Invalid API key',
      };
    }
  },
});

export const groq = createPiece({
  displayName: 'Groq',
  description: 'Use Groq\'s fast language models and audio processing capabilities.',
  minimumSupportedRelease: '0.9.0',
  logoUrl: 'https://cdn.activepieces.com/pieces/groq.png',
  categories: [PieceCategory.ARTIFICIAL_INTELLIGENCE],
  auth: groqAuth,
  actions: [
    askGroq,
    transcribeAudio,
    translateAudio,
    createCustomApiCallAction({
      auth: groqAuth,
      baseUrl: () => baseUrl,
      authMapping: async (auth) => {
        return {
          Authorization: `Bearer ${auth.secret_text}`,
        };
      },
    }),
  ],
  authors: ['abuaboud'],
  triggers: [],
});
@@ -0,0 +1,206 @@
import { createAction, Property, StoreScope } from '@activepieces/pieces-framework';
import { groqAuth } from '../..';
import { httpClient, HttpMethod, AuthenticationType } from '@activepieces/pieces-common';

export const askGroq = createAction({
  auth: groqAuth,
  name: 'ask-ai',
  displayName: 'Ask AI',
  description: 'Ask Groq anything using fast language models.',
  props: {
    model: Property.Dropdown({
      auth: groqAuth,
      displayName: 'Model',
      required: true,
      description: 'The model which will generate the completion.',
      refreshers: [],
      defaultValue: 'llama-3.1-70b-versatile',
      options: async ({ auth }) => {
        if (!auth) {
          return {
            disabled: true,
            placeholder: 'Please connect your Groq account first.',
            options: [],
          };
        }
        try {
          const response = await httpClient.sendRequest({
            url: 'https://api.groq.com/openai/v1/models',
            method: HttpMethod.GET,
            authentication: {
              type: AuthenticationType.BEARER_TOKEN,
              token: auth.secret_text,
            },
          });
          // Filter out audio models
          const models = (response.body.data as Array<{ id: string }>).filter(
            (model) => !model.id.toLowerCase().includes('whisper'),
          );
          return {
            disabled: false,
            options: models.map((model) => {
              return {
                label: model.id,
                value: model.id,
              };
            }),
          };
        } catch (error) {
          return {
            disabled: true,
            options: [],
            placeholder: "Couldn't load models, API key is invalid",
          };
        }
      },
    }),
    prompt: Property.LongText({
      displayName: 'Question',
      required: true,
    }),
    temperature: Property.Number({
      displayName: 'Temperature',
      required: false,
      description:
        'Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive.',
      defaultValue: 0.9,
    }),
    maxTokens: Property.Number({
      displayName: 'Maximum Tokens',
      required: true,
      description:
        "The maximum number of tokens to generate. The total length of input tokens and generated tokens is limited by the model's context length.",
      defaultValue: 2048,
    }),
    topP: Property.Number({
      displayName: 'Top P',
      required: false,
      description:
        'An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered.',
      defaultValue: 1,
    }),
    frequencyPenalty: Property.Number({
      displayName: 'Frequency penalty',
      required: false,
      description:
        "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
      defaultValue: 0,
    }),
    presencePenalty: Property.Number({
      displayName: 'Presence penalty',
      required: false,
      description:
        "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.",
      defaultValue: 0.6,
    }),
    memoryKey: Property.ShortText({
      displayName: 'Memory Key',
      description:
        'A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Groq without memory of previous messages.',
      required: false,
    }),
    roles: Property.Json({
      displayName: 'Roles',
      required: false,
      description: 'Array of roles to specify more accurate response',
      defaultValue: [{ role: 'system', content: 'You are a helpful assistant.' }],
    }),
  },
  async run({ auth, propsValue, store }) {
    const {
      model,
      temperature,
      maxTokens,
      topP,
      frequencyPenalty,
      presencePenalty,
      prompt,
      memoryKey,
    } = propsValue;

    let messageHistory: any[] = [];
    // If memory key is set, retrieve messages stored in history
    if (memoryKey) {
      messageHistory = (await store.get(memoryKey, StoreScope.PROJECT)) ?? [];
    }

    // Add user prompt to message history
    messageHistory.push({
      role: 'user',
      content: prompt,
    });

    // Add system instructions if set by user
    const rolesArray = propsValue.roles ? (propsValue.roles as any) : [];
    const roles = rolesArray.map((item: any) => {
      const rolesEnum = ['system', 'user', 'assistant'];
      if (!rolesEnum.includes(item.role)) {
        throw new Error('The only available roles are: [system, user, assistant]');
      }

      return {
        role: item.role,
        content: item.content,
      };
    });

    // Send prompt
    const completion = await httpClient.sendRequest({
      method: HttpMethod.POST,
      url: 'https://api.groq.com/openai/v1/chat/completions',
      authentication: {
        type: AuthenticationType.BEARER_TOKEN,
        token: auth.secret_text,
      },
      body: {
        model: model,
        messages: [...roles, ...messageHistory],
        temperature: temperature,
        top_p: topP,
        frequency_penalty: frequencyPenalty,
        presence_penalty: presencePenalty,
        max_completion_tokens: maxTokens,
      },
    });

    // Add response to message history
    messageHistory = [...messageHistory, completion.body.choices[0].message];

    // Store history if memory key is set
    if (memoryKey) {
      await store.put(memoryKey, messageHistory, StoreScope.PROJECT);
    }

    // Get the raw content from the response
    const rawContent = completion.body.choices[0].message.content;

    // Check if the response contains thinking (content inside <think> tags)
    const thinkRegex = /<think>([\s\S]*?)<\/think>/;
    const thinkMatch = rawContent.match(thinkRegex);

    // Create the response structure
    const responseStructure = [];

    if (thinkMatch) {
      // Extract the thinking content
      const thinkContent = thinkMatch[1].trim();

      // Extract the final answer (content after the last </think> tag)
      const finalContent = rawContent.split('</think>').pop()?.trim() || '';

      // Add to response structure
      responseStructure.push({
        Think: thinkContent,
        Content: finalContent
      });
    } else {
      // If no thinking tags, just return the content as is
      responseStructure.push({
        Think: null,
        Content: rawContent
      });
    }

    return responseStructure;
  },
});
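The `<think>`-tag handling at the end of the Ask AI action's `run()` can be sketched as a standalone helper. This is an illustrative extraction of the same regex logic, not a function that exists in the commit; the name `splitThink` is hypothetical.

```typescript
// Hypothetical helper mirroring the <think>-tag parsing in the Ask AI action.
// Splits a model reply into its reasoning ("Think") and final answer ("Content").
function splitThink(rawContent: string): { Think: string | null; Content: string } {
  const thinkRegex = /<think>([\s\S]*?)<\/think>/;
  const thinkMatch = rawContent.match(thinkRegex);
  if (thinkMatch) {
    return {
      // Reasoning captured between the <think> tags
      Think: thinkMatch[1].trim(),
      // Final answer is whatever follows the last closing tag
      Content: rawContent.split('</think>').pop()?.trim() || '',
    };
  }
  // No thinking tags: return the content unchanged
  return { Think: null, Content: rawContent };
}
```

Replies without `<think>` tags pass through untouched, so the action's output shape stays the same for both reasoning and non-reasoning models.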
@@ -0,0 +1,126 @@
import { createAction, Property } from '@activepieces/pieces-framework';
import { groqAuth } from '../..';
import { httpClient, HttpMethod, AuthenticationType } from '@activepieces/pieces-common';

export const transcribeAudio = createAction({
  auth: groqAuth,
  name: 'transcribe-audio',
  displayName: 'Transcribe Audio',
  description: 'Transcribes audio into text in the input language.',
  props: {
    file: Property.File({
      displayName: 'Audio File',
      required: true,
      description:
        'The audio file to transcribe. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.',
    }),
    model: Property.Dropdown({
      displayName: 'Model',
      auth: groqAuth,
      required: true,
      description: 'The model to use for transcription.',
      refreshers: [],
      defaultValue: 'whisper-large-v3',
      options: async ({ auth }) => {
        if (!auth) {
          return {
            disabled: true,
            placeholder: 'Please connect your Groq account first.',
            options: [],
          };
        }
        try {
          const response = await httpClient.sendRequest({
            url: 'https://api.groq.com/openai/v1/models',
            method: HttpMethod.GET,
            authentication: {
              type: AuthenticationType.BEARER_TOKEN,
              token: auth.secret_text,
            },
          });
          // Filter for whisper models only
          const models = (response.body.data as Array<{ id: string }>).filter((model) =>
            model.id.toLowerCase().includes('whisper'),
          );
          return {
            disabled: false,
            options: models.map((model) => {
              return {
                label: model.id,
                value: model.id,
              };
            }),
          };
        } catch (error) {
          return {
            disabled: true,
            options: [],
            placeholder: "Couldn't load models, API key is invalid",
          };
        }
      },
    }),
    language: Property.ShortText({
      displayName: 'Language',
      required: false,
      description:
        'The language of the input audio in ISO-639-1 format (e.g., "en" for English). This will improve accuracy and latency.',
    }),
    prompt: Property.LongText({
      displayName: 'Prompt',
      required: false,
      description:
        "An optional text to guide the model's style or continue a previous audio segment. The prompt should match the audio language.",
    }),
    temperature: Property.Number({
      displayName: 'Temperature',
      required: false,
      description:
        'The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.',
      defaultValue: 0,
    }),
    responseFormat: Property.StaticDropdown({
      displayName: 'Response Format',
      required: false,
      description: 'The format of the transcript output.',
      defaultValue: 'json',
      options: {
        disabled: false,
        options: [
          { label: 'JSON', value: 'json' },
          { label: 'Text', value: 'text' },
          { label: 'Verbose JSON', value: 'verbose_json' },
        ],
      },
    }),
  },
  async run({ auth, propsValue }) {
    const { file, model, language, prompt, temperature, responseFormat } = propsValue;

    // Create form data
    const formData = new FormData();
    formData.append('file', new Blob([file.data] as unknown as BlobPart[]), file.filename);
    formData.append('model', model);

    if (language) formData.append('language', language);
    if (prompt) formData.append('prompt', prompt);
    if (temperature !== undefined) formData.append('temperature', temperature.toString());
    if (responseFormat) formData.append('response_format', responseFormat);

    // Send request
    const response = await httpClient.sendRequest({
      method: HttpMethod.POST,
      url: 'https://api.groq.com/openai/v1/audio/transcriptions',
      authentication: {
        type: AuthenticationType.BEARER_TOKEN,
        token: auth.secret_text,
      },
      headers: {
        'Content-Type': 'multipart/form-data',
      },
      body: formData,
    });

    return response.body;
  },
});
@@ -0,0 +1,119 @@
import { createAction, Property } from '@activepieces/pieces-framework';
import { groqAuth } from '../..';
import { httpClient, HttpMethod, AuthenticationType } from '@activepieces/pieces-common';

export const translateAudio = createAction({
  auth: groqAuth,
  name: 'translate-audio',
  displayName: 'Translate Audio',
  description: 'Translates audio into English text.',
  props: {
    file: Property.File({
      displayName: 'Audio File',
      required: true,
      description:
        'The audio file to translate. Supported formats: flac, mp3, mp4, mpeg, mpga, m4a, ogg, wav, webm.',
    }),
    model: Property.Dropdown({
      displayName: 'Model',
      required: true,
      description: 'The model to use for translation.',
      refreshers: [],
      defaultValue: 'whisper-large-v3',
      auth: groqAuth,
      options: async ({ auth }) => {
        if (!auth) {
          return {
            disabled: true,
            placeholder: 'Please connect your Groq account first.',
            options: [],
          };
        }
        try {
          const response = await httpClient.sendRequest({
            url: 'https://api.groq.com/openai/v1/models',
            method: HttpMethod.GET,
            authentication: {
              type: AuthenticationType.BEARER_TOKEN,
              token: auth.secret_text,
            },
          });
          // Filter for whisper models only
          const models = (response.body.data as Array<{ id: string }>).filter((model) =>
            model.id.toLowerCase().includes('whisper'),
          );
          return {
            disabled: false,
            options: models.map((model) => {
              return {
                label: model.id,
                value: model.id,
              };
            }),
          };
        } catch (error) {
          return {
            disabled: true,
            options: [],
            placeholder: "Couldn't load models, API key is invalid",
          };
        }
      },
    }),
    prompt: Property.LongText({
      displayName: 'Prompt',
      required: false,
      description:
        "An optional text in English to guide the model's style or continue a previous audio segment.",
    }),
    temperature: Property.Number({
      displayName: 'Temperature',
      required: false,
      description:
        'The sampling temperature, between 0 and 1. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.',
      defaultValue: 0,
    }),
    responseFormat: Property.StaticDropdown({
      displayName: 'Response Format',
      required: false,
      description: 'The format of the translation output.',
      defaultValue: 'json',
      options: {
        disabled: false,
        options: [
          { label: 'JSON', value: 'json' },
          { label: 'Text', value: 'text' },
          { label: 'Verbose JSON', value: 'verbose_json' },
        ],
      },
    }),
  },
  async run({ auth, propsValue }) {
    const { file, model, prompt, temperature, responseFormat } = propsValue;

    // Create form data
    const formData = new FormData();
    formData.append('file', new Blob([file.data] as unknown as BlobPart[]), file.filename);
    formData.append('model', model);

    if (prompt) formData.append('prompt', prompt);
    if (temperature !== undefined) formData.append('temperature', temperature.toString());
    if (responseFormat) formData.append('response_format', responseFormat);

    // Send request
    const response = await httpClient.sendRequest({
      method: HttpMethod.POST,
      url: 'https://api.groq.com/openai/v1/audio/translations',
      authentication: {
        type: AuthenticationType.BEARER_TOKEN,
        token: auth.secret_text,
      },
      headers: {
        'Content-Type': 'multipart/form-data',
      },
      body: formData,
    });

    return response.body;
  },
});