Add Activepieces integration for workflow automation

- Add Activepieces fork with SmoothSchedule custom piece
- Create integrations app with Activepieces service layer
- Add embed token endpoint for iframe integration
- Create Automations page with embedded workflow builder
- Add sidebar visibility fix for embed mode
- Add list inactive customers endpoint to Public API
- Include SmoothSchedule triggers: event created/updated/cancelled
- Include SmoothSchedule actions: create/update/cancel events, list resources/services/customers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
poduck committed 2025-12-18 22:59:37 -05:00
parent 9848268d34
commit 3aa7199503
16292 changed files with 1284892 additions and 4708 deletions
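A sketch of the embed flow the bullets describe: the Automations page asks the new backend endpoint for a short-lived token, then hands it to the embedded Activepieces builder in an iframe. The endpoint path, response shape, and builder origin below are illustrative assumptions; only the existence of an embed-token endpoint and an iframe-based builder is stated in this commit.

// Hypothetical sketch: endpoint path, response shape, and iframe origin
// are assumed for illustration and are not shown in this excerpt.
async function mountAutomationsEmbed(container: HTMLElement): Promise<void> {
  // Fetch a short-lived embed token from the SmoothSchedule backend.
  const res = await fetch('/api/integrations/activepieces/embed-token', {
    method: 'POST',
    credentials: 'include',
  });
  if (!res.ok) throw new Error(`Embed token request failed: ${res.status}`);
  const { token } = (await res.json()) as { token: string };

  // Hand the token to the embedded Activepieces workflow builder.
  const iframe = document.createElement('iframe');
  iframe.src = `https://automations.example.com/embed?jwt=${encodeURIComponent(token)}`;
  iframe.style.width = '100%';
  iframe.style.height = '100%';
  container.appendChild(iframe);
}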

View File

@@ -0,0 +1,58 @@
{
"AI-powered visual recognition": "AI-gestützte visuelle Erkennung",
"\nFollow these instructions to get your Clarifai (Personal Access Token) PAT Key:\n1. Go to the [security tab](https://clarifai.com/settings/security) in your Clarifai account and generate a new PAT token.\n2. Copy the PAT token and paste it in the PAT Key field.\n": "\nFolgen Sie diesen Anweisungen, um Ihren Clarifai (Persönliches Zugangs-Token) PAT-Schlüssel zu erhalten:\n1. Gehe zum [Sicherheits-Tab](https://clarifai.com/settings/security) in deinem Clarifai Account und erstelle einen neuen PAT Token.\n2. Kopieren Sie das PAT-Token und fügen Sie es in das PAT-Schlüsselfeld ein.\n",
"Ask LLM": "LLM fragen",
"Ask IGM": "IGM fragen",
"Classify Images or Videos": "Bilder oder Videos klassifizieren",
"Classify Text": "Text klassifizieren",
"Image to Text": "Bild zu Text",
"Text to Text": "Text zu Text",
"Audio to Text": "Audio zu Text",
"Add Inputs": "Eingaben hinzufügen",
"Run Workflow": "Workflow ausführen",
"Custom API Call": "Eigener API-Aufruf",
"Send a prompt to any large language models (LLM) supported by clarifai.": "Senden Sie einen Prompt an alle großen Sprachmodelle (LLM), die von clearfai.",
"Generate an image using the Image generating models supported by clarifai.": "Generieren Sie ein Bild mit den Bild-Generierungsmodellen, die von clarifai.",
"Call an visual classifier AI model to recognize concepts": "KI-Modell für visuelle Klassifizierung aufrufen, um Konzepte zu erkennen",
"Call a text classifier AI model to recognize concepts": "KI-Modell für Text-Klassifizierung aufrufen, um Konzepte zu erkennen",
"Call an image to text AI model": "Bild zum Text-KI-Modell aufrufen",
"Call a text to text AI model": "Text zum Text des KI-Modells aufrufen",
"Call a audio to text AI model": "Audio zum Text-KI-Modell aufrufen",
"Add inputs to your clarifai app": "Eingaben zu Ihrer Klarstellen-App hinzufügen",
"Call a Clarifai workflow": "Clarifai Workflow aufrufen",
"Make a custom API call to a specific endpoint": "Einen benutzerdefinierten API-Aufruf an einen bestimmten Endpunkt machen",
"Model Id": "Modell-Id",
"Prompt": "Prompt",
"Model URL": "Modell-URL",
"Input URL or bytes": "Input URL or bytes",
"Input Text": "Input Text",
"User ID": "Benutzer-ID",
"App ID": "App-ID",
"Workflow URL": "Workflow-URL",
"Method": "Methode",
"Headers": "Kopfzeilen",
"Query Parameters": "Abfrageparameter",
"Body": "Körper",
"Response is Binary ?": "Antwort ist binär?",
"No Error on Failure": "Kein Fehler bei Fehler",
"Timeout (in seconds)": "Timeout (in Sekunden)",
"The model which will generate the response. If the model is not listed, get the model id from the clarifai website. Example : 'GPT-4' you can get the model id from here [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)": "Das Modell, das die Antwort generiert. Wenn das Modell nicht aufgelistet ist, erhalten Sie die Modell-ID von der clarifai Website. Beispiel : 'GPT-4' du kannst die Model ID von hier [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4) erhalten",
"The prompt to send to the model.": "Die Eingabeaufforderung, die an das Modell gesendet wird.",
"URL of the Clarifai model. For example https://clarifai.com/clarifai/main/models/general-image-recognition OR a specific version such as https://clarifai.com/clarifai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. Find more models at https://clarifai.com/explore/models": "URL des Clarifai-Modells. Zum Beispiel https://clarifai.com/clarifai/main/models/general-image-recognition ODER eine bestimmte Version wie https://clarifai.com/main/models/general-image-recognition/versions/a7f35c01e0642fda5cf400f543e7c40. Weitere Modelle finden Sie unter https://clarifai.com/explore/models",
"URL or base64 bytes of the image or video to classify": "URL oder Base64 Bytes des Bildes oder Videos zur Klassifizierung",
"Text to classify": "Zu klassifizierender Text",
"URL or base64 bytes of the image to classify": "URL oder Base64 Bytes des zu klassifizierenden Bildes",
"URL or base64 bytes of the audio to classify": "URL oder Base64 Bytes des zu klassifizierenden Audios",
"User ID to associate with the input": "Benutzer-ID zur Verknüpfung mit der Eingabe",
"App ID (project) to associate with the input": "App-ID (Projekt), die mit der Eingabe verknüpft werden soll",
"URL of the Clarifai workflow. For example https://clarifai.com/clarifai/main/workflows/Demographics. Find more workflows at https://clarifai.com/explore/workflows": "URL des Clarifai-Workflows. Zum Beispiel https://clarifai.com/clarifai/main/workflows/Demographics. Weitere Workflows finden Sie unter https://clarifai.com/explore/workflows",
"URL or base64 bytes of the incoming image/video/text/audio to run through the workflow. Note: must be appropriate first step of the workflow to handle that data type.": "URL oder Base64 Bytes des eingehenden Bilds/Video/Text/Audio, um den Workflow zu durchlaufen. Hinweis: must be appropriate first step of the workflow to handle that data typ.",
"Authorization headers are injected automatically from your connection.": "Autorisierungs-Header werden automatisch von Ihrer Verbindung injiziert.",
"Enable for files like PDFs, images, etc..": "Aktivieren für Dateien wie PDFs, Bilder, etc..",
"GET": "ERHALTEN",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "LÖSCHEN",
"HEAD": "HEAD"
}

View File

@@ -0,0 +1,58 @@
{
"AI-powered visual recognition": "Reconocimiento visual impulsado por AIs",
"\nFollow these instructions to get your Clarifai (Personal Access Token) PAT Key:\n1. Go to the [security tab](https://clarifai.com/settings/security) in your Clarifai account and generate a new PAT token.\n2. Copy the PAT token and paste it in the PAT Key field.\n": "\nSigue estas instrucciones para obtener tu clave PAT Clarifai (token de acceso personal) de Clarifai:\n1. Ve a la [pestaña de seguridad](https://clarifai.com/settings/security) en tu cuenta de Clarifai y genera un nuevo token PAT.\n2. Copia el token PAT y pégalo en el campo Clave PAT.\n",
"Ask LLM": "Preguntar LLM",
"Ask IGM": "Preguntar IGM",
"Classify Images or Videos": "Clasificar imágenes o vídeos",
"Classify Text": "Clasificar texto",
"Image to Text": "Imagen a texto",
"Text to Text": "Texto a texto",
"Audio to Text": "Audio a texto",
"Add Inputs": "Añadir entradas",
"Run Workflow": "Ejecutar workkflow",
"Custom API Call": "Llamada API personalizada",
"Send a prompt to any large language models (LLM) supported by clarifai.": "Envíe un mensaje a cualquier modelo de idioma de gran tamaño (LLM) apoyado por aclarai.",
"Generate an image using the Image generating models supported by clarifai.": "Generar una imagen usando los modelos de generación de imágenes soportados por aclarai.",
"Call an visual classifier AI model to recognize concepts": "Llamar a un modelo de IA clasificador visual para reconocer conceptos",
"Call a text classifier AI model to recognize concepts": "Llamar a un modelo de IA clasificador de texto para reconocer conceptos",
"Call an image to text AI model": "Llamar a una imagen al modelo AI de texto",
"Call a text to text AI model": "Llamar a un texto al modelo de IA texto",
"Call a audio to text AI model": "Llamar a un audio al modelo de IA de texto",
"Add inputs to your clarifai app": "Añadir entradas a tu aplicación clarifai",
"Call a Clarifai workflow": "Llamar a un flujo de trabajo Clarifai",
"Make a custom API call to a specific endpoint": "Hacer una llamada API personalizada a un extremo específico",
"Model Id": "Id del Modelo",
"Prompt": "Petición",
"Model URL": "URL del modelo",
"Input URL or bytes": "Input URL or bytes",
"Input Text": "Input Text",
"User ID": "ID Usuario",
"App ID": "ID App",
"Workflow URL": "URL de flujo de trabajo",
"Method": "Método",
"Headers": "Encabezados",
"Query Parameters": "Parámetros de consulta",
"Body": "Cuerpo",
"Response is Binary ?": "¿Respuesta es binaria?",
"No Error on Failure": "No hay ningún error en fallo",
"Timeout (in seconds)": "Tiempo de espera (en segundos)",
"The model which will generate the response. If the model is not listed, get the model id from the clarifai website. Example : 'GPT-4' you can get the model id from here [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)": "El modelo que generará la respuesta. Si el modelo no aparece en la lista, obtenga el id del modelo del sitio web clarifai. Ejemplo: 'GPT-4' puedes obtener el id del modelo desde aquí [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)",
"The prompt to send to the model.": "El prompt para enviar al modelo.",
"URL of the Clarifai model. For example https://clarifai.com/clarifai/main/models/general-image-recognition OR a specific version such as https://clarifai.com/clarifai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. Find more models at https://clarifai.com/explore/models": "URL del modelo Clarifai. Por ejemplo https://clarifai.com/clarifai/main/models/general-image-recognition O una versión específica como https://clarifai.com/clarifai/main/models/general-image-cometion/versions/aa7f35c01e0642fda5cf400f543e7c40. Encuentra más modelos en https://clarifai.com/explore/models",
"URL or base64 bytes of the image or video to classify": "URL o bytes base64 de la imagen o vídeo a clasificar",
"Text to classify": "Texto a clasificar",
"URL or base64 bytes of the image to classify": "URL o bytes base64 de la imagen a clasificar",
"URL or base64 bytes of the audio to classify": "URL o bytes base64 del audio a clasificar",
"User ID to associate with the input": "ID de usuario para asociar con la entrada",
"App ID (project) to associate with the input": "ID de la aplicación (proyecto) para asociar con la entrada",
"URL of the Clarifai workflow. For example https://clarifai.com/clarifai/main/workflows/Demographics. Find more workflows at https://clarifai.com/explore/workflows": "URL del flujo de trabajo Clarifai. Por ejemplo https://clarifai.com/clarifai/main/workflows/Demographics. Encuentre más flujos de trabajo en https://clarifai.com/explore/workflows",
"URL or base64 bytes of the incoming image/video/text/audio to run through the workflow. Note: must be appropriate first step of the workflow to handle that data type.": "URL o bytes base64 de la imagen/video/texto/audio entrante para correr a través del flujo de trabajo. Nota: debe ser el primer paso apropiado del flujo de trabajo para manejar ese tipo de datos.",
"Authorization headers are injected automatically from your connection.": "Las cabeceras de autorización se inyectan automáticamente desde tu conexión.",
"Enable for files like PDFs, images, etc..": "Activar para archivos como PDFs, imágenes, etc.",
"GET": "RECOGER",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "BORRAR",
"HEAD": "LIMPIO"
}

View File

@@ -0,0 +1,58 @@
{
"AI-powered visual recognition": "Reconnaissance visuelle par IA",
"\nFollow these instructions to get your Clarifai (Personal Access Token) PAT Key:\n1. Go to the [security tab](https://clarifai.com/settings/security) in your Clarifai account and generate a new PAT token.\n2. Copy the PAT token and paste it in the PAT Key field.\n": "\nFollow these instructions to get your Clarifai (Personal Access Token) PAT Key:\n1. Go to the [security tab](https://clarifai.com/settings/security) in your Clarifai account and generate a new PAT token.\n2. Copy the PAT token and paste it in the PAT Key field.\n",
"Ask LLM": "Demander à LLM",
"Ask IGM": "Demander IGM",
"Classify Images or Videos": "Classer Images ou Vidéos",
"Classify Text": "Classer le texte",
"Image to Text": "Image au texte",
"Text to Text": "Texte au texte",
"Audio to Text": "Audio au texte",
"Add Inputs": "Ajouter des entrées",
"Run Workflow": "Exécuter le flux de travail",
"Custom API Call": "Appel API personnalisé",
"Send a prompt to any large language models (LLM) supported by clarifai.": "Envoyez une invite à tous les grands modèles de langue (LLM) supportés par clarifai.",
"Generate an image using the Image generating models supported by clarifai.": "Générer une image en utilisant les modèles de génération d'image pris en charge par clarifai.",
"Call an visual classifier AI model to recognize concepts": "Appeler un modèle IA de classeur visuel pour reconnaître les concepts",
"Call a text classifier AI model to recognize concepts": "Appeler un modèle IA de classificateur de texte pour reconnaître les concepts",
"Call an image to text AI model": "Appeler une image au modèle de l'IA",
"Call a text to text AI model": "Appeler un texte vers le modèle de l'IA",
"Call a audio to text AI model": "Appeler un fichier audio vers le modèle de l'IA",
"Add inputs to your clarifai app": "Ajouter des entrées à votre application clarifai",
"Call a Clarifai workflow": "Appeler un workflow Clarifai",
"Make a custom API call to a specific endpoint": "Passez un appel API personnalisé à un point de terminaison spécifique",
"Model Id": "Identifiant du modèle",
"Prompt": "Prompt",
"Model URL": "URL du modèle",
"Input URL or bytes": "Input URL or bytes",
"Input Text": "Input Text",
"User ID": "Identifiant de l'utilisateur",
"App ID": "ID de l'application",
"Workflow URL": "URL du workflow",
"Method": "Méthode",
"Headers": "En-têtes",
"Query Parameters": "Paramètres de requête",
"Body": "Corps",
"Response is Binary ?": "La réponse est Binaire ?",
"No Error on Failure": "Aucune erreur en cas d'échec",
"Timeout (in seconds)": "Délai d'attente (en secondes)",
"The model which will generate the response. If the model is not listed, get the model id from the clarifai website. Example : 'GPT-4' you can get the model id from here [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)": "Le modèle qui va générer la réponse. Si le modèle n'est pas listé, obtenir l'id du modèle à partir du site Web de clarifai. Exemple : 'GPT-4' vous pouvez obtenir l'id du modèle ici [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)",
"The prompt to send to the model.": "L'invite à envoyer au modèle.",
"URL of the Clarifai model. For example https://clarifai.com/clarifai/main/models/general-image-recognition OR a specific version such as https://clarifai.com/clarifai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. Find more models at https://clarifai.com/explore/models": "URL du modèle Clarifai. Par exemple https://clarifai.com/clarifai/main/models/general-image-recognition OU une version spécifique telle que https://clarifai.com/clarifai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. Trouver d'autres modèles sur https://clarifai.com/explore/models",
"URL or base64 bytes of the image or video to classify": "URL ou octets base64 de l'image ou de la vidéo à classer",
"Text to classify": "Texte à classer",
"URL or base64 bytes of the image to classify": "URL ou octets base64 de l'image à classer",
"URL or base64 bytes of the audio to classify": "URL ou octets base64 de l'audio à classer",
"User ID to associate with the input": "ID de l'utilisateur à associer avec l'entrée",
"App ID (project) to associate with the input": "ID de l'application (projet) à associer à l'entrée",
"URL of the Clarifai workflow. For example https://clarifai.com/clarifai/main/workflows/Demographics. Find more workflows at https://clarifai.com/explore/workflows": "URL du flux de travail Clarifai. Par exemple https://clarifai.com/clarifai/main/workflows/Demographics. Trouvez plus de workflows sur https://clarifai.com/explore/workflows",
"URL or base64 bytes of the incoming image/video/text/audio to run through the workflow. Note: must be appropriate first step of the workflow to handle that data type.": "URL ou base64 octets de l'image entrante/vidéo/texte/audio à exécuter à travers le workflow. Note: doit être la première étape appropriée du workflow pour gérer ce type de données.",
"Authorization headers are injected automatically from your connection.": "Les en-têtes d'autorisation sont injectés automatiquement à partir de votre connexion.",
"Enable for files like PDFs, images, etc..": "Activer pour les fichiers comme les PDFs, les images, etc.",
"GET": "OBTENIR",
"POST": "POSTER",
"PATCH": "PATCH",
"PUT": "EFFACER",
"DELETE": "SUPPRIMER",
"HEAD": "TÊTE"
}

View File

@@ -0,0 +1,58 @@
{
"AI-powered visual recognition": "AIを搭載した視覚認識",
"\nFollow these instructions to get your Clarifai (Personal Access Token) PAT Key:\n1. Go to the [security tab](https://clarifai.com/settings/security) in your Clarifai account and generate a new PAT token.\n2. Copy the PAT token and paste it in the PAT Key field.\n": "\nFollow these instructions to get your Clarifai (Personal Access Token) PAT Key:\n1. Go to the [security tab](https://clarifai.com/settings/security) in your Clarifai account and generate a new PAT token.\n2. Copy the PAT token and paste it in the PAT Key field.\n",
"Ask LLM": "LLMに聞く",
"Ask IGM": "IGMに聞く",
"Classify Images or Videos": "画像や動画を分類",
"Classify Text": "テキストを分類",
"Image to Text": "画像からテキストへ",
"Text to Text": "テキストからテキストへ",
"Audio to Text": "音声からテキストへ",
"Add Inputs": "入力を追加",
"Run Workflow": "ワークフローを実行",
"Custom API Call": "カスタムAPI通話",
"Send a prompt to any large language models (LLM) supported by clarifai.": "クラライファイでサポートされている任意の大きな言語モデル (LLM) にプロンプトを送信します。",
"Generate an image using the Image generating models supported by clarifai.": "明確化によってサポートされている画像生成モデルを使用して画像を生成します。",
"Call an visual classifier AI model to recognize concepts": "コンセプトを認識するためにビジュアル分類器AIモデルを呼び出します。",
"Call a text classifier AI model to recognize concepts": "概念を認識するためにテキスト分類器AIモデルを呼び出します。",
"Call an image to text AI model": "テキストAIモデルに画像を呼び出す",
"Call a text to text AI model": "テキストAIモデルにテキストを呼び出す",
"Call a audio to text AI model": "テキストAIモデルにオーディオを呼び出す",
"Add inputs to your clarifai app": "クラリファイアプリに入力を追加",
"Call a Clarifai workflow": "クラリファイのワークフローを呼び出す",
"Make a custom API call to a specific endpoint": "特定のエンドポイントへのカスタム API コールを実行します。",
"Model Id": "モデル ID",
"Prompt": "Prompt",
"Model URL": "モデル URL",
"Input URL or bytes": "Input URL or bytes",
"Input Text": "Input Text",
"User ID": "ユーザー ID",
"App ID": "アプリ ID",
"Workflow URL": "ワークフロー URL",
"Method": "方法",
"Headers": "ヘッダー",
"Query Parameters": "クエリパラメータ",
"Body": "本文",
"Response is Binary ?": "応答はバイナリですか?",
"No Error on Failure": "失敗時にエラーはありません",
"Timeout (in seconds)": "タイムアウト(秒)",
"The model which will generate the response. If the model is not listed, get the model id from the clarifai website. Example : 'GPT-4' you can get the model id from here [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)": "レスポンスを生成するモデル。モデルが表示されていない場合は、Clarfaiのウェブサイトからモデル ID を取得します。 例 : 'GPT-4' ここからモデル ID を取得できます [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clearfai.com/openai/chat-completion/models/GPT-4)",
"The prompt to send to the model.": "モデルに送信するプロンプト。",
"URL of the Clarifai model. For example https://clarifai.com/clarifai/main/models/general-image-recognition OR a specific version such as https://clarifai.com/clarifai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. Find more models at https://clarifai.com/explore/models": "ClarifaiモデルのURL。例えば、https://clearfai.com/clearfai/main/models/general-image-recognition または https://clearfai.com/clearfai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. https://clearfai.com/explore/models",
"URL or base64 bytes of the image or video to classify": "URLまたは画像またはビデオのbase64バイトを分類する",
"Text to classify": "分類するテキスト",
"URL or base64 bytes of the image to classify": "URLまたは画像のbase64バイトを分類する",
"URL or base64 bytes of the audio to classify": "URLまたはオーディオのbase64バイトを分類する",
"User ID to associate with the input": "入力に関連付けるユーザー ID",
"App ID (project) to associate with the input": "入力に関連付けるアプリ ID (プロジェクト)",
"URL of the Clarifai workflow. For example https://clarifai.com/clarifai/main/workflows/Demographics. Find more workflows at https://clarifai.com/explore/workflows": "ClarifaiワークフローのURL。例えば https://clearfai.com/clearfai/main/workflows/Demographics。詳細はhttps://clearfai.com/explore/workflowsをご覧ください。",
"URL or base64 bytes of the incoming image/video/text/audio to run through the workflow. Note: must be appropriate first step of the workflow to handle that data type.": "ワークフローを通して実行する画像/ビデオ/オーディオのURLまたはbase64バイト。 注: そのデータ型を処理するためには、ワークフローの適切な最初のステップである必要があります。",
"Authorization headers are injected automatically from your connection.": "認証ヘッダは接続から自動的に注入されます。",
"Enable for files like PDFs, images, etc..": "PDF、画像などのファイルを有効にします。",
"GET": "取得",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "削除",
"HEAD": "頭"
}

View File

@@ -0,0 +1,58 @@
{
"AI-powered visual recognition": "Visuele herkenning op AI-aangedreven",
"\nFollow these instructions to get your Clarifai (Personal Access Token) PAT Key:\n1. Go to the [security tab](https://clarifai.com/settings/security) in your Clarifai account and generate a new PAT token.\n2. Copy the PAT token and paste it in the PAT Key field.\n": "\nVolg deze instructies om je Clarifai (Personale Toegang Token) PAT Key:\n1 te krijgen. Ga naar het [security tab](https://clarifai.com/settings/security) in je Clarifai account en genereer een nieuwe PAT token.\n2. Kopieer de PAT token en plak het in het PAT Key veld.\n",
"Ask LLM": "Vraag LLLM",
"Ask IGM": "IGM vragen",
"Classify Images or Videos": "Afbeeldingen of video's indelen",
"Classify Text": "Tekst indelen",
"Image to Text": "Afbeelding naar tekst",
"Text to Text": "Tekst naar tekst",
"Audio to Text": "Audio naar tekst",
"Add Inputs": "Invoer toevoegen",
"Run Workflow": "Workflow uitvoeren",
"Custom API Call": "Custom API Call",
"Send a prompt to any large language models (LLM) supported by clarifai.": "Stuur een vraag naar een groot taalmodel (LLM), ondersteund door clarifaire.",
"Generate an image using the Image generating models supported by clarifai.": "Genereer een afbeelding met behulp van de afbeelding genereren modellen ondersteund door clarifai.",
"Call an visual classifier AI model to recognize concepts": "Roep een visuele classifier AI model aan om concepten te herkennen",
"Call a text classifier AI model to recognize concepts": "Bel een tekstclassifier AI model aan om concepten te herkennen",
"Call an image to text AI model": "Bel een afbeelding aan naar tekst AI model",
"Call a text to text AI model": "Bel een tekst naar tekst AI model",
"Call a audio to text AI model": "Bel een audio naar tekst AI model",
"Add inputs to your clarifai app": "Voeg inputs toe aan je clarifai app",
"Call a Clarifai workflow": "Bel een Clarifai workflow",
"Make a custom API call to a specific endpoint": "Maak een aangepaste API call naar een specifiek eindpunt",
"Model Id": "Model ID",
"Prompt": "Prompt",
"Model URL": "Model URL",
"Input URL or bytes": "Input URL or bytes",
"Input Text": "Input Text",
"User ID": "Gebruiker ID",
"App ID": "App ID",
"Workflow URL": "Workflow URL",
"Method": "Methode",
"Headers": "Kopteksten",
"Query Parameters": "Query parameters",
"Body": "Lichaam",
"Response is Binary ?": "Antwoord is binair?",
"No Error on Failure": "Geen fout bij fout",
"Timeout (in seconds)": "Time-out (in seconden)",
"The model which will generate the response. If the model is not listed, get the model id from the clarifai website. Example : 'GPT-4' you can get the model id from here [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)": "Het model dat het antwoord zal genereren. Als het model niet wordt weergegeven, krijg het model-id van de clarifai website. Voorbeeld : 'GPT-4' waar je het model-id van hier kunt krijgen [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)",
"The prompt to send to the model.": "De prompt om naar het model te sturen.",
"URL of the Clarifai model. For example https://clarifai.com/clarifai/main/models/general-image-recognition OR a specific version such as https://clarifai.com/clarifai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. Find more models at https://clarifai.com/explore/models": "URL van het Clarifai model. Bijvoorbeeld https://clarifai.com/clarifai/main/models/general-image-herkenning OR een specifieke versie zoals https://clarifai.com/clarifai/main/models/general-image-herkenntion/versions/aa7f35c0642fda5cf400f543e7c40. Vind meer modellen op https://clarifai.com/explore/models",
"URL or base64 bytes of the image or video to classify": "URL of base64 bytes van de afbeelding of video om te classificeren",
"Text to classify": "Tekst om te classificeren",
"URL or base64 bytes of the image to classify": "URL of base64 bytes van de afbeelding om te classificeren",
"URL or base64 bytes of the audio to classify": "URL of base64 bytes van de audio te classificeren",
"User ID to associate with the input": "Gebruikers-ID om te koppelen aan de invoer",
"App ID (project) to associate with the input": "App ID (project) om te koppelen aan de invoer",
"URL of the Clarifai workflow. For example https://clarifai.com/clarifai/main/workflows/Demographics. Find more workflows at https://clarifai.com/explore/workflows": "URL van de Clarifai workflow. Bijvoorbeeld https://clarifai.com/clarifai/main/workflows/Demographics. Vind meer workflows op https://clarifai.com/explore/workflows",
"URL or base64 bytes of the incoming image/video/text/audio to run through the workflow. Note: must be appropriate first step of the workflow to handle that data type.": "URL of base64 bytes van de inkomende afbeelding/video/text/audio uitvoeren door de workflow. Opmerking: moet de juiste eerste stap van de workflow zijn om dat gegevenstype te verwerken.",
"Authorization headers are injected automatically from your connection.": "Autorisatie headers worden automatisch geïnjecteerd vanuit uw verbinding.",
"Enable for files like PDFs, images, etc..": "Inschakelen voor bestanden zoals PDF's, afbeeldingen etc..",
"GET": "KRIJG",
"POST": "POSTE",
"PATCH": "BEKIJK",
"PUT": "PUT",
"DELETE": "VERWIJDEREN",
"HEAD": "HOOFD"
}

View File

@@ -0,0 +1,58 @@
{
"AI-powered visual recognition": "Reconhecimento visual fortalecido com IA",
"\nFollow these instructions to get your Clarifai (Personal Access Token) PAT Key:\n1. Go to the [security tab](https://clarifai.com/settings/security) in your Clarifai account and generate a new PAT token.\n2. Copy the PAT token and paste it in the PAT Key field.\n": "\nSiga estas instruções para obter sua Chave PAT do Clarifai (Token de Acesso Pessoal:\n1. Vá para a [aba de segurança](https://clarifai.com/settings/security) na sua conta Clarifai e gere um novo token PAT.\n2. Copie o token PAT e cole-o no campo Chave do PAT.\n",
"Ask LLM": "Perguntar LLM",
"Ask IGM": "Pergunte ao IGM",
"Classify Images or Videos": "Classificar Imagens ou Vídeos",
"Classify Text": "Classificar Texto",
"Image to Text": "Imagem para texto",
"Text to Text": "Texto para texto",
"Audio to Text": "Áudio para texto",
"Add Inputs": "Adicionar entradas",
"Run Workflow": "Executar Workflow",
"Custom API Call": "Chamada de API personalizada",
"Send a prompt to any large language models (LLM) supported by clarifai.": "Envie um prompt para quaisquer modelos de idioma grandes (LLM) suportados por clarificação.",
"Generate an image using the Image generating models supported by clarifai.": "Gerar uma imagem usando a imagem gerando modelos suportados pela clarificação.",
"Call an visual classifier AI model to recognize concepts": "Chame um modelo de IA visual para reconhecer conceitos",
"Call a text classifier AI model to recognize concepts": "Chame um modelo classificador de texto de IA para reconhecer conceitos",
"Call an image to text AI model": "Chamar uma imagem ao modelo de IA de texto",
"Call a text to text AI model": "Chame um texto para o modelo de IA de texto",
"Call a audio to text AI model": "Ligue um áudio para o modelo de IA de texto",
"Add inputs to your clarifai app": "Adicione entradas ao seu aplicativo clarificado",
"Call a Clarifai workflow": "Chamar um fluxo de trabalho do Clarifai",
"Make a custom API call to a specific endpoint": "Faça uma chamada de API personalizada para um ponto de extremidade específico",
"Model Id": "Id do Modelo",
"Prompt": "Aviso",
"Model URL": "URL do modelo",
"Input URL or bytes": "Input URL or bytes",
"Input Text": "Input Text",
"User ID": "ID de usuário",
"App ID": "ID do aplicativo",
"Workflow URL": "URL do workflow",
"Method": "Método",
"Headers": "Cabeçalhos",
"Query Parameters": "Parâmetros da consulta",
"Body": "Conteúdo",
"Response is Binary ?": "A resposta é binária ?",
"No Error on Failure": "Nenhum erro no Failure",
"Timeout (in seconds)": "Tempo limite (em segundos)",
"The model which will generate the response. If the model is not listed, get the model id from the clarifai website. Example : 'GPT-4' you can get the model id from here [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)": "O modelo que irá gerar a resposta. Se o modelo não estiver listado, obtenha o ID do modelo no site clarifado. Exemplo: 'GPT-4' você pode obter o modelo de ajuda a partir daqui [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)",
"The prompt to send to the model.": "O prompt para enviar para o modelo.",
"URL of the Clarifai model. For example https://clarifai.com/clarifai/main/models/general-image-recognition OR a specific version such as https://clarifai.com/clarifai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. Find more models at https://clarifai.com/explore/models": "URL do modelo Clarifai. Por exemplo, https://clarifai.com/clarifai/main/models/general-image-recognition OU uma versão específica como https://clarifai.com/clarifai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. Encontre mais modelos em https://clarifai.com/explore/models",
"URL or base64 bytes of the image or video to classify": "URL ou bytes base64 da imagem ou vídeo para classificar",
"Text to classify": "Texto para classificar",
"URL or base64 bytes of the image to classify": "URL ou bytes base64 da imagem a classificar",
"URL or base64 bytes of the audio to classify": "URL ou bytes base64 do áudio para classificar",
"User ID to associate with the input": "ID do usuário para associar com a entrada",
"App ID (project) to associate with the input": "App ID (projeto) para associar com a entrada",
"URL of the Clarifai workflow. For example https://clarifai.com/clarifai/main/workflows/Demographics. Find more workflows at https://clarifai.com/explore/workflows": "URL do fluxo de trabalho do Clarifai. Por exemplo, https://clarifai.com/clarifai/main/workflows/Demographics. Encontre mais fluxos de trabalho em https://clarifai.com/explore/workflows",
"URL or base64 bytes of the incoming image/video/text/audio to run through the workflow. Note: must be appropriate first step of the workflow to handle that data type.": "URL ou base64 bytes de entrada imagem/vídeo/text/áudio para executar o fluxo de trabalho. Nota: deve ser a primeira etapa apropriada do fluxo de trabalho para lidar com esse tipo de dados.",
"Authorization headers are injected automatically from your connection.": "Os cabeçalhos de autorização são inseridos automaticamente a partir da sua conexão.",
"Enable for files like PDFs, images, etc..": "Habilitar para arquivos como PDFs, imagens, etc..",
"GET": "OBTER",
"POST": "POSTAR",
"PATCH": "COMPRAR",
"PUT": "COLOCAR",
"DELETE": "EXCLUIR",
"HEAD": "CABEÇA"
}

View File

@@ -0,0 +1,57 @@
{
"Clarifai": "Clarifai",
"AI-powered visual recognition": "Визуальное распознавание на основе AI",
"\nFollow these instructions to get your Clarifai (Personal Access Token) PAT Key:\n1. Go to the [security tab](https://clarifai.com/settings/security) in your Clarifai account and generate a new PAT token.\n2. Copy the PAT token and paste it in the PAT Key field.\n": "\nFollow these instructions to get your Clarifai (Personal Access Token) PAT Key:\n1. Go to the [security tab](https://clarifai.com/settings/security) in your Clarifai account and generate a new PAT token.\n2. Copy the PAT token and paste it in the PAT Key field.\n",
"Ask LLM": "Спросить LLM",
"Ask IGM": "Спросить IGM",
"Classify Images or Videos": "Классифицировать изображения или видео",
"Classify Text": "Классифицировать текст",
"Image to Text": "Изображение к тексту",
"Text to Text": "Текст в текст",
"Audio to Text": "Аудио в текст",
"Add Inputs": "Добавить входы",
"Run Workflow": "Запуск рабочего процесса",
"Custom API Call": "Пользовательский вызов API",
"Send a prompt to any large language models (LLM) supported by clarifai.": "Отправить запрос на любые большие языковые модели (LLM), поддерживаемые разъяснениями.",
"Generate an image using the Image generating models supported by clarifai.": "Генерировать изображение с помощью моделей генерации изображений, поддерживаемых ясными свойствами.",
"Call an visual classifier AI model to recognize concepts": "Призвать визуальную классификатор AI модель, чтобы распознать концепции",
"Call a text classifier AI model to recognize concepts": "Вызвать модель AI классификатора текстов для распознавания концепций",
"Call an image to text AI model": "Вызвать изображение в текстовую модель ИИ",
"Call a text to text AI model": "Вызов текста для модели ИИ",
"Call a audio to text AI model": "Вызвать аудио для текстовой модели ИИ",
"Add inputs to your clarifai app": "Добавить информацию в ваше приложение для уточнения",
"Call a Clarifai workflow": "Вызвать рабочий процесс Clarifai",
"Make a custom API call to a specific endpoint": "Сделать пользовательский API вызов к определенной конечной точке",
"Model Id": "ID модели",
"Prompt": "Prompt",
"Model URL": "URL модели",
"Input URL or bytes": "Input URL or bytes",
"Input Text": "Input Text",
"User ID": "ID пользователя",
"App ID": "ID приложения",
"Workflow URL": "URL рабочего процесса",
"Method": "Метод",
"Headers": "Заголовки",
"Query Parameters": "Параметры запроса",
"Body": "Тело",
"No Error on Failure": "Нет ошибок при ошибке",
"Timeout (in seconds)": "Таймаут (в секундах)",
"The model which will generate the response. If the model is not listed, get the model id from the clarifai website. Example : 'GPT-4' you can get the model id from here [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)": "Модель, которая будет генерировать ответ. Если модель не указана, получить идентификатор модели с сайта уточнения. Пример: 'GPT-4' вы можете получить идентификатор модели отсюда [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)",
"The prompt to send to the model.": "Запрос на отправку модели.",
"URL of the Clarifai model. For example https://clarifai.com/clarifai/main/models/general-image-recognition OR a specific version such as https://clarifai.com/clarifai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. Find more models at https://clarifai.com/explore/models": "URL модели Clarifai. Например, https://clarifai.com/clarifai/main/models/general-image-recognition OR a specific version such as https://clarifai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. Найдите больше моделей на https://clarifai.com/explore/models",
"URL or base64 bytes of the image or video to classify": "URL или base64 байт изображения или видео для классификации",
"Text to classify": "Текст для классификации",
"URL or base64 bytes of the image to classify": "URL или base64 байт изображения для классификации",
"URL or base64 bytes of the audio to classify": "URL или base64 байт аудио для классификации",
"User ID to associate with the input": "ID пользователя для привязки к вводу",
"App ID (project) to associate with the input": "App ID (проект), связанный с вводом",
"URL of the Clarifai workflow. For example https://clarifai.com/clarifai/main/workflows/Demographics. Find more workflows at https://clarifai.com/explore/workflows": "URL рабочего процесса Clarifai. Например https://clarifai.com/clarifai/main/workflows/Demographics. Найти больше рабочих потоков на https://clarifai.com/explore/workflows",
"URL or base64 bytes of the incoming image/video/text/audio to run through the workflow. Note: must be appropriate first step of the workflow to handle that data type.": "URL или base64 байт входящего image/video/text/audio для запуска через рабочий процесс. Примечание: для обработки этого типа данных должен быть надлежащим первым шагом рабочего процесса.",
"Authorization headers are injected automatically from your connection.": "Заголовки авторизации включаются автоматически из вашего соединения.",
"GET": "ПОЛУЧИТЬ",
"POST": "ПОСТ",
"PATCH": "ПАТЧ",
"PUT": "ПОКУПИТЬ",
"DELETE": "УДАЛИТЬ",
"HEAD": "HEAD"
}

View File

@@ -0,0 +1,58 @@
{
"AI-powered visual recognition": "AI-powered visual recognition",
"\nFollow these instructions to get your Clarifai (Personal Access Token) PAT Key:\n1. Go to the [security tab](https://clarifai.com/settings/security) in your Clarifai account and generate a new PAT token.\n2. Copy the PAT token and paste it in the PAT Key field.\n": "\nFollow these instructions to get your Clarifai (Personal Access Token) PAT Key:\n1. Go to the [security tab](https://clarifai.com/settings/security) in your Clarifai account and generate a new PAT token.\n2. Copy the PAT token and paste it in the PAT Key field.\n",
"Ask LLM": "Ask LLM",
"Ask IGM": "Ask IGM",
"Classify Images or Videos": "Classify Images or Videos",
"Classify Text": "Classify Text",
"Image to Text": "Image to Text",
"Text to Text": "Text to Text",
"Audio to Text": "Audio to Text",
"Add Inputs": "Add Inputs",
"Run Workflow": "Run Workflow",
"Custom API Call": "Custom API Call",
"Send a prompt to any large language models (LLM) supported by clarifai.": "Send a prompt to any large language models (LLM) supported by clarifai.",
"Generate an image using the Image generating models supported by clarifai.": "Generate an image using the Image generating models supported by clarifai.",
"Call an visual classifier AI model to recognize concepts": "Call an visual classifier AI model to recognize concepts",
"Call a text classifier AI model to recognize concepts": "Call a text classifier AI model to recognize concepts",
"Call an image to text AI model": "Call an image to text AI model",
"Call a text to text AI model": "Call a text to text AI model",
"Call a audio to text AI model": "Call a audio to text AI model",
"Add inputs to your clarifai app": "Add inputs to your clarifai app",
"Call a Clarifai workflow": "Call a Clarifai workflow",
"Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint",
"Model Id": "Model Id",
"Prompt": "Prompt",
"Model URL": "Model URL",
"Input URL or bytes": "Input URL or bytes",
"Input Text": "Input Text",
"User ID": "User ID",
"App ID": "App ID",
"Workflow URL": "Workflow URL",
"Method": "Method",
"Headers": "Headers",
"Query Parameters": "Query Parameters",
"Body": "Body",
"Response is Binary ?": "Response is Binary ?",
"No Error on Failure": "No Error on Failure",
"Timeout (in seconds)": "Timeout (in seconds)",
"The model which will generate the response. If the model is not listed, get the model id from the clarifai website. Example : 'GPT-4' you can get the model id from here [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)": "The model which will generate the response. If the model is not listed, get the model id from the clarifai website. Example : 'GPT-4' you can get the model id from here [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)",
"The prompt to send to the model.": "The prompt to send to the model.",
"URL of the Clarifai model. For example https://clarifai.com/clarifai/main/models/general-image-recognition OR a specific version such as https://clarifai.com/clarifai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. Find more models at https://clarifai.com/explore/models": "URL of the Clarifai model. For example https://clarifai.com/clarifai/main/models/general-image-recognition OR a specific version such as https://clarifai.com/clarifai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. Find more models at https://clarifai.com/explore/models",
"URL or base64 bytes of the image or video to classify": "URL or base64 bytes of the image or video to classify",
"Text to classify": "Text to classify",
"URL or base64 bytes of the image to classify": "URL or base64 bytes of the image to classify",
"URL or base64 bytes of the audio to classify": "URL or base64 bytes of the audio to classify",
"User ID to associate with the input": "User ID to associate with the input",
"App ID (project) to associate with the input": "App ID (project) to associate with the input",
"URL of the Clarifai workflow. For example https://clarifai.com/clarifai/main/workflows/Demographics. Find more workflows at https://clarifai.com/explore/workflows": "URL of the Clarifai workflow. For example https://clarifai.com/clarifai/main/workflows/Demographics. Find more workflows at https://clarifai.com/explore/workflows",
"URL or base64 bytes of the incoming image/video/text/audio to run through the workflow. Note: must be appropriate first step of the workflow to handle that data type.": "URL or base64 bytes of the incoming image/video/text/audio to run through the workflow. Note: must be appropriate first step of the workflow to handle that data type.",
"Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.",
"Enable for files like PDFs, images, etc..": "Enable for files like PDFs, images, etc..",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}

View File

@@ -0,0 +1,57 @@
{
"Clarifai": "Clarifai",
"AI-powered visual recognition": "AI-powered visual recognition",
"\nFollow these instructions to get your Clarifai (Personal Access Token) PAT Key:\n1. Go to the [security tab](https://clarifai.com/settings/security) in your Clarifai account and generate a new PAT token.\n2. Copy the PAT token and paste it in the PAT Key field.\n": "\nFollow these instructions to get your Clarifai (Personal Access Token) PAT Key:\n1. Go to the [security tab](https://clarifai.com/settings/security) in your Clarifai account and generate a new PAT token.\n2. Copy the PAT token and paste it in the PAT Key field.\n",
"Ask LLM": "Ask LLM",
"Ask IGM": "Ask IGM",
"Classify Images or Videos": "Classify Images or Videos",
"Classify Text": "Classify Text",
"Image to Text": "Image to Text",
"Text to Text": "Text to Text",
"Audio to Text": "Audio to Text",
"Add Inputs": "Add Inputs",
"Run Workflow": "Run Workflow",
"Custom API Call": "Custom API Call",
"Send a prompt to any large language models (LLM) supported by clarifai.": "Send a prompt to any large language models (LLM) supported by clarifai.",
"Generate an image using the Image generating models supported by clarifai.": "Generate an image using the Image generating models supported by clarifai.",
"Call an visual classifier AI model to recognize concepts": "Call an visual classifier AI model to recognize concepts",
"Call a text classifier AI model to recognize concepts": "Call a text classifier AI model to recognize concepts",
"Call an image to text AI model": "Call an image to text AI model",
"Call a text to text AI model": "Call a text to text AI model",
"Call a audio to text AI model": "Call a audio to text AI model",
"Add inputs to your clarifai app": "Add inputs to your clarifai app",
"Call a Clarifai workflow": "Call a Clarifai workflow",
"Make a custom API call to a specific endpoint": "Make a custom API call to a specific endpoint",
"Model Id": "Model Id",
"Prompt": "Prompt",
"Model URL": "Model URL",
"Input URL or bytes": "Input URL or bytes",
"Input Text": "Input Text",
"User ID": "User ID",
"App ID": "App ID",
"Workflow URL": "Workflow URL",
"Method": "Method",
"Headers": "Headers",
"Query Parameters": "Query Parameters",
"Body": "Body",
"No Error on Failure": "No Error on Failure",
"Timeout (in seconds)": "Timeout (in seconds)",
"The model which will generate the response. If the model is not listed, get the model id from the clarifai website. Example : 'GPT-4' you can get the model id from here [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)": "The model which will generate the response. If the model is not listed, get the model id from the clarifai website. Example : 'GPT-4' you can get the model id from here [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)",
"The prompt to send to the model.": "The prompt to send to the model.",
"URL of the Clarifai model. For example https://clarifai.com/clarifai/main/models/general-image-recognition OR a specific version such as https://clarifai.com/clarifai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. Find more models at https://clarifai.com/explore/models": "URL of the Clarifai model. For example https://clarifai.com/clarifai/main/models/general-image-recognition OR a specific version such as https://clarifai.com/clarifai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. Find more models at https://clarifai.com/explore/models",
"URL or base64 bytes of the image or video to classify": "URL or base64 bytes of the image or video to classify",
"Text to classify": "Text to classify",
"URL or base64 bytes of the image to classify": "URL or base64 bytes of the image to classify",
"URL or base64 bytes of the audio to classify": "URL or base64 bytes of the audio to classify",
"User ID to associate with the input": "User ID to associate with the input",
"App ID (project) to associate with the input": "App ID (project) to associate with the input",
"URL of the Clarifai workflow. For example https://clarifai.com/clarifai/main/workflows/Demographics. Find more workflows at https://clarifai.com/explore/workflows": "URL of the Clarifai workflow. For example https://clarifai.com/clarifai/main/workflows/Demographics. Find more workflows at https://clarifai.com/explore/workflows",
"URL or base64 bytes of the incoming image/video/text/audio to run through the workflow. Note: must be appropriate first step of the workflow to handle that data type.": "URL or base64 bytes of the incoming image/video/text/audio to run through the workflow. Note: must be appropriate first step of the workflow to handle that data type.",
"Authorization headers are injected automatically from your connection.": "Authorization headers are injected automatically from your connection.",
"GET": "GET",
"POST": "POST",
"PATCH": "PATCH",
"PUT": "PUT",
"DELETE": "DELETE",
"HEAD": "HEAD"
}

View File

@@ -0,0 +1,58 @@
{
"AI-powered visual recognition": "AI-powered visual recognition",
"\nFollow these instructions to get your Clarifai (Personal Access Token) PAT Key:\n1. Go to the [security tab](https://clarifai.com/settings/security) in your Clarifai account and generate a new PAT token.\n2. Copy the PAT token and paste it in the PAT Key field.\n": "\nFollow these instructions to get your Clarifai (Personal Access Token) PAT Key:\n1. Go to the [security tab](https://clarifai.com/settings/security) in your Clarifai account and generate a new PAT token.\n2. Copy the PAT token and paste it in the PAT Key field.\n",
"Ask LLM": "Ask LLM",
"Ask IGM": "Ask IGM",
"Classify Images or Videos": "Classify Images or Videos",
"Classify Text": "Classify Text",
"Image to Text": "Image to Text",
"Text to Text": "Text to Text",
"Audio to Text": "Audio to Text",
"Add Inputs": "Add Inputs",
"Run Workflow": "Run Workflow",
"Custom API Call": "自定义 API 呼叫",
"Send a prompt to any large language models (LLM) supported by clarifai.": "Send a prompt to any large language models (LLM) supported by clarifai.",
"Generate an image using the Image generating models supported by clarifai.": "Generate an image using the Image generating models supported by clarifai.",
"Call an visual classifier AI model to recognize concepts": "Call an visual classifier AI model to recognize concepts",
"Call a text classifier AI model to recognize concepts": "Call a text classifier AI model to recognize concepts",
"Call an image to text AI model": "Call an image to text AI model",
"Call a text to text AI model": "Call a text to text AI model",
"Call a audio to text AI model": "Call a audio to text AI model",
"Add inputs to your clarifai app": "Add inputs to your clarifai app",
"Call a Clarifai workflow": "Call a Clarifai workflow",
"Make a custom API call to a specific endpoint": "将一个自定义 API 调用到一个特定的终点",
"Model Id": "Model Id",
"Prompt": "Prompt",
"Model URL": "Model URL",
"Input URL or bytes": "Input URL or bytes",
"Input Text": "Input Text",
"User ID": "User ID",
"App ID": "App ID",
"Workflow URL": "Workflow URL",
"Method": "方法",
"Headers": "信头",
"Query Parameters": "查询参数",
"Body": "正文内容",
"Response is Binary ?": "Response is Binary ?",
"No Error on Failure": "失败时没有错误",
"Timeout (in seconds)": "超时(秒)",
"The model which will generate the response. If the model is not listed, get the model id from the clarifai website. Example : 'GPT-4' you can get the model id from here [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)": "The model which will generate the response. If the model is not listed, get the model id from the clarifai website. Example : 'GPT-4' you can get the model id from here [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)",
"The prompt to send to the model.": "The prompt to send to the model.",
"URL of the Clarifai model. For example https://clarifai.com/clarifai/main/models/general-image-recognition OR a specific version such as https://clarifai.com/clarifai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. Find more models at https://clarifai.com/explore/models": "URL of the Clarifai model. For example https://clarifai.com/clarifai/main/models/general-image-recognition OR a specific version such as https://clarifai.com/clarifai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. Find more models at https://clarifai.com/explore/models",
"URL or base64 bytes of the image or video to classify": "URL or base64 bytes of the image or video to classify",
"Text to classify": "Text to classify",
"URL or base64 bytes of the image to classify": "URL or base64 bytes of the image to classify",
"URL or base64 bytes of the audio to classify": "URL or base64 bytes of the audio to classify",
"User ID to associate with the input": "User ID to associate with the input",
"App ID (project) to associate with the input": "App ID (project) to associate with the input",
"URL of the Clarifai workflow. For example https://clarifai.com/clarifai/main/workflows/Demographics. Find more workflows at https://clarifai.com/explore/workflows": "URL of the Clarifai workflow. For example https://clarifai.com/clarifai/main/workflows/Demographics. Find more workflows at https://clarifai.com/explore/workflows",
"URL or base64 bytes of the incoming image/video/text/audio to run through the workflow. Note: must be appropriate first step of the workflow to handle that data type.": "URL or base64 bytes of the incoming image/video/text/audio to run through the workflow. Note: must be appropriate first step of the workflow to handle that data type.",
"Authorization headers are injected automatically from your connection.": "授权头自动从您的连接中注入。",
"Enable for files like PDFs, images, etc..": "Enable for files like PDFs, images, etc..",
"GET": "获取",
"POST": "帖子",
"PATCH": "PATCH",
"PUT": "弹出",
"DELETE": "删除",
"HEAD": "黑色"
}

View File

@@ -0,0 +1,79 @@
import {
HttpMethod,
createCustomApiCallAction,
httpClient,
} from '@activepieces/pieces-common';
import { PieceAuth, createPiece } from '@activepieces/pieces-framework';
import { PieceCategory } from '@activepieces/shared';
import { clarifaiAskLLM } from './lib/actions/ask-llm';
import { audioToTextModelPredictAction } from './lib/actions/call-audio-model';
import {
imageToTextModelPredictAction,
visualClassifierModelPredictAction,
} from './lib/actions/call-image-model';
import { postInputsAction } from './lib/actions/call-post-inputs';
import {
textClassifierModelPredictAction,
textToTextModelPredictAction,
} from './lib/actions/call-text-model';
import { workflowPredictAction } from './lib/actions/call-workflow';
import { clarifaiGenerateIGM } from './lib/actions/generate-igm';
const markdownDescription = `
Follow these instructions to get your Clarifai (Personal Access Token) PAT Key:
1. Go to the [security tab](https://clarifai.com/settings/security) in your Clarifai account and generate a new PAT token.
2. Copy the PAT token and paste it in the PAT Key field.
`;
export const clarifaiAuth = PieceAuth.SecretText({
displayName: 'PAT Key',
description: markdownDescription,
required: true,
validate: async ({ auth }) => {
try {
await httpClient.sendRequest({
method: HttpMethod.GET,
url: 'https://api.clarifai.com/v2/models',
headers: {
Authorization: 'Key ' + auth,
},
});
return {
valid: true,
};
} catch (e) {
return {
valid: false,
error: `Invalid PAT token\nerror:\n${e}`,
};
}
},
});
export const clarifai = createPiece({
displayName: 'Clarifai',
description: 'AI-powered visual recognition',
minimumSupportedRelease: '0.30.0',
logoUrl: 'https://cdn.activepieces.com/pieces/clarifai.png',
categories: [PieceCategory.ARTIFICIAL_INTELLIGENCE],
authors: ["akatechis","zeiler","Salem-Alaa","kishanprmr","MoShizzle","abuaboud"],
auth: clarifaiAuth,
actions: [
clarifaiAskLLM,
clarifaiGenerateIGM,
visualClassifierModelPredictAction,
textClassifierModelPredictAction,
imageToTextModelPredictAction,
textToTextModelPredictAction,
audioToTextModelPredictAction,
postInputsAction,
workflowPredictAction,
createCustomApiCallAction({
baseUrl: () => 'https://api.clarifai.com/v2',
auth: clarifaiAuth,
authMapping: async (auth) => ({
Authorization: `Key ${auth.secret_text}`,
}),
}),
],
triggers: [],
});
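
The validate hook above reduces to a single authenticated GET against the models endpoint. A standalone equivalent of that check, as a minimal sketch (assumes Node 18+ for the global fetch; the URL and the `Key` authorization scheme are taken from the validator above):

// Returns true when Clarifai accepts the PAT, false otherwise.
async function isValidClarifaiPat(pat: string): Promise<boolean> {
  const res = await fetch('https://api.clarifai.com/v2/models', {
    headers: { Authorization: `Key ${pat}` },
  });
  return res.ok;
}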

View File

@@ -0,0 +1,131 @@
import { Property, createAction } from '@activepieces/pieces-framework';
import { clarifaiAuth } from '../..';
import {
HttpMethod,
HttpRequest,
httpClient,
} from '@activepieces/pieces-common';
export const clarifaiAskLLM = createAction({
name: 'ask-llm',
displayName: 'Ask LLM',
description:
'Send a prompt to any large language models (LLM) supported by clarifai.',
auth: clarifaiAuth,
props: {
models: Property.Dropdown({
auth: clarifaiAuth,
displayName: 'Model Id',
description: `The model which will generate the response. If the model is not listed, get the model id from the clarifai website. Example : 'GPT-4' you can get the model id from here [https://clarifai.com/openai/chat-completion/models/GPT-4](https://clarifai.com/openai/chat-completion/models/GPT-4)`,
refreshers: ['auth'],
required: true,
options: async ({ auth }) => {
if (!auth) {
return {
disabled: true,
options: [],
placeholder: 'Please add a PAT key',
};
}
const request: HttpRequest = {
method: HttpMethod.GET,
url: 'https://api.clarifai.com/v2/models?sort_by_star_count=true&use_cases=llm&filter_by_user_id=true&additional_fields=stars&per_page=24&page=1',
headers: {
Authorization: ('Key ' + auth.secret_text) as string,
},
};
try {
const response = await httpClient.sendRequest<{
models: {
id: string;
name: string;
}[];
}>(request);
return {
options: response.body.models.map((model) => {
return {
label: model.name,
value: model.id,
};
}),
disabled: false,
};
} catch (error) {
return {
options: [],
disabled: true,
placeholder: `Couldn't Load Models:\n${error}`,
};
}
},
defaultValue: 'GPT-4',
}),
prompt: Property.LongText({
displayName: 'Prompt',
description: 'The prompt to send to the model.',
required: true,
}),
},
async run(context) {
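    // Step 1: resolve the selected model id to its owner, app, and latest
    // version so the outputs URL can be constructed.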
const mId = context.propsValue.models as string;
const findModel: HttpRequest = {
method: HttpMethod.GET,
url: `https://api.clarifai.com/v2/models?name=${mId}&use_cases=llm`,
headers: {
Authorization: ('Key ' + context.auth.secret_text) as string,
},
};
let model;
try {
const response = await httpClient.sendRequest<{
models: {
id: string;
name: string;
model_version: {
id: string;
app_id: string;
user_id: string;
};
}[];
}>(findModel);
model = response.body.models[0];
} catch (error) {
throw new Error(`Couldn't find model ${mId}\n${error}`);
}
const prompt = context.propsValue.prompt as string;
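    // Step 2: send the prompt to the resolved model version and return the
    // first text output.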
const sendPrompt: HttpRequest = {
method: HttpMethod.POST,
url: `https://api.clarifai.com/v2/users/${model.model_version.user_id}/apps/${model.model_version.app_id}/models/${model.id}/versions/${model.model_version.id}/outputs`,
headers: {
Authorization: ('Key ' + context.auth.secret_text) as string,
},
body: {
inputs: [
{
data: {
text: {
raw: prompt,
},
},
},
],
},
};
try {
const response = await httpClient.sendRequest<{
outputs: {
data: {
text: {
raw: string;
};
};
}[];
}>(sendPrompt);
return { result: response.body.outputs[0].data.text.raw };
} catch (error) {
throw new Error(`Couldn't send prompt to model ${mId}\n${error}`);
}
},
});


@@ -0,0 +1,37 @@
import { clarifaiAuth } from '../../';
import { Property, createAction } from '@activepieces/pieces-framework';
import {
CommonClarifaiProps,
callClarifaiModel,
cleanMultiOutputResponse,
fileToInput,
} from '../common';
export const audioToTextModelPredictAction = createAction({
auth: clarifaiAuth,
name: 'audio_text_model',
description: 'Call a audio to text AI model',
displayName: 'Audio to Text',
props: {
modelUrl: CommonClarifaiProps.modelUrl,
file: Property.File({
      description: 'URL or base64 bytes of the audio to transcribe',
displayName: 'Input URL or bytes',
required: true,
}),
},
async run(ctx) {
const { auth } = ctx;
const { modelUrl, file } = ctx.propsValue;
const input = fileToInput(file);
const outputs = await callClarifaiModel({
auth: auth.secret_text,
modelUrl,
input,
});
return cleanMultiOutputResponse(outputs);
},
});


@@ -0,0 +1,65 @@
import { clarifaiAuth } from '../../';
import { Property, createAction } from '@activepieces/pieces-framework';
import {
CommonClarifaiProps,
callClarifaiModel,
cleanMultiOutputResponse,
fileToInput,
} from '../common';
export const visualClassifierModelPredictAction = createAction({
auth: clarifaiAuth,
name: 'visual_classifier_model',
description: 'Call an visual classifier AI model to recognize concepts',
displayName: 'Classify Images or Videos',
props: {
modelUrl: CommonClarifaiProps.modelUrl,
file: Property.File({
description: 'URL or base64 bytes of the image or video to classify',
displayName: 'Input URL or bytes',
required: true,
}),
},
async run(ctx) {
const { auth } = ctx;
const { modelUrl, file } = ctx.propsValue;
const input = fileToInput(file);
const outputs = await callClarifaiModel({
auth: auth.secret_text,
modelUrl,
input,
});
return cleanMultiOutputResponse(outputs);
},
});
export const imageToTextModelPredictAction = createAction({
auth: clarifaiAuth,
name: 'image_text_model',
description: 'Call an image to text AI model',
displayName: 'Image to Text',
props: {
modelUrl: CommonClarifaiProps.modelUrl,
file: Property.File({
      description: 'URL or base64 bytes of the image to convert to text',
displayName: 'Input URL or bytes',
required: true,
}),
},
async run(ctx) {
const { auth } = ctx;
const { modelUrl, file } = ctx.propsValue;
const input = fileToInput(file);
const outputs = await callClarifaiModel({
      auth: auth.secret_text,
modelUrl,
input,
});
return cleanMultiOutputResponse(outputs);
},
});


@@ -0,0 +1,47 @@
import { clarifaiAuth } from '../../';
import { Property, createAction } from '@activepieces/pieces-framework';
import {
CommonClarifaiProps,
callPostInputs,
cleanMultiInputResponse,
fileToInput,
} from '../common';
export const postInputsAction = createAction({
auth: clarifaiAuth,
name: 'post_inputs',
description: 'Add inputs to your clarifai app',
displayName: 'Add Inputs',
props: {
userId: Property.ShortText({
description: 'User ID to associate with the input',
displayName: 'User ID',
required: true,
}),
appId: Property.ShortText({
description: 'App ID (project) to associate with the input',
displayName: 'App ID',
required: true,
}),
file: Property.File({
      description: 'URL or base64 bytes of the file to add as an input',
displayName: 'Input URL or bytes',
required: true,
}),
},
async run(ctx) {
const { auth } = ctx;
const { userId, appId, file } = ctx.propsValue;
const input = fileToInput(file);
const inputs = await callPostInputs({
auth: auth.secret_text,
userId,
appId,
input,
});
return cleanMultiInputResponse(inputs);
},
});


@@ -0,0 +1,65 @@
import { clarifaiAuth } from '../../';
import { Property, createAction } from '@activepieces/pieces-framework';
import {
CommonClarifaiProps,
callClarifaiModel,
cleanMultiOutputResponse,
textToInput,
} from '../common';
export const textClassifierModelPredictAction = createAction({
auth: clarifaiAuth,
name: 'text_classifier_model',
description: 'Call a text classifier AI model to recognize concepts',
displayName: 'Classify Text',
props: {
modelUrl: CommonClarifaiProps.modelUrl,
txt: Property.LongText({
description: 'Text to classify',
displayName: 'Input Text',
required: true,
}),
},
async run(ctx) {
const { auth } = ctx;
const { modelUrl, txt } = ctx.propsValue;
const input = textToInput(txt);
const outputs = await callClarifaiModel({
auth: auth.secret_text,
modelUrl,
input,
});
return cleanMultiOutputResponse(outputs);
},
});
export const textToTextModelPredictAction = createAction({
auth: clarifaiAuth,
name: 'text_text_model',
description: 'Call a text to text AI model',
displayName: 'Text to Text',
props: {
modelUrl: CommonClarifaiProps.modelUrl,
txt: Property.LongText({
      description: 'Text to send to the model',
displayName: 'Input Text',
required: true,
}),
},
async run(ctx) {
const { auth } = ctx;
const { modelUrl, txt } = ctx.propsValue;
const input = textToInput(txt);
const outputs = await callClarifaiModel({
auth: auth.secret_text,
modelUrl,
input,
});
return cleanMultiOutputResponse(outputs);
},
});


@@ -0,0 +1,38 @@
import { clarifaiAuth } from '../../';
import { Property, createAction } from '@activepieces/pieces-framework';
import {
CommonClarifaiProps,
callClarifaiWorkflow,
cleanPostWorkflowResultsResponse,
fileToInput,
} from '../common';
export const workflowPredictAction = createAction({
auth: clarifaiAuth,
name: 'workflow_predict',
description: 'Call a Clarifai workflow',
displayName: 'Run Workflow',
props: {
workflowUrl: CommonClarifaiProps.workflowUrl,
file: Property.File({
description:
      'URL or base64 bytes of the incoming image/video/text/audio to run through the workflow. Note: the first step of the workflow must be able to handle that data type.',
displayName: 'Input URL or bytes',
required: true,
}),
},
async run(ctx) {
const { auth } = ctx;
const { workflowUrl, file } = ctx.propsValue;
const input = fileToInput(file);
const outputs = await callClarifaiWorkflow({
auth: auth.secret_text,
workflowUrl,
input,
});
return cleanPostWorkflowResultsResponse(outputs);
},
});


@@ -0,0 +1,147 @@
import { Property, createAction } from '@activepieces/pieces-framework';
import { clarifaiAuth } from '../..';
import {
HttpMethod,
HttpRequest,
httpClient,
} from '@activepieces/pieces-common';
export const clarifaiGenerateIGM = createAction({
name: 'generate-igm',
displayName: 'Ask IGM',
description:
'Generate an image using the Image generating models supported by clarifai.',
auth: clarifaiAuth,
props: {
models: Property.Dropdown({
displayName: 'Model Id',
      description: `The model which will generate the image. If the model is not listed, get the model id from the clarifai website. You can browse image-generation models at [https://clarifai.com/explore/models](https://clarifai.com/explore/models)`,
refreshers: ['auth'],
required: true,
auth: clarifaiAuth,
options: async ({ auth }) => {
if (!auth) {
return {
disabled: true,
options: [],
            placeholder: 'Please add a PAT key',
};
}
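        // populate the dropdown with the most-starred text-to-image models (24 per page)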
const request: HttpRequest = {
method: HttpMethod.GET,
url: 'https://api.clarifai.com/v2/models?sort_by_star_count=true&model_type_id=text-to-image&filter_by_user_id=true&additional_fields=stars&per_page=24&page=1',
headers: {
Authorization: ('Key ' + auth.secret_text) as string,
},
};
try {
const response = await httpClient.sendRequest<{
models: {
id: string;
name: string;
}[];
}>(request);
return {
options: response.body.models.map((model) => {
return {
label: model.name,
value: model.id,
};
}),
disabled: false,
};
} catch (error) {
return {
options: [],
disabled: true,
placeholder: `Couldn't Load Models:\n${error}`,
};
}
},
defaultValue: 'general-image-generator-dalle-mini',
}),
prompt: Property.LongText({
displayName: 'Prompt',
description: 'The prompt to send to the model.',
required: true,
}),
},
run: async (context) => {
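    // Step 1: resolve the selected model id to its owner, app, and latest
    // version so the outputs URL can be constructed.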
const mId = context.propsValue.models as string;
const findModel: HttpRequest = {
method: HttpMethod.GET,
url: `https://api.clarifai.com/v2/models?name=${mId}&model_type_id=text-to-image`,
headers: {
Authorization: ('Key ' + context.auth.secret_text) as string,
},
};
let model;
try {
const response = await httpClient.sendRequest<{
models: {
id: string;
name: string;
model_version: {
id: string;
app_id: string;
user_id: string;
};
}[];
}>(findModel);
model = response.body.models[0];
} catch (error) {
throw new Error(`Couldn't find model ${mId}\n${error}`);
}
const prompt = context.propsValue.prompt as string;
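    // Step 2: send the prompt, then decode the returned base64 image and
    // persist it as a flow file.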
const sendPrompt: HttpRequest = {
method: HttpMethod.POST,
url: `https://api.clarifai.com/v2/users/${model.model_version.user_id}/apps/${model.model_version.app_id}/models/${model.id}/versions/${model.model_version.id}/outputs`,
headers: {
        Authorization: ('Key ' + context.auth.secret_text) as string,
},
body: {
inputs: [
{
data: {
text: {
raw: prompt,
},
},
},
],
},
};
try {
const response = await httpClient.sendRequest<{
outputs: {
id: string;
data: {
image: {
base64: string;
image_info: {
format: string;
};
};
};
}[];
}>(sendPrompt);
return {
result: await context.files.write({
fileName:
response.body.outputs[0].id +
'.' +
response.body.outputs[0].data.image.image_info.format,
data: Buffer.from(
response.body.outputs[0].data.image.base64,
'base64'
),
}),
};
} catch (error) {
throw new Error(`Couldn't send prompt to model ${mId}\n${error}`);
}
},
});


@@ -0,0 +1,435 @@
import { Property, ApFile } from '@activepieces/pieces-framework';
import { grpc } from 'clarifai-nodejs-grpc';
import {
Model,
Data,
Input,
UserAppIDSet,
Image,
Video,
Audio,
Text,
} from 'clarifai-nodejs-grpc/proto/clarifai/api/resources_pb';
import { V2Client } from 'clarifai-nodejs-grpc/proto/clarifai/api/service_grpc_pb';
import {
MultiOutputResponse,
PostModelOutputsRequest,
MultiInputResponse,
PostInputsRequest,
PostWorkflowResultsResponse,
PostWorkflowResultsRequest,
} from 'clarifai-nodejs-grpc/proto/clarifai/api/service_pb';
import { promisify } from 'util';
function initClarifaiClient() {
const clarifai = new V2Client(
'api.clarifai.com',
grpc.ChannelCredentials.createSsl()
);
return clarifai;
}
export const clarifaiClient = initClarifaiClient();
export interface CallModelRequest {
auth: string;
modelUrl: string;
input: Input;
}
export interface CallWorkflowRequest {
auth: string;
workflowUrl: string;
input: Input;
}
export interface CallPostInputsRequest {
auth: string;
userId: string;
appId: string;
input: Input;
}
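// Calls PostModelOutputs over gRPC for a single input against the model
// identified by its public URL (optionally pinned to a specific version).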
export function callClarifaiModel({ auth, modelUrl, input }: CallModelRequest) {
const [userId, appId, modelId, versionId] = parseEntityUrl(modelUrl);
const req = new PostModelOutputsRequest();
req.setUserAppId(userAppIdSet(userId, appId));
req.setModelId(modelId);
if (versionId) {
req.setVersionId(versionId);
}
req.setInputsList([input]);
const metadata = authMetadata(auth);
// TODO: we should really be using the async version of this, circle back with clarifai team to see if we can
// tweak the protoc settings to build a promise-compatible version of our API client.
const postModelOutputs = promisify<
PostModelOutputsRequest,
grpc.Metadata,
MultiOutputResponse
>(clarifaiClient.postModelOutputs.bind(clarifaiClient));
return postModelOutputs(req, metadata);
}
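// Runs a single input through the workflow identified by its public URL.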
export function callClarifaiWorkflow({
auth,
workflowUrl,
input,
}: CallWorkflowRequest) {
const [userId, appId, workflowId, versionId] = parseEntityUrl(workflowUrl);
const req = new PostWorkflowResultsRequest();
req.setUserAppId(userAppIdSet(userId, appId));
req.setWorkflowId(workflowId);
if (versionId) {
req.setVersionId(versionId);
}
req.setInputsList([input]);
const metadata = authMetadata(auth);
// TODO: we should really be using the async version of this, circle back with clarifai team to see if we can
// tweak the protoc settings to build a promise-compatible version of our API client.
const postWorkflowResults = promisify<
PostWorkflowResultsRequest,
grpc.Metadata,
PostWorkflowResultsResponse
>(clarifaiClient.postWorkflowResults.bind(clarifaiClient));
return postWorkflowResults(req, metadata);
}
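// Adds a single input to the given user's Clarifai app.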
export function callPostInputs({
auth,
userId,
appId,
input,
}: CallPostInputsRequest) {
const req = new PostInputsRequest();
req.setUserAppId(userAppIdSet(userId, appId));
req.setInputsList([input]);
const metadata = authMetadata(auth);
// TODO: we should really be using the async version of this, circle back with clarifai team to see if we can
// tweak the protoc settings to build a promise-compatible version of our API client.
const postInputs = promisify<
PostInputsRequest,
grpc.Metadata,
MultiInputResponse
>(clarifaiClient.postInputs.bind(clarifaiClient));
return postInputs(req, metadata);
}
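// Wraps an uploaded file in a Clarifai Input, sniffing the mime type to decide
// whether to send it as an image, video, audio, or raw text payload.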
export function fileToInput(file: ApFile) {
const input = new Input();
const inputData = new Data();
const base64 = file.base64;
const mimeType = detectMimeType(base64, file.filename);
if (mimeType.startsWith('image')) {
const dataImage = new Image();
dataImage.setBase64(base64);
inputData.setImage(dataImage);
} else if (mimeType.startsWith('video')) {
const dataVideo = new Video();
dataVideo.setBase64(base64);
inputData.setVideo(dataVideo);
} else if (mimeType.startsWith('audio')) {
const dataAudio = new Audio();
dataAudio.setBase64(base64);
inputData.setAudio(dataAudio);
  } else {
    // fall back to text: decode the base64 payload so the model receives the
    // actual file contents rather than the base64 string (Clarifai's Text.raw
    // expects plain text); non-text files may still be rejected, but it's
    // worth a shot
    const dataText = new Text();
    dataText.setRaw(Buffer.from(base64, 'base64').toString('utf-8'));
    inputData.setText(dataText);
  }
input.setData(inputData);
return input;
}
export function textToInput(text: string) {
const input = new Input();
const inputData = new Data();
const dataText = new Text();
dataText.setRaw(text);
inputData.setText(dataText);
input.setData(inputData);
return input;
}
function userAppIdSet(userId: string, appId: string) {
const set = new UserAppIDSet();
set.setUserId(userId);
set.setAppId(appId);
return set;
}
function authMetadata(auth: string) {
const metadata = new grpc.Metadata();
metadata.set('authorization', 'Key ' + auth);
return metadata;
}
export const CommonClarifaiProps = {
modelUrl: Property.ShortText({
description:
'URL of the Clarifai model. For example https://clarifai.com/clarifai/main/models/general-image-recognition OR a specific version such as https://clarifai.com/clarifai/main/models/general-image-recognition/versions/aa7f35c01e0642fda5cf400f543e7c40. Find more models at https://clarifai.com/explore/models',
displayName: 'Model URL',
required: true,
}),
workflowUrl: Property.ShortText({
description:
'URL of the Clarifai workflow. For example https://clarifai.com/clarifai/main/workflows/Demographics. Find more workflows at https://clarifai.com/explore/workflows',
displayName: 'Workflow URL',
required: true,
}),
};
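// Splits a Clarifai entity URL into [userId, appId, entityId, versionId].
// e.g. https://clarifai.com/clarifai/main/models/general-image-recognition
// yields ['clarifai', 'main', 'general-image-recognition', ''].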
function parseEntityUrl(entityUrl: string): [string, string, string, string] {
const url = new URL(entityUrl);
const parts = url.pathname.split('/');
let version = '';
if (parts.length === 7 && parts[5] === 'versions') {
version = parts[6];
}
return [parts[1], parts[2], parts[4], version];
}
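// protobuf's toObject() names repeated fields with a 'List' suffix
// (e.g. conceptsList); strip the suffix recursively so flow outputs use the
// plain field names, and drop empty lists along the way.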
export function removeListFromPropertyNames(
obj: Record<string, unknown>
): Record<string, unknown> {
const result: Record<string, unknown> = {};
for (const [key, value] of Object.entries(obj)) {
if (key.endsWith('List') && Array.isArray(value)) {
if (value.length === 0) {
// remove empty lists by default
continue;
}
// remove 'List' and recurse on every item in the array
result[key.slice(0, -4)] = value.map((item) => {
// if the item is an object, recurse on it
if (Object.prototype.toString.call(item) === '[object Object]') {
return removeListFromPropertyNames(item);
}
// otherwise, return the item as-is
return item;
});
} else {
// if the item is an object, recurse on it
if (Object.prototype.toString.call(value) === '[object Object]') {
result[key] = removeListFromPropertyNames(
value as Record<string, unknown>
);
} else {
result[key] = value;
}
}
}
return result;
}
/**
* Returns the data type based on the base64 string and filename extension
* https://www.iana.org/assignments/media-types/media-types.xhtml for full list of mime types.
* @param {String} base64String
* @param {String} fileName
* @returns {String}
*/
function detectMimeType(base64String: string, fileName: string | undefined) {
  // derive the file extension from the filename, defaulting to 'bin'
  let ext = fileName ? fileName.substring(fileName.lastIndexOf('.') + 1) : '';
  if (ext === '') ext = 'bin';
  ext = ext.toLowerCase();
// This is not an exhaustive list by any stretch.
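  // Keys are base64 encodings of each format's magic bytes, e.g. 'JVBERi0'
  // decodes to '%PDF-'.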
const signatures = {
JVBERi0: 'application/pdf',
R0lGODdh: 'image/gif',
R0lGODlh: 'image/gif',
iVBORw0KGgo: 'image/png',
TU0AK: 'image/tiff',
    '/9j/': 'image/jpeg',
UEs: 'application/vnd.openxmlformats-officedocument.',
PK: 'application/zip',
};
  for (const [key, value] of Object.entries(signatures)) {
    if (base64String.indexOf(key) === 0) {
      let modifiedValue = value;
      // for office formats, append the specific OOXML document type
      if (ext.length > 3 && ext.substring(0, 3) === 'ppt') {
        modifiedValue += 'presentationml.presentation';
      } else if (ext.length > 3 && ext.substring(0, 3) === 'xls') {
        modifiedValue += 'spreadsheetml.sheet';
      } else if (ext.length > 3 && ext.substring(0, 3) === 'doc') {
        modifiedValue += 'wordprocessingml.document';
      }
      return modifiedValue;
    }
  }
// if we are here we can only go off the extensions
const extensions = {
'7z': 'application/x-7z-compressed',
aif: 'audio/x-aiff',
aiff: 'audio/x-aiff',
asf: 'video/x-ms-asf',
asx: 'video/x-ms-asf',
avi: 'video/x-msvideo',
bin: 'application/octet-stream',
bmp: 'image/bmp',
class: 'application/octet-stream',
css: 'text/css',
csv: 'text/csv',
dll: 'application/octet-stream',
doc: 'application/msword',
docx: 'application/vnd.openxmlformats-officedocument.wordprocessingml.document',
dwg: 'application/acad',
dxf: 'application/dxf',
eml: 'message/rfc822',
exe: 'application/octet-stream',
flv: 'video/x-flv',
gif: 'image/gif',
gz: 'application/x-gzip',
gzip: 'application/x-gzip',
htm: 'text/html',
html: 'text/html',
ice: 'x-conference/x-cooltalk',
ico: 'image/x-icon',
ics: 'text/calendar',
iges: 'model/iges',
igs: 'model/iges',
jpeg: 'image/jpeg',
jpg: 'image/jpeg',
js: 'application/javascript',
json: 'application/json',
m2a: 'audio/mpeg',
m2v: 'video/mpeg',
m3u: 'audio/x-mpegurl',
m4v: 'video/mpeg',
mesh: 'model/mesh',
mov: 'video/quicktime',
movie: 'video/x-sgi-movie',
mp2: 'audio/mpeg',
mp2a: 'audio/mpeg',
mp3: 'audio/mpeg',
mp4: 'video/mp4',
mpe: 'video/mpeg',
mpeg: 'video/mpeg',
mpg: 'video/mpeg',
mpga: 'audio/mpeg',
mpv: 'video/mpeg',
msg: 'application/vnd.ms-outlook',
msh: 'model/mesh',
mxf: 'application/mxf',
obj: 'application/octet-stream',
oda: 'application/oda',
ogg: 'application/ogg',
ogv: 'video/ogg',
ogx: 'application/ogg',
pdb: 'chemical/x-pdb',
pdf: 'application/pdf',
png: 'image/png',
ppt: 'application/vnd.ms-powerpoint',
psd: 'application/octet-stream',
qt: 'video/quicktime',
ra: 'audio/x-realaudio',
ram: 'audio/x-pn-realaudio',
rgb: 'image/x-rgb',
rm: 'audio/x-pn-realaudio',
rpm: 'audio/x-pn-realaudio-plugin',
rtf: 'application/rtf',
sea: 'application/octet-stream',
silo: 'model/mesh',
so: 'application/octet-stream',
svg: 'image/svg+xml',
tar: 'application/x-tar',
tif: 'image/tiff',
tiff: 'image/tiff',
txt: 'text/plain',
vrml: 'model/vrml',
wav: 'audio/x-wav',
wax: 'audio/x-ms-wax',
webp: 'image/webp',
wma: 'audio/x-ms-wma',
wmv: 'video/x-ms-wmv',
wrl: 'model/vrml',
xls: 'application/vnd.ms-excel',
xml: 'text/xml',
xyz: 'chemical/x-pdb',
zip: 'application/zip',
};
  // look the extension up directly so that e.g. 'docx' cannot match the
  // shorter 'doc' entry
  const mime = extensions[ext as keyof typeof extensions];
  if (mime) {
    return mime;
  }
// if we are here - not sure what type this is
return 'unknown';
}
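// Flattens the first output of a model response into a plain object,
// throwing if the response contains no outputs or data.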
export function cleanMultiOutputResponse(outputs: MultiOutputResponse) {
if (outputs.getOutputsList().length === 0) {
throw new Error('No outputs found from Clarifai');
}
const data = outputs.getOutputsList()[0].getData();
if (data == undefined) {
throw new Error('No data found from Clarifai');
} else {
const result = Data.toObject(false, data);
return removeListFromPropertyNames(result);
}
}
export function cleanMultiInputResponse(inputs: MultiInputResponse) {
if (inputs.getInputsList().length === 0) {
throw new Error('No inputs found from Clarifai');
}
const data = inputs.getInputsList()[0].getData();
if (data == undefined) {
throw new Error('No data found from Clarifai');
} else {
const result = Data.toObject(false, data);
return removeListFromPropertyNames(result);
}
}
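// Flattens each workflow step's model metadata and output data into an
// array entry, one entry per step of the workflow.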
export function cleanPostWorkflowResultsResponse(
response: PostWorkflowResultsResponse
) {
  // one result per input in the workflow.
  const results = response.getResultsList();
  if (results.length === 0) {
    throw new Error('No results found from Clarifai');
  } else {
const result = results[0];
const outputs = result.getOutputsList();
if (outputs == undefined || outputs.length === 0) {
throw new Error('No outputs found from Clarifai');
}
const array: any[] = [];
for (const output of outputs) {
const model = output.getModel();
if (model == undefined) {
throw new Error('No model found from Clarifai');
}
const m = Model.toObject(false, model);
const data = output.getData();
let out: any = { output: 'suppressed' };
if (data != undefined) {
out = Data.toObject(false, data);
}
array.push({
model: removeListFromPropertyNames(m),
data: removeListFromPropertyNames(out),
});
}
return array;
}
}