Add Activepieces integration for workflow automation

- Add Activepieces fork with SmoothSchedule custom piece
- Create integrations app with Activepieces service layer
- Add embed token endpoint for iframe integration (see the sketch below)
- Create Automations page with embedded workflow builder
- Add sidebar visibility fix for embed mode
- Add list inactive customers endpoint to Public API
- Include SmoothSchedule triggers: event created/updated/cancelled
- Include SmoothSchedule actions: create/update/cancel events, list resources/services/customers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Author: poduck
Date: 2025-12-18 22:59:37 -05:00
Parent: 9848268d34
Commit: 3aa7199503
16292 changed files with 1284892 additions and 4708 deletions
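
A quick sketch of the embed token flow referenced in the summary: the integrations app exposes an endpoint that mints a short-lived signed token, and the Automations page hands that token to the embedded workflow builder iframe. The sketch below is illustrative only; the route, JWT claim names, and the AP_JWT_SECRET variable are assumptions, not code from this commit.

```ts
// Hypothetical embed-token endpoint (all names illustrative). The host app
// signs a short-lived JWT that the embedded Activepieces builder can
// exchange for a session.
import express from 'express';
import jwt from 'jsonwebtoken';

const app = express();

// Assumed to match the signing key configured on the Activepieces instance.
const SIGNING_KEY = process.env.AP_JWT_SECRET ?? 'dev-secret';

app.get('/api/integrations/embed-token', (_req, res) => {
  // In a real app, the user and project would come from the session.
  const token = jwt.sign(
    { externalUserId: 'user-123', externalProjectId: 'tenant-abc' },
    SIGNING_KEY,
    { algorithm: 'HS256', expiresIn: '1h' }
  );
  res.json({ token });
});

app.listen(3000);
```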


@@ -0,0 +1,25 @@
{
"Perplexity AI": "Perplexity AI",
"AI powered search engine": "AI powered search engine",
"\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ": "\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ",
"Ask AI": "Ask AI",
"Enables users to generate prompt completion based on a specified model.": "Enables users to generate prompt completion based on a specified model.",
"Model": "Model",
"Question": "Question",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Presence penalty": "Presence penalty",
"Frequency penalty": "Frequency penalty",
"Roles": "Roles",
"The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.": "The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.",
"Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.": "Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.",
"The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.": "The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
"A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.": "Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.",
"sonar-reasoning-pro": "sonar-reasoning-pro",
"sonar-reasoning": "sonar-reasoning",
"sonar-pro": "sonar-pro",
"sonar": "sonar"
}
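
These locale files key each translated string by its English source string, so a key must match the corresponding string in the piece's TypeScript byte for byte. Below is a minimal sketch of how such a dictionary can be consumed; the `loadLocale` helper and file layout are assumptions for illustration, not Activepieces' actual i18n API:

```ts
import { readFileSync } from 'node:fs';

// Load one locale dictionary, e.g. the German file from this commit.
function loadLocale(lang: string): Record<string, string> {
  return JSON.parse(readFileSync(`./i18n/${lang}.json`, 'utf8'));
}

// Resolve a UI string, falling back to the English source string when a key
// has no translation (which is why identity files like the one above work).
function translate(dict: Record<string, string>, source: string): string {
  return dict[source] ?? source;
}

const de = loadLocale('de');
console.log(translate(de, 'Ask AI')); // "KI fragen"
```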


@@ -0,0 +1,24 @@
{
"AI powered search engine": "KI betriebene Suchmaschine",
"\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ": "\n Navigiere zu [API-Einstellungen](https://www.perplexity.ai/settings/api) und erstelle neuen API-Schlüssel.\n ",
"Ask AI": "KI fragen",
"Enables users to generate prompt completion based on a specified model.": "Ermöglicht Benutzern die Fertigstellung der Eingabeaufforderung basierend auf einem bestimmten Modell.",
"Model": "Modell",
"Question": "Frage",
"Temperature": "Temperatur",
"Maximum Tokens": "Maximale Token",
"Top P": "Oben P",
"Presence penalty": "Präsenzstrafe",
"Frequency penalty": "Frequenz Strafe",
"Roles": "Rollen",
"The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.": "Die Anzahl der zufälligen Werte in der Response.Höhere Werte sind zufällig und niedrigere Werte sind deterministischer.",
"Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.": "Bitte beachten Sie [guide](https://docs.perplexity.ai/guides/model-cards) für jedes Modell-Token-Limit.",
"The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.": "Der Schwellenwert für Nucleus Probenahme, der zwischen 0 und 1 inklusive ist. Bei jedem späteren Token berücksichtigt das Modell die Ergebnisse der Tokens mit der Top_p Wahrscheinlichkeitsmasse.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Nummer zwischen -2.0 und 2.0. Positive Werte bestrafen neue Tokens je nachdem, ob sie bisher im Text erscheinen, was die Wahrscheinlichkeit erhöht, über neue Themen zu sprechen.",
"A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Eine multiplikative Strafe, die größer als 0 ist. Werte größer als 1. bestrafen neue Token basierend auf ihrer bisherigen Häufigkeit im Text, wodurch die Wahrscheinlichkeit des Modells verringert wird, die gleiche Zeile wörtlich zu wiederholen.",
"Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.": "Array der Rollen, um eine genauere Antwort zu spezifizieren. Nach der (optionalen) Systemnachricht sollten Benutzer- und Assistentenrollen wechseln mit dem Benutzer und dann Assistenten, die im Benutzer enden.",
"sonar-reasoning-pro": "sonar-reasoning-pro",
"sonar-reasoning": "sonar-Argumentation",
"sonar-pro": "sonar-pro",
"sonar": "sonar"
}


@@ -0,0 +1,24 @@
{
"AI powered search engine": "Motor de búsqueda impulsado por IA",
"\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ": "\n Navega a [Configuración API](https://www.perplexity.ai/settings/api) y crea una nueva clave API.\n ",
"Ask AI": "Preguntar IA",
"Enables users to generate prompt completion based on a specified model.": "Permite a los usuarios generar un prompt completion basado en un modelo especificado.",
"Model": "Modelo",
"Question": "Pregunta",
"Temperature": "Temperatura",
"Maximum Tokens": "Tokens máximos",
"Top P": "Top P",
"Presence penalty": "Penalización de presencia",
"Frequency penalty": "Puntuación de frecuencia",
"Roles": "Roles",
"The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.": "La cantidad de aleatoriedad en la respuesta. Los valores más altos son más aleatorios, y los valores más bajos son más deterministas.",
"Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.": "Por favor, consulte [guide](https://docs.perplexity.ai/guides/model-cards) para el límite de tokens de cada modelo.",
"The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.": "El umbral de muestreo del núcleo, valorado entre 0 y 1 inclusive. Para cada ficha posterior, el modelo considera los resultados de las fichas con la masa de probabilidad top_p.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Número entre -2.0 y 2.0. Los valores positivos penalizan las nuevas fichas basándose en si aparecen en el texto hasta ahora, aumentando la probabilidad de que el modo hable de nuevos temas.",
"A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Una pena multiplicativa mayor que 0. Valores mayores que 1. penalizar nuevos tokens basados en su frecuencia existente en el texto hasta ahora, lo que reduce la probabilidad del modelo de repetir la misma línea literalmente.",
"Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.": "Array de roles para especificar una respuesta más precisa. Después del mensaje (opcional) del sistema, los roles de usuario y asistente deben alternarse con el usuario entonces asistente, terminando en el usuario.",
"sonar-reasoning-pro": "sonar-reasoning-pro",
"sonar-reasoning": "razonamiento sonar",
"sonar-pro": "sonar-pro",
"sonar": "sonar"
}


@@ -0,0 +1,24 @@
{
"AI powered search engine": "Moteur de recherche alimenté par une IA",
"\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ": "\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ",
"Ask AI": "Demander à l'IA",
"Enables users to generate prompt completion based on a specified model.": "Permet aux utilisateurs de générer une réalisation rapide basée sur un modèle spécifié.",
"Model": "Modélisation",
"Question": "Question",
"Temperature": "Température",
"Maximum Tokens": "Maximum de jetons",
"Top P": "Top P",
"Presence penalty": "Malus de présence",
"Frequency penalty": "Malus de fréquence",
"Roles": "Rôles",
"The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.": "La quantité de hasard dans la réponse.Les valeurs plus élevées sont plus aléatoires, et les valeurs inférieures sont plus déterministes.",
"Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.": "Veuillez vous référer à [guide](https://docs.perplexity.ai/guides/model-cards) pour chaque limite de jetons de modèle.",
"The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.": "Le seuil d'échantillonnage du noyau, compris entre 0 et 1. Pour chaque jeton suivant, le modèle prend en compte les résultats des jetons avec la masse de probabilité top_p.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Numéroter entre -2.0 et 2.0. Les valeurs positives pénalisent les nouveaux jetons en fonction du fait qu'ils apparaissent dans le texte jusqu'à présent, ce qui augmente la probabilité du mode de parler de nouveaux sujets.",
"A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Une pénalité multiplicative supérieure à 0. Valeurs supérieures à 1. pénaliser les nouveaux jetons en fonction de leur fréquence existante dans le texte jusqu'à présent, diminuant la probabilité du modèle de répéter la même ligne verbatim.",
"Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.": "Tableau de rôles pour spécifier une réponse plus précise. Après le message système (facultatif), l'utilisateur et l'assistant doivent alterner avec l'utilisateur puis l'assistant, se terminant par l'utilisateur.",
"sonar-reasoning-pro": "sonar-reasoning-pro",
"sonar-reasoning": "sonar-raisonnement",
"sonar-pro": "sonar-pro",
"sonar": "sonar"
}


@@ -0,0 +1,25 @@
{
"Perplexity AI": "Perplexity AI",
"AI powered search engine": "AI powered search engine",
"\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ": "\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ",
"Ask AI": "Ask AI",
"Enables users to generate prompt completion based on a specified model.": "Enables users to generate prompt completion based on a specified model.",
"Model": "Model",
"Question": "Question",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Presence penalty": "Presence penalty",
"Frequency penalty": "Frequency penalty",
"Roles": "Roles",
"The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.": "The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.",
"Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.": "Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.",
"The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.": "The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
"A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.": "Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.",
"sonar-reasoning-pro": "sonar-reasoning-pro",
"sonar-reasoning": "sonar-reasoning",
"sonar-pro": "sonar-pro",
"sonar": "sonar"
}


@@ -0,0 +1,25 @@
{
"Perplexity AI": "Perplexity AI",
"AI powered search engine": "AI powered search engine",
"\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ": "\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ",
"Ask AI": "Ask AI",
"Enables users to generate prompt completion based on a specified model.": "Enables users to generate prompt completion based on a specified model.",
"Model": "Model",
"Question": "Question",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Presence penalty": "Presence penalty",
"Frequency penalty": "Frequency penalty",
"Roles": "Roles",
"The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.": "The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.",
"Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.": "Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.",
"The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.": "The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
"A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.": "Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.",
"sonar-reasoning-pro": "sonar-reasoning-pro",
"sonar-reasoning": "sonar-reasoning",
"sonar-pro": "sonar-pro",
"sonar": "sonar"
}


@@ -0,0 +1,24 @@
{
"AI powered search engine": "AI駆動検索エンジン",
"\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ": "\n [API Settings](https://www.perplexity.ai/settings/api) に移動し、新しい API キーを作成します。\n ",
"Ask AI": "AIに聞く",
"Enables users to generate prompt completion based on a specified model.": "ユーザーが指定したモデルに基づいてプロンプト補完を生成できるようにします。",
"Model": "モデル",
"Question": "質問",
"Temperature": "温度",
"Maximum Tokens": "最大トークン",
"Top P": "トップ P",
"Presence penalty": "プレゼンスペナルティ",
"Frequency penalty": "頻度ペナルティ",
"Roles": "ロール",
"The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.": "応答におけるランダム性の量。値が大きいほど、値が小さい方が決定的になります。",
"Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.": "各モデルトークンの制限については、 [guide](https://docs.perplexity.ai/guides/model-cards) を参照してください。",
"The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.": "0~1の範囲の核サンプリングしきい値。 後続の各トークンについて、モデルはtop_p確率質量を持つトークンの結果を考慮します。",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "-2.0 から 2.0 までの数字。 肯定的な値は、これまでのところテキストに表示されるかどうかに基づいて新しいトークンを罰し、モードが新しいトピックについて話す可能性を高めます。",
"A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "0 より大きい乗算ペナルティ。1より大きい値。 これまでのテキスト内の既存の頻度に基づいて新しいトークンを罰すると、モデルが同じ行を元に繰り返す可能性が低下します。",
"Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.": "より正確な応答を指定するためのロールの配列(任意)システムメッセージの後、ユーザーとアシスタントのロールは、ユーザーで終了し、その後、アシスタントを代替する必要があります。",
"sonar-reasoning-pro": "sonar-reasoning-pro",
"sonar-reasoning": "ソナーの推論は",
"sonar-pro": "sonar-pro",
"sonar": "sonar"
}


@@ -0,0 +1,24 @@
{
"AI powered search engine": "AI aangedreven zoekmachine",
"\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ": "\n Navigeer naar [API-instellingen](https://www.perplexity.ai/settings/api) en maak een nieuwe API-sleutel.\n ",
"Ask AI": "Vraag het AI",
"Enables users to generate prompt completion based on a specified model.": "Maakt het mogelijk voor gebruikers om een snelle voltooiing te genereren op basis van een opgegeven model.",
"Model": "Model",
"Question": "Vraag",
"Temperature": "Temperatuur",
"Maximum Tokens": "Maximaal aantal tokens",
"Top P": "Boven P",
"Presence penalty": "Presence boete",
"Frequency penalty": "Frequentie boete",
"Roles": "Rollen",
"The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.": "De hoeveelheid willekeurigheid in de response.Hogere waarden zijn willekeuriger, en lagere waarden zijn deterministischer.",
"Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.": "Raadpleeg [guide](https://docs.perplexity.ai/guides/model-cards) voor elke modeltoken limiet.",
"The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.": "De drempelwaarde voor het bemonsteren van kerncentrales, die tussen 0 en 1 werd gewaardeerd. Voor elk daaropvolgende token worden de resultaten van de tokens beoordeeld met top_p waarschijnlijkheid.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Nummer tussen -2.0 en 2.0. Positieve waarden bestraffen nieuwe tokens op basis van de vraag of ze tot nu toe in de tekst staan, waardoor de modus meer kans maakt om over nieuwe onderwerpen te praten.",
"A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Een multiplicatieve straf groter dan 0. Waarden groter dan 1. nieuwe tokens straffen op basis van hun bestaande frequentie in de tekst tot nu toe, waardoor de waarschijnlijkheid dat het model dezelfde lijn zal herhalen afneemt.",
"Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.": "Reeks van rollen om nauwkeuriger antwoord op te geven. Na het (optioneel) systeembericht, gebruiker en assistent-rollen zou afwisselend moeten zijn met de gebruiker, eindigend in de gebruiker.",
"sonar-reasoning-pro": "sonar-reasoning-pro",
"sonar-reasoning": "sonar-redenering",
"sonar-pro": "sonar-pro",
"sonar": "sonar"
}


@@ -0,0 +1,24 @@
{
"AI powered search engine": "Mecanismo de pesquisa IA",
"\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ": "\n Navegue para [Configurações da API](https://www.perplexity.ai/settings/api) e crie uma nova chave de API.\n ",
"Ask AI": "Perguntar à IA",
"Enables users to generate prompt completion based on a specified model.": "Permite aos usuários gerar conclusão prompt com base em um modelo especificado.",
"Model": "Modelo",
"Question": "Questão",
"Temperature": "Temperatura",
"Maximum Tokens": "Máximo de Tokens",
"Top P": "Superior P",
"Presence penalty": "Penalidade de presença",
"Frequency penalty": "Penalidade de frequência",
"Roles": "Papéis",
"The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.": "A quantidade de aleatoriedade na resposta. Valores maiores são mais aleatórios e valores mais baixos são mais determinísticos.",
"Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.": "Por favor, consulte [guide](https://docs.perplexity.ai/guides/model-cards) para cada limite de token modelo.",
"The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.": "O limite de amostragem do núcleo, avaliado entre 0 e 1, inclusivo. Para cada token subsequente, o modelo considera os resultados dos tokens com massa de probabilidade top_p.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Número entre -2.0 e 2.0. Valores positivos penalizam novos tokens baseado no fato de eles aparecerem no texto até agora, aumentando a probabilidade de o modo falar sobre novos tópicos.",
"A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Uma penalidade multiplicadora maior que 0. Valores maiores que 1. penalize novos tokens baseados em sua freqüência existente no texto até agora, diminuindo a probabilidade do modelo repetir o verbal da mesma linha.",
"Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.": "Array de papéis para especificar uma resposta mais precisa. Após a mensagem do sistema (opcional) papéis de usuário e assistente devem alternar com o usuário, depois assistente, terminando no usuário.",
"sonar-reasoning-pro": "sonar-reasoning-pro",
"sonar-reasoning": "raciocínio dos sonares",
"sonar-pro": "sonar-pro",
"sonar": "sonar"
}


@@ -0,0 +1,25 @@
{
"Perplexity AI": "Perplexity AI",
"AI powered search engine": "Поисковая система, основанная на ИИ",
"\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ": "\n Перейдите в [Настройки API](https://www.perplexity.ai/settings/api) и создайте новый ключ API.\n ",
"Ask AI": "Ask AI",
"Enables users to generate prompt completion based on a specified model.": "Позволяет пользователям генерировать подсказки на основе указанной модели.",
"Model": "Модель",
"Question": "Вопрос",
"Temperature": "Температура",
"Maximum Tokens": "Максимум жетонов",
"Top P": "Верхний П",
"Presence penalty": "Штраф присутствия",
"Frequency penalty": "Периодичность штрафа",
"Roles": "Роли",
"The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.": "Количество случайности в ответе. Чем больше значение параметра, тем ниже значение детерминировано.",
"Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.": "Пожалуйста, обратитесь к [guide](https://docs.perplexity.ai/guides/model-cards) за каждый лимит токенов модели.",
"The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.": "Порог отбора проб ядра в диапазоне от 0 до 1 включительно. Для каждого последующего токена модель учитывает результаты токенов с вероятностью массы top_p.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Номер между -2.0 и 2.0. Положительные значения наказывают новые токены на основании того, появляются ли они в тексте до сих пор, что повышает вероятность того, что режим будет обсуждать новые темы.",
"A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Множественное наказание больше 0. Значения больше 1. оштрафовать новые токены на основе их существующей частоты в тексте до сих пор, уменьшая вероятность повторения одной и той же строки.",
"Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.": "Массив ролей, чтобы указать более точный ответ. После (опционально) системное сообщение, роли пользователя и ассистента должны чередоваться с пользователем, а затем ассистент, заканчивающийся пользователем.",
"sonar-reasoning-pro": "sonar-reasoning-pro",
"sonar-reasoning": "звуковое рассуждение",
"sonar-pro": "sonar-pro",
"sonar": "sonar"
}


@@ -0,0 +1,24 @@
{
"AI powered search engine": "AI powered search engine",
"\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ": "\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ",
"Ask AI": "Ask AI",
"Enables users to generate prompt completion based on a specified model.": "Enables users to generate prompt completion based on a specified model.",
"Model": "Model",
"Question": "Question",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Presence penalty": "Presence penalty",
"Frequency penalty": "Frequency penalty",
"Roles": "Roles",
"The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.": "The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.",
"Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.": "Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.",
"The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.": "The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
"A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.": "Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.",
"sonar-reasoning-pro": "sonar-reasoning-pro",
"sonar-reasoning": "sonar-reasoning",
"sonar-pro": "sonar-pro",
"sonar": "sonar"
}


@@ -0,0 +1,25 @@
{
"Perplexity AI": "Perplexity AI",
"AI powered search engine": "AI powered search engine",
"\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ": "\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ",
"Ask AI": "Ask AI",
"Enables users to generate prompt completion based on a specified model.": "Enables users to generate prompt completion based on a specified model.",
"Model": "Model",
"Question": "Question",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Presence penalty": "Presence penalty",
"Frequency penalty": "Frequency penalty",
"Roles": "Roles",
"The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.": "The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.",
"Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.": "Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.",
"The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.": "The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
"A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.": "Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.",
"sonar-reasoning-pro": "sonar-reasoning-pro",
"sonar-reasoning": "sonar-reasoning",
"sonar-pro": "sonar-pro",
"sonar": "sonar"
}


@@ -0,0 +1,24 @@
{
"AI powered search engine": "AI powered search engine",
"\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ": "\n Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.\n ",
"Ask AI": "询问AI",
"Enables users to generate prompt completion based on a specified model.": "Enables users to generate prompt completion based on a specified model.",
"Model": "Model",
"Question": "Question",
"Temperature": "Temperature",
"Maximum Tokens": "Maximum Tokens",
"Top P": "Top P",
"Presence penalty": "Presence penalty",
"Frequency penalty": "Frequency penalty",
"Roles": "角色",
"The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.": "The amount of randomness in the response.Higher values are more random, and lower values are more deterministic.",
"Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.": "Please refer [guide](https://docs.perplexity.ai/guides/model-cards) for each model token limit.",
"The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.": "The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.",
"Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.",
"A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
"Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.": "Array of roles to specify more accurate response.After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.",
"sonar-reasoning-pro": "sonar-reasoning-pro",
"sonar-reasoning": "sonar-reasoning",
"sonar-pro": "sonar-pro",
"sonar": "sonar"
}


@@ -0,0 +1,23 @@
import { createPiece, PieceAuth } from '@activepieces/pieces-framework';
import { PieceCategory } from '@activepieces/shared';
import { createChatCompletionAction } from './lib/actions/create-chat-completion.action';

export const perplexityAiAuth = PieceAuth.SecretText({
  displayName: 'API Key',
  required: true,
  description: `
  Navigate to [API Settings](https://www.perplexity.ai/settings/api) and create new API key.
  `,
});

export const perplexityAi = createPiece({
  displayName: 'Perplexity AI',
  auth: perplexityAiAuth,
  minimumSupportedRelease: '0.36.1',
  logoUrl: 'https://cdn.activepieces.com/pieces/perplexity-ai.png',
  categories: [PieceCategory.ARTIFICIAL_INTELLIGENCE],
  description: 'AI powered search engine',
  authors: ['kishanprmr', 'AbdulTheActivePiecer'],
  actions: [createChatCompletionAction],
  triggers: [],
});


@@ -0,0 +1,151 @@
import { perplexityAiAuth } from '../../';
import { createAction, Property } from '@activepieces/pieces-framework';
import {
  AuthenticationType,
  httpClient,
  HttpMethod,
  propsValidation,
} from '@activepieces/pieces-common';
import { z } from 'zod';

export const createChatCompletionAction = createAction({
  auth: perplexityAiAuth,
  name: 'ask-ai',
  displayName: 'Ask AI',
  description:
    'Enables users to generate prompt completion based on a specified model.',
  props: {
    model: Property.StaticDropdown({
      displayName: 'Model',
      required: true,
      defaultValue: 'sonar-pro',
      options: {
        disabled: false,
        options: [
          // https://docs.perplexity.ai/guides/model-cards
          { label: 'sonar-reasoning-pro', value: 'sonar-reasoning-pro' },
          { label: 'sonar-reasoning', value: 'sonar-reasoning' },
          { label: 'sonar-pro', value: 'sonar-pro' },
          { label: 'sonar', value: 'sonar' },
        ],
      },
    }),
    prompt: Property.LongText({
      displayName: 'Question',
      required: true,
    }),
    temperature: Property.Number({
      displayName: 'Temperature',
      required: false,
      description:
        'The amount of randomness in the response. Higher values are more random, and lower values are more deterministic.',
      defaultValue: 0.2,
    }),
    max_tokens: Property.Number({
      displayName: 'Maximum Tokens',
      required: false,
      description: `Please refer to the [guide](https://docs.perplexity.ai/guides/model-cards) for each model's token limit.`,
    }),
    top_p: Property.Number({
      displayName: 'Top P',
      required: false,
      description:
        'The nucleus sampling threshold, valued between 0 and 1 inclusive. For each subsequent token, the model considers the results of the tokens with top_p probability mass.',
      defaultValue: 0.9,
    }),
    presence_penalty: Property.Number({
      displayName: 'Presence penalty',
      required: false,
      description:
        "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.",
      defaultValue: 0,
    }),
    frequency_penalty: Property.Number({
      displayName: 'Frequency penalty',
      required: false,
      description:
        "A multiplicative penalty greater than 0. Values greater than 1.0 penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
      defaultValue: 1.0,
    }),
    roles: Property.Json({
      displayName: 'Roles',
      required: false,
      description:
        'Array of roles to specify a more accurate response. After the (optional) system message, user and assistant roles should alternate with user then assistant, ending in user.',
      defaultValue: [
        { role: 'system', content: 'You are a helpful assistant.' },
      ],
    }),
  },
  async run(context) {
    await propsValidation.validateZod(context.propsValue, {
      temperature: z.number().min(0).max(2).optional(),
    });

    // Validate the optional message history: only the three roles the API
    // accepts are allowed.
    const rolesArray = context.propsValue.roles
      ? (context.propsValue.roles as { role: string; content: string }[])
      : [];
    const roles = rolesArray.map((item) => {
      const rolesEnum = ['system', 'user', 'assistant'];
      if (!rolesEnum.includes(item.role)) {
        throw new Error(
          'The only available roles are: [system, user, assistant]'
        );
      }
      return {
        role: item.role,
        content: item.content,
      };
    });
    // The user's question is always appended as the final user message.
    roles.push({ role: 'user', content: context.propsValue.prompt });

    const response = await httpClient.sendRequest({
      method: HttpMethod.POST,
      url: 'https://api.perplexity.ai/chat/completions',
      authentication: {
        type: AuthenticationType.BEARER_TOKEN,
        token: context.auth.secret_text,
      },
      headers: {
        'Content-Type': 'application/json',
      },
      body: {
        model: context.propsValue.model,
        messages: roles,
        temperature: context.propsValue.temperature,
        max_tokens: context.propsValue.max_tokens,
        top_p: context.propsValue.top_p,
        presence_penalty: context.propsValue.presence_penalty,
        frequency_penalty: context.propsValue.frequency_penalty,
      },
    });

    if (response.status === 200) {
      return {
        result: response.body.choices[0].message.content,
        citations: response.body.citations,
      };
    }
    return response.body;
  },
});
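
Stripped of the framework plumbing, the action above performs a single bearer-authenticated POST to Perplexity's chat completions endpoint. Below is a standalone sketch of the same call for reference; the endpoint, payload fields, and response shape mirror the action, while `PERPLEXITY_API_KEY` is an assumed environment variable:

```ts
// Minimal reproduction of the request the action sends (Node 18+ fetch).
async function askPerplexity(question: string) {
  const res = await fetch('https://api.perplexity.ai/chat/completions', {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${process.env.PERPLEXITY_API_KEY}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'sonar-pro',
      messages: [
        { role: 'system', content: 'You are a helpful assistant.' },
        { role: 'user', content: question },
      ],
      temperature: 0.2,
      top_p: 0.9,
    }),
  });
  if (!res.ok) throw new Error(`Perplexity API error: ${res.status}`);
  const body = await res.json();
  // The action surfaces choices[0].message.content plus citations.
  return { result: body.choices[0].message.content, citations: body.citations };
}
```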