Add Activepieces integration for workflow automation
- Add Activepieces fork with SmoothSchedule custom piece
- Create integrations app with Activepieces service layer
- Add embed token endpoint for iframe integration (see the sketch after this list)
- Create Automations page with embedded workflow builder
- Add sidebar visibility fix for embed mode
- Add list inactive customers endpoint to Public API
- Include SmoothSchedule triggers: event created/updated/cancelled
- Include SmoothSchedule actions: create/update/cancel events, list resources/services/customers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
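A minimal sketch of how the embed token endpoint could mint a token for the embedded builder, assuming an Express-style handler and the jsonwebtoken package. Activepieces embed tokens are JWTs signed with a platform signing key; the payload fields shown (externalUserId, platformId), the route path, and the environment variable names are illustrative assumptions, not SmoothSchedule's actual code.

// Hypothetical sketch of the embed-token endpoint (not the actual
// SmoothSchedule implementation). Assumes an Express app and the
// `jsonwebtoken` package; Activepieces embed JWTs are signed with the
// private half of a signing key created in the platform admin.
import express from "express";
import jwt from "jsonwebtoken";

const app = express();

// Assumption: the signing key and platform id come from the environment.
const SIGNING_KEY = process.env.AP_SIGNING_PRIVATE_KEY!;
const PLATFORM_ID = process.env.AP_PLATFORM_ID!;

app.get("/api/integrations/embed-token", (req, res) => {
  // Assumption: auth middleware has already attached the current user.
  const userId = (req as any).user.id as string;

  const token = jwt.sign(
    {
      externalUserId: userId, // maps the SmoothSchedule user to an AP project
      platformId: PLATFORM_ID,
    },
    SIGNING_KEY,
    { algorithm: "RS256", expiresIn: "1h" }
  );

  res.json({ token });
});

The Automations page would then hand this token to the Activepieces embed SDK, which renders the workflow builder in an iframe; the sidebar visibility fix mentioned above applies in that embedded mode.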
@@ -0,0 +1,27 @@
{
  "DeepSeek": "DeepSeek",
  "\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.": "\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.",
  "Ask Deepseek": "Ask Deepseek",
  "Ask Deepseek anything you want!": "Ask Deepseek anything you want!",
  "Model": "Model",
  "Question": "Question",
  "Frequency penalty": "Frequency penalty",
  "Maximum Tokens": "Maximum Tokens",
  "Presence penalty": "Presence penalty",
  "Response Format": "Response Format",
  "Temperature": "Temperature",
  "Top P": "Top P",
  "Memory Key": "Memory Key",
  "Roles": "Roles",
  "The model which will generate the completion.": "The model which will generate the completion.",
  "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
  "The maximum number of tokens to generate. Possible values are between 1 and 8192.": "The maximum number of tokens to generate. Possible values are between 1 and 8192.",
  "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.",
  "The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself": "The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself",
  "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.",
  "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.",
  "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.",
  "Array of roles to specify more accurate response": "Array of roles to specify a more accurate response",
  "Text": "Text",
  "JSON": "JSON"
}
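For reference, here is roughly how the parameters documented by these strings map onto a DeepSeek chat-completion request. DeepSeek's API is OpenAI-compatible; the endpoint, model name, and values below are the publicly documented defaults, not code from this commit.

// Illustrative sketch: how the parameters described in the i18n strings
// map onto DeepSeek's OpenAI-compatible chat-completions endpoint.
// Requires Node 18+ for the global fetch.
async function askDeepseek(question: string): Promise<string> {
  const res = await fetch("https://api.deepseek.com/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.DEEPSEEK_API_KEY}`,
    },
    body: JSON.stringify({
      model: "deepseek-chat",               // "Model"
      messages: [
        { role: "user", content: question }, // "Question" (plus optional "Roles")
      ],
      temperature: 1,                       // "Temperature": between 0 and 2
      top_p: 1,                             // "Top P": values <= 1
      frequency_penalty: 0,                 // "Frequency penalty": -2.0 to 2.0
      presence_penalty: 0,                  // "Presence penalty": -2.0 to 2.0
      max_tokens: 4096,                     // "Maximum Tokens": 1 to 8192
      response_format: { type: "text" },    // "Response Format": Text or JSON
    }),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}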