smoothschedule/activepieces-fork/packages/pieces/community/deepseek/src/i18n/hi.json
poduck 3aa7199503 Add Activepieces integration for workflow automation
- Add Activepieces fork with SmoothSchedule custom piece
- Create integrations app with Activepieces service layer
- Add embed token endpoint for iframe integration
- Create Automations page with embedded workflow builder
- Add sidebar visibility fix for embed mode
- Add list inactive customers endpoint to Public API
- Include SmoothSchedule triggers: event created/updated/cancelled
- Include SmoothSchedule actions: create/update/cancel events, list resources/services/customers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
2025-12-18 22:59:37 -05:00


{
  "DeepSeek": "DeepSeek",
  "\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.": "\n Follow these instructions to get your DeepSeek API Key:\n\n1. Visit the following website: https://platform.deepseek.com/api_keys.\n2. Once on the website, locate and click on the option to obtain your DeepSeek API Key.",
  "Ask Deepseek": "Ask Deepseek",
  "Ask Deepseek anything you want!": "Ask Deepseek anything you want!",
  "Model": "Model",
  "Question": "Question",
  "Frequency penalty": "Frequency penalty",
  "Maximum Tokens": "Maximum Tokens",
  "Presence penalty": "Presence penalty",
  "Response Format": "Response Format",
  "Temperature": "Temperature",
  "Top P": "Top P",
  "Memory Key": "Memory Key",
  "Roles": "Roles",
  "The model which will generate the completion.": "The model which will generate the completion.",
  "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.",
  "The maximum number of tokens to generate. Possible values are between 1 and 8192.": "The maximum number of tokens to generate. Possible values are between 1 and 8192.",
  "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the mode's likelihood to talk about new topics.": "Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.",
  "The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself": "The format of the response. IMPORTANT: When using JSON Output, you must also instruct the model to produce JSON yourself",
  "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.": "Controls randomness: Lowering results in less random completions. As the temperature approaches zero, the model will become deterministic and repetitive. Between 0 and 2. We generally recommend altering this or top_p but not both.",
  "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.": "An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. Values <=1. We generally recommend altering this or temperature but not both.",
  "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.": "A memory key that will keep the chat history shared across runs and flows. Keep it empty to leave Deepseek without memory of previous messages.",
  "Array of roles to specify more accurate response": "Array of roles to specify more accurate response",
  "Text": "Text",
  "JSON": "JSON"
}
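Files like this map the English source string to its translation, which is why an untranslated locale simply repeats the key as the value. A minimal sketch of how such a flat dictionary might be consumed (the `translate` helper and `hiDict` names are hypothetical, not the actual Activepieces i18n API):

```typescript
// A flat i18n dictionary: the English source string is the key.
type I18nDict = Record<string, string>;

// Excerpt of a locale file such as hi.json; values here are still the
// untranslated English source strings, as in the file above.
const hiDict: I18nDict = {
  "Model": "Model",
  "Temperature": "Temperature",
};

// Look up a source string; fall back to the English source when the
// locale has no entry, so missing translations degrade gracefully.
function translate(dict: I18nDict, source: string): string {
  return dict[source] ?? source;
}

console.log(translate(hiDict, "Model"));       // found in the dictionary
console.log(translate(hiDict, "Top P"));       // falls back to the source string
```

This source-string-as-key scheme keeps the JSON self-documenting, at the cost of very long keys for multi-sentence help texts like the API-key instructions above.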