Add Activepieces integration for workflow automation
- Add Activepieces fork with SmoothSchedule custom piece
- Create integrations app with Activepieces service layer
- Add embed token endpoint for iframe integration
- Create Automations page with embedded workflow builder
- Add sidebar visibility fix for embed mode
- Add list inactive customers endpoint to Public API
- Include SmoothSchedule triggers: event created/updated/cancelled
- Include SmoothSchedule actions: create/update/cancel events, list resources/services/customers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
18
activepieces-fork/docs/install/architecture/engine.mdx
Normal file
@@ -0,0 +1,18 @@
---
title: "Engine"
icon: "brain"
---

The Engine file contains the following types of operations:

- **Extract Piece Metadata**: Extracts metadata when installing new pieces.
- **Execute Step**: Executes a single test step.
- **Execute Flow**: Executes a flow.
- **Execute Property**: Executes dynamic dropdowns or dynamic properties.
- **Execute Trigger Hook**: Executes actions such as OnEnable, OnDisable, or extracting payloads.
- **Execute Auth Validation**: Validates the authentication of the connection.

The engine takes the flow JSON with an engine token scoped to the project and implements the API provided for the piece framework, such as:

- **Storage Service**: A simple key/value persistent store for the piece framework.
- **File Service**: A helper to store files either locally or in a database, for example when testing steps.
- **Fetch Metadata**: Retrieves metadata of the current running project.
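For pieces, the Storage Service behaves like a scoped key/value map. As an illustration only (the interface name and method signatures below are assumptions for this sketch, not the actual piece-framework API), a minimal in-memory model:

```typescript
// Hypothetical model of the engine's key/value Storage Service.
// Names (PieceStore, put/get/remove) are illustrative, not the real API.
interface PieceStore {
  put(key: string, value: string): void;
  get(key: string): string | undefined;
  remove(key: string): void;
}

// In-memory sketch; the real engine persists values per project.
class InMemoryStore implements PieceStore {
  private data = new Map<string, string>();

  put(key: string, value: string): void {
    this.data.set(key, value);
  }

  get(key: string): string | undefined {
    return this.data.get(key);
  }

  remove(key: string): void {
    this.data.delete(key);
  }
}

// A polling trigger might use such a store to remember its cursor.
const store = new InMemoryStore();
store.put("lastPollTimestamp", "2024-01-01T00:00:00Z");
console.log(store.get("lastPollTimestamp"));
```

The real store is persistent and scoped by the engine token mentioned above, so two projects never see each other's keys.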
67
activepieces-fork/docs/install/architecture/overview.mdx
Normal file
@@ -0,0 +1,67 @@
---
title: "Overview"
description: ""
icon: "cube"
---

This page describes the main components of Activepieces, focusing mainly on workflow execution.

## Components



**Activepieces:**

- **App**: The main application that organizes everything from APIs to scheduled jobs.
- **Worker**: Polls for new jobs and executes the flows with the engine, ensuring proper sandboxing, and sends results back to the app through the API.
- **Engine**: TypeScript code that parses the flow JSON and executes it. It is compiled into a single JS file.
- **UI**: Frontend written in React.

**Third Party**:

- **Postgres**: The main database for Activepieces.
- **Redis**: Used to power the queue via [BullMQ](https://docs.bullmq.io/).

## Reliability & Scalability

<Tip>
Postgres and Redis availability is outside the scope of this documentation, as many cloud providers already implement best practices to ensure their availability.
</Tip>

- **Webhooks**:
All webhooks are sent to the Activepieces app, which performs basic validation and adds them to the queue. In case of a spike, webhooks simply accumulate in the queue.

- **Polling Trigger**:
All recurring jobs are added to Redis. In case of a failure, the missed jobs will be executed again.

- **Flow Execution**:
Workers poll jobs from the queue. In the event of a spike, flow execution will still work but may be delayed, depending on the size of the spike.

To scale Activepieces, you typically need to increase the replicas of the workers, the app, or the Postgres database. A small Redis instance is sufficient, as it can handle thousands of jobs per second and rarely becomes a bottleneck.

## Repository Structure

The repository is structured as a monorepo using the NX build system, with TypeScript as the primary language. It is divided into several packages:

```
.
├── packages
│   ├── react-ui
│   ├── server
│   │   ├── api
│   │   ├── worker
│   │   ├── shared
│   │   ├── ee
│   ├── engine
│   ├── pieces
│   ├── shared
```

- `react-ui`: This package contains the user interface, implemented using the React framework.
- `server-api`: This package contains the main application, written in TypeScript with the Fastify framework.
- `server-worker`: This package contains the logic for accepting flow jobs and executing them with the engine.
- `server-shared`: This package contains the logic shared between the worker and the app.
- `engine`: This package contains the logic for flow execution within the sandbox.
- `pieces`: This package contains the implementation of triggers and actions for third-party apps.
- `shared`: This package contains shared data models and helper functions used by the other packages.
- `ee`: This package contains features that are only available in the paid edition.
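The webhook path described above (validate in the app, enqueue, let workers drain the backlog) can be modeled with a simple in-memory queue. This is an illustrative sketch only; the real system uses Redis with BullMQ, and all names here are hypothetical:

```typescript
// Illustrative model of "validate, enqueue, workers poll".
// Not the real BullMQ-backed implementation; names are hypothetical.
type WebhookJob = { flowId: string; payload: unknown };

class JobQueue {
  private jobs: WebhookJob[] = [];

  // App side: basic validation, then enqueue. Spikes just grow the queue.
  enqueue(job: WebhookJob): boolean {
    if (!job.flowId) return false; // basic validation
    this.jobs.push(job);
    return true;
  }

  // Worker side: poll one job at a time; undefined once the queue is drained.
  poll(): WebhookJob | undefined {
    return this.jobs.shift();
  }
}

const queue = new JobQueue();
queue.enqueue({ flowId: "flow-1", payload: { event: "created" } });
queue.enqueue({ flowId: "flow-2", payload: { event: "updated" } });

// A worker drains the queue in arrival order.
let job: WebhookJob | undefined;
while ((job = queue.poll()) !== undefined) {
  console.log(`executing ${job.flowId}`);
}
```

The key property, mirrored from the Reliability section: a spike never rejects webhooks, it only lengthens the queue, and execution latency grows with the backlog.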
107
activepieces-fork/docs/install/architecture/performance.mdx
Normal file
@@ -0,0 +1,107 @@
---
title: "Benchmarking"
icon: "chart-line"
---

## Performance

On average, Activepieces (self-hosted) can handle 95 flow executions per second on a single instance (including PostgreSQL and Redis) with under 300ms latency.\
It can scale up much further by increasing instance resources and/or adding more instances.

The result of **5000** requests with a concurrency of **25**:

```
This is ApacheBench, Version 2.3 <$Revision: 1913912 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests


Server Software:
Server Hostname:        localhost
Server Port:            4200

Document Path:          /api/v1/webhooks/GMtpNwDsy4mbJe3369yzy/sync
Document Length:        16 bytes

Concurrency Level:      25
Time taken for tests:   52.087 seconds
Complete requests:      5000
Failed requests:        0
Total transferred:      1375000 bytes
HTML transferred:       80000 bytes
Requests per second:    95.99 [#/sec] (mean)
Time per request:       260.436 [ms] (mean)
Time per request:       10.417 [ms] (mean, across all concurrent requests)
Transfer rate:          25.78 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:    32  260  23.8    254     756
Waiting:       31  260  23.8    254     756
Total:         32  260  23.8    254     756

Percentage of the requests served within a certain time (ms)
  50%    254
  66%    261
  75%    267
  80%    272
  90%    289
  95%    306
  98%    327
  99%    337
 100%    756 (longest request)
```

#### Benchmarking

Here is how to reproduce the benchmark:

1. Run Activepieces with PostgreSQL and Redis with the following environment variables:

```env
AP_EXECUTION_MODE=SANDBOX_CODE_ONLY
AP_FLOW_WORKER_CONCURRENCY=25
```

2. Create a flow with a Catch Webhook trigger and a Return Response action.



3. Get the webhook URL from the webhook trigger and append `/sync` to it.
4. Install a benchmark tool like [ab](https://httpd.apache.org/docs/2.4/programs/ab.html):

```bash
sudo apt-get install apache2-utils
```

5. Run the benchmark:

```bash
ab -c 25 -n 5000 http://localhost:4200/api/v1/webhooks/GMtpNwDsy4mbJe3369yzy/sync
```

6. Check the results.

Instance specs used to get the above results:

- 16GB RAM
- AMD Ryzen 7 8845HS (8 cores, 16 threads)
- Ubuntu 24.04 LTS

<Tip>
These benchmarks are based on running Activepieces in `SANDBOX_CODE_ONLY` mode. This does **not** represent the performance of Activepieces Cloud, which uses a different sandboxing mechanism to support multi-tenancy. For more information, see [Sandboxing](/install/architecture/workers#sandboxing).
</Tip>
98
activepieces-fork/docs/install/architecture/workers.mdx
Normal file
@@ -0,0 +1,98 @@
---
title: "Workers & Sandboxing"
icon: "gears"
---

This component is responsible for polling jobs from the app, preparing the sandbox, and executing them with the engine.

## Jobs

There are three types of jobs:

- **Recurring Jobs**: Polling/schedule trigger jobs for active flows.
- **Flow Jobs**: Flows that are currently being executed.
- **Webhook Jobs**: Webhooks that still need to be ingested, as third-party webhooks can map to multiple flows or need mapping.

<Tip>
This documentation does not discuss how the engine works beyond stating that it takes jobs and produces output. Please refer to [engine](./engine) for more information.
</Tip>

## Sandboxing

A sandbox in Activepieces is the environment in which the engine executes the flow. There are four types of sandboxes, each with different trade-offs:

<Snippet file="execution-mode.mdx" />

### No Sandboxing & V8 Sandboxing

The difference between the two modes is in the execution of code pieces. For V8 sandboxing, we use [isolated-vm](https://www.npmjs.com/package/isolated-vm), which relies on V8 isolates to isolate code pieces.

These are the steps used to execute the flow:

<Steps>
  <Step title="Prepare Code Pieces">
    If the compiled code doesn't exist, it will be built with `bun`, and the necessary npm packages will be prepared, if possible.
  </Step>
  <Step title="Install Pieces">
    Pieces are npm packages; we use `bun` to install them.
  </Step>
  <Step title="Execution">
    A pool of worker threads is kept warm, with the engine running and listening in each. Each thread executes one engine operation and sends back the result upon completion.
  </Step>
</Steps>

#### Security

In a self-hosted environment, all piece installations are done by the **platform admin**. It is assumed that the pieces are secure, as they have full access to the machine.

Code pieces provided by the end user are isolated using V8, which restricts the user to browser-style JavaScript instead of Node.js with npm.

#### Performance

Flow execution is as fast as the JavaScript itself, although there is some overhead from polling the queue and preparing the files the first time a flow is executed.

#### Benchmark

TBD

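The "warm pool of worker threads" execution step can be modeled roughly as follows. This is an illustrative sketch only, not the actual engine code; all names are hypothetical, and real worker threads are replaced by async functions for brevity:

```typescript
// Rough model of a warm pool: a fixed number of "threads", each taking
// one engine operation at a time from a shared list. Names are
// hypothetical; the real worker uses actual worker threads with the
// compiled engine kept resident between operations.
type EngineOperation = { id: number; run: () => string };

async function runPool(ops: EngineOperation[], poolSize: number): Promise<string[]> {
  const results: string[] = [];
  let next = 0;

  // Each "thread" loops: claim the next pending operation, execute it,
  // record the result, repeat until no operations remain.
  async function thread(): Promise<void> {
    while (next < ops.length) {
      const op = ops[next++];
      results[op.id] = op.run();
    }
  }

  await Promise.all(Array.from({ length: poolSize }, () => thread()));
  return results;
}

const ops: EngineOperation[] = [0, 1, 2, 3].map((i) => ({
  id: i,
  run: () => `op-${i} done`,
}));

runPool(ops, 2).then((r) => console.log(r));
```

The point of the pool is amortization: the expensive part (booting the engine) happens once per thread, not once per operation, which is why this mode stays fast.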
### Kernel Namespaces Sandboxing

This consists of two steps: preparing the sandbox, then executing the flow.

#### Prepare the folder

Each flow has a folder with everything required to execute it: the **engine**, the **code pieces**, and the **npm packages**.

<Steps>
  <Step title="Prepare Code Pieces">
    If the compiled code doesn't exist, it will be compiled using the TypeScript compiler (tsc), and the necessary npm packages will be prepared, if possible.
  </Step>
  <Step title="Install Pieces">
    Pieces are npm packages; we perform a simple check, and if they don't exist, we use `pnpm` to install them.
  </Step>
</Steps>

#### Execute Flow using Sandbox

In this mode, we use kernel namespaces to isolate everything (file system, memory, CPU). The folder prepared earlier is bound as a **read-only** directory.

We then use the command line to spin up the isolation with a new Node.js process, something like this:

```bash
./isolate node path/to/flow.js --- rest of args
```

#### Security

Each flow execution runs in its own namespaces, which means pieces are isolated in separate processes and namespaces. The user can therefore run bash scripts and use the file system safely, as it is limited and removed after the execution. In this mode, the user can use any **npm package** in their code pieces.

#### Performance

This mode is **slow** and **CPU intensive**. The reason is the **cold boot** of Node.js: each flow execution requires a new **Node.js** process, which consumes significant resources and takes time to compile the code and start executing.

#### Benchmark

TBD

@@ -0,0 +1,168 @@
---
title: "Breaking Changes"
description: "This list shows all versions that include breaking changes and how to upgrade."
icon: "hammer"
---

## 0.74.0

### What has changed?
- The default embedded database for development and lightweight deployments has changed from **SQLite3** to [**PGLite**](https://pglite.dev/) (embedded PostgreSQL).
- The environment variable `AP_DB_TYPE=SQLITE3` is now deprecated and replaced with `AP_DB_TYPE=PGLITE`.
- Existing SQLite databases will be automatically migrated to PGLite on first startup.
- Templates are broken in this version. A migration issue changed template IDs, breaking API endpoints. This will be fixed in the next patch release.

### Do you need to take action?
- **If you are using `AP_DB_TYPE=SQLITE3`:** Update your configuration to use `AP_DB_TYPE=PGLITE` instead.
- **If you are using templates:** Wait for the next patch release to fix the template IDs.

## 0.73.0

### What has changed?
- Major change to MCP: [Read the announcement.](https://community.activepieces.com/t/mcp-update-easier-faster-and-more-secure/11177)
- SMTP configured in the platform admin is no longer supported; you need to use the `AP_SMTP_` [environment variables.](https://www.activepieces.com/docs/install/configuration/environment-variables#environment-variables)

### Do you need to take action?
- If you are currently using MCP, review the linked announcement for important migration details and upgrade guidance.

## 0.71.0

### What has changed?

- In a separate-workers setup, workers now have access to Redis.
- The `AP_EXECUTION_MODE` mode `SANDBOXED` is now deprecated and replaced with `SANDBOX_PROCESS`.
- Code Copilot has been deprecated. It will be reintroduced in a different, more powerful form in the future.

### When is action necessary?

- If you have a separate-workers setup, make sure the workers have access to Redis.
- If you are using the `AP_EXECUTION_MODE` mode `SANDBOXED`, replace it with `SANDBOX_PROCESS`.

## 0.70.0

### What has changed?
- `AP_QUEUE_MODE` is now deprecated and replaced with `AP_REDIS_TYPE`.
- If you are using Sentinel Redis, you should set `AP_REDIS_TYPE` to `SENTINEL`.

### When is action necessary?

- If you are using `AP_QUEUE_MODE`, replace it with `AP_REDIS_TYPE`.
- If you are using Sentinel Redis, set `AP_REDIS_TYPE` to `SENTINEL`.

## 0.69.0

### What has changed?
- `AP_FLOW_WORKER_CONCURRENCY` and `AP_SCHEDULED_WORKER_CONCURRENCY` are now deprecated (all jobs share a single queue) and are replaced with `AP_WORKER_CONCURRENCY`.

### When is action necessary?

- If you are using `AP_FLOW_WORKER_CONCURRENCY` or `AP_SCHEDULED_WORKER_CONCURRENCY`, replace them with `AP_WORKER_CONCURRENCY`.

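Several of the changes above rename environment variables. As an orientation aid (values are illustrative placeholders, not recommendations), a configuration upgraded across these versions would change roughly like this:

```env
# Before (deprecated variables)
AP_QUEUE_MODE=REDIS
AP_EXECUTION_MODE=SANDBOXED
AP_FLOW_WORKER_CONCURRENCY=10
AP_SCHEDULED_WORKER_CONCURRENCY=10
AP_DB_TYPE=SQLITE3

# After (0.69.0 through 0.74.0)
AP_REDIS_TYPE=STANDALONE
AP_EXECUTION_MODE=SANDBOX_PROCESS
AP_WORKER_CONCURRENCY=10
AP_DB_TYPE=PGLITE
```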
## 0.66.0

### What has changed?

- If you use the embedding SDK, please upgrade to version 0.6.0. `embedding.dashboard.hideSidebar`, which used to hide the navbar above the flows table in the dashboard, has been replaced by `embedding.dashboard.hideFlowsPageNavbar`.

## 0.64.0

### What has changed?

- MCP management is removed from the embedding SDK.

## 0.63.0

### What has changed?

- Replicate provider's text models have been removed.

### When is action necessary?

- If you are using one of Replicate's text models, you should replace it with another model from another provider.

## 0.46.0

### What has changed?

- The UI for "Array of Properties" inputs in the pieces has been updated, particularly affecting the "Dynamic Value" toggle functionality.

### When is action necessary?

- No action is required for this change.
- Your published flows will continue to work without interruption.
- When editing existing flows that use the "Dynamic Value" toggle on "Array of Properties" inputs (such as the "files" parameter in the "Extract Structured Data" action of the "Utility AI" piece), the end user will need to remap the values again.
- For details on the new UI implementation, refer to this [announcement](https://community.activepieces.com/t/inline-items/8964).

## 0.38.6

### What has changed?

- Workers no longer rely on the `AP_FLOW_WORKER_CONCURRENCY` and `AP_SCHEDULED_WORKER_CONCURRENCY` environment variables. These values are now retrieved from the app server.

### When is action necessary?

- If `AP_CONTAINER_TYPE` is set to `WORKER` on the worker machine, and `AP_SCHEDULED_WORKER_CONCURRENCY` or `AP_FLOW_WORKER_CONCURRENCY` are set to zero on the app server, workers will stop processing the queues. To fix this, check the [Separate Worker from App](https://www.activepieces.com/docs/install/configuration/separate-workers) documentation and set the `AP_CONTAINER_TYPE` to fetch the necessary values from the app server. If no container type is set on the worker machine, this is not a breaking change.

## 0.35.1

### What has changed?

- The `name` attribute has been renamed to `externalId` in the `AppConnection` entity.
- The `displayName` attribute has been added to the `AppConnection` entity.

### When is action necessary?
- If you are using the connections API, you should update the `name` attribute to `externalId` and add the `displayName` attribute.

## 0.35.0

### What has changed?

- All branches are now converted to routers, and downgrade is not supported.

## 0.33.0

### What has changed?

- Files from actions or triggers are now stored in the database / S3 to support retries from certain steps, and the size of files from actions is now subject to the limit of `AP_MAX_FILE_SIZE_MB`.
- Files in triggers were previously passed as base64-encoded strings; now they are passed as file paths in the database / S3. Paused flows that have triggers from version 0.29.0 or earlier will no longer work.

### When is action necessary?
- If you are dealing with large files in actions, consider increasing `AP_MAX_FILE_SIZE_MB` to a higher value, and make sure the storage system (database/S3) has enough capacity for the files.

## 0.30.0

### What has changed?

- `AP_SANDBOX_RUN_TIME_SECONDS` is now deprecated and replaced with `AP_FLOW_TIMEOUT_SECONDS`.
- `AP_CODE_SANDBOX_TYPE` is now deprecated and replaced with a new mode in `AP_EXECUTION_MODE`.

### When is action necessary?

- If you have `AP_CODE_SANDBOX_TYPE` set to `V8_ISOLATE`, you should switch to `AP_EXECUTION_MODE` set to `SANDBOX_CODE_ONLY`.
- If you are using `AP_SANDBOX_RUN_TIME_SECONDS` to set the sandbox run time limit, you should switch to `AP_FLOW_TIMEOUT_SECONDS`.

## 0.28.0

### What has changed?

- **Project Members:**
  - The `EXTERNAL_CUSTOMER` role has been deprecated and replaced with the `OPERATOR` role. Please check the permissions page for more details.
  - All pending invitations will be removed.
  - The User Invitation entity has been introduced to send invitations. You can still use the Project Member API to add roles for the user, but it requires the user to exist. If you want to send an email, use the User Invitation; a record in the project members will be created after the user accepts and registers an account.
- **Authentication:**
  - The `SIGN_UP_ENABLED` environment variable, which allowed multiple users to sign up for different platforms/projects, has been removed. It has been replaced with inviting users to the same platform/project. All old users should continue to work normally.

### When is action necessary?

- **Project Members:**

  If you use the embedding SDK or the create project member API with the `EXTERNAL_CUSTOMER` role, you should start using the `OPERATOR` role instead.

- **Authentication:**

  Multiple platforms/projects are no longer supported in the community edition. Technically, everything is still there, but you have to work around it using the API, as the authentication system has changed. If you have already created the users/platforms, they should continue to work, and no action is required.

@@ -0,0 +1,136 @@
---
title: 'Environment Variables'
description: ''
icon: 'gear'
---

To configure Activepieces, you will need to set some environment variables. There is a file called `.env` at the root directory of our main repo.

<Tip> When you execute the [tools/deploy.sh](https://github.com/activepieces/activepieces/blob/main/tools/deploy.sh) script in the Docker installation tutorial, it will produce these values. </Tip>

## Environment Variables

| Variable | Description | Default Value | Example |
| -------- | ----------- | ------------- | ------- |
| `AP_CONFIG_PATH` | Optional parameter for specifying the path to store the PGLite database and local settings. | `~/.activepieces` | |
| `AP_CLOUD_AUTH_ENABLED` | Turn off the use of Activepieces OAuth2 applications. | `false` | |
| `AP_DB_TYPE` | The type of database to use: `POSTGRES` for external PostgreSQL, `PGLITE` for the embedded database. **Note:** `SQLITE3` is deprecated and will be automatically migrated to `PGLITE`. | `POSTGRES` | |
| `AP_EXECUTION_MODE` | One of `SANDBOX_PROCESS`, `UNSANDBOXED`, `SANDBOX_CODE_ONLY`, or `SANDBOX_CODE_AND_PROCESS`. If you decide to change this, make sure to carefully read https://www.activepieces.com/docs/install/architecture/workers | `UNSANDBOXED` | |
| `AP_WORKER_CONCURRENCY` | The number of worker jobs that can be processed at the same time. | `10` | |
| `AP_AGENTS_WORKER_CONCURRENCY` | The number of agents that can be processed at the same time. | `10` | |
| `AP_ENCRYPTION_KEY` | ❗️ Encryption key used for connections; a 32-character (16-byte) hexadecimal key. You can generate one with `openssl rand -hex 16`. | None | |
| `AP_EXECUTION_DATA_RETENTION_DAYS` | The number of days to retain execution data, logs, and events. | `30` | |
| `AP_FRONTEND_URL` | ❗️ URL used to specify the redirect URL and webhook URL. | None | [https://demo.activepieces.com](https://demo.activepieces.com) |
| `AP_INTERNAL_URL` | (BETA) Used to specify the SSO authentication URL. | None | [https://demo.activepieces.com/api](https://demo.activepieces.com/api) |
| `AP_JWT_SECRET` | ❗️ Secret used for generating JWT tokens; a 32-character hexadecimal key. You can generate one with `openssl rand -hex 32`. | None | |
| `AP_QUEUE_UI_ENABLED` | Enable the queue UI (only works with Redis). | `true` | |
| `AP_QUEUE_UI_USERNAME` | The username for the queue UI. Required if `AP_QUEUE_UI_ENABLED` is set to `true`. | None | |
| `AP_QUEUE_UI_PASSWORD` | The password for the queue UI. Required if `AP_QUEUE_UI_ENABLED` is set to `true`. | None | |
| `AP_REDIS_FAILED_JOB_RETENTION_DAYS` | The number of days to retain failed jobs in Redis. | `30` | |
| `AP_REDIS_FAILED_JOB_RETENTION_MAX_COUNT` | The maximum number of failed jobs to retain in Redis. | `2000` | |
| `AP_TRIGGER_DEFAULT_POLL_INTERVAL` | How many minutes between checks for new data for pieces with scheduled triggers, such as new Google Contacts. | `5` | |
| `AP_PIECES_SOURCE` | `FILE` for local development, `DB` for database. You can find more information in the [Setting Piece Source](#setting-piece-source) section. | `CLOUD_AND_DB` | |
| `AP_PIECES_SYNC_MODE` | `NONE` for no metadata syncing, `OFFICIAL_AUTO` for automatic syncing of pieces metadata from the cloud. | `OFFICIAL_AUTO` | |
| `AP_POSTGRES_DATABASE` | ❗️ The name of the PostgreSQL database. | None | |
| `AP_POSTGRES_HOST` | ❗️ The hostname or IP address of the PostgreSQL server. | None | |
| `AP_POSTGRES_PASSWORD` | ❗️ The password for PostgreSQL; you can generate a 32-character hexadecimal key with `openssl rand -hex 32`. | None | |
| `AP_POSTGRES_PORT` | ❗️ The port number for the PostgreSQL server. | None | |
| `AP_POSTGRES_USERNAME` | ❗️ The username for the PostgreSQL user. | None | |
| `AP_POSTGRES_USE_SSL` | Use SSL to connect to the Postgres database. | `false` | |
| `AP_POSTGRES_SSL_CA` | The SSL certificate (CA) used to connect to the Postgres database. | None | |
| `AP_POSTGRES_URL` | Alternatively, you can specify only the connection string (e.g. `postgres://user:password@host:5432/database`) instead of providing the database, host, port, username, and password. | None | |
| `AP_POSTGRES_POOL_SIZE` | Maximum number of clients the pool should contain for the PostgreSQL database. | None | |
| `AP_POSTGRES_IDLE_TIMEOUT_MS` | Sets the idle timeout for the PostgreSQL pool. | `30000` | |
| `AP_REDIS_TYPE` | Where to run Redis: in memory (`MEMORY`), in a dedicated instance (`STANDALONE`), or in a sentinel setup (`SENTINEL`). | `STANDALONE` | |
| `AP_REDIS_URL` | If a Redis connection URL is specified, all other Redis properties will be ignored. | None | |
| `AP_REDIS_USER` | ❗️ Username to use when connecting to Redis. | None | |
| `AP_REDIS_PASSWORD` | ❗️ Password to use when connecting to Redis. | None | |
| `AP_REDIS_HOST` | ❗️ The hostname or IP address of the Redis server. | None | |
| `AP_REDIS_PORT` | ❗️ The port number for the Redis server. | None | |
| `AP_REDIS_DB` | The Redis database index to use. | `0` | |
| `AP_REDIS_USE_SSL` | Connect to Redis with SSL. | `false` | |
| `AP_REDIS_SSL_CA_FILE` | The path to the CA file for the Redis server. | None | |
| `AP_REDIS_SENTINEL_HOSTS` | If specified, this should be a comma-separated list of `host:port` pairs for Redis Sentinels. Make sure to set `AP_REDIS_TYPE` to `SENTINEL`. | None | `sentinel-host-1:26379,sentinel-host-2:26379,sentinel-host-3:26379` |
| `AP_REDIS_SENTINEL_NAME` | The name of the master node monitored by the sentinels. | None | `sentinel-host-1` |
| `AP_REDIS_SENTINEL_ROLE` | The role to connect to, either `master` or `slave`. | None | `master` |
| `AP_TRIGGER_TIMEOUT_SECONDS` | Maximum allowed runtime (in seconds) for a trigger to perform polling. | `60` | |
| `AP_FLOW_TIMEOUT_SECONDS` | Maximum allowed runtime (in seconds) for a flow. | `600` | |
| `AP_AGENT_TIMEOUT_SECONDS` | Maximum allowed runtime (in seconds) for an agent. | `600` | |
| `AP_SANDBOX_MEMORY_LIMIT` | The maximum amount of memory (in kilobytes) that a single sandboxed worker process can use. This helps prevent runaway memory usage in custom code or pieces. If not set, the default is 1048576 KB (1024 MB). | `1048576` | `1048576` |
| `AP_SANDBOX_PROPAGATED_ENV_VARS` | Environment variables that will be propagated to the sandboxed code. If you are using this for pieces, we strongly suggest keeping everything in the authentication object to make sure it works across AP instances. | None | |
| `AP_TELEMETRY_ENABLED` | Collect telemetry information. | `true` | |
| `AP_TEMPLATES_SOURCE_URL` | The endpoint queried for templates; remove it and templates will be removed from the UI. | `https://cloud.activepieces.com/api/v1/templates` | |
| `AP_WEBHOOK_TIMEOUT_SECONDS` | The default timeout for webhooks. The maximum allowed is 15 minutes. Please note that Cloudflare limits it to 30 seconds. If you are using a reverse proxy for SSL, make sure it's configured correctly. | `30` | |
| `AP_TRIGGER_FAILURE_THRESHOLD` | The maximum number of consecutive trigger failures; 576 by default, which is equivalent to approximately 2 days. | `576` | |
| `AP_PROJECT_RATE_LIMITER_ENABLED` | Enforce rate limits and prevent excessive usage by a single project. | `true` | |
| `AP_MAX_CONCURRENT_JOBS_PER_PROJECT` | The maximum number of active runs a project can have. Used to enforce rate limits and prevent excessive usage by a single project. | `100` | |
| `AP_S3_ACCESS_KEY_ID` | The access key ID for your S3-compatible storage service. Not required if `AP_S3_USE_IRSA` is `true`. | None | |
| `AP_S3_SECRET_ACCESS_KEY` | The secret access key for your S3-compatible storage service. Not required if `AP_S3_USE_IRSA` is `true`. | None | |
| `AP_S3_BUCKET` | The name of the S3 bucket to use for file storage. | None | |
| `AP_S3_ENDPOINT` | The endpoint URL for your S3-compatible storage service. Not required if `AWS_ENDPOINT_URL` is set. | None | `https://s3.amazonaws.com` |
| `AP_S3_REGION` | The region where your S3 bucket is located. Not required if `AWS_REGION` is set. | None | `us-east-1` |
| `AP_S3_USE_SIGNED_URLS` | Routes traffic to S3 directly. Should be enabled if the S3 bucket is public. | None | |
| `AP_S3_USE_IRSA` | Use IAM Roles for Service Accounts (IRSA) to connect to S3. When `true`, `AP_S3_ACCESS_KEY_ID` and `AP_S3_SECRET_ACCESS_KEY` are not required. | None | `true` |
| `AP_SMTP_HOST` | The host name for the SMTP server that Activepieces uses to send emails. | None | `mail.example.com` |
| `AP_SMTP_PORT` | The port number for the SMTP server that Activepieces uses to send emails. | None | `587` |
| `AP_SMTP_USERNAME` | The username for the SMTP server that Activepieces uses to send emails. | None | `test@mail.example.com` |
| `AP_SMTP_PASSWORD` | The password for the SMTP server that Activepieces uses to send emails. | None | `secret1234` |
| `AP_SMTP_SENDER_EMAIL` | The email address from which Activepieces sends emails. | None | `test@mail.example.com` |
| `AP_SMTP_SENDER_NAME` | The sender name Activepieces uses to send emails. | None | |
| `AP_MAX_FILE_SIZE_MB` | The maximum allowed file size in megabytes for uploads, including logs of flow runs. If logs exceed this size, they will be truncated, which may cause flow execution issues. | `10` | `10` |
| `AP_FILE_STORAGE_LOCATION` | The location to store files: `DB` for the database or `S3` for an S3-compatible storage service. | `DB` | |
| `AP_PAUSED_FLOW_TIMEOUT_DAYS` | The maximum allowed pause duration in days for a paused flow; note that it cannot exceed `AP_EXECUTION_DATA_RETENTION_DAYS`. | `30` | |
| `AP_MAX_RECORDS_PER_TABLE` | The maximum allowed number of records per table. | `1500` | `1500` |
| `AP_MAX_FIELDS_PER_TABLE` | The maximum allowed number of fields per table. | `15` | `15` |
| `AP_MAX_TABLES_PER_PROJECT` | The maximum allowed number of tables per project. | `20` | `20` |
| `AP_MAX_MCPS_PER_PROJECT` | The maximum allowed number of MCPs per project. | `20` | `20` |
| `AP_ENABLE_FLOW_ON_PUBLISH` | Whether publishing a new flow version should automatically enable the flow. | `true` | `false` |
| `AP_ISSUE_ARCHIVE_DAYS` | Controls the automatic archival of issues. Issues that have not been updated for this many days will be automatically moved to an archived state. | `14` | `1` |
| `AP_APP_TITLE` | Initial title shown in the browser tab while loading the app. | `Activepieces` | `Activepieces` |
|
||||
| `AP_FAVICON_URL` | Initial favicon shown in the browser tab while loading the app | `https://cdn.activepieces.com/brand/favicon.ico` | `https://cdn.activepieces.com/brand/favicon.ico`
|
||||
|
||||
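As a worked example, the SMTP variables above can be combined in an `.env` file like this (hostname and credentials are placeholders, not real values):

```
AP_SMTP_HOST=mail.example.com
AP_SMTP_PORT=587
AP_SMTP_USERNAME=test@mail.example.com
AP_SMTP_PASSWORD=secret1234
AP_SMTP_SENDER_EMAIL=test@mail.example.com
AP_SMTP_SENDER_NAME=Activepieces
```

Remember that these values are only used if no email configuration has been entered in the platform admin screen.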
<Warning>
The frontend URL is essential for webhooks and app triggers to work. It must be accessible to third parties so they can send data.
</Warning>


### Setting Webhook (Frontend URL)

The default URL is set to the machine's IP address. To ensure proper operation, make sure this address is reachable from the internet or set the `AP_FRONTEND_URL` environment variable.

One possible solution is using a service like ngrok ([https://ngrok.com/](https://ngrok.com/)) to expose the frontend port (4200) to the internet.
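For example, after running `ngrok http 4200`, you would point Activepieces at the generated URL (the subdomain below is a placeholder):

```
AP_FRONTEND_URL=https://abc123.ngrok-free.app
```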

### Redis Configuration

Set the `AP_REDIS_URL` environment variable to the connection URL of your Redis server.

Note that if a Redis connection URL is specified, all other **Redis properties** are ignored.
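A minimal sketch of the single-URL form, assuming a password-protected Redis on a private host (all values are placeholders):

```
# When AP_REDIS_URL is set, the discrete REDIS_* variables are ignored
AP_REDIS_URL=redis://default:secret@redis.internal:6379/0
```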

<Info>
If you don't have a single Redis connection URL, you can configure the connection using the following variables instead:

- `REDIS_USER`: The username to use when connecting to Redis.
- `REDIS_PASSWORD`: The password to use when connecting to Redis.
- `REDIS_HOST`: The hostname or IP address of the Redis server.
- `REDIS_PORT`: The port number of the Redis server.
- `REDIS_DB`: The Redis database index to use.
- `REDIS_USE_SSL`: Connect to Redis with SSL.
</Info>
<Info>
If you are using **Redis Sentinel**, you can set the following environment variables:
- `AP_REDIS_TYPE`: Set this to `SENTINEL`.
- `AP_REDIS_SENTINEL_HOSTS`: A comma-separated list of `host:port` pairs for Redis Sentinels. When set, all other Redis properties are ignored.
- `AP_REDIS_SENTINEL_NAME`: The name of the master node monitored by the sentinels.
- `AP_REDIS_SENTINEL_ROLE`: The role to connect to, either `master` or `slave`.
- `AP_REDIS_PASSWORD`: The password to use when connecting to Redis.
- `AP_REDIS_USE_SSL`: Connect to Redis with SSL.
- `AP_REDIS_SSL_CA_FILE`: The path to the CA file for the Redis server.
</Info>
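A minimal Sentinel configuration sketch, with placeholder hosts and master name:

```
AP_REDIS_TYPE=SENTINEL
AP_REDIS_SENTINEL_HOSTS=10.0.0.1:26379,10.0.0.2:26379,10.0.0.3:26379
AP_REDIS_SENTINEL_NAME=mymaster
AP_REDIS_SENTINEL_ROLE=master
```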

### SMTP Configuration

SMTP can be configured both from the platform admin screen and through environment variables. The environment variables are only used if no email configuration has been entered in the platform admin screen.

Activepieces only uses the configuration from the environment variables if `AP_SMTP_HOST`, `AP_SMTP_PORT`, `AP_SMTP_USERNAME`, and `AP_SMTP_PASSWORD` all have a value set. TLS is supported.
52
activepieces-fork/docs/install/configuration/hardware.mdx
Normal file
@@ -0,0 +1,52 @@
---
title: "Hardware Requirements"
icon: "server"
description: "Specifications for hosting Activepieces"
---

For more information about the architecture, please visit our [architecture](../architecture/overview) page.

### Technical Specifications

Activepieces is designed to be memory-intensive rather than CPU-intensive. A modest instance will suffice for most scenarios, but requirements can vary based on specific use cases.

| Component | Memory (RAM) | CPU Cores | Disk Space | Notes |
| ------------- | ------------ | --------- | ---------- | ----- |
| PostgreSQL | 1 GB | 1 | - | |
| Redis | 1 GB | 1 | - | |
| Activepieces | 4 GB | 1 | 30 GB | |

<Tip>
The above recommendations are designed to meet the needs of the majority of use cases.
</Tip>

## Scaling Factors

### Redis

Redis requires minimal scaling as it primarily stores jobs during processing. Activepieces leverages BullMQ, which is capable of handling a substantial number of jobs per second.

### PostgreSQL

<Tip>
**Scaling Tip:** Since files are stored in the database, you can alleviate the load by configuring S3 storage for file management.
</Tip>

PostgreSQL is typically not the system's bottleneck.

### Activepieces Container

<Tip>
**Scaling Tip:** The Activepieces container is stateless, allowing for seamless horizontal scaling.
</Tip>

- `FLOW_WORKER_CONCURRENCY` and `SCHEDULED_WORKER_CONCURRENCY` dictate the number of concurrent jobs processed for flows and scheduled flows, respectively. By default, these are set to 20 and 10.
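As a sketch, these concurrency settings can be raised through the container's environment in a Docker Compose file (the values below are illustrative, not recommendations):

```yaml
services:
  activepieces:
    environment:
      - FLOW_WORKER_CONCURRENCY=40
      - SCHEDULED_WORKER_CONCURRENCY=20
```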

## Expected Performance

Activepieces ensures no request is lost; all requests are queued. In the event of a spike, requests are processed later, which is acceptable because most flows are asynchronous, and synchronous flows are prioritized.

Exact performance is hard to predict because flows vary widely, but running a flow adds little overhead, as it executes as fast as regular JavaScript.
(Note: This applies to the `SANDBOX_CODE_ONLY` and `UNSANDBOXED` execution modes, which are recommended and used in self-hosted setups.)

You can anticipate handling over **20 million executions** monthly with this setup.
65
activepieces-fork/docs/install/configuration/overview.mdx
Normal file
@@ -0,0 +1,65 @@
---
title: "Deployment Checklist"
description: "Checklist to follow after deploying Activepieces"
icon: "list"
---

<Info>
This tutorial assumes you have already followed the quick start guide using one of the installation methods listed in [Install Overview](../overview).
</Info>

In this section, we go through the post-installation checklist to ensure that your deployment is production-ready.

<AccordionGroup>
<Accordion title="Decide on Sandboxing" icon="code">

Decide on the sandboxing mode for your deployment based on your use case and whether it is multi-tenant. Here is a simplified way to decide:

<Tip>
**Friendly Tip #1**: For multi-tenant setups, use V8/Code Sandboxing.

It is secure and does not require privileged Docker access in Kubernetes.
Privileged Docker is usually not allowed, to prevent root escalation threats.
</Tip>

<Tip>
**Friendly Tip #2**: For single-tenant setups, use No Sandboxing. It is faster and does not require privileged Docker access.
</Tip>

<Snippet file="execution-mode.mdx" />

More information at [Sandboxing & Workers](../architecture/workers#sandboxing).
</Accordion>
<Accordion title="Enterprise Edition (Optional)" icon="building">
<Tip>
For licensing inquiries regarding the self-hosted enterprise edition, please reach out to `sales@activepieces.com`, as the code and Docker image are not covered by the MIT license.
</Tip>

<Note>You can request a trial key from within the app or in the cloud by filling out the form. Alternatively, you can contact sales at [https://www.activepieces.com/sales](https://www.activepieces.com/sales).<br></br>Please be aware that when your trial runs out, all enterprise [features](https://www.activepieces.com/pricing) will be shut down, meaning any user other than the platform admin will be deactivated and your private pieces will be deleted, which could cause flows that use them to fail.</Note>

<Warning>
Before version 0.73.0, you cannot switch from CE to EE directly. We suggest upgrading to 0.73.0 on the same edition first, then switching `AP_EDITION`.
</Warning>

<Warning>
The enterprise edition must use `PostgreSQL` as the database backend and `Redis` as the queue system.
</Warning>

## Installation

1. Set the `AP_EDITION` environment variable to `ee`.
2. Set `AP_EXECUTION_MODE` to anything other than `UNSANDBOXED`; see the sandboxing section above.
3. Once your instance is up, activate the license key by going to **Platform Admin -> Setup -> License Keys**.



</Accordion>
<Accordion title="Setup HTTPS" icon="lock">
Setting up HTTPS is highly recommended because many services require webhook URLs to be secure (HTTPS). This helps prevent potential errors.

To set up SSL, you can use any reverse proxy. For a step-by-step guide, check out our example using [Nginx](../guides/setup-ssl).
</Accordion>
<Accordion title="Troubleshooting (Optional)" icon="wrench">
If you encounter any issues, check out our [Troubleshooting](../troubleshooting/websocket-issues) guide.
</Accordion>
</AccordionGroup>
25
activepieces-fork/docs/install/configuration/telemetry.mdx
Normal file
@@ -0,0 +1,25 @@
---
title: "Telemetry"
description: ""
icon: 'calculator'
---

# Why Does Activepieces Need Data?

As a self-hosted product, gathering usage metrics and insights can be difficult for us. However, these analytics are essential in helping us understand key behaviors and deliver a higher-quality experience that meets your needs.

To ensure we can continue to improve our product, we have decided to track certain basic behaviors and metrics that are vital for understanding the usage of Activepieces.

We have implemented a minimal tracking plan and provide a detailed list of the metrics collected in a separate section.

# What Does Activepieces Collect?

We value transparency in data collection and assure you that we do not collect any personal information. The following events are currently being collected:

[Exact Code](https://github.com/activepieces/activepieces/blob/main/packages/shared/src/lib/common/telemetry.ts)

# Opting Out

To opt out, set the environment variable `AP_TELEMETRY_ENABLED=false`.
48
activepieces-fork/docs/install/guides/separate-workers.mdx
Normal file
@@ -0,0 +1,48 @@
---
title: 'How to Separate Workers'
description: ''
icon: 'robot'
---

Benefits of separating workers from the main application (APP):

- **Availability**: The application remains lightweight, allowing workers to be scaled independently.
- **Security**: Workers lack direct access to Redis and the database, minimizing the impact of a security breach.

<Steps>
<Step title="Create Worker Token">
To create a worker token, use the local CLI to generate the JWT and sign it with the same `AP_JWT_SECRET` used by the app server. Follow these steps:
1. Open your terminal and navigate to the root of the repository.
2. Run the command: `npm run workers token`.
3. When prompted, enter the JWT secret (this must be the same as the `AP_JWT_SECRET` used by the app server).
4. The generated token will be displayed in your terminal; copy it and use it in the next step.

</Step>
<Step title="Configure Environment Variables">
Define the following environment variables in the `.env` file on the worker machine:
- Set `AP_CONTAINER_TYPE` to `WORKER`
- Specify `AP_FRONTEND_URL`
- Provide `AP_WORKER_TOKEN`
</Step>
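A sketch of the worker's `.env`, with a placeholder URL and token:

```
AP_CONTAINER_TYPE=WORKER
AP_FRONTEND_URL=https://automation.example.com
AP_WORKER_TOKEN=<token generated in the previous step>
```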
<Step title="Configure Persistent Volume">
Configure a persistent volume for the worker to cache flows and pieces. This is important because the first, uncached execution of pieces and flows is very slow; a persistent volume significantly improves execution speed.

Add the following volume mapping to your Docker configuration:
```yaml
volumes:
  - <your path>:/usr/src/app/cache
```
Note: Attach one volume per worker; a volume cannot be shared across multiple workers.
</Step>
<Step title="Launch Worker Machine">
Launch the worker machine and supply it with the generated token.
</Step>
<Step title="Verify Worker Operation">
Verify that the workers are visible in the Platform Admin Console under Infra -> Workers.

</Step>
<Step title="Configure App Container Type">
On the APP machine, set `AP_CONTAINER_TYPE` to `APP`.
</Step>
</Steps>
33
activepieces-fork/docs/install/guides/setup-app-webhooks.mdx
Normal file
@@ -0,0 +1,33 @@
---
title: "How to Setup App Webhooks"
description: ""
icon: 'webhook'
---

Certain apps, such as Slack and Square, support only one webhook per OAuth2 app. This means manual configuration is required in their developer portal and cannot be automated.

## Slack

**Configure Webhook Secret**

1. Visit the "Basic Information" section of your Slack OAuth settings.
2. Copy the "Signing Secret" and save it.
3. Set the following environment variable in your Activepieces environment:
```
AP_APP_WEBHOOK_SECRETS={"@activepieces/piece-slack": {"webhookSecret": "SIGNING_SECRET"}}
```
4. Restart your application instance.

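If you manage your environment with a script, one way to build this value with correct JSON quoting is a small shell snippet (the secret below is a placeholder):

```shell
# Build the AP_APP_WEBHOOK_SECRETS value from a signing secret (placeholder value)
SIGNING_SECRET="your-slack-signing-secret"
VALUE="{\"@activepieces/piece-slack\": {\"webhookSecret\": \"$SIGNING_SECRET\"}}"
echo "AP_APP_WEBHOOK_SECRETS=$VALUE"
```

This avoids hand-escaping the nested quotes when pasting the JSON into an `.env` file.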

**Configure Webhook URL**

1. Go to the "Event Subscriptions" settings in the Slack OAuth2 developer platform.
2. The URL format should be: `https://YOUR_AP_INSTANCE/api/v1/app-events/slack`.
3. When connecting to Slack, use your OAuth2 credentials or update the OAuth2 app details from the admin console (in platform plans).
4. Add the following events to the app:
   - `message.channels`
   - `reaction_added`
   - `message.im`
   - `message.groups`
   - `message.mpim`
   - `app_mention`
@@ -0,0 +1,19 @@
---
title: "How to Setup OpenTelemetry"
description: "Configure OpenTelemetry for observability and tracing"
icon: "chart-line"
---

Activepieces supports both standard OpenTelemetry environment variables and vendor-specific configuration for observability and tracing.

## Environment Variables

| Variable | Description | Default Value | Example |
|----------|-------------|---------------|---------|
| `AP_OTEL_ENABLED` | Enable OpenTelemetry tracing | `false` | `true` |
| `OTEL_EXPORTER_OTLP_ENDPOINT` | OTLP exporter endpoint URL | None | `https://your-collector:4317/v1/traces` |
| `OTEL_EXPORTER_OTLP_HEADERS` | Headers for the OTLP exporter (comma-separated key=value pairs) | None | `Authorization=Bearer token` |

<Note>
Both `AP_OTEL_ENABLED` and `OTEL_EXPORTER_OTLP_ENDPOINT` must be set for OpenTelemetry to be enabled.
</Note>
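A sketch of enabling tracing against a collector (endpoint and token are placeholders):

```
AP_OTEL_ENABLED=true
OTEL_EXPORTER_OTLP_ENDPOINT=https://otel-collector.example.com:4317/v1/traces
OTEL_EXPORTER_OTLP_HEADERS=Authorization=Bearer your-token
```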
27
activepieces-fork/docs/install/guides/setup-s3.mdx
Normal file
@@ -0,0 +1,27 @@
---
title: "How to Setup S3"
description: "Configure S3-compatible storage for files and run logs"
icon: "cloud"
---

Run logs and files are stored in the database by default, but you can switch to S3 later without any migration; for most cases, the database is enough.

It's recommended to start with the database and switch to S3 if needed. After switching, expired files in the database will be deleted and everything will be stored in S3. No manual migration is needed.

## Environment Variables

| Variable | Description | Default Value | Example |
|----------|-------------|---------------|---------|
| `AP_FILE_STORAGE_LOCATION` | The location to store files. Set to `S3` for S3 storage. | `DB` | `S3` |
| `AP_S3_ACCESS_KEY_ID` | The access key ID for your S3-compatible storage service. Not required if `AP_S3_USE_IRSA` is `true`. | None | |
| `AP_S3_SECRET_ACCESS_KEY` | The secret access key for your S3-compatible storage service. Not required if `AP_S3_USE_IRSA` is `true`. | None | |
| `AP_S3_BUCKET` | The name of the S3 bucket to use for file storage. | None | |
| `AP_S3_ENDPOINT` | The endpoint URL for your S3-compatible storage service. Not required if `AWS_ENDPOINT_URL` is set. | None | `https://s3.amazonaws.com` |
| `AP_S3_REGION` | The region where your S3 bucket is located. Not required if `AWS_REGION` is set. | None | `us-east-1` |
| `AP_S3_USE_SIGNED_URLS` | Routes file traffic directly to S3 via signed URLs. Enable this if the S3 bucket is publicly accessible. | None | `true` |
| `AP_S3_USE_IRSA` | Use IAM Roles for Service Accounts (IRSA) to connect to S3. When `true`, `AP_S3_ACCESS_KEY_ID` and `AP_S3_SECRET_ACCESS_KEY` are not required. | None | `true` |
| `AP_MAX_FILE_SIZE_MB` | The maximum allowed file size in megabytes for uploads, including flow run logs. | `10` | `10` |

<Tip>
**Friendly Tip #1**: If the S3 bucket supports signed URLs and is accessible over a public network, you can set `AP_S3_USE_SIGNED_URLS` to `true` to route traffic directly to S3 and reduce heavy traffic on your API server.
</Tip>
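A sketch of a typical key-based S3 configuration (bucket name, region, and keys are placeholders):

```
AP_FILE_STORAGE_LOCATION=S3
AP_S3_BUCKET=activepieces-files
AP_S3_REGION=us-east-1
AP_S3_ACCESS_KEY_ID=AKIAEXAMPLE
AP_S3_SECRET_ACCESS_KEY=example-secret-key
```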
73
activepieces-fork/docs/install/guides/setup-ssl.mdx
Normal file
@@ -0,0 +1,73 @@
---
title: "Setup HTTPS"
description: ""
icon: "shield"
---

To enable SSL, you can use a reverse proxy. In this case, we will use Nginx.

## Install Nginx

```bash
sudo apt-get install nginx
```

## Create Certificate

This documentation assumes you already have a certificate for your domain.

<Tip>
You can use Cloudflare, or generate a certificate using Let's Encrypt or Certbot.
</Tip>

Place the certificate at the following paths: `/etc/key.pem` and `/etc/cert.pem`.

## Setup Nginx

```bash
sudo nano /etc/nginx/sites-available/default
```

```nginx
server {
    listen 80;
    listen [::]:80;

    server_name example.com www.example.com;

    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name example.com www.example.com;

    ssl_certificate /etc/cert.pem;
    ssl_certificate_key /etc/key.pem;

    location / {
        proxy_pass http://localhost:8080;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
```

## Restart Nginx

```bash
sudo systemctl restart nginx
```

## Test

Visit your domain and you should see your application running over SSL.
139
activepieces-fork/docs/install/options/aws.mdx
Normal file
@@ -0,0 +1,139 @@
---
title: "AWS (Pulumi)"
description: "Get Activepieces up & running on AWS with Pulumi for IaC"
---

# Infrastructure-as-Code (IaC) with Pulumi

Pulumi is an IaC solution, akin to Terraform or CloudFormation, that lets you deploy and manage your infrastructure using popular programming languages, e.g. TypeScript (which we'll use), C#, Go, etc.

## Deploy from Pulumi Cloud

If you're already familiar with Pulumi Cloud and have [integrated their services with your AWS account](https://www.pulumi.com/docs/pulumi-cloud/deployments/oidc/aws/#configuring-openid-connect-for-aws), you can use the button below to deploy Activepieces in a few clicks.
The template deploys the latest Activepieces image available on [Docker Hub](https://hub.docker.com/r/activepieces/activepieces).

[](https://app.pulumi.com/new?template=https://github.com/activepieces/activepieces/tree/main/deploy/pulumi)

## Deploy from a local environment

Alternatively, if you're using an S3 bucket to maintain your Pulumi state, you can scaffold and deploy Activepieces directly from Docker Hub using the template below in just a few commands:

```bash
$ mkdir deploy-activepieces && cd deploy-activepieces
$ pulumi new https://github.com/activepieces/activepieces/tree/main/deploy/pulumi
$ pulumi up
```

## What's Deployed?

The template is set up to be flexible, supporting anything from a development setup to a more production-ready configuration.
The options presented during stack configuration let you optionally add any or all of:

* A PostgreSQL RDS instance. Opting out means using a local SQLite3 database.
* A single-node Redis 7 cluster. Opting out means using an in-memory cache.
* A fully qualified domain name with SSL. Note that the hosted zone must already be configured in Route 53.
  Opting out means relying on the application load balancer's URL over standard HTTP to access your Activepieces deployment.

For a full list of the currently available configuration options, take a look at the [Activepieces Pulumi template file on GitHub](https://github.com/activepieces/activepieces/tree/main/deploy/pulumi/Pulumi.yaml).

## Setting up Pulumi for the first time

If you're new to Pulumi, read on to get your local dev environment set up to deploy Activepieces.

### Prerequisites

1. Make sure you have [Node](https://nodejs.org/en/download) and [Pulumi](https://www.pulumi.com/docs/install/) installed.
2. [Install and configure the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html).
3. [Install and configure Pulumi](https://www.pulumi.com/docs/clouds/aws/get-started/begin/).
4. Create an S3 bucket, which we'll use to maintain the state of all the various services we'll provision for our Activepieces deployment:

```bash
aws s3api create-bucket --bucket pulumi-state --region us-east-1
```

<Tip>
Note: [Pulumi supports two different state management approaches](https://www.pulumi.com/docs/concepts/state/#deciding-on-a-state-backend).
If you'd rather use Pulumi Cloud instead of S3, feel free to skip this step and set up an account with Pulumi.
</Tip>

5. Log in to the Pulumi backend:

```bash
pulumi login s3://pulumi-state?region=us-east-1
```
6. Next, use the Activepieces Pulumi deploy template to create a new project and a stack in that project, then kick off the deploy:

```bash
$ mkdir deploy-activepieces && cd deploy-activepieces
$ pulumi new https://github.com/activepieces/activepieces/tree/main/deploy/pulumi
```

This step will prompt you to create your stack and to populate a series of config options, such as whether to provision a PostgreSQL RDS instance or use SQLite3.

<Tip>
Note: When choosing a stack name, use something descriptive like `activepieces-dev`, `ap-prod`, etc.
This solution uses the stack name as a prefix for every AWS service created,
e.g. your VPC will be named `<stack name>-vpc`.
</Tip>

7. Nothing left to do now but kick off the deploy:

```bash
pulumi up
```

8. Choose `yes` when prompted. Once the deployment has finished, you should see a set of Pulumi output variables that look like the following:
```json
_: {
    activePiecesUrl: "http://<alb name & id>.us-east-1.elb.amazonaws.com"
    activepiecesEnv: [
        . . . .
    ]
}
```

The value of interest here is `activePiecesUrl`, as that is the URL of our Activepieces deployment.
If you chose to add a fully qualified domain during stack configuration, it will be displayed here;
otherwise you'll see the URL of the application load balancer. And that's it.

Congratulations! You have successfully deployed Activepieces to AWS.

## Deploy a locally built Activepieces Docker image

To deploy a locally built image instead of the official Docker Hub image, read on.

1. Clone the Activepieces repo locally:

```bash
git clone https://github.com/activepieces/activepieces
```
2. Move into the `deploy/pulumi` folder and install the necessary npm packages:

```bash
cd deploy/pulumi && npm i
```
3. This folder already has two Pulumi stack configuration files ready to go: `Pulumi.activepieces-dev.yaml` and `Pulumi.activepieces-prod.yaml`.
These files contain all the configuration we need to create our environments. Feel free to have a look and edit the values as you see fit.
Let's continue by creating a development stack that uses the existing `Pulumi.activepieces-dev.yaml` file and kick off the deploy.

```bash
pulumi stack init activepieces-dev && pulumi up
```

<Tip>
Note: Using `activepieces-dev` or `activepieces-prod` for the `pulumi stack init` command is required here, as the stack name needs to match an existing stack file name in the folder.
</Tip>

4. You should now see a preview in the terminal of all the services that will be provisioned before you continue.
Once you choose `yes`, a new image will be built based on the `Dockerfile` in the root of the solution (make sure Docker Desktop is running) and pushed to a new ECR repository, along with all the other AWS services provisioned for the stack.

Congratulations! You have successfully deployed Activepieces to AWS using a locally built Docker image.

## Customising the deploy

All of the current configuration options, as well as the low-level details of the provisioned services, are fully customisable, as you would expect from any IaC.
For example, if you'd like three availability zones instead of two for the VPC, an older version of Redis, or some additional security group rules for PostgreSQL, you can change all of these and more in the `index.ts` file inside the `deploy` folder.

Or maybe you'd still like to deploy the official Activepieces Docker image instead of a local build, but change some of the services. Simply set the `deployLocalBuild` config option in the stack file to `false` and make whatever changes you'd like to the `index.ts` file.

Checking out the [Pulumi docs](https://www.pulumi.com/docs/clouds/aws/) before doing so is highly encouraged.
122
activepieces-fork/docs/install/options/docker-compose.mdx
Executable file
@@ -0,0 +1,122 @@
---
title: "Docker Compose"
description: ""
icon: "book"
---

To get up and running quickly with Activepieces, we will use the Activepieces Docker image. Follow these steps:

## Prerequisites

You need to have [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) and [Docker](https://docs.docker.com/get-docker/) installed on your machine in order to set up Activepieces via Docker Compose.

## Installing

**1. Clone the Activepieces repository.**

Use the command line to clone the Activepieces repository:

```bash
git clone https://github.com/activepieces/activepieces.git
```

**2. Go to the repository folder.**

```bash
cd activepieces
```

**3. Generate environment variables.**

Run the following command from the command prompt / terminal:

```bash
sh tools/deploy.sh
```

<Tip>
If the script doesn't work, you can rename the `.env.example` file in the root directory to `.env` and fill in the necessary information within the file.
</Tip>

**4. Run Activepieces.**

<Warning>
Please note that "docker-compose" (with a dash) is an outdated version of Docker Compose and will not work properly. We strongly recommend installing Docker Compose V2 from [here](https://docs.docker.com/compose/install/).
</Warning>

```bash
docker compose -p activepieces up
```

## 4. Configure Webhook URL (Important for Triggers, Optional If you have public IP)
|
||||
|
||||
**Note:** By default, Activepieces will try to use your public IP for webhooks. If you are self-hosting on a personal machine, you must configure the frontend URL so that the webhook is accessible from the internet.
|
||||
|
||||
**Optional:** The easiest way to expose your webhook URL on localhost is by using a service like ngrok. However, it is not suitable for production use.
|
||||
|
||||
1. Install ngrok
|
||||
2. Run the following command:
|
||||
```bash
|
||||
ngrok http 8080
|
||||
```
|
||||
3. Replace `AP_FRONTEND_URL` environment variable in `.env` with the ngrok url.
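For example, after starting the tunnel, the `.env` entry might look like this (the hostname below is a placeholder; use the forwarding URL that ngrok prints):

```bash
# .env: placeholder value, substitute your actual ngrok forwarding URL
AP_FRONTEND_URL=https://your-subdomain.ngrok-free.app
```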


<Warning>
When deploying for production, ensure that you update the database credentials and properly set the environment variables.

Review the [configurations guide](/install/configuration/environment-variables) to make any necessary adjustments.
</Warning>
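As a sketch, a production `.env` would override at least the secrets and database credentials. The variable names below are assumed from the environment-variables reference (verify them there before use), and the values are placeholders:

```bash
# Placeholder values: generate strong, random secrets before deploying
AP_POSTGRES_PASSWORD=change-me
AP_JWT_SECRET=change-me
AP_ENCRYPTION_KEY=change-me
```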

## Upgrading

To upgrade a Docker Compose installation to a new version, open a terminal in the activepieces repository directory and run the following commands.

### Automatic Pull

**1. Run the update script**

```bash
sh tools/update.sh
```

### Manual Pull

**1. Pull the new docker compose file**
```bash
git pull
```

**2. Pull the new images**
```bash
docker compose pull
```

**3. Review the changelog for breaking changes**

<Warning>
Please review breaking changes in the [changelog](../configuration/breaking-changes).
</Warning>

**4. Run the updated docker images**
```bash
docker compose up -d --remove-orphans
```

Congratulations! You have now successfully updated the version.

## Deleting

The following command deletes all Docker containers and their associated data, so use it with caution:

```bash
sh tools/reset.sh
```

<Warning>
Executing this command will remove all Docker containers and the data stored within them. Be aware of the destructive nature of this command before proceeding.
</Warning>

86
activepieces-fork/docs/install/options/docker.mdx
Executable file
@@ -0,0 +1,86 @@
---
title: "Docker"
description: "Single docker image deployment with SQLite3 and Memory Queue"
icon: "docker"
---

<Warning>
This setup is only meant for personal use or testing. It runs on SQLite3 and an in-memory queue, which supports only a single instance on a single machine. For production or multi-instance setups, you must use Docker Compose with PostgreSQL and Redis.
</Warning>

To get up and running quickly with Activepieces, we will use the Activepieces Docker image. Follow these steps:

## Prerequisites

You need [Git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) and [Docker](https://docs.docker.com/get-docker/) installed on your machine in order to set up Activepieces via Docker.

## Install

### Pull Image and Run Docker Image

Pull the Activepieces Docker image and run the container with the following command:

```bash
docker run -d -p 8080:80 -v ~/.activepieces:/root/.activepieces -e AP_REDIS_TYPE=MEMORY -e AP_DB_TYPE=SQLITE3 -e AP_FRONTEND_URL="http://localhost:8080" activepieces/activepieces:latest
```

### Configure Webhook URL (Important for Triggers, Optional If You Have a Public IP)

**Note:** By default, Activepieces will try to use your public IP for webhooks. If you are self-hosting on a personal machine, you must configure the frontend URL so that the webhook is accessible from the internet.

**Optional:** The easiest way to expose your webhook URL on localhost is by using a service like ngrok. However, it is not suitable for production use.

1. Install ngrok.
2. Run the following command:
```bash
ngrok http 8080
```
3. Replace the `AP_FRONTEND_URL` environment variable in the command line above.


## Upgrade

Please follow the steps below:

### Step 1: Back Up Your Data (Recommended)

Before proceeding with the upgrade, it is good practice to back up your Activepieces data to avoid any potential data loss during the update process.

1. **Stop the Current Activepieces Container:** If your Activepieces container is running, stop it using the following command:
```bash
docker stop activepieces_container_name
```

2. **Back Up the Activepieces Data Directory:** By default, Activepieces data is stored in the `~/.activepieces` directory on your host machine. Create a backup of this directory to a safe location using the following command:
```bash
cp -r ~/.activepieces ~/.activepieces_backup
```

### Step 2: Update the Docker Image

1. **Pull the Latest Activepieces Docker Image:** Run the following command to pull the latest Activepieces Docker image from Docker Hub:
```bash
docker pull activepieces/activepieces:latest
```

### Step 3: Remove the Existing Activepieces Container

1. **Stop and Remove the Current Activepieces Container:** If your Activepieces container is running, stop and remove it using the following commands:
```bash
docker stop activepieces_container_name
docker rm activepieces_container_name
```

### Step 4: Run the Updated Activepieces Container

Now, run the updated Activepieces container with the latest image using the same command you used during the initial setup. Be sure to replace `activepieces_container_name` with the desired name for your new container.

```bash
docker run -d -p 8080:80 -v ~/.activepieces:/root/.activepieces -e AP_REDIS_TYPE=MEMORY -e AP_DB_TYPE=SQLITE3 -e AP_FRONTEND_URL="http://localhost:8080" --name activepieces_container_name activepieces/activepieces:latest
```

Congratulations! You have successfully upgraded your Activepieces Docker deployment.
15
activepieces-fork/docs/install/options/easypanel.mdx
Executable file
@@ -0,0 +1,15 @@
---
title: "Easypanel"
description: "Run Activepieces with Easypanel 1-Click Install"
---

Easypanel is a modern server control panel. If you [run Easypanel](https://easypanel.io/docs) on your server, you can deploy Activepieces on it with one click.

<a target="_blank" rel="noopener" href="https://easypanel.io/docs/templates/activepieces"></a>

## Instructions

1. Create a VM that runs Ubuntu on your cloud provider.
2. Install Easypanel using the instructions from the website.
3. Create a new project.
4. Install Activepieces using the dedicated template.

8
activepieces-fork/docs/install/options/elestio.mdx
Normal file
@@ -0,0 +1,8 @@
---
title: "Elestio"
description: "Run Activepieces with Elestio 1-Click Install"
---

You can deploy Activepieces on Elestio using one-click deployment. Elestio handles version updates, maintenance, security, backups, and more. Click below to deploy and start using it.

[](https://elest.io/open-source/activepieces)

29
activepieces-fork/docs/install/options/gcp.mdx
Normal file
@@ -0,0 +1,29 @@
---
title: "GCP"
description: ""
---

This documentation covers deploying Activepieces on a VM instance or VM instance group; the first step is to create a VM template.

## Create a VM Template

First, choose a machine type (e.g., e2-medium).

After configuring the VM template, click "Deploy Container" and specify the following container-specific settings:

- Image: activepieces/activepieces
- Run as a privileged container: true
- Environment Variables:
  - `AP_REDIS_TYPE`: MEMORY
  - `AP_DB_TYPE`: SQLITE3
  - `AP_FRONTEND_URL`: http://localhost:80
  - `AP_EXECUTION_MODE`: SANDBOX_PROCESS
- Firewall: Allow HTTP traffic (for testing purposes only)
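The same settings can be sketched as a `gcloud` CLI command for creating the instance template. This is an illustration only: the template name is a placeholder, and you should confirm the flags against the gcloud reference before use.

```bash
# Hypothetical sketch: create an instance template running the Activepieces container
gcloud compute instance-templates create-with-container activepieces-template \
  --machine-type=e2-medium \
  --container-image=activepieces/activepieces \
  --container-privileged \
  --container-env=AP_REDIS_TYPE=MEMORY,AP_DB_TYPE=SQLITE3,AP_FRONTEND_URL=http://localhost:80,AP_EXECUTION_MODE=SANDBOX_PROCESS \
  --tags=http-server
```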

Once these details are entered, click the "Deploy" button and wait for the container deployment process to complete.

After a successful deployment, you can access the Activepieces application by visiting the external IP address of the VM on GCP.

## Production Deployment

See the [environment variables reference](/install/configuration/environment-variables) for details on how to customize the application.

210
activepieces-fork/docs/install/options/helm.mdx
Normal file
@@ -0,0 +1,210 @@
---
title: 'Helm'
description: 'Deploy Activepieces on Kubernetes using Helm'
---

This guide walks you through deploying Activepieces on Kubernetes using the official Helm chart.

## Prerequisites

- Kubernetes cluster (v1.19+)
- Helm 3.x installed
- kubectl configured to access your cluster

## Using External PostgreSQL and Redis

The Helm chart supports using external PostgreSQL and Redis services instead of deploying the Bitnami subcharts.

### Using External PostgreSQL

To use an external PostgreSQL instance:

```yaml
postgresql:
  enabled: false # Disable Bitnami PostgreSQL subchart
  host: "your-postgres-host.example.com"
  port: 5432
  useSSL: true # Enable SSL if required
  auth:
    database: "activepieces"
    username: "postgres"
    password: "your-password"
    # Or use external secret reference:
    # externalSecret:
    #   name: "postgresql-credentials"
    #   key: "password"
```

Alternatively, you can use a connection URL:

```yaml
postgresql:
  enabled: false
  url: "postgresql://user:password@host:5432/database?sslmode=require"
```

### Using External Redis

To use an external Redis instance:

```yaml
redis:
  enabled: false # Disable Bitnami Redis subchart
  host: "your-redis-host.example.com"
  port: 6379
  useSSL: false # Enable SSL if required
  auth:
    enabled: true
    password: "your-password"
    # Or use external secret reference:
    # externalSecret:
    #   name: "redis-credentials"
    #   key: "password"
```

Alternatively, you can use a connection URL:

```yaml
redis:
  enabled: false
  url: "redis://:password@host:6379/0"
```

### External Secret References

For better security, you can reference passwords from existing Kubernetes secrets (useful with External Secrets Operator or Sealed Secrets):

```yaml
postgresql:
  enabled: false
  host: "your-postgres-host.example.com"
  auth:
    externalSecret:
      name: "postgresql-credentials"
      key: "password"

redis:
  enabled: false
  host: "your-redis-host.example.com"
  auth:
    enabled: true
    externalSecret:
      name: "redis-credentials"
      key: "password"
```
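The referenced secrets must already exist in the cluster. As an illustration, a minimal manifest for the PostgreSQL credentials secret could look like this (the name and key are chosen to match the example above; the password is a placeholder):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgresql-credentials
type: Opaque
stringData:
  password: "your-password" # placeholder, replace before applying
```

Apply it with `kubectl apply -f` before installing the chart.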

## Quick Start

### 1. Clone the Repository

```bash
git clone https://github.com/activepieces/activepieces.git
cd activepieces
```

### 2. Install Dependencies

```bash
helm dependency update
```

### 3. Create a Values File

Create a `my-values.yaml` file with your configuration. You can use the [example values file](https://github.com/activepieces/activepieces/blob/main/deploy/activepieces-helm/values.yaml) as a reference.
The Helm chart has sensible defaults for required values while leaving the optional ones empty, but you should customize the core values for production.
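As a hedged illustration only, a minimal `my-values.yaml` might look like the following. The key names mirror the examples on this page; confirm them against the chart's `values.yaml` before use:

```yaml
# Illustrative overrides; all values are placeholders
activepieces:
  edition: "ce"

frontendUrl: "https://automation.example.com"

postgresql:
  enabled: true

redis:
  enabled: true
```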

### 4. Install Activepieces

```bash
helm install activepieces deploy/activepieces-helm -f my-values.yaml
```

### 5. Verify Installation

```bash
# Check deployment status
kubectl get pods
kubectl get services
```

## Production Checklist

- [ ] Set `frontendUrl` to your actual domain
- [ ] Set strong passwords for PostgreSQL and Redis (or keep the auto-generated ones)
- [ ] Configure proper ingress with TLS
- [ ] Set appropriate resource limits
- [ ] Configure persistent storage
- [ ] Choose the appropriate [execution mode](/install/architecture/workers) for your security requirements
- [ ] Review [environment variables](/install/configuration/environment-variables) for advanced configuration
- [ ] Consider a [separate workers](/install/guides/separate-workers) setup for better availability and security

## Upgrading

```bash
# Update dependencies
helm dependency update

# Upgrade release
helm upgrade activepieces deploy/activepieces-helm -f my-values.yaml

# Check upgrade status
kubectl rollout status deployment/activepieces
```

## Troubleshooting

### Common Issues

1. **Pod won't start**: Check logs with `kubectl logs deployment/activepieces`
2. **Database connection**: Verify PostgreSQL credentials and connectivity
3. **Frontend URL**: Ensure `frontendUrl` is accessible from external sources
4. **Webhooks not working**: Check ingress configuration and DNS resolution

### Useful Commands

```bash
# View logs
kubectl logs deployment/activepieces -f

# Port forward for testing
kubectl port-forward svc/activepieces 4200:80 --namespace default

# Get all resources
kubectl get all --namespace default
```

## Editions

Activepieces supports three editions:

- **`ce` (Community Edition)**: Open-source version with all core features (default)
- **`ee` (Enterprise Edition)**: Self-hosted edition with advanced features like SSO, RBAC, and audit logs
- **`cloud`**: For Activepieces Cloud deployments

Set the edition in your values file:

```yaml
activepieces:
  edition: "ce" # or "ee" for Enterprise Edition
```

For Enterprise Edition features and licensing, visit [activepieces.com](https://www.activepieces.com/docs/admin-console/overview).

## Environment Variables

For a complete list of configuration options, see the [Environment Variables](/install/configuration/environment-variables) documentation. Most environment variables can be configured through the Helm values file under the `activepieces` section.

## Execution Modes

Understanding execution modes is crucial for security and performance. See the [Workers & Sandboxing](/install/architecture/workers) guide to choose the right mode for your deployment.

## Uninstalling

```bash
helm uninstall activepieces

# Clean up persistent volumes (optional)
kubectl delete pvc -l app.kubernetes.io/instance=activepieces
```

108
activepieces-fork/docs/install/options/railway.mdx
Normal file
@@ -0,0 +1,108 @@
---
title: "Railway"
description: "Deploy Activepieces to the cloud in minutes using Railway's one-click template"
---

Railway simplifies your infrastructure stack, from servers to observability, with a single, scalable, easy-to-use platform. With Railway's one-click deployment, you can get Activepieces up and running in minutes without managing servers, databases, or infrastructure.

<a href="https://railway.com/deploy/kGEO1J" target="_blank">
<img alt="Deploy on Railway" src="https://railway.app/button.svg" />
</a>

## What Gets Deployed

The Railway template deploys Activepieces with the following components:

- **Activepieces Application**: The main Activepieces container running the latest version from [Docker Hub](https://hub.docker.com/r/activepieces/activepieces)
- **PostgreSQL Database**: Managed PostgreSQL database for storing flows, executions, and application data
- **Redis Cache**: Redis instance for job queuing and caching (optional; an in-memory queue can be used instead)
- **Automatic SSL**: Railway provides automatic HTTPS with SSL certificates
- **Custom Domain Support**: Configure your own domain through Railway's dashboard

## Prerequisites

Before deploying, ensure you have:

- A [Railway account](https://railway.app/) (free tier available)
- Basic understanding of environment variables (optional, for advanced configuration)

## Quick Start

1. **Click the deploy button** above to open Railway's deployment interface
2. **Configure environment variables for advanced usage** (see [Configuration](#configuration) below)
3. **Deploy**: Railway will automatically provision resources and start your instance

Once deployed, Railway will provide you with a public URL where your Activepieces instance is accessible.

## Configuration

### Environment Variables

Railway allows you to configure Activepieces through environment variables. You can set these in the Railway dashboard under your project's **Variables** tab.

#### Execution Mode

Configure the execution mode for security and performance.
See the [Workers & Sandboxing](/install/architecture/workers) documentation for details on each mode.

#### Other Important Variables

- `AP_TELEMETRY_ENABLED`: Enable/disable telemetry (default: `false`)

For a complete list of all available environment variables, see the [Environment Variables](/install/configuration/environment-variables) documentation.

## Custom Domain Setup

Railway supports custom domains with automatic SSL:

1. Go to your Railway project dashboard
2. Navigate to **Settings** → **Networking**
3. Add your custom domain
4. Update the `AP_FRONTEND_URL` environment variable to match your custom domain
5. Railway will automatically provision SSL certificates

For more details on SSL configuration, see the [Setup SSL](/install/guides/setup-ssl) guide.

## Production Considerations

Before deploying to production, review these important points:

- [ ] Review the [Security Practices](/admin-guide/security/practices) documentation
- [ ] Configure `AP_WORKER_CONCURRENCY` based on your workload and hardware resources
- [ ] Ensure PostgreSQL backups are configured in Railway
- [ ] Consider database scaling options in Railway

## Observability

Railway provides built-in observability features for Activepieces. You can view logs and metrics in the Railway dashboard.

## Upgrading

To upgrade to a new version of Activepieces on Railway:

1. Go to your Railway project dashboard
2. Navigate to **Deployments**
3. Click **Redeploy** on the latest deployment
4. Railway will pull the latest Activepieces image and redeploy

<Warning>
Before upgrading, review the [Breaking Changes](/install/configuration/breaking-changes) documentation to ensure compatibility with your flows and configuration.
</Warning>

## Next Steps

After deploying Activepieces on Railway:

1. **Access your instance** using the Railway-provided URL
2. **Create your first flow**: see [Building Flows](/flows/building-flows)
3. **Configure webhooks**: see [Setup App Webhooks](/install/guides/setup-app-webhooks)
4. **Explore pieces**: browse available integrations in the piece library

## Additional Resources

- [Troubleshooting](/install/troubleshooting/websocket-issues): Troubleshooting guide
- [Configuration Guide](/install/configuration/overview): Comprehensive configuration documentation
- [Environment Variables](/install/configuration/environment-variables): Complete list of configuration options
- [Architecture Overview](/install/architecture/overview): Understand Activepieces architecture
- [Railway Documentation](https://docs.railway.app/): Official Railway platform documentation

104
activepieces-fork/docs/install/overview.mdx
Executable file
@@ -0,0 +1,104 @@
---
title: "Overview"
icon: "hand-wave"
description: "Introduction to the different ways to install Activepieces"
---

Activepieces Community Edition can be deployed using **Docker**, **Docker Compose**, or **Kubernetes**.

<Tip>
Community Edition is **free** and **open source**.

You can read about the differences between the editions [here](https://www.activepieces.com/pricing).
</Tip>

## Recommended Options

<CardGroup cols={2}>
<Card title="Docker (Fastest)" icon="docker" color="#248fe0" href="./options/docker">
Deploy Activepieces as a single Docker container using the SQLite database.
</Card>

<Card title="Docker Compose" icon="layer-group" color="#00FFFF" href="./options/docker-compose">
Deploy Activepieces with a **Redis** and **PostgreSQL** setup.
</Card>

</CardGroup>

## Other Options

<CardGroup cols={2}>

<Card title="Helm" icon="ship" color="#ff9900" href="./options/helm">
Install on Kubernetes with Helm.
</Card>

<Card title="Railway" icon={
<img src="https://railway.com/brand/logo-light.png" alt="Railway" width="24" height="24" />
} href="./options/railway">
1-Click Install on Railway.
</Card>

<Card title="Easypanel" icon={
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 245 245">
<g clip-path="url(#a)">
<path fill-rule="evenodd" clip-rule="evenodd" d="M242.291 110.378a15.002 15.002 0 0 0 0-15l-48.077-83.272a15.002 15.002 0 0 0-12.991-7.5H85.07a15 15 0 0 0-12.99 7.5L41.071 65.812a.015.015 0 0 0-.013.008L2.462 132.673a15 15 0 0 0 0 15l48.077 83.272a15 15 0 0 0 12.99 7.5h96.154a15.002 15.002 0 0 0 12.991-7.5l31.007-53.706c.005 0 .01-.003.013-.007l38.598-66.854Zm-38.611 66.861 3.265-5.655a15.002 15.002 0 0 0 0-15l-48.077-83.272a14.999 14.999 0 0 0-12.99-7.5H41.072l-3.265 5.656a15 15 0 0 0 0 15l48.077 83.271a15 15 0 0 0 12.99 7.5H203.68Z" fill="url(#b)" />
</g>
<defs>
<linearGradient id="b" x1="188.72" y1="6.614" x2="56.032" y2="236.437" gradientUnits="userSpaceOnUse">
<stop stop-color="#12CD87" />
<stop offset="1" stop-color="#12ABCD" />
</linearGradient>
<clipPath id="a">
<path fill="#fff" d="M0 0h245v245H0z" />
</clipPath>
</defs>
</svg>
} href="./options/easypanel">
1-Click Install with the Easypanel template, maintained by the community.
</Card>

<Card title="Elestio" icon="cloud" color="#ff9900" href="./options/elestio">
1-Click Install on Elestio.
</Card>

<Card title="AWS (Pulumi)" icon="aws" color="#ff9900" href="./options/aws">
Install on AWS with Pulumi.
</Card>

<Card title="GCP" icon="cloud" color="#4385f5" href="./options/gcp">
Install on GCP as a VM template.
</Card>

<Card title="PikaPods" icon={
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 402.2 402.2">
<path d="M393 277c-3 7-8 9-15 9H66c-27 0-49-18-55-45a56 56 0 0 1 54-68c7 0 12-5 12-11s-5-11-12-11H22c-7 0-12-5-12-11 0-7 4-12 12-12h44c18 1 33 15 33 33 1 19-14 34-33 35-18 0-31 12-34 30-2 16 9 35 31 37h37c5-46 26-83 65-110 22-15 47-23 74-24l-4 16c-4 30 19 58 49 61l8 1c6-1 11-6 10-12 0-6-5-10-11-10-14-1-24-7-30-20-7-12-4-27 5-37s24-14 36-10c13 5 22 17 23 31l2 4c33 23 55 54 63 93l3 17v14m-57-59c0-6-5-11-11-11s-12 5-12 11 6 12 12 12c6-1 11-6 11-12"
fill="#4daf4e"/>
</svg>
} href="https://www.pikapods.com/pods?run=activepieces">
Instantly run on PikaPods from $2.9/month.
</Card>

<Card title="RepoCloud" icon="cloud" href="https://repocloud.io/details/?app_id=177">
Easily install on RepoCloud using this template, maintained by the community.
</Card>

<Card title="Zeabur" icon={
<svg viewBox="0 0 294 229" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M113.865 144.888H293.087V229H0V144.888H82.388L195.822 84.112H0V0H293.087V84.112L113.865 144.888Z" fill="black"/>
<path d="M194.847 0H0V84.112H194.847V0Z" fill="#6300FF"/>
<path d="M293.065 144.888H114.772V229H293.065V144.888Z" fill="#FF4400"/>
</svg>
} href="https://zeabur.com/templates/LNTQDF">
1-Click Install on Zeabur.
</Card>

</CardGroup>

## Cloud Edition

<CardGroup cols={2}>
<Card title="Activepieces Cloud" icon="cloud" color="#5155D7" href="https://cloud.activepieces.com/">
This is the fastest option.
</Card>
</CardGroup>

71
activepieces-fork/docs/install/troubleshooting/bullboard.mdx
Normal file
@@ -0,0 +1,71 @@
---
title: "Queues Dashboard"
icon: "gauge-high"
---

Bull Board is a tool that lets you check for issues with scheduling and internal flow run problems.


## Setup Bull Board

To enable the Bull Board UI in your self-hosted installation:

1. Define these environment variables:
   - `AP_QUEUE_UI_ENABLED`: Set to `true`
   - `AP_QUEUE_UI_USERNAME`: Set your desired username
   - `AP_QUEUE_UI_PASSWORD`: Set your desired password

2. Access the UI at `/api/ui`

<Tip>
For cloud installations, please ask your team for access to the internal documentation that explains how to access Bull Board.
</Tip>

## Queue Overview

There is one main queue called `workerJobs` that handles all job types. Each job has a `jobType` field that indicates what it does:
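The priority tiers below can be pictured as a simple lookup from `jobType` to a priority level. This is a hypothetical sketch for illustration only, not the actual Activepieces scheduler code:

```python
# Hypothetical sketch: map each jobType to a priority tier (higher = more urgent)
# and order a batch of jobs so the most urgent work is dispatched first.
PRIORITY = {
    "RENEW_WEBHOOK": 1, "EXECUTE_POLLING": 1,                    # low
    "EXECUTE_FLOW": 2, "EXECUTE_WEBHOOK": 2, "DELAYED_FLOW": 2,  # medium
    "EXECUTE_PROPERTY": 3, "EXECUTE_EXTRACT_PIECE_INFORMATION": 3,
    "EXECUTE_VALIDATION": 3, "EXECUTE_TRIGGER_HOOK": 3,          # high
}

def dispatch_order(job_types):
    """Return job types ordered from highest to lowest priority."""
    return sorted(job_types, key=lambda jt: PRIORITY[jt], reverse=True)
```

For example, `dispatch_order(["EXECUTE_POLLING", "EXECUTE_PROPERTY", "EXECUTE_FLOW"])` places the high-priority property load first and the polling job last.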

### Low Priority Jobs

#### RENEW_WEBHOOK
Renews webhooks for pieces whose webhook channels expire, such as Google Sheets.

#### EXECUTE_POLLING
Checks external services for new data at regular intervals.

### Medium Priority Jobs

#### EXECUTE_FLOW
Runs flows when they're triggered.

#### EXECUTE_WEBHOOK
Processes incoming webhook requests that start flow runs.

#### DELAYED_FLOW
Runs flows that were scheduled for later, such as paused flows or delayed executions.

### High Priority Jobs

#### EXECUTE_PROPERTY
Loads dynamic properties for pieces that need them at runtime.

#### EXECUTE_EXTRACT_PIECE_INFORMATION
Gets information about pieces when they're being installed or set up.

#### EXECUTE_VALIDATION
Checks that flow settings, inputs, or data are correct before running.

#### EXECUTE_TRIGGER_HOOK
Runs special logic before or after triggers fire.

<Info>
Failed jobs are not normal and require immediate investigation: they represent executions that failed for unknown reasons and may indicate system issues.
</Info>

<Tip>
Delayed jobs represent paused flows scheduled for future execution, upcoming polling iterations, or jobs being retried after temporary failures. Retried jobs are re-run automatically according to the backoff policy.
</Tip>
@@ -0,0 +1,37 @@
---
title: "Reset Password"
description: "How to reset your password on a self-hosted instance"
icon: "key"
---

If you forgot your password on a self-hosted instance, you can reset it using the following steps:

1. **Locate the PostgreSQL Docker Container**:
   - Use a command like `docker ps` to find the PostgreSQL container.

2. **Access the Container**:
   - Open a shell inside the PostgreSQL Docker container.
   ```bash
   docker exec -it POSTGRES_CONTAINER_ID /bin/bash
   ```

3. **Open the PostgreSQL Console**:
   - Inside the container, open the PostgreSQL console with the `psql` command.
   ```bash
   psql -U postgres
   ```

4. **Connect to the Activepieces Database**:
   - Connect to the Activepieces database.
   ```sql
   \c activepieces
   ```

5. **Create a Secure Password**:
   - Use a tool like [bcrypt-generator.com](https://bcrypt-generator.com/) to generate a bcrypt hash of your new password, with the number of rounds set to 10.

6. **Update Your Password**:
   - Run the following SQL query within the PostgreSQL console, replacing `HASH_PASSWORD` with the bcrypt hash and `YOUR_EMAIL_ADDRESS` with your email.
   ```sql
   UPDATE public.user_identity SET password='HASH_PASSWORD' WHERE email='YOUR_EMAIL_ADDRESS';
   ```
@@ -0,0 +1,35 @@
---
title: "Truncated Logs"
description: "Understanding and resolving truncated flow run logs"
icon: "file-lines"
---

## Overview

If you see `(truncated)` in the flow run logs, it means the logs have exceeded the maximum allowed file size.

## How It Works

There is a current limitation where the log file of a run cannot grow past a certain size. When this limit is reached, the engine automatically removes the largest keys in the JSON output until it fits within the allowed size.
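The trimming behavior can be pictured with a small sketch (an illustration of the idea, not the engine's actual implementation): repeatedly drop the key whose serialized value is largest until the serialized output fits within the budget.

```python
import json

def truncate_largest_keys(output: dict, max_bytes: int) -> dict:
    """Drop the key with the largest serialized value until the JSON fits."""
    result = dict(output)
    while len(json.dumps(result)) > max_bytes and result:
        # Find the key whose serialized value contributes the most bytes
        biggest = max(result, key=lambda k: len(json.dumps(result[k])))
        del result[biggest]
    return result

# A 500-character body is dropped; the small keys survive intact.
step_output = {"status": "ok", "body": "x" * 500, "headers": {"a": "b"}}
trimmed = truncate_largest_keys(step_output, max_bytes=100)
```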

<Note>
**This does not affect flow execution.** Your flow will continue to run normally even when logs are truncated.
</Note>

## Known Limitation

There is one known issue with truncated logs:

If you **pause** a flow, then **resume** it, and the resumed step references data from a truncated step, the flow will fail because the referenced data is no longer available in the logs.

## Solution

You can increase the `AP_MAX_FILE_SIZE_MB` environment variable to a higher value to allow larger log files:

```bash
AP_MAX_FILE_SIZE_MB=50
```

<Info>
**Future Improvement:** There is a planned enhancement to change this limit from per-log-file to per-step, which will provide more granular control over log sizes. This feature is currently in the planning phase.
</Info>
@@ -0,0 +1,17 @@
---
title: "Websocket Issues"
description: "Troubleshoot websocket connection problems"
icon: "plug"
---

If you're experiencing issues with websocket connections, the cause is usually incorrect proxy configuration. Common symptoms include:

- The Test Flow button not working
- Testing a step in a flow not working
- Real-time updates not showing

To resolve these issues:

1. Ensure your reverse proxy is properly configured for websocket connections.
2. Check our [Setup HTTPS](/install/guides/setup-ssl) guide for correct configuration examples.
3. Some browsers block insecure (`http://`) websocket connections; set up SSL to resolve this.
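For reference, a typical reverse-proxy stanza that forwards websocket upgrades looks like the following nginx sketch. The host and port are placeholders to adapt to your deployment; the SSL guide above remains the authoritative example:

```nginx
location / {
    proxy_pass http://localhost:8080;  # placeholder upstream
    proxy_http_version 1.1;
    # Forward the websocket upgrade handshake
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_set_header Host $host;
}
```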