Add Activepieces integration for workflow automation
- Add Activepieces fork with SmoothSchedule custom piece
- Create integrations app with Activepieces service layer
- Add embed token endpoint for iframe integration
- Create Automations page with embedded workflow builder
- Add sidebar visibility fix for embed mode
- Add list inactive customers endpoint to Public API
- Include SmoothSchedule triggers: event created/updated/cancelled
- Include SmoothSchedule actions: create/update/cancel events, list resources/services/customers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
activepieces-fork/docs/install/architecture/engine.mdx (new file, 18 lines)
@@ -0,0 +1,18 @@
---
title: "Engine"
icon: "brain"
---

The engine file contains the following types of operations:

- **Extract Piece Metadata**: Extracts metadata when installing new pieces.
- **Execute Step**: Executes a single test step.
- **Execute Flow**: Executes a flow.
- **Execute Property**: Resolves dynamic dropdowns or dynamic properties.
- **Execute Trigger Hook**: Executes trigger hooks such as OnEnable and OnDisable, or extracts payloads.
- **Execute Auth Validation**: Validates the authentication of a connection.

The engine takes the flow JSON, together with an engine token scoped to the project, and implements the API provided to the piece framework, such as:

- Storage Service: A simple key/value persistent store for the piece framework.
- File Service: A helper that stores files either locally or in the database, for example when testing steps.
- Fetch Metadata: Retrieves metadata about the currently running project.
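The Storage Service above is essentially a scoped key/value API. As a rough sketch of its shape (the names and the in-memory backing here are illustrative assumptions, not the actual piece-framework interface, which persists entries per project):

```typescript
// Minimal sketch of a key/value store like the engine's Storage Service.
// Illustrative only: the real service persists entries and scopes them
// per project/flow; this in-memory Map version just shows the shape.
interface KeyValueStore {
  put(key: string, value: unknown): Promise<void>;
  get<T>(key: string): Promise<T | null>;
  delete(key: string): Promise<void>;
}

class InMemoryStore implements KeyValueStore {
  private readonly entries = new Map<string, unknown>();

  async put(key: string, value: unknown): Promise<void> {
    this.entries.set(key, value);
  }

  async get<T>(key: string): Promise<T | null> {
    return this.entries.has(key) ? (this.entries.get(key) as T) : null;
  }

  async delete(key: string): Promise<void> {
    this.entries.delete(key);
  }
}

// Example: a piece remembering the last cursor it processed.
async function demo(): Promise<string | null> {
  const store: KeyValueStore = new InMemoryStore();
  await store.put("lastCursor", "2024-01-01T00:00:00Z");
  return store.get<string>("lastCursor");
}
demo().then((v) => console.log(v));
```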
activepieces-fork/docs/install/architecture/overview.mdx (new file, 67 lines)
@@ -0,0 +1,67 @@
---
title: "Overview"
description: ""
icon: "cube"
---

This page describes the main components of Activepieces, focusing on workflow execution.

## Components

![Architecture](/resources/screenshots/architecture.png)

**Activepieces:**

- **App**: The main application, which organizes everything from APIs to scheduled jobs.
- **Worker**: Polls for new jobs, executes the flows with the engine (ensuring proper sandboxing), and sends the results back to the app through the API.
- **Engine**: TypeScript code that parses the flow JSON and executes it. It is compiled into a single JS file.
- **UI**: The frontend, written in React.

**Third Party:**

- **Postgres**: The main database for Activepieces.
- **Redis**: Powers the queue via [BullMQ](https://docs.bullmq.io/).
## Reliability & Scalability
|
||||
|
||||
<Tip>
|
||||
Postgres and Redis availability is outside the scope of this documentation, as many cloud providers already implement best practices to ensure their availability.
|
||||
</Tip>
|
||||
|
||||
- **Webhooks**:
|
||||
All webhooks are sent to the Activepieces app, which performs basic validation and adds them to the queue. In case of a spike, webhooks will be added to the queue.
|
||||
|
||||
- **Polling Trigger**:
|
||||
All recurring jobs are added to Redis. In case of a failure, the missed jobs will be executed again.
|
||||
|
||||
- **Flow Execution**:
|
||||
Workers poll jobs from the queue. In the event of a spike, the flow execution will still work but may be delayed depending on the size of the spike.
|
||||
|
||||
To scale Activepieces, you typically need to increase the replicas of either workers, the app, or the Postgres database. A small Redis instance is sufficient as it can handle thousands of jobs per second and rarely acts as a bottleneck.
|
||||
|
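The buffering behaviour described above can be sketched with a minimal in-memory queue, a stand-in for the real Redis/BullMQ queue (the types and names here are illustrative, not the Activepieces job model):

```typescript
// Illustrative in-memory stand-in for the Redis/BullMQ queue: webhooks are
// enqueued immediately, and workers drain them at their own pace, so a
// spike only delays processing instead of dropping requests.
type WebhookJob = { flowId: string; payload: unknown };

class JobQueue {
  private readonly jobs: WebhookJob[] = [];

  enqueue(job: WebhookJob): void {
    this.jobs.push(job); // app side: validate, then add to the queue
  }

  poll(): WebhookJob | undefined {
    return this.jobs.shift(); // worker side: FIFO polling
  }

  get depth(): number {
    return this.jobs.length;
  }
}

const queue = new JobQueue();
// A spike of 3 webhooks arrives faster than workers can run them:
for (let i = 0; i < 3; i++) queue.enqueue({ flowId: "flow-1", payload: { i } });
const next = queue.poll(); // a worker picks up the oldest job first
console.log(queue.depth); // 2 jobs still buffered, none lost
```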
## Repository Structure

The repository is structured as a monorepo using the NX build system, with TypeScript as the primary language. It is divided into several packages:

```
.
├── packages
│   ├── react-ui
│   ├── server
│   │   ├── api
│   │   ├── worker
│   │   ├── shared
│   │   └── ee
│   ├── engine
│   ├── pieces
│   └── shared
```

- `react-ui`: This package contains the user interface, implemented with the React framework.
- `server-api`: This package contains the main application, written in TypeScript with the Fastify framework.
- `server-worker`: This package contains the logic for accepting flow jobs and executing them with the engine.
- `server-shared`: This package contains logic shared between the worker and the app.
- `engine`: This package contains the logic for flow execution within the sandbox.
- `pieces`: This package contains the implementations of triggers and actions for third-party apps.
- `shared`: This package contains shared data models and helper functions used by the other packages.
- `ee`: This package contains features that are only available in the paid edition.
activepieces-fork/docs/install/architecture/performance.mdx (new file, 107 lines)
@@ -0,0 +1,107 @@
---
title: "Benchmarking"
icon: "chart-line"
---

## Performance

On average, self-hosted Activepieces can handle 95 flow executions per second on a single instance (including PostgreSQL and Redis) with under 300 ms latency. It can scale much further by increasing instance resources and/or adding more instances.

The result of **5000** requests with a concurrency of **25**:
```
This is ApacheBench, Version 2.3 <$Revision: 1913912 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/

Benchmarking localhost (be patient)
Completed 500 requests
Completed 1000 requests
Completed 1500 requests
Completed 2000 requests
Completed 2500 requests
Completed 3000 requests
Completed 3500 requests
Completed 4000 requests
Completed 4500 requests
Completed 5000 requests
Finished 5000 requests


Server Software:
Server Hostname:        localhost
Server Port:            4200

Document Path:          /api/v1/webhooks/GMtpNwDsy4mbJe3369yzy/sync
Document Length:        16 bytes

Concurrency Level:      25
Time taken for tests:   52.087 seconds
Complete requests:      5000
Failed requests:        0
Total transferred:      1375000 bytes
HTML transferred:       80000 bytes
Requests per second:    95.99 [#/sec] (mean)
Time per request:       260.436 [ms] (mean)
Time per request:       10.417 [ms] (mean, across all concurrent requests)
Transfer rate:          25.78 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    0   0.0      0       1
Processing:    32  260  23.8    254     756
Waiting:       31  260  23.8    254     756
Total:         32  260  23.8    254     756

Percentage of the requests served within a certain time (ms)
  50%    254
  66%    261
  75%    267
  80%    272
  90%    289
  95%    306
  98%    327
  99%    337
 100%    756 (longest request)
```

#### Benchmarking

Here is how to reproduce the benchmark:
1. Run Activepieces with PostgreSQL and Redis, with the following environment variables:

```env
AP_EXECUTION_MODE=SANDBOX_CODE_ONLY
AP_FLOW_WORKER_CONCURRENCY=25
```

2. Create a flow with a Catch Webhook trigger and a Return Response action.

![Flow](/resources/screenshots/performance-flow.png)

3. Get the webhook URL from the webhook trigger and append `/sync` to it.
4. Install a benchmarking tool such as [ab](https://httpd.apache.org/docs/2.4/programs/ab.html):

```bash
sudo apt-get install apache2-utils
```

5. Run the benchmark:

```bash
ab -c 25 -n 5000 http://localhost:4200/api/v1/webhooks/GMtpNwDsy4mbJe3369yzy/sync
```

6. Check the results.
Instance specs used to obtain the above results:

- 16 GB RAM
- AMD Ryzen 7 8845HS (8 cores, 16 threads)
- Ubuntu 24.04 LTS
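As a sanity check, the headline numbers in the ab output are mutually consistent and follow directly from the request count, concurrency, and wall-clock time:

```typescript
// Cross-check the summary arithmetic from the ab output above.
const requests = 5000;
const concurrency = 25;
const totalSeconds = 52.087;

// Requests per second: total requests / wall-clock time.
const rps = requests / totalSeconds; // ≈ 95.99

// Mean time per request, as seen by one of the 25 concurrent clients.
const msPerRequest = (totalSeconds / requests) * concurrency * 1000; // ≈ 260.4 ms

// Mean time per request across all clients combined.
const msAcrossAll = (totalSeconds / requests) * 1000; // ≈ 10.42 ms

console.log(rps.toFixed(2), msPerRequest.toFixed(1), msAcrossAll.toFixed(2));
```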

<Tip>
These benchmarks are based on running Activepieces in `SANDBOX_CODE_ONLY` mode. This does **not** represent the performance of Activepieces Cloud, which uses a different sandboxing mechanism to support multi-tenancy. For more information, see [Sandboxing](/install/architecture/workers#sandboxing).
</Tip>
activepieces-fork/docs/install/architecture/workers.mdx (new file, 98 lines)
@@ -0,0 +1,98 @@
---
title: "Workers & Sandboxing"
icon: "gears"
---

This component is responsible for polling jobs from the app, preparing the sandbox, and executing the jobs with the engine.

## Jobs

There are three types of jobs:

- **Recurring Jobs**: Polling/schedule trigger jobs for active flows.
- **Flow Jobs**: Flows that are currently being executed.
- **Webhook Jobs**: Webhooks that still need to be ingested, as third-party webhooks can map to multiple flows or require mapping.

<Tip>
This documentation does not discuss how the engine works beyond stating that it takes jobs and produces output. Please refer to [engine](./engine) for more information.
</Tip>
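The poll-and-execute cycle described above can be sketched roughly as follows (the `AppClient` interface and function names are hypothetical, not the actual Activepieces worker API):

```typescript
// Hypothetical sketch of one iteration of the worker's polling loop.
type JobType = "RECURRING" | "FLOW" | "WEBHOOK";
interface Job { id: string; type: JobType; data: unknown }

interface AppClient {
  pollJob(): Promise<Job | null>;                        // take the next job from the queue
  reportResult(id: string, ok: boolean): Promise<void>;  // send the result back to the app
}

async function runWorkerOnce(
  app: AppClient,
  execute: (job: Job) => Promise<void>, // hands the job to the engine in a sandbox
): Promise<boolean> {
  const job = await app.pollJob();
  if (!job) return false; // queue empty, nothing to do
  try {
    await execute(job);
    await app.reportResult(job.id, true);
  } catch {
    await app.reportResult(job.id, false);
  }
  return true;
}
```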

## Sandboxing

In Activepieces, the sandbox determines the environment in which the engine executes a flow. There are four types of sandboxes, each with different trade-offs:

<Snippet file="execution-mode.mdx" />
### No Sandboxing & V8 Sandboxing

The difference between the two modes lies in how code pieces are executed. For V8 sandboxing, we use [isolated-vm](https://www.npmjs.com/package/isolated-vm), which relies on V8 isolates to isolate code pieces.

These are the steps used to execute a flow:

<Steps>
  <Step title="Prepare Code Pieces">
    If the compiled code doesn't exist yet, it is built with bun and the necessary npm packages are prepared, if possible.
  </Step>
  <Step title="Install Pieces">
    Pieces are npm packages; we use `bun` to install them.
  </Step>
  <Step title="Execution">
    A pool of worker threads is kept warm, with the engine running and listening. Each thread executes one engine operation and sends back the result upon completion.
  </Step>
</Steps>
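As a rough illustration of evaluating a user code piece in a separate context: the sketch below uses Node's built-in `vm` module for brevity, which is explicitly *not* a security boundary; Activepieces uses isolated-vm precisely because it provides real V8 isolates with their own heaps.

```typescript
import vm from "node:vm";

// Rough illustration only. Node's built-in `vm` shares the host's heap and is
// NOT a security boundary; isolated-vm runs code in a separate V8 isolate.
// The context exposes only what we put in it: no require, no process.
const context = vm.createContext({ inputs: { a: 2, b: 3 } });
const userCode = "inputs.a + inputs.b"; // "browser-style" JS only
const result = vm.runInContext(userCode, context, { timeout: 1000 });
console.log(result); // 5
```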
#### Security

In a self-hosted environment, all piece installations are done by the **platform admin**. The pieces are assumed to be trusted, as they have full access to the machine.

Code pieces provided by the end user are isolated using V8, which restricts the user to browser-style JavaScript rather than Node.js with npm.

#### Performance

Flow execution is as fast as the JavaScript itself, although there is some overhead from polling the queue and preparing the files the first time a flow is executed.
#### Benchmark

TBD
### Kernel Namespaces Sandboxing

This consists of two steps: preparing the sandbox, and executing the flow.

#### Prepare the folder

Each flow has a folder containing everything required to execute it: the **engine**, **code pieces**, and **npm packages**.
<Steps>
  <Step title="Prepare Code Pieces">
    If the compiled code doesn't exist yet, it is compiled with the TypeScript compiler (tsc) and the necessary npm packages are prepared, if possible.
  </Step>
  <Step title="Install Pieces">
    Pieces are npm packages; we perform a simple check, and if they aren't present, we use `pnpm` to install them.
  </Step>
</Steps>
#### Execute Flow using Sandbox

In this mode, we use kernel namespaces to isolate everything (file system, memory, CPU). The folder prepared earlier is bound as a **read-only** directory.

We then use the command line to spin up the isolation with a new Node.js process, something like this:

```bash
./isolate node path/to/flow.js --- rest of args
```
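Conceptually, the wrapper above spawns a fresh Node.js process per execution. A minimal sketch of that shape, using a bare `node -e` child process (illustrative only, without the namespace isolation or the `./isolate` wrapper):

```typescript
import { spawnSync } from "node:child_process";

// Illustrative only: in this mode every flow run pays a full Node.js cold
// boot inside the isolate wrapper. Spawning a bare `node -e` shows the shape
// (the real command is `./isolate node path/to/flow.js ...`).
const started = Date.now();
const run = spawnSync(process.execPath, ["-e", 'console.log("flow done")'], {
  encoding: "utf8",
});
const coldBootMs = Date.now() - started; // non-trivial even for a no-op flow
console.log(run.stdout.trim(), coldBootMs);
```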
#### Security

Each flow execution is isolated in its own namespaces, which means pieces run in separate processes and namespaces. The user can safely run bash scripts and use the file system, since it is limited and removed after execution. In this mode, the user can use any **npm package** in their code pieces.

#### Performance

This mode is **slow** and **CPU intensive**. The reason is the **cold boot** of Node.js: each flow execution requires a new **Node.js** process, which consumes significant resources and takes time to compile the code and start executing.
#### Benchmark

TBD