Add Activepieces integration for workflow automation

- Add Activepieces fork with SmoothSchedule custom piece
- Create integrations app with Activepieces service layer
- Add embed token endpoint for iframe integration
- Create Automations page with embedded workflow builder
- Add sidebar visibility fix for embed mode
- Add list inactive customers endpoint to Public API
- Include SmoothSchedule triggers: event created/updated/cancelled
- Include SmoothSchedule actions: create/update/cancel events, list resources/services/customers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
poduck
2025-12-18 22:59:37 -05:00
parent 9848268d34
commit 3aa7199503
16292 changed files with 1284892 additions and 4708 deletions

---
title: Handling Downtime
icon: turn-down
---
![Downtime Incident](https://media3.giphy.com/media/v1.Y2lkPTc5MGI3NjExdTZnbGxjc3k5d3NxeXQwcmhxeTRsbnNybnd4NG41ZnkwaDdsa3MzeSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/2UCt7zbmsLoCXybx6t/giphy.gif)
## 📋 What You Need Before Starting
Make sure these are ready:
- **[Incident.io Setup](../playbooks/setup-incident-io)**: For managing incidents.
- **ClickStack**: For checking logs and errors.
- **Checkly Debugging**: For testing and monitoring.
---
## 🚨 Stay Calm and Take Action
<Warning>
Don't panic! Follow these steps to fix the issue.
</Warning>
1. **Tell Your Users**:
- Let your users know there's an issue. Post on [Community](https://community.activepieces.com) and Discord.
- Example message: *“We're looking into a problem with our services. Thanks for your patience!”*
2. **Find Out What's Wrong**:
- Gather details. What's not working? When did it start?
3. **Update the Status Page**:
- Use [Incident.io](https://incident.io) to update the status page. Set it to *“Investigating”* or *“Partial Outage”*.
---
## 🔍 Check for Infrastructure Problems
1. **Look at DigitalOcean**:
- Check if the CPU, memory, or disk usage is too high.
- If it is:
- **Increase the machine size** temporarily to fix the issue.
- Keep looking for the root cause.
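If you can SSH into the affected machine, a quick shell triage can confirm the pressure before you resize anything. This is a sketch for a Linux host; the numbers you act on are a judgment call, not official thresholds:

```shell
# Quick resource triage on a Linux host. Thresholds are up to you;
# these commands only surface the raw numbers.
cores=$(getconf _NPROCESSORS_ONLN)
load=$(cut -d ' ' -f 1 /proc/loadavg)
echo "1-min load average: ${load} across ${cores} cores"

# Disk usage on the root filesystem
df -h / | tail -n 1

# Memory usage summary (MiB)
free -m | head -n 2
```

A load average well above the core count, a nearly full root filesystem, or exhausted memory each point to a different fix, so check all three before resizing.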
---
## 📜 Check Logs and Errors
1. **Use ClickStack**:
- Go to [https://watch.activepieces.com](https://watch.activepieces.com).
- Search for recent errors in the logs.
- Credentials are in the [Master Playbook](https://docs.google.com/document/d/15OwWnRwkhlx9l-EN5dXFoysw0OoxC0lVvnjbdbId4BE/edit?pli=1&tab=t.4lk480a2s8yh#heading=h.1qegnmb1w65k).
2. **Check Sentry**:
- Look for grouped errors (errors that happen a lot).
- Try to **reproduce the error** and fix it if possible.
---
## 🛠️ Debugging with Checkly
1. **Check Checkly Logs**:
- Watch the **video recordings** of failed checks to see what went wrong.
- If the issue is a **timeout**, it might mean there's a bigger performance problem.
- If it's an E2E test failure due to UI changes, it's likely not urgent.
- Fix the test and the issue will go away.
---
## 🚨 When Should You Ask for Help?
Ask for help right away if:
- Flows are failing.
- The whole platform is down.
- There's a lot of data loss or corruption.
- You're not sure what is causing the issue.
- You've spent **more than 5 minutes** and still don't know what's wrong.
💡 **How to Ask for Help**:
- Use **Incident.io** to create a **critical alert**.
- Go to the **Slack incident channel** and escalate the issue to the engineering team.
<Warning>
If you're unsure, **ask for help!** It's better to be safe than sorry.
</Warning>
---
## 💡 Helpful Tips
1. **Stay Organized**:
- Keep a list of steps to follow during downtime.
- Write down everything you do so you can refer to it later.
2. **Communicate Clearly**:
- Keep your team and users updated.
- Use simple language in your updates.
3. **Take Care of Yourself**:
- If you feel stressed, take a short break. Grab a coffee ☕, take a deep breath, and tackle the problem step by step.

---
title: "Engineering Workflow"
icon: 'lightbulb'
---
Activepieces works in one-week sprints: priorities change fast, so the sprint has to be short enough to adapt.
## Sprints
Sprints are shared publicly on our GitHub account, giving everyone visibility into what we are working on.
* There should be a GitHub issue for the sprint set up in advance that outlines the changes.
* Each _individual_ should come prepared with specific suggestions for what they will work on over the next sprint. **If you're in an engineering role, no one will dictate what you build; it's up to you to drive this.**
* Teams generally meet once a week to pick the **priorities** together.
* Everyone in the team should attend the sprint planning.
* Anyone can comment on the sprint issue before or after the sprint.
## Pull Requests
When it comes to code review, we have a few guidelines to ensure efficiency:
- Create a pull request in draft state as soon as possible.
- Be proactive and review other people's pull requests. Don't wait for someone to ask for your review; it's your responsibility.
- Assign only one reviewer to your pull request.
- Add the PR to the current project (sprint) so we can keep track of unmerged PRs at the end of each sprint.
- It is the **responsibility** of the **PR owner** to draft the test scenarios within the PR description. Upon review, the reviewer may assume that these scenarios have been tested and provide additional suggestions for scenarios.
- Large, incomplete features should be broken down into smaller tasks and continuously merged into the main branch.
## Planning is everyone's job.
Every engineer is responsible for discovering bugs/opportunities and bringing them up in the sprint to convert them into actionable tasks.

---
title: 'On-Call'
icon: 'phone'
---
## Prerequisites:
- [Setup Incident IO](../playbooks/setup-incident-io)
## Why On-Call?
We need to ensure that, at any given time, there is **exactly one person** who is the main point of contact for users and the **first responder** for issues. It's also a great way to learn about the product and the users, and to have some fun.
<Tip>
You can listen to [Queen - Under Pressure](https://www.youtube.com/watch?v=a01QQZyl-_I) while on-call, it's fun and motivating.
</Tip>
<Tip>
If you ever feel burned out in the middle of your rotation, please reach out to the team and we will help you with the rotation or take over the responsibility.
</Tip>
## On-Call Schedule
The on-call rotation is managed through Incident.io, with each engineer taking a one-week shift. You can:
- View the current schedule and upcoming rotations on [Incident.io On-Call Schedule](https://app.incident.io/activepieces/on-call/schedules)
- Add the schedule to your Google Calendar using [this link](https://calendar.google.com/calendar/r?cid=webcal://app.incident.io/api/schedule_feeds/cc024d13704b618cbec9e2c4b2415666dfc8b1efdc190659ebc5886dfe2a1e4b)
<Warning>
Make sure to update the on-call schedule in Incident.io if you cannot be available during your assigned rotation. This ensures alerts are routed to the correct person and maintains our incident response coverage.
To modify the schedule:
1. Go to [Incident.io On-Call Schedule](https://app.incident.io/activepieces/on-call/schedules)
2. Find your rotation slot
3. Click "Override schedule" to mark your unavailability
4. Coordinate with the team to find coverage for your slot
</Warning>
## What it means to be on-call
The primary objective of being on-call is to triage issues and assist users. It is not about fixing the issues or coding missing features. Delegation is key whenever possible.
You are responsible for the following:
* Respond to Slack messages as soon as possible, referring to the [customer support guidelines](/handbook/customer-support/overview).
* Check [community.activepieces.com](https://community.activepieces.com) for any new issues or to learn about existing issues.
* Monitor your Incident.io notifications and respond promptly when paged.
<Tip>
**Friendly Tip #1**: always escalate to the team if you are unsure what to do.
</Tip>
## How do you get paged?
Monitor and respond to incidents that come through these channels:
#### Slack Fire Emoji (🔥)
When a customer reports an issue in Slack and someone reacts with 🔥, you'll be automatically paged and a dedicated incident channel will be created.
#### Automated Alerts
Watch for notifications from:
- Digital Ocean about CPU, Memory, or Disk outages
- Checkly about e2e test failures or website downtime

---
title: "Onboarding Checklist"
icon: 'lightbulb'
---
🎉 Welcome to Activepieces!
This guide provides a checklist for the new hire onboarding process.
---
## 📧 Essentials
- [ ] Set up your @activepieces.com email account and set up 2FA
- [ ] Confirm access to our private Discord server.
- [ ] Get invited to the Activepieces GitHub organization and set up 2FA.
- [ ] Get assigned an onboarding buddy.
<Tip>
During your first two months, we'll schedule 1:1 meetings every two weeks to ensure you're progressing well and to maintain open communication in both directions.
After two months, we will decrease the frequency of the 1:1s to once a month.
</Tip>
<Tip>
If you don't set up 2FA, we will be alerted from a security perspective.
</Tip>
---
### Engineering Checklist
- [ ] Set up your development environment using our setup guide
- [ ] Learn the repository structure and our tech stack (Fastify, React, PostgreSQL, SQLite, Redis)
- [ ] Understand the key database tables (Platform, Projects, Flows, Connections, Users)
- [ ] Complete your first "warmup" task within your first day (it's our tradition!)
---
## 🌟 Tips for Success
- Don't hesitate to ask questions—the team is especially helpful during your first days
- Take time to understand the product from a business perspective
- Work closely with your onboarding buddy to get up to speed quickly
- Review our documentation, explore the codebase, and check out community resources, even outside your scope.
- Provide your ideas and feedback regularly
---
Welcome again to the team. We can't wait to see the impact you'll make at Activepieces! 😉

---
title: "Stack & Tools"
icon: "hammer"
---
## Language
Activepieces uses **TypeScript** as its one and only language.
Unifying on a single language lets us break data models and features into packages that are shared across its components (worker / frontend / backend).
It also means fewer tools to learn and perfect across all our packages.
## Frontend
- Web framework/library: [React](https://reactjs.org/)
- Layout/components: [shadcn](https://shadcn.com/) / Tailwind
## Backend
- Framework: [Fastify](https://www.fastify.io/)
- Database: [PostgreSQL](https://www.postgresql.org/)
- Task Queuing: [Redis](https://redis.io/)
- Task Worker: [BullMQ](https://github.com/taskforcesh/bullmq)
## Testing
- Unit & Integration Tests: [Jest](https://jestjs.io/)
- E2E Test: [Playwright](https://playwright.dev/)
## Additional Tools
- Application monitoring: [Sentry](https://sentry.io/welcome/)
- CI/CD: [GitHub Actions](https://github.com/features/actions) / [Depot](https://depot.dev/) / [Kamal](https://kamal-deploy.org/)
- Containerization: [Docker](https://www.docker.com/)
- Linter: [ESLint](https://eslint.org/)
- Logging: [OpenTelemetry](https://opentelemetry.io/)
- Building: [NX Monorepo](https://nx.dev/)
## Adding a New Tool
Adding a new tool isn't a simple choice. A simple choice is one that's easy to do or undo, or one that only affects your work and not others'.
We avoid adding new dependencies in order to keep setup easy, which increases adoption. Having more dependencies means more moving parts and more to support.
If you're thinking about a new tool, ask yourself these:
- Is this tool open source? How can we give it to customers who use their own servers?
- What does it fix, and why do we need it now?
- Can we use what we already have instead?
These questions only apply to required services for everyone. If this tool speeds up your own work, we don't need to think so hard.

View File

@@ -0,0 +1,8 @@
---
title: "Overview"
icon: "code"
---
Welcome to the engineering team! This section contains essential information to help you get started, including our development processes, guidelines, and practices. We're excited to have you on board.

---
title: "Database Migrations"
description: "Guide for creating database migrations in Activepieces"
icon: "database"
---
Activepieces uses TypeORM as its database driver in Node.js. We support two database types across different editions of our platform.
The database migration files contain both what to do to migrate (up method) and what to do when rolling back (down method).
<Tip>
Read more about TypeORM migrations here:
https://orkhan.gitbook.io/typeorm/docs/migrations
</Tip>
## Database Support
- PostgreSQL
- PGlite
<Tip>
**Why Do we have PGlite?**
We support PGlite to simplify development and self-hosting. It's particularly helpful for:
- Developers creating pieces who want a quick setup
- Self-hosters on platforms that manage Docker images but don't support Docker Compose.
PGlite is a lightweight PostgreSQL implementation that runs embedded, so migrations are compatible with PostgreSQL.
</Tip>
## Editions
- **Enterprise & Cloud Edition** (Must use PostgreSQL)
- **Community Edition** (Can use PostgreSQL or PGlite)
### How To Generate
<Steps>
<Step title="Setup AP_DB_TYPE">
Set the `AP_DB_TYPE` environment variable to `POSTGRES`, after making sure the database is at its latest state by running Activepieces first.
</Step>
<Step title="Generate Migration">
Run the migration generation command:
```bash
nx db-migration server-api --name=<MIGRATION_NAME>
```
Replace `<MIGRATION_NAME>` with a descriptive name for your migration.
</Step>
<Step title="Review Migration File">
The command will generate a new migration file in `packages/server/api/src/app/database/migration/postgres/`.
Review the generated file and register it in `postgres-connection.ts`.
</Step>
</Steps>
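Registering the migration usually amounts to appending the class to a migrations array. A minimal sketch of the idea, where the array name and surrounding layout are assumptions rather than the exact contents of `postgres-connection.ts`:

```typescript
// Hypothetical sketch: TypeORM applies migrations in array order,
// so a newly generated migration class is appended at the end.
type MigrationClass = new () => { name: string };

class AddMyIndex1234567890 {
  name = 'AddMyIndex1234567890';
}

const postgresMigrations: MigrationClass[] = [
  // ...existing migrations stay in their original order
  AddMyIndex1234567890, // newly generated migration goes last
];

console.log(postgresMigrations.map((m) => new m().name).join(','));
```

Keeping the new class last preserves the order already recorded in the `migrations` table for existing deployments.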
## PGlite Compatibility
While PGlite is mostly PostgreSQL-compatible, some features are not supported. When using features like `CONCURRENTLY` for index operations, you need to conditionally handle PGlite:
```typescript
import { AppSystemProp } from '@activepieces/server-shared'
import { MigrationInterface, QueryRunner } from 'typeorm'

import { DatabaseType, system } from '../../../helper/system/system'

const databaseType = system.get(AppSystemProp.DB_TYPE)
const isPGlite = databaseType === DatabaseType.PGLITE

export class AddMyIndex1234567890 implements MigrationInterface {
    name = 'AddMyIndex1234567890'
    transaction = false // Required when using CONCURRENTLY

    public async up(queryRunner: QueryRunner): Promise<void> {
        if (isPGlite) {
            await queryRunner.query(`CREATE INDEX "idx_name" ON "table" ("column")`)
        } else {
            await queryRunner.query(`CREATE INDEX CONCURRENTLY "idx_name" ON "table" ("column")`)
        }
    }

    public async down(queryRunner: QueryRunner): Promise<void> {
        if (isPGlite) {
            await queryRunner.query(`DROP INDEX "idx_name"`)
        } else {
            await queryRunner.query(`DROP INDEX CONCURRENTLY "idx_name"`)
        }
    }
}
```
<Warning>
`CREATE INDEX CONCURRENTLY` and `DROP INDEX CONCURRENTLY` are not supported in PGlite because PGLite is a single user/connection database. Always add a check for PGlite when using these operations.
</Warning>
<Tip>
Always test your migrations by running them both up and down to ensure they work as expected.
</Tip>

---
title: "E2E Tests"
icon: 'clipboard-list'
---
## Overview
Our e2e test suite uses Playwright to ensure critical user workflows function correctly across the application. The tests are organized using the Page Object Model pattern to maintain clean, reusable, and maintainable test code. This playbook outlines the structure, conventions, and best practices for writing e2e tests.
## Project Structure
```
packages/tests-e2e/
├── scenarios/               # Test files (*.spec.ts)
├── pages/                   # Page Object Models
│   ├── base.ts              # Base page class
│   ├── index.ts             # Page exports
│   ├── authentication.page.ts
│   ├── builder.page.ts
│   ├── flows.page.ts
│   └── agent.page.ts
├── helper/                  # Utilities and configuration
│   └── config.ts            # Environment configuration
├── playwright.config.ts     # Playwright configuration
└── project.json             # Nx project configuration
```
This playbook covers the Page Object Model structure, test organization, configuration management, and best practices for maintaining reliable e2e tests, following the established patterns in our codebase.
## Page Object Model Pattern
### Base Page Structure
All page objects extend the `BasePage` class and follow a consistent structure:
```typescript
export class YourPage extends BasePage {
  url = `${configUtils.getConfig().instanceUrl}/your-path`;

  getters = {
    // Locator functions that return page elements
    elementName: (page: Page) => page.getByRole('button', { name: 'Button Text' }),
  };

  actions = {
    // Action functions that perform user interactions
    performAction: async (page: Page, params: { param1: string }) => {
      // Implementation
    },
  };
}
```
### Page Object Guidelines
#### ❌ Don't do
```typescript
// Direct element selection in test files
test('should create flow', async ({ page }) => {
  await page.getByRole('button', { name: 'Create Flow' }).click();
  await page.getByText('From scratch').click();
  // Test logic mixed with element selection
});
```
#### ✅ Do
```typescript
// flows.page.ts
export class FlowsPage extends BasePage {
  getters = {
    createFlowButton: (page: Page) => page.getByRole('button', { name: 'Create Flow' }),
    fromScratchButton: (page: Page) => page.getByText('From scratch'),
  };

  actions = {
    newFlowFromScratch: async (page: Page) => {
      await this.getters.createFlowButton(page).click();
      await this.getters.fromScratchButton(page).click();
    },
  };
}

// integration.spec.ts
test('should create flow', async ({ page }) => {
  await flowsPage.actions.newFlowFromScratch(page);
  // Clean test logic focused on behavior
});
```
## Test Organization
### Test File Structure
Test files should be organized by feature or workflow:
```typescript
import { test, expect } from '@playwright/test';
import {
  AuthenticationPage,
  FlowsPage,
  BuilderPage
} from '../pages';
import { configUtils } from '../helper/config';

test.describe('Feature Name', () => {
  let authenticationPage: AuthenticationPage;
  let flowsPage: FlowsPage;
  let builderPage: BuilderPage;

  test.beforeEach(async () => {
    // Initialize page objects
    authenticationPage = new AuthenticationPage();
    flowsPage = new FlowsPage();
    builderPage = new BuilderPage();
  });

  test('should perform specific workflow', async ({ page }) => {
    // Test implementation
  });
});
```
### Test Naming Conventions
- Use descriptive test names that explain the expected behavior
- Follow the pattern: `should [action] [expected result]`
- Include context when relevant
```typescript
// Good test names
test('should send Slack message via flow', async ({ page }) => {});
test('should handle webhook with dynamic parameters', async ({ page }) => {});
test('should authenticate user with valid credentials', async ({ page }) => {});
// Avoid vague names
test('should work', async ({ page }) => {});
test('test flow', async ({ page }) => {});
```
## Configuration Management
### Environment Configuration
Use the centralized config utility to handle different environments:
```typescript
// helper/config.ts
export const configUtils = {
  getConfig: (): Config => {
    return process.env.E2E_INSTANCE_URL ? prodConfig : localConfig;
  },
};

// Usage in pages
export class AuthenticationPage extends BasePage {
  url = `${configUtils.getConfig().instanceUrl}/sign-in`;
}
```
### Environment Variables
Required environment variables for CI/CD:
- `E2E_INSTANCE_URL`: Target application URL
- `E2E_EMAIL`: Test user email
- `E2E_PASSWORD`: Test user password
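The config snippet above references `localConfig` and `prodConfig` without showing their shape. A hedged sketch of how these variables could plug into a `Config` object; the `Config` fields match the usage shown above, but the local URL and credentials here are placeholder assumptions, not the real defaults:

```typescript
// Sketch of helper/config.ts internals. Local values are illustrative
// placeholders; production values come from the E2E_* env vars.
interface Config {
  instanceUrl: string;
  email: string;
  password: string;
}

const localConfig: Config = {
  instanceUrl: 'http://localhost:4200', // assumed local dev URL
  email: 'dev@activepieces.com',        // placeholder credentials
  password: 'local-password',
};

const prodConfig: Config = {
  instanceUrl: process.env.E2E_INSTANCE_URL ?? '',
  email: process.env.E2E_EMAIL ?? '',
  password: process.env.E2E_PASSWORD ?? '',
};

// Falls back to local settings when E2E_INSTANCE_URL is not set.
const configUtils = {
  getConfig: (): Config =>
    process.env.E2E_INSTANCE_URL ? prodConfig : localConfig,
};

console.log(configUtils.getConfig().instanceUrl);
```

Keying the environment switch on a single variable (`E2E_INSTANCE_URL`) keeps local runs zero-config while CI only has to export three values.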
## Writing Effective Tests
### Test Structure
Follow this pattern for comprehensive tests:
```typescript
test('should complete user workflow', async ({ page }) => {
  // 1. Set up test data and timeouts
  test.setTimeout(120000);
  const config = configUtils.getConfig();

  // 2. Authentication (if required)
  await authenticationPage.actions.signIn(page, {
    email: config.email,
    password: config.password
  });

  // 3. Navigate to relevant page
  await flowsPage.actions.navigate(page);

  // 4. Clean up existing data (if needed)
  await flowsPage.actions.cleanupExistingFlows(page);

  // 5. Perform the main workflow
  await flowsPage.actions.newFlowFromScratch(page);
  await builderPage.actions.waitFor(page);
  await builderPage.actions.selectInitialTrigger(page, {
    piece: 'Schedule',
    trigger: 'Every Hour'
  });

  // 6. Add assertions and validations
  await builderPage.actions.testFlowAndWaitForSuccess(page);

  // 7. Clean up (if needed)
  await builderPage.actions.exitRun(page);
});
```
### Wait Strategies
Use appropriate wait strategies instead of fixed timeouts:
```typescript
// Good - Wait for specific conditions
await page.waitForURL('**/flows/**');
await page.waitForSelector('.react-flow__nodes', { state: 'visible' });
await page.waitForFunction(() => {
  const element = document.querySelector('.target-element');
  return element && element.textContent?.includes('Expected Text');
}, { timeout: 10000 });

// Avoid - Fixed timeouts
await page.waitForTimeout(5000);
```
### Error Handling
Implement proper error handling and cleanup:
```typescript
test('should handle errors gracefully', async ({ page }) => {
  try {
    await flowsPage.actions.navigate(page);
    // Test logic
  } catch (error) {
    // Log error details
    console.error('Test failed:', error);
    // Take screenshot for debugging
    await page.screenshot({ path: 'error-screenshot.png' });
    throw error;
  } finally {
    // Clean up resources
    await flowsPage.actions.cleanupExistingFlows(page);
  }
});
```
## Best Practices
### Element Selection
Prefer semantic selectors over CSS selectors:
```typescript
// Good - Semantic selectors
getters = {
  createButton: (page: Page) => page.getByRole('button', { name: 'Create Flow' }),
  emailField: (page: Page) => page.getByPlaceholder('email@example.com'),
  searchInput: (page: Page) => page.getByRole('textbox', { name: 'Search' }),
};

// Avoid - Fragile CSS selectors
getters = {
  createButton: (page: Page) => page.locator('button.btn-primary'),
  emailField: (page: Page) => page.locator('input[type="email"]'),
};
```
### Test Data Management
Use dynamic test data to avoid conflicts:
```typescript
// Good - Dynamic test data
const runVersion = Math.floor(Math.random() * 100000);
const uniqueFlowName = `Test Flow ${Date.now()}`;
// Avoid - Static test data
const flowName = 'Test Flow';
```
### Assertions
Use meaningful assertions that verify business logic:
```typescript
// Good - Business logic assertions
await builderPage.actions.testFlowAndWaitForSuccess(page);
const response = await apiRequest.get(urlWithParams);
const body = await response.json();
expect(body.targetRunVersion).toBe(runVersion.toString());
// Avoid - Implementation details
expect(await page.locator('.success-message').isVisible()).toBe(true);
```
## Running Tests
### Local Development & Debugging with Checkly
We use [Checkly](https://checklyhq.com/) to run and debug E2E tests. Checkly provides video recordings for each test run, making it easy to debug failures.
```bash
# Run tests with Checkly (includes video reporting)
npx nx run tests-e2e:test-checkly
```
- Test results, including video recordings, are available in the Checkly dashboard.
- You can debug failed tests by reviewing the video and logs provided by Checkly.
### Deploying Tests
Manual deployment is rarely needed, but you can trigger it with:
```bash
npx nx run tests-e2e:deploy-checkly
```
<Info>
Tests are deployed to Checkly automatically after successful test runs in the CI pipeline.
</Info>
## Debugging Tests
### 1. Checkly Videos and Reports
When running tests with Checkly, each test execution is recorded and detailed reports are generated. This is the fastest way to debug failures:
- **Video recordings**: Watch the exact browser session for any test run.
- **Step-by-step logs**: Review detailed logs and screenshots for each test step.
- **Access**: Open the Checkly dashboard and navigate to the relevant test run to view videos and reports.
### 2. VSCode Extension
For the best local debugging experience, install the **Playwright Test for VSCode** extension:
1. Open VSCode Extensions (Ctrl+Shift+X)
2. Search for "Playwright Test for VSCode"
3. Install the extension by Microsoft
**Benefits:**
- Debug tests directly in VSCode with breakpoints
- Step-through test execution
- View test results and traces in the Test Explorer
- Auto-completion for Playwright APIs
- Integrated test runner
### 3. Debugging Tips
1. **Use Checkly dashboard**: Review videos and logs for failed tests.
2. **Use VSCode Extension**: Set breakpoints directly in your test files.
3. **Step Through**: Use F10 (step over) and F11 (step into) in debug mode.
4. **Inspect Elements**: Use `await page.pause()` to pause execution and inspect the page.
5. **Console Logs**: Add `console.log()` statements to track execution flow.
6. **Manual Screenshots**: Take screenshots at critical points for visual debugging.
```typescript
test('should debug workflow', async ({ page }) => {
  await page.goto('/flows');

  // Pause execution for manual inspection
  await page.pause();

  // Take screenshot for debugging
  await page.screenshot({ path: 'debug-screenshot.png' });

  // Continue with test logic
  await flowsPage.actions.newFlowFromScratch(page);
});
```
## Common Patterns
### Authentication Flow
```typescript
test('should authenticate user', async ({ page }) => {
  const config = configUtils.getConfig();
  await authenticationPage.actions.signIn(page, {
    email: config.email,
    password: config.password
  });
  await agentPage.actions.waitFor(page);
});
```
### Flow Creation and Testing
```typescript
test('should create and test flow', async ({ page }) => {
  await flowsPage.actions.navigate(page);
  await flowsPage.actions.cleanupExistingFlows(page);
  await flowsPage.actions.newFlowFromScratch(page);
  await builderPage.actions.waitFor(page);
  await builderPage.actions.selectInitialTrigger(page, {
    piece: 'Schedule',
    trigger: 'Every Hour'
  });
  await builderPage.actions.testFlowAndWaitForSuccess(page);
});
```
### API Integration Testing
```typescript
test('should handle webhook integration', async ({ page }) => {
  const apiRequest = page.context().request; // request is a property, no await needed
  const response = await apiRequest.get(urlWithParams);
  const body = await response.json();
  expect(body.targetRunVersion).toBe(expectedValue);
});
```
## Maintenance Guidelines
### Updating Selectors
When UI changes occur:
1. Update page object getters with new selectors
2. Test the changes locally
3. Update related tests if necessary
4. Ensure all tests pass before merging
### Adding New Tests
1. Create or update relevant page objects
2. Write test scenarios in appropriate spec files
3. Follow the established patterns and conventions
4. Add proper error handling and cleanup
5. Test locally before submitting
### Performance Considerations
- Keep tests focused and avoid unnecessary steps
- Use appropriate timeouts (not too short, not too long)
- Clean up test data to avoid conflicts
- Group related tests in the same describe block

---
title: "Frontend Best Practices"
icon: 'lightbulb'
---
## Overview
Our frontend codebase is large and constantly growing, with multiple developers contributing to it. Establishing consistent rules across key areas like data fetching and state management will make the code easier to follow, refactor, and test. It will also help newcomers understand existing patterns and adopt them quickly.
## Data Fetching with React Query
### Hook Organization
All `useMutation` and `useQuery` hooks should be grouped by domain/feature in a single location: `features/lib/feature-hooks.ts`. Never call data fetching hooks directly from component bodies.
**Benefits:**
- Easier refactoring and testing
- Simplified mocking for tests
- Cleaner components focused on UI logic
- Reduced clutter in `.tsx` files
#### ❌ Don't do
```tsx
// UserProfile.tsx
import { useMutation, useQuery } from '@tanstack/react-query';
import { updateUser, getUser } from '../api/users';

function UserProfile({ userId }) {
  const { data: user } = useQuery({
    queryKey: ['user', userId],
    queryFn: () => getUser(userId)
  });

  const updateUserMutation = useMutation({
    mutationFn: updateUser,
    onSuccess: () => {
      // refetch logic here
    }
  });

  return (
    <div>
      {/* UI logic */}
    </div>
  );
}
```
#### ✅ Do
```tsx
// features/users/lib/user-hooks.ts
import { useMutation, useQuery, useQueryClient } from '@tanstack/react-query';
import { updateUser, getUser } from '../api/users';
import { userKeys } from './user-keys';

export function useUser(userId: string) {
  return useQuery({
    queryKey: userKeys.detail(userId),
    queryFn: () => getUser(userId)
  });
}

export function useUpdateUser() {
  const queryClient = useQueryClient();
  return useMutation({
    mutationFn: updateUser,
    onSuccess: () => {
      queryClient.invalidateQueries({ queryKey: userKeys.all });
    }
  });
}

// UserProfile.tsx
import { useUser, useUpdateUser } from '../lib/user-hooks';

function UserProfile({ userId }) {
  const { data: user } = useUser(userId);
  const updateUserMutation = useUpdateUser();

  return (
    <div>
      {/* Clean UI logic only */}
    </div>
  );
}
```
### Query Keys Management
Query keys should be unique identifiers for specific queries. Avoid using boolean values, empty strings, or inconsistent patterns.
**Best Practice:** Group all query keys in one centralized location (inside the hooks file) for easy management and refactoring.
```tsx
// features/users/lib/user-hooks.ts
export const userKeys = {
  all: ['users'] as const,
  lists: () => [...userKeys.all, 'list'] as const,
  list: (filters: string) => [...userKeys.lists(), { filters }] as const,
  details: () => [...userKeys.all, 'detail'] as const,
  detail: (id: string) => [...userKeys.details(), id] as const,
  preferences: (id: string) => [...userKeys.detail(id), 'preferences'] as const,
};

// Usage examples:
// userKeys.all             // ['users']
// userKeys.list('active')  // ['users', 'list', { filters: 'active' }]
// userKeys.detail('123')   // ['users', 'detail', '123']
```
**Benefits:**
- Easy key renaming and refactoring
- Consistent key structure across the app
- Better query specificity control
- Centralized key management
### Refetch vs Query Invalidation
Prefer using `invalidateQueries` over passing `refetch` functions between components. This approach is more maintainable and easier to understand.
#### ❌ Don't do
```tsx
function UserList() {
  const { data: users, refetch } = useUsers();

  return (
    <div>
      <UserForm onSuccess={refetch} />
      <EditUserModal onSuccess={refetch} />
      {/* Passing refetch everywhere */}
    </div>
  );
}
```
#### ✅ Do
```tsx
// In your mutation hooks
export function useCreateUser() {
  const queryClient = useQueryClient();
  return useMutation({
    mutationFn: createUser,
    onSuccess: () => {
      queryClient.invalidateQueries({ queryKey: userKeys.lists() });
    }
  });
}

// Components don't need to handle refetching
function UserList() {
  const { data: users } = useUsers();

  return (
    <div>
      <UserForm /> {/* Handles its own invalidation */}
      <EditUserModal /> {/* Handles its own invalidation */}
    </div>
  );
}
```
## Dialog State Management
Use a centralized store or context to manage all dialog states in one place. This eliminates the need to pass local state between different components and provides global access to dialog controls.
### Implementation Example
```tsx
// stores/dialog-store.ts
import { create } from 'zustand';
import { immer } from 'zustand/middleware/immer';
interface DialogState {
createUser: boolean;
editUser: boolean;
deleteConfirmation: boolean;
// Add more dialogs as needed
}
interface DialogStore {
dialogs: DialogState;
setDialog: (dialog: keyof DialogState, isOpen: boolean) => void;
}
export const useDialogStore = create<DialogStore>()(
immer((set) => ({
dialogs: {
createUser: false,
editUser: false,
deleteConfirmation: false,
},
setDialog: (dialog, isOpen) =>
set((state) => {
state.dialogs[dialog] = isOpen;
}),
}))
);
// Usage in components
function UserManagement() {
const { dialogs, setDialog } = useDialogStore();
return (
<div>
<button onClick={() => setDialog('createUser', true)}>
Create User
</button>
<CreateUserDialog
open={dialogs.createUser}
onClose={() => setDialog('createUser', false)}
/>
<EditUserDialog
open={dialogs.editUser}
onClose={() => setDialog('editUser', false)}
/>
</div>
);
}
// Any component can control dialogs - no provider needed
function Sidebar() {
const setDialog = useDialogStore((state) => state.setDialog);
return (
    <button onClick={() => setDialog('createUser', true)}>
Quick Create User
</button>
);
}
// You can also use selectors for better performance
function UserDialog() {
const isOpen = useDialogStore((state) => state.dialogs.createUser);
const setDialog = useDialogStore((state) => state.setDialog);
return (
<CreateUserDialog
open={isOpen}
onClose={() => setDialog('createUser', false)}
/>
);
}
```
**Benefits:**
- Centralized dialog state management
- No prop drilling of dialog states
- Easy to open/close dialogs from anywhere in the app
- Consistent dialog behavior across the application
- Simplified component logic
---
title: "Cloud Infrastructure"
icon: "server"
---
<Warning>
These playbooks are private. Please ask your team for access.
</Warning>
Our infrastructure stack includes several key components to monitor, deploy, and manage our services effectively.
## Hosting Providers
We use two main hosting providers:
- **DigitalOcean**: Hosts our databases including Redis and PostgreSQL.
- **Hetzner**: Provides the machines that run our services.
## Observability: Logs & Telemetry
We collect logs and telemetry from all services using **HyperDX**.
## Kamal for Deployment
We use **Kamal** as a deployment tool to deploy our Docker containers to production with zero downtime.
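As an illustration only — not our production config — a minimal Kamal `config/deploy.yml` looks roughly like this; the service name, image, and host below are placeholders:

```yaml
# Illustrative only -- service name, image, and hosts are placeholders.
service: example-app
image: example-org/example-app

servers:
  - 203.0.113.10   # placeholder host

registry:
  username: example-org
  password:
    - KAMAL_REGISTRY_PASSWORD   # read from the environment, never hardcoded

env:
  clear:
    PORT: 3000
```

`kamal deploy` then builds and pushes the image and swaps containers with a health-checked handover, which is what gives us zero-downtime deploys.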
---
title: "Feature Announcement"
icon: "bullhorn"
---
When we develop new features, our marketing team handles the public announcements. As engineers, we need to clearly communicate:
1. The problem the feature solves
2. The benefit to our users
3. How it integrates with our product
### Handoff to Marketing Team
There is an integration between GitHub and Linear that automatically opens a ticket for the marketing team 5 minutes after an issue is closed.

Please make sure of the following:
- The GitHub pull request is linked to an issue.
- The pull request must have one of these labels: **"Pieces"**, **"Polishing"**, or **"Feature"**.
- If none of these labels are added, the PR will not be merged.
- You can also add any other relevant label.
- The GitHub issue must include the correct template (see "Ticket templates" below).
<Tip>
Bonus: Please include a video showing the marketing team on how to use this feature so they can create a demo video and market it correctly.
</Tip>
Ticket templates:
```
### What Problem Does This Feature Solve?
### Explain How the Feature Works
[Insert the video link here]
### Target Audience
Enterprise / Everyone
### Relevant User Scenarios
[Insert Pylon tickets or community posts here]
```
---
title: 'How to create Release'
icon: 'flask'
---
Pre-releases are versions of the software that are released before the final version. They are used to test new features and bug fixes before they are released to the public. Pre-releases are typically labeled with a version number that includes a pre-release identifier, such as `official` or `rc`.
## Types of Releases
There are several types of releases that can be used to indicate the stability of the software:
- **Official**: Official releases are considered to be stable and are close to the final release.
- **Release Candidate (RC)**: Release candidates are feature-complete versions that have been tested by a larger group of users. They are considered stable and are typically used for final testing before the final release.
## Why Use Pre-Releases
We publish pre-releases for hot-fixes, bug fixes, and small or beta features.
## How to Release a Pre-Release
To release a pre-release version of the software, follow these steps:
1. **Create a new branch**: Create a new branch from the `main` branch. The branch name should be `release/vX.Y.Z` where `X.Y.Z` is the version number.
2. **Increase the version number**: Update the `package.json` file with the new version number.
3. **Open a Pull Request**: Open a pull request from the new branch to the `main` branch. Assign the `pre-release` label to the pull request.
4. **Check the Changelog**: Check the [Activepieces Releases](https://github.com/activepieces/activepieces/releases) page to see if there are any new features or bug fixes that need to be included in the pre-release. Make sure all PRs are labeled correctly so they show in the correct auto-generated changelog. If not, assign the labels and rerun the changelog by removing the "pre-release" label and adding it again to the PR.
5. **Build the RC Image**: Go to https://github.com/activepieces/activepieces/actions/workflows/release-rc.yml and run the workflow on the release branch to build the RC image.
6. **Merge the Pull Request**: Merge the pull request to the `main` branch.
7. **Publish the Release Notes**: Publish the release notes for the new version.
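Steps 1–2 above can be sketched as shell commands. The version numbers are hypothetical, and the throwaway repo at the top only stands in for a real activepieces checkout:

```shell
# Demo scaffolding: a throwaway repo standing in for a real checkout.
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"
git config user.name "dev"
printf '{ "version": "0.54.0" }\n' > package.json
git add . && git commit -qm "init"

# Step 1: branch off main using the release/vX.Y.Z naming convention.
git checkout -qb release/v0.54.1

# Step 2: bump the version field in package.json
# (GNU sed shown here; `npm version 0.54.1 --no-git-tag-version` also works).
sed -i 's/"version": "0.54.0"/"version": "0.54.1"/' package.json
git commit -qam "chore(release): bump version to 0.54.1"

git branch --show-current   # -> release/v0.54.1
```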
---
title: "Run Enterprise Edition"
icon: "building"
---
The enterprise edition requires PostgreSQL and Redis instances to run, and a license key to activate.
<Steps>
<Step title="Run the dev container">
Follow the instructions [here](/build-pieces/misc/dev-container) to run the dev container.
</Step>
<Step title="Add the following env variables in `server/api/.env`">
Paste the following environment variables into `server/api/.env`:
```bash
## these variables are set to align with the .devcontainer/docker-compose.yml file
AP_DB_TYPE=POSTGRES
AP_DEV_PIECES="your_piece_name"
AP_ENVIRONMENT="dev"
AP_EDITION=ee
AP_EXECUTION_MODE=UNSANDBOXED
AP_FRONTEND_URL="http://localhost:4200"
AP_WEBHOOK_URL="http://localhost:3000"
AP_PIECES_SOURCE='FILE'
AP_PIECES_SYNC_MODE='NONE'
AP_LOG_LEVEL=debug
AP_LOG_PRETTY=true
AP_REDIS_HOST="redis"
AP_REDIS_PORT="6379"
AP_TRIGGER_DEFAULT_POLL_INTERVAL=1
AP_CACHE_PATH=/workspace/cache
AP_POSTGRES_DATABASE=activepieces
AP_POSTGRES_HOST=db
AP_POSTGRES_PORT=5432
AP_POSTGRES_USERNAME=postgres
AP_POSTGRES_PASSWORD=A79Vm5D4p2VQHOp2gd5
AP_ENCRYPTION_KEY=427a130d9ffab21dc07bcd549fcf0966
AP_JWT_SECRET=secret
```
</Step>
<Step title="Activate Your License Key">
After signing in, activate the license key by going to **Platform Admin -> Setup -> License Keys**
![Activation License Key](/resources/screenshots/activation-license-key-settings.png)
</Step>
</Steps>
---
title: "Setup Incident.io"
icon: 'bell-ring'
---
Incident.io is our primary tool for managing and responding to urgent issues and service disruptions.
This guide explains how we use Incident.io to coordinate our on-call rotations and emergency response procedures.
## Setup and Notifications
### Personal Setup
1. Download the Incident.io mobile app from your device's app store
2. Ask your team to add you to the Incident.io workspace
3. Configure your notification preferences:
- Phone calls for critical incidents
- Push notifications for high-priority issues
- Slack notifications for standard updates
### On-Call Rotations
Our team operates on a weekly rotation schedule through Incident.io, where every team member participates. When you're on-call:
- You'll receive priority notifications for all urgent issues
- Phone calls will be placed for critical service disruptions
- Rotations change every week, with handoffs occurring on Monday mornings
- Response is expected within 15 minutes for critical incidents
<Tip>
If you are unable to respond to an incident, please escalate to the engineering team.
</Tip>