Add Activepieces integration for workflow automation

- Add Activepieces fork with SmoothSchedule custom piece
- Create integrations app with Activepieces service layer
- Add embed token endpoint for iframe integration
- Create Automations page with embedded workflow builder
- Add sidebar visibility fix for embed mode
- Add list inactive customers endpoint to Public API
- Include SmoothSchedule triggers: event created/updated/cancelled
- Include SmoothSchedule actions: create/update/cancel events, list resources/services/customers

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
poduck
2025-12-18 22:59:37 -05:00
parent 9848268d34
commit 3aa7199503
16292 changed files with 1284892 additions and 4708 deletions


@@ -0,0 +1,98 @@
---
title: "Database Migrations"
description: "Guide for creating database migrations in Activepieces"
icon: "database"
---
Activepieces uses TypeORM as its ORM in Node.js. We support two database types across different editions of our platform.
Each database migration file contains both the forward migration (the `up` method) and the rollback (the `down` method).
<Tip>
Read more about TypeORM migrations here:
https://orkhan.gitbook.io/typeorm/docs/migrations
</Tip>
## Database Support
- PostgreSQL
- PGlite
<Tip>
**Why do we have PGlite?**
We support PGlite to simplify development and self-hosting. It's particularly helpful for:
- Developers creating pieces who want a quick setup
- Self-hosters using platforms that manage Docker images but don't support Docker Compose.
PGlite is a lightweight, embedded PostgreSQL build, so migrations written for PostgreSQL remain compatible.
</Tip>
## Editions
- **Enterprise & Cloud Edition** (Must use PostgreSQL)
- **Community Edition** (Can use PostgreSQL or PGlite)
### How To Generate
<Steps>
<Step title="Setup AP_DB_TYPE">
Set the `AP_DB_TYPE` environment variable to `POSTGRES` after making sure you have the latest database state by running Activepieces first.
</Step>
<Step title="Generate Migration">
Run the migration generation command:
```bash
nx db-migration server-api --name=<MIGRATION_NAME>
```
Replace `<MIGRATION_NAME>` with a descriptive name for your migration.
</Step>
<Step title="Review Migration File">
The command will generate a new migration file in `packages/server/api/src/app/database/migration/postgres/`.
Review the generated file and register it in `postgres-connection.ts` (see the sketch after these steps).
</Step>
</Steps>
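Registering the migration usually amounts to adding the generated class to the list of PostgreSQL migrations. The sketch below is illustrative only — the class name, import path, and array name are placeholders, not the actual contents of `postgres-connection.ts`:
```typescript
// postgres-connection.ts — illustrative sketch; the real file's structure may differ
import { AddWorkflowIndex1718000000000 } from './migration/postgres/1718000000000-AddWorkflowIndex'

// Hypothetical list of migration classes handed to the TypeORM data source options
const postgresMigrations = [
  // ...previously registered migrations
  AddWorkflowIndex1718000000000, // register the newly generated migration here
]
```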
## PGlite Compatibility
While PGlite is mostly PostgreSQL-compatible, some features are not supported. When using features like `CONCURRENTLY` for index operations, you need to conditionally handle PGlite:
```typescript
import { AppSystemProp } from '@activepieces/server-shared'
import { MigrationInterface, QueryRunner } from 'typeorm'
import { DatabaseType, system } from '../../../helper/system/system'

const databaseType = system.get(AppSystemProp.DB_TYPE)
const isPGlite = databaseType === DatabaseType.PGLITE

export class AddMyIndex1234567890 implements MigrationInterface {
  name = 'AddMyIndex1234567890'
  transaction = false // Required when using CONCURRENTLY

  public async up(queryRunner: QueryRunner): Promise<void> {
    if (isPGlite) {
      await queryRunner.query(`CREATE INDEX "idx_name" ON "table" ("column")`)
    } else {
      await queryRunner.query(`CREATE INDEX CONCURRENTLY "idx_name" ON "table" ("column")`)
    }
  }

  public async down(queryRunner: QueryRunner): Promise<void> {
    if (isPGlite) {
      await queryRunner.query(`DROP INDEX "idx_name"`)
    } else {
      await queryRunner.query(`DROP INDEX CONCURRENTLY "idx_name"`)
    }
  }
}
```
<Warning>
`CREATE INDEX CONCURRENTLY` and `DROP INDEX CONCURRENTLY` are not supported in PGlite because PGlite is a single-user, single-connection database. Always add a check for PGlite when using these operations.
</Warning>
<Tip>
Always test your migrations by running them both up and down to ensure they work as expected.
</Tip>


@@ -0,0 +1,440 @@
---
title: "E2E Tests"
icon: 'clipboard-list'
---
## Overview
Our e2e test suite uses Playwright to ensure critical user workflows function correctly across the application. The tests are organized using the Page Object Model pattern to maintain clean, reusable, and maintainable test code. This playbook outlines the structure, conventions, and best practices for writing e2e tests.
## Project Structure
```
packages/tests-e2e/
├── scenarios/ # Test files (*.spec.ts)
├── pages/ # Page Object Models
│ ├── base.ts # Base page class
│ ├── index.ts # Page exports
│ ├── authentication.page.ts
│ ├── builder.page.ts
│ ├── flows.page.ts
│ └── agent.page.ts
├── helper/ # Utilities and configuration
│ └── config.ts # Environment configuration
├── playwright.config.ts # Playwright configuration
└── project.json # Nx project configuration
```
This playbook provides a comprehensive guide for writing e2e tests following the established patterns in the codebase. It covers the Page Object Model structure, test organization, configuration management, and best practices for maintaining reliable e2e tests.
## Page Object Model Pattern
### Base Page Structure
All page objects extend the `BasePage` class and follow a consistent structure:
```typescript
export class YourPage extends BasePage {
  url = `${configUtils.getConfig().instanceUrl}/your-path`;

  getters = {
    // Locator functions that return page elements
    elementName: (page: Page) => page.getByRole('button', { name: 'Button Text' }),
  };

  actions = {
    // Action functions that perform user interactions
    performAction: async (page: Page, params: { param1: string }) => {
      // Implementation
    },
  };
}
```
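The `BasePage` referenced above lives in `pages/base.ts`; its exact contents aren't shown here, so the following is a hedged sketch of the shared base plus a hypothetical page object wired together. The class names, selectors, and the `navigate` action shape are illustrative assumptions, not code from the repo:
```typescript
// base.ts — illustrative sketch only; the real BasePage may differ
import { Page } from '@playwright/test';
import { configUtils } from '../helper/config';

export class BasePage {
  // Each page object overrides this with its own route
  url = '';
}

// example.page.ts (hypothetical) — shows how getters and actions compose,
// including the `somePage.actions.navigate(page)` pattern used in the tests below
export class ExamplePage extends BasePage {
  url = `${configUtils.getConfig().instanceUrl}/example`;

  getters = {
    heading: (page: Page) => page.getByRole('heading', { name: 'Example' }),
  };

  actions = {
    navigate: async (page: Page) => {
      await page.goto(this.url);
      await this.getters.heading(page).waitFor({ state: 'visible' });
    },
  };
}
```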
### Page Object Guidelines
#### ❌ Don't do
```typescript
// Direct element selection in test files
test('should create flow', async ({ page }) => {
  await page.getByRole('button', { name: 'Create Flow' }).click();
  await page.getByText('From scratch').click();
  // Test logic mixed with element selection
});
```
#### ✅ Do
```typescript
// flows.page.ts
export class FlowsPage extends BasePage {
  getters = {
    createFlowButton: (page: Page) => page.getByRole('button', { name: 'Create Flow' }),
    fromScratchButton: (page: Page) => page.getByText('From scratch'),
  };

  actions = {
    newFlowFromScratch: async (page: Page) => {
      await this.getters.createFlowButton(page).click();
      await this.getters.fromScratchButton(page).click();
    },
  };
}

// integration.spec.ts
test('should create flow', async ({ page }) => {
  await flowsPage.actions.newFlowFromScratch(page);
  // Clean test logic focused on behavior
});
```
## Test Organization
### Test File Structure
Test files should be organized by feature or workflow:
```typescript
import { test, expect } from '@playwright/test';
import {
  AuthenticationPage,
  FlowsPage,
  BuilderPage
} from '../pages';
import { configUtils } from '../helper/config';

test.describe('Feature Name', () => {
  let authenticationPage: AuthenticationPage;
  let flowsPage: FlowsPage;
  let builderPage: BuilderPage;

  test.beforeEach(async () => {
    // Initialize page objects
    authenticationPage = new AuthenticationPage();
    flowsPage = new FlowsPage();
    builderPage = new BuilderPage();
  });

  test('should perform specific workflow', async ({ page }) => {
    // Test implementation
  });
});
```
### Test Naming Conventions
- Use descriptive test names that explain the expected behavior
- Follow the pattern: `should [action] [expected result]`
- Include context when relevant
```typescript
// Good test names
test('should send Slack message via flow', async ({ page }) => {});
test('should handle webhook with dynamic parameters', async ({ page }) => {});
test('should authenticate user with valid credentials', async ({ page }) => {});
// Avoid vague names
test('should work', async ({ page }) => {});
test('test flow', async ({ page }) => {});
```
## Configuration Management
### Environment Configuration
Use the centralized config utility to handle different environments:
```typescript
// helper/config.ts
export const configUtils = {
  getConfig: (): Config => {
    return process.env.E2E_INSTANCE_URL ? prodConfig : localConfig;
  },
};

// Usage in pages
export class AuthenticationPage extends BasePage {
  url = `${configUtils.getConfig().instanceUrl}/sign-in`;
}
```
### Environment Variables
Required environment variables for CI/CD (see the config sketch after this list):
- `E2E_INSTANCE_URL`: Target application URL
- `E2E_EMAIL`: Test user email
- `E2E_PASSWORD`: Test user password
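These variables feed the `configUtils.getConfig()` call shown above. The snippet below is a hedged sketch of how `helper/config.ts` might consume them — the `Config` type and the local defaults are assumptions, not the actual file:
```typescript
// helper/config.ts — illustrative sketch; the real file may differ
export type Config = {
  instanceUrl: string;
  email: string;
  password: string;
};

// Hypothetical defaults for running against a local dev instance
const localConfig: Config = {
  instanceUrl: 'http://localhost:4200',
  email: 'dev@example.com',
  password: 'dev-password',
};

// CI/CD configuration driven by the required environment variables
const prodConfig: Config = {
  instanceUrl: process.env.E2E_INSTANCE_URL ?? '',
  email: process.env.E2E_EMAIL ?? '',
  password: process.env.E2E_PASSWORD ?? '',
};

export const configUtils = {
  getConfig: (): Config => (process.env.E2E_INSTANCE_URL ? prodConfig : localConfig),
};
```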
## Writing Effective Tests
### Test Structure
Follow this pattern for comprehensive tests:
```typescript
test('should complete user workflow', async ({ page }) => {
  // 1. Set up test data and timeouts
  test.setTimeout(120000);
  const config = configUtils.getConfig();

  // 2. Authentication (if required)
  await authenticationPage.actions.signIn(page, {
    email: config.email,
    password: config.password
  });

  // 3. Navigate to relevant page
  await flowsPage.actions.navigate(page);

  // 4. Clean up existing data (if needed)
  await flowsPage.actions.cleanupExistingFlows(page);

  // 5. Perform the main workflow
  await flowsPage.actions.newFlowFromScratch(page);
  await builderPage.actions.waitFor(page);
  await builderPage.actions.selectInitialTrigger(page, {
    piece: 'Schedule',
    trigger: 'Every Hour'
  });

  // 6. Add assertions and validations
  await builderPage.actions.testFlowAndWaitForSuccess(page);

  // 7. Clean up (if needed)
  await builderPage.actions.exitRun(page);
});
```
### Wait Strategies
Use appropriate wait strategies instead of fixed timeouts:
```typescript
// Good - Wait for specific conditions
await page.waitForURL('**/flows/**');
await page.waitForSelector('.react-flow__nodes', { state: 'visible' });
await page.waitForFunction(() => {
  const element = document.querySelector('.target-element');
  return element && element.textContent?.includes('Expected Text');
}, { timeout: 10000 });

// Avoid - Fixed timeouts
await page.waitForTimeout(5000);
```
### Error Handling
Implement proper error handling and cleanup:
```typescript
test('should handle errors gracefully', async ({ page }) => {
  try {
    await flowsPage.actions.navigate(page);
    // Test logic
  } catch (error) {
    // Log error details
    console.error('Test failed:', error);
    // Take screenshot for debugging
    await page.screenshot({ path: 'error-screenshot.png' });
    throw error;
  } finally {
    // Clean up resources
    await flowsPage.actions.cleanupExistingFlows(page);
  }
});
```
## Best Practices
### Element Selection
Prefer semantic selectors over CSS selectors:
```typescript
// Good - Semantic selectors
getters = {
  createButton: (page: Page) => page.getByRole('button', { name: 'Create Flow' }),
  emailField: (page: Page) => page.getByPlaceholder('email@example.com'),
  searchInput: (page: Page) => page.getByRole('textbox', { name: 'Search' }),
};

// Avoid - Fragile CSS selectors
getters = {
  createButton: (page: Page) => page.locator('button.btn-primary'),
  emailField: (page: Page) => page.locator('input[type="email"]'),
};
```
### Test Data Management
Use dynamic test data to avoid conflicts:
```typescript
// Good - Dynamic test data
const runVersion = Math.floor(Math.random() * 100000);
const uniqueFlowName = `Test Flow ${Date.now()}`;
// Avoid - Static test data
const flowName = 'Test Flow';
```
### Assertions
Use meaningful assertions that verify business logic:
```typescript
// Good - Business logic assertions
await builderPage.actions.testFlowAndWaitForSuccess(page);
const response = await apiRequest.get(urlWithParams);
const body = await response.json();
expect(body.targetRunVersion).toBe(runVersion.toString());
// Avoid - Implementation details
expect(await page.locator('.success-message').isVisible()).toBe(true);
```
## Running Tests
### Local Development & Debugging with Checkly
We use [Checkly](https://checklyhq.com/) to run and debug E2E tests. Checkly provides video recordings for each test run, making it easy to debug failures.
```bash
# Run tests with Checkly (includes video reporting)
npx nx run tests-e2e:test-checkly
```
- Test results, including video recordings, are available in the Checkly dashboard.
- You can debug failed tests by reviewing the video and logs provided by Checkly.
### Deploying Tests
Manual deployment is rarely needed, but you can trigger it with:
```bash
npx nx run tests-e2e:deploy-checkly
```
<Info>
Tests are deployed to Checkly automatically after successful test runs in the CI pipeline.
</Info>
## Debugging Tests
### 1. Checkly Videos and Reports
When running tests with Checkly, each test execution is recorded and detailed reports are generated. This is the fastest way to debug failures:
- **Video recordings**: Watch the exact browser session for any test run.
- **Step-by-step logs**: Review detailed logs and screenshots for each test step.
- **Access**: Open the Checkly dashboard and navigate to the relevant test run to view videos and reports.
### 2. VSCode Extension
For the best local debugging experience, install the **Playwright Test for VSCode** extension:
1. Open VSCode Extensions (Ctrl+Shift+X)
2. Search for "Playwright Test for VSCode"
3. Install the extension by Microsoft
**Benefits:**
- Debug tests directly in VSCode with breakpoints
- Step-through test execution
- View test results and traces in the Test Explorer
- Auto-completion for Playwright APIs
- Integrated test runner
### 3. Debugging Tips
1. **Use Checkly dashboard**: Review videos and logs for failed tests.
2. **Use VSCode Extension**: Set breakpoints directly in your test files.
3. **Step Through**: Use F10 (step over) and F11 (step into) in debug mode.
4. **Inspect Elements**: Use `await page.pause()` to pause execution and inspect the page.
5. **Console Logs**: Add `console.log()` statements to track execution flow.
6. **Manual Screenshots**: Take screenshots at critical points for visual debugging.
```typescript
test('should debug workflow', async ({ page }) => {
  await page.goto('/flows');

  // Pause execution for manual inspection
  await page.pause();

  // Take screenshot for debugging
  await page.screenshot({ path: 'debug-screenshot.png' });

  // Continue with test logic
  await flowsPage.actions.newFlowFromScratch(page);
});
```
## Common Patterns
### Authentication Flow
```typescript
test('should authenticate user', async ({ page }) => {
  const config = configUtils.getConfig();
  await authenticationPage.actions.signIn(page, {
    email: config.email,
    password: config.password
  });
  await agentPage.actions.waitFor(page);
});
```
### Flow Creation and Testing
```typescript
test('should create and test flow', async ({ page }) => {
  await flowsPage.actions.navigate(page);
  await flowsPage.actions.cleanupExistingFlows(page);
  await flowsPage.actions.newFlowFromScratch(page);
  await builderPage.actions.waitFor(page);
  await builderPage.actions.selectInitialTrigger(page, {
    piece: 'Schedule',
    trigger: 'Every Hour'
  });
  await builderPage.actions.testFlowAndWaitForSuccess(page);
});
```
### API Integration Testing
```typescript
test('should handle webhook integration', async ({ page }) => {
  const apiRequest = page.context().request; // context() is synchronous; no await needed
  const response = await apiRequest.get(urlWithParams);
  const body = await response.json();
  expect(body.targetRunVersion).toBe(expectedValue);
});
```
## Maintenance Guidelines
### Updating Selectors
When UI changes occur:
1. Update page object getters with new selectors
2. Test the changes locally
3. Update related tests if necessary
4. Ensure all tests pass before merging
### Adding New Tests
1. Create or update relevant page objects
2. Write test scenarios in appropriate spec files
3. Follow the established patterns and conventions
4. Add proper error handling and cleanup
5. Test locally before submitting
### Performance Considerations
- Keep tests focused and avoid unnecessary steps
- Use appropriate timeouts (not too short, not too long); see the config sketch after this list
- Clean up test data to avoid conflicts
- Group related tests in the same describe block
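Per-test and per-assertion timeouts are usually centralized in `playwright.config.ts` rather than scattered through tests. The repo's real configuration may differ; this is a minimal sketch of where such values typically live (the specific numbers and retry policy are assumptions):
```typescript
// playwright.config.ts — illustrative sketch only; the real configuration may differ
import { defineConfig } from '@playwright/test';

export default defineConfig({
  testDir: './scenarios',
  // Per-test timeout: generous enough for flow runs, not so long that hangs go unnoticed
  timeout: 120_000,
  expect: {
    // Timeout for individual expect() assertions
    timeout: 10_000,
  },
  // Retry flaky tests once in CI only (assumed policy)
  retries: process.env.CI ? 1 : 0,
  use: {
    // Capture traces and screenshots on failure for debugging
    trace: 'retain-on-failure',
    screenshot: 'only-on-failure',
  },
});
```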


@@ -0,0 +1,253 @@
---
title: "Frontend Best Practices"
icon: 'lightbulb'
---
## Overview
Our frontend codebase is large and constantly growing, with multiple developers contributing to it. Establishing consistent rules across key areas like data fetching and state management will make the code easier to follow, refactor, and test. It will also help newcomers understand existing patterns and adopt them quickly.
## Data Fetching with React Query
### Hook Organization
All `useMutation` and `useQuery` hooks should be grouped by domain/feature in a single location: `features/lib/feature-hooks.ts`. Never call data fetching hooks directly from component bodies.
**Benefits:**
- Easier refactoring and testing
- Simplified mocking for tests
- Cleaner components focused on UI logic
- Reduced clutter in `.tsx` files
#### ❌ Don't do
```tsx
// UserProfile.tsx
import { useMutation, useQuery } from '@tanstack/react-query';
import { updateUser, getUser } from '../api/users';

function UserProfile({ userId }) {
  const { data: user } = useQuery({
    queryKey: ['user', userId],
    queryFn: () => getUser(userId)
  });

  const updateUserMutation = useMutation({
    mutationFn: updateUser,
    onSuccess: () => {
      // refetch logic here
    }
  });

  return (
    <div>
      {/* UI logic */}
    </div>
  );
}
```
#### ✅ Do
```tsx
// features/users/lib/user-hooks.ts
import { useMutation, useQuery, useQueryClient } from '@tanstack/react-query';
import { updateUser, getUser } from '../api/users';
import { userKeys } from './user-keys';

export function useUser(userId: string) {
  return useQuery({
    queryKey: userKeys.detail(userId),
    queryFn: () => getUser(userId)
  });
}

export function useUpdateUser() {
  const queryClient = useQueryClient();
  return useMutation({
    mutationFn: updateUser,
    onSuccess: () => {
      queryClient.invalidateQueries({ queryKey: userKeys.all });
    }
  });
}

// UserProfile.tsx
import { useUser, useUpdateUser } from '../lib/user-hooks';

function UserProfile({ userId }) {
  const { data: user } = useUser(userId);
  const updateUserMutation = useUpdateUser();

  return (
    <div>
      {/* Clean UI logic only */}
    </div>
  );
}
```
### Query Keys Management
Query keys should be unique identifiers for specific queries. Avoid using boolean values, empty strings, or inconsistent patterns.
**Best Practice:** Group all query keys in one centralized location (inside the hooks file) for easy management and refactoring.
```tsx
// features/users/lib/user-hooks.ts
export const userKeys = {
  all: ['users'] as const,
  lists: () => [...userKeys.all, 'list'] as const,
  list: (filters: string) => [...userKeys.lists(), { filters }] as const,
  details: () => [...userKeys.all, 'detail'] as const,
  detail: (id: string) => [...userKeys.details(), id] as const,
  preferences: (id: string) => [...userKeys.detail(id), 'preferences'] as const,
};

// Usage examples:
// userKeys.all            // ['users']
// userKeys.list('active') // ['users', 'list', { filters: 'active' }]
// userKeys.detail('123')  // ['users', 'detail', '123']
```
**Benefits:**
- Easy key renaming and refactoring
- Consistent key structure across the app
- Better query specificity control
- Centralized key management
### Refetch vs Query Invalidation
Prefer using `invalidateQueries` over passing `refetch` functions between components. This approach is more maintainable and easier to understand.
#### ❌ Don't do
```tsx
function UserList() {
  const { data: users, refetch } = useUsers();

  return (
    <div>
      <UserForm onSuccess={refetch} />
      <EditUserModal onSuccess={refetch} />
      {/* Passing refetch everywhere */}
    </div>
  );
}
```
#### ✅ Do
```tsx
// In your mutation hooks
export function useCreateUser() {
  const queryClient = useQueryClient();
  return useMutation({
    mutationFn: createUser,
    onSuccess: () => {
      queryClient.invalidateQueries({ queryKey: userKeys.lists() });
    }
  });
}

// Components don't need to handle refetching
function UserList() {
  const { data: users } = useUsers();

  return (
    <div>
      <UserForm /> {/* Handles its own invalidation */}
      <EditUserModal /> {/* Handles its own invalidation */}
    </div>
  );
}
```
## Dialog State Management
Use a centralized store or context to manage all dialog states in one place. This eliminates the need to pass local state between different components and provides global access to dialog controls.
### Implementation Example
```tsx
// stores/dialog-store.ts
import { create } from 'zustand';
import { immer } from 'zustand/middleware/immer';

interface DialogState {
  createUser: boolean;
  editUser: boolean;
  deleteConfirmation: boolean;
  // Add more dialogs as needed
}

interface DialogStore {
  dialogs: DialogState;
  setDialog: (dialog: keyof DialogState, isOpen: boolean) => void;
}

export const useDialogStore = create<DialogStore>()(
  immer((set) => ({
    dialogs: {
      createUser: false,
      editUser: false,
      deleteConfirmation: false,
    },
    setDialog: (dialog, isOpen) =>
      set((state) => {
        state.dialogs[dialog] = isOpen;
      }),
  }))
);

// Usage in components
function UserManagement() {
  const { dialogs, setDialog } = useDialogStore();

  return (
    <div>
      <button onClick={() => setDialog('createUser', true)}>
        Create User
      </button>

      <CreateUserDialog
        open={dialogs.createUser}
        onClose={() => setDialog('createUser', false)}
      />

      <EditUserDialog
        open={dialogs.editUser}
        onClose={() => setDialog('editUser', false)}
      />
    </div>
  );
}

// Any component can control dialogs - no provider needed
function Sidebar() {
  const setDialog = useDialogStore((state) => state.setDialog);

  return (
    <button onClick={() => setDialog('createUser', true)}>
      Quick Create User
    </button>
  );
}

// You can also use selectors for better performance
function UserDialog() {
  const isOpen = useDialogStore((state) => state.dialogs.createUser);
  const setDialog = useDialogStore((state) => state.setDialog);

  return (
    <CreateUserDialog
      open={isOpen}
      onClose={() => setDialog('createUser', false)}
    />
  );
}
```
**Benefits:**
- Centralized dialog state management
- No prop drilling of dialog states
- Easy to open/close dialogs from anywhere in the app
- Consistent dialog behavior across the application
- Simplified component logic


@@ -0,0 +1,25 @@
---
title: "Cloud Infrastructure"
icon: "server"
---
<Warning>
The playbooks are private. Please ask your team for access.
</Warning>
Our infrastructure stack includes several key components to monitor, deploy, and manage our services effectively.
## Hosting Providers
We use two main hosting providers:
- **DigitalOcean**: Hosts our databases including Redis and PostgreSQL.
- **Hetzner**: Provides the machines that run our services.
## Observability: Logs & Telemetry
We collect logs and telemetry from all services using **HyperDX**.
## Kamal for Deployment
We use **Kamal** as a deployment tool to deploy our Docker containers to production with zero downtime.


@@ -0,0 +1,41 @@
---
title: "Feature Announcement"
icon: "bullhorn"
---
When we develop new features, our marketing team handles the public announcements. As engineers, we need to clearly communicate:
1. The problem the feature solves
2. The benefit to our users
3. How it integrates with our product
### Handoff to Marketing Team
There is an integration between GitHub and Linear that automatically opens a ticket for the marketing team five minutes after an issue is closed.

Please make sure of the following:
- The GitHub pull request is linked to an issue.
- The pull request must have one of these labels: **"Pieces"**, **"Polishing"**, or **"Feature"**.
- If none of these labels are added, the PR will not be merged.
- You can also add any other relevant label.
- The GitHub issue must include the correct template (see "Ticket templates" below).
<Tip>
Bonus: Please include a video showing the marketing team how to use the feature so they can create a demo video and market it correctly.
</Tip>
Ticket templates:
```
### What Problem Does This Feature Solve?
### Explain How the Feature Works
[Insert the video link here]
### Target Audience
Enterprise / Everyone
### Relevant User Scenarios
[Insert Pylon tickets or community posts here]
```


@@ -0,0 +1,29 @@
---
title: 'How to create Release'
icon: 'flask'
---
Pre-releases are versions of the software that are released before the final version. They are used to test new features and bug fixes before they are released to the public. Pre-releases are typically labeled with a version number that includes a pre-release identifier, such as `official` or `rc`.
## Types of Releases
There are several types of releases that can be used to indicate the stability of the software:
- **Official**: Official releases are considered to be stable and are close to the final release.
- **Release Candidate (RC)**: Release candidates are versions of the software that are feature-complete and have been tested by a larger group of users. They are considered to be stable and are close to the final release. They are typically used for final testing before the final release.
## Why Use Pre-Releases
We create a pre-release when we ship hot fixes, bug fixes, or small and beta features.
## How to Release a Pre-Release
To release a pre-release version of the software, follow these steps:
1. **Create a new branch**: Create a new branch from the `main` branch. The branch name should be `release/vX.Y.Z` where `X.Y.Z` is the version number.
2. **Increase the version number**: Update the `package.json` file with the new version number.
3. **Open a Pull Request**: Open a pull request from the new branch to the `main` branch. Assign the `pre-release` label to the pull request.
4. **Check the Changelog**: Check the [Activepieces Releases](https://github.com/activepieces/activepieces/releases) page to see if there are any new features or bug fixes that need to be included in the pre-release. Make sure all PRs are labeled correctly so they appear in the auto-generated changelog; if not, assign the correct labels and regenerate the changelog by removing the "pre-release" label from the PR and adding it again.
5. **Build the RC Image**: Go to https://github.com/activepieces/activepieces/actions/workflows/release-rc.yml and run the workflow on the release branch to build the RC image.
6. **Merge the Pull Request**: Merge the pull request to the `main` branch.
7. **Publish the Release Notes**: Publish the release notes for the new version.


@@ -0,0 +1,50 @@
---
title: "Run Enterprise Edition"
icon: "building"
---
The enterprise edition requires PostgreSQL and Redis instances to run, and a license key to activate.
<Steps>
<Step title="Run the dev container">
Follow the instructions [here](/build-pieces/misc/dev-container) to run the dev container.
</Step>
<Step title="Add the following env variables in `server/api/.env`">
Paste the following environment variables into `server/api/.env`:
```bash
## these variables are set to align with the .devcontainer/docker-compose.yml file
AP_DB_TYPE=POSTGRES
AP_DEV_PIECES="your_piece_name"
AP_ENVIRONMENT="dev"
AP_EDITION=ee
AP_EXECUTION_MODE=UNSANDBOXED
AP_FRONTEND_URL="http://localhost:4200"
AP_WEBHOOK_URL="http://localhost:3000"
AP_PIECES_SOURCE='FILE'
AP_PIECES_SYNC_MODE='NONE'
AP_LOG_LEVEL=debug
AP_LOG_PRETTY=true
AP_REDIS_HOST="redis"
AP_REDIS_PORT="6379"
AP_TRIGGER_DEFAULT_POLL_INTERVAL=1
AP_CACHE_PATH=/workspace/cache
AP_POSTGRES_DATABASE=activepieces
AP_POSTGRES_HOST=db
AP_POSTGRES_PORT=5432
AP_POSTGRES_USERNAME=postgres
AP_POSTGRES_PASSWORD=A79Vm5D4p2VQHOp2gd5
AP_ENCRYPTION_KEY=427a130d9ffab21dc07bcd549fcf0966
AP_JWT_SECRET=secret
```
</Step>
<Step title="Activate Your License Key">
After signing in, activate the license key by going to **Platform Admin -> Setup -> License Keys**
![Activation License Key](/resources/screenshots/activation-license-key-settings.png)
</Step>
</Steps>


@@ -0,0 +1,30 @@
---
title: "Setup Incident.io"
icon: 'bell-ring'
---
Incident.io is our primary tool for managing and responding to urgent issues and service disruptions.
This guide explains how we use Incident.io to coordinate our on-call rotations and emergency response procedures.
## Setup and Notifications
### Personal Setup
1. Download the Incident.io mobile app from your device's app store
2. Ask your team to add you to the Incident.io workspace
3. Configure your notification preferences:
- Phone calls for critical incidents
- Push notifications for high-priority issues
- Slack notifications for standard updates
### On-Call Rotations
Our team operates on a weekly rotation schedule through Incident.io, where every team member participates. When you're on-call:
- You'll receive priority notifications for all urgent issues
- Phone calls will be placed for critical service disruptions
- Rotations change every week, with handoffs occurring on Monday mornings
- Response is expected within 15 minutes for critical incidents
<Tip>
If you are unable to respond to an incident, please escalate to the engineering team.
</Tip>