Improve deployment process and add login redirect logic

Deployment improvements:
- Add template env files (.envs.example/) for documentation
- Create init-production.sh for one-time server setup
- Create build-activepieces.sh for building/deploying the AP image
- Update deploy.sh with a --deploy-ap flag
- Make custom-pieces-metadata.sql idempotent
- Update DEPLOYMENT.md with comprehensive instructions

Frontend:
- Redirect logged-in business owners from the root domain to their tenant dashboard
- Redirect logged-in users from /login to /dashboard on their tenant
- Log out customers on the wrong subdomain instead of redirecting

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
DEPLOYMENT.md (712 lines changed)

@@ -1,322 +1,381 @@
# SmoothSchedule Production Deployment Guide

This guide covers deploying SmoothSchedule to a production server.

## Table of Contents

1. [Prerequisites](#prerequisites)
2. [Quick Reference](#quick-reference)
3. [Initial Server Setup](#initial-server-setup-first-time-only)
4. [Regular Deployments](#regular-deployments)
5. [Activepieces Updates](#activepieces-updates)
6. [Troubleshooting](#troubleshooting)
7. [Maintenance](#maintenance)

## Prerequisites

### Server Requirements

- Ubuntu 20.04+ or Debian 11+
- 4GB RAM minimum (2GB works, but cannot build the Activepieces image)
- 40GB disk space
- Docker and Docker Compose v2 installed
- Domain with wildcard DNS configured

### Local Requirements (for deployment)

- Git access to the repository
- SSH access to the production server
- Docker (for building the Activepieces image)

### Required Accounts/Services

- DigitalOcean Spaces (for static/media files)
- Stripe (for payments)
- Twilio (for SMS/phone features)
- OpenAI API (optional, for the Activepieces AI copilot)
## Quick Reference

```bash
# Regular deployment (after initial setup)
./deploy.sh

# Deploy with Activepieces image rebuild
./deploy.sh --deploy-ap

# Deploy specific services only
./deploy.sh django nginx

# Skip migrations (config changes only)
./deploy.sh --no-migrate
```
## Initial Server Setup (First Time Only)

### 1. Server Preparation

```bash
# SSH into the production server
ssh your-user@your-server

# Install Docker (if not already installed)
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER

# Install the Docker Compose v2 plugin (the get.docker.com script
# usually installs it already; skip if `docker compose version` works)
sudo mkdir -p /usr/local/lib/docker/cli-plugins
sudo curl -L "https://github.com/docker/compose/releases/latest/download/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/lib/docker/cli-plugins/docker-compose
sudo chmod +x /usr/local/lib/docker/cli-plugins/docker-compose

# Log out and back in for the group change to take effect
exit
ssh your-user@your-server
```
### 2. Clone Repository

```bash
git clone https://your-repo-url ~/smoothschedule
cd ~/smoothschedule/smoothschedule
```
### 3. Create Environment Files

Copy the template files and fill in your values:

```bash
mkdir -p .envs/.production
cp .envs.example/.django .envs/.production/.django
cp .envs.example/.postgres .envs/.production/.postgres
cp .envs.example/.activepieces .envs/.production/.activepieces
```

Edit each file with your production values:

```bash
nano .envs/.production/.django
nano .envs/.production/.postgres
nano .envs/.production/.activepieces
```

**Key values to configure:**

| File | Variable | Description |
|------|----------|-------------|
| `.django` | `DJANGO_SECRET_KEY` | Generate: `openssl rand -hex 32` |
| `.django` | `DJANGO_ALLOWED_HOSTS` | `.yourdomain.com` |
| `.django` | `STRIPE_*` | Your Stripe keys (live keys for production) |
| `.django` | `TWILIO_*` | Your Twilio credentials |
| `.django` | `AWS_*` | DigitalOcean Spaces credentials |
| `.postgres` | `POSTGRES_USER` | Generate a random username |
| `.postgres` | `POSTGRES_PASSWORD` | Generate: `openssl rand -hex 32` |
| `.activepieces` | `AP_JWT_SECRET` | Generate: `openssl rand -hex 32` |
| `.activepieces` | `AP_ENCRYPTION_KEY` | Generate: `openssl rand -hex 16` |
| `.activepieces` | `AP_POSTGRES_USERNAME` | Generate a random username |
| `.activepieces` | `AP_POSTGRES_PASSWORD` | Generate: `openssl rand -hex 32` |

**Important:** `AP_JWT_SECRET` must be copied into `.django` as well!
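
A quick way to catch a mismatch before starting services — this is a hypothetical check (the `check_jwt_secret` helper is not part of the repo) and assumes the `KEY=value` format used by these env files:

```bash
# Compare AP_JWT_SECRET across two env files; prints "match" or "MISMATCH".
check_jwt_secret() {
  ap=$(grep '^AP_JWT_SECRET=' "$1" | cut -d= -f2-)
  dj=$(grep '^AP_JWT_SECRET=' "$2" | cut -d= -f2-)
  if [ -n "$ap" ] && [ "$ap" = "$dj" ]; then echo match; else echo MISMATCH; fi
}

# usage:
# check_jwt_secret .envs/.production/.activepieces .envs/.production/.django
```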

### 4. DNS Configuration

Configure these DNS records at your domain registrar:

```
Type    Name               Value           TTL
A       yourdomain.com     YOUR_SERVER_IP  300
A       *.yourdomain.com   YOUR_SERVER_IP  300
CNAME   www                yourdomain.com  300
```

### 5. Build Activepieces Image (on your local machine)

The production server typically cannot build this image (it requires 4GB+ RAM):

```bash
# On your LOCAL machine, not the server
cd ~/smoothschedule
./scripts/build-activepieces.sh deploy
```

Or manually:

```bash
cd activepieces-fork
docker build -t smoothschedule_production_activepieces .
docker save smoothschedule_production_activepieces | gzip > /tmp/ap.tar.gz
scp /tmp/ap.tar.gz your-user@your-server:/tmp/
ssh your-user@your-server 'gunzip -c /tmp/ap.tar.gz | docker load'
```

### 6. Run Initialization Script

```bash
# On the server
cd ~/smoothschedule/smoothschedule
chmod +x scripts/init-production.sh
./scripts/init-production.sh
```

This script will:

1. Verify environment files
2. Generate any missing security keys
3. Start PostgreSQL and Redis
4. Create the Activepieces database
5. Start all services
6. Run Django migrations
7. Guide you through Activepieces platform setup
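
The steps above can be previewed as a dry-run sketch. This is an assumed outline, not the real logic — consult `scripts/init-production.sh` for what actually runs:

```bash
# Sketch only: prints the order of operations performed by init-production.sh.
COMPOSE="docker compose -f docker-compose.production.yml"

init_steps() {
  cat <<EOF
1. verify .envs/.production/ files exist
2. generate missing secrets (openssl rand -hex 32)
3. $COMPOSE up -d postgres redis
4. create the activepieces database
5. $COMPOSE up -d
6. $COMPOSE exec django python manage.py migrate_schemas --shared
7. open https://automations.yourdomain.com to finish platform setup
EOF
}

init_steps
```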

### 7. Complete Activepieces Platform Setup

After the init script completes:

1. Visit `https://automations.yourdomain.com`
2. Create an admin account (this creates the platform)
3. Get the platform ID:
   ```bash
   docker compose -f docker-compose.production.yml exec postgres \
       psql -U <ap_db_user> -d activepieces -c "SELECT id FROM platform"
   ```
4. Update `AP_PLATFORM_ID` in both:
   - `.envs/.production/.activepieces`
   - `.envs/.production/.django`
5. Restart services:
   ```bash
   docker compose -f docker-compose.production.yml restart
   ```
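
Updating both files by hand is error-prone; a hypothetical helper (not part of the repo) that assumes each file already contains an `AP_PLATFORM_ID=` line:

```bash
# Write the platform id into every env file given; sed replaces the existing line.
set_platform_id() {
  pid="$1"; shift
  for f in "$@"; do
    sed -i "s|^AP_PLATFORM_ID=.*|AP_PLATFORM_ID=$pid|" "$f"
  done
}

# usage:
# set_platform_id <id-from-the-psql-query-above> \
#     .envs/.production/.activepieces .envs/.production/.django
```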

### 8. Create First Tenant

```bash
# Access the Django shell
docker compose -f docker-compose.production.yml exec django python manage.py shell
```

In the shell, create your first business tenant:

```python
from django.contrib.auth import get_user_model
from smoothschedule.identity.core.models import Tenant, Domain

User = get_user_model()

# Create the tenant
tenant = Tenant.objects.create(
    name="Demo Business",
    schema_name="demo"
)

# Create its domain
Domain.objects.create(
    tenant=tenant,
    domain="demo.yourdomain.com",
    is_primary=True
)

exit()
```

### 9. Provision Activepieces Connection

```bash
docker compose -f docker-compose.production.yml exec django \
    python manage.py provision_ap_connections --tenant demo
```

### 10. Verify Deployment

```bash
# Check that all containers are running
docker compose -f docker-compose.production.yml ps

# Should show: django, postgres, redis, traefik, activepieces,
# celeryworker, celerybeat, and flower all running

# Test endpoints
curl https://yourdomain.com/api/
curl https://platform.yourdomain.com/
curl https://automations.yourdomain.com/api/v1/health
```

## Regular Deployments

After initial setup, deployments are simple:

```bash
# From your local machine
cd ~/smoothschedule

# Commit and push your changes
git add .
git commit -m "Your changes"
git push

# Deploy
./deploy.sh
```

### Deployment Options

| Command | Description |
|---------|-------------|
| `./deploy.sh` | Full deployment with migrations |
| `./deploy.sh --no-migrate` | Deploy without running migrations |
| `./deploy.sh --deploy-ap` | Rebuild and deploy the Activepieces image |
| `./deploy.sh django` | Rebuild only the Django container |
| `./deploy.sh nginx traefik` | Rebuild specific services |

### What the Deploy Script Does

1. Checks for uncommitted changes
2. Verifies changes are pushed to the remote
3. (If `--deploy-ap`) Builds and transfers the Activepieces image
4. SSHs to the server and pulls the latest code
5. Backs up and restores the `.envs` directory
6. Builds Docker images
7. Starts containers
8. Sets up the Activepieces database (if needed)
9. Runs Django migrations (unless `--no-migrate`)
10. Seeds platform plugins for all tenants
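
That flow can be sketched as follows. This is a hypothetical outline, not the real script — the SSH host alias `prod` and the function names are assumptions; `deploy.sh` holds the actual logic:

```bash
# Sketch of deploy.sh's argument handling and overall flow.
parse_args() {
  MIGRATE=1; DEPLOY_AP=0; SERVICES=""
  for arg in "$@"; do
    case "$arg" in
      --no-migrate) MIGRATE=0 ;;
      --deploy-ap)  DEPLOY_AP=1 ;;
      *)            SERVICES="$SERVICES $arg" ;;   # bare args select services
    esac
  done
}

deploy() {
  parse_args "$@"
  git diff --quiet || { echo "uncommitted changes - aborting"; return 1; }  # step 1
  [ "$DEPLOY_AP" -eq 1 ] && ./scripts/build-activepieces.sh deploy          # step 3
  ssh prod "cd ~/smoothschedule && git pull"                                # step 4
  ssh prod "cd ~/smoothschedule/smoothschedule &&
    docker compose -f docker-compose.production.yml build $SERVICES &&
    docker compose -f docker-compose.production.yml up -d"                  # steps 6-7
  [ "$MIGRATE" -eq 1 ] && ssh prod "cd ~/smoothschedule/smoothschedule &&
    docker compose -f docker-compose.production.yml exec django python manage.py migrate"  # step 9
}
```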

## Activepieces Updates

When you modify custom pieces (in `activepieces-fork/`):

1. Make your changes to the piece code
2. Commit and push
3. Deploy with the image flag:
   ```bash
   ./deploy.sh --deploy-ap
   ```

On startup, the Activepieces container will:

1. Start with the new image
2. Run `publish-pieces.sh` to register custom pieces
3. Insert piece metadata into the database

### Custom Pieces

Custom pieces are located in:

- `activepieces-fork/packages/pieces/community/smoothschedule/` - Main SmoothSchedule piece
- `activepieces-fork/packages/pieces/community/python-code/` - Python code execution
- `activepieces-fork/packages/pieces/community/ruby-code/` - Ruby code execution

Piece metadata is registered via:

- `activepieces-fork/custom-pieces-metadata.sql` - Database registration
- `activepieces-fork/publish-pieces.sh` - Container startup script

## Troubleshooting

### View Logs

```bash
# All services
docker compose -f docker-compose.production.yml logs -f

# A specific service
docker compose -f docker-compose.production.yml logs -f django
docker compose -f docker-compose.production.yml logs -f activepieces
docker compose -f docker-compose.production.yml logs -f traefik
```

### Restart Services

```bash
# All services
docker compose -f docker-compose.production.yml restart

# A specific service
docker compose -f docker-compose.production.yml restart django
docker compose -f docker-compose.production.yml restart activepieces
```

### Django Shell

```bash
docker compose -f docker-compose.production.yml exec django python manage.py shell
```

### Database Access

```bash
# SmoothSchedule database
docker compose -f docker-compose.production.yml exec postgres \
    psql -U <postgres_user> -d smoothschedule

# Activepieces database
docker compose -f docker-compose.production.yml exec postgres \
    psql -U <ap_user> -d activepieces
```

### Common Issues

**1. Activepieces pieces not showing up**

```bash
# Check whether the platform exists
docker compose -f docker-compose.production.yml exec postgres \
    psql -U <ap_user> -d activepieces -c "SELECT id FROM platform"

# Restart to re-run piece registration
docker compose -f docker-compose.production.yml restart activepieces

# Check logs for errors
docker compose -f docker-compose.production.yml logs activepieces | grep -i error
```

**2. 502 Bad Gateway**

- The service may still be starting; wait a moment and retry
- Check container health: `docker compose ps`
- Check logs for errors

**3. Database connection errors**

- Verify credentials in `.envs/.production/`
- Ensure PostgreSQL is running: `docker compose ps postgres`

**4. Activepieces embedding not working**

- Verify `AP_JWT_SECRET` matches in both `.django` and `.activepieces`
- Verify `AP_PLATFORM_ID` is set correctly in both files
- Check that `AP_EMBEDDING_ENABLED=true` in `.activepieces`

**5. SSL certificate issues**

```bash
# Check Traefik logs
docker compose -f docker-compose.production.yml logs traefik

# Verify DNS is pointing to the server
dig yourdomain.com +short

# Ensure ports 80 and 443 are open
sudo ufw allow 80
sudo ufw allow 443
```

## Maintenance

### Backups

```bash
# List database backups
docker compose -f docker-compose.production.yml exec postgres backups

# Restore a backup
docker compose -f docker-compose.production.yml exec postgres restore backup_filename.sql.gz
```
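
To automate backups, a cron entry along these lines can work — the schedule is an example, and it assumes the postgres image ships a `backup` script alongside the `backups`/`restore` scripts shown above (verify this against your image before relying on it):

```
# Example crontab entry: nightly database backup at 03:00
0 3 * * * cd ~/smoothschedule/smoothschedule && docker compose -f docker-compose.production.yml exec -T postgres backup
```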

### Monitoring

- **Flower Dashboard**: `https://yourdomain.com:5555` - Celery task monitoring
- **Container Status**: `docker compose ps`
- **Resource Usage**: `docker stats`

### Security Checklist

- [x] SSL/HTTPS enabled via Let's Encrypt (automatic with Traefik)
- [x] All secret keys are unique random values
- [x] Database passwords are strong
- [x] Flower dashboard is password protected
- [ ] Firewall configured (UFW)
- [ ] SSH key-based authentication only
- [ ] Regular backups configured
- [ ] Sentry error monitoring (optional)

## File Structure

```
smoothschedule/
├── deploy.sh                        # Main deployment script
├── DEPLOYMENT.md                    # This file
├── scripts/
│   └── build-activepieces.sh        # Activepieces image builder
├── smoothschedule/
│   ├── docker-compose.production.yml
│   ├── scripts/
│   │   └── init-production.sh       # One-time initialization
│   ├── .envs/
│   │   └── .production/             # Production secrets (NOT in git)
│   │       ├── .django
│   │       ├── .postgres
│   │       └── .activepieces
│   └── .envs.example/               # Template files (in git)
│       ├── .django
│       ├── .postgres
│       └── .activepieces
└── activepieces-fork/
    ├── Dockerfile
    ├── custom-pieces-metadata.sql
    ├── publish-pieces.sh
    └── packages/pieces/community/
        ├── smoothschedule/          # Main custom piece
        ├── python-code/
        └── ruby-code/
```

@@ -1 +1 @@
-1766209168989
+1766280110308

@@ -1,11 +1,15 @@
 FROM node:20.19-bullseye-slim AS base
 
 # Set environment variables early for better layer caching
+# Memory optimizations for low-RAM servers (2GB):
+# - Limit Node.js heap to 1536MB to leave room for system
+# - Disable NX daemon and cloud to reduce overhead
 ENV LANG=en_US.UTF-8 \
     LANGUAGE=en_US:en \
     LC_ALL=en_US.UTF-8 \
     NX_DAEMON=false \
-    NX_NO_CLOUD=true
+    NX_NO_CLOUD=true \
+    NODE_OPTIONS="--max-old-space-size=1536"
 
 # Install all system dependencies in a single layer with cache mounts
 RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \

@@ -63,7 +67,7 @@ RUN --mount=type=cache,target=/root/.bun/install/cache \
 COPY . .
 
 # Build all projects including custom pieces
-RUN npx nx run-many --target=build --projects=react-ui,server-api,pieces-smoothschedule,pieces-python-code,pieces-ruby-code --configuration production --parallel=2 --skip-nx-cache --verbose
+RUN npx nx run-many --target=build --projects=react-ui,server-api,pieces-smoothschedule,pieces-python-code,pieces-ruby-code,pieces-interfaces --configuration production --parallel=2 --skip-nx-cache
 
 # Install production dependencies only for the backend API
 RUN --mount=type=cache,target=/root/.bun/install/cache \

@@ -77,6 +81,8 @@ RUN --mount=type=cache,target=/root/.bun/install/cache \
     cd ../python-code && \
     bun install --production && \
     cd ../ruby-code && \
-    bun install --production
+    bun install --production && \
+    cd ../interfaces && \
+    bun install --production
 
 ### STAGE 2: Run ###

@@ -84,24 +90,30 @@ FROM base AS run
 
 WORKDIR /usr/src/app
 
-# Install Nginx and gettext in a single layer with cache mount
+# Install Nginx, gettext, and PostgreSQL client in a single layer with cache mount
 RUN --mount=type=cache,target=/var/cache/apt,sharing=locked \
     --mount=type=cache,target=/var/lib/apt,sharing=locked \
     apt-get update && \
-    apt-get install -y --no-install-recommends nginx gettext
+    apt-get install -y --no-install-recommends nginx gettext postgresql-client
 
 # Copy static configuration files first (better layer caching)
 COPY nginx.react.conf /etc/nginx/nginx.conf
 COPY --from=build /usr/src/app/packages/server/api/src/assets/default.cf /usr/local/etc/isolate
 COPY docker-entrypoint.sh .
+COPY custom-pieces-metadata.sql .
+COPY publish-pieces.sh .
 
 # Create all necessary directories in one layer
+# Also create symlink for AP_DEV_PIECES to find pieces in dist folder
+# Structure: /packages/pieces/community -> /dist/packages/pieces/community
 RUN mkdir -p \
     /usr/src/app/dist/packages/server \
     /usr/src/app/dist/packages/engine \
     /usr/src/app/dist/packages/shared \
-    /usr/src/app/dist/packages/pieces && \
-    chmod +x docker-entrypoint.sh
+    /usr/src/app/dist/packages/pieces \
+    /usr/src/app/packages/pieces && \
+    ln -sf /usr/src/app/dist/packages/pieces/community /usr/src/app/packages/pieces/community && \
+    chmod +x docker-entrypoint.sh publish-pieces.sh
 
 # Copy built artifacts from build stage
 COPY --from=build /usr/src/app/LICENSE .

activepieces-fork/custom-pieces-metadata.sql (new file, 123 lines)

@@ -0,0 +1,123 @@
-- ==============================================================================
-- Custom SmoothSchedule Pieces Metadata
-- ==============================================================================
-- This script registers custom pieces in the Activepieces database.
-- It runs on container startup via publish-pieces.sh.
--
-- IMPORTANT:
-- - Pieces use pieceType=CUSTOM with platformId to avoid being deleted by sync
-- - This script is IDEMPOTENT - safe to run multiple times
-- - If the platform doesn't exist yet, this script will silently skip
-- ==============================================================================

-- Get the platform ID dynamically and only proceed if the platform exists
DO $$
DECLARE
    platform_id varchar(21);
    platform_count integer;
BEGIN
    -- Check if the platform table has data
    SELECT COUNT(*) INTO platform_count FROM platform;

    IF platform_count = 0 THEN
        RAISE NOTICE 'No platform found yet - skipping piece metadata registration';
        RAISE NOTICE 'Pieces will be registered on next container restart after platform is created';
        RETURN;
    END IF;

    SELECT id INTO platform_id FROM platform LIMIT 1;
    RAISE NOTICE 'Registering custom pieces for platform: %', platform_id;

    -- Pin our custom pieces in the platform so they appear first
    UPDATE platform
    SET "pinnedPieces" = ARRAY[
        '@activepieces/piece-smoothschedule',
        '@activepieces/piece-python-code',
        '@activepieces/piece-ruby-code'
    ]::varchar[]
    WHERE id = platform_id
      AND ("pinnedPieces" = '{}' OR "pinnedPieces" IS NULL OR NOT '@activepieces/piece-smoothschedule' = ANY("pinnedPieces"));

    -- Delete existing entries for our custom pieces (to avoid ID conflicts)
    DELETE FROM piece_metadata WHERE name IN (
        '@activepieces/piece-smoothschedule',
        '@activepieces/piece-python-code',
        '@activepieces/piece-ruby-code',
        '@activepieces/piece-interfaces'
    );

    -- SmoothSchedule piece
    INSERT INTO piece_metadata (
        id, name, "displayName", "logoUrl", description, version,
        "minimumSupportedRelease", "maximumSupportedRelease",
        actions, triggers, auth, "pieceType", "packageType", categories, authors, "projectUsage", "platformId"
    ) VALUES (
        'smoothschedule001',
        '@activepieces/piece-smoothschedule',
        'SmoothSchedule',
        'https://api.smoothschedule.com/images/logo-branding.png',
|
||||
'Scheduling and appointment management for your business',
|
||||
'0.0.1',
|
||||
'0.36.1',
|
||||
'99999.99999.9999',
|
||||
'{"create_event":{"name":"create_event","displayName":"Create Event","description":"Create a new event/appointment","props":{},"requireAuth":true,"errorHandlingOptions":{"continueOnFailure":{"defaultValue":false},"retryOnFailure":{"defaultValue":false}}},"update_event":{"name":"update_event","displayName":"Update Event","description":"Update an existing event","props":{},"requireAuth":true,"errorHandlingOptions":{"continueOnFailure":{"defaultValue":false},"retryOnFailure":{"defaultValue":false}}},"cancel_event":{"name":"cancel_event","displayName":"Cancel Event","description":"Cancel an event","props":{},"requireAuth":true,"errorHandlingOptions":{"continueOnFailure":{"defaultValue":false},"retryOnFailure":{"defaultValue":false}}},"find_events":{"name":"find_events","displayName":"Find Events","description":"Search for events","props":{},"requireAuth":true,"errorHandlingOptions":{"continueOnFailure":{"defaultValue":false},"retryOnFailure":{"defaultValue":false}}},"list_resources":{"name":"list_resources","displayName":"List Resources","description":"List all resources","props":{},"requireAuth":true,"errorHandlingOptions":{"continueOnFailure":{"defaultValue":false},"retryOnFailure":{"defaultValue":false}}},"list_services":{"name":"list_services","displayName":"List Services","description":"List all services","props":{},"requireAuth":true,"errorHandlingOptions":{"continueOnFailure":{"defaultValue":false},"retryOnFailure":{"defaultValue":false}}},"list_inactive_customers":{"name":"list_inactive_customers","displayName":"List Inactive Customers","description":"List customers who havent booked recently","props":{},"requireAuth":true,"errorHandlingOptions":{"continueOnFailure":{"defaultValue":false},"retryOnFailure":{"defaultValue":false}}},"send_email":{"name":"send_email","displayName":"Send Email","description":"Send an email using a SmoothSchedule email 
template","props":{},"requireAuth":true,"errorHandlingOptions":{"continueOnFailure":{"defaultValue":false},"retryOnFailure":{"defaultValue":false}}},"list_email_templates":{"name":"list_email_templates","displayName":"List Email Templates","description":"Get all available email templates","props":{},"requireAuth":true,"errorHandlingOptions":{"continueOnFailure":{"defaultValue":false},"retryOnFailure":{"defaultValue":false}}},"custom_api_call":{"name":"custom_api_call","displayName":"Custom API Call","description":"Make a custom API request","props":{},"requireAuth":true,"errorHandlingOptions":{"continueOnFailure":{"defaultValue":false},"retryOnFailure":{"defaultValue":false}}}}',
|
||||
'{"event_created":{"name":"event_created","displayName":"Event Created","description":"Triggers when a new event is created","props":{},"type":"WEBHOOK","handshakeConfiguration":{"strategy":"NONE"},"requireAuth":true,"testStrategy":"SIMULATION"},"event_updated":{"name":"event_updated","displayName":"Event Updated","description":"Triggers when an event is updated","props":{},"type":"WEBHOOK","handshakeConfiguration":{"strategy":"NONE"},"requireAuth":true,"testStrategy":"SIMULATION"},"event_cancelled":{"name":"event_cancelled","displayName":"Event Cancelled","description":"Triggers when an event is cancelled","props":{},"type":"WEBHOOK","handshakeConfiguration":{"strategy":"NONE"},"requireAuth":true,"testStrategy":"SIMULATION"},"event_status_changed":{"name":"event_status_changed","displayName":"Event Status Changed","description":"Triggers when event status changes","props":{},"type":"WEBHOOK","handshakeConfiguration":{"strategy":"NONE"},"requireAuth":true,"testStrategy":"SIMULATION"}}',
|
||||
'{"type":"CUSTOM_AUTH","displayName":"Connection","description":"Connect to your SmoothSchedule account","required":true,"props":{"baseUrl":{"displayName":"API URL","description":"Your SmoothSchedule API URL","required":true,"type":"SECRET_TEXT"},"apiToken":{"displayName":"API Token","description":"Your API token from Settings","required":true,"type":"SECRET_TEXT"}}}',
|
||||
'CUSTOM',
|
||||
'REGISTRY',
|
||||
ARRAY['PRODUCTIVITY', 'SALES_AND_CRM'],
|
||||
ARRAY['smoothschedule'],
|
||||
100,
|
||||
platform_id
|
||||
);
|
||||
|
||||
-- Python Code piece
|
||||
INSERT INTO piece_metadata (
|
||||
id, name, "displayName", "logoUrl", description, version,
|
||||
"minimumSupportedRelease", "maximumSupportedRelease",
|
||||
actions, triggers, auth, "pieceType", "packageType", categories, authors, "projectUsage", "platformId"
|
||||
) VALUES (
|
||||
'pythoncode00001',
|
||||
'@activepieces/piece-python-code',
|
||||
'Python Code',
|
||||
'https://api.smoothschedule.com/images/python-logo.svg',
|
||||
'Execute custom Python code in your workflows',
|
||||
'0.0.1',
|
||||
'0.36.1',
|
||||
'99999.99999.9999',
|
||||
'{"run_python":{"name":"run_python","displayName":"Run Python Code","description":"Execute Python code and return results","props":{"code":{"displayName":"Python Code","description":"The Python code to execute","required":true,"type":"LONG_TEXT"}},"requireAuth":false,"errorHandlingOptions":{"continueOnFailure":{"defaultValue":false},"retryOnFailure":{"defaultValue":false}}}}',
|
||||
'{}',
|
||||
NULL,
|
||||
'CUSTOM',
|
||||
'REGISTRY',
|
||||
ARRAY['DEVELOPER_TOOLS'],
|
||||
ARRAY['smoothschedule'],
|
||||
0,
|
||||
platform_id
|
||||
);
|
||||
|
||||
-- Ruby Code piece
|
||||
INSERT INTO piece_metadata (
|
||||
id, name, "displayName", "logoUrl", description, version,
|
||||
"minimumSupportedRelease", "maximumSupportedRelease",
|
||||
actions, triggers, auth, "pieceType", "packageType", categories, authors, "projectUsage", "platformId"
|
||||
) VALUES (
|
||||
'rubycode000001',
|
||||
'@activepieces/piece-ruby-code',
|
||||
'Ruby Code',
|
||||
'https://api.smoothschedule.com/images/ruby-logo.svg',
|
||||
'Execute custom Ruby code in your workflows',
|
||||
'0.0.1',
|
||||
'0.36.1',
|
||||
'99999.99999.9999',
|
||||
'{"run_ruby":{"name":"run_ruby","displayName":"Run Ruby Code","description":"Execute Ruby code and return results","props":{"code":{"displayName":"Ruby Code","description":"The Ruby code to execute","required":true,"type":"LONG_TEXT"}},"requireAuth":false,"errorHandlingOptions":{"continueOnFailure":{"defaultValue":false},"retryOnFailure":{"defaultValue":false}}}}',
|
||||
'{}',
|
||||
NULL,
|
||||
'CUSTOM',
|
||||
'REGISTRY',
|
||||
ARRAY['DEVELOPER_TOOLS'],
|
||||
ARRAY['smoothschedule'],
|
||||
0,
|
||||
platform_id
|
||||
);
|
||||
END $$;
|
||||
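The delete-then-insert pattern is what makes the script safe to re-run; a minimal shell sketch of the same idea, with a flat file standing in for the `piece_metadata` table (illustrative only):

```shell
db=$(mktemp)
register() {
  # DELETE any existing row for the piece (grep exits non-zero on no match, so ignore it)
  grep -v '^piece-smoothschedule ' "$db" > "$db.tmp" || true
  mv "$db.tmp" "$db"
  # INSERT the fresh row
  echo 'piece-smoothschedule 0.0.1' >> "$db"
}
register
register   # running twice leaves exactly one row
grep -c '^piece-smoothschedule ' "$db"   # prints 1
rm -f "$db"
```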
@@ -12,6 +12,10 @@ echo "AP_FAVICON_URL: $AP_FAVICON_URL"
 envsubst '${AP_APP_TITLE} ${AP_FAVICON_URL}' < /usr/share/nginx/html/index.html > /usr/share/nginx/html/index.html.tmp && \
     mv /usr/share/nginx/html/index.html.tmp /usr/share/nginx/html/index.html

+# Register custom pieces (publish to Verdaccio and insert metadata)
+if [ -f /usr/src/app/publish-pieces.sh ]; then
+    /usr/src/app/publish-pieces.sh || echo "Warning: Custom pieces registration had issues"
+fi

 # Start Nginx server
 nginx -g "daemon off;" &
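The `|| echo` guard in the entrypoint keeps a failing registration step from blocking container startup; the pattern in isolation (command names here are placeholders, not the real entrypoint):

```shell
run_optional() {
  # Run a startup hook, but never let its failure stop the boot sequence
  "$@" || echo "Warning: $1 had issues"
}
run_optional false            # fails, but execution continues
run_optional echo "hook ok"   # succeeds normally
echo "nginx would start here"
```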
@@ -31,7 +31,7 @@ http {
         proxy_send_timeout 900s;
     }

-    location ~* ^/(?!api/).*.(css|js|jpg|jpeg|png|gif|ico|svg)$ {
+    location ~* ^/(?!api/).*\.(css|js|jpg|jpeg|png|gif|ico|svg|woff|woff2|ttf|eot)$ {
         root /usr/share/nginx/html;
         add_header Expires "0";
         add_header Cache-Control "public, max-age=31536000, immutable";
@@ -0,0 +1,4 @@
{
  "name": "@activepieces/piece-interfaces",
  "version": "0.0.1"
}
@@ -0,0 +1,50 @@
{
  "name": "pieces-interfaces",
  "$schema": "../../../../node_modules/nx/schemas/project-schema.json",
  "sourceRoot": "packages/pieces/community/interfaces/src",
  "projectType": "library",
  "targets": {
    "build": {
      "executor": "@nx/js:tsc",
      "outputs": [
        "{options.outputPath}"
      ],
      "options": {
        "outputPath": "dist/packages/pieces/community/interfaces",
        "tsConfig": "packages/pieces/community/interfaces/tsconfig.lib.json",
        "packageJson": "packages/pieces/community/interfaces/package.json",
        "main": "packages/pieces/community/interfaces/src/index.ts",
        "assets": [],
        "buildableProjectDepsInPackageJsonType": "dependencies",
        "updateBuildableProjectDepsInPackageJson": true
      },
      "dependsOn": [
        "^build",
        "prebuild"
      ]
    },
    "publish": {
      "command": "node tools/scripts/publish.mjs pieces-interfaces {args.ver} {args.tag}",
      "dependsOn": [
        "build"
      ]
    },
    "lint": {
      "executor": "@nx/eslint:lint",
      "outputs": [
        "{options.outputFile}"
      ]
    },
    "prebuild": {
      "executor": "nx:run-commands",
      "options": {
        "cwd": "packages/pieces/community/interfaces",
        "command": "bun install --no-save --silent"
      },
      "dependsOn": [
        "^build"
      ]
    }
  },
  "tags": []
}
@@ -0,0 +1,14 @@
import { createPiece, PieceAuth } from '@activepieces/pieces-framework';
import { PieceCategory } from '@activepieces/shared';

export const interfaces = createPiece({
  displayName: 'Interfaces',
  description: 'Create custom forms and interfaces for your workflows.',
  auth: PieceAuth.None(),
  categories: [PieceCategory.CORE],
  minimumSupportedRelease: '0.52.0',
  logoUrl: 'https://cdn.activepieces.com/pieces/interfaces.svg',
  authors: ['activepieces'],
  actions: [],
  triggers: [],
});
@@ -0,0 +1,19 @@
{
  "extends": "../../../../tsconfig.base.json",
  "compilerOptions": {
    "module": "commonjs",
    "forceConsistentCasingInFileNames": true,
    "strict": true,
    "noImplicitOverride": true,
    "noImplicitReturns": true,
    "noFallthroughCasesInSwitch": true,
    "noPropertyAccessFromIndexSignature": true
  },
  "files": [],
  "include": [],
  "references": [
    {
      "path": "./tsconfig.lib.json"
    }
  ]
}
@@ -0,0 +1,11 @@
{
  "extends": "./tsconfig.json",
  "compilerOptions": {
    "module": "commonjs",
    "outDir": "../../../../dist/out-tsc",
    "declaration": true,
    "types": ["node"]
  },
  "exclude": ["jest.config.ts", "src/**/*.spec.ts", "src/**/*.test.ts"],
  "include": ["src/**/*.ts"]
}
@@ -5,6 +5,8 @@ import { createEventAction, findEventsAction, updateEventAction, cancelEventActi
 import { listResourcesAction } from './lib/actions/list-resources';
 import { listServicesAction } from './lib/actions/list-services';
 import { listInactiveCustomersAction } from './lib/actions/list-inactive-customers';
+import { sendEmailAction } from './lib/actions/send-email';
+import { listEmailTemplatesAction } from './lib/actions/list-email-templates';
 import { eventCreatedTrigger, eventUpdatedTrigger, eventCancelledTrigger, eventStatusChangedTrigger } from './lib/triggers';
 import { API_URL } from './lib/common';

@@ -75,6 +77,8 @@ export const smoothSchedule = createPiece({
     listResourcesAction,
     listServicesAction,
     listInactiveCustomersAction,
+    sendEmailAction,
+    listEmailTemplatesAction,
     createCustomApiCallAction({
       auth: smoothScheduleAuth,
       baseUrl: (auth) => (auth as SmoothScheduleAuth)?.props?.baseUrl ?? '',
@@ -0,0 +1,23 @@
import { createAction } from '@activepieces/pieces-framework';
import { HttpMethod } from '@activepieces/pieces-common';
import { smoothScheduleAuth, SmoothScheduleAuth } from '../../index';
import { makeRequest } from '../common';

export const listEmailTemplatesAction = createAction({
  auth: smoothScheduleAuth,
  name: 'list_email_templates',
  displayName: 'List Email Templates',
  description: 'Get all available email templates (system and custom)',
  props: {},
  async run(context) {
    const auth = context.auth as SmoothScheduleAuth;

    const response = await makeRequest(
      auth,
      HttpMethod.GET,
      '/emails/templates/'
    );

    return response;
  },
});
@@ -0,0 +1,112 @@
import { Property, createAction } from '@activepieces/pieces-framework';
import { HttpMethod } from '@activepieces/pieces-common';
import { smoothScheduleAuth, SmoothScheduleAuth } from '../../index';
import { makeRequest } from '../common';

export const sendEmailAction = createAction({
  auth: smoothScheduleAuth,
  name: 'send_email',
  displayName: 'Send Email',
  description: 'Send an email using a SmoothSchedule email template',
  props: {
    template_type: Property.StaticDropdown({
      displayName: 'Template Type',
      description: 'Choose whether to use a system template or a custom template',
      required: true,
      options: {
        options: [
          { label: 'System Template', value: 'system' },
          { label: 'Custom Template', value: 'custom' },
        ],
      },
    }),
    email_type: Property.StaticDropdown({
      displayName: 'System Email Type',
      description: 'Select a system email template',
      required: false,
      options: {
        options: [
          { label: 'Appointment Confirmation', value: 'appointment_confirmation' },
          { label: 'Appointment Reminder', value: 'appointment_reminder' },
          { label: 'Appointment Rescheduled', value: 'appointment_rescheduled' },
          { label: 'Appointment Cancelled', value: 'appointment_cancelled' },
          { label: 'Welcome Email', value: 'welcome_email' },
          { label: 'Password Reset', value: 'password_reset' },
          { label: 'Invoice', value: 'invoice' },
          { label: 'Payment Receipt', value: 'payment_receipt' },
          { label: 'Staff Invitation', value: 'staff_invitation' },
          { label: 'Customer Winback', value: 'customer_winback' },
        ],
      },
    }),
    template_slug: Property.ShortText({
      displayName: 'Custom Template Slug',
      description: 'The slug/identifier of your custom email template',
      required: false,
    }),
    to_email: Property.ShortText({
      displayName: 'Recipient Email',
      description: 'The email address to send to',
      required: true,
    }),
    subject_override: Property.ShortText({
      displayName: 'Subject Override',
      description: 'Override the template subject (optional)',
      required: false,
    }),
    reply_to: Property.ShortText({
      displayName: 'Reply-To Email',
      description: 'Reply-to email address (optional)',
      required: false,
    }),
    context: Property.Object({
      displayName: 'Template Variables',
      description: 'Variables to replace in the template (e.g., customer_name, appointment_date)',
      required: false,
    }),
  },
  async run(context) {
    const { template_type, email_type, template_slug, to_email, subject_override, reply_to, context: templateContext } = context.propsValue;
    const auth = context.auth as SmoothScheduleAuth;

    // Validate that the right template identifier is provided based on type
    if (template_type === 'system' && !email_type) {
      throw new Error('System Email Type is required when using System Template');
    }
    if (template_type === 'custom' && !template_slug) {
      throw new Error('Custom Template Slug is required when using Custom Template');
    }

    // Build the request body
    const requestBody: Record<string, unknown> = {
      to_email,
    };

    if (template_type === 'system') {
      requestBody['email_type'] = email_type;
    } else {
      requestBody['template_slug'] = template_slug;
    }

    if (subject_override) {
      requestBody['subject_override'] = subject_override;
    }

    if (reply_to) {
      requestBody['reply_to'] = reply_to;
    }

    if (templateContext && Object.keys(templateContext).length > 0) {
      requestBody['context'] = templateContext;
    }

    const response = await makeRequest(
      auth,
      HttpMethod.POST,
      '/emails/send/',
      requestBody
    );

    return response;
  },
});
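The system/custom branching above maps to two different request fields; a shell sketch of the same decision (field names are taken from the action, everything else is illustrative):

```shell
# Build the JSON body: system templates use email_type, custom ones use template_slug
build_body() {
  template_type=$1; template_id=$2; to_email=$3
  if [ "$template_type" = "system" ]; then
    printf '{"to_email":"%s","email_type":"%s"}\n' "$to_email" "$template_id"
  else
    printf '{"to_email":"%s","template_slug":"%s"}\n' "$to_email" "$template_id"
  fi
}
build_body system welcome_email a@b.com
build_body custom winback-v2 a@b.com
```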
@@ -1,6 +1,6 @@
 import { t } from 'i18next';
 import { Plus, Globe } from 'lucide-react';
-import { useState } from 'react';
+import { useState, useEffect } from 'react';
 import { useFormContext } from 'react-hook-form';

 import { AutoFormFieldWrapper } from '@/app/builder/piece-properties/auto-form-field-wrapper';

@@ -80,6 +80,27 @@ function ConnectionSelect(params: ConnectionSelectProps) {
     PropertyExecutionType.DYNAMIC;
   const isPLatformAdmin = useIsPlatformAdmin();

+  // Auto-select connection with autoSelect metadata if no connection is selected
+  useEffect(() => {
+    if (isLoadingConnections || !connections?.data) return;
+
+    const currentAuth = form.getValues().settings.input.auth;
+    // Only auto-select if no connection is currently selected
+    if (currentAuth && removeBrackets(currentAuth)) return;
+
+    // Find a connection with autoSelect metadata
+    const autoSelectConnection = connections.data.find(
+      (connection) => (connection as any).metadata?.autoSelect === true
+    );
+
+    if (autoSelectConnection) {
+      form.setValue('settings.input.auth', addBrackets(autoSelectConnection.externalId), {
+        shouldValidate: true,
+        shouldDirty: true,
+      });
+    }
+  }, [connections?.data, isLoadingConnections, form]);

   return (
     <FormField
       control={form.control}
@@ -22,8 +22,8 @@ import { ScrollArea } from '@/components/ui/scroll-area';
 import { LoadingSpinner } from '@/components/ui/spinner';
 import { TemplateCard } from '@/features/templates/components/template-card';
 import { TemplateDetailsView } from '@/features/templates/components/template-details-view';
-import { useTemplates } from '@/features/templates/hooks/templates-hook';
-import { Template, TemplateType } from '@activepieces/shared';
+import { useAllTemplates } from '@/features/templates/hooks/templates-hook';
+import { Template } from '@activepieces/shared';

 const SelectFlowTemplateDialog = ({
   children,
@@ -32,9 +32,7 @@ const SelectFlowTemplateDialog = ({
   children: React.ReactNode;
   folderId: string;
 }) => {
-  const { filteredTemplates, isLoading, search, setSearch } = useTemplates({
-    type: TemplateType.CUSTOM,
-  });
+  const { filteredTemplates, isLoading, search, setSearch } = useAllTemplates();
   const carousel = useRef<CarouselApi>();
   const [selectedTemplate, setSelectedTemplate] = useState<Template | null>(
     null,
@@ -247,6 +247,23 @@ export const appConnectionService = (log: FastifyBaseLogger) => ({
     },

     async delete(params: DeleteParams): Promise<void> {
+        // Check if connection is protected before deleting
+        const connection = await appConnectionsRepo().findOneBy({
+            id: params.id,
+            platformId: params.platformId,
+            scope: params.scope,
+            ...(params.projectId ? { projectIds: ArrayContains([params.projectId]) } : {}),
+        })
+
+        if (connection?.metadata?.protected) {
+            throw new ActivepiecesError({
+                code: ErrorCode.VALIDATION,
+                params: {
+                    message: 'This connection is protected and cannot be deleted. It is required for SmoothSchedule integration.',
+                },
+            })
+        }
+
         await appConnectionsRepo().delete({
             id: params.id,
             platformId: params.platformId,
164
activepieces-fork/publish-pieces.sh
Normal file
@@ -0,0 +1,164 @@
#!/bin/sh
# Publish custom pieces to Verdaccio and register metadata in database
# This script runs on container startup

set -e

VERDACCIO_URL="${VERDACCIO_URL:-http://verdaccio:4873}"
PIECES_DIR="/usr/src/app/dist/packages/pieces/community"
CUSTOM_PIECES="smoothschedule python-code ruby-code interfaces"

# Wait for Verdaccio to be ready
wait_for_verdaccio() {
    echo "Waiting for Verdaccio to be ready..."
    max_attempts=30
    attempt=0
    while [ $attempt -lt $max_attempts ]; do
        if curl -sf "$VERDACCIO_URL/-/ping" > /dev/null 2>&1; then
            echo "Verdaccio is ready!"
            return 0
        fi
        attempt=$((attempt + 1))
        echo "Attempt $attempt/$max_attempts - Verdaccio not ready yet..."
        sleep 2
    done
    echo "Warning: Verdaccio not available after $max_attempts attempts"
    return 1
}

# Configure npm/bun to use Verdaccio with authentication
configure_registry() {
    echo "Configuring npm registry to use Verdaccio..."

    # Register user with Verdaccio first
    echo "Registering npm user with Verdaccio..."
    RESPONSE=$(curl -sf -X PUT "$VERDACCIO_URL/-/user/org.couchdb.user:publisher" \
        -H "Content-Type: application/json" \
        -d '{"name":"publisher","password":"publisher","email":"publisher@smoothschedule.com"}' 2>&1) || true
    echo "Registration response: $RESPONSE"

    # Extract token from response if available
    TOKEN=$(echo "$RESPONSE" | node -pe "JSON.parse(require('fs').readFileSync('/dev/stdin').toString()).token" 2>/dev/null || echo "")

    if [ -n "$TOKEN" ] && [ "$TOKEN" != "undefined" ]; then
        echo "Using token from registration"
        cat > ~/.npmrc << EOF
registry=$VERDACCIO_URL
//verdaccio:4873/:_authToken=$TOKEN
EOF
    else
        echo "Using basic auth"
        # Use legacy _auth format (base64 of username:password)
        AUTH=$(echo -n "publisher:publisher" | base64)
        cat > ~/.npmrc << EOF
registry=$VERDACCIO_URL
//verdaccio:4873/:_auth=$AUTH
always-auth=true
EOF
    fi

    # Create bunfig.toml for bun
    mkdir -p ~/.bun
    cat > ~/.bun/bunfig.toml << EOF
[install]
registry = "$VERDACCIO_URL"
EOF
    echo "Registry configured: $VERDACCIO_URL"
}

# Publish a piece to Verdaccio
publish_piece() {
    piece_name=$1
    piece_dir="$PIECES_DIR/$piece_name"

    if [ ! -d "$piece_dir" ]; then
        echo "Warning: Piece directory not found: $piece_dir"
        return 1
    fi

    cd "$piece_dir"

    # Get package name and version
    pkg_name=$(node -p "require('./package.json').name")
    pkg_version=$(node -p "require('./package.json').version")

    echo "Publishing $pkg_name@$pkg_version to Verdaccio..."

    # Check if already published
    if npm view "$pkg_name@$pkg_version" --registry "$VERDACCIO_URL" > /dev/null 2>&1; then
        echo "  $pkg_name@$pkg_version already published, skipping..."
        return 0
    fi

    # Publish to Verdaccio
    if npm publish --registry "$VERDACCIO_URL" 2>&1; then
        echo "  Successfully published $pkg_name@$pkg_version"
    else
        echo "  Warning: Could not publish $pkg_name (may already exist)"
    fi

    cd /usr/src/app
}

# Insert piece metadata into database
insert_metadata() {
    if [ -z "$AP_POSTGRES_HOST" ] || [ -z "$AP_POSTGRES_DATABASE" ]; then
        echo "Warning: Database configuration not available, skipping metadata insertion"
        return 1
    fi

    echo "Inserting custom piece metadata into database..."
    echo "  Host: $AP_POSTGRES_HOST"
    echo "  Database: $AP_POSTGRES_DATABASE"
    echo "  User: $AP_POSTGRES_USERNAME"

    # Wait for PostgreSQL to be ready
    max_attempts=30
    attempt=0
    while [ $attempt -lt $max_attempts ]; do
        if PGPASSWORD="$AP_POSTGRES_PASSWORD" psql -h "$AP_POSTGRES_HOST" -p "${AP_POSTGRES_PORT:-5432}" -U "$AP_POSTGRES_USERNAME" -d "$AP_POSTGRES_DATABASE" -c "SELECT 1" > /dev/null 2>&1; then
            break
        fi
        attempt=$((attempt + 1))
        echo "Waiting for PostgreSQL... ($attempt/$max_attempts)"
        sleep 2
    done

    if [ $attempt -eq $max_attempts ]; then
        echo "Warning: PostgreSQL not available, skipping metadata insertion"
        return 1
    fi

    # Run the SQL file
    PGPASSWORD="$AP_POSTGRES_PASSWORD" psql -h "$AP_POSTGRES_HOST" -p "${AP_POSTGRES_PORT:-5432}" -U "$AP_POSTGRES_USERNAME" -d "$AP_POSTGRES_DATABASE" -f /usr/src/app/custom-pieces-metadata.sql

    echo "Piece metadata inserted successfully!"
}

# Main execution
main() {
    echo "============================================"
    echo "Custom Pieces Registration"
    echo "============================================"

    # Configure registry first (needed for both Verdaccio and fallback to npm)
    if wait_for_verdaccio; then
        configure_registry

        # Publish each custom piece
        for piece in $CUSTOM_PIECES; do
            publish_piece "$piece" || true
        done
    else
        echo "Skipping Verdaccio publishing - will use npm registry"
    fi

    # Insert metadata into database
    insert_metadata || true

    echo "============================================"
    echo "Custom Pieces Registration Complete"
    echo "============================================"
}

main "$@"
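`wait_for_verdaccio` and the PostgreSQL wait are both the same bounded-retry loop; the skeleton, generalized to any command (attempt delay dropped for illustration):

```shell
# wait_for: retry a command until it succeeds or attempts run out
wait_for() {
  max_attempts=$1; shift
  attempt=0
  while [ "$attempt" -lt "$max_attempts" ]; do
    if "$@" > /dev/null 2>&1; then
      return 0
    fi
    attempt=$((attempt + 1))
  done
  return 1
}
wait_for 3 true && echo "service ready"
wait_for 2 false || echo "gave up after retries"
```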
136
deploy.sh
@@ -1,15 +1,33 @@
 #!/bin/bash
 # ==============================================================================
 # SmoothSchedule Production Deployment Script
-# Usage: ./deploy.sh [server_user@server_host] [services...]
-# Example: ./deploy.sh poduck@smoothschedule.com              # Build all
-# Example: ./deploy.sh poduck@smoothschedule.com traefik      # Build only traefik
-# Example: ./deploy.sh poduck@smoothschedule.com django nginx # Build django and nginx
-# ==============================================================================
-#
-# Available services: django, traefik, nginx, postgres, celeryworker, celerybeat, flower, awscli
-# Use --no-migrate to skip migrations (useful for config-only changes like traefik)
+# Usage: ./deploy.sh [server] [options] [services...]
+#
+# This script deploys from git repository, not local files.
+# Changes must be committed and pushed before deploying.
+# Examples:
+#   ./deploy.sh                    # Deploy all services
+#   ./deploy.sh --no-migrate       # Deploy without migrations
+#   ./deploy.sh django nginx       # Deploy specific services
+#   ./deploy.sh --deploy-ap        # Build & deploy Activepieces image
+#   ./deploy.sh poduck@server.com  # Deploy to custom server
+#
+# Options:
+#   --no-migrate   Skip database migrations
+#   --deploy-ap    Build Activepieces image locally and transfer to server
+#
+# Available services:
+#   django, traefik, nginx, postgres, celeryworker, celerybeat, flower, awscli, activepieces
+#
+# IMPORTANT: Activepieces Image
+# -----------------------------
+# The production server cannot build the Activepieces image (requires 4GB+ RAM).
+# Use --deploy-ap to build locally and transfer, or manually:
+#   ./scripts/build-activepieces.sh deploy
+#
+# First-time setup:
+#   Run ./smoothschedule/scripts/init-production.sh on the server
+# ==============================================================================

 set -e
@@ -23,12 +41,23 @@ NC='\033[0m' # No Color
 SERVER=""
 SERVICES=""
 SKIP_MIGRATE=false
+DEPLOY_AP=false

 for arg in "$@"; do
     if [[ "$arg" == "--no-migrate" ]]; then
         SKIP_MIGRATE=true
-    elif [[ -z "$SERVER" ]]; then
+    elif [[ "$arg" == "--deploy-ap" ]]; then
+        DEPLOY_AP=true
+    elif [[ "$arg" == *"@"* ]]; then
+        # Looks like user@host
         SERVER="$arg"
+    elif [[ -z "$SERVER" && ! "$arg" =~ ^- ]]; then
+        # First non-flag argument could be server or service
+        if [[ "$arg" =~ ^(django|traefik|nginx|postgres|celeryworker|celerybeat|flower|awscli|activepieces|redis|verdaccio)$ ]]; then
+            SERVICES="$SERVICES $arg"
+        else
+            SERVER="$arg"
+        fi
     else
         SERVICES="$SERVICES $arg"
     fi
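The argument loop distinguishes flags, `user@host` targets, and known service names; a condensed sketch of that classification (service list abbreviated, `case` used instead of bash regex):

```shell
classify() {
  case "$1" in
    --no-migrate|--deploy-ap) echo "flag" ;;
    *@*) echo "server" ;;                               # looks like user@host
    django|traefik|nginx|postgres|activepieces) echo "service" ;;
    *) echo "server" ;;                                 # bare word falls back to a server name
  esac
}
classify --deploy-ap                 # flag
classify poduck@smoothschedule.com   # server
classify django                      # service
```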
@@ -38,6 +67,7 @@ SERVER=${SERVER:-"poduck@smoothschedule.com"}
 SERVICES=$(echo "$SERVICES" | xargs) # Trim whitespace
 REPO_URL="https://git.talova.net/poduck/smoothschedule.git"
 REMOTE_DIR="/home/poduck/smoothschedule"
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

 echo -e "${GREEN}==================================="
 echo "SmoothSchedule Deployment"
@@ -51,6 +81,9 @@ fi
 if [[ "$SKIP_MIGRATE" == "true" ]]; then
     echo "Migrations: SKIPPED"
 fi
+if [[ "$DEPLOY_AP" == "true" ]]; then
+    echo "Activepieces: BUILDING AND DEPLOYING"
+fi
 echo ""

 # Function to print status
@@ -94,8 +127,36 @@ fi

 print_status "All changes committed and pushed!"

-# Step 2: Deploy on server
-print_status "Step 2: Deploying on server..."
+# Step 2: Build and deploy Activepieces image (if requested)
+if [[ "$DEPLOY_AP" == "true" ]]; then
+    print_status "Step 2: Building and deploying Activepieces image..."
+
+    # Check if the build script exists
+    if [[ -f "$SCRIPT_DIR/scripts/build-activepieces.sh" ]]; then
+        "$SCRIPT_DIR/scripts/build-activepieces.sh" deploy "$SERVER"
+    else
+        print_warning "Build script not found, building manually..."
+
+        # Build the image
+        print_status "Building Activepieces Docker image locally..."
+        cd "$SCRIPT_DIR/activepieces-fork"
+        docker build -t smoothschedule_production_activepieces .
+
+        # Save and transfer
+        print_status "Transferring image to server..."
+        docker save smoothschedule_production_activepieces | gzip > /tmp/ap-image.tar.gz
+        scp /tmp/ap-image.tar.gz "$SERVER:/tmp/"
+        ssh "$SERVER" "gunzip -c /tmp/ap-image.tar.gz | docker load && rm /tmp/ap-image.tar.gz"
+        rm /tmp/ap-image.tar.gz
+
+        cd "$SCRIPT_DIR"
+    fi
+
+    print_status "Activepieces image deployed!"
+fi
+
+# Step 3: Deploy on server
+print_status "Step 3: Deploying on server..."

 ssh "$SERVER" "bash -s" << ENDSSH
 set -e
@@ -174,6 +235,58 @@ docker compose -f docker-compose.production.yml up -d
echo ">>> Waiting for containers to start..."
sleep 5

# Setup Activepieces database (if not exists)
echo ">>> Setting up Activepieces database..."
AP_DB_USER=\$(grep AP_POSTGRES_USERNAME .envs/.production/.activepieces | cut -d= -f2)
AP_DB_PASS=\$(grep AP_POSTGRES_PASSWORD .envs/.production/.activepieces | cut -d= -f2)
AP_DB_NAME=\$(grep AP_POSTGRES_DATABASE .envs/.production/.activepieces | cut -d= -f2)

if [ -n "\$AP_DB_USER" ] && [ -n "\$AP_DB_PASS" ] && [ -n "\$AP_DB_NAME" ]; then
    # Check if user exists, create if not
    docker compose -f docker-compose.production.yml exec -T postgres psql -U \${POSTGRES_USER:-postgres} -d postgres -tc "SELECT 1 FROM pg_roles WHERE rolname='\$AP_DB_USER'" | grep -q 1 || {
        echo "    Creating Activepieces database user..."
        docker compose -f docker-compose.production.yml exec -T postgres psql -U \${POSTGRES_USER:-postgres} -d postgres -c "CREATE USER \"\$AP_DB_USER\" WITH PASSWORD '\$AP_DB_PASS';"
    }

    # Check if database exists, create if not
    docker compose -f docker-compose.production.yml exec -T postgres psql -U \${POSTGRES_USER:-postgres} -d postgres -tc "SELECT 1 FROM pg_database WHERE datname='\$AP_DB_NAME'" | grep -q 1 || {
        echo "    Creating Activepieces database..."
        docker compose -f docker-compose.production.yml exec -T postgres psql -U \${POSTGRES_USER:-postgres} -d postgres -c "CREATE DATABASE \$AP_DB_NAME OWNER \"\$AP_DB_USER\";"
    }
    echo "    Activepieces database ready."
else
    echo "    Warning: Could not read Activepieces database config from .envs/.production/.activepieces"
fi

# Wait for Activepieces to be ready
echo ">>> Waiting for Activepieces to be ready..."
for i in {1..30}; do
    if curl -s http://localhost:80/api/v1/health 2>/dev/null | grep -q "ok"; then
        echo "    Activepieces is ready."
        break
    fi
    if [ \$i -eq 30 ]; then
        echo "    Warning: Activepieces health check timed out. It may still be starting."
    fi
    sleep 2
done

# Check if Activepieces platform exists
echo ">>> Checking Activepieces platform..."
AP_PLATFORM_ID=\$(grep AP_PLATFORM_ID .envs/.production/.activepieces | cut -d= -f2)
if [ -z "\$AP_PLATFORM_ID" ]; then
    echo "    WARNING: No AP_PLATFORM_ID configured in .envs/.production/.activepieces"
    echo "    To initialize Activepieces for the first time:"
    echo "      1. Visit https://automations.smoothschedule.com"
    echo "      2. Create an admin user (this creates the platform)"
    echo "      3. Get the platform ID from the response or database"
    echo "      4. Update AP_PLATFORM_ID in .envs/.production/.activepieces"
    echo "      5. Also update AP_PLATFORM_ID in .envs/.production/.django"
    echo "      6. Restart Activepieces: docker compose -f docker-compose.production.yml restart activepieces"
else
    echo "    Activepieces platform configured: \$AP_PLATFORM_ID"
fi

# Run migrations unless skipped
if [[ "$SKIP_MIGRATE" != "true" ]]; then
    echo ">>> Running database migrations..."
@@ -210,6 +323,7 @@ echo "Your application should now be running at:"
echo "  - https://smoothschedule.com"
echo "  - https://platform.smoothschedule.com"
echo "  - https://*.smoothschedule.com (tenant subdomains)"
echo "  - https://automations.smoothschedule.com (Activepieces)"
echo ""
echo "To view logs:"
echo "  ssh $SERVER 'cd ~/smoothschedule/smoothschedule && docker compose -f docker-compose.production.yml logs -f'"

@@ -321,9 +321,37 @@ const AppContent: React.FC = () => {
    return hostname === 'localhost' || hostname === '127.0.0.1' || parts.length === 2;
  };

  // On root domain, handle logged-in users appropriately
  if (isRootDomain()) {
    // If user is logged in as a business user (owner, staff, resource), redirect to their tenant dashboard
    if (user) {
      const isBusinessUserOnRoot = ['owner', 'staff', 'resource'].includes(user.role);
      const isCustomerOnRoot = user.role === 'customer';
      const hostname = window.location.hostname;
      const parts = hostname.split('.');
      const baseDomain = parts.length >= 2 ? parts.slice(-2).join('.') : hostname;
      const port = window.location.port ? `:${window.location.port}` : '';
      const protocol = window.location.protocol;

      // Business users on root domain: redirect to their tenant dashboard
      if (isBusinessUserOnRoot && user.business_subdomain) {
        window.location.href = `${protocol}//${user.business_subdomain}.${baseDomain}${port}/dashboard`;
        return <LoadingScreen />;
      }

      // Customers on root domain: log them out and show the form
      // Customers should only access their business subdomain
      if (isCustomerOnRoot) {
        deleteCookie('access_token');
        deleteCookie('refresh_token');
        localStorage.removeItem('masquerade_stack');
        // Don't redirect; reload so they see the page as unauthenticated
        window.location.reload();
        return <LoadingScreen />;
      }
    }

    // Show marketing site for unauthenticated users and platform users (who should use platform subdomain)
    return (
      <Suspense fallback={<LoadingScreen />}>
        <Routes>
@@ -480,16 +508,23 @@ const AppContent: React.FC = () => {
    return <LoadingScreen />;
  }

  // RULE: Customers must only access their own business subdomain
  // If on platform domain or wrong business subdomain, log them out and let them use the form
  if (isCustomer && isPlatformDomain) {
    deleteCookie('access_token');
    deleteCookie('refresh_token');
    localStorage.removeItem('masquerade_stack');
    window.location.reload();
    return <LoadingScreen />;
  }

  if (isCustomer && isBusinessSubdomain && user.business_subdomain && user.business_subdomain !== currentSubdomain) {
    // Customer is on a different business's subdomain - log them out
    // They might be trying to book with a different business
    deleteCookie('access_token');
    deleteCookie('refresh_token');
    localStorage.removeItem('masquerade_stack');
    window.location.reload();
    return <LoadingScreen />;
  }

@@ -723,7 +758,8 @@ const AppContent: React.FC = () => {
          <Route path="/" element={<PublicPage />} />
          <Route path="/book" element={<BookingFlow />} />
          <Route path="/embed" element={<EmbedBooking />} />
          {/* Logged-in business users on their own subdomain get redirected to dashboard */}
          <Route path="/login" element={<Navigate to="/dashboard" replace />} />
          <Route path="/sign/:token" element={<ContractSigning />} />

          {/* Dashboard routes inside BusinessLayout */}

115  scripts/build-activepieces.sh  (Executable file)
@@ -0,0 +1,115 @@
#!/bin/bash
# ==============================================================================
# Build and Deploy Activepieces Docker Image
#
# This script builds the Activepieces image locally and optionally
# transfers it to the production server.
#
# Usage:
#   ./scripts/build-activepieces.sh                      # Build only
#   ./scripts/build-activepieces.sh deploy               # Build and deploy to server
#   ./scripts/build-activepieces.sh deploy user@server   # Custom server
# ==============================================================================

set -e

SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
PROJECT_ROOT="$(dirname "$SCRIPT_DIR")"
AP_DIR="$PROJECT_ROOT/activepieces-fork"

# Colors
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
RED='\033[0;31m'
NC='\033[0m'

print_status() { echo -e "${GREEN}>>> $1${NC}"; }
print_warning() { echo -e "${YELLOW}>>> $1${NC}"; }
print_error() { echo -e "${RED}>>> $1${NC}"; }

# Parse arguments
ACTION="${1:-build}"
SERVER="${2:-poduck@smoothschedule.com}"

IMAGE_NAME="smoothschedule_production_activepieces"
TEMP_FILE="/tmp/activepieces-image.tar.gz"

echo ""
echo "==========================================="
echo "  Activepieces Docker Image Builder"
echo "==========================================="
echo ""

# Check we have the activepieces-fork directory
if [ ! -d "$AP_DIR" ]; then
    print_error "activepieces-fork directory not found at: $AP_DIR"
    exit 1
fi

# ==============================================================================
# Build the image
# ==============================================================================
print_status "Building Activepieces Docker image..."
print_warning "This may take 5-10 minutes and requires 4GB+ RAM"

cd "$AP_DIR"

# Build with progress output
docker build \
    --progress=plain \
    -t "$IMAGE_NAME" \
    .

print_status "Build complete!"

# Show image size
docker images "$IMAGE_NAME" --format "Image size: {{.Size}}"

# ==============================================================================
# Deploy to server (if requested)
# ==============================================================================
if [ "$ACTION" = "deploy" ]; then
    echo ""
    print_status "Preparing to deploy to: $SERVER"

    # Save image to compressed archive
    print_status "Saving image to $TEMP_FILE..."
    docker save "$IMAGE_NAME" | gzip > "$TEMP_FILE"

    # Show file size
    ls -lh "$TEMP_FILE" | awk '{print "Archive size: " $5}'

    # Transfer to server
    print_status "Transferring to server (this may take a few minutes)..."
    scp "$TEMP_FILE" "$SERVER:/tmp/activepieces-image.tar.gz"

    # Load on server
    print_status "Loading image on server..."
    ssh "$SERVER" "gunzip -c /tmp/activepieces-image.tar.gz | docker load && rm /tmp/activepieces-image.tar.gz"

    # Restart Activepieces on server
    print_status "Restarting Activepieces on server..."
    ssh "$SERVER" "cd ~/smoothschedule/smoothschedule && docker compose -f docker-compose.production.yml up -d activepieces"

    # Clean up local temp file
    rm -f "$TEMP_FILE"

    print_status "Deployment complete!"
    echo ""
    echo "Activepieces should now be running with the new image."
    echo "Check status with:"
    echo "  ssh $SERVER 'cd ~/smoothschedule/smoothschedule && docker compose -f docker-compose.production.yml ps activepieces'"
    echo ""
else
    echo ""
    print_status "Image built successfully: $IMAGE_NAME"
    echo ""
    echo "To deploy to production, run:"
    echo "  $0 deploy"
    echo ""
    echo "Or manually:"
    echo "  docker save $IMAGE_NAME | gzip > /tmp/ap.tar.gz"
    echo "  scp /tmp/ap.tar.gz $SERVER:/tmp/"
    echo "  ssh $SERVER 'gunzip -c /tmp/ap.tar.gz | docker load'"
    echo ""
fi
71  smoothschedule/.envs.example/.activepieces  (Normal file)
@@ -0,0 +1,71 @@
# Activepieces Environment Variables
# Copy this file to .envs/.production/.activepieces and fill in values
# ==============================================================================

# External URL (for browser iframe embed)
AP_FRONTEND_URL=https://automations.smoothschedule.com

# Internal URL (for Django API calls within Docker network)
AP_INTERNAL_URL=http://activepieces:80

# Security Keys - MUST match between Activepieces and Django
# Generate with: openssl rand -hex 32 (JWT secret) and openssl rand -hex 16 (encryption key)
AP_JWT_SECRET=<generate-with-openssl-rand-hex-32>
AP_ENCRYPTION_KEY=<generate-with-openssl-rand-hex-16>

# Platform/Project IDs
# ------------------------------------------------------------------------------
# These are generated when you first create an admin user in Activepieces
# Leave blank for initial deployment, then update after setup
#
# IMPORTANT: After initial setup:
#   1. Visit https://automations.smoothschedule.com
#   2. Create an admin account (this creates the platform)
#   3. Get platform ID from the database:
#      docker compose exec postgres psql -U <user> -d activepieces -c "SELECT id FROM platform LIMIT 1"
#   4. Update AP_PLATFORM_ID here AND in .django file
AP_DEFAULT_PROJECT_ID=
AP_PLATFORM_ID=

# Database (using same PostgreSQL as SmoothSchedule, but different database)
# ------------------------------------------------------------------------------
AP_POSTGRES_HOST=postgres
AP_POSTGRES_PORT=5432
AP_POSTGRES_DATABASE=activepieces

# Generate strong random values for these (separate from main DB credentials)
AP_POSTGRES_USERNAME=<random-username>
AP_POSTGRES_PASSWORD=<random-password>

# Redis
# ------------------------------------------------------------------------------
AP_REDIS_HOST=redis
AP_REDIS_PORT=6379

# AI Copilot (optional)
# ------------------------------------------------------------------------------
AP_OPENAI_API_KEY=<your-openai-api-key>

# Execution Settings
# ------------------------------------------------------------------------------
AP_EXECUTION_MODE=UNSANDBOXED
AP_TELEMETRY_ENABLED=false

# Pieces Configuration
# ------------------------------------------------------------------------------
# CLOUD_AND_DB: Fetch pieces from cloud registry and local database
AP_PIECES_SOURCE=CLOUD_AND_DB
# OFFICIAL_AUTO: Automatically sync official pieces metadata from cloud
AP_PIECES_SYNC_MODE=OFFICIAL_AUTO

# Embedding (required for iframe integration)
# ------------------------------------------------------------------------------
AP_EMBEDDING_ENABLED=true

# Templates (fetch official templates from Activepieces cloud)
# ------------------------------------------------------------------------------
AP_TEMPLATES_SOURCE_URL=https://cloud.activepieces.com/api/v1/templates

# Custom Pieces Registry (Verdaccio - internal)
# ------------------------------------------------------------------------------
VERDACCIO_URL=http://verdaccio:4873
75  smoothschedule/.envs.example/.django  (Normal file)
@@ -0,0 +1,75 @@
# Django Environment Variables
# Copy this file to .envs/.production/.django and fill in values
# ==============================================================================

# General Settings
# ------------------------------------------------------------------------------
USE_DOCKER=yes
DJANGO_SETTINGS_MODULE=config.settings.production
DJANGO_SECRET_KEY=<generate-a-strong-secret-key>
DJANGO_ALLOWED_HOSTS=.smoothschedule.com,localhost

# IMPORTANT: Set to False in production
DJANGO_DEBUG=False

# Security
# ------------------------------------------------------------------------------
DJANGO_SECURE_SSL_REDIRECT=True
DJANGO_SECURE_HSTS_INCLUDE_SUBDOMAINS=True
DJANGO_SECURE_HSTS_PRELOAD=True
DJANGO_SECURE_CONTENT_TYPE_NOSNIFF=True

# CORS Configuration
# Set specific origins in production, not all origins
DJANGO_CORS_ALLOWED_ORIGINS=https://smoothschedule.com,https://platform.smoothschedule.com

# Redis
# ------------------------------------------------------------------------------
REDIS_URL=redis://redis:6379/0

# Celery
# ------------------------------------------------------------------------------
CELERY_FLOWER_USER=<random-username>
CELERY_FLOWER_PASSWORD=<random-password>

# Activepieces Integration
# ------------------------------------------------------------------------------
# URL for Activepieces to call SmoothSchedule API (Docker internal network)
SMOOTHSCHEDULE_API_URL=http://django:8000

# These MUST match the values in .activepieces file
AP_FRONTEND_URL=https://automations.smoothschedule.com
AP_INTERNAL_URL=http://activepieces:80
AP_JWT_SECRET=<copy-from-activepieces-file>
AP_ENCRYPTION_KEY=<copy-from-activepieces-file>
AP_DEFAULT_PROJECT_ID=<copy-from-activepieces-file>
AP_PLATFORM_ID=<copy-from-activepieces-file>

# Twilio (for SMS 2FA and phone numbers)
# ------------------------------------------------------------------------------
TWILIO_ACCOUNT_SID=<your-twilio-account-sid>
TWILIO_AUTH_TOKEN=<your-twilio-auth-token>
TWILIO_PHONE_NUMBER=<your-twilio-phone-number>

# Stripe (for payments)
# ------------------------------------------------------------------------------
# Use live keys in production (pk_live_*, sk_live_*)
STRIPE_PUBLISHABLE_KEY=pk_live_<your-stripe-publishable-key>
STRIPE_SECRET_KEY=sk_live_<your-stripe-secret-key>
STRIPE_WEBHOOK_SECRET=whsec_<your-stripe-webhook-secret>

# Mail Server Configuration
# ------------------------------------------------------------------------------
MAIL_SERVER_SSH_HOST=mail.talova.net
MAIL_SERVER_SSH_USER=poduck
MAIL_SERVER_DOCKER_CONTAINER=mailserver
MAIL_SERVER_SSH_KEY_PATH=/app/.ssh/id_ed25519
MAIL_SERVER_SSH_KNOWN_HOSTS_PATH=/app/.ssh/known_hosts

# AWS S3 / DigitalOcean Spaces (for media storage)
# ------------------------------------------------------------------------------
AWS_ACCESS_KEY_ID=<your-spaces-access-key>
AWS_SECRET_ACCESS_KEY=<your-spaces-secret-key>
AWS_STORAGE_BUCKET_NAME=smoothschedule
AWS_S3_REGION_NAME=nyc3
AWS_S3_ENDPOINT_URL=https://nyc3.digitaloceanspaces.com
15  smoothschedule/.envs.example/.postgres  (Normal file)
@@ -0,0 +1,15 @@
# PostgreSQL Environment Variables
# Copy this file to .envs/.production/.postgres and fill in values
# ==============================================================================

POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB=smoothschedule

# Generate strong random values for these (32+ characters recommended)
POSTGRES_USER=<random-username>
POSTGRES_PASSWORD=<random-password>

# Construct DATABASE_URL from the above values
# Format: postgres://USER:PASSWORD@HOST:PORT/DATABASE
DATABASE_URL=postgres://<username>:<password>@postgres:5432/smoothschedule
1  smoothschedule/.gitignore  (vendored)
@@ -276,6 +276,7 @@ smoothschedule/media/
.env
.envs/*
!.envs/.local/
!.envs.example/

# SSH keys for mail server access
.ssh/

72  smoothschedule/compose/production/verdaccio/config.yaml  (Normal file)
@@ -0,0 +1,72 @@
# Verdaccio configuration for SmoothSchedule custom Activepieces pieces

storage: /verdaccio/storage
plugins: /verdaccio/plugins

web:
  enable: true
  title: SmoothSchedule NPM Registry

# Authentication - allow anyone to read, require auth to publish
auth:
  htpasswd:
    file: /verdaccio/storage/htpasswd
    # Allow up to 100 users to register
    max_users: 100

# Uplink to official npm registry for packages we don't have
uplinks:
  npmjs:
    url: https://registry.npmjs.org/
    cache: true

# Package access rules
packages:
  # Our custom Activepieces pieces - serve locally, allow anonymous publish
  '@activepieces/piece-smoothschedule':
    access: $all
    publish: $anonymous
    # No proxy - only serve from local storage

  '@activepieces/piece-python-code':
    access: $all
    publish: $anonymous

  '@activepieces/piece-ruby-code':
    access: $all
    publish: $anonymous

  '@activepieces/piece-interfaces':
    access: $all
    publish: $anonymous

  # All other @activepieces packages - proxy to npm
  '@activepieces/*':
    access: $all
    publish: $authenticated
    proxy: npmjs

  # All other scoped packages - proxy to npm
  '@*/*':
    access: $all
    publish: $authenticated
    proxy: npmjs

  # All unscoped packages - proxy to npm
  '**':
    access: $all
    publish: $authenticated
    proxy: npmjs

# Server settings
server:
  keepAliveTimeout: 60

# Middleware
middlewares:
  audit:
    enabled: true

# Logging
logs:
  - { type: stdout, format: pretty, level: warn }
@@ -109,12 +109,14 @@ if AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY and AWS_STORAGE_BUCKET_NAME:
        "OPTIONS": {
            "location": "media",
            "file_overwrite": False,
            "default_acl": "public-read",
        },
    },
    "staticfiles": {
        "BACKEND": "storages.backends.s3.S3Storage",
        "OPTIONS": {
            "location": "static",
            "default_acl": "public-read",
        },
    },
}

@@ -1,6 +1,7 @@
from django.conf import settings
from django.conf.urls.static import static
from django.contrib import admin
from django.http import FileResponse, Http404
from django.urls import include
from django.urls import path
from django.views import defaults as default_views
@@ -9,6 +10,28 @@ from django.views.generic import TemplateView
from drf_spectacular.views import SpectacularAPIView
from drf_spectacular.views import SpectacularSwaggerView
from rest_framework.authtoken.views import obtain_auth_token
import os


def serve_static_image(request, filename):
    """Serve static images with CORS headers for Activepieces integration."""
    allowed_files = {
        'logo-branding.png': 'image/png',
        'python-logo.svg': 'image/svg+xml',
        'ruby-logo.svg': 'image/svg+xml',
    }
    if filename not in allowed_files:
        raise Http404("Image not found")

    # Try to find the file in static directories
    for static_dir in settings.STATICFILES_DIRS:
        file_path = os.path.join(static_dir, 'images', filename)
        if os.path.exists(file_path):
            response = FileResponse(open(file_path, 'rb'), content_type=allowed_files[filename])
            response['Access-Control-Allow-Origin'] = '*'
            response['Cache-Control'] = 'public, max-age=86400'
            return response
    raise Http404("Image not found")

from smoothschedule.identity.users.api_views import (
    login_view, current_user_view, logout_view, send_verification_email, verify_email,
@@ -45,6 +68,8 @@ from smoothschedule.identity.core.api_views import (
)

urlpatterns = [
    # Static images with CORS (for Activepieces integration)
    path("images/<str:filename>", serve_static_image, name="serve_static_image"),
    # Django Admin, use {% url 'admin:index' %}
    path(settings.ADMIN_URL, admin.site.urls),
    # User management

@@ -4,6 +4,36 @@ volumes:
  smoothschedule_local_redis_data: {}
  smoothschedule_local_activepieces_cache: {}

# Memory limits for local development
# Prevents containers from consuming all system RAM and freezing
# Adjust if you have more/less RAM available
x-memory-limits:
  small: &mem-small
    deploy:
      resources:
        limits:
          memory: 128M
  medium: &mem-medium
    deploy:
      resources:
        limits:
          memory: 256M
  large: &mem-large
    deploy:
      resources:
        limits:
          memory: 512M
  xlarge: &mem-xlarge
    deploy:
      resources:
        limits:
          memory: 768M
  database: &mem-database
    deploy:
      resources:
        limits:
          memory: 512M

services:
  django: &django
    build:
@@ -11,6 +41,7 @@ services:
      dockerfile: ./compose/local/django/Dockerfile
    image: smoothschedule_local_django
    container_name: smoothschedule_local_django
    <<: *mem-large
    depends_on:
      - postgres
      - redis

@@ -4,6 +4,46 @@ volumes:
  production_traefik: {}
  production_redis_data: {}
  production_activepieces_cache: {}
  production_verdaccio_storage: {}

# Memory limits for 2GB RAM server
# Total allocated: ~1.6GB, leaving ~400MB for system/OS
x-memory-limits:
  small: &mem-small
    deploy:
      resources:
        limits:
          memory: 64M
        reservations:
          memory: 32M
  medium: &mem-medium
    deploy:
      resources:
        limits:
          memory: 128M
        reservations:
          memory: 64M
  large: &mem-large
    deploy:
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 128M
  xlarge: &mem-xlarge
    deploy:
      resources:
        limits:
          memory: 384M
        reservations:
          memory: 192M
  database: &mem-database
    deploy:
      resources:
        limits:
          memory: 512M
        reservations:
          memory: 256M

services:
  django: &django
@@ -12,6 +52,7 @@ services:
      dockerfile: ./compose/production/django/Dockerfile

    image: smoothschedule_production_django
    <<: *mem-large
    depends_on:
      - postgres
      - redis
@@ -87,6 +128,15 @@ services:
    volumes:
      - production_postgres_data_backups:/backups:z

  verdaccio:
    image: verdaccio/verdaccio:5
    restart: unless-stopped
    volumes:
      - production_verdaccio_storage:/verdaccio/storage
      - ./compose/production/verdaccio/config.yaml:/verdaccio/conf/config.yaml:ro
    environment:
      - VERDACCIO_PORT=4873

  activepieces:
    build:
      context: ../activepieces-fork
@@ -96,6 +146,7 @@ services:
    depends_on:
      - postgres
      - redis
      - verdaccio
    env_file:
      - ./.envs/.production/.activepieces
    volumes:

278  smoothschedule/scripts/init-production.sh  (Executable file)
@@ -0,0 +1,278 @@
#!/bin/bash
# ==============================================================================
# SmoothSchedule Production Initialization Script
#
# Run this ONCE on a fresh production server to set up everything.
# Subsequent deployments should use deploy.sh
#
# Usage: ./scripts/init-production.sh
# ==============================================================================

set -e

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
NC='\033[0m' # No Color

print_status() { echo -e "${GREEN}>>> $1${NC}"; }
print_warning() { echo -e "${YELLOW}>>> $1${NC}"; }
print_error() { echo -e "${RED}>>> $1${NC}"; }
print_info() { echo -e "${BLUE}>>> $1${NC}"; }

echo ""
echo "==========================================="
echo "  SmoothSchedule Production Initialization"
echo "==========================================="
echo ""

# Check we're in the right directory
if [ ! -f "docker-compose.production.yml" ]; then
    print_error "Must run from smoothschedule/smoothschedule directory"
    exit 1
fi

# ==============================================================================
# Step 1: Check/Create Environment Files
# ==============================================================================
print_status "Step 1: Checking environment files..."

if [ ! -d ".envs/.production" ]; then
    print_warning "Production environment not found. Creating from templates..."
    mkdir -p .envs/.production

    if [ -d ".envs.example" ]; then
        cp .envs.example/.django .envs/.production/.django
        cp .envs.example/.postgres .envs/.production/.postgres
        cp .envs.example/.activepieces .envs/.production/.activepieces
        print_info "Copied template files to .envs/.production/"
        print_warning "IMPORTANT: Edit these files with your actual values before continuing!"
        print_warning "Required files:"
        echo "  - .envs/.production/.django"
        echo "  - .envs/.production/.postgres"
        echo "  - .envs/.production/.activepieces"
        echo ""
        read -p "Press Enter after you've edited the files, or Ctrl+C to abort..."
    else
        print_error "Template files not found in .envs.example/"
        print_error "Please create .envs/.production/ manually with .django, .postgres, and .activepieces"
        exit 1
    fi
fi

print_status "Environment files exist."

# ==============================================================================
# Step 2: Generate Security Keys (if not set)
# ==============================================================================
print_status "Step 2: Checking security keys..."

DJANGO_FILE=".envs/.production/.django"
AP_FILE=".envs/.production/.activepieces"

# Check if DJANGO_SECRET_KEY is a placeholder
if grep -q "<generate" "$DJANGO_FILE" 2>/dev/null; then
    print_warning "Generating Django secret key..."
    NEW_SECRET=$(openssl rand -hex 32)
    sed -i "s/<generate-a-strong-secret-key>/$NEW_SECRET/" "$DJANGO_FILE"
    print_info "Django secret key generated."
fi

# Check if AP_JWT_SECRET is a placeholder
if grep -q "<generate" "$AP_FILE" 2>/dev/null; then
    print_warning "Generating Activepieces JWT secret..."
    NEW_JWT=$(openssl rand -hex 32)
    sed -i "s/<generate-with-openssl-rand-hex-32>/$NEW_JWT/" "$AP_FILE"
    # Also update in Django file (anchor to the variable name so only the
    # AP_JWT_SECRET line is replaced - other AP_* vars share the same placeholder)
    sed -i "s/^AP_JWT_SECRET=<copy-from-activepieces-file>/AP_JWT_SECRET=$NEW_JWT/" "$DJANGO_FILE" 2>/dev/null || true
    print_info "JWT secret generated."

    print_warning "Generating Activepieces encryption key..."
    NEW_ENC=$(openssl rand -hex 16)
    sed -i "s/<generate-with-openssl-rand-hex-16>/$NEW_ENC/" "$AP_FILE"
    print_info "Encryption key generated."
fi

# ==============================================================================
# Step 3: Pull/Build Docker Images
# ==============================================================================
print_status "Step 3: Building Docker images..."

# Check if Activepieces image exists or needs to be pulled/loaded
if ! docker images | grep -q "smoothschedule_production_activepieces"; then
    print_warning "Activepieces image not found locally."
|
||||
print_info "The production server typically cannot build this image (requires 4GB+ RAM)."
|
||||
print_info "Options:"
|
||||
echo " 1. Build on a dev machine and transfer:"
|
||||
echo " cd activepieces-fork"
|
||||
echo " docker build -t smoothschedule_production_activepieces ."
|
||||
echo " docker save smoothschedule_production_activepieces | gzip > /tmp/ap.tar.gz"
|
||||
echo " scp /tmp/ap.tar.gz server:/tmp/"
|
||||
echo " ssh server 'gunzip -c /tmp/ap.tar.gz | docker load'"
|
||||
echo ""
|
||||
read -p "Press Enter after you've loaded the Activepieces image, or type 'skip' to continue anyway: " SKIP_AP
|
||||
|
||||
if [ "$SKIP_AP" != "skip" ]; then
|
||||
if ! docker images | grep -q "smoothschedule_production_activepieces"; then
|
||||
print_error "Activepieces image still not found. Please load it first."
|
||||
exit 1
|
||||
fi
|
||||
fi
|
||||
fi
|
||||
|
||||
# Build other images
|
||||
print_info "Building Django and other images..."
|
||||
docker compose -f docker-compose.production.yml build django nginx
|
||||
|
||||
# ==============================================================================
|
||||
# Step 4: Start Core Services
|
||||
# ==============================================================================
|
||||
print_status "Step 4: Starting core services..."
|
||||
|
||||
docker compose -f docker-compose.production.yml up -d postgres redis
|
||||
print_info "Waiting for PostgreSQL to be ready..."
|
||||
sleep 10
|
||||
|
||||
# ==============================================================================
|
||||
# Step 5: Create Databases
|
||||
# ==============================================================================
|
||||
print_status "Step 5: Setting up databases..."
|
||||
|
||||
# Get credentials from env files
|
||||
POSTGRES_USER=$(grep -E "^POSTGRES_USER=" .envs/.production/.postgres | cut -d= -f2)
|
||||
POSTGRES_PASSWORD=$(grep -E "^POSTGRES_PASSWORD=" .envs/.production/.postgres | cut -d= -f2)
|
||||
AP_DB_USER=$(grep -E "^AP_POSTGRES_USERNAME=" .envs/.production/.activepieces | cut -d= -f2)
|
||||
AP_DB_PASS=$(grep -E "^AP_POSTGRES_PASSWORD=" .envs/.production/.activepieces | cut -d= -f2)
|
||||
AP_DB_NAME=$(grep -E "^AP_POSTGRES_DATABASE=" .envs/.production/.activepieces | cut -d= -f2)
|
||||
|
||||
# Wait for PostgreSQL to accept connections
|
||||
for i in {1..30}; do
|
||||
if docker compose -f docker-compose.production.yml exec -T postgres pg_isready -U "$POSTGRES_USER" > /dev/null 2>&1; then
|
||||
print_info "PostgreSQL is ready."
|
||||
break
|
||||
fi
|
||||
echo " Waiting for PostgreSQL... ($i/30)"
|
||||
sleep 2
|
||||
done
|
||||
|
||||
# Create Activepieces database and user
|
||||
print_info "Creating Activepieces database..."
|
||||
docker compose -f docker-compose.production.yml exec -T postgres psql -U "$POSTGRES_USER" -d postgres << EOSQL
|
||||
-- Create user if not exists
|
||||
DO \$\$
|
||||
BEGIN
|
||||
IF NOT EXISTS (SELECT FROM pg_roles WHERE rolname = '$AP_DB_USER') THEN
|
||||
CREATE USER "$AP_DB_USER" WITH PASSWORD '$AP_DB_PASS';
|
||||
END IF;
|
||||
END
|
||||
\$\$;
|
||||
|
||||
-- Create database if not exists
|
||||
SELECT 'CREATE DATABASE $AP_DB_NAME OWNER "$AP_DB_USER"'
|
||||
WHERE NOT EXISTS (SELECT FROM pg_database WHERE datname = '$AP_DB_NAME')\gexec
|
||||
EOSQL
|
||||
|
||||
print_info "Databases configured."
|
||||
|
||||
# ==============================================================================
|
||||
# Step 6: Start All Services
|
||||
# ==============================================================================
|
||||
print_status "Step 6: Starting all services..."
|
||||
|
||||
docker compose -f docker-compose.production.yml up -d
|
||||
|
||||
print_info "Waiting for services to start..."
|
||||
sleep 15
|
||||
|
||||
# ==============================================================================
|
||||
# Step 7: Run Django Migrations
|
||||
# ==============================================================================
|
||||
print_status "Step 7: Running Django migrations..."
|
||||
|
||||
docker compose -f docker-compose.production.yml exec -T django python manage.py migrate
|
||||
docker compose -f docker-compose.production.yml exec -T django python manage.py collectstatic --noinput
|
||||
|
||||
# ==============================================================================
|
||||
# Step 8: Create Django Superuser
|
||||
# ==============================================================================
|
||||
print_status "Step 8: Django superuser..."
|
||||
|
||||
echo "Would you like to create a Django superuser? (y/n)"
|
||||
read -r CREATE_SUPER
|
||||
if [ "$CREATE_SUPER" = "y" ]; then
|
||||
docker compose -f docker-compose.production.yml exec django python manage.py createsuperuser
|
||||
fi
|
||||
|
||||
# ==============================================================================
|
||||
# Step 9: Initialize Activepieces Platform
|
||||
# ==============================================================================
|
||||
print_status "Step 9: Initializing Activepieces platform..."
|
||||
|
||||
# Check if Activepieces is healthy
|
||||
AP_HEALTHY=false
|
||||
for i in {1..30}; do
|
||||
if curl -sf http://localhost:80/api/v1/health > /dev/null 2>&1; then
|
||||
AP_HEALTHY=true
|
||||
print_info "Activepieces is healthy."
|
||||
break
|
||||
fi
|
||||
echo " Waiting for Activepieces... ($i/30)"
|
||||
sleep 2
|
||||
done
|
||||
|
||||
if [ "$AP_HEALTHY" = "false" ]; then
|
||||
print_warning "Activepieces health check timed out. Check logs with:"
|
||||
echo " docker compose -f docker-compose.production.yml logs activepieces"
|
||||
fi
|
||||
|
||||
# Check if platform ID is configured
|
||||
AP_PLATFORM_ID=$(grep -E "^AP_PLATFORM_ID=" .envs/.production/.activepieces | cut -d= -f2)
|
||||
|
||||
if [ -z "$AP_PLATFORM_ID" ]; then
|
||||
print_warning "Activepieces platform not yet initialized."
|
||||
echo ""
|
||||
print_info "To complete Activepieces setup:"
|
||||
echo " 1. Visit https://automations.smoothschedule.com"
|
||||
echo " 2. Sign up to create the first admin user (this creates the platform)"
|
||||
echo " 3. Get the platform ID:"
|
||||
echo " docker compose -f docker-compose.production.yml exec postgres psql -U $AP_DB_USER -d $AP_DB_NAME -c 'SELECT id FROM platform'"
|
||||
echo " 4. Update AP_PLATFORM_ID in both:"
|
||||
echo " - .envs/.production/.activepieces"
|
||||
echo " - .envs/.production/.django"
|
||||
echo " 5. Restart services:"
|
||||
echo " docker compose -f docker-compose.production.yml restart"
|
||||
echo ""
|
||||
else
|
||||
print_info "Activepieces platform ID configured: $AP_PLATFORM_ID"
|
||||
fi
|
||||
|
||||
# ==============================================================================
|
||||
# Step 10: Show Status
|
||||
# ==============================================================================
|
||||
print_status "Step 10: Checking service status..."
|
||||
|
||||
echo ""
|
||||
docker compose -f docker-compose.production.yml ps
|
||||
|
||||
echo ""
|
||||
echo "==========================================="
|
||||
print_status "Initialization Complete!"
|
||||
echo "==========================================="
|
||||
echo ""
|
||||
echo "Your application should be available at:"
|
||||
echo " - https://smoothschedule.com"
|
||||
echo " - https://platform.smoothschedule.com"
|
||||
echo " - https://automations.smoothschedule.com"
|
||||
echo ""
|
||||
echo "Next steps:"
|
||||
echo " 1. Complete Activepieces setup (if platform ID not set)"
|
||||
echo " 2. Create your first tenant via Django admin"
|
||||
echo " 3. Run: python manage.py provision_ap_connections"
|
||||
echo ""
|
||||
echo "Useful commands:"
|
||||
echo " View logs: docker compose -f docker-compose.production.yml logs -f"
|
||||
echo " Restart: docker compose -f docker-compose.production.yml restart"
|
||||
echo " Django shell: docker compose -f docker-compose.production.yml exec django python manage.py shell"
|
||||
echo ""
|
||||
@@ -180,3 +180,70 @@ def seed_email_templates_on_tenant_create(sender, instance, created, **kwargs):

    schema_name = instance.schema_name
    transaction.on_commit(lambda: _seed_email_templates_for_tenant(schema_name))


def _provision_activepieces_connection(tenant_id):
    """
    Provision SmoothSchedule connection in Activepieces for a tenant.
    Called after the transaction commits to ensure the tenant is fully saved.
    """
    from smoothschedule.identity.core.models import Tenant
    from django.conf import settings

    # Only provision if Activepieces is configured
    if not getattr(settings, 'ACTIVEPIECES_JWT_SECRET', ''):
        logger.debug("Activepieces not configured, skipping connection provisioning")
        return

    try:
        tenant = Tenant.objects.get(id=tenant_id)

        # Check if tenant has the automation feature (optional check)
        if hasattr(tenant, 'has_feature') and not tenant.has_feature('can_use_plugins'):
            logger.debug(
                f"Tenant {tenant.schema_name} doesn't have automation feature, "
                "skipping Activepieces connection"
            )
            return

        # Import here to avoid circular imports
        from smoothschedule.integrations.activepieces.services import provision_tenant_connection

        success = provision_tenant_connection(tenant)
        if success:
            logger.info(
                f"Provisioned Activepieces connection for tenant: {tenant.schema_name}"
            )
        else:
            logger.warning(
                f"Failed to provision Activepieces connection for tenant: {tenant.schema_name}"
            )

    except Tenant.DoesNotExist:
        logger.error(f"Tenant {tenant_id} not found when provisioning Activepieces connection")
    except Exception as e:
        logger.error(f"Failed to provision Activepieces connection for tenant {tenant_id}: {e}")


@receiver(post_save, sender='core.Tenant')
def provision_activepieces_on_tenant_create(sender, instance, created, **kwargs):
    """
    Provision SmoothSchedule connection in Activepieces when a new tenant is created.

    This ensures every new tenant has a pre-configured, protected connection
    to SmoothSchedule in Activepieces so they can immediately use automation
    features without manual setup.

    Uses transaction.on_commit() to defer provisioning until after the tenant
    is fully saved.
    """
    if not created:
        return

    # Skip the public schema
    if instance.schema_name == 'public':
        return

    tenant_id = instance.id
    # Defer until the transaction commits so all other tenant setup is complete
    transaction.on_commit(lambda: _provision_activepieces_connection(tenant_id))

@@ -0,0 +1,150 @@
"""
Management command to provision SmoothSchedule connections in Activepieces
for all existing tenants.

Usage:
    docker compose -f docker-compose.local.yml exec django \
        python manage.py provision_ap_connections

Options:
    --tenant SCHEMA_NAME   Only provision for a specific tenant
    --dry-run              Show what would be done without making changes
"""
import logging
import time
from django.core.management.base import BaseCommand
from django.conf import settings

from smoothschedule.identity.core.models import Tenant
from smoothschedule.integrations.activepieces.services import provision_tenant_connection

logger = logging.getLogger(__name__)


class Command(BaseCommand):
    help = "Provision SmoothSchedule connections in Activepieces for existing tenants"

    def add_arguments(self, parser):
        parser.add_argument(
            "--tenant",
            type=str,
            help="Only provision for a specific tenant (schema_name)",
        )
        parser.add_argument(
            "--dry-run",
            action="store_true",
            help="Show what would be done without making changes",
        )
        parser.add_argument(
            "--force",
            action="store_true",
            help="Re-provision even if connection already exists",
        )
        parser.add_argument(
            "--delay",
            type=float,
            default=1.0,
            help="Delay between tenant provisioning in seconds (default: 1.0)",
        )

    def handle(self, *args, **options):
        tenant_filter = options["tenant"]
        dry_run = options["dry_run"]
        force = options["force"]
        delay = options["delay"]

        # Check if Activepieces is configured
        if not getattr(settings, "ACTIVEPIECES_JWT_SECRET", ""):
            self.stderr.write(
                self.style.ERROR(
                    "Activepieces is not configured. "
                    "Set ACTIVEPIECES_JWT_SECRET in your environment."
                )
            )
            return

        # Get tenants to provision
        if tenant_filter:
            tenants = Tenant.objects.filter(schema_name=tenant_filter)
            if not tenants.exists():
                self.stderr.write(
                    self.style.ERROR(f"Tenant '{tenant_filter}' not found")
                )
                return
        else:
            # Exclude public schema
            tenants = Tenant.objects.exclude(schema_name="public")

        total_count = tenants.count()
        self.stdout.write(f"Found {total_count} tenant(s) to process")

        if dry_run:
            self.stdout.write(self.style.WARNING("DRY RUN - no changes will be made"))

        success_count = 0
        skip_count = 0
        error_count = 0

        for i, tenant in enumerate(tenants, 1):
            self.stdout.write(f"\n[{i}/{total_count}] Processing tenant: {tenant.schema_name}")

            # Check if tenant already has a connection (unless force is set)
            if not force:
                from smoothschedule.integrations.activepieces.models import TenantActivepiecesProject
                if TenantActivepiecesProject.objects.filter(tenant=tenant).exists():
                    self.stdout.write(
                        self.style.WARNING(
                            "  Skipping - already has Activepieces project "
                            "(use --force to re-provision)"
                        )
                    )
                    skip_count += 1
                    continue

            # Check feature access (skip this check if --force is used)
            if not force and hasattr(tenant, 'has_feature') and not tenant.has_feature('can_use_plugins'):
                self.stdout.write(
                    self.style.WARNING(
                        "  Skipping - tenant doesn't have automation feature (use --force to bypass)"
                    )
                )
                skip_count += 1
                continue

            if dry_run:
                self.stdout.write(
                    self.style.SUCCESS(f"  Would provision connection for {tenant.name}")
                )
                success_count += 1
                continue

            # Actually provision the connection
            try:
                success = provision_tenant_connection(tenant)
                if success:
                    self.stdout.write(
                        self.style.SUCCESS("  Successfully provisioned connection")
                    )
                    success_count += 1
                else:
                    self.stdout.write(
                        self.style.ERROR("  Failed to provision connection")
                    )
                    error_count += 1
            except Exception as e:
                self.stdout.write(
                    self.style.ERROR(f"  Error: {e}")
                )
                error_count += 1

            # Add delay between tenants to avoid overwhelming Activepieces
            if i < total_count and delay > 0:
                time.sleep(delay)

        # Summary
        self.stdout.write("\n" + "=" * 50)
        self.stdout.write("Provisioning complete:")
        self.stdout.write(self.style.SUCCESS(f"  Success: {success_count}"))
        self.stdout.write(self.style.WARNING(f"  Skipped: {skip_count}"))
        if error_count > 0:
            self.stdout.write(self.style.ERROR(f"  Errors: {error_count}"))
@@ -35,6 +35,8 @@ ACTIVEPIECES_API_SCOPES = [
    "customers:read",
    "customers:write",
    "business:read",
    "emails:read",
    "emails:write",
]


@@ -256,6 +258,7 @@ class ActivepiecesClient:

        # Build the connection upsert request
        # This uses Activepieces' app-connection API
        # The 'protected' metadata prevents users from deleting this connection
        connection_data = {
            "externalId": f"smoothschedule-{tenant.schema_name}",
            "displayName": f"SmoothSchedule ({tenant.name})",
@@ -270,6 +273,12 @@ class ActivepiecesClient:
                    "subdomain": tenant.schema_name,
                },
            },
            "metadata": {
                "protected": True,
                "autoSelect": True,
                "source": "auto-provisioned",
                "description": "Auto-created connection for SmoothSchedule integration. Cannot be deleted.",
            },
        }

        try:
@@ -345,6 +354,67 @@ def get_activepieces_client() -> ActivepiecesClient:
    return ActivepiecesClient()


def provision_tenant_connection(tenant) -> bool:
    """
    Provision SmoothSchedule connection for a tenant in Activepieces.

    This creates the Activepieces project (if needed) and provisions
    a protected SmoothSchedule connection so users can immediately
    use SmoothSchedule triggers and actions.

    Args:
        tenant: The SmoothSchedule tenant (Client model)

    Returns:
        True if connection was successfully provisioned, False otherwise
    """
    from .models import TenantActivepiecesProject

    client = get_activepieces_client()

    try:
        # Get or create the Activepieces project for this tenant
        # by exchanging a trust token
        provisioning_token = client._generate_trust_token(tenant)
        result = client._request(
            "POST",
            "/api/v1/authentication/django-trust",
            data={"token": provisioning_token},
        )

        session_token = result.get("token")
        project_id = result.get("projectId")

        if not session_token or not project_id:
            logger.error(
                f"Failed to get Activepieces session for tenant {tenant.id}: "
                "missing token or projectId"
            )
            return False

        # Store the project mapping
        TenantActivepiecesProject.objects.update_or_create(
            tenant=tenant,
            defaults={
                "activepieces_project_id": project_id,
            },
        )

        # Provision the protected SmoothSchedule connection
        client._provision_smoothschedule_connection(tenant, session_token, project_id)

        logger.info(
            f"Successfully provisioned SmoothSchedule connection for tenant {tenant.id}"
        )
        return True

    except Exception as e:
        logger.error(
            f"Failed to provision SmoothSchedule connection for tenant {tenant.id}: {e}"
        )
        return False


def dispatch_event_webhook(tenant, event_type: str, payload: dict) -> None:
    """
    Dispatch a SmoothSchedule event to Activepieces webhooks.

@@ -32,6 +32,8 @@ class APIScope:
    CUSTOMERS_WRITE = 'customers:write'
    BUSINESS_READ = 'business:read'
    WEBHOOKS_MANAGE = 'webhooks:manage'
    EMAILS_READ = 'emails:read'
    EMAILS_WRITE = 'emails:write'

    CHOICES = [
        (SERVICES_READ, 'View services and pricing'),
@@ -43,6 +45,8 @@ class APIScope:
        (CUSTOMERS_WRITE, 'Create and update customers'),
        (BUSINESS_READ, 'View business information'),
        (WEBHOOKS_MANAGE, 'Manage webhook subscriptions'),
        (EMAILS_READ, 'View email templates'),
        (EMAILS_WRITE, 'Send emails using templates'),
    ]

    ALL_SCOPES = [choice[0] for choice in CHOICES]

@@ -678,3 +678,86 @@ class RateLimitErrorSerializer(ErrorSerializer):
    retry_after = serializers.IntegerField(
        help_text="Seconds to wait before retrying"
    )


# =============================================================================
# Email Serializers
# =============================================================================

class EmailTemplateListSerializer(serializers.Serializer):
    """Serializer for listing available email templates."""
    slug = serializers.CharField(help_text="Template identifier (use this in send requests)")
    name = serializers.CharField(help_text="Human-readable template name")
    description = serializers.CharField(help_text="Description of when/how this template is used")
    type = serializers.CharField(help_text="Template type: 'system' or 'custom'")


class SendEmailSerializer(serializers.Serializer):
    """
    Serializer for sending an email using a template.

    You can specify either a system template (by email_type) or a custom template (by template_slug).
    """
    # Template identification - one of these is required
    email_type = serializers.CharField(
        required=False,
        help_text="System email type (e.g., 'appointment_confirmation', 'appointment_reminder')"
    )
    template_slug = serializers.CharField(
        required=False,
        help_text="Custom template slug (e.g., 'monthly-newsletter')"
    )

    # Recipient
    to_email = serializers.EmailField(
        required=True,
        help_text="Recipient email address"
    )

    # Context variables for template rendering
    context = serializers.DictField(
        required=False,
        default=dict,
        help_text="Dictionary of template variables (e.g., {'customer_name': 'John', 'appointment_date': 'Dec 15'})"
    )

    # Optional overrides
    subject_override = serializers.CharField(
        required=False,
        allow_blank=True,
        help_text="Override the template's subject line"
    )
    from_email = serializers.EmailField(
        required=False,
        help_text="Override the sender email address"
    )
    reply_to = serializers.EmailField(
        required=False,
        help_text="Reply-to email address"
    )

    def validate(self, data):
        """Validate that exactly one of email_type or template_slug is provided."""
        email_type = data.get('email_type')
        template_slug = data.get('template_slug')

        if not email_type and not template_slug:
            raise serializers.ValidationError({
                'email_type': 'Either email_type or template_slug is required.',
                'template_slug': 'Either email_type or template_slug is required.',
            })

        if email_type and template_slug:
            raise serializers.ValidationError({
                'email_type': 'Provide only one of email_type or template_slug, not both.',
                'template_slug': 'Provide only one of email_type or template_slug, not both.',
            })

        return data


class SendEmailResponseSerializer(serializers.Serializer):
    """Response after sending an email."""
    success = serializers.BooleanField(help_text="Whether the email was sent successfully")
    message = serializers.CharField(help_text="Status message")
    template_used = serializers.CharField(help_text="Template that was used")

@@ -27,6 +27,8 @@ from .views import (
    PublicAppointmentViewSet,
    PublicCustomerViewSet,
    WebhookViewSet,
    EmailTemplateListView,
    SendEmailView,
)

app_name = 'public_api'
@@ -135,6 +137,7 @@ All errors follow this format:
        {'name': 'Availability', 'description': 'Availability checking'},
        {'name': 'Appointments', 'description': 'Appointment/booking management'},
        {'name': 'Customers', 'description': 'Customer management'},
        {'name': 'Emails', 'description': 'Send emails using templates'},
        {'name': 'Webhooks', 'description': 'Webhook subscriptions'},
        {'name': 'Tokens', 'description': 'API token management'},
    ],
@@ -147,6 +150,10 @@ All errors follow this format:
    path('business/', PublicBusinessView.as_view(), name='business'),
    path('availability/', AvailabilityView.as_view(), name='availability'),

    # Email Endpoints
    path('emails/templates/', EmailTemplateListView.as_view(), name='email-templates'),
    path('emails/send/', SendEmailView.as_view(), name='send-email'),

    # ViewSet routes
    path('', include(router.urls)),
]

@@ -1898,3 +1898,247 @@ class WebhookViewSet(PublicAPIViewMixin, viewsets.ViewSet):
            'message': 'Test webhook queued for delivery',
            'subscription_id': str(subscription.id),
        })


# =============================================================================
# Email Endpoints
# =============================================================================

@extend_schema(
    summary="List email templates",
    description=(
        "Get all available email templates that can be used with the send_email action. "
        "Returns both system templates (appointment_confirmation, etc.) and custom templates."
    ),
    responses={200: OpenApiResponse(description="List of available templates")},
    tags=['Emails'],
)
class EmailTemplateListView(PublicAPIViewMixin, APIView):
    """
    List available email templates for automation.

    Returns system email templates and custom templates
    that can be used with the /emails/send endpoint.

    **Required scope:** `emails:read` (or any write scope)
    """
    permission_classes = [HasAPIToken]

    def get(self, request):
        tenant = self.get_tenant()
        if not tenant:
            return Response(
                {'error': 'not_found', 'message': 'Business not found'},
                status=status.HTTP_404_NOT_FOUND
            )

        templates = []

        # Add system email templates
        from smoothschedule.communication.messaging.email_types import EmailType

        with schema_context(tenant.schema_name):
            for email_type in EmailType:
                templates.append({
                    'slug': email_type.value,
                    'name': EmailType.get_display_name(email_type),
                    'description': EmailType.get_description(email_type),
                    'type': 'system',
                })

            # Add custom templates
            from smoothschedule.communication.messaging.models import CustomEmailTemplate

            for template in CustomEmailTemplate.objects.filter(is_active=True):
                templates.append({
                    'slug': template.slug,
                    'name': template.name,
                    'description': template.description or '',
                    'type': 'custom',
                })

        return Response(templates)


@extend_schema(
    summary="Send email using template",
    description=(
        "Send an email using a system or custom template. "
        "Provide context variables to personalize the email content. "
        "Available template variables depend on the template type."
    ),
    request={
        'application/json': {
            'type': 'object',
            'properties': {
                'email_type': {'type': 'string', 'description': 'System email type'},
                'template_slug': {'type': 'string', 'description': 'Custom template slug'},
                'to_email': {'type': 'string', 'format': 'email', 'description': 'Recipient email'},
                'context': {'type': 'object', 'description': 'Template variables'},
                'subject_override': {'type': 'string', 'description': 'Override subject'},
                'reply_to': {'type': 'string', 'format': 'email', 'description': 'Reply-to address'},
            },
            'required': ['to_email'],
        }
    },
    responses={
        200: OpenApiResponse(description="Email sent successfully"),
        400: OpenApiResponse(description="Validation error"),
        404: OpenApiResponse(description="Template not found"),
    },
    tags=['Emails'],
)
class SendEmailView(PublicAPIViewMixin, APIView):
    """
    Send an email using a template.

    You can use either:
    - System templates: email_type (e.g., 'appointment_confirmation')
    - Custom templates: template_slug (e.g., 'monthly-newsletter')

    Context variables are replaced in the template.
    For system templates, only allowed variables for that type are accepted.
    For custom templates, all variables are allowed.

    **Required scope:** `emails:write`
    """
    permission_classes = [HasAPIToken]

    def post(self, request):
        from .serializers import SendEmailSerializer

        serializer = SendEmailSerializer(data=request.data)
        if not serializer.is_valid():
            return Response(
                {'error': 'validation_error', 'message': 'Invalid request', 'details': serializer.errors},
                status=status.HTTP_400_BAD_REQUEST
            )

        tenant = self.get_tenant()
        if not tenant:
            return Response(
                {'error': 'not_found', 'message': 'Business not found'},
                status=status.HTTP_404_NOT_FOUND
            )

        validated_data = serializer.validated_data
        email_type = validated_data.get('email_type')
        template_slug = validated_data.get('template_slug')
        to_email = validated_data['to_email']
        context = validated_data.get('context', {})
        subject_override = validated_data.get('subject_override')
        from_email = validated_data.get('from_email')
        reply_to = validated_data.get('reply_to')

        with schema_context(tenant.schema_name):
            try:
                if email_type:
                    # Send using system template
                    from smoothschedule.communication.messaging.email_types import EmailType
                    from smoothschedule.communication.messaging.email_service import send_system_email

                    try:
                        email_type_enum = EmailType(email_type)
                    except ValueError:
                        return Response(
                            {'error': 'not_found', 'message': f"Unknown email type: {email_type}"},
                            status=status.HTTP_404_NOT_FOUND
                        )

                    # Add business context automatically
                    context = self._add_business_context(tenant, context)

                    success = send_system_email(
                        email_type=email_type_enum,
                        to_email=to_email,
                        context=context,
                        from_email=from_email,
                        reply_to=reply_to,
                        fail_silently=False,
                    )

                    template_used = f"system:{email_type}"

                else:
                    # Send using custom template
                    from smoothschedule.communication.messaging.models import CustomEmailTemplate
                    from smoothschedule.communication.messaging.email_renderer import render_custom_email
                    from django.core.mail import EmailMultiAlternatives
                    from django.conf import settings

                    try:
                        template = CustomEmailTemplate.objects.get(
                            slug=template_slug,
                            is_active=True
                        )
                    except CustomEmailTemplate.DoesNotExist:
                        return Response(
                            {'error': 'not_found', 'message': f"Custom template not found: {template_slug}"},
                            status=status.HTTP_404_NOT_FOUND
                        )

                    # Add business context automatically
                    context = self._add_business_context(tenant, context)

                    # Render the template
                    rendered = render_custom_email(template, context)

                    # Use subject override if provided
                    subject = subject_override or rendered['subject']

                    # Send the email
                    sender = from_email or getattr(settings, 'DEFAULT_FROM_EMAIL', 'noreply@smoothschedule.com')

                    msg = EmailMultiAlternatives(
                        subject=subject,
                        body=rendered['text'],
                        from_email=sender,
                        to=[to_email],
                    )

                    if rendered['html']:
                        msg.attach_alternative(rendered['html'], 'text/html')

                    if reply_to:
                        msg.reply_to = [reply_to]

                    msg.send(fail_silently=False)
                    success = True
                    template_used = f"custom:{template_slug}"

                return Response({
|
||||
'success': success,
|
||||
'message': 'Email sent successfully' if success else 'Failed to send email',
|
||||
'template_used': template_used,
|
||||
})
|
||||
|
||||
except Exception as e:
|
||||
return Response(
|
||||
{'error': 'send_error', 'message': f'Failed to send email: {str(e)}'},
|
||||
status=status.HTTP_500_INTERNAL_SERVER_ERROR
|
||||
)
|
||||
|
||||
def _add_business_context(self, tenant, context: dict) -> dict:
|
||||
"""Add business-related context variables automatically."""
|
||||
# Add standard business info if not provided
|
||||
if 'business_name' not in context:
|
||||
context['business_name'] = tenant.name
|
||||
|
||||
if 'business_email' not in context and hasattr(tenant, 'email'):
|
||||
context['business_email'] = tenant.email or ''
|
||||
|
||||
if 'business_phone' not in context and hasattr(tenant, 'phone'):
|
||||
context['business_phone'] = tenant.phone or ''
|
||||
|
||||
if 'business_website_url' not in context and hasattr(tenant, 'website'):
|
||||
context['business_website_url'] = tenant.website or ''
|
||||
|
||||
# Add current date/year
|
||||
from django.utils import timezone
|
||||
now = timezone.now()
|
||||
if 'current_date' not in context:
|
||||
context['current_date'] = now.strftime('%B %d, %Y')
|
||||
if 'current_year' not in context:
|
||||
context['current_year'] = str(now.year)
|
||||
|
||||
return context
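A subtlety of `_add_business_context` worth noting: every default is guarded by an `if ... not in context` check, so caller-supplied values always win and business defaults only fill gaps. A standalone sketch of that merge behavior, using a hypothetical `TenantStub` in place of the real tenant object:

```python
from datetime import date

class TenantStub:
    """Hypothetical stand-in for the tenant model, for illustration only."""
    name = 'Acme Salon'
    email = 'hello@acme.example'

def add_business_context(tenant, context):
    # Same guard pattern as _add_business_context: fill only missing keys.
    if 'business_name' not in context:
        context['business_name'] = tenant.name
    if 'business_email' not in context and hasattr(tenant, 'email'):
        context['business_email'] = tenant.email or ''
    if 'current_year' not in context:
        context['current_year'] = str(date.today().year)
    return context

merged = add_business_context(TenantStub(), {'business_name': 'Override Inc'})
# 'business_name' keeps the caller's value; 'business_email' is filled
# from the tenant because the caller did not supply it.
```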