feat: enhanced capture queue with multi-type conversion and bottom menu bar for cell phones

This commit is contained in:
2026-03-01 21:48:15 +00:00
parent a21e00d0e0
commit dbd40485ba
17 changed files with 16450 additions and 0 deletions

@@ -0,0 +1,349 @@
**Life OS**
Server & Infrastructure Configuration
**1. Server Overview**
| Property | Value |
|---|---|
| Provider | Hetzner Cloud |
| Server Name | defiant-01 |
| Public IP | 46.225.166.142 |
| IPv6 | 2a01:4f8:1c1f:9d94::1 |
| OS | Ubuntu 24.04.4 LTS (Noble Numbat) |
| Kernel | Linux 6.8.0-90-generic x86_64 |
| CPU Cores | 12 |
| RAM | 22 GB |
| Disk | 451 GB total / ~395 GB available |
| Swap | 8 GB |
**1.1 Installed Software**
| Software | Version | Notes |
|---|---|---|
| Ubuntu | 24.04.4 LTS | Base OS |
| Python | 3.12.3 | Host-level, available system-wide |
| Nginx | 1.24.0 | Host-level reverse proxy, not containerized |
| Docker | Active | Managing all application containers |
| PostgreSQL (host) | Not installed | Postgres runs in Docker containers only |
**1.2 Hetzner Cloud Firewall**
Firewall name: firewall-1
| Protocol | Port | Source | Purpose |
|---|---|---|---|
| TCP | 22 | 0.0.0.0/0 | SSH access |
| TCP | 80 | 0.0.0.0/0 | HTTP (redirects to HTTPS via Nginx) |
| TCP | 443 | 0.0.0.0/0 | HTTPS |
| TCP | 8443 | 0.0.0.0/0 | Kasm Workspaces (internal, set during setup) |
*Note: UFW is inactive on the host. Docker manages iptables rules
directly for container port exposure. No host-level firewall changes are
needed for new services - Nginx proxies all traffic on 80/443.*
**2. DNS Records**
Domain registrar / DNS provider: managed by Michael
Primary domain: invixiom.com
**2.1 Active DNS Records**
| Subdomain | Type | Value | Purpose | Status |
|---|---|---|---|---|
| kasm.invixiom.com | A | 46.225.166.142 | Kasm Workspaces virtual desktop | **ACTIVE** |
| files.invixiom.com | A | 46.225.166.142 | Nextcloud file storage | **ACTIVE** |
| lifeos.invixiom.com | A | 46.225.166.142 | Life OS PROD application | **PENDING** |
| lifeos-dev.invixiom.com | A | 46.225.166.142 | Life OS DEV application | **PENDING** |
| code.invixiom.com | A | 46.225.166.142 | Reserved - future use | **RESERVED** |
*Note: PENDING means DNS record exists but the Nginx config and
application container are not yet deployed. ACTIVE means fully
configured end-to-end.*
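Propagation for the PENDING records can be spot-checked from any machine (assuming `dig` from dnsutils/bind-utils is available):

```shell
dig +short lifeos.invixiom.com A        # should return 46.225.166.142 once propagated
dig +short lifeos-dev.invixiom.com A
```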
**3. Nginx Configuration**
Nginx runs directly on the host (not in Docker). Config files are located in
/etc/nginx/sites-available/; the active config is invixiom, symlinked into
sites-enabled.
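After editing the invixiom config, the standard validate-then-reload cycle applies:

```shell
nginx -t && systemctl reload nginx   # reload only if the config parses cleanly
```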
**3.1 SSL Certificates**
| Certificate | Path | Covers | Provider |
|---|---|---|---|
| Primary cert | /etc/letsencrypt/live/kasm.invixiom.com/fullchain.pem | All active subdomains (wildcard or SAN) | Let's Encrypt |
| Primary key | /etc/letsencrypt/live/kasm.invixiom.com/privkey.pem | All active subdomains | Let's Encrypt |
| Legacy cert | /etc/nginx/ssl/invixiom.crt | Old config only (kasm site-available) | Self-signed or manual |
*Note: The Let's Encrypt cert path uses kasm.invixiom.com as the
primary name. When lifeos.invixiom.com and lifeos-dev.invixiom.com are
added to Nginx, the cert will need to be renewed/expanded to cover the
new subdomains.*
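When that time comes, expansion looks roughly like this — note that `--expand` requires the `-d` list to repeat every name already on the cert, so verify the current list first (the names below are an assumption based on the active vhosts):

```shell
certbot certificates                      # confirm which names the cert currently covers
certbot --nginx --expand \
  -d kasm.invixiom.com -d files.invixiom.com \
  -d lifeos.invixiom.com -d lifeos-dev.invixiom.com
```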
**3.2 Configured Virtual Hosts**
| Server Name | Listens On | Proxies To | Notes |
|---|---|---|---|
| kasm.invixiom.com | 443 ssl | https://127.0.0.1:8443 | WebSocket support, ssl_verify off, 30min timeout |
| files.invixiom.com | 443 ssl | http://127.0.0.1:8080 | Nextcloud container |
| lifeos-api.invixiom.com | 443 ssl | http://127.0.0.1:8000 | LEGACY - maps to stub container, to be replaced |
| code.invixiom.com | 443 ssl | http://127.0.0.1:8081 | Nothing running on 8081 yet |
| lifeos.invixiom.com | 443 ssl | http://127.0.0.1:8002 | TO BE ADDED - Life OS PROD |
| lifeos-dev.invixiom.com | 443 ssl | http://127.0.0.1:8003 | TO BE ADDED - Life OS DEV |
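A sketch of one of the TO BE ADDED server blocks, following the same proxy pattern as the existing vhosts — the exact timeout and header directives should mirror the files.invixiom.com block rather than this minimal version:

```nginx
server {
    listen 443 ssl;
    server_name lifeos.invixiom.com;

    ssl_certificate     /etc/letsencrypt/live/kasm.invixiom.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/kasm.invixiom.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:8002;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```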
**4. Docker Containers**
**4.1 Currently Running Containers**
| Container Name | Image | Ports | Purpose | Touch? |
|---|---|---|---|---|
| fastapi | stack-fastapi | 8000->8000 | Stub health check only - to be replaced by Life OS PROD | **REPLACE** |
| nextcloud | nextcloud:27-apache | 8080->80 | Nextcloud file storage (files.invixiom.com) | **DO NOT TOUCH** |
| redis | redis:7-alpine | internal | Task queue for existing stack | **DO NOT TOUCH** |
| kasm_proxy | kasmweb/proxy:1.18.0 | 8443->8443 | Kasm entry point (kasm.invixiom.com) | **DO NOT TOUCH** |
| kasm_rdp_https_gateway | kasmweb/rdp-https-gateway | internal | Kasm RDP gateway | **DO NOT TOUCH** |
| kasm_rdp_gateway | kasmweb/rdp-gateway | 3389->3389 | Kasm RDP | **DO NOT TOUCH** |
| kasm_agent | kasmweb/agent:1.18.0 | internal | Kasm agent | **DO NOT TOUCH** |
| kasm_guac | kasmweb/kasm-guac | internal | Kasm Guacamole | **DO NOT TOUCH** |
| kasm_api | kasmweb/api:1.18.0 | internal | Kasm API | **DO NOT TOUCH** |
| kasm_manager | kasmweb/manager:1.18.0 | internal | Kasm manager | **DO NOT TOUCH** |
| kasm_db | kasmweb/postgres:1.18.0 | internal | Kasm dedicated Postgres | **DO NOT TOUCH** |
| celery | stack-celery | internal | Celery worker for existing stack | **DO NOT TOUCH** |
| postgres | postgres:16-alpine | internal | Postgres for existing stack | **DO NOT TOUCH** |
**4.2 Planned Life OS Containers**
| Container Name | Image | Port | Purpose | Status |
|---|---|---|---|---|
| lifeos-db | postgres:16-alpine | internal only | Dedicated Postgres for Life OS - hosts lifeos_prod and lifeos_dev databases | **ACTIVE** |
| lifeos-prod | lifeos-app (custom) | 8002->8002 | Life OS PROD application (lifeos.invixiom.com) | **TO BE CREATED** |
| lifeos-dev | lifeos-app (custom) | 8003->8003 | Life OS DEV application (lifeos-dev.invixiom.com) | **TO BE CREATED** |
**4.3 Port Allocation**
| Port | Used By | Direction | Notes |
|---|---|---|---|
| 22 | SSH | External inbound | Hetzner firewall open |
| 80 | Nginx | External inbound | HTTP redirect to HTTPS |
| 443 | Nginx | External inbound | HTTPS, all subdomains |
| 3389 | kasm_rdp_gateway | External inbound | Hetzner firewall open |
| 8000 | fastapi (stub) | Internal | To be repurposed or removed |
| 8080 | nextcloud | Internal | Proxied via files.invixiom.com |
| 8081 | code.invixiom.com | Internal | Reserved, nothing running |
| 8443 | kasm_proxy | External inbound | Kasm, Hetzner firewall open |
| 8002 | lifeos-prod | Internal | To be created - proxied via lifeos.invixiom.com |
| 8003 | lifeos-dev | Internal | To be created - proxied via lifeos-dev.invixiom.com |
**5. Docker Networks**
| Network Name | Driver | Subnet | Used By |
|---|---|---|---|
| bridge | bridge | 172.17.0.0/16 | Default Docker bridge |
| kasm_default_network | bridge | 172.19.0.0/16 | All Kasm containers |
| kasm_sidecar_network | kasmweb/sidecar | 172.20.0.0/16 | Kasm sidecar |
| stack_web | bridge | 172.18.0.0/16 | fastapi, celery, redis, postgres containers |
| lifeos_network | bridge | 172.21.0.0/16 | ACTIVE - lifeos-prod, lifeos-dev, lifeos-db |
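For reference, lifeos_network was created with an explicit subnet along these lines (the driver defaults to bridge):

```shell
docker network create --subnet 172.21.0.0/16 lifeos_network
docker network inspect lifeos_network   # verify subnet and attached containers
```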
**6. Application Directories**
All Life OS application files live under /opt/lifeos on the host,
mounted into containers as volumes.
| Path | Purpose | Status |
|---|---|---|
| /opt/lifeos/lifeos-setup.sh | Infrastructure setup script | **ACTIVE** |
| /opt/lifeos/prod | PROD application files and config | **ACTIVE** |
| /opt/lifeos/prod/files | PROD user uploaded files storage | **ACTIVE** |
| /opt/lifeos/dev | DEV application files and config | **ACTIVE** |
| /opt/lifeos/dev/files | DEV user uploaded files storage | **ACTIVE** |
| lifeos_db_data (Docker volume) | Postgres data persistence | **ACTIVE** |
**7. Pending Configuration Tasks**
The following items are in sequence order and must be completed to
finish the infrastructure setup:
| # | Task | Status | Notes |
|---|---|---|---|
| 1 | Verify DNS propagation for lifeos.invixiom.com and lifeos-dev.invixiom.com | **COMPLETE** | Verified 2026-02-27 |
| 2 | Create Docker network: lifeos_network | **COMPLETE** | Active per section 5 (172.21.0.0/16) |
| 3 | Create lifeos-db Postgres container | **COMPLETE** | Container: lifeos-db, image: postgres:16-alpine |
| 4 | Create lifeos_prod and lifeos_dev databases inside lifeos-db | **COMPLETE** | lifeos_dev user created with separate password |
| 5 | Create application directory structure on host | **COMPLETE** | /opt/lifeos/prod, /opt/lifeos/dev, file storage dirs |
| 6 | Migrate existing Supabase production data to lifeos_prod | **COMPLETE** | 3 domains, 10 areas, 18 projects, 73 tasks, 5 links, 5 daily_focus, 80 capture, 6 context_types. Files table empty - Supabase Storage paths obsolete, files start fresh in R1. |
| 7 | Build Life OS Docker image (Dockerfile) | **PENDING** | FastAPI app, Python 3.12 |
| 8 | Create docker-compose.yml for Life OS stack | **PENDING** | PROD and DEV services |
| 9 | Add lifeos.invixiom.com and lifeos-dev.invixiom.com to Nginx config | **PENDING** | New server blocks in /etc/nginx/sites-available/invixiom |
| 10 | Expand SSL cert to cover new subdomains (certbot --expand) | **PENDING** | Add lifeos.invixiom.com and lifeos-dev.invixiom.com to cert |
| 11 | Remove or retire stub fastapi container on port 8000 | **PENDING** | After Life OS PROD is live |
| 12 | Test end-to-end: HTTPS access to lifeos.invixiom.com and lifeos-dev.invixiom.com | **PENDING** | |
Life OS Server & Infrastructure Configuration | Last updated: 2026-02-27


@@ -0,0 +1,57 @@
# Life OS - Conversation Context (Test Infrastructure - Convo Test1)
## What This Is
Life OS is my personal productivity web application, live at https://lifeos-dev.invixiom.com on my self-hosted Hetzner server (defiant-01, 46.225.166.142). Convos 1-4 built 18 routers covering hierarchy, tasks, knowledge, daily workflows, search, admin, meetings, decisions, weblinks, appointments, and time tracking. Convo Test1 built a dynamic, introspection-based automated test suite that discovers routes from the live FastAPI app at runtime -- no hardcoded routes anywhere.
## How to Use the Project Documents
**lifeos-development-status-test1.md** - START HERE. Source of truth for the test infrastructure: what's deployed, how it works, what state it's in, and what to do next.
**lifeos-development-status-convo4.md** - Application source of truth. What's built, routers, templates, deploy patterns, remaining features. The test suite tests THIS application.
**lifeos-architecture.docx** - Full system specification. 50 tables, all subsystems. Reference when adding seed data for new entities.
**lifeos_r1_full_schema.sql** - Intended R1 schema. The test DB is cloned from the live dev DB (not this file), so always verify against: `docker exec lifeos-db psql -U postgres -d lifeos_dev -c "\d table_name"`
**life-os-server-config.docx** - Server infrastructure: containers, ports, Docker networks, Nginx, SSL.
## Current Tech Stack
- Python 3.12 / FastAPI / SQLAlchemy 2.0 async (raw SQL via text(), no ORM models) / asyncpg
- Jinja2 server-rendered templates, vanilla HTML/CSS/JS, no build pipeline
- PostgreSQL 16 in Docker, full-text search via tsvector
- Dark/light theme via CSS custom properties
- Container runs with hot reload (code mounted as volume)
- GitHub repo: mdombaugh/lifeos-dev (main branch)
## Key Patterns (Application)
- BaseRepository handles all CRUD with soft deletes (is_deleted filtering automatic)
- Every route calls get_sidebar_data(db) for the nav tree
- Forms use standard HTML POST with 303 redirect (PRG pattern)
- Templates extend base.html
- Exception: time_entries has no updated_at column, so use direct SQL for deletes instead of BaseRepository.soft_delete()
- Timer state: get_running_task_id() helper in routers/tasks.py queries time_entries WHERE end_at IS NULL
## Key Patterns (Test Suite)
- Tests introspect `app.routes` at import time to discover all paths, methods, Form() fields, and path params
- Dynamic tests auto-parametrize from the route registry -- adding a new router requires zero test file changes for smoke/CRUD coverage
- Business logic tests (timer constraints, soft delete behavior, search safety) are hand-written in test_business_logic.py
- Test DB: `lifeos_test` -- schema cloned from `lifeos_dev` via pg_dump on each deploy
- Per-test isolation: each test runs inside a transaction that rolls back
- Seed data: 15 entity fixtures inserted via raw SQL, composite `all_seeds` fixture
- `PREFIX_TO_SEED` in registry.py maps route prefixes to seed fixture keys for dynamic path resolution
- Form data auto-generated from introspected Form() signatures via form_factory.py
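The form auto-generation idea reduces to signature introspection. A minimal, self-contained sketch — `create_task` here is a hypothetical stand-in for a route handler, and the real form_factory.py reads FastAPI `Form()` fields rather than plain parameters:

```python
import inspect

# Hypothetical stand-in for a route handler with Form() fields.
def create_task(title: str, priority: int = 3, notes: str = ""):
    ...

def auto_form_data(handler) -> dict:
    """Build dummy form data from a handler's signature."""
    samples = {str: "test value", int: 1}
    data = {}
    for name, param in inspect.signature(handler).parameters.items():
        if param.default is not inspect.Parameter.empty:
            data[name] = param.default          # reuse the declared default
        else:
            data[name] = samples.get(param.annotation, "x")
    return data

print(auto_form_data(create_task))
# {'title': 'test value', 'priority': 3, 'notes': ''}
```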
## Deploy Cycle (Application)
Code lives at /opt/lifeos/dev/ on the server. The container mounts this directory and uvicorn --reload picks up changes. No rebuild needed for code changes. Claude creates deploy scripts with heredocs that are uploaded via SCP and run with bash.
## Deploy Cycle (Tests)
```bash
scp deploy-tests.sh root@46.225.166.142:/opt/lifeos/dev/
ssh root@46.225.166.142
cd /opt/lifeos/dev && bash deploy-tests.sh
docker exec lifeos-dev bash /app/tests/run_tests.sh report # Verify introspection
docker exec lifeos-dev bash /app/tests/run_tests.sh # Full suite
```
## What I Need Help With
[State your current task here]


@@ -0,0 +1,42 @@
# Life OS - Conversation Context (Convo 4)
## What This Is
Life OS is my personal productivity web application, live at https://lifeos-dev.invixiom.com on my self-hosted Hetzner server (defiant-01, 46.225.166.142). Convo 1 built the foundation (9 entity routers). Convo 2 added 7 more routers (search, trash, lists, files, meetings, decisions, weblinks). Convo 3 began Tier 3 (Time & Process subsystems), completing Appointments CRUD and Time Tracking with topbar timer pill. Convo 4 completed the time tracking UX by adding timer play/stop buttons to task list rows and task detail pages.
## How to Use the Project Documents
**lifeos-development-status-convo4.md** - START HERE. Source of truth for what's built, what's remaining, exact deploy state, file locations, and patterns to follow. Read this before doing any work.
**lifeos-architecture.docx** - Full system specification. 50 tables, all subsystems, UI patterns, component library, frontend design tokens, search architecture, time management logic, AI/MCP design (Phase 2). Reference when building new features.
**lifeos_r1_full_schema.sql** - The complete intended R1 schema including all tables, indexes, triggers. Verify against the live database when in doubt: `docker exec lifeos-db psql -U postgres -d lifeos_dev -c "\d table_name"`
**life-os-server-config.docx** - Server infrastructure: containers, ports, Docker networks, Nginx, SSL. Key detail: lifeos Nginx blocks use cert path `kasm.invixiom.com-0001` (not `kasm.invixiom.com`).
**Previous conversation docs** - Convo 3 and earlier docs are superseded by Convo 4 docs but provide historical context if needed.
## Current Tech Stack
- Python 3.12 / FastAPI / SQLAlchemy 2.0 async (raw SQL via text(), no ORM models) / asyncpg
- Jinja2 server-rendered templates, vanilla HTML/CSS/JS, no build pipeline
- PostgreSQL 16 in Docker, full-text search via tsvector
- Dark/light theme via CSS custom properties
- Container runs with hot reload (code mounted as volume)
- GitHub repo: mdombaugh/lifeos-dev (main branch)
## Key Patterns
- BaseRepository handles all CRUD with soft deletes (is_deleted filtering automatic)
- Every route calls get_sidebar_data(db) for the nav tree
- Forms use standard HTML POST with 303 redirect (PRG pattern)
- Templates extend base.html
- New routers: create file in routers/, add import + include_router in main.py, add nav link in base.html sidebar, create list/form/detail templates
- Search: add entity config to SEARCH_ENTITIES in routers/search.py
- Trash: add entity config to TRASH_ENTITIES in routers/admin.py
- Nullable fields for BaseRepository.update(): add to nullable_fields set in core/base_repository.py
- Exception: time_entries has no updated_at column, so use direct SQL for deletes instead of BaseRepository.soft_delete()
- Timer state: get_running_task_id() helper in routers/tasks.py queries time_entries WHERE end_at IS NULL
## Deploy Cycle
Code lives at /opt/lifeos/dev/ on the server. The container mounts this directory and uvicorn --reload picks up changes. No rebuild needed for code changes. Claude creates deploy scripts with heredocs that are uploaded via SCP and run with bash. GitHub repo is mdombaugh/lifeos-dev. Push with PAT (personal access token) as password.
## What I Need Help With
[State your current task here]


@@ -0,0 +1,44 @@
# Life OS - Database Backup & Restore
## Quick Backup
```bash
docker exec lifeos-db pg_dump -U postgres -d lifeos_dev -Fc -f /tmp/lifeos_dev_backup.dump
docker cp lifeos-db:/tmp/lifeos_dev_backup.dump /opt/lifeos/backups/lifeos_dev_$(date +%Y%m%d_%H%M%S).dump
```
## Quick Restore
```bash
# Drop and recreate the database, then restore
docker exec lifeos-db psql -U postgres -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'lifeos_dev' AND pid <> pg_backend_pid();"
docker exec lifeos-db psql -U postgres -c "DROP DATABASE lifeos_dev;"
docker exec lifeos-db psql -U postgres -c "CREATE DATABASE lifeos_dev;"
docker cp /opt/lifeos/backups/FILENAME.dump lifeos-db:/tmp/restore.dump
docker exec lifeos-db pg_restore -U postgres -d lifeos_dev /tmp/restore.dump
docker restart lifeos-dev
```
Replace `FILENAME.dump` with the actual backup filename.
## First-Time Setup
Create the backups directory:
```bash
mkdir -p /opt/lifeos/backups
```
## List Available Backups
```bash
ls -lh /opt/lifeos/backups/
```
## Notes
- `-Fc` = custom format (compressed, supports selective restore)
- Backup includes schema + data + indexes + triggers + search vectors
- Restore terminates active connections first, then drops/recreates the DB
- Restart the app container after restore so connection pool reconnects
- lifeos_prod is untouched by these commands (only lifeos_dev)
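Because `-Fc` supports selective restore, a single table can be pulled out of a dump without the full drop/recreate cycle — a sketch, assuming the dump has already been copied to /tmp/restore.dump in the container (`tasks` is an example table name):

```shell
# Inspect the dump's table of contents without restoring anything
docker exec lifeos-db pg_restore --list /tmp/restore.dump | head

# Restore only the tasks table, dropping and recreating just that object
docker exec lifeos-db pg_restore -U postgres -d lifeos_dev --clean --table=tasks /tmp/restore.dump
```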


@@ -0,0 +1,329 @@
# Life OS - Development Status & Continuation Guide (Convo 4)
**Last Updated:** 2026-02-28
**Current State:** Phase 1 - Tier 3 in progress (2 of 6 features built, time tracking UX complete)
**GitHub:** mdombaugh/lifeos-dev (main branch)
---
## 1. What Was Built in This Conversation
### Timer Buttons on Task UI (DEPLOYED)
- Play/stop button on each non-completed task row in tasks.html (between checkbox and priority dot)
- Play/stop button in task_detail.html header action bar (before Edit/Complete buttons)
- Running task row gets green left border highlight via `.timer-active` CSS class
- `get_running_task_id()` helper in routers/tasks.py queries `time_entries WHERE end_at IS NULL`
- Both list_tasks and task_detail routes pass `running_task_id` to template context
- Buttons POST to existing `/time/start` and `/time/stop` endpoints, redirect back via referer
- Only shown on non-completed, non-cancelled tasks
- ~60 lines of CSS appended to style.css (timer-btn, timer-btn-play, timer-btn-stop, timer-active, timer-detail-btn)
- Deployed via heredoc shell script (deploy-timer-buttons.sh)
This completes the Time Tracking feature. The full time tracking system is now:
- Start/stop timer per task from task list rows, task detail page, or time log page
- Topbar timer pill with green pulsing dot, task name link, live elapsed counter, stop button
- Auto-stop of running timer when starting a new one
- Manual time entry support
- Time log at /time with daily summaries, date-grouped entries, day filter
- Soft delete via direct SQL (time_entries lacks updated_at column)
### What Was NOT Built (deferred to Convo 5)
- **Processes / process_runs** - Most complex Tier 3 feature. 4 tables. Deferred due to usage limits.
- **Calendar view** - Unified read-only view
- **Time budgets** - Simple CRUD
- **Eisenhower matrix** - Derived view
---
## 2. Complete Application Inventory
### 2.1 Infrastructure (unchanged)
| Component | Status | Details |
|-----------|--------|---------|
| Server | LIVE | defiant-01, Hetzner, 46.225.166.142, Ubuntu 24.04 |
| Docker network | LIVE | `lifeos_network` (172.21.0.0/16) |
| PostgreSQL | LIVE | Container `lifeos-db`, postgres:16-alpine, volume `lifeos_db_data` |
| Databases | LIVE | `lifeos_prod` (R0 data, untouched), `lifeos_dev` (R1 schema + migrated data) |
| Application | LIVE | Container `lifeos-dev`, port 8003, image `lifeos-app` |
| Nginx | LIVE | lifeos-dev.invixiom.com -> localhost:8003 |
| SSL | LIVE | Let's Encrypt cert at `/etc/letsencrypt/live/kasm.invixiom.com-0001/` |
| GitHub | PUSHED | Convo 3 changes pushed. Convo 4 changes need push (see section 4.1). |
### 2.2 Core Modules
- `core/database.py` - Async engine, session factory, get_db dependency
- `core/base_repository.py` - Generic CRUD: list, get, create, update, soft_delete, restore, permanent_delete, bulk_soft_delete, reorder, count, list_deleted. Has `nullable_fields` set for update() null handling.
- `core/sidebar.py` - Domain > area > project nav tree, capture/focus badge counts
- `main.py` - FastAPI app, dashboard, health check, 18 router includes
### 2.3 Routers (18 total)
| Router | Prefix | Templates | Status |
|--------|--------|-----------|--------|
| domains | /domains | domains, domain_form | Convo 1 |
| areas | /areas | areas, area_form | Convo 1 |
| projects | /projects | projects, project_form, project_detail | Convo 1 |
| tasks | /tasks | tasks, task_form, task_detail | Convo 1, **updated Convo 4** |
| notes | /notes | notes, note_form, note_detail | Convo 1 |
| links | /links | links, link_form | Convo 1 |
| focus | /focus | focus | Convo 1 |
| capture | /capture | capture | Convo 1 |
| contacts | /contacts | contacts, contact_form, contact_detail | Convo 1 |
| search | /search | search | Convo 2 |
| admin | /admin/trash | trash | Convo 2 |
| lists | /lists | lists, list_form, list_detail | Convo 2 |
| files | /files | files, file_upload, file_preview | Convo 2 |
| meetings | /meetings | meetings, meeting_form, meeting_detail | Convo 2 |
| decisions | /decisions | decisions, decision_form, decision_detail | Convo 2 |
| weblinks | /weblinks | weblinks, weblink_form, weblink_folder_form | Convo 2 |
| appointments | /appointments | appointments, appointment_form, appointment_detail | Convo 3 |
| time_tracking | /time | time_entries | Convo 3 |
### 2.4 Templates (42 total, unchanged from Convo 3)
base.html, dashboard.html, search.html, trash.html,
tasks.html, task_form.html, task_detail.html,
projects.html, project_form.html, project_detail.html,
domains.html, domain_form.html,
areas.html, area_form.html,
notes.html, note_form.html, note_detail.html,
links.html, link_form.html,
focus.html, capture.html,
contacts.html, contact_form.html, contact_detail.html,
lists.html, list_form.html, list_detail.html,
files.html, file_upload.html, file_preview.html,
meetings.html, meeting_form.html, meeting_detail.html,
decisions.html, decision_form.html, decision_detail.html,
weblinks.html, weblink_form.html, weblink_folder_form.html,
appointments.html, appointment_form.html, appointment_detail.html,
time_entries.html
### 2.5 Static Assets
- `style.css` - ~1040 lines (timer button CSS appended in Convo 4)
- `app.js` - ~190 lines (timer pill polling from Convo 3, unchanged in Convo 4)
---
## 3. How the Container Runs
```bash
docker run -d \
--name lifeos-dev \
--network lifeos_network \
--restart unless-stopped \
--env-file .env \
-p 8003:8003 \
-v /opt/lifeos/dev/files:/opt/lifeos/files/dev \
-v /opt/lifeos/dev:/app \
lifeos-app \
uvicorn main:app --host 0.0.0.0 --port 8003 --workers 1 --reload
```
**Environment (.env):**
```
DATABASE_URL=postgresql+asyncpg://postgres:UCTOQDZiUhN8U@lifeos-db:5432/lifeos_dev
FILE_STORAGE_PATH=/opt/lifeos/files/dev
ENVIRONMENT=development
```
Deploy: edit files in `/opt/lifeos/dev/`, hot reload picks them up.
Restart: `docker restart lifeos-dev`
Logs: `docker logs lifeos-dev --tail 30`
---
## 4. Known Issues
### 4.1 Immediate
1. **Not yet tested by user** - Timer buttons deployed but user testing still pending. May have bugs.
2. **Convo 4 changes not pushed to GitHub** - Run: `cd /opt/lifeos/dev && git add . && git commit -m "Timer buttons on task rows and detail" && git push origin main`
### 4.2 Technical Debt
1. **time_entries missing `updated_at`** - Table lacks this column so BaseRepository methods that set updated_at will fail. Direct SQL used for soft_delete. If adding time_entries to TRASH_ENTITIES, restore will also need direct SQL.
2. **R1 schema file mismatch** - lifeos_r1_full_schema.sql in the project doesn't reflect the actual DB. Query the DB directly to verify.
3. **No CSRF protection** - Single-user system, low risk.
4. **No pagination** - All list views load all rows. Fine at current scale.
5. **Font loading** - Google Fonts @import is render-blocking.
---
## 5. What's NOT Built Yet
### Tier 3 Remaining (4 features)
1. **Processes / process_runs** - Most complex Tier 3 feature. 4 tables: processes, process_steps, process_runs, process_run_steps. Template CRUD, run instantiation (copies steps as immutable snapshot), step completion tracking, task generation modes (all_at_once vs step_by_step). START HERE in Convo 5.
2. **Calendar view** - Unified `/calendar` page showing appointments (start_at) + meetings (meeting_date) + tasks (due_date). No new tables, read-only derived view. Filter by date range, domain, type.
3. **Time budgets** - Simple CRUD: domain_id + weekly_hours + effective_from. Used for overcommitment warnings on dashboard.
4. **Eisenhower matrix** - Derived 2x2 grid from task priority + due_date. Quadrants: Important+Urgent (priority 1-2, due <=7d), Important+Not Urgent (priority 1-2, due >7d), Not Important+Urgent (priority 3-4, due <=7d), Not Important+Not Urgent (priority 3-4, due >7d or null). Clickable to filter task list.
### Tier 4 - Advanced Features
- Releases / milestones
- Dependencies (DAG, cycle detection, status cascade)
- Task templates (instantiation with subtask generation)
- Note wiki-linking ([[ syntax)
- Note folders
- Bulk actions (multi-select, bulk complete/move/delete)
- CSV export
- Drag-to-reorder (SortableJS)
- Reminders
- Weekly review process template
- Dashboard metrics (weekly/monthly completion stats)
### UX Polish
- Breadcrumb navigation (partially done, inconsistent)
- Overdue visual treatment (red left border on task rows)
- Empty states with illustrations (basic emoji states exist)
- Skeleton loading screens
- Toast notification system
- Confirmation dialogs (basic confirm() exists, no modal)
- Mobile bottom tab bar
- Mobile responsive improvements
---
## 6. File Locations on Server
```
/opt/lifeos/
dev/ # DEV application (mounted as /app in container)
main.py # 18 router includes
core/
__init__.py
database.py
base_repository.py
sidebar.py
routers/
__init__.py
domains.py, areas.py, projects.py, tasks.py
notes.py, links.py, focus.py, capture.py, contacts.py
search.py, admin.py, lists.py
files.py, meetings.py, decisions.py, weblinks.py
appointments.py
time_tracking.py
templates/
base.html, dashboard.html, search.html, trash.html
tasks.html, task_form.html, task_detail.html
projects.html, project_form.html, project_detail.html
domains.html, domain_form.html
areas.html, area_form.html
notes.html, note_form.html, note_detail.html
links.html, link_form.html
focus.html, capture.html
contacts.html, contact_form.html, contact_detail.html
lists.html, list_form.html, list_detail.html
files.html, file_upload.html, file_preview.html
meetings.html, meeting_form.html, meeting_detail.html
decisions.html, decision_form.html, decision_detail.html
weblinks.html, weblink_form.html, weblink_folder_form.html
appointments.html, appointment_form.html, appointment_detail.html
time_entries.html
static/
style.css (~1040 lines)
app.js (~190 lines)
Dockerfile
requirements.txt
.env
backups/ # Database backups
```
---
## 7. How to Continue Development
### Recommended build order for Convo 5:
1. **Processes / process_runs** (most complex remaining feature - do first with full usage window)
2. **Calendar view** (combines appointments + meetings + tasks)
3. **Time budgets** (simple CRUD)
4. **Eisenhower matrix** (derived view, quick win)
### Adding a new entity router (pattern):
1. Create `routers/entity_name.py` following existing router patterns
2. Add import + `app.include_router()` in `main.py`
3. Create templates: list, form, detail (all extend base.html)
4. Add nav link in `templates/base.html` sidebar section
5. Add to `SEARCH_ENTITIES` in `routers/search.py` (if searchable)
6. Add to `TRASH_ENTITIES` in `routers/admin.py` (if soft-deletable)
7. Add any new nullable fields to `nullable_fields` in `core/base_repository.py`
8. Use `BaseRepository("table_name", db)` for all CRUD
9. Always call `get_sidebar_data(db)` and pass to template context
### Deploy cycle:
```bash
# Files are created locally by Claude, packaged as a deploy script with heredocs
# Upload to server, run the script
scp deploy-script.sh root@46.225.166.142:/opt/lifeos/dev/
ssh root@46.225.166.142
cd /opt/lifeos/dev && bash deploy-script.sh
# Commit
git add . && git commit -m "description" && git push origin main
```
### Database backup:
```bash
mkdir -p /opt/lifeos/backups
docker exec lifeos-db pg_dump -U postgres -d lifeos_dev -Fc -f /tmp/lifeos_dev_backup.dump
docker cp lifeos-db:/tmp/lifeos_dev_backup.dump /opt/lifeos/backups/lifeos_dev_$(date +%Y%m%d_%H%M%S).dump
```
### Key code patterns:
- Every route: `sidebar = await get_sidebar_data(db)`
- Forms POST to /create or /{id}/edit, redirect 303
- Filters: query params, auto-submit via JS onchange
- Detail views: breadcrumb nav at top
- Toggle/complete: inline form with checkbox onchange
- Junction tables: raw SQL INSERT with ON CONFLICT DO NOTHING
- File upload: multipart form, save to FILE_STORAGE_PATH, record in files table
- Timer: POST /time/start with task_id, POST /time/stop, GET /time/running (JSON for topbar pill)
- Timer buttons: get_running_task_id() helper in tasks.py, play/stop inline forms on task rows
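The junction-table pattern above can be sketched as a small helper that builds the idempotent INSERT (table and column names here are placeholders, not the actual Life OS identifiers):

```python
def junction_insert_sql(table: str, left_col: str, right_col: str) -> str:
    """Build an idempotent junction-table INSERT: re-linking the same
    pair is a no-op thanks to ON CONFLICT DO NOTHING."""
    return (
        f"INSERT INTO {table} ({left_col}, {right_col}) "
        f"VALUES (:{left_col}, :{right_col}) "
        f"ON CONFLICT DO NOTHING"
    )
```

For example, `junction_insert_sql("task_tags", "task_id", "tag_id")` yields a statement suitable for `session.execute()` with bound parameters.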
---
## 8. Tier 3 Architecture Reference
### Processes / Process Runs (BUILD NEXT)
**Tables:** processes, process_steps, process_runs, process_run_steps
**Flow:**
1. Create a process template (processes) with ordered steps (process_steps)
2. Instantiate a run (process_runs) - copies all process_steps to process_run_steps as immutable snapshots
3. Steps in a run can be completed, which records completed_by_id and completed_at
4. Task generation modes: `all_at_once` creates all tasks when run starts, `step_by_step` creates next task only when current step completes
5. Template changes after run creation do NOT affect active runs (snapshot pattern)
**Schema notes:**
- processes: id, name, description, process_type (workflow|checklist), category, status, tags, search_vector
- process_steps: id, process_id, title, instructions, expected_output, estimated_days, context, sort_order
- process_runs: id, process_id, title, status, process_type (copied from template), task_generation, project_id, contact_id, started_at, completed_at
- process_run_steps: id, run_id, title, instructions (immutable), status, completed_by_id, completed_at, notes, sort_order
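The snapshot in step 2 amounts to copying each template step into an immutable run-step row. A dict-based sketch (the real code inserts process_run_steps rows; the "pending" status value is an assumption):

```python
import uuid

def snapshot_steps(template_steps: list[dict], run_id: str) -> list[dict]:
    """Copy template steps into run-step rows. Because values are
    copied, not referenced, later edits to process_steps never
    affect an active run."""
    return [
        {
            "id": str(uuid.uuid4()),
            "run_id": run_id,
            "title": step["title"],
            "instructions": step["instructions"],
            "status": "pending",
            "completed_by_id": None,
            "completed_at": None,
            "sort_order": step["sort_order"],
        }
        for step in template_steps
    ]
```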
### Calendar View
- Unified read-only page at `/calendar`
- Show appointments (start_at), meetings (meeting_date + start_at), tasks (due_date)
- Filter by date range, domain, type
- No new tables needed
### Time Budgets
- Simple CRUD: domain_id + weekly_hours + effective_from
- Dashboard warning when domain time_entries exceed budget
### Eisenhower Matrix
- Derived from task priority (1-4) + due_date
- Quadrants: Important+Urgent, Important+Not Urgent, Not Important+Urgent, Not Important+Not Urgent
- Priority 1-2 = Important, Priority 3-4 = Not Important
- Due <= 7 days or overdue = Urgent, Due > 7 days or no date = Not Urgent
- Rendered as 2x2 grid, clicking quadrant filters to task list
---
## 9. Production Deployment (Not Yet Done)
When ready to go to PROD:
1. Apply R1 schema to lifeos_prod
2. Run data migration on lifeos_prod
3. Build and start lifeos-prod container on port 8002
4. Nginx already has lifeos.invixiom.com block pointing to 8002
5. SSL cert already covers lifeos.invixiom.com
6. Set ENVIRONMENT=production in prod .env
7. Set up daily backup cron job
---
# Life OS - Development Status & Continuation Guide (Test Infrastructure - Convo Test1)
**Last Updated:** 2026-03-01
**Current State:** Test suite deployed, introspection verified (121 routes discovered), first test run pending
**GitHub:** mdombaugh/lifeos-dev (main branch)
---
## 1. What Was Built in This Conversation
### Dynamic Introspection-Based Test Suite (DEPLOYED)
Built and deployed an automated test suite that discovers routes from the live FastAPI app at runtime. Zero hardcoded routes. When a new router is added, smoke and CRUD tests auto-expand on next run.
**Architecture (11 files in /opt/lifeos/dev/tests/):**
| File | Purpose | Lines |
|------|---------|-------|
| introspect.py | Route discovery engine: walks app.routes, extracts paths/methods/Form() fields/path params, classifies routes | 357 |
| form_factory.py | Generates valid POST form data from introspected Form() signatures + seed data UUIDs | 195 |
| registry.py | Imports app, runs introspection once, exposes route registry + PREFIX_TO_SEED mapping + resolve_path() | 79 |
| conftest.py | Fixtures only: test DB engine, per-test rollback session, httpx client, 15 seed data fixtures, all_seeds composite | 274 |
| test_smoke_dynamic.py | 3 parametrized functions expanding to ~59 tests: all GETs (no params) return 200, all GETs (with seed ID) return 200, all detail/edit GETs (fake UUID) return 404 | 100 |
| test_crud_dynamic.py | 5 parametrized functions expanding to ~62 tests: all POST create/edit/delete redirect 303, all actions non-500, create-then-verify-in-list | 161 |
| test_business_logic.py | 16 hand-written tests: timer single-run constraint, stop sets end_at, soft delete/restore visibility, search SQL injection, sidebar integrity, focus/capture workflows, edge cases | 212 |
| route_report.py | CLI tool: dumps all discovered routes with classification, form fields, seed mapping coverage | 65 |
| run_tests.sh | Test runner with aliases: smoke, crud, logic, report, fast, full, custom args | 22 |
| __init__.py | Package marker | 0 |
| pytest.ini | Config: asyncio_mode=auto, verbose output, short tracebacks | 7 |
**Key design decisions:**
- `registry.py` separated from `conftest.py` to avoid pytest auto-loading conflicts (test files import from registry, not conftest)
- Form() detection uses a `__class__.__name__` check, not `issubclass()`, because FastAPI's `Form` is a function, not a class
- Test DB schema cloned from live dev DB via pg_dump (not from stale SQL files)
- Seed data uses raw SQL INSERT matching actual table columns
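The name-based Form() detection can be illustrated without FastAPI installed. The `FieldInfo` stand-in below replaces the real marker object; this mirrors the check described above, not the deployed code:

```python
import inspect

def form_field_names(endpoint) -> list[str]:
    """Return parameter names whose default value's class is named
    FieldInfo (the name-based check, since issubclass() is unusable)."""
    return [
        p.name
        for p in inspect.signature(endpoint).parameters.values()
        if p.default is not inspect.Parameter.empty
        and type(p.default).__name__ == "FieldInfo"
    ]

# Stand-in marker so the sketch runs without FastAPI installed.
class FieldInfo:
    pass

def create_task(request=None, title=FieldInfo(), status=FieldInfo(), db=None):
    pass
```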
### Introspection Verification Results
Deploy script step 4 confirmed:
```
Routes discovered: 121
GET (no params): 36
GET (with params): 23
POST create: 13
POST edit: 13
POST delete: 17
POST action: 19
Entity prefixes: 32
```
### What Was NOT Done
- **First test run not yet executed** -- introspection works, tests deployed, but `run_tests.sh` has not been run yet
- **Seed data column mismatches likely** -- seed INSERTs written from architecture docs, not actual table inspection. First run will surface these as SQL errors
- **No test for file upload routes** -- file routes skipped (has_file_upload flag) because they need multipart handling
---
## 2. Test Infrastructure Inventory
### 2.1 Database
| Component | Details |
|-----------|---------|
| Test DB | `lifeos_test` on lifeos-db container |
| Schema source | Cloned from `lifeos_dev` via `pg_dump --schema-only` |
| Tables | 48 (matches dev) |
| Isolation | Per-test transaction rollback (no data persists between tests) |
| Credentials | Same as dev: postgres:UCTOQDZiUhN8U |
### 2.2 How Introspection Works
1. `registry.py` imports `main.app` (sets DATABASE_URL to test DB first)
2. `introspect.py` walks `app.routes`, for each `APIRoute`:
- Extracts path, HTTP methods, endpoint function reference
- Parses `{id}` path parameters via regex
- Inspects endpoint function signature for `Form()` parameters (checks `default.__class__.__name__` for "FieldInfo")
- Extracts query parameters (non-Form, non-Depends, non-Request params)
- Classifies route: list / detail / create_form / edit_form / create / edit / delete / toggle / action / json / page
3. Builds `ROUTE_REGISTRY` dict keyed by kind (get_no_params, post_create, etc.) and by prefix
4. Test files parametrize from this registry at collection time
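The classification step can be approximated with simple suffix rules (an illustrative reduction; the real classifier in introspect.py covers more kinds, such as toggle and json):

```python
def classify_route(path: str, method: str) -> str:
    """Approximate route classification by path shape and HTTP method."""
    if method == "GET":
        if path.endswith("/edit"):
            return "edit_form"
        if path.endswith("/new"):
            return "create_form"
        return "detail" if "{" in path else "list"
    if path.endswith("/create"):
        return "create"
    if path.endswith("/edit"):
        return "edit"
    if path.endswith("/delete"):
        return "delete"
    return "action"
```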
### 2.3 How Dynamic Tests Work
**Smoke (test_smoke_dynamic.py):**
```python
@pytest.mark.parametrize("path", [r.path for r in GET_NO_PARAMS])
async def test_get_no_params_returns_200(client, path):
r = await client.get(path)
assert r.status_code == 200
```
N discovered GET routes = N smoke tests. No manual updates.
**CRUD (test_crud_dynamic.py):**
- Collects all POST create/edit/delete routes from registry
- Calls `build_form_data(route.form_fields, all_seeds)` to generate valid payloads
- `form_factory.py` resolves FK fields to seed UUIDs, generates values by field name pattern
- Asserts 303 redirect for create/edit/delete, non-500 for actions
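The field-name-pattern step can be sketched like this (the rules are illustrative assumptions, not the deployed form_factory.py logic):

```python
def guess_value(field: str, seeds: dict[str, str]) -> str:
    """Generate a plausible form value from the field name alone.
    FK fields resolve to seed UUIDs; everything else gets a typed dummy."""
    if field.endswith("_id"):
        return seeds.get(field.removesuffix("_id"), "")
    if "date" in field or field.endswith("_at"):
        return "2026-03-01"
    if field == "url":
        return "https://example.com"
    return f"test {field}"
```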
**Business Logic (test_business_logic.py):**
- Hand-written, tests behavioral contracts not discoverable via introspection
- Timer: single running constraint, stop sets end_at, /time/running returns JSON
- Soft deletes: deleted task hidden from list, restore reappears
- Search: SQL injection doesn't crash, empty query works, unicode works
- Sidebar: domain appears on every page, project hierarchy renders
- Focus/capture: add to focus, multi-line capture creates multiple items
- Edge cases: invalid UUID, timer without task_id, double delete
### 2.4 Seed Data Fixtures (15 entities)
| Fixture | Table | Dependencies | Key Fields |
|---------|-------|-------------|------------|
| seed_domain | domains | none | id, name, color |
| seed_area | areas | seed_domain | id, domain_id |
| seed_project | projects | seed_domain, seed_area | id, domain_id, area_id |
| seed_task | tasks | seed_domain, seed_project | id, domain_id, project_id, title |
| seed_contact | contacts | none | id, first_name, last_name |
| seed_note | notes | seed_domain | id, domain_id, title |
| seed_meeting | meetings | none | id, title, meeting_date |
| seed_decision | decisions | seed_domain, seed_project | id, title |
| seed_appointment | appointments | seed_domain | id, title, start_at, end_at |
| seed_weblink_folder | weblink_folders | none | id, name |
| seed_list | lists | seed_domain, seed_project | id, name |
| seed_link | links | seed_domain | id, title, url |
| seed_weblink | weblinks | seed_weblink_folder | id, title, url |
| seed_capture | capture | none | id, raw_text |
| seed_focus | daily_focus | seed_task | id, task_id |
### 2.5 PREFIX_TO_SEED Mapping
Maps route prefixes to seed fixture keys so `resolve_path()` can replace `{id}` with real UUIDs:
```
/domains -> domain /contacts -> contact
/areas -> area /meetings -> meeting
/projects -> project /decisions -> decision
/tasks -> task /appointments -> appointment
/notes -> note /weblinks -> weblink
/links -> link /weblinks/folders -> weblink_folder
/lists -> list /focus -> focus
/capture -> capture /time -> task
/files -> None (skipped) /admin/trash -> None (skipped)
```
---
## 3. File Locations on Server
```
/opt/lifeos/dev/
tests/
__init__.py
introspect.py # Route discovery engine
form_factory.py # Form data generation
registry.py # Route registry + PREFIX_TO_SEED + resolve_path
conftest.py # Fixtures (DB, client, seeds)
route_report.py # CLI route dump
test_smoke_dynamic.py # Auto-parametrized GET tests
test_crud_dynamic.py # Auto-parametrized POST tests
test_business_logic.py # Hand-written behavioral tests
run_tests.sh # Test runner
pytest.ini # pytest config
deploy-tests.sh # Deployment script (can re-run to reset)
```
---
## 4. Known Issues & Expected First-Run Failures
### 4.1 Likely Seed Data Mismatches
Seed INSERT statements were written from architecture docs, not from inspecting actual table columns. The first test run will likely produce errors like:
- `column "X" of relation "Y" does not exist` -- seed INSERT has a column the actual table doesn't have
- `null value in column "X" violates not-null constraint` -- seed INSERT is missing a required column
**Fix process:** Run tests, read the SQL errors, adjust the INSERT in conftest.py to match actual columns (query with `\d table_name`), redeploy.
### 4.2 Possible Form Field Discovery Gaps
Some routers may use patterns the introspection engine doesn't handle:
- `Annotated[str, Form()]` style (handled via `__metadata__` check, but untested against live code)
- Form fields with non-standard defaults
- Routes that accept both Form and query params
The route report (`run_tests.sh report`) will show warnings for POST create/edit routes with zero discovered Form fields. Those need investigation.
### 4.3 Route Classification Edge Cases
Some routes may be misclassified:
- Admin trash restore routes (`/admin/trash/restore/{entity}/{id}`) may not match the standard patterns
- Capture routes (`/capture/add`, `/capture/{id}/convert`, `/capture/{id}/dismiss`) use non-standard action patterns
- Focus routes (`/focus/add`, `/focus/{id}/remove`) are action routes, not standard CRUD
These will show up as action route tests (non-500 assertion) rather than typed CRUD tests.
### 4.4 Not Yet Pushed to GitHub
Test files need to be committed: `cd /opt/lifeos/dev && git add . && git commit -m "Dynamic test suite" && git push origin main`
---
## 5. How to Continue (Convo Test2)
### Immediate Next Steps
1. **Run the route report** to verify introspection output:
```bash
docker exec lifeos-dev bash /app/tests/run_tests.sh report
```
2. **Run smoke tests first** (most likely to pass):
```bash
docker exec lifeos-dev bash /app/tests/run_tests.sh smoke
```
3. **Fix seed data failures** by inspecting actual tables and adjusting conftest.py INSERTs
4. **Run CRUD tests** after seeds are fixed:
```bash
docker exec lifeos-dev bash /app/tests/run_tests.sh crud
```
5. **Run business logic tests** last:
```bash
docker exec lifeos-dev bash /app/tests/run_tests.sh logic
```
6. **Run full suite** once individual categories pass:
```bash
docker exec lifeos-dev bash /app/tests/run_tests.sh
```
### When Adding a New Entity Router
1. Add seed fixture to `conftest.py` (INSERT matching actual table columns)
2. Add entry to `PREFIX_TO_SEED` in `registry.py`
3. Run tests -- smoke and CRUD auto-expand to cover new routes
4. Add behavioral tests to `test_business_logic.py` if entity has constraints or state machines
### When Schema Changes
Re-run `deploy-tests.sh` (step 1 drops and recreates lifeos_test from current dev schema).
Or manually:
```bash
docker exec lifeos-db psql -U postgres -c "DROP DATABASE IF EXISTS lifeos_test;"
docker exec lifeos-db psql -U postgres -c "CREATE DATABASE lifeos_test;"
docker exec lifeos-db pg_dump -U postgres -d lifeos_dev --schema-only -f /tmp/s.sql
docker exec lifeos-db psql -U postgres -d lifeos_test -f /tmp/s.sql -q
```
### Test Runner Commands
```bash
docker exec lifeos-dev bash /app/tests/run_tests.sh # Full suite
docker exec lifeos-dev bash /app/tests/run_tests.sh report # Route introspection dump
docker exec lifeos-dev bash /app/tests/run_tests.sh smoke # All GET endpoints
docker exec lifeos-dev bash /app/tests/run_tests.sh crud # All POST create/edit/delete
docker exec lifeos-dev bash /app/tests/run_tests.sh logic # Business logic
docker exec lifeos-dev bash /app/tests/run_tests.sh fast # Smoke, stop on first fail
docker exec lifeos-dev bash /app/tests/run_tests.sh -k "timer" # pytest keyword filter
```
---
## 6. Application Development Remaining (Unchanged from Convo 4)
### Tier 3 Remaining (4 features)
1. **Processes / process_runs** -- Most complex. 4 tables. Template CRUD, run instantiation, step completion, task generation.
2. **Calendar view** -- Unified read-only view of appointments + meetings + tasks.
3. **Time budgets** -- Simple CRUD: domain_id + weekly_hours + effective_from.
4. **Eisenhower matrix** -- Derived 2x2 grid from task priority + due_date.
### Tier 4, UX Polish, Production Deployment
See lifeos-development-status-convo4.md sections 5, 8, 9.
---
#!/bin/bash
# =============================================================================
# Life OS Infrastructure Setup Script
# Server: defiant-01 (46.225.166.142) - Ubuntu 24.04 LTS
# Run as: root
# Purpose: Repeatable setup of Life OS DEV and PROD environments on Hetzner VM
# =============================================================================
# USAGE:
# Full run: bash lifeos-setup.sh
# Single section: bash lifeos-setup.sh network
# bash lifeos-setup.sh database
# bash lifeos-setup.sh app
# bash lifeos-setup.sh nginx
# bash lifeos-setup.sh ssl
# =============================================================================
set -e # Exit on any error
# --- Configuration -----------------------------------------------------------
LIFEOS_NETWORK="lifeos_network"
DB_CONTAINER="lifeos-db"
DB_IMAGE="postgres:16-alpine"
DB_PROD="lifeos_prod"
DB_DEV="lifeos_dev"
APP_PROD_CONTAINER="lifeos-prod"
APP_DEV_CONTAINER="lifeos-dev"
APP_PROD_PORT="8002"
APP_DEV_PORT="8003"
DOMAIN_PROD="lifeos.invixiom.com"
DOMAIN_DEV="lifeos-dev.invixiom.com"
CERT_PATH="/etc/letsencrypt/live/kasm.invixiom.com"
LIFEOS_DIR="/opt/lifeos"
# DB passwords - change these before running
DB_PROD_PASSWORD="CHANGE_ME_PROD"
DB_DEV_PASSWORD="CHANGE_ME_DEV"
# -----------------------------------------------------------------------------
section() {
echo ""
echo "=============================================="
echo " $1"
echo "=============================================="
}
# =============================================================================
# SECTION 1: Docker Network
# =============================================================================
setup_network() {
section "SECTION 1: Docker Network"
if docker network ls | grep -q "$LIFEOS_NETWORK"; then
echo "Network $LIFEOS_NETWORK already exists, skipping."
else
docker network create "$LIFEOS_NETWORK"
echo "Created network: $LIFEOS_NETWORK"
fi
docker network ls | grep lifeos
}
# =============================================================================
# SECTION 2: PostgreSQL Container
# =============================================================================
setup_database() {
section "SECTION 2: PostgreSQL Container"
if docker ps -a | grep -q "$DB_CONTAINER"; then
echo "Container $DB_CONTAINER already exists, skipping creation."
else
docker run -d \
--name "$DB_CONTAINER" \
--network "$LIFEOS_NETWORK" \
--restart unless-stopped \
-e POSTGRES_PASSWORD="$DB_PROD_PASSWORD" \
-v lifeos_db_data:/var/lib/postgresql/data \
"$DB_IMAGE"
echo "Created container: $DB_CONTAINER"
echo "Waiting for Postgres to be ready..."
sleep 5
fi
# Create PROD database
docker exec "$DB_CONTAINER" psql -U postgres -tc \
"SELECT 1 FROM pg_database WHERE datname='$DB_PROD'" | grep -q 1 || \
docker exec "$DB_CONTAINER" psql -U postgres \
-c "CREATE DATABASE $DB_PROD;"
# Create DEV database
docker exec "$DB_CONTAINER" psql -U postgres -tc \
"SELECT 1 FROM pg_database WHERE datname='$DB_DEV'" | grep -q 1 || \
docker exec "$DB_CONTAINER" psql -U postgres \
-c "CREATE DATABASE $DB_DEV;"
# Create DEV user with separate password
docker exec "$DB_CONTAINER" psql -U postgres -tc \
"SELECT 1 FROM pg_roles WHERE rolname='lifeos_dev'" | grep -q 1 || \
docker exec "$DB_CONTAINER" psql -U postgres \
-c "CREATE USER lifeos_dev WITH PASSWORD '$DB_DEV_PASSWORD';"
docker exec "$DB_CONTAINER" psql -U postgres \
-c "GRANT ALL PRIVILEGES ON DATABASE $DB_DEV TO lifeos_dev;"
echo "Databases ready:"
docker exec "$DB_CONTAINER" psql -U postgres -c "\l" | grep lifeos
}
# =============================================================================
# SECTION 3: Application Directory Structure
# =============================================================================
setup_app_dirs() {
section "SECTION 3: Application Directory Structure"
mkdir -p "$LIFEOS_DIR/prod"
mkdir -p "$LIFEOS_DIR/dev"
mkdir -p "$LIFEOS_DIR/prod/files"
mkdir -p "$LIFEOS_DIR/dev/files"
echo "Created directory structure:"
ls -la "$LIFEOS_DIR"
}
# =============================================================================
# SECTION 4: Nginx Configuration
# (Run after app containers are up and SSL cert is expanded)
# =============================================================================
setup_nginx() {
section "SECTION 4: Nginx Virtual Hosts"
# Add Life OS PROD and DEV server blocks to existing invixiom config
# We append to the existing file - kasm/files/code blocks remain untouched
if grep -q "$DOMAIN_PROD" /etc/nginx/sites-available/invixiom; then
echo "Nginx config for $DOMAIN_PROD already exists, skipping."
else
cat >> /etc/nginx/sites-available/invixiom << EOF
server {
listen 443 ssl;
server_name $DOMAIN_PROD;
ssl_certificate $CERT_PATH/fullchain.pem;
ssl_certificate_key $CERT_PATH/privkey.pem;
location / {
proxy_pass http://127.0.0.1:$APP_PROD_PORT;
proxy_http_version 1.1;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
}
}
server {
listen 443 ssl;
server_name $DOMAIN_DEV;
ssl_certificate $CERT_PATH/fullchain.pem;
ssl_certificate_key $CERT_PATH/privkey.pem;
location / {
proxy_pass http://127.0.0.1:$APP_DEV_PORT;
proxy_http_version 1.1;
proxy_set_header Upgrade \$http_upgrade;
proxy_set_header Connection "upgrade";
proxy_set_header Host \$host;
proxy_set_header X-Real-IP \$remote_addr;
proxy_set_header X-Forwarded-For \$proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto \$scheme;
}
}
EOF
echo "Added Nginx config for $DOMAIN_PROD and $DOMAIN_DEV"
fi
# Add new domains to the HTTP->HTTPS redirect block
# (manual step - see notes below)
echo ""
echo "NOTE: Also add $DOMAIN_PROD and $DOMAIN_DEV to the server_name line"
echo "in the HTTP redirect block at the top of /etc/nginx/sites-available/invixiom"
# Test and reload
nginx -t && systemctl reload nginx
echo "Nginx reloaded."
}
# =============================================================================
# SECTION 5: SSL Certificate Expansion
# (Expand Let's Encrypt cert to cover new subdomains)
# =============================================================================
setup_ssl() {
section "SECTION 5: SSL Certificate Expansion"
certbot certonly --nginx \
-d kasm.invixiom.com \
-d files.invixiom.com \
-d code.invixiom.com \
-d "$DOMAIN_PROD" \
-d "$DOMAIN_DEV" \
--expand
systemctl reload nginx
echo "SSL cert expanded and Nginx reloaded."
}
# =============================================================================
# MAIN
# =============================================================================
case "${1:-all}" in
network) setup_network ;;
database) setup_database ;;
dirs) setup_app_dirs ;;
nginx) setup_nginx ;;
ssl) setup_ssl ;;
all)
setup_network
setup_database
setup_app_dirs
# nginx and ssl run after app containers are built
echo ""
echo "=============================================="
echo " Sections 1-3 complete."
echo " Next: build Life OS Docker image, then run:"
echo " bash lifeos-setup.sh ssl"
echo " bash lifeos-setup.sh nginx"
echo "=============================================="
;;
*)
echo "Unknown section: $1"
echo "Usage: bash lifeos-setup.sh [network|database|dirs|nginx|ssl|all]"
exit 1
;;
esac
# =============================================================================
# SECTION 6: Data Migration (reference - already completed)
# Documents the steps used to migrate Supabase prod data to lifeos_prod
# =============================================================================
setup_migration_notes() {
section "SECTION 6: Data Migration Notes"
echo "Migration completed 2026-02-27"
echo ""
echo "Steps used:"
echo " 1. Exported data from Supabase using Python supabase client (supabase_export.py)"
echo " 2. Applied schema: docker exec -i lifeos-db psql -U postgres -d lifeos_prod < lifeos_schema_r0.sql"
echo " 3. Imported data: docker exec -i lifeos-db psql -U postgres -d lifeos_prod < lifeos_export.sql"
echo ""
echo "Final row counts:"
docker exec lifeos-db psql -U postgres -d lifeos_prod -c "
SELECT 'domains' as table_name, count(*) FROM domains UNION ALL
SELECT 'areas', count(*) FROM areas UNION ALL
SELECT 'projects', count(*) FROM projects UNION ALL
SELECT 'tasks', count(*) FROM tasks UNION ALL
SELECT 'notes', count(*) FROM notes UNION ALL
SELECT 'links', count(*) FROM links UNION ALL
SELECT 'files', count(*) FROM files UNION ALL
SELECT 'daily_focus', count(*) FROM daily_focus UNION ALL
SELECT 'capture', count(*) FROM capture UNION ALL
SELECT 'context_types', count(*) FROM context_types;
"
echo ""
echo "Note: files table is empty - Supabase Storage paths are obsolete."
echo "File uploads start fresh in Release 1 using local storage."
}
---
**Life OS v2**
Data Migration Plan
Old Schema to New Schema Mapping + New Database DDL
**Document Version:** 1.0
**Date:** February 2026
**Old System:** Supabase (PostgreSQL) on Render
**New System:** Self-hosted PostgreSQL on Hetzner VM (defiant-01)
**Old Schema Tables:** 11
**New Schema Tables:** ~50
**1. Migration Overview**
This document defines the data migration from Life OS v1
(Supabase/Render) to Life OS v2 (self-hosted PostgreSQL on Hetzner). The
v1 schema and data remain untouched on Supabase for reference. The v2
schema is a completely separate database with new tables, new
conventions, and expanded capabilities.
**Strategy:** Export v1 data via pg_dump, transform using a Python
migration script, import into the v2 database. V1 remains read-only as a
reference. No shared database, no incremental sync.
**Key principle:** The new schema is NOT an evolution of the old schema.
It is a redesign. Some tables map 1:1 (domains, areas). Others split,
merge, or gain significant new columns. Some v2 tables have no v1
equivalent at all.
**2. Old Schema (R0 State)**
The v1 system has 11 tables. All PKs are UUID via gen_random_uuid().
Timestamps are TIMESTAMPTZ.
| Table | Row Est. | Purpose |
|-------|----------|---------|
| domains | 3-5 | Top-level life categories (Work, Personal, Sintri) |
| areas | 5-10 | Optional grouping within a domain |
| projects | 10-20 | Unit of work within domain/area |
| tasks | 50-200 | Atomic actions with priority, status, context |
| notes | 10-50 | Markdown documents attached to project/domain |
| links | 10-30 | Named URL references |
| files | 5-20 | Binary files in Supabase Storage with metadata |
| daily_focus | 30-100 | Date-scoped task commitment list |
| capture | 10-50 | Raw text capture queue |
| context_types | 6 | GTD execution mode lookup (deep_work, quick, etc.) |
| reminders | 0 | Schema exists but no UI or delivery built |
**3. Table-by-Table Migration Mapping**
Each v1 table is mapped to its v2 equivalent(s) with column-level transformations noted. Universal columns added to all v2 tables: updated_at, is_active (BOOLEAN DEFAULT true), sort_order (INT DEFAULT 0).
**3.1 domains -> domains**
**Mapping:** Direct 1:1. Preserve UUIDs.

| v1 Column | v2 Column | Action | Notes |
|-----------|-----------|--------|-------|
| id | id | Copy | Preserve UUID, all FKs depend on it |
| name | name | Copy | |
| color | color | Copy | |
| created_at | created_at | Copy | |
| (none) | updated_at | Generate | Set to created_at for initial import |
| (none) | is_active | Default | true |
| (none) | sort_order | Generate | Assign sequential 10, 20, 30... |
| (none) | description | Default | NULL - new optional field |
| (none) | icon | Default | NULL - new optional field |
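The "Generate" action for sort_order reduces to enumerating rows in a stable order; a sketch (the step-of-10 scheme is taken from the table above, the function name is illustrative):

```python
def sequential_sort_order(row_ids: list[str], step: int = 10) -> dict[str, int]:
    """Assign 10, 20, 30... in input order, leaving gaps for later
    manual reordering."""
    return {row_id: step * (index + 1) for index, row_id in enumerate(row_ids)}
```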
**3.2 areas -> areas**
**Mapping:** Direct 1:1. Preserve UUIDs.

| v1 Column | v2 Column | Action | Notes |
|-----------|-----------|--------|-------|
| id | id | Copy | Preserve UUID |
| domain_id | domain_id | Copy | FK preserved |
| name | name | Copy | |
| description | description | Copy | |
| created_at | created_at | Copy | |
| (none) | updated_at | Generate | Set to created_at |
| (none) | is_active | Default | true |
| (none) | sort_order | Generate | Sequential per domain |
| (none) | icon | Default | NULL |
| (none) | color | Default | NULL - inherit from domain or set later |
**3.3 projects -> projects**
**Mapping:** Direct 1:1 with new columns. Preserve UUIDs.

| v1 Column | v2 Column | Action | Notes |
|-----------|-----------|--------|-------|
| id | id | Copy | Preserve UUID |
| domain_id | domain_id | Copy | |
| area_id | area_id | Copy | Nullable preserved |
| name | name | Copy | |
| description | description | Copy | |
| status | status | Map | v1 'archived' -> v2 'archived' (kept as-is) |
| due_date | target_date | Rename | Column rename only, same DATE type |
| created_at | created_at | Copy | |
| updated_at | updated_at | Copy | |
| (none) | start_date | Default | NULL |
| (none) | priority | Default | 3 (normal) |
| (none) | is_active | Default | true |
| (none) | sort_order | Generate | Sequential per area/domain |
| (none) | color | Default | NULL |
| (none) | release_id | Default | NULL - no releases in v1 |
**3.4 tasks -> tasks**
**Mapping:** Direct 1:1 with significant new columns. Preserve UUIDs. This is the most data-rich migration.

| v1 Column | v2 Column | Action | Notes |
|-----------|-----------|--------|-------|
| id | id | Copy | Preserve UUID - many FKs depend on this |
| domain_id | domain_id | Copy | |
| project_id | project_id | Copy | Nullable preserved |
| parent_id | parent_id | Copy | Self-ref FK for subtasks |
| title | title | Copy | |
| description | description | Copy | |
| priority | priority | Copy | 1-4 scale preserved |
| status | status | Copy | Same enum values |
| due_date | due_date | Copy | |
| deadline | deadline | Copy | |
| recurrence | recurrence | Copy | |
| tags | tags | Copy | TEXT[] preserved |
| context | context | Copy | |
| is_custom_context | is_custom_context | Copy | |
| created_at | created_at | Copy | |
| updated_at | updated_at | Copy | |
| completed_at | completed_at | Copy | |
| (none) | assigned_to | Default | NULL - FK to contacts |
| (none) | estimated_minutes | Default | NULL |
| (none) | actual_minutes | Default | NULL |
| (none) | energy_level | Default | NULL (low/medium/high) |
| (none) | is_active | Default | true |
| (none) | sort_order | Generate | Sequential per project |
| (none) | template_id | Default | NULL |
**3.5 notes -> notes**
**Mapping:** Direct 1:1. Preserve UUIDs.

| v1 Column | v2 Column | Action | Notes |
|-----------|-----------|--------|-------|
| id | id | Copy | Preserve UUID |
| domain_id | domain_id | Copy | |
| project_id | project_id | Copy | |
| task_id | task_id | Copy | |
| title | title | Copy | |
| body | body | Copy | Markdown content preserved as-is |
| content_format | content_format | Copy | |
| tags | tags | Copy | |
| created_at | created_at | Copy | |
| updated_at | updated_at | Copy | |
| (none) | is_pinned | Default | false |
| (none) | is_active | Default | true |
| (none) | sort_order | Default | 0 |
**3.6 links -> bookmarks**
**Mapping:** Renamed table. v2 expands links into a full bookmark/weblink directory. Preserve UUIDs.

| v1 Column | v2 Column | Action | Notes |
|-----------|-----------|--------|-------|
| id | id | Copy | Preserve UUID |
| domain_id | domain_id | Copy | |
| project_id | project_id | Copy | |
| task_id | task_id | Copy | |
| label | label | Copy | |
| url | url | Copy | |
| description | description | Copy | |
| created_at | created_at | Copy | |
| (none) | updated_at | Generate | Set to created_at |
| (none) | folder_id | Default | NULL - bookmark folders are new in v2 |
| (none) | favicon_url | Default | NULL |
| (none) | is_active | Default | true |
| (none) | sort_order | Default | 0 |
| (none) | tags | Default | NULL - new in v2 |
**3.7 files -> files**
**Mapping:** 1:1 with storage path transformation. Files must be downloaded from Supabase Storage and re-uploaded to local disk on defiant-01.

| v1 Column | v2 Column | Action | Notes |
|-----------|-----------|--------|-------|
| id | id | Copy | Preserve UUID |
| domain_id | domain_id | Copy | |
| project_id | project_id | Copy | |
| task_id | task_id | Copy | |
| capture_id | capture_id | Copy | |
| filename | filename | Copy | Internal UUID-prefixed name |
| original_filename | original_filename | Copy | |
| storage_path | storage_path | Transform | Rewrite from Supabase path to local path |
| mime_type | mime_type | Copy | |
| size_bytes | size_bytes | Copy | |
| description | description | Copy | |
| tags | tags | Copy | |
| created_at | created_at | Copy | |
| updated_at | updated_at | Copy | |
| (none) | note_id | Default | NULL - new FK in v2 |
| (none) | is_active | Default | true |
**File storage migration:** Use the Supabase Python client to iterate
the life-os-files bucket, download each file, and save to
/opt/lifeos/storage/files/ on defiant-01. Update storage_path values to
reflect the new local path.
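The storage_path rewrite can be isolated as a pure function. The flat target layout under /opt/lifeos/storage/files/ is taken from the text; keeping only the UUID-prefixed basename is an assumption:

```python
from pathlib import PurePosixPath

def rewrite_storage_path(supabase_path: str,
                         local_root: str = "/opt/lifeos/storage/files") -> str:
    """Map a Supabase Storage object path to its new local path,
    keeping only the UUID-prefixed filename."""
    name = PurePosixPath(supabase_path).name
    return f"{local_root}/{name}"
```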
**3.8 daily_focus -> daily_focus**
**Mapping:** Direct 1:1. Preserve UUIDs.

| v1 Column | v2 Column | Action | Notes |
|-----------|-----------|--------|-------|
| id | id | Copy | |
| focus_date | focus_date | Copy | |
| task_id | task_id | Copy | |
| slot | slot | Copy | v2 removes the 3-item limit |
| completed | completed | Copy | |
| note | note | Copy | |
| created_at | created_at | Copy | |
| (none) | domain_id | Derive | Look up from task_id -> tasks.domain_id |
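The Derive action is a single post-import UPDATE; a sketch of the statement held as a Python string, matching how the migration script issues SQL (the NULL guard is an assumption):

```python
# Backfill daily_focus.domain_id from the owning task.
DERIVE_FOCUS_DOMAIN_SQL = """
UPDATE daily_focus AS df
SET domain_id = t.domain_id
FROM tasks AS t
WHERE t.id = df.task_id
  AND df.domain_id IS NULL;
"""
```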
**3.9 capture -\> capture**
**Mapping:** 1:1 with enrichment for new capture context fields.
----------------- ----------------- ------------ ---------------------------
**v1 Column** **v2 Column** **Action** **Notes**
id id Copy
raw_text raw_text Copy
processed processed Copy
task_id task_id Copy
created_at created_at Copy
(none) domain_id Default NULL - new optional context
during capture
(none) project_id Default NULL
(none) source Default 'web' - v2 tracks capture
source (web/voice/telegram)
(none) updated_at Generate Set to created_at
----------------- ----------------- ------------ ---------------------------
**3.10 context_types -> context_types**
**Mapping:** Direct copy. Small reference table.
----------------- ----------------- ------------ ---------------------------
**v1 Column** **v2 Column** **Action** **Notes**
id id Copy v1 uses UUID, v2 keeps UUID
for consistency
value value Copy
label label Copy
is_system is_system Copy
(none) is_active Default true
(none) sort_order Default Sequential
----------------- ----------------- ------------ ---------------------------
**3.11 reminders -> reminders (redesigned)**
**Mapping:** v1 reminders is task-only with 0 rows. v2 redesigns
reminders as polymorphic (can remind about tasks, events, projects, or
arbitrary items). Since v1 has no data, this is seed-only with no
migration.
The v2 reminders table adds entity_type (TEXT), entity_id (UUID),
recurrence, and snoozed_until, and replaces the task_id-only FK with a
polymorphic reference.
**4. New Tables in v2 (No v1 Data)**
These tables exist only in v2 and will be empty after migration. They
are populated through normal application use.
------------------- ----------------------------------------------------
**Table** **Purpose**
contacts People for task assignment and project management
contact_groups Grouping contacts (team, family, etc.)
lists Named checklists and note lists
list_items Individual items within a list
calendar_events Appointments, meetings, date-based items
time_entries Time tracking records against tasks
time_blocks Scheduled time blocks (Pomodoro, deep work)
time_budgets Weekly/monthly time allocation targets
releases Release/version grouping for projects
milestones Project milestones with target dates
task_dependencies Task-to-task dependency relationships
task_templates Reusable task templates
note_links Cross-references between notes and other entities
bookmark_folders Hierarchical folder structure for bookmarks
tags Normalized tag table (replaces TEXT[] arrays
eventually)
entity_tags Junction table for normalized tagging
activity_log Audit trail of entity changes
user_settings Application preferences and configuration
saved_views Custom filtered/sorted views the user saves
search_index Full-text search materialized view / helper
------------------- ----------------------------------------------------
**5. Migration Script Approach**
**5.1 Prerequisites**
1. pg_dump export of v1 Supabase database saved as
life_os_v1_backup.sql
2. v2 PostgreSQL database created on defiant-01 (lifeos_dev for
testing, lifeos_prod for final)
3. v2 schema DDL applied to the target database (see Section 6)
4. Supabase Storage files downloaded to a local staging directory
5. Python 3.11+ with psycopg2 and supabase client libraries
**5.2 Script Structure**
migrate_v1_to_v2.py
1. Connect to v1 (read-only) and v2 (read-write)
2. For each table in dependency order:
a. SELECT * FROM v1 table
b. Transform each row per mapping rules above
c. INSERT INTO v2 table
3. Download files from Supabase Storage
4. Verify row counts match
5. Run FK integrity checks on v2
Table order (respects FK dependencies):
domains
areas
projects
context_types
tasks
notes
capture
bookmarks (from links)
files
daily_focus
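The copy loop above can be sketched as a minimal skeleton, assuming psycopg2 and a dict of per-table transform callables that implement the Section 3 mapping rules; connection DSNs and the transforms dict are placeholders.

```python
# Table order respecting FK dependencies (bookmarks is sourced from v1 links).
TABLE_ORDER = [
    "domains", "areas", "projects", "context_types", "tasks",
    "notes", "capture", "bookmarks", "files", "daily_focus",
]

def insert_sql(table, columns):
    """Build a parameterized INSERT statement for one v2 table."""
    cols = ", ".join(columns)
    params = ", ".join(f"%({c})s" for c in columns)
    return f"INSERT INTO {table} ({cols}) VALUES ({params})"

def migrate(v1_dsn, v2_dsn, transforms):
    """Copy each table v1 -> v2, applying the per-table transform."""
    import psycopg2                            # deferred runtime dependency
    from psycopg2.extras import RealDictCursor
    with psycopg2.connect(v1_dsn) as v1, psycopg2.connect(v2_dsn) as v2:
        for table in TABLE_ORDER:
            src = "links" if table == "bookmarks" else table  # table rename
            with v1.cursor(cursor_factory=RealDictCursor) as rd, v2.cursor() as wr:
                rd.execute(f"SELECT * FROM {src}")
                for row in rd:
                    new = transforms[table](dict(row))
                    wr.execute(insert_sql(table, list(new)), new)
        v2.commit()
```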
**5.3 Transformation Rules Summary**
For all tables with missing updated_at: set to created_at.
For all tables with missing is_active: set to true.
For all tables with missing sort_order: assign sequential values (10,
20, 30) within their parent scope.
For projects.due_date: rename to target_date, no value change.
For links -> bookmarks: table rename, add updated_at = created_at.
For files.storage_path: rewrite from Supabase bucket URL to local
filesystem path.
For daily_focus: derive domain_id by joining through task_id to
tasks.domain_id.
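The default-filling rules above can be collected into one helper; apply_v2_defaults is an illustrative name, and the exact column set per table follows the mapping tables in Section 3.

```python
def apply_v2_defaults(row, index=0):
    """Fill columns that v1 lacks, per the transformation rules above."""
    row.setdefault("updated_at", row.get("created_at"))  # missing updated_at
    row.setdefault("is_active", True)                    # missing is_active
    row.setdefault("sort_order", (index + 1) * 10)       # 10, 20, 30, ...
    if "due_date" in row:                                # projects only:
        row["target_date"] = row.pop("due_date")         # rename, same value
    return row
```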
**5.4 Validation Checklist**
After migration, verify:
1. Row counts: v2 table row count >= v1 for every mapped table
2. UUID preservation: SELECT id FROM v2.domains EXCEPT SELECT id FROM
v1.domains should be empty
3. FK integrity: No orphaned foreign keys in v2
4. File accessibility: Every file in v2.files table can be served from
local storage
5. Note content: Spot-check 5 notes for body content integrity
6. Task hierarchy: Verify parent_id chains are intact
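Check 1 reduces to a dictionary comparison once per-table counts are gathered from both databases; count_report is an illustrative helper name, not part of the migration script.

```python
def count_report(v1_counts, v2_counts):
    """Return {table: (v1, v2)} for every table violating v2 >= v1."""
    return {
        t: (v1_counts[t], v2_counts.get(t, 0))
        for t in v1_counts
        if v2_counts.get(t, 0) < v1_counts[t]
    }
```

An empty report means check 1 passes; any entry names a table that lost rows.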
**6. Platform Migration Summary**
----------------------- ----------------------- ------------------------
**Component** **v1 (Old)** **v2 (New)**
Database Supabase (managed Self-hosted PostgreSQL
PostgreSQL) on Hetzner
Application Server Render (web service) Docker container on
Hetzner VM
Reverse Proxy Render (built-in) Nginx on defiant-01
File Storage Supabase Storage Local filesystem
(S3-backed) (/opt/lifeos/storage/)
Data Access Layer supabase Python client SQLAlchemy + psycopg2
(REST) (direct SQL)
Templating Jinja2 Jinja2 (unchanged)
Backend Framework FastAPI FastAPI (unchanged)
Frontend Vanilla HTML/CSS/JS Vanilla HTML/CSS/JS
(redesigned UI)
Dev/Prod Separation Separate Supabase Docker Compose with
projects dev/prod configs
Backups Manual pg_dump Automated cron pg_dump
to /opt/lifeos/backups/
Domain/SSL *.onrender.com lifeos.invixiom.com with
Let's Encrypt
----------------------- ----------------------- ------------------------
**7. Data Access Layer Migration**
Every Supabase client call in the v1 routers must be replaced. The
pattern is consistent:
# v1 (Supabase REST client)
data = supabase.table('tasks').select('*').eq('project_id', pid).execute()
rows = data.data
# v2 (SQLAlchemy / raw SQL)
rows = db.execute(
    text('SELECT * FROM tasks WHERE project_id = :pid'),
    {'pid': pid}
).fetchall()
This transformation applies to every router file. The Jinja2 templates
remain unchanged because they consume the same data shape (list of
dicts). The migration is purely at the data access layer.
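The v2 shape can be exercised standalone with the stdlib sqlite3 driver, which accepts the same named-parameter style; this is purely illustrative - the real routers go through SQLAlchemy's text() against PostgreSQL.

```python
import sqlite3

# In-memory stand-in for the v2 database; the real app uses PostgreSQL.
db = sqlite3.connect(":memory:")
db.row_factory = sqlite3.Row  # rows behave like dicts, as templates expect
db.execute("CREATE TABLE tasks (id INTEGER, project_id TEXT, title TEXT)")
db.execute("INSERT INTO tasks VALUES (1, 'p1', 'Write docs')")
db.execute("INSERT INTO tasks VALUES (2, 'p2', 'Ship v2')")

# Same pattern as the v2 snippet: named parameter, no string interpolation.
rows = db.execute(
    "SELECT * FROM tasks WHERE project_id = :pid",
    {"pid": "p1"},
).fetchall()
print([dict(r) for r in rows])  # the single matching p1 row
```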
**8. Rollback Plan**
v1 on Supabase/Render remains untouched and running throughout the
migration. If v2 has issues:
1. Point DNS back to Render (or simply use the .onrender.com URL
directly)
2. v1 database on Supabase is read-only but intact - no data was
deleted
3. Any data created in v2 after migration would need manual
reconciliation if rolling back
Recommended approach: run v1 and v2 in parallel for 1-2 weeks. Cut over
to v2 only after confirming data integrity and feature parity on the
critical path (tasks, focus, notes, capture).
Life OS v2 Migration Plan // Generated February 2026
-- =============================================================================
-- Life OS - R0 to R1 Data Migration (FIXED)
-- Source: lifeos_prod (R0 schema - actual)
-- Target: lifeos_dev (R1 schema)
--
-- PREREQUISITE: R1 schema must already be applied to lifeos_dev
-- RUN FROM: lifeos_dev database as postgres user
-- =============================================================================
CREATE EXTENSION IF NOT EXISTS dblink;
-- =============================================================================
-- 1. DOMAINS
-- R0: id, name, color, created_at
-- R1: + description, icon, sort_order, is_deleted, deleted_at, updated_at, search_vector
-- =============================================================================
INSERT INTO domains (id, name, color, sort_order, is_deleted, created_at, updated_at)
SELECT id, name, color,
(ROW_NUMBER() OVER (ORDER BY created_at))::INTEGER * 10,
false, created_at, created_at
FROM dblink('dbname=lifeos_prod', '
SELECT id, name, color, created_at FROM domains
') AS r0(id UUID, name TEXT, color TEXT, created_at TIMESTAMPTZ);
-- =============================================================================
-- 2. AREAS
-- R0: id, domain_id, name, description, status, created_at
-- R1: + icon, color, sort_order, is_deleted, deleted_at, updated_at, search_vector
-- =============================================================================
INSERT INTO areas (id, domain_id, name, description, status, sort_order, is_deleted, created_at, updated_at)
SELECT id, domain_id, name, description, COALESCE(status, 'active'),
(ROW_NUMBER() OVER (PARTITION BY domain_id ORDER BY created_at))::INTEGER * 10,
false, created_at, created_at
FROM dblink('dbname=lifeos_prod', '
SELECT id, domain_id, name, description, status, created_at FROM areas
') AS r0(id UUID, domain_id UUID, name TEXT, description TEXT, status TEXT, created_at TIMESTAMPTZ);
-- =============================================================================
-- 3. PROJECTS
-- R0: id, domain_id, name, description, status, priority, start_date,
-- target_date, completed_at, tags, created_at, updated_at, area_id
-- R1: + color, sort_order, is_deleted, deleted_at, search_vector
-- =============================================================================
INSERT INTO projects (id, domain_id, area_id, name, description, status, priority,
start_date, target_date, completed_at, tags, sort_order, is_deleted, created_at, updated_at)
SELECT id, domain_id, area_id, name, description,
COALESCE(status, 'active'), COALESCE(priority, 3),
start_date, target_date, completed_at, tags,
(ROW_NUMBER() OVER (PARTITION BY domain_id ORDER BY created_at))::INTEGER * 10,
false, created_at, COALESCE(updated_at, created_at)
FROM dblink('dbname=lifeos_prod', '
SELECT id, domain_id, area_id, name, description, status, priority,
start_date, target_date, completed_at, tags, created_at, updated_at
FROM projects
') AS r0(
id UUID, domain_id UUID, area_id UUID, name TEXT, description TEXT,
status TEXT, priority INTEGER, start_date DATE, target_date DATE,
completed_at TIMESTAMPTZ, tags TEXT[], created_at TIMESTAMPTZ, updated_at TIMESTAMPTZ
);
-- =============================================================================
-- 4. TASKS
-- R0: id, domain_id, project_id, parent_id, title, description, priority,
-- status, due_date, deadline, recurrence, tags, context, is_custom_context,
-- created_at, updated_at, completed_at
-- R1: + area_id, release_id, estimated_minutes, energy_required,
-- waiting_for_contact_id, waiting_since, import_batch_id,
-- sort_order, is_deleted, deleted_at, search_vector
-- NOTE: R0 has no area_id on tasks. Left NULL in R1.
-- =============================================================================
INSERT INTO tasks (id, domain_id, project_id, parent_id, title, description,
priority, status, due_date, deadline, recurrence, tags, context,
is_custom_context, sort_order, is_deleted, created_at, updated_at, completed_at)
SELECT id, domain_id, project_id, parent_id, title, description,
COALESCE(priority, 3), COALESCE(status, 'open'),
due_date, deadline, recurrence, tags, context,
COALESCE(is_custom_context, false),
(ROW_NUMBER() OVER (PARTITION BY project_id ORDER BY created_at))::INTEGER * 10,
false, created_at, COALESCE(updated_at, created_at), completed_at
FROM dblink('dbname=lifeos_prod', '
SELECT id, domain_id, project_id, parent_id, title, description,
priority, status, due_date, deadline, recurrence, tags, context,
is_custom_context, created_at, updated_at, completed_at
FROM tasks
') AS r0(
id UUID, domain_id UUID, project_id UUID, parent_id UUID,
title TEXT, description TEXT, priority INTEGER, status TEXT,
due_date DATE, deadline TIMESTAMPTZ, recurrence TEXT, tags TEXT[],
context TEXT, is_custom_context BOOLEAN,
created_at TIMESTAMPTZ, updated_at TIMESTAMPTZ, completed_at TIMESTAMPTZ
);
-- =============================================================================
-- 5. NOTES
-- R0: id, domain_id, project_id, task_id, title, body, tags,
-- created_at, updated_at, content_format (default 'markdown')
-- R1: + folder_id, meeting_id, is_meeting_note, sort_order,
-- is_deleted, deleted_at, search_vector
-- Transform: content_format 'markdown' -> 'rich'
-- NOTE: R0 task_id dropped (no equivalent in R1 notes).
-- =============================================================================
INSERT INTO notes (id, domain_id, project_id, title, body, content_format, tags,
is_meeting_note, sort_order, is_deleted, created_at, updated_at)
SELECT id, domain_id, project_id,
CASE WHEN title IS NULL OR title = '' THEN 'Untitled Note' ELSE title END,
body,
CASE WHEN content_format = 'markdown' THEN 'rich' ELSE COALESCE(content_format, 'rich') END,
tags, false,
(ROW_NUMBER() OVER (ORDER BY created_at))::INTEGER * 10,
false, created_at, COALESCE(updated_at, created_at)
FROM dblink('dbname=lifeos_prod', '
SELECT id, domain_id, project_id, title, body, content_format, tags,
created_at, updated_at
FROM notes
') AS r0(
id UUID, domain_id UUID, project_id UUID, title TEXT, body TEXT,
content_format TEXT, tags TEXT[],
created_at TIMESTAMPTZ, updated_at TIMESTAMPTZ
);
-- =============================================================================
-- 6. LINKS
-- R0: id, domain_id, project_id, task_id, label, url, description, created_at
-- R1: + area_id, tags, sort_order, is_deleted, deleted_at, updated_at, search_vector
-- NOTE: R0 task_id dropped (no equivalent in R1 links).
-- =============================================================================
INSERT INTO links (id, domain_id, project_id, label, url, description,
sort_order, is_deleted, created_at, updated_at)
SELECT id, domain_id, project_id, label, url, description,
(ROW_NUMBER() OVER (ORDER BY created_at))::INTEGER * 10,
false, created_at, created_at
FROM dblink('dbname=lifeos_prod', '
SELECT id, domain_id, project_id, label, url, description, created_at
FROM links
') AS r0(
id UUID, domain_id UUID, project_id UUID,
label TEXT, url TEXT, description TEXT, created_at TIMESTAMPTZ
);
-- =============================================================================
-- 7. DAILY FOCUS
-- R0: id, focus_date, task_id, slot, completed, note, created_at
-- R1: + sort_order, is_deleted, deleted_at
-- =============================================================================
INSERT INTO daily_focus (id, focus_date, task_id, slot, completed, note,
sort_order, is_deleted, created_at)
SELECT id, focus_date, task_id, slot, COALESCE(completed, false), note,
COALESCE(slot, (ROW_NUMBER() OVER (PARTITION BY focus_date ORDER BY created_at))::INTEGER) * 10,
false, created_at
FROM dblink('dbname=lifeos_prod', '
SELECT id, focus_date, task_id, slot, completed, note, created_at
FROM daily_focus
') AS r0(
id UUID, focus_date DATE, task_id UUID, slot INTEGER,
completed BOOLEAN, note TEXT, created_at TIMESTAMPTZ
);
-- =============================================================================
-- 8. CAPTURE
-- R0: id, raw_text, processed, task_id, created_at
-- R1: + converted_to_type, converted_to_id, area_id, project_id, list_id,
-- import_batch_id, sort_order, is_deleted, deleted_at
-- Map: R0 task_id -> R1 converted_to_type='task', converted_to_id=task_id
-- =============================================================================
INSERT INTO capture (id, raw_text, processed, converted_to_type, converted_to_id,
sort_order, is_deleted, created_at)
SELECT id, raw_text, COALESCE(processed, false),
CASE WHEN task_id IS NOT NULL THEN 'task' ELSE NULL END,
task_id,
(ROW_NUMBER() OVER (ORDER BY created_at))::INTEGER * 10,
false, created_at
FROM dblink('dbname=lifeos_prod', '
SELECT id, raw_text, processed, task_id, created_at FROM capture
') AS r0(
id UUID, raw_text TEXT, processed BOOLEAN, task_id UUID, created_at TIMESTAMPTZ
);
-- =============================================================================
-- 9. CONTEXT TYPES
-- R0: id (SERIAL), name, description, is_system
-- R1: id (SERIAL), value, label, description, is_system, sort_order, is_deleted
-- Map: R0.name -> R1.value, generate label from name via INITCAP
-- =============================================================================
DELETE FROM context_types;
INSERT INTO context_types (id, value, label, description, is_system, sort_order, is_deleted)
SELECT id, name,
INITCAP(REPLACE(name, '_', ' ')),
description, COALESCE(is_system, true),
id * 10,
false
FROM dblink('dbname=lifeos_prod', '
SELECT id, name, description, is_system FROM context_types
') AS r0(id INTEGER, name TEXT, description TEXT, is_system BOOLEAN);
SELECT setval('context_types_id_seq', GREATEST((SELECT MAX(id) FROM context_types), 1));
-- =============================================================================
-- VERIFICATION
-- =============================================================================
DO $$
DECLARE
r0_domains INTEGER;
r0_areas INTEGER;
r0_projects INTEGER;
r0_tasks INTEGER;
r0_notes INTEGER;
r0_links INTEGER;
r0_daily_focus INTEGER;
r0_capture INTEGER;
r0_context INTEGER;
r1_domains INTEGER;
r1_areas INTEGER;
r1_projects INTEGER;
r1_tasks INTEGER;
r1_notes INTEGER;
r1_links INTEGER;
r1_daily_focus INTEGER;
r1_capture INTEGER;
r1_context INTEGER;
BEGIN
SELECT count INTO r0_domains FROM dblink('dbname=lifeos_prod', 'SELECT count(*) FROM domains') AS t(count INTEGER);
SELECT count INTO r0_areas FROM dblink('dbname=lifeos_prod', 'SELECT count(*) FROM areas') AS t(count INTEGER);
SELECT count INTO r0_projects FROM dblink('dbname=lifeos_prod', 'SELECT count(*) FROM projects') AS t(count INTEGER);
SELECT count INTO r0_tasks FROM dblink('dbname=lifeos_prod', 'SELECT count(*) FROM tasks') AS t(count INTEGER);
SELECT count INTO r0_notes FROM dblink('dbname=lifeos_prod', 'SELECT count(*) FROM notes') AS t(count INTEGER);
SELECT count INTO r0_links FROM dblink('dbname=lifeos_prod', 'SELECT count(*) FROM links') AS t(count INTEGER);
SELECT count INTO r0_daily_focus FROM dblink('dbname=lifeos_prod', 'SELECT count(*) FROM daily_focus') AS t(count INTEGER);
SELECT count INTO r0_capture FROM dblink('dbname=lifeos_prod', 'SELECT count(*) FROM capture') AS t(count INTEGER);
SELECT count INTO r0_context FROM dblink('dbname=lifeos_prod', 'SELECT count(*) FROM context_types') AS t(count INTEGER);
SELECT count(*) INTO r1_domains FROM domains;
SELECT count(*) INTO r1_areas FROM areas;
SELECT count(*) INTO r1_projects FROM projects;
SELECT count(*) INTO r1_tasks FROM tasks;
SELECT count(*) INTO r1_notes FROM notes;
SELECT count(*) INTO r1_links FROM links;
SELECT count(*) INTO r1_daily_focus FROM daily_focus;
SELECT count(*) INTO r1_capture FROM capture;
SELECT count(*) INTO r1_context FROM context_types;
RAISE NOTICE '=== MIGRATION VERIFICATION ===';
RAISE NOTICE 'domains: R0=% R1=% %', r0_domains, r1_domains, CASE WHEN r0_domains = r1_domains THEN 'OK' ELSE 'MISMATCH' END;
RAISE NOTICE 'areas: R0=% R1=% %', r0_areas, r1_areas, CASE WHEN r0_areas = r1_areas THEN 'OK' ELSE 'MISMATCH' END;
RAISE NOTICE 'projects: R0=% R1=% %', r0_projects, r1_projects, CASE WHEN r0_projects = r1_projects THEN 'OK' ELSE 'MISMATCH' END;
RAISE NOTICE 'tasks: R0=% R1=% %', r0_tasks, r1_tasks, CASE WHEN r0_tasks = r1_tasks THEN 'OK' ELSE 'MISMATCH' END;
RAISE NOTICE 'notes: R0=% R1=% %', r0_notes, r1_notes, CASE WHEN r0_notes = r1_notes THEN 'OK' ELSE 'MISMATCH' END;
RAISE NOTICE 'links: R0=% R1=% %', r0_links, r1_links, CASE WHEN r0_links = r1_links THEN 'OK' ELSE 'MISMATCH' END;
RAISE NOTICE 'daily_focus: R0=% R1=% %', r0_daily_focus, r1_daily_focus, CASE WHEN r0_daily_focus = r1_daily_focus THEN 'OK' ELSE 'MISMATCH' END;
RAISE NOTICE 'capture: R0=% R1=% %', r0_capture, r1_capture, CASE WHEN r0_capture = r1_capture THEN 'OK' ELSE 'MISMATCH' END;
RAISE NOTICE 'context_types:R0=% R1=% %', r0_context, r1_context, CASE WHEN r0_context = r1_context THEN 'OK' ELSE 'MISMATCH' END;
RAISE NOTICE '=== END VERIFICATION ===';
END $$;
-- =============================================================================
-- Life OS - Release 1 COMPLETE Schema
-- Self-hosted PostgreSQL 16 on defiant-01 (Hetzner)
-- Database: lifeos_dev
-- Generated from Architecture Design Document v2.0
-- =============================================================================
-- Extensions
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
-- =============================================================================
-- LOOKUP TABLE: Context Types
-- =============================================================================
CREATE TABLE context_types (
id SERIAL PRIMARY KEY,
value TEXT NOT NULL UNIQUE,
label TEXT NOT NULL,
description TEXT,
is_system BOOLEAN NOT NULL DEFAULT true,
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- CORE HIERARCHY
-- =============================================================================
CREATE TABLE domains (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
color TEXT,
description TEXT,
icon TEXT,
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
search_vector TSVECTOR,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE areas (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
domain_id UUID NOT NULL REFERENCES domains(id) ON DELETE CASCADE,
name TEXT NOT NULL,
description TEXT,
icon TEXT,
color TEXT,
status TEXT NOT NULL DEFAULT 'active',
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
search_vector TSVECTOR,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE projects (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
domain_id UUID NOT NULL REFERENCES domains(id) ON DELETE CASCADE,
area_id UUID REFERENCES areas(id) ON DELETE SET NULL,
name TEXT NOT NULL,
description TEXT,
status TEXT NOT NULL DEFAULT 'active',
priority INTEGER NOT NULL DEFAULT 3,
start_date DATE,
target_date DATE,
completed_at TIMESTAMPTZ,
color TEXT,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
search_vector TSVECTOR,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- Forward-declare releases for tasks.release_id FK
CREATE TABLE releases (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
version_label TEXT,
description TEXT,
status TEXT NOT NULL DEFAULT 'planned',
target_date DATE,
released_at DATE,
release_notes TEXT,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
search_vector TSVECTOR,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- Forward-declare contacts for tasks.waiting_for_contact_id FK
CREATE TABLE contacts (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
first_name TEXT NOT NULL,
last_name TEXT,
company TEXT,
role TEXT,
email TEXT,
phone TEXT,
notes TEXT,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
search_vector TSVECTOR,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE tasks (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
domain_id UUID REFERENCES domains(id) ON DELETE CASCADE,
area_id UUID REFERENCES areas(id) ON DELETE SET NULL,
project_id UUID REFERENCES projects(id) ON DELETE SET NULL,
release_id UUID REFERENCES releases(id) ON DELETE SET NULL,
parent_id UUID REFERENCES tasks(id) ON DELETE SET NULL,
title TEXT NOT NULL,
description TEXT,
priority INTEGER NOT NULL DEFAULT 3,
status TEXT NOT NULL DEFAULT 'open',
due_date DATE,
deadline TIMESTAMPTZ,
recurrence TEXT,
estimated_minutes INTEGER,
energy_required TEXT,
context TEXT,
is_custom_context BOOLEAN NOT NULL DEFAULT false,
waiting_for_contact_id UUID REFERENCES contacts(id) ON DELETE SET NULL,
waiting_since DATE,
import_batch_id UUID,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
completed_at TIMESTAMPTZ,
search_vector TSVECTOR,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- KNOWLEDGE MANAGEMENT
-- =============================================================================
CREATE TABLE note_folders (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
parent_id UUID REFERENCES note_folders(id) ON DELETE CASCADE,
name TEXT NOT NULL,
auto_generated BOOLEAN NOT NULL DEFAULT false,
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- Forward-declare meetings for notes.meeting_id FK
CREATE TABLE meetings (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
parent_id UUID REFERENCES meetings(id) ON DELETE SET NULL,
title TEXT NOT NULL,
meeting_date DATE NOT NULL,
start_at TIMESTAMPTZ,
end_at TIMESTAMPTZ,
location TEXT,
status TEXT NOT NULL DEFAULT 'scheduled',
priority INTEGER,
recurrence TEXT,
agenda TEXT,
transcript TEXT,
notes_body TEXT,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
search_vector TSVECTOR,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE notes (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
domain_id UUID REFERENCES domains(id) ON DELETE CASCADE,
project_id UUID REFERENCES projects(id) ON DELETE SET NULL,
folder_id UUID REFERENCES note_folders(id) ON DELETE SET NULL,
meeting_id UUID REFERENCES meetings(id) ON DELETE SET NULL,
title TEXT NOT NULL,
body TEXT,
content_format TEXT NOT NULL DEFAULT 'rich',
is_meeting_note BOOLEAN NOT NULL DEFAULT false,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
search_vector TSVECTOR,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE decisions (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
title TEXT NOT NULL,
rationale TEXT,
status TEXT NOT NULL DEFAULT 'proposed',
impact TEXT NOT NULL DEFAULT 'medium',
decided_at DATE,
meeting_id UUID REFERENCES meetings(id) ON DELETE SET NULL,
superseded_by_id UUID REFERENCES decisions(id) ON DELETE SET NULL,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
search_vector TSVECTOR,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE lists (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
domain_id UUID REFERENCES domains(id) ON DELETE CASCADE,
area_id UUID REFERENCES areas(id) ON DELETE SET NULL,
project_id UUID REFERENCES projects(id) ON DELETE SET NULL,
name TEXT NOT NULL,
list_type TEXT NOT NULL DEFAULT 'checklist',
description TEXT,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
search_vector TSVECTOR,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE list_items (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
list_id UUID NOT NULL REFERENCES lists(id) ON DELETE CASCADE,
parent_item_id UUID REFERENCES list_items(id) ON DELETE SET NULL,
content TEXT NOT NULL,
completed BOOLEAN NOT NULL DEFAULT false,
completed_at TIMESTAMPTZ,
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE links (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
domain_id UUID REFERENCES domains(id) ON DELETE CASCADE,
area_id UUID REFERENCES areas(id) ON DELETE SET NULL,
project_id UUID REFERENCES projects(id) ON DELETE SET NULL,
label TEXT NOT NULL,
url TEXT NOT NULL,
description TEXT,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
search_vector TSVECTOR,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE files (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
filename TEXT NOT NULL,
original_filename TEXT NOT NULL,
storage_path TEXT NOT NULL,
mime_type TEXT,
size_bytes INTEGER,
description TEXT,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
search_vector TSVECTOR,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- SYSTEM LEVEL: Appointments
-- =============================================================================
CREATE TABLE appointments (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
title TEXT NOT NULL,
description TEXT,
location TEXT,
start_at TIMESTAMPTZ NOT NULL,
end_at TIMESTAMPTZ,
all_day BOOLEAN NOT NULL DEFAULT false,
recurrence TEXT,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
search_vector TSVECTOR,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- SYSTEM LEVEL: Milestones
-- =============================================================================
CREATE TABLE milestones (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
release_id UUID REFERENCES releases(id) ON DELETE SET NULL,
project_id UUID REFERENCES projects(id) ON DELETE SET NULL,
name TEXT NOT NULL,
target_date DATE NOT NULL,
completed_at DATE,
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- SYSTEM LEVEL: Processes
-- =============================================================================
CREATE TABLE processes (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
description TEXT,
process_type TEXT NOT NULL DEFAULT 'checklist',
category TEXT,
status TEXT NOT NULL DEFAULT 'draft',
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
search_vector TSVECTOR,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE process_steps (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
process_id UUID NOT NULL REFERENCES processes(id) ON DELETE CASCADE,
title TEXT NOT NULL,
instructions TEXT,
expected_output TEXT,
estimated_days INTEGER,
context TEXT,
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE process_runs (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
process_id UUID NOT NULL REFERENCES processes(id) ON DELETE CASCADE,
title TEXT NOT NULL,
status TEXT NOT NULL DEFAULT 'not_started',
process_type TEXT NOT NULL,
task_generation TEXT NOT NULL DEFAULT 'all_at_once',
project_id UUID REFERENCES projects(id) ON DELETE SET NULL,
contact_id UUID REFERENCES contacts(id) ON DELETE SET NULL,
started_at TIMESTAMPTZ,
completed_at TIMESTAMPTZ,
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE process_run_steps (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
run_id UUID NOT NULL REFERENCES process_runs(id) ON DELETE CASCADE,
title TEXT NOT NULL,
instructions TEXT,
status TEXT NOT NULL DEFAULT 'pending',
completed_by_id UUID REFERENCES contacts(id) ON DELETE SET NULL,
completed_at TIMESTAMPTZ,
notes TEXT,
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
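-- Example (illustrative sketch, not part of the schema): starting a run by
-- snapshotting a process's steps into process_run_steps, using a
-- data-modifying CTE. The UUID below is a placeholder.
WITH new_run AS (
    INSERT INTO process_runs (process_id, title, process_type)
    SELECT id, name || ' run ' || to_char(now(), 'YYYY-MM-DD'), process_type
    FROM processes
    WHERE id = '00000000-0000-0000-0000-000000000001'
    RETURNING id, process_id
)
INSERT INTO process_run_steps (run_id, title, instructions, sort_order)
SELECT new_run.id, ps.title, ps.instructions, ps.sort_order
FROM new_run
JOIN process_steps ps
    ON ps.process_id = new_run.process_id AND NOT ps.is_deleted;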
-- =============================================================================
-- SYSTEM LEVEL: Daily Focus
-- =============================================================================
CREATE TABLE daily_focus (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
focus_date DATE NOT NULL,
task_id UUID NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
slot INTEGER,
completed BOOLEAN NOT NULL DEFAULT false,
note TEXT,
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- SYSTEM LEVEL: Capture Queue
-- =============================================================================
CREATE TABLE capture (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
raw_text TEXT NOT NULL,
processed BOOLEAN NOT NULL DEFAULT false,
converted_to_type TEXT,
converted_to_id UUID,
area_id UUID REFERENCES areas(id) ON DELETE SET NULL,
project_id UUID REFERENCES projects(id) ON DELETE SET NULL,
list_id UUID REFERENCES lists(id) ON DELETE SET NULL,
import_batch_id UUID,
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
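-- Example (illustrative): converting a processed capture item. The
-- converted_to_type / converted_to_id pair is a polymorphic reference with
-- no foreign key, so the application must keep it consistent with the
-- target table. UUIDs below are placeholders.
UPDATE capture
SET processed = true,
    converted_to_type = 'task',
    converted_to_id = '00000000-0000-0000-0000-000000000002'
WHERE id = '00000000-0000-0000-0000-000000000003';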
-- =============================================================================
-- SYSTEM LEVEL: Task Templates
-- =============================================================================
CREATE TABLE task_templates (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
description TEXT,
priority INTEGER,
estimated_minutes INTEGER,
energy_required TEXT,
context TEXT,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE task_template_items (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
template_id UUID NOT NULL REFERENCES task_templates(id) ON DELETE CASCADE,
title TEXT NOT NULL,
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- TIME MANAGEMENT
-- =============================================================================
CREATE TABLE time_entries (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
task_id UUID NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
start_at TIMESTAMPTZ NOT NULL,
end_at TIMESTAMPTZ,
duration_minutes INTEGER,
notes TEXT,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
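-- Example (illustrative): stopping an open timer and deriving
-- duration_minutes from the elapsed interval, rounding up to whole
-- minutes. The UUID is a placeholder.
UPDATE time_entries
SET end_at = now(),
    duration_minutes = CEIL(EXTRACT(EPOCH FROM (now() - start_at)) / 60)::INTEGER
WHERE id = '00000000-0000-0000-0000-000000000004'
  AND end_at IS NULL;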
CREATE TABLE time_blocks (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
task_id UUID REFERENCES tasks(id) ON DELETE SET NULL,
title TEXT NOT NULL,
context TEXT,
energy TEXT,
start_at TIMESTAMPTZ NOT NULL,
end_at TIMESTAMPTZ NOT NULL,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE time_budgets (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
domain_id UUID NOT NULL REFERENCES domains(id) ON DELETE CASCADE,
weekly_hours DECIMAL NOT NULL,
effective_from DATE NOT NULL,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- SYSTEM LEVEL: Weblink Directory
-- =============================================================================
CREATE TABLE weblink_folders (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
parent_id UUID REFERENCES weblink_folders(id) ON DELETE CASCADE,
name TEXT NOT NULL,
auto_generated BOOLEAN NOT NULL DEFAULT false,
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE weblinks (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
label TEXT NOT NULL,
url TEXT NOT NULL,
description TEXT,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
search_vector TSVECTOR,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- SYSTEM LEVEL: Reminders (polymorphic)
-- =============================================================================
CREATE TABLE reminders (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
entity_type TEXT NOT NULL,
entity_id UUID NOT NULL,
remind_at TIMESTAMPTZ NOT NULL,
note TEXT,
delivered BOOLEAN NOT NULL DEFAULT false,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- UNIVERSAL: Dependencies (polymorphic DAG)
-- =============================================================================
CREATE TABLE dependencies (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
blocker_type TEXT NOT NULL,
blocker_id UUID NOT NULL,
dependent_type TEXT NOT NULL,
dependent_id UUID NOT NULL,
dependency_type TEXT NOT NULL DEFAULT 'finish_to_start',
lag_days INTEGER NOT NULL DEFAULT 0,
note TEXT,
is_deleted BOOLEAN NOT NULL DEFAULT false,
deleted_at TIMESTAMPTZ,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
UNIQUE (blocker_type, blocker_id, dependent_type, dependent_id, dependency_type),
CHECK (NOT (blocker_type = dependent_type AND blocker_id = dependent_id))
);
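-- Example (illustrative): recording that one task blocks another, then
-- listing the blockers of the dependent side. The CHECK above forbids
-- self-dependency; UUIDs are placeholders.
INSERT INTO dependencies (blocker_type, blocker_id, dependent_type, dependent_id)
VALUES ('task', '00000000-0000-0000-0000-000000000005',
        'task', '00000000-0000-0000-0000-000000000006');
SELECT blocker_type, blocker_id, dependency_type, lag_days
FROM dependencies
WHERE dependent_type = 'task'
  AND dependent_id = '00000000-0000-0000-0000-000000000006'
  AND NOT is_deleted;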
-- =============================================================================
-- JUNCTION TABLES
-- =============================================================================
-- Notes <-> Projects (M2M)
CREATE TABLE note_projects (
note_id UUID NOT NULL REFERENCES notes(id) ON DELETE CASCADE,
project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
is_primary BOOLEAN NOT NULL DEFAULT false,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (note_id, project_id)
);
-- Notes <-> Notes (wiki graph)
CREATE TABLE note_links (
source_note_id UUID NOT NULL REFERENCES notes(id) ON DELETE CASCADE,
target_note_id UUID NOT NULL REFERENCES notes(id) ON DELETE CASCADE,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (source_note_id, target_note_id)
);
-- Files <-> any entity (polymorphic M2M)
CREATE TABLE file_mappings (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
file_id UUID NOT NULL REFERENCES files(id) ON DELETE CASCADE,
context_type TEXT NOT NULL,
context_id UUID NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
UNIQUE (file_id, context_type, context_id)
);
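-- Example (illustrative): attaching a file to a task through the polymorphic
-- mapping, then listing that task's attachments. UUIDs are placeholders.
INSERT INTO file_mappings (file_id, context_type, context_id)
VALUES ('00000000-0000-0000-0000-000000000007',
        'task', '00000000-0000-0000-0000-000000000008');
SELECT f.original_filename, f.mime_type, f.size_bytes
FROM files f
JOIN file_mappings fm ON fm.file_id = f.id
WHERE fm.context_type = 'task'
  AND fm.context_id = '00000000-0000-0000-0000-000000000008';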
-- Releases <-> Projects (M2M)
CREATE TABLE release_projects (
release_id UUID NOT NULL REFERENCES releases(id) ON DELETE CASCADE,
project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (release_id, project_id)
);
-- Releases <-> Domains (M2M)
CREATE TABLE release_domains (
release_id UUID NOT NULL REFERENCES releases(id) ON DELETE CASCADE,
domain_id UUID NOT NULL REFERENCES domains(id) ON DELETE CASCADE,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (release_id, domain_id)
);
-- Contacts <-> Tasks
CREATE TABLE contact_tasks (
contact_id UUID NOT NULL REFERENCES contacts(id) ON DELETE CASCADE,
task_id UUID NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
role TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (contact_id, task_id)
);
-- Contacts <-> Projects
CREATE TABLE contact_projects (
contact_id UUID NOT NULL REFERENCES contacts(id) ON DELETE CASCADE,
project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
role TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (contact_id, project_id)
);
-- Contacts <-> Lists
CREATE TABLE contact_lists (
contact_id UUID NOT NULL REFERENCES contacts(id) ON DELETE CASCADE,
list_id UUID NOT NULL REFERENCES lists(id) ON DELETE CASCADE,
role TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (contact_id, list_id)
);
-- Contacts <-> List Items
CREATE TABLE contact_list_items (
contact_id UUID NOT NULL REFERENCES contacts(id) ON DELETE CASCADE,
list_item_id UUID NOT NULL REFERENCES list_items(id) ON DELETE CASCADE,
role TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (contact_id, list_item_id)
);
-- Contacts <-> Appointments
CREATE TABLE contact_appointments (
contact_id UUID NOT NULL REFERENCES contacts(id) ON DELETE CASCADE,
appointment_id UUID NOT NULL REFERENCES appointments(id) ON DELETE CASCADE,
role TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (contact_id, appointment_id)
);
-- Contacts <-> Meetings
CREATE TABLE contact_meetings (
contact_id UUID NOT NULL REFERENCES contacts(id) ON DELETE CASCADE,
meeting_id UUID NOT NULL REFERENCES meetings(id) ON DELETE CASCADE,
role TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (contact_id, meeting_id)
);
-- Decisions <-> Projects
CREATE TABLE decision_projects (
decision_id UUID NOT NULL REFERENCES decisions(id) ON DELETE CASCADE,
project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (decision_id, project_id)
);
-- Decisions <-> Contacts
CREATE TABLE decision_contacts (
decision_id UUID NOT NULL REFERENCES decisions(id) ON DELETE CASCADE,
contact_id UUID NOT NULL REFERENCES contacts(id) ON DELETE CASCADE,
role TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (decision_id, contact_id)
);
-- Meetings <-> Tasks
CREATE TABLE meeting_tasks (
meeting_id UUID NOT NULL REFERENCES meetings(id) ON DELETE CASCADE,
task_id UUID NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
source TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (meeting_id, task_id)
);
-- Process Run Steps <-> Tasks
CREATE TABLE process_run_tasks (
run_step_id UUID NOT NULL REFERENCES process_run_steps(id) ON DELETE CASCADE,
task_id UUID NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (run_step_id, task_id)
);
-- Weblinks <-> Folders (M2M)
CREATE TABLE folder_weblinks (
folder_id UUID NOT NULL REFERENCES weblink_folders(id) ON DELETE CASCADE,
weblink_id UUID NOT NULL REFERENCES weblinks(id) ON DELETE CASCADE,
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (folder_id, weblink_id)
);
-- =============================================================================
-- INDEXES
-- =============================================================================
-- Sort order indexes
CREATE INDEX idx_domains_sort ON domains(sort_order);
CREATE INDEX idx_areas_sort ON areas(domain_id, sort_order);
CREATE INDEX idx_projects_sort ON projects(domain_id, sort_order);
CREATE INDEX idx_projects_area_sort ON projects(area_id, sort_order);
CREATE INDEX idx_tasks_project_sort ON tasks(project_id, sort_order);
CREATE INDEX idx_tasks_parent_sort ON tasks(parent_id, sort_order);
CREATE INDEX idx_tasks_domain_sort ON tasks(domain_id, sort_order);
CREATE INDEX idx_list_items_sort ON list_items(list_id, sort_order);
CREATE INDEX idx_list_items_parent_sort ON list_items(parent_item_id, sort_order);
CREATE INDEX idx_weblink_folders_sort ON weblink_folders(parent_id, sort_order);
-- Lookup indexes
CREATE INDEX idx_tasks_status ON tasks(status);
CREATE INDEX idx_tasks_due_date ON tasks(due_date);
CREATE INDEX idx_tasks_priority ON tasks(priority);
CREATE INDEX idx_projects_status ON projects(status);
CREATE INDEX idx_daily_focus_date ON daily_focus(focus_date);
CREATE INDEX idx_appointments_start ON appointments(start_at);
CREATE INDEX idx_capture_processed ON capture(processed);
CREATE INDEX idx_file_mappings_context ON file_mappings(context_type, context_id);
CREATE INDEX idx_dependencies_blocker ON dependencies(blocker_type, blocker_id);
CREATE INDEX idx_dependencies_dependent ON dependencies(dependent_type, dependent_id);
CREATE INDEX idx_reminders_entity ON reminders(entity_type, entity_id);
CREATE INDEX idx_time_entries_task ON time_entries(task_id);
CREATE INDEX idx_meetings_date ON meetings(meeting_date);
-- Full-text search GIN indexes
CREATE INDEX idx_domains_search ON domains USING GIN(search_vector);
CREATE INDEX idx_areas_search ON areas USING GIN(search_vector);
CREATE INDEX idx_projects_search ON projects USING GIN(search_vector);
CREATE INDEX idx_tasks_search ON tasks USING GIN(search_vector);
CREATE INDEX idx_notes_search ON notes USING GIN(search_vector);
CREATE INDEX idx_contacts_search ON contacts USING GIN(search_vector);
CREATE INDEX idx_meetings_search ON meetings USING GIN(search_vector);
CREATE INDEX idx_decisions_search ON decisions USING GIN(search_vector);
CREATE INDEX idx_lists_search ON lists USING GIN(search_vector);
CREATE INDEX idx_links_search ON links USING GIN(search_vector);
CREATE INDEX idx_files_search ON files USING GIN(search_vector);
CREATE INDEX idx_weblinks_search ON weblinks USING GIN(search_vector);
CREATE INDEX idx_processes_search ON processes USING GIN(search_vector);
CREATE INDEX idx_appointments_search ON appointments USING GIN(search_vector);
-- =============================================================================
-- SEARCH VECTOR TRIGGERS
-- =============================================================================
CREATE OR REPLACE FUNCTION update_search_vector() RETURNS trigger AS $$
BEGIN
NEW.search_vector := to_tsvector('pg_catalog.english',
coalesce(NEW.title, '') || ' ' ||
coalesce(NEW.description, '') || ' ' ||
coalesce(NEW.name, '') || ' ' ||
coalesce(array_to_string(NEW.tags, ' '), '')
);
RETURN NEW;
EXCEPTION WHEN undefined_column THEN
    -- Column missing on this table: leave search_vector untouched rather
    -- than fail. Tables with differing columns use the dedicated
    -- per-table functions below instead of this generic helper.
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
-- Per-table triggers with correct columns
CREATE OR REPLACE FUNCTION update_domains_search() RETURNS trigger AS $$
BEGIN
NEW.search_vector := to_tsvector('pg_catalog.english', coalesce(NEW.name, ''));
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_domains_search BEFORE INSERT OR UPDATE ON domains
FOR EACH ROW EXECUTE FUNCTION update_domains_search();
CREATE OR REPLACE FUNCTION update_areas_search() RETURNS trigger AS $$
BEGIN
NEW.search_vector := to_tsvector('pg_catalog.english',
coalesce(NEW.name, '') || ' ' || coalesce(NEW.description, ''));
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_areas_search BEFORE INSERT OR UPDATE ON areas
FOR EACH ROW EXECUTE FUNCTION update_areas_search();
CREATE OR REPLACE FUNCTION update_projects_search() RETURNS trigger AS $$
BEGIN
NEW.search_vector := to_tsvector('pg_catalog.english',
coalesce(NEW.name, '') || ' ' || coalesce(NEW.description, '') || ' ' ||
coalesce(array_to_string(NEW.tags, ' '), ''));
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_projects_search BEFORE INSERT OR UPDATE ON projects
FOR EACH ROW EXECUTE FUNCTION update_projects_search();
CREATE OR REPLACE FUNCTION update_tasks_search() RETURNS trigger AS $$
BEGIN
NEW.search_vector := to_tsvector('pg_catalog.english',
coalesce(NEW.title, '') || ' ' || coalesce(NEW.description, '') || ' ' ||
coalesce(array_to_string(NEW.tags, ' '), ''));
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_tasks_search BEFORE INSERT OR UPDATE ON tasks
FOR EACH ROW EXECUTE FUNCTION update_tasks_search();
CREATE OR REPLACE FUNCTION update_notes_search() RETURNS trigger AS $$
BEGIN
NEW.search_vector := to_tsvector('pg_catalog.english',
coalesce(NEW.title, '') || ' ' || coalesce(NEW.body, '') || ' ' ||
coalesce(array_to_string(NEW.tags, ' '), ''));
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_notes_search BEFORE INSERT OR UPDATE ON notes
FOR EACH ROW EXECUTE FUNCTION update_notes_search();
CREATE OR REPLACE FUNCTION update_contacts_search() RETURNS trigger AS $$
BEGIN
NEW.search_vector := to_tsvector('pg_catalog.english',
coalesce(NEW.first_name, '') || ' ' || coalesce(NEW.last_name, '') || ' ' ||
coalesce(NEW.company, '') || ' ' || coalesce(NEW.email, '') || ' ' ||
coalesce(array_to_string(NEW.tags, ' '), ''));
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_contacts_search BEFORE INSERT OR UPDATE ON contacts
FOR EACH ROW EXECUTE FUNCTION update_contacts_search();
CREATE OR REPLACE FUNCTION update_meetings_search() RETURNS trigger AS $$
BEGIN
NEW.search_vector := to_tsvector('pg_catalog.english',
coalesce(NEW.title, '') || ' ' || coalesce(NEW.agenda, '') || ' ' ||
coalesce(NEW.notes_body, '') || ' ' ||
coalesce(array_to_string(NEW.tags, ' '), ''));
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_meetings_search BEFORE INSERT OR UPDATE ON meetings
FOR EACH ROW EXECUTE FUNCTION update_meetings_search();
CREATE OR REPLACE FUNCTION update_decisions_search() RETURNS trigger AS $$
BEGIN
NEW.search_vector := to_tsvector('pg_catalog.english',
coalesce(NEW.title, '') || ' ' || coalesce(NEW.rationale, '') || ' ' ||
coalesce(array_to_string(NEW.tags, ' '), ''));
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_decisions_search BEFORE INSERT OR UPDATE ON decisions
FOR EACH ROW EXECUTE FUNCTION update_decisions_search();
CREATE OR REPLACE FUNCTION update_lists_search() RETURNS trigger AS $$
BEGIN
NEW.search_vector := to_tsvector('pg_catalog.english',
coalesce(NEW.name, '') || ' ' || coalesce(NEW.description, '') || ' ' ||
coalesce(array_to_string(NEW.tags, ' '), ''));
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_lists_search BEFORE INSERT OR UPDATE ON lists
FOR EACH ROW EXECUTE FUNCTION update_lists_search();
CREATE OR REPLACE FUNCTION update_links_search() RETURNS trigger AS $$
BEGIN
NEW.search_vector := to_tsvector('pg_catalog.english',
coalesce(NEW.label, '') || ' ' || coalesce(NEW.url, '') || ' ' ||
coalesce(NEW.description, '') || ' ' ||
coalesce(array_to_string(NEW.tags, ' '), ''));
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_links_search BEFORE INSERT OR UPDATE ON links
FOR EACH ROW EXECUTE FUNCTION update_links_search();
CREATE OR REPLACE FUNCTION update_files_search() RETURNS trigger AS $$
BEGIN
NEW.search_vector := to_tsvector('pg_catalog.english',
coalesce(NEW.original_filename, '') || ' ' || coalesce(NEW.description, '') || ' ' ||
coalesce(array_to_string(NEW.tags, ' '), ''));
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_files_search BEFORE INSERT OR UPDATE ON files
FOR EACH ROW EXECUTE FUNCTION update_files_search();
CREATE OR REPLACE FUNCTION update_weblinks_search() RETURNS trigger AS $$
BEGIN
NEW.search_vector := to_tsvector('pg_catalog.english',
coalesce(NEW.label, '') || ' ' || coalesce(NEW.url, '') || ' ' ||
coalesce(NEW.description, '') || ' ' ||
coalesce(array_to_string(NEW.tags, ' '), ''));
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_weblinks_search BEFORE INSERT OR UPDATE ON weblinks
FOR EACH ROW EXECUTE FUNCTION update_weblinks_search();
CREATE OR REPLACE FUNCTION update_processes_search() RETURNS trigger AS $$
BEGIN
NEW.search_vector := to_tsvector('pg_catalog.english',
coalesce(NEW.name, '') || ' ' || coalesce(NEW.description, '') || ' ' ||
coalesce(array_to_string(NEW.tags, ' '), ''));
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_processes_search BEFORE INSERT OR UPDATE ON processes
FOR EACH ROW EXECUTE FUNCTION update_processes_search();
CREATE OR REPLACE FUNCTION update_appointments_search() RETURNS trigger AS $$
BEGIN
NEW.search_vector := to_tsvector('pg_catalog.english',
coalesce(NEW.title, '') || ' ' || coalesce(NEW.description, '') || ' ' ||
coalesce(NEW.location, '') || ' ' ||
coalesce(array_to_string(NEW.tags, ' '), ''));
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_appointments_search BEFORE INSERT OR UPDATE ON appointments
FOR EACH ROW EXECUTE FUNCTION update_appointments_search();
CREATE OR REPLACE FUNCTION update_releases_search() RETURNS trigger AS $$
BEGIN
NEW.search_vector := to_tsvector('pg_catalog.english',
coalesce(NEW.name, '') || ' ' || coalesce(NEW.description, '') || ' ' ||
coalesce(NEW.version_label, '') || ' ' ||
coalesce(array_to_string(NEW.tags, ' '), ''));
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER trg_releases_search BEFORE INSERT OR UPDATE ON releases
FOR EACH ROW EXECUTE FUNCTION update_releases_search();
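-- Example (illustrative): querying one of the trigger-maintained
-- search_vector columns. websearch_to_tsquery is available from
-- PostgreSQL 11 onward; ts_rank orders matches by relevance.
SELECT id, title
FROM tasks
WHERE search_vector @@ websearch_to_tsquery('english', 'invoice follow-up')
ORDER BY ts_rank(search_vector,
                 websearch_to_tsquery('english', 'invoice follow-up')) DESC
LIMIT 20;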
-- =============================================================================
-- SEED DATA: Context Types
-- =============================================================================
INSERT INTO context_types (value, label, is_system, sort_order) VALUES
('deep_work', 'Deep Work', true, 10),
('quick', 'Quick', true, 20),
('waiting', 'Waiting', true, 30),
('someday', 'Someday', true, 40),
('meeting', 'Meeting', true, 50),
('errand', 'Errand', true, 60);


@@ -0,0 +1,358 @@
-- =============================================================================
-- Life OS - Release 1 Schema
-- Self-hosted PostgreSQL on defiant-01 (Hetzner)
-- =============================================================================
CREATE EXTENSION IF NOT EXISTS "pgcrypto";
-- =============================================================================
-- SYSTEM LEVEL: Context Types
-- =============================================================================
CREATE TABLE context_types (
id SERIAL PRIMARY KEY,
name TEXT NOT NULL UNIQUE,
label TEXT NOT NULL,
description TEXT,
is_system BOOLEAN NOT NULL DEFAULT true,
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- ORGANIZATIONAL HIERARCHY
-- =============================================================================
CREATE TABLE domains (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
color TEXT,
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE areas (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
domain_id UUID NOT NULL REFERENCES domains(id) ON DELETE CASCADE,
name TEXT NOT NULL,
description TEXT,
status TEXT NOT NULL DEFAULT 'active',
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE projects (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
domain_id UUID NOT NULL REFERENCES domains(id) ON DELETE CASCADE,
area_id UUID REFERENCES areas(id) ON DELETE SET NULL,
name TEXT NOT NULL,
description TEXT,
status TEXT NOT NULL DEFAULT 'active',
priority INTEGER NOT NULL DEFAULT 3,
start_date DATE,
target_date DATE,
completed_at TIMESTAMPTZ,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE tasks (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
domain_id UUID NOT NULL REFERENCES domains(id) ON DELETE CASCADE,
area_id UUID REFERENCES areas(id) ON DELETE SET NULL,
project_id UUID REFERENCES projects(id) ON DELETE SET NULL,
parent_id UUID REFERENCES tasks(id) ON DELETE SET NULL,
title TEXT NOT NULL,
description TEXT,
priority INTEGER NOT NULL DEFAULT 3,
status TEXT NOT NULL DEFAULT 'open',
due_date DATE,
deadline TIMESTAMPTZ,
recurrence TEXT,
tags TEXT[],
context TEXT,
is_custom_context BOOLEAN NOT NULL DEFAULT false,
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now(),
completed_at TIMESTAMPTZ
);
CREATE TABLE notes (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
domain_id UUID NOT NULL REFERENCES domains(id) ON DELETE CASCADE,
title TEXT NOT NULL,
body TEXT,
content_format TEXT NOT NULL DEFAULT 'rich',
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE lists (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
domain_id UUID NOT NULL REFERENCES domains(id) ON DELETE CASCADE,
area_id UUID REFERENCES areas(id) ON DELETE SET NULL,
project_id UUID REFERENCES projects(id) ON DELETE SET NULL,
name TEXT NOT NULL,
list_type TEXT NOT NULL DEFAULT 'checklist',
description TEXT,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE list_items (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
list_id UUID NOT NULL REFERENCES lists(id) ON DELETE CASCADE,
parent_item_id UUID REFERENCES list_items(id) ON DELETE SET NULL,
content TEXT NOT NULL,
completed BOOLEAN NOT NULL DEFAULT false,
completed_at TIMESTAMPTZ,
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE links (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
domain_id UUID NOT NULL REFERENCES domains(id) ON DELETE CASCADE,
area_id UUID REFERENCES areas(id) ON DELETE SET NULL,
project_id UUID REFERENCES projects(id) ON DELETE SET NULL,
label TEXT NOT NULL,
url TEXT NOT NULL,
description TEXT,
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE files (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
filename TEXT NOT NULL,
original_filename TEXT NOT NULL,
storage_path TEXT NOT NULL,
mime_type TEXT,
    size_bytes BIGINT, -- BIGINT so files over 2 GiB are representable
description TEXT,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- SYSTEM LEVEL: Contacts
-- =============================================================================
CREATE TABLE contacts (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
name TEXT NOT NULL,
company TEXT,
role TEXT,
email TEXT,
phone TEXT,
notes TEXT,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- SYSTEM LEVEL: Appointments
-- =============================================================================
CREATE TABLE appointments (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
title TEXT NOT NULL,
description TEXT,
location TEXT,
start_at TIMESTAMPTZ NOT NULL,
end_at TIMESTAMPTZ,
all_day BOOLEAN NOT NULL DEFAULT false,
recurrence TEXT,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- SYSTEM LEVEL: Weblink Directory
-- =============================================================================
CREATE TABLE weblink_folders (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
parent_id UUID REFERENCES weblink_folders(id) ON DELETE CASCADE,
name TEXT NOT NULL,
auto_generated BOOLEAN NOT NULL DEFAULT false,
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
CREATE TABLE weblinks (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
label TEXT NOT NULL,
url TEXT NOT NULL,
description TEXT,
tags TEXT[],
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- SYSTEM LEVEL: Daily Focus
-- =============================================================================
CREATE TABLE daily_focus (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
focus_date DATE NOT NULL,
task_id UUID NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
slot INTEGER,
completed BOOLEAN NOT NULL DEFAULT false,
note TEXT,
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- SYSTEM LEVEL: Capture Queue
-- =============================================================================
CREATE TABLE capture (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
raw_text TEXT NOT NULL,
processed BOOLEAN NOT NULL DEFAULT false,
converted_to_type TEXT,
converted_to_id UUID,
area_id UUID REFERENCES areas(id) ON DELETE SET NULL,
project_id UUID REFERENCES projects(id) ON DELETE SET NULL,
list_id UUID REFERENCES lists(id) ON DELETE SET NULL,
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- SYSTEM LEVEL: Reminders
-- =============================================================================
CREATE TABLE reminders (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
task_id UUID NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
remind_at TIMESTAMPTZ NOT NULL,
delivered BOOLEAN NOT NULL DEFAULT false,
channel TEXT NOT NULL DEFAULT 'web',
created_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
-- =============================================================================
-- JUNCTION TABLES
-- =============================================================================
-- Notes <-> Projects (M2M)
CREATE TABLE note_projects (
note_id UUID NOT NULL REFERENCES notes(id) ON DELETE CASCADE,
project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
is_primary BOOLEAN NOT NULL DEFAULT false,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (note_id, project_id)
);
-- Files <-> any entity (polymorphic M2M)
CREATE TABLE file_mappings (
id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
file_id UUID NOT NULL REFERENCES files(id) ON DELETE CASCADE,
context_type TEXT NOT NULL,
context_id UUID NOT NULL,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
UNIQUE (file_id, context_type, context_id)
);
-- Contacts <-> Tasks
CREATE TABLE contact_tasks (
contact_id UUID NOT NULL REFERENCES contacts(id) ON DELETE CASCADE,
task_id UUID NOT NULL REFERENCES tasks(id) ON DELETE CASCADE,
role TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (contact_id, task_id)
);
-- Contacts <-> Lists
CREATE TABLE contact_lists (
contact_id UUID NOT NULL REFERENCES contacts(id) ON DELETE CASCADE,
list_id UUID NOT NULL REFERENCES lists(id) ON DELETE CASCADE,
role TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (contact_id, list_id)
);
-- Contacts <-> List Items
CREATE TABLE contact_list_items (
contact_id UUID NOT NULL REFERENCES contacts(id) ON DELETE CASCADE,
list_item_id UUID NOT NULL REFERENCES list_items(id) ON DELETE CASCADE,
role TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (contact_id, list_item_id)
);
-- Contacts <-> Projects
CREATE TABLE contact_projects (
contact_id UUID NOT NULL REFERENCES contacts(id) ON DELETE CASCADE,
project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE,
role TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (contact_id, project_id)
);
-- Contacts <-> Appointments
CREATE TABLE contact_appointments (
contact_id UUID NOT NULL REFERENCES contacts(id) ON DELETE CASCADE,
appointment_id UUID NOT NULL REFERENCES appointments(id) ON DELETE CASCADE,
role TEXT,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (contact_id, appointment_id)
);
-- Weblinks <-> Folders (M2M)
CREATE TABLE folder_weblinks (
folder_id UUID NOT NULL REFERENCES weblink_folders(id) ON DELETE CASCADE,
weblink_id UUID NOT NULL REFERENCES weblinks(id) ON DELETE CASCADE,
sort_order INTEGER NOT NULL DEFAULT 0,
created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
PRIMARY KEY (folder_id, weblink_id)
);
-- =============================================================================
-- INDEXES
-- =============================================================================
-- Sort order indexes (used on every list render)
CREATE INDEX idx_domains_sort ON domains(sort_order);
CREATE INDEX idx_areas_sort ON areas(domain_id, sort_order);
CREATE INDEX idx_projects_sort ON projects(domain_id, sort_order);
CREATE INDEX idx_projects_area_sort ON projects(area_id, sort_order);
CREATE INDEX idx_tasks_project_sort ON tasks(project_id, sort_order);
CREATE INDEX idx_tasks_parent_sort ON tasks(parent_id, sort_order);
CREATE INDEX idx_tasks_domain_sort ON tasks(domain_id, sort_order);
CREATE INDEX idx_list_items_sort ON list_items(list_id, sort_order);
CREATE INDEX idx_list_items_parent_sort ON list_items(parent_item_id, sort_order);
CREATE INDEX idx_weblink_folders_sort ON weblink_folders(parent_id, sort_order);
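-- Example (illustrative) of the list-render query this group serves; a
-- composite index such as idx_tasks_project_sort lets Postgres return
-- rows already ordered, with no separate sort step:
--   SELECT *
--   FROM tasks
--   WHERE project_id = '<project-uuid>'
--   ORDER BY sort_order;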
-- Lookup indexes
CREATE INDEX idx_tasks_status ON tasks(status);
CREATE INDEX idx_tasks_due_date ON tasks(due_date);
CREATE INDEX idx_tasks_priority ON tasks(priority);
CREATE INDEX idx_projects_status ON projects(status);
CREATE INDEX idx_daily_focus_date ON daily_focus(focus_date);
CREATE INDEX idx_appointments_start ON appointments(start_at);
CREATE INDEX idx_capture_processed ON capture(processed);
CREATE INDEX idx_file_mappings_context ON file_mappings(context_type, context_id);
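-- Suggested (not in the original schema): if most capture rows end up
-- processed, a partial index keeps the unprocessed-queue lookup small.
-- This assumes capture.processed is boolean and capture has created_at:
--   CREATE INDEX idx_capture_unprocessed ON capture(created_at)
--   WHERE NOT processed;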
#!/bin/bash
# =============================================================================
# Life OS - Step 1: DEV Database Setup
# Applies R1 schema to lifeos_dev, migrates data from lifeos_prod (R0)
# Run on: defiant-01 as root
# =============================================================================
set -e
DB_CONTAINER="lifeos-db"
DB_USER="postgres"
DEV_DB="lifeos_dev"
PROD_DB="lifeos_prod"
SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
section() {
echo ""
echo "=============================================="
echo " $1"
echo "=============================================="
}
# =============================================================================
# 1. Verify prerequisites
# =============================================================================
section "1. Verifying prerequisites"
echo "Checking lifeos-db container..."
if ! docker ps --format '{{.Names}}' | grep -qx "$DB_CONTAINER"; then
echo "ERROR: $DB_CONTAINER is not running"
exit 1
fi
echo "OK: $DB_CONTAINER is running"
echo "Checking lifeos_dev database exists..."
DEV_EXISTS=$(docker exec $DB_CONTAINER psql -U $DB_USER -tc "SELECT 1 FROM pg_database WHERE datname='$DEV_DB'" | tr -d ' ')
if [ "$DEV_EXISTS" != "1" ]; then
echo "ERROR: $DEV_DB database does not exist"
exit 1
fi
echo "OK: $DEV_DB exists"
echo "Checking lifeos_prod database exists..."
PROD_EXISTS=$(docker exec $DB_CONTAINER psql -U $DB_USER -tc "SELECT 1 FROM pg_database WHERE datname='$PROD_DB'" | tr -d ' ')
if [ "$PROD_EXISTS" != "1" ]; then
echo "ERROR: $PROD_DB database does not exist"
exit 1
fi
echo "OK: $PROD_DB exists"
echo "Checking R0 data in lifeos_prod..."
R0_DOMAINS=$(docker exec $DB_CONTAINER psql -U $DB_USER -d $PROD_DB -tc "SELECT count(*) FROM domains" 2>/dev/null | tr -d ' ')
echo "R0 domains count: $R0_DOMAINS"
if [ "$R0_DOMAINS" = "0" ] || [ -z "$R0_DOMAINS" ]; then
echo "WARNING: No domains found in lifeos_prod. Migration will produce empty tables."
fi
# =============================================================================
# 2. Drop existing R1 tables in lifeos_dev (clean slate)
# =============================================================================
section "2. Cleaning lifeos_dev (drop all tables)"
docker exec $DB_CONTAINER psql -U $DB_USER -d $DEV_DB -c "
DROP SCHEMA public CASCADE;
CREATE SCHEMA public;
GRANT ALL ON SCHEMA public TO $DB_USER;
GRANT ALL ON SCHEMA public TO public;
"
echo "OK: lifeos_dev schema reset"
# =============================================================================
# 3. Apply R1 schema
# =============================================================================
section "3. Applying R1 schema to lifeos_dev"
docker exec -i $DB_CONTAINER psql -U $DB_USER -d $DEV_DB < "$SCRIPT_DIR/lifeos_r1_full_schema.sql"
echo "OK: R1 schema applied"
# Verify table count
TABLE_COUNT=$(docker exec $DB_CONTAINER psql -U $DB_USER -d $DEV_DB -tc "
SELECT count(*) FROM information_schema.tables WHERE table_schema = 'public' AND table_type = 'BASE TABLE'
" | tr -d ' ')
echo "Tables created: $TABLE_COUNT"
# =============================================================================
# 4. Run data migration (R0 -> R1)
# =============================================================================
section "4. Migrating data from lifeos_prod (R0) to lifeos_dev (R1)"
docker exec -i $DB_CONTAINER psql -U $DB_USER -d $DEV_DB < "$SCRIPT_DIR/lifeos_r0_to_r1_migration.sql"
echo "OK: Data migration complete"
# =============================================================================
# 5. Final verification
# =============================================================================
section "5. Final verification"
echo "R1 table row counts:"
docker exec $DB_CONTAINER psql -U $DB_USER -d $DEV_DB -c "
SELECT 'domains' as table_name, count(*) FROM domains UNION ALL
SELECT 'areas', count(*) FROM areas UNION ALL
SELECT 'projects', count(*) FROM projects UNION ALL
SELECT 'tasks', count(*) FROM tasks UNION ALL
SELECT 'notes', count(*) FROM notes UNION ALL
SELECT 'links', count(*) FROM links UNION ALL
SELECT 'daily_focus', count(*) FROM daily_focus UNION ALL
SELECT 'capture', count(*) FROM capture UNION ALL
SELECT 'context_types', count(*) FROM context_types UNION ALL
SELECT 'contacts', count(*) FROM contacts UNION ALL
SELECT 'meetings', count(*) FROM meetings UNION ALL
SELECT 'decisions', count(*) FROM decisions UNION ALL
SELECT 'releases', count(*) FROM releases UNION ALL
SELECT 'processes', count(*) FROM processes
ORDER BY table_name;
"
echo ""
echo "=============================================="
echo " DEV database setup complete."
echo " lifeos_dev has R1 schema + migrated R0 data."
echo " lifeos_prod R0 data is UNTOUCHED."
echo "=============================================="
@@ -0,0 +1,118 @@
#!/bin/bash
# =============================================================================
# Life OS - PROD Database Setup
# Backs up lifeos_dev (R1) and restores to lifeos_prod
# Run AFTER DEV is fully tested and confirmed working
# Run on: defiant-01 as root
# =============================================================================
set -e
DB_CONTAINER="lifeos-db"
DB_USER="postgres"
DEV_DB="lifeos_dev"
PROD_DB="lifeos_prod"
BACKUP_DIR="/opt/lifeos/backups"
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/dev_to_prod_${TIMESTAMP}.sql"
section() {
echo ""
echo "=============================================="
echo " $1"
echo "=============================================="
}
# =============================================================================
# 1. Verify prerequisites
# =============================================================================
section "1. Verifying prerequisites"
if ! docker ps --format '{{.Names}}' | grep -qx "$DB_CONTAINER"; then
echo "ERROR: $DB_CONTAINER is not running"
exit 1
fi
echo "OK: $DB_CONTAINER is running"
mkdir -p "$BACKUP_DIR"
# =============================================================================
# 2. Backup current lifeos_prod (safety net)
# =============================================================================
section "2. Backing up current lifeos_prod (R0 safety copy)"
docker exec $DB_CONTAINER pg_dump -U $DB_USER $PROD_DB | gzip > "$BACKUP_DIR/prod_r0_backup_${TIMESTAMP}.sql.gz"
echo "OK: R0 prod backup saved to $BACKUP_DIR/prod_r0_backup_${TIMESTAMP}.sql.gz"
# =============================================================================
# 3. Backup lifeos_dev (source for PROD)
# =============================================================================
section "3. Backing up lifeos_dev (R1 source)"
docker exec $DB_CONTAINER pg_dump -U $DB_USER --clean --if-exists $DEV_DB > "$BACKUP_FILE"
echo "OK: DEV backup saved to $BACKUP_FILE"
# =============================================================================
# 4. Drop and recreate lifeos_prod with R1 data
# =============================================================================
section "4. Replacing lifeos_prod with lifeos_dev contents"
echo "WARNING: This will destroy the current lifeos_prod database."
echo "R0 backup is at: $BACKUP_DIR/prod_r0_backup_${TIMESTAMP}.sql.gz"
read -p "Continue? (yes/no): " CONFIRM
if [ "$CONFIRM" != "yes" ]; then
echo "Aborted."
exit 0
fi
# Drop and recreate prod database
docker exec $DB_CONTAINER psql -U $DB_USER -c "
SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = '$PROD_DB' AND pid <> pg_backend_pid();
"
docker exec $DB_CONTAINER psql -U $DB_USER -c "DROP DATABASE IF EXISTS $PROD_DB;"
docker exec $DB_CONTAINER psql -U $DB_USER -c "CREATE DATABASE $PROD_DB;"
# Restore DEV backup into PROD
docker exec -i $DB_CONTAINER psql -U $DB_USER -d $PROD_DB < "$BACKUP_FILE"
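# Optional follow-up (suggested, not in the original script): refresh
# planner statistics after the bulk restore so early queries against
# PROD get reasonable plans:
#   docker exec $DB_CONTAINER psql -U $DB_USER -d $PROD_DB -c "ANALYZE;"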
echo "OK: lifeos_prod now contains R1 schema + data from DEV"
# =============================================================================
# 5. Verify
# =============================================================================
section "5. Verification"
echo "PROD table row counts:"
docker exec $DB_CONTAINER psql -U $DB_USER -d $PROD_DB -c "
SELECT 'domains' as table_name, count(*) FROM domains UNION ALL
SELECT 'areas', count(*) FROM areas UNION ALL
SELECT 'projects', count(*) FROM projects UNION ALL
SELECT 'tasks', count(*) FROM tasks UNION ALL
SELECT 'notes', count(*) FROM notes UNION ALL
SELECT 'links', count(*) FROM links UNION ALL
SELECT 'daily_focus', count(*) FROM daily_focus UNION ALL
SELECT 'capture', count(*) FROM capture UNION ALL
SELECT 'context_types', count(*) FROM context_types
ORDER BY table_name;
"
# =============================================================================
# 6. Setup automated daily backup cron
# =============================================================================
section "6. Setting up automated daily backups"
CRON_LINE="0 3 * * * docker exec $DB_CONTAINER pg_dump -U $DB_USER $PROD_DB | gzip > $BACKUP_DIR/prod_\$(date +\\%Y\\%m\\%d).sql.gz && find $BACKUP_DIR -name 'prod_*.sql.gz' -mtime +30 -delete"
if crontab -l 2>/dev/null | grep -qF "$PROD_DB"; then
echo "Backup cron already exists, skipping."
else
(crontab -l 2>/dev/null; echo "$CRON_LINE") | crontab -
echo "OK: Daily backup cron installed (3am, 30-day retention)"
fi
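# Restore sketch (illustrative): to roll PROD back to a daily backup,
# pick the dated file and pipe it back in, e.g.:
#   gunzip -c $BACKUP_DIR/prod_YYYYMMDD.sql.gz | \
#     docker exec -i $DB_CONTAINER psql -U $DB_USER -d $PROD_DB
# The daily dumps are plain pg_dump output (no --clean), so restore into
# a freshly recreated database rather than over existing objects.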
echo ""
echo "=============================================="
echo " PROD setup complete."
echo " lifeos_prod now has R1 schema + data."
echo " R0 backup: $BACKUP_DIR/prod_r0_backup_${TIMESTAMP}.sql.gz"
echo " Daily backups configured."
echo "=============================================="