Elite Outsiders ← Tools menu

Crons

05-05-2026—UPDATE
eo-backup-worker CF · maintained by Claude Code
Legend — categories

Cron = Cloudflare Workers cron (eo-backup-worker, scheduled handler). Deterministic, server-side, zero LLM tokens. For mechanical jobs (counting rows, pushing to the Notion API).

Routine = Claude Code scheduled task (~/.claude/scheduled-tasks/<id>/SKILL.md). Spawns a Claude session. Costs tokens. For reasoning jobs (audit, brief, review).

Event = application code fired on a user or system event (page load, form submit, file upload, webhook). Not scheduled, no LLM.

CF Worker code: ~/Projects/eo-backup-worker/.

CRONS — 10 cards · 9 CF Workers + 1 local launchd
CF Workers — 9 deployed

CRON1 — Security backup

CRON MONTHLY 1st @ 03h UTC
03:00 UTC (≈ 04-05h Paris)

Monthly Kit + D1 backup to private R2. Quarterly dry-run on preview DB. Email recap.

Technical details
  • Photocopies the Kit newsletter (all subscribers + tags) and the site DB (D1), stores 4 files in a private Cloudflare R2 drawer. Like a family photo kept safe.
  • 12-month retention. On Jan/Apr/Jul/Oct 1st, runs a restore dry-run on a preview DB to verify backups are readable (quarterly dry-run).
  • Each run sends a Resend recap email (✅ ok / ⚠️ partial / ❌ failed) listing what was backed up.
cron: 0 3 1 * * · R2 target: eo-backups bucket · doc: Notion ch. 13 (Monitoring/alerting) + sub-page Setup backup mensuel Kit + D1 — 2026-04-30
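
The quarterly dry-run gate is a pure date check on top of the monthly schedule. A minimal sketch — the function name and structure are illustrative, not the actual eo-backup-worker code:

```javascript
// Quarterly dry-run gate: the backup runs every 1st at 03:00 UTC, but the
// restore dry-run on the preview DB only fires in Jan/Apr/Jul/Oct.
// Illustrative sketch, not the real worker source.
const DRY_RUN_MONTHS = [0, 3, 6, 9]; // Jan, Apr, Jul, Oct (0-indexed)

function isQuarterlyDryRun(date) {
  return date.getUTCDate() === 1 && DRY_RUN_MONTHS.includes(date.getUTCMonth());
}

// Inside the worker, the scheduled handler would branch on this flag:
// async scheduled(event, env, ctx) {
//   const dryRun = isQuarterlyDryRun(new Date(event.scheduledTime));
//   ...
// }
```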

CRON2 — Cal.com slot watcher

CRON DAILY 07h UTC
07:00 UTC (≈ 08-09h Paris)

Daily check that Cal.com slots are bookable across the 3 brands. Email if any slot is dead.

Technical details
  • Watches the 3 public Cal.com pages (byCaliber / Forja / EO) and counts available slots over the next 4 weeks.
  • If a slot is dead (another booking blocks it), sends a "DEAD" email so you can decide what to do.
  • If you fix the conflict (or weekly chaos moves the slot), you get a "REVIVED" email at next check. No email when all is well — radio silence.
  • Memory in Cloudflare KV (key cal_slot_health:v2) — stores state per (event × date) to detect transitions.
cron: 0 7 * * * · code: eo-backup-worker/src/cal-monitor.js · manual trigger: POST /admin/cal-monitor-now?token=...
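
The DEAD/REVIVED mechanic is a transition detector over the per-(event × date) state kept in KV: only a state change produces an email, steady state stays silent. A sketch with illustrative names, not the cal-monitor.js source:

```javascript
// Transition detection per (event × date), as persisted under the KV key
// cal_slot_health:v2. Only CHANGES produce email — no email when all is well.
function slotTransition(prevHealthy, nowHealthy) {
  if (prevHealthy && !nowHealthy) return "DEAD";     // slot just got blocked
  if (!prevHealthy && nowHealthy) return "REVIVED";  // conflict resolved
  return null;                                       // no change → radio silence
}
```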

CRON3 — Weekly chaos magician

CRON WEEKLY SUN 21h UTC
21:00 UTC (≈ 22-23h Paris)

Weekly chaos magic: re-randomize Cal.com slot times + brand rotation across the 4 weeks ahead.

Technical details
  • Randomizes slot times for MON afternoon / WED morning / FRI evening over the next 4 weeks, and rotates byCaliber / Forja / EO across the 3 daily 30-min slots.
  • Goal: every visitor returning to cal.com a week later sees different times. Creates a "real busy life" feel rather than "well-oiled 3 fixed slots machine".
  • Time bounds in the STARTS constant in code (MON 13:00-16:00 / WED 10:00-11:00 / FRI 18:30-19:00). Edit + redeploy to change a bound. Recap email after each run.
cron: 0 21 * * SUN · code: eo-backup-worker/src/cal-randomize.js · manual trigger: POST /admin/cal-randomize-now?token=...
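
Randomizing a slot start inside a bounded window reduces to picking an aligned offset that still leaves room for the 30-min slot. A sketch under the bounds described above — the function name, 15-min alignment, and injectable rng are assumptions for illustration, not the cal-randomize.js implementation:

```javascript
// Pick a random 30-min slot start inside a window, aligned to 15-min steps.
// Windows mirror the STARTS bounds above (e.g. MON 13:00-16:00 → 780..960
// minutes). The rng parameter is injectable so runs are testable.
function randomSlotStart(windowStartMin, windowEndMin, rng = Math.random) {
  const latest = windowEndMin - 30;                       // slot must fit in window
  const steps = Math.floor((latest - windowStartMin) / 15) + 1;
  return windowStartMin + 15 * Math.floor(rng() * steps); // aligned, in-bounds
}
```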

CRON4 — Weekly report

CRON WEEKLY MON 08h UTC
08:00 UTC (≈ 09-10h Paris) · LIVE since 2026-05-01

Monday morning report: 7-day counters + delta vs prev 7d + top pages, posted to Notion + email.

Technical details
  • Every Monday morning, reads the DB directly (D1 binding, not via dashboard — gated by CF Access), builds a Markdown with 7d counters + delta vs prior 7d + top 10 pages, and creates a Notion sub-page under the operational hub. Then sends a recap email to alexandre.corne.ac@gmail.com (override via NOTIFY_TO_WEEKLY).
  • Resilience: if Notion is down (token/parent_id missing or integration not connected), the email goes out anyway with the markdown embedded — AL never loses the info.
  • Code: eo-backup-worker/src/weekly-snapshot.js + Notion helper src/notion.js. Email subject: EO weekly — YYYY-WXX.
cron: 0 8 * * 1 · code: eo-backup-worker/src/weekly-snapshot.js · manual trigger: POST /admin/weekly-snapshot-now?token=...
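
The delta logic is simple arithmetic guarded against a zero prior week. A sketch of one report row — the markdown shape here is illustrative, the real weekly-snapshot.js format may differ:

```javascript
// One counter row: current 7d value + delta vs the prior 7 days,
// rendered as a markdown table row. Division-by-zero guarded.
function deltaRow(label, current, prior) {
  const diff = current - prior;
  const sign = diff >= 0 ? "+" : "";
  const pct = prior === 0 ? "n/a" : `${sign}${Math.round((diff / prior) * 100)}%`;
  return `| ${label} | ${current} | ${sign}${diff} (${pct}) |`;
}
```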

CRON5 — Monthly mover

CRON MONTHLY 1st @ 02h UTC
02:00 UTC · LIVE since 2026-05-01

Monthly archive of prior-month events to R2 + Notion, then DELETE old rows from D1.

Technical details

⚠ Heads-up — this cron DELETEs rows from the live events D1 table after archiving them to R2. Safe in normal conditions (R2 backup-first, current-month events untouched, R2-fail-HARD prevents data loss), but it's the only cron on this page that touches data destructively. Yellow button = think before you click.

  • On the 1st of each month at 02h UTC (1h BEFORE the monthly backup 0 3 1 * * — by design so the R2 dump captures an already-cleaned table), takes prior-month events, dumps JSON to R2 (eo-backups/events-archive/YYYY-MM/), creates a Notion archive page with aggregations (event_type / top paths / activity per day), then only after that runs DELETE FROM events WHERE created_at < <month_start>.
  • Why: the D1 database grows fast (5,518 rows in 15 pre-launch days). Without cleanup we pay to store data we never look at live — aggregates land in the Notion page + the weekly report.
  • Manual precedent: Analytics Archive — Pre-2026-05-01 (5,518 rows archived 2026-04-30) — format mirrored by renderNotionBlocks().
  • Absolute safety (backup-first): R2 fail = HARD, the DELETE doesn't run. Notion fail = soft, the DELETE runs anyway (the JSON on R2 is the official safety net; Notion is read comfort). The recap email always shows the exact state of the 3 steps (R2 / Notion / DELETE).
  • Pull an archive: wrangler r2 object get eo-backups/events-archive/2026-04/events-archive-2026-05-01.json
cron: 0 2 1 * * · code: eo-backup-worker/src/events-archive.js · manual trigger: POST /admin/events-archive-now?token=... (idempotent on mid-month clicks; current-month events are never touched, R2-fail-HARD prevents data loss if backup fails)
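
The backup-first ordering (R2 fail = HARD abort, Notion fail = soft, DELETE last) can be made explicit by the control flow alone. A sketch simplified to synchronous calls with injected steps — names are illustrative, not the events-archive.js source:

```javascript
// Backup-first: an R2 failure throws BEFORE any DELETE can run (HARD),
// a Notion failure is swallowed (soft — the R2 JSON is the safety net),
// and deleteRows is only reached once the R2 dump succeeded.
function archiveThenDelete({ dumpToR2, pushToNotion, deleteRows }) {
  const state = { r2: false, notion: false, deleted: false };
  dumpToR2();                  // throws → whole run aborts, nothing deleted
  state.r2 = true;
  try {
    pushToNotion();
    state.notion = true;
  } catch {
    // soft fail: Notion is read comfort, not the safety net
  }
  deleteRows();
  state.deleted = true;
  return state;                // recap email would report these 3 flags
}
```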

CRON6 — Page Flow crawler

CRON WEEKLY MON 04h UTC
04:00 UTC (≈ 05-06h Paris) · LIVE since 2026-05-03

Monday crawl of eliteoutsiders.com sitemap, upsert page_inventory + button_inventory in D1.

Technical details
  • Every Monday at 04h UTC, this robot fetches eliteoutsiders.com/sitemap.xml, walks each public page, and extracts every clickable element (<a> / <button> / [role="button"] / [data-track-id]). Result: complete inventory of pages + buttons in D1 (page_inventory + button_inventory), even buttons that have never been clicked.
  • Runs 4h BEFORE the weekly-snapshot (08h UTC) so the weekly report benefits from a freshly updated inventory.
  • Automatic stale-cleanup: any page that disappears from the sitemap is removed from the inventory (along with its buttons). Skipped if the sitemap fetch fails (guard against an accidental wipe).
  • Read by tools.eliteoutsiders.com/page-flow (User Journey / Traffic Flow). Can also be triggered manually via the "Refresh now" button on that page.
cron: 0 4 * * 1 · code: eo-backup-worker/src/page-flow-crawl.js · manual trigger: POST /admin/page-flow-crawl-now?token=...
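
The stale-cleanup guard is the critical part: deletions must be computed only when the sitemap fetch succeeded, never from an empty result of a failed fetch. A sketch with illustrative names, not the page-flow-crawl.js source:

```javascript
// Pages that vanished from the sitemap are dropped from the inventory,
// but ONLY when the sitemap fetch succeeded — a failed fetch (null here)
// must never wipe the inventory.
function stalePagesToDelete(inventoryUrls, sitemapUrls) {
  if (sitemapUrls === null) return []; // sitemap fetch failed → skip cleanup
  const live = new Set(sitemapUrls);
  return inventoryUrls.filter((url) => !live.has(url));
}
```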

CRON7 — URL audit

CRON MONTHLY 1st @ 04h UTC
04:00 UTC · LIVE since 2026-05-03 (separate from the monthly backup that day)

Monthly crawl of every known URL (open + me + admin + tools), email anomaly report.

Technical details
  • On the 1st of each month at 04h UTC (1h after the 0 3 1 * * backup, to avoid loading the network during the R2 upload), this robot crawls every known URL on the site (open + me + admin + tools), compares against the EXPECTED baseline in the code, and sends a Resend recap email to AL.
  • Performs NO auto-fix. Pure audit: "here are the URLs responding differently than expected — your call whether to fix them or update the baseline".
  • To extend the URL list: edit EXPECTED in eo-backup-worker/src/url-audit.js.
cron: 0 4 1 * * · code: eo-backup-worker/src/url-audit.js · manual trigger: POST /admin/url-audit-now?token=...
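
The audit itself is a diff of observed statuses against the baseline. A sketch — the flat `url → status` baseline shape is an assumption for illustration; the real EXPECTED structure lives in src/url-audit.js:

```javascript
// Pure audit, no auto-fix: diff observed HTTP statuses against the
// EXPECTED baseline and return the anomalies that feed the recap email.
function auditAnomalies(expected, observed) {
  return Object.entries(expected)
    .filter(([url, status]) => observed[url] !== status)
    .map(([url, status]) => ({
      url,
      expected: status,
      got: observed[url] ?? "no response",
    }));
}
```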

CRON8 — Analytics map refresh

CRON SEMI-ANNUAL 01/01 + 01/07 05h
05:00 UTC · LIVE since 2026-05-05

Semi-annual snapshot of D1 row counts per analytics table, append to Notion source-of-truth.

Technical details
  • Twice a year (1st of January + 1st of July, 05:00 UTC), this robot queries D1 for row counts + last-write timestamp on every analytics table, and the breakdown of event_type values in the events table (30d + total). It then appends a snapshot block to the Notion source-of-truth page 📊 EO Analytics — Source of truth so AL has a history of how the analytics stack has grown over time.
  • Companion live mirror: tools.eliteoutsiders.com/backdata/analytics-map queries D1 in real time on every page load (no scheduled refresh needed for the live numbers — they're always fresh).
  • To extend the inventory: edit the TABLES list in eo-backup-worker/src/analytics-map-refresh.js AND the mirror in eliteoutsiders-tools/functions/api/admin/analytics-map.js. Keep both in sync.
cron: 0 5 1 1,7 * · code: eo-backup-worker/src/analytics-map-refresh.js · manual trigger: POST /admin/analytics-map-refresh-now?token=...
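
The snapshot is one count + one last-write timestamp per table. A sketch with an injected query runner — the TABLES subset and `runQuery` signature are illustrative stand-ins; in the real analytics-map-refresh.js these would be D1 prepared statements:

```javascript
// Per-table snapshot: row count + last-write timestamp for each analytics
// table. TABLES here is an illustrative subset of the real list.
const TABLES = ["events", "bookings", "toolkit_resources"];

function buildSnapshot(runQuery) {
  return TABLES.map((t) => ({
    table: t,
    rows: runQuery(`SELECT COUNT(*) AS n FROM ${t}`).n,
    last_write: runQuery(`SELECT MAX(created_at) AS ts FROM ${t}`).ts,
  }));
}
```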
Local launchd Mac — 1

CRON9 — WIP safety net

CRON EVERY 30 MIN LOCAL LAUNCHD
Local launchd · LIVE since 2026-05-02

Every 30 min, rsync uncommitted changes from 3 working dirs to wip-autosave GitHub branch.

Technical details
  • Every 30 min, this robot looks in your 3 working dirs (eliteoutsiders-site, eliteoutsiders-tools, eo-backup-worker) and copies anything not yet in Git into a parallel GitHub box called wip-autosave. Like a photocopier passing through your office every 30 min, photographing what's lying on the table without touching anything. If your Mac dies or you close by mistake, max 30 min of lost work.
  • Architecture: 3 mirror clones live in ~/Library/Application Support/eo-wip-autosave/. Each run rsyncs your working tree (excluding node_modules, .wrangler, dist, logs) to its mirror, commits + pushes to the wip-autosave branch (never main). Your main working tree is NEVER touched. No main history pollution.
  • Recover a copy: git fetch origin wip-autosave && git checkout wip-autosave in any clone of the repo, or directly on GitHub. Commits are timestamped: WIP autosave 2026-05-02T01:39:59Z.
cron: */30 * * * * (launchd StartInterval 1800s) · script: eo-backup-worker/scripts/wip-autosave.sh · plist: ~/Library/LaunchAgents/com.eo.wip-autosave.plist · log: ~/Projects/eo-backup-worker/logs/wip-autosave.log

CRON10 — Google Calendar scanner

CRON EVERY 30 MIN GCAL POLL
Every 30 min (planned) · SPEC · added 2026-05-01

Every 30 min, scan Google Calendar for mentorship appointments booked outside Cal.com, upsert bookings in D1.

Technical details
  • The robot reads your Google Calendar, detects "client mentorship" appointments (tagged, or with a participant email matching an EO user), and inserts/updates the row in bookings with is_mentorship=1. So the "Your Bookings" cell turns green in the client's /me without you touching the DB by hand.
  • Why: today bookings rows are created only by the Cal.com webhook. If you book with a client outside Cal.com (manual email, direct GCal event), the client sees nothing in their /me. This robot fills that gap.
  • Prereq: Google Calendar OAuth already set up (table oauth_tokens with service='google_calendar', see migration 0010). Code to write: eo-backup-worker/src/gcal-scan.js running on 30-min cron, calls Google Calendar API, normalizes events, INSERT/UPSERT in D1 bookings with idempotency on external_id.
planned cron: */30 * * * * · status: SPEC · context: added 2026-05-01 at AL's request
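
Since this cron is still SPEC, everything below is illustrative: a sketch of the idempotency requirement, modeled against an in-memory map keyed on external_id. In D1 (SQLite) the same guarantee would come from an `INSERT ... ON CONFLICT(external_id) DO UPDATE` upsert:

```javascript
// Idempotency on external_id: re-scanning the same GCal event must UPDATE
// the existing bookings row, never create a duplicate. In-memory model;
// in D1: INSERT INTO bookings (...) VALUES (...)
//        ON CONFLICT(external_id) DO UPDATE SET ...
function upsertBooking(store, booking) {
  const existing = store.get(booking.external_id);
  store.set(booking.external_id, { ...existing, ...booking, is_mentorship: 1 });
  return store.size; // stays flat on repeat scans of the same event
}
```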
ROUTINES — 1 card · Anthropic Cloud routine

ROUTINE1 — Cron auditor

ROUTINE MONTHLY 1st @ 09h UTC
09:00 UTC (≈ 11h Paris) · LIVE since 2026-05-02

Monthly Anthropic Cloud routine: scan 3 repos for new CRON / ROUTINE / EVENT, open PR if missing.

Technical details
  • Scope 1 — crons / routines / events audit. Once a month, this robot reads code from the 3 repos (eliteoutsiders-site, eliteoutsiders-tools, eo-backup-worker) and looks for any new CRON (CF Worker schedule or local launchd plist), ROUTINE (claude.ai/code/routines or ~/.claude/scheduled-tasks/), or EVENT-triggered code (page load handlers, form submit POSTs, webhook handlers) AL built without listing on /crons. For each new entry, it adds a card in the right outer dropdown (CRONS / ROUTINES / EVENTS) on a separate git branch and opens a Pull Request for AL to validate. Like a teacher checking each new drawing is taped to the wall, leaving a sticky note for any missing.
  • Scope 1bis — TECHNICAL BLOCK update (added 2026-05-05). When a new CRON / ROUTINE / EVENT is detected (Scope 1), the robot ALSO updates the "Technical block — for Claude Code" table at the bottom of /crons: it adds the new cron expression to the Crons row, the new admin endpoint to the Admin endpoints row, the new secret to the Secrets row if applicable, and any new R2/D1/KV binding to the Bindings row. Same branch + PR as Scope 1.
  • Scope 2 — infra audit (added 2026-05-03). Same pass also scans the 3 repos for any new external tool / service / binding (CF Workers, R2 buckets, KV namespaces, D1 tables, Resend, Kit, Notion, Cal.com, GCal, Stripe, Lemon Squeezy, Anthropic, OAuth providers, etc.) and any new wiring between them. Updates /infra-tree if a tool or interconnection is missing — same branch + PR mechanic as Scope 1.
  • If nothing new in either scope: silent commit of an HTML comment <!-- monthly audit YYYY-MM-DD: no new crons / no infra changes --> and exit. No PR for nothing.
  • ON EVERY RUN (even when nothing new), the robot must update BOTH data-last-update spans:
    • public/crons.html, right of the "Crons" H1
    • public/infra-tree.html, right of the "Infra Tree" H1
    With today's date in the format DD-MM-YYYY—UPDATE (e.g. 03-05-2026—UPDATE). Both date changes go into the PR (or the silent commit if no PR).
  • Anthropic Cloud routine (CCR) — ID trig_01UU96BG1d6FXknfdYHDEY21. Runs in an isolated sandbox with HTTPS clones of the 3 repos via the GitHub account alexandrecorne. Never modifies main directly — always branch + PR.
  • Manual run (off-cycle): open the routine page on claude.ai/code/routines and click Run now. The on-demand run executes the same prompt as the monthly cron and stays idempotent.
cron: 0 9 1 * * · management: claude.ai/code/routines · out-of-scope (mentioned in PR): ~/.claude/scheduled-tasks/*/SKILL.md (local only)
Technical reference (for Claude Code)
Worker: eo-backup-worker · Cloudflare account 88ff6259a88cb5fb8e8b9e13e082d7c4
URL: https://eo-backup-worker.alexandre-corne-ac.workers.dev
Repo: ~/Projects/eo-backup-worker/
Crons: 0 2 1 * * (events-archive) · 0 3 1 * * (backup) · 0 4 1 * * (url-audit) · 0 4 * * 1 (page-flow-crawl) · 0 5 1 1,7 * (analytics-map-refresh) · 0 7 * * * (cal-monitor) · 0 8 * * 1 (weekly-snapshot) · 0 21 * * SUN (cal-randomize) — 8 deployed
Secrets: RESEND_API_KEY · KIT_API_KEY · D1_API_TOKEN · D1_PREVIEW_API_TOKEN · BACKUP_HEALTH_TOKEN · CAL_API_KEY · NOTIFY_FROM · NOTIFY_TO · NOTION_API_KEY · NOTION_WEEKLY_PARENT_ID · NOTION_ARCHIVE_PARENT_ID · NOTIFY_TO_WEEKLY (optional)
Bindings: R2 eo-backups (+ prefix events-archive/YYYY-MM/ from CRON5 LIVE) · D1 eliteoutsiders + eliteoutsiders-preview · KV BACKUP_STATE
Admin endpoints (POST + ?token=BACKUP_HEALTH_TOKEN): /admin/run-now · /admin/cal-monitor-now · /admin/cal-randomize-now · /admin/weekly-snapshot-now · /admin/events-archive-now · /admin/page-flow-crawl-now · /admin/url-audit-now · /admin/analytics-map-refresh-now
Healthcheck (GET + ?token=BACKUP_HEALTH_TOKEN): /backup/last-run
State KV (cal-monitor): namespace id 09082b5063a34c238de4b117efaf7d93 · key cal_slot_health:v2
EVENTS — 3 cards · user/system action triggers

EVENT1 — Past weeks cleaner

EVENT ON EVENT /alsboard load
Client-side onload · LIVE since 2026-04-30

When /alsboard loads, auto-purge weeks whose end date has passed.

Technical details
  • When you open /alsboard, it automatically removes from the display the weeks whose end date has passed — the page cleans itself.
trigger: client-side onload · file: eliteoutsiders-tools/public/alsboard.html · code: parseWeekEndDate + purgePastUpcomingWeeks · persists via PUT /api/admin/state/eisenhower_matrix_al

EVENT2 — Post-quiz mailer

EVENT ON EVENT quiz submit
On Trauma Map submit · LIVE since 2026-04-26

When user submits Trauma Map quiz, send results email via Resend (fire-and-forget).

Technical details
  • When someone finishes the Trauma Map quiz and submits their email, this robot immediately sends a Resend email with full results (Past/Current scores + Inner/Outer Journey + 2 doors). Like a postman delivering the letter within the minute. If Resend crashes, the user still got their {ok:true} — the email goes out in background via waitUntil.
  • Code: functions/api/trauma-map/save.js calls buildTraumaMapEmail + sendEmail. Fire-and-forget. No retry yet.
trigger: POST /api/trauma-map/save · file: eliteoutsiders-site/functions/api/trauma-map/save.js
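
The fire-and-forget contract is that the user's `{ok:true}` never waits on (or breaks because of) Resend. A sketch of the `ctx.waitUntil` pattern — the handler name and argument shape are illustrative, not the save.js source:

```javascript
// Fire-and-forget: respond immediately, queue the email in the background
// via ctx.waitUntil, and swallow email errors so a Resend outage can never
// break the user's response. No retry (matching the note above).
function handleSave(ctx, sendEmail) {
  ctx.waitUntil(sendEmail().catch(() => {})); // email failure is silent
  return { ok: true };                        // response independent of email
}
```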

EVENT3 — Coaching library

EVENT ON EVENT toolkit upload
On /me/toolkit upload · LIVE since 2026-05-01

When client uploads to /me/toolkit, store blob in R2 + index in D1 toolkit_resources.

Technical details
  • When a coaching client uploads a document from /me/toolkit (PDF, DOCX, image, audio, video — up to 100 MB), this robot stores the file in the private R2 vault eo-user-toolkit and writes the index card in D1 toolkit_resources (with columns r2_key/mime/bytes added via migration 0018). Like a librarian shelving the book on the client's private shelf and noting the location. Nobody else (besides the client and the admin) can access the file — every read re-verifies identity before serving the blob.
  • Read side: GET /api/me/toolkit/file/<id> streams the R2 blob with ownership check (WHERE user_email = <jwt-email>) and Range header support so videos can scrub. Annotation side: toolkit_annotations table (migration 0019) — 3-color highlights + margin comments, same mechanics as the manifesto reader. Annotations active only on text types (TXT, MD, DOCX rendered to HTML via mammoth.js); PDF/image/audio/video read-only for V1.
  • Pricing link: the coaching cell on /workwithalex turns green and redirects to /me/toolkit as soon as a purchases row with product='coaching' and status='paid' exists for the logged-in email. Single source of truth: /api/me.
trigger: POST /api/me/toolkit/upload · files: eliteoutsiders-site/functions/api/me/toolkit/*.js + public/me/toolkit.html · binding: R2 USER_TOOLKIT → bucket eo-user-toolkit
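
The Range support reduces to translating an HTTP `Range` header into the `{offset, length}` shape that R2's `get()` accepts. A simplified single-range sketch of what the read endpoint needs — not the real file handler, and multi-range requests are out of scope:

```javascript
// Translate "bytes=start-end" / "bytes=start-" / "bytes=-suffix" into the
// {offset, length} object R2 range reads use. Returns null for a missing
// or unparseable header (serve the full body in that case).
function parseRange(header, size) {
  const m = /^bytes=(\d*)-(\d*)$/.exec(header || "");
  if (!m || (m[1] === "" && m[2] === "")) return null;
  const start = m[1] === "" ? size - Number(m[2]) : Number(m[1]); // suffix form
  const end = m[1] !== "" && m[2] !== "" ? Number(m[2]) : size - 1;
  return { offset: start, length: end - start + 1 };
}
```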