This document traces the redesign of the QuranFlow admin backend from start to finish. Each step documents what we started with, what we did, what we produced, and what that fed into next. Every source document is linked so you can read the full artifact.

A theme runs through this work: the amount of structured thinking required for quality output. You have to force the process through phases — define the problem, analyze frameworks, map workflows, evaluate candidates — to get well-reasoned results. The systematic evaluation process is the story.

The process ran in two cycles. Cycle 1 (Dec 2025 – Mar 2026) — steps 1–10 — produced the v1 spec and a specification-accurate mockup. Building the mockup exposed gaps between the spec and the stakeholder's actual working reality, which triggered Cycle 2 (Apr 2026) — steps 11–15 — a structured stakeholder review that produced Framework v2, six architecture decision records, a revised 12-file spec, and the v2 mockup shipping alongside v1. Review and revision were treated as a first-class part of the process, not an afterthought.

Step 1

What Does the System Actually Do?

Thursday, December 18, 2025

Input

Live admin backend + database

Output

810-line capability map documenting 35+ screens, 10 database tables, 3 external integrations

Capability map →

Before you can redesign a system, you have to understand everything it does. Not what people say it does — what it actually does, screen by screen, field by field.

The QuranFlow admin backend has 35+ screens spread across 9 navigation sections, backed by 10 database tables and integrations with Stripe (payments), Vimeo (video hosting), and InfusionSoft (email automation). Every screen was documented: what data it shows, what actions it supports, what business rules it enforces.

The capability map revealed a structural pattern: the backend is organized by data type. There's a "Students" section, a "Semesters" section, a "Program" section. Each maps to a database table. This seems logical — until you try to actually use it.

35+ Screens
10 DB Tables
3 External Integrations
9 Nav Sections
Current admin backend — Dashboard with passive statistics

The current dashboard: passive statistics, no actionable alerts

Current admin backend — Students list organized by data type

Students page: organized by data type, not by workflow

Current admin backend — Semester management

Semesters page: 3 entries, manual setup for each

Step 2

What's Actually Broken?

Wednesday, March 11, 2026

Input

Capability map + 10-question interview script

Output

137-line interview report — 5 pain points, 9 recommendations, the admin's mental model

Interview script → Interview report →

The capability map documented the system. The interview revealed how it fails.

The admin — the primary power user who manages students, semesters, billing, and content daily — walked through her real workflows. Five pain points emerged:

  1. Semester setup is entirely manual. Content must be re-uploaded from scratch every semester. 19+ screens, 50-80 clicks, 3 external tools.
  2. Billing has zero presence in the backend. All payment data — subscriptions, family plans, scholarships, deferments — lives in Google Sheets and Stripe. Support staff can't answer basic billing questions without escalating.
  3. Student deactivation is disconnected from payment cancellation. Two steps, two systems, no link between them.
  4. The dashboard provides no actionable information. It shows passive stats. No alerts, no failed payments, no TA response times.
  5. Navigation is organized by data type, but the admin thinks in domains of responsibility.

That last point was the critical reframe. When asked how she'd organize the backend, her answer wasn't "Students, Lessons, Reports" (the current structure). It was:

Student Management, Semester Management, Content, Scheduling, Teacher Management, Reporting, and Miscellaneous.

She doesn't think in database tables. She thinks in areas of responsibility. The current navigation forces her mental model into a structure that doesn't match how she works — which is why single workflows span 5-9 screens across multiple sections.

5 Pain Points
9 Recommendations
7 Domains Proposed
7 Sheet Workarounds
Step 3

Two Problems, Two Workstreams

Wednesday, March 11, 2026

Input

Capability map + interview findings

Output

532-line redesign plan with two parallel workstreams

Redesign plan →

The interview revealed two distinct problems that need different research methods:

Navigation — Which organizational principle should govern the backend? This requires framework analysis: study different ways to organize admin tools, map real workflows through each option, score them, pick a winner. It's a design problem.

Billing — What billing data exists, where does it live, and what would it take to bring it into the admin backend? This requires technical discovery: audit code repositories, analyze Stripe's API, document every Google Sheet workaround. It's a research problem.

These problems are independent. Navigation research doesn't need billing data, and billing discovery doesn't need navigation decisions. So we split into two parallel workstreams that converge when both are complete.

WS1: Navigation Redesign              WS2: Billing Discovery
├─ 1.1 Frameworks                     ├─ 2.1 Code audit
├─ 1.2 Workflows                      ├─ 2.2 Workaround mapping
├─ 1.3 Pre-filter                     ├─ 2.3 Stripe API analysis
├─ 1.4 Candidates                     ├─ 2.4 Screen requirements
└─ 1.5 Evaluation                     └─ 2.5 Integration brief
        └───────────────┬───────────────┘
                        ↓
          Convergence → Specification → Mockup

This is a theme in AI-assisted product work: you can't just ask the AI to "redesign the admin backend." The problem is too large and too ambiguous. You have to decompose it into structured phases, each with a defined input, methodology, and deliverable. The next five steps are those phases.

Step 4 Workstream 1: Navigation

Six Ways to Organize an Admin Backend

Wednesday, March 11, 2026

Input

Interview report + capability map

Output

1,931 combined lines — 6 organizational frameworks analyzed + 6 real workflows mapped step-by-step

Framework analysis → Workflow maps →

There are fundamentally different ways to organize an admin backend. The current system uses one (organize by data type). But there are at least five others, each with different assumptions about how the admin thinks and works. Before choosing, we analyzed all six:

| Framework | Organizing Principle |
| --- | --- |
| Object-Based | By data type — Students, Lessons, Reports (current state) |
| Domain-Based | By area of responsibility — Student Mgmt, Content, Billing |
| Workflow-Based | By task — "Set Up Semester," "Handle Student Issue" |
| Phase-Based | By semester lifecycle — Setup tools, Active tools, Close tools |
| Entity-Centric | By entity — click a student, see everything about them |
| Hub-and-Spoke | By exceptions — dashboard surfaces alerts, sections handle resolution |

Each framework was analyzed against the admin's interview data: what assumptions does it make about how the admin thinks? What does it do well? Where does it break down?

In parallel, we mapped six real admin workflows step-by-step through the current system. This produced the baseline data every candidate would be measured against:

| Workflow | Screens | Clicks | External Tools |
| --- | --- | --- | --- |
| Semester Setup | 19+ | 50-80+ | 3 |
| Student Support | 5-7 | 15-25 | 3 |
| Semester Close | 8+ | 40-70+ | 3 |
| Daily Monitoring | 4-5 | 12-18 | 3 |
| Scheduling | 4-6 | 12-20 | 2 |
| TA Onboarding | 6-8 | 20-35 | 1 |

The numbers made the problem concrete. Semester setup alone: 19 screens, 50-80 clicks, 3 external tools. Any candidate that doesn't dramatically reduce these numbers isn't worth building.

Step 5 Workstream 1: Navigation

Eliminating Three, Specifying Three

Wednesday, March 11, 2026

Input

Framework analysis + workflow maps

Output

935 combined lines — 3 frameworks eliminated with evidence + 3 hybrid candidates fully specified

Elimination rationale → Candidate structures →

With workflow evidence, three frameworks were eliminated as standalone navigation paradigms. Not because they're bad ideas — because each has a structural flaw that disqualifies it as the primary organizing principle:

  • Object-Based eliminated — it's the current system, and it's the problem. Scored lowest on mental model match (2/5). The admin explicitly rejected it.
  • Workflow-Based eliminated — guided wizards constrain power users. Edge cases (which the admin deals with constantly) break codified flows.
  • Phase-Based eliminated — semester phases overlap in practice. A nav that changes by phase confuses TAs who log in irregularly.

But elimination doesn't mean discarding their best ideas. From each eliminated framework, we borrowed specific features that every surviving candidate must include:

  • From Workflow: a semester setup checklist and a gated close workflow (payment setup blocked until promotions are complete)
  • From Phase: phase-aware dashboard prompts ("semester close approaching — 4 items need attention")

The three survivors became three hybrid candidates, each specified down to the screen level — complete navigation trees, screen placement maps for all 35 existing screens, new screens introduced:

| Candidate | Principle | Screens |
| --- | --- | --- |
| A: Domain-First + Rich Entity Pages | Admin's 7 domains with consolidated entity pages | 48 |
| B: Hub-and-Spoke + Domain Sections | Dashboard as operational nerve center | 45 |
| C: Entity-Centric + Contextual Nav | Start from entity, see everything about it | 50 |
Step 6 Workstream 1: Navigation

Walking Real Workflows Through Each Candidate

Wednesday, March 11, 2026

Input

3 candidates + 6 workflow maps

Output

526-line scored evaluation — Enhanced Candidate A wins 30/35

Full evaluation →

A navigation design can look good on paper and fail in practice. The only way to know: walk real workflows through it. Every workflow from Step 4 was traced through every candidate — exact screen paths, click counts, information co-location, external tool dependencies.

The results:

| Criterion | A | B | C |
| --- | --- | --- | --- |
| Screen-hop reduction | 4 | 3 | 5 |
| Mental model match | 5 | 4 | 3 |
| Billing accommodation | 5 | 3 | 5 |
| Scalability | 4 | 5 | 3 |
| Implementation effort | 3 | 4 | 2 |
| Weighted Total (/35) | 30 | 29 | 26 |
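The weighted total is a plain weighted sum over the raw 1–5 criterion scores. As a hedged sketch only: the weights below are made up for illustration — the actual weights used in the evaluation live in the full evaluation document, not here.

```typescript
// Illustrative sketch of a weighted candidate score like the "/35" totals above.
// The weight values are ASSUMPTIONS for demonstration, not the real evaluation's weights.
type Scores = Record<string, number>; // criterion → raw score (1–5)

function weightedTotal(scores: Scores, weights: Record<string, number>): number {
  return Object.entries(scores).reduce(
    (sum, [criterion, raw]) => sum + raw * (weights[criterion] ?? 1),
    0,
  );
}

// Hypothetical weights summing to 7, so a perfect candidate scores 5 × 7 = 35:
const weights = { hops: 1, mentalModel: 2, billing: 1, scalability: 1, effort: 2 };
const candidateA = { hops: 4, mentalModel: 5, billing: 5, scalability: 4, effort: 3 };
// weightedTotal(candidateA, weights) → 4 + 10 + 5 + 4 + 6 = 29 (out of 35)
```

The useful property of the weighted form: it lets "mental model match" count for more than "implementation effort" without hiding either number.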

Enhanced Candidate A won. Domain-first navigation matching how the admin actually thinks, augmented with the strongest ideas from the other two:

  • From B: an operational dashboard with configurable alert tiles and phase-aware prompts
  • From C: global search resolving to entity pages from any screen

The projected improvements across all 6 workflows:

48-65% Fewer Screens
50-70% Fewer Clicks
93% External Tools Eliminated
30/35 Evaluation Score
Step 7 Workstream 2: Billing

What Billing Data Exists and Where?

Wednesday, March 11, 2026

Input

5 code repositories + Stripe account + Google Sheets + interview data

Output

1,489-line discovery document — code audit, workaround mapping, Stripe API analysis, billing requirements

Billing discovery →

While Workstream 1 solved how to organize the backend, Workstream 2 tackled the system's biggest missing capability: billing has zero presence in the admin interface.

The admin manages payments weekly — but entirely outside the backend. Five code repositories were audited to understand what billing infrastructure exists. The checkout pipeline was traced end-to-end: frontend checkout → webhook processing → Stripe subscription creation → and then... nothing. Subscription data flows into Stripe and stays there. The admin backend never reads it.

Every Google Sheet workaround was documented:

| What the Sheet Tracks | Why It Exists |
| --- | --- |
| Family payment plans | Stripe doesn't natively support multi-student family billing |
| Scholarship students | No system for tracking 50%/25% discount tiers |
| Deferment students | No UI for Stripe's pause_collection feature |
| Failed payment follow-up | No alert when a payment fails |
| Semester-end payment setup | Bulk subscription creation done manually in Stripe |

Stripe's API was analyzed for what data could surface in the admin backend. The finding: Stripe has everything — subscription status, payment history, failure reasons, coupon redemptions. The data exists. It just isn't visible.
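The deferment workaround is a good example of how thin the missing layer is. As a sketch under stated assumptions: `pause_collection` is Stripe's real subscription parameter for pausing payment collection, but the `Deferment` shape and function name below are hypothetical — a real implementation would feed the result into something like `stripe.subscriptions.update(id, params)`.

```typescript
// Hedged sketch: mapping an admin-side deferment onto Stripe's pause_collection.
// `Deferment` and `toPauseParams` are hypothetical names; pause_collection and
// its behavior/resumes_at fields are Stripe's actual subscription API surface.
interface Deferment {
  subscriptionId: string;
  resumeAt?: Date; // undefined → paused until manually resumed
}

function toPauseParams(d: Deferment): {
  pause_collection: { behavior: "void"; resumes_at?: number };
} {
  const pause: { behavior: "void"; resumes_at?: number } = {
    behavior: "void", // invoices generated during the pause are voided, not collected
  };
  if (d.resumeAt) {
    pause.resumes_at = Math.floor(d.resumeAt.getTime() / 1000); // Stripe uses Unix seconds
  }
  return { pause_collection: pause };
}

// Resuming is the inverse: set pause_collection back to null on the subscription.
const resumeParams = { pause_collection: null };
```

One Google Sheet column ("deferred until") collapses into one API parameter the backend never exposed.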

Six new billing screens were defined: Payment Overview, Payment Plans (enhanced), Coupons (enhanced), Family Plans, Scholarships & Deferments, plus billing tabs on student and semester entity pages.

5 Repos Audited
6 Sheet Workarounds
6 New Screens Defined
Step 8

The Complete Specification

Wednesday, March 11, 2026

Input

Enhanced Candidate A (WS1) + Billing Integration Brief (WS2)

Output

11 specification files, ~4,300 lines, 48 screens across 9 domains

Spec index →

The two workstreams converged. Enhanced Candidate A provided the 8+1 domain navigation structure. Billing discovery provided the requirements for six new billing screens plus entity-level payment tabs. The result: a complete, mockup-ready specification.

Dashboard (alerts, phase prompts, quick actions)
├── Student Management   → Students (6-tab Detail), Submissions, Promoted, Failed Signups
├── Semester Management  → Semesters (4-tab Hub), Welcome Package, Email, Tags, Operations
├── Content              → Video Lessons, Resources, Recordings, Tutorials, MCQ, Quizzes
├── Scheduling           → Calendar View, Live Sessions, Appointments, TA Schedules, Holidays
├── Teacher Management   → TAs (4-tab Detail), Assignment Criteria, Reports, Groups
├── Billing & Payments   → Overview, Payments, Plans, Coupons, Family Plans, Scholarships
├── Reporting            → Student Report, Referrals, Reports, Composition, Logs
└── Admin & System       → Admins, Settings, Notifications, Support, Issue Queue

Each of the 11 spec files is detailed enough to build from without asking clarifying questions: every column, filter, action, state, role-based visibility rule, and data source for every screen.

The three most specified areas — each required the depth of a mini-application:

  • Student Detail Page: 6 tabs consolidating what currently requires 5-7 screens. The profile the admin asked for in the interview — "level, TA, gender, semester history, submission count, and payment status on one screen."
  • Semester Hub: 4 tabs including a gated Close Workflow that blocks payment setup until promotions are complete — directly solving the admin's most dangerous pain point.
  • Billing & Payments: 6 screens bringing Stripe data into the admin backend for the first time, plus entity-level tabs so payment status appears in context.
48 Screens
9 Domains
11 Spec Files
~4,300 Lines
Step 9

The Admin Mockup

Wednesday–Thursday, March 11–12, 2026

Input

Admin specification (11 files) + 13-phase implementation plan

Output

Interactive React + TypeScript mockup — 59 page components, 35 routes, 32 mock data files

Implementation plan → View the mockup →

The specification was translated into a working interactive prototype.

Built in 13 phases, each producing committed, navigable screens. Complex screens first (Student Detail, Semester Hub Close Workflow, Billing) — they set patterns that simpler screens follow. Six independent content screens were built in parallel. Stripe's admin UI served as the design reference: data-dense tables, inline status badges, metric cards with trends.

After the initial build, every screen was audited line-by-line against its specification. 52 deviations found — missing columns, incorrect filters, wrong badge colors, absent empty states. 44 resolved across four batches. This is the work that separates a demo from a specification-accurate prototype: ensuring that what you see matches what the spec says, so that future engineering decisions can reference the mockup with confidence.

The three screens that demonstrate the redesign's impact:

  • Dashboard: Seven alert tiles (failed payments, pending submissions, TA response times, appointment utilization, semester phase status, app issues, student snapshot) replace what was a page of passive statistics. The admin's daily monitoring workflow drops from 4-5 screens with 3 external tools to one screen with zero external tools.
  • Student Detail Page: One entity page with six tabs replaces the current 5-7 screen workflow for handling a student support issue. Profile, submissions, payments (from Stripe), appointments, semester history, and actions — all on one page. The consolidated student profile the admin asked for in the interview, built exactly as described.
  • Semester Hub Close Workflow: A five-step gated sequence that enforces: EOC review complete → promotions finalized → payment setup begins → deactivation processed → semester archived. Payment setup is literally blocked until promotions are marked complete — directly preventing the cascading corrections that were the admin's most painful operational problem.
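The gating rule behind the Close Workflow is one line of logic: a step may begin only when every step before it is complete. A minimal sketch — the step names come from the five-step sequence above; the function and type names are illustrative, not the mockup's actual code:

```typescript
// Hedged sketch of the v1 gated Close Workflow. The invariant: no step can
// start until all earlier steps are complete, which is what blocks payment
// setup until promotions are finalized.
const closeSteps = [
  "EOC review",
  "Promotions finalized",
  "Payment setup",
  "Deactivation",
  "Archive",
] as const;

type CloseStep = (typeof closeSteps)[number];

function canStart(step: CloseStep, completed: Set<CloseStep>): boolean {
  const idx = closeSteps.indexOf(step);
  // Every step strictly before this one must already be complete.
  return closeSteps.slice(0, idx).every((prev) => completed.has(prev));
}

// Payment setup stays blocked until promotions are marked complete:
canStart("Payment setup", new Set<CloseStep>(["EOC review"])); // → false
canStart("Payment setup", new Set<CloseStep>(["EOC review", "Promotions finalized"])); // → true
```

The gate is what makes the workflow safe for an admin working under interruption: the UI simply refuses to offer the dangerous step early.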
59 Components
35 Routes
32 Data Files
52 Deviations Found
Redesigned Dashboard — alert tiles, quick actions, semester context

Redesigned dashboard: alert tiles replace passive statistics

Redesigned Semester Management — setup progress, status tracking

Semester Management: setup progress bars and status tracking

Redesigned Billing — Payment Overview with failed payments and subscriptions

Billing & Payments: data from Stripe, visible for the first time

Explore the Admin Mockup Navigate through Dashboard, Student Management, Billing & Payments, and Semester Management.
See domain-based navigation, entity pages, and the Close Workflow in action.
Step 10

Making the Mock Data Believable

Saturday, March 29, 2026

Input

Staff walkthrough feedback: contradictory student data causing confusion

Output

94 submission records, 20 coherent student stories, 9 cross-reference rules

Full data audit →

The screens matched the specification, but the data behind them didn't hold up. During a staff walkthrough, an "Active" student had a "Cancelled" subscription. A full-scholarship student showed a $199 payment. A group claimed 5 members but listed 4. The mock data was undermining the demo.

An audit cross-referenced all 33 data files and found five categories of issues:

  • 4 students with contradictory states — Active status but Cancelled subscriptions, or active deferments for withdrawn students.
  • 54 missing submission records — 14 of 20 students had fewer records than their profile claimed. A student showing "6 submissions" would display 1 in the Submissions tab.
  • 4 student groups with wrong member counts
  • 4 scholarship-billing mismatches — full scholarships showing full-price payments.
  • Dashboard alert tiles that didn't match the underlying data

Each of the 20 mock students was given a coherent story across all screens — a student with failed payments is also behind on submissions, has a filed issue about access, and shows declined charges in billing. Nine cross-reference rules now govern the data so future changes maintain consistency.
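Two of those cross-reference rules can be sketched as executable checks. The record shapes below are hypothetical stand-ins — the mockup's real types live in its own data files — but the rules themselves are the ones described above: no Active student with a Cancelled subscription, and profile submission counts must match the submission records.

```typescript
// Hedged sketch of two mock-data consistency rules. Shapes are illustrative.
interface Student {
  id: string;
  status: "Active" | "Withdrawn";
  subscription: "Active" | "Cancelled" | "Paused";
  submissionCount: number; // what the student's profile claims
}
interface Submission {
  studentId: string;
}

function consistencyErrors(students: Student[], submissions: Submission[]): string[] {
  const errors: string[] = [];
  for (const s of students) {
    // Rule: an Active student must not have a Cancelled subscription.
    if (s.status === "Active" && s.subscription === "Cancelled") {
      errors.push(`${s.id}: Active status with Cancelled subscription`);
    }
    // Rule: the profile's claimed count must match the actual submission records.
    const actual = submissions.filter((x) => x.studentId === s.id).length;
    if (actual !== s.submissionCount) {
      errors.push(`${s.id}: profile claims ${s.submissionCount} submissions, found ${actual}`);
    }
  }
  return errors;
}
```

Running checks like these on every data change is what keeps 20 student stories coherent as the files evolve.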

Cycle 2 — v2 Review & Mockup

Review Is Part of the Process

The v1 mockup was specification-accurate. It was also the first time the stakeholder could see her own workflows rendered as concrete screens. The act of showing her the v1 build surfaced gaps no amount of pre-build review could have caught: proposed interactions that didn't match how she actually operates, terminology she'd never use, whole workflows (Communication, Repeat Semester) that were missing from the v1 spec entirely.

What follows is the second cycle — a structured stakeholder review that ran over nine calendar days in April 2026. Two live meetings, one email exchange, six architecture decision records, and a revised 12-file spec. The v2 mockup was built in 36 hours against the revised spec, alongside the frozen v1 mockup for reference. This cycle is the evidence for a claim worth making to colleagues and contractors: in AI-assisted product work, building the thing is often the best review of the spec. Plan for two cycles from the start.

Step 11 v2 Review

Gap Analysis: What the v1 Build Exposed

Tuesday, April 14, 2026

Input

Complete v1 mockup (35 screens) + stakeholder's own backend-ramblings document + spec-vs-reality observations

Output

9 per-domain gap analyses + 5-category risk framework + pre-meeting categorization of every proposed change

Pre-meeting framework → Stakeholder ramblings →

The stakeholder delivered a long document in her own words — what she needs on each screen, what she ignores, where the current system breaks down. Alongside that, walking her through the v1 mockup surfaced a second signal: features the v1 spec described accurately but built in a way that didn't fit her mental model.

Every proposed change was catalogued into one of five categories keyed to risk, so the stakeholder review could spend its time on the high-risk ones:

| Category | Risk | Example |
| --- | --- | --- |
| A — Refinements | Low | Add gender column to Students list; add Admin Notes tab |
| B — Additive Features | Low–medium | 48-hour late-response flag; Onboarding Support tab |
| C — Replacements | HIGH | Replace gated Close Workflow with flat checklist; rewrite Assignment Criteria |
| D — Structural Additions | HIGH | Add Communication as a 9th top-level domain; "Send as TA" identity switching |
| E — Deferred | Blocked | Reporting (needs definition); Issue Queue (state machine undefined) |

The framework was deliberately conservative. Nine of the biggest proposed changes landed in Categories C and D — changes that would either replace load-bearing v1 mechanisms or expand the information architecture. These couldn't be shipped on stakeholder vibes. They needed a meeting.

9 Gap Analyses
5 Risk Categories
9 High-Risk Items
35 v1 Screens Reviewed
Step 12 v2 Review

The Stakeholder Review — 90 Minutes With Lejla

Wednesday, April 15, 2026

Input

Pre-meeting framework + 9 gap analyses + v1 spec (11 files) + live v1 mockup walkthrough

Output

11 decisions, 5 Architecture Decision Records, Framework v2 decision catalog, and a revised 12-file spec produced within 24 hours

Meeting transcript → Framework v2 →

A 90-minute session with Lejla, QuranFlow's Project Manager. Every Category C and D item on the table. The goal wasn't to brainstorm — the framework had already done that — it was to resolve each open item with a decision she could stand behind.

Every Category C and D item was resolved. The 11 decisions:

| # | Decision | Outcome | Record |
| --- | --- | --- | --- |
| 1 | Master + semester content copy | Clone-from-Previous + auto-clone on semester create | Cat A |
| 2 | Registration Tracker as new screen | Dissolved — filter preset on Enrollment tab instead | ADR-005 |
| 3 | Close Workflow (gated 5-step) | Replaced with flat End Checklist + backend automation | ADR-001 |
| 4 | Assignment Criteria rules engine | Replaced with hybrid per-TA matrix + small named rules | ADR-002 |
| 5 | Communication as 9th domain | Accepted — 6 screens, absorbs Email Mgmt + Notifications | ADR-003 |
| 6 | "Send as TA" identity switching | Rejected — replaced with admin pre-drafts, TA sends from own account | ADR-004 |
| 7 | Tags | Preserve v1; bulk email will use tags for filtering | No change |
| 8 | Payment Plan grouping | Grouped presentation (Standard / Family / Scholarship / Discounts) | Cat A |
| 9 | Unmentioned v1 subsections | Preserve from v1 with explicit "NOT ADDRESSED" markers | New policy |
| 10 | Auto-enrollment / auto-billing | Parked — revisit post-app-redesign | Deferred |
| 11 | Repeat Semester workflow | Deferred — meeting ended before discussion | Deferred |

The three load-bearing decisions explain themselves:

  • Close Workflow → End Checklist (ADR-001). Lejla: "I'm not too convinced of all these steps that are being proposed." The gated 5-step workflow was designed to prevent specific operational errors (W4: payment setup before promotions finalized). She wanted a flat checklist instead, with backend automation handling the ordering. The decision has a real trade-off — without the gates and without automation, the flat checklist is strictly worse than v1 — and ADR-001 documents that trade-off honestly.
  • Assignment Criteria → Hybrid Matrix (ADR-002). The v1 generic rules engine scored 30/35 in the original evaluation for flexibility. But nobody configures assignments as abstract rules — Lejla configures per-TA: "which TAs handle which levels, max capacity for each TA, who takes new vs returning students." The matrix makes the actual configuration surface direct.
  • Communication as 9th Domain (ADR-003). v1 had no blast email, no announcement board, no 1:1 messaging, and no communication audit trail. It had Email Management buried under Semester Mgmt and Notifications buried under Admin & System. Pulling all outbound communication into its own domain — with 6 screens, 4 net-new — was the single biggest IA change in v2.

Within 24 hours of the meeting, Framework v2 captured all 11 decisions, five ADRs documented the high-risk ones end-to-end (context, decision, rationale, first- and second-order consequences, what's lost, what's gained, open questions), and the full 12-file v2 admin spec was regenerated from the revised ground truth.

90 min Meeting
11 Decisions Made
5 ADRs Written
12 Spec Files Revised
Step 13 v2 Review

Lejla's Reply — End Checklist Defined in Her Own Words

Tuesday, April 21, 2026

Input

Stakeholder Summary sent to Lejla with 5 ADR open questions + 13 "defaults I'm making if you don't flag them"

Output

Full 5-step End Checklist definition (canonical) + 11 of 13 defaults confirmed + 2 flipped; impact map of every spec edit required

Summary sent to Lejla → Lejla's reply →

The Apr 15 meeting left ADR-001's core question — "what should the End Checklist actually contain?" — explicitly unresolved. Lejla said: needs to be discussed. The other ADRs had smaller open questions: 13 in total, captured as "defaults I'm making; push back if any feel wrong."

Rather than schedule another meeting immediately, the open questions and proposed defaults were packaged into a Stakeholder Summary and sent to Lejla as a single HTML document. Silence would be acceptance. She replied in writing six days later.

She defined the End Checklist verbatim:

Step 1 — All Submissions Reviewed. Auto-completes when there are no more pending submissions.

Step 2 — Pass/Fail. Every L1–4 student marked pass or fail by their TA or admin. Mastery students excluded — auto-passed at Step 5.

Step 3 — Send Pass/Fail Emails. Pass email, fail-with-opt-in-to-repeat email, or mastery-elective email — whichever applies per student.

Step 4 — Set Up Automatic Payments. Passing L1–4 recitation students get auto-created subscriptions. Mastery students self-enroll.

Step 5 — Semester Close. Fully automatic at midnight EST on the semester end date.

Eleven of the 13 defaults were confirmed as proposed. Two were flipped:

  • Announcement Board posters: proposed "admin-only" → actual "admins AND TAs can post." Rationale: TAs manage their own posts.
  • Level 0 promote target: spec said Level 1 → actual Level 2. Lejla: "Level 0 is a placeholder for students whose level hasn't been assessed yet, and our default has always been to place them at Level 2. The spec saying Level 1 is an error — please update accordingly." Spec error caught before build.

Her 5-step definition did more than close out ADR-001 — it surfaced new design work that wasn't in the v2 spec yet. Where does the TA actually assign pass/fail? Where does the admin see which students still need payment setup? Which system-seeded email templates power Step 3? These became four design-decision calls made on the spot (documented in the follow-up email as "flag if wrong; silence = acceptance") so the build could start without waiting on a second round-trip.

5 steps End Checklist
11/13 Defaults Confirmed
2 Defaults Flipped
3 Topics Still Deferred
Step 14 v2 Review

Working Session — Three Deferred Topics Resolved

Wednesday, April 22, 2026

Input

Three topics still open: Reporting scope, Issue Queue state machine, Repeat Semester admin-side workflow

Output

All three resolved. 4 new reports defined, ADR-006 for the Issue Queue state machine, one policy inversion that amended ADR-002

Session transcript → Apr 22 answers →

A 60-minute working session to close out the three topics that had been deferred through both earlier rounds. Three surprises came out of it:

Surprise 1 — v1 had a wrong policy baked in. Both v1 and the draft v2 spec said: "repeat students should be auto-assigned to their previous teacher." Lejla's actual policy, stated plainly in the session: "for repeat students, we will assign them to a different teacher, if at all possible." The rule is rotation, not continuity — so the student sees a fresh perspective on the level they're redoing. ADR-002 was amended the same day; the named rule renamed from "Repeat-Student Auto-Assign" to "Repeat-Student Rotation."

Surprise 2 — "Delete" was the wrong word. The v1 Issue Queue had a Delete action whose semantics were never specified. Lejla's framing: Resolve means "handled, kept in history"; the missing verb is "not a real issue, still kept in history" — Reject. No issue is hard-deleted. Archive is a view over Resolved + Rejected, not a fifth state. Delegation (to CS, IT, or Teaching Staff) is an In Progress sub-state — informational only, admin retains ownership. ADR-006 captures the full state machine.

Surprise 3 — Reporting needed less than feared. The Apr 15 framing treated Reporting as a Category E black box ("needs definition"). The session resolved it with concrete asks: two primary reports (Active Semester Enrollment, Revenue Breakdown), two nice-to-have stubs (Installment→One-Time Conversion, Notification/Email Impact), CSV export only, live dashboards rather than scheduled emails. Enough to build against without waiting for a full reporting working session later.

| Topic | Resolution | Artifact |
| --- | --- | --- |
| Reporting scope | 2 primary + 2 stub reports; CSV export only; live dashboards | v2 spec §9 revised |
| Issue Queue state machine | Open / In Progress (±Delegated) / Resolved / Rejected. Reopen via Open. Archive is a view. | ADR-006 (new) |
| Repeat Semester — admin side | Enrollment Type column (Continuing / New / Repeat / Year 2); self-service via email link; TA rotation policy inverted | ADR-002 amendment |
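The ADR-006 state machine is small enough to sketch in full. The states, the no-hard-delete rule, Archive-as-a-view, and delegation-as-sub-state come from the session described above; the exact set of allowed transitions and all the names below are illustrative assumptions, not the ADR's literal text:

```typescript
// Hedged sketch of the Issue Queue state machine (ADR-006): four real states,
// no hard delete, Archive as a view over Resolved + Rejected, delegation as an
// informational sub-state of In Progress. Transition details are assumptions.
type IssueState = "Open" | "InProgress" | "Resolved" | "Rejected";
type Delegate = "CS" | "IT" | "TeachingStaff";

interface Issue {
  state: IssueState;
  delegatedTo?: Delegate; // informational only; admin retains ownership
}

const transitions: Record<IssueState, IssueState[]> = {
  Open: ["InProgress", "Resolved", "Rejected"],
  InProgress: ["Resolved", "Rejected"],
  Resolved: ["Open"], // reopen
  Rejected: ["Open"], // reopen
};

function transition(issue: Issue, to: IssueState): Issue {
  if (!transitions[issue.state].includes(to)) {
    throw new Error(`illegal transition: ${issue.state} -> ${to}`);
  }
  // Delegation only applies while In Progress; it does not survive leaving it.
  return { state: to, delegatedTo: to === "InProgress" ? issue.delegatedTo : undefined };
}

// Archive is a view, not a fifth state:
const isArchived = (i: Issue) => i.state === "Resolved" || i.state === "Rejected";
```

Note what is absent: there is no transition to a deleted state anywhere in the table, which is the whole point of replacing Delete with Reject.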

The lesson worth taking forward: no amount of specification quality catches every stakeholder-side misread. The TA-rotation policy had been frozen into v1 and carried into the v2 spec draft. It was only the working-session conversation that surfaced the inversion. The cost of catching it at the spec stage was a one-line amendment and a mock-data rewrite in one file. The cost of catching it at launch would have been an operational regression affecting every semester close.

60 min Working Session
3 Topics Resolved
1 Policy Inversion Caught
6 ADRs Total
Step 15 v2 Build

The v2 Mockup — Built Alongside v1 in 36 Hours

Wednesday–Thursday, April 22–23, 2026

Input

v2 admin spec (12 files, ~5,300 lines) + 6 ADRs + two stakeholder-answer docs + frozen v1 mockup as reference

Output

43 v2 screens at /v2/*, 9 domains + Dashboard, the 7-tab Student Detail, 5-tab Semester Hub (End Checklist), hybrid Assignment Matrix, and the entire new Communication domain

View the v2 mockup →

v2 is not a replacement for v1 — it's a parallel build. src/pages/ (v1) is frozen; src/pages/v2/ (v2) runs alongside it in the same Vite bundle, mounted under a /v2/* route tree. Both are deployed together at the same URL. Contractors and stakeholders can load / for the v1 reference and /v2/ for the revised design and compare screen-for-screen.
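The parallel mount reduces to prefixing one route tree under another. A sketch of the idea — the paths and page names below are illustrative, not the mockup's actual route table, and the real project wires this through its own router config:

```typescript
// Hedged sketch of the v1/v2 parallel mount: the same route shape served from
// "/" (v1, frozen) and "/v2/*" (v2) in one bundle. Paths are illustrative.
interface RouteDef {
  path: string;
  page: string; // component name
}

function mountUnder(prefix: string, routes: RouteDef[]): RouteDef[] {
  return routes.map((r) => ({ ...r, path: `${prefix}${r.path}` }));
}

const v1Routes: RouteDef[] = [
  { path: "/students", page: "StudentsList" },
  { path: "/semesters", page: "SemesterHub" },
];

// v2 pages live beside v1 in the same bundle, under their own route tree:
const v2Routes = mountUnder("/v2", [
  { path: "/students", page: "V2StudentsList" },
  { path: "/communication/blast-emails", page: "BlastEmails" }, // new v2 domain
]);
// v2Routes[0].path → "/v2/students"
```

Because the prefix is the only difference, a reviewer can hold the same screen open at both URLs and diff the designs by eye.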

The build ran as 8 named phases in a single long push, with deliberate topology at each stage:

Phase | Work | Topology
A — Foundation | Types, mock data, nav config, 4 new Reporting routes for Apr 22 additions | Sequential lead
B — Apr 22 Hits | Enrollment Type column; Student Detail enrollment card; Issue Queue state machine (Reject action, Delegation, Archive view, Reopen) | Sequential lead
C — Wave 1 | Student Mgmt + Content + Semester Mgmt stubs + Comm Logs | 4 parallel worktree teammates
C — Wave 2 | Teacher Mgmt (list + 4-tab Detail) + Billing (End Checklist Payment Setup Queue, Family Plans, Scholarships) + Reporting (2 new real, 2 stubs) | 3 parallel worktree teammates
D — Scheduling | 6 screens including new One-to-One Appointment Usability dashboard | Single teammate
E — Communication + Assignment Criteria | Blast Emails, Announcement Board, Private Messages, Push Notifications; hybrid Assignment Matrix with rotation rule | Teammate + lead (rate-limit fallback)
F — Simplify Sweep | Reuse / quality / efficiency review; shared DateRangeFilter extracted; hotspot memoization | 3 parallel review agents
G — Integration Pass | Full cross-domain deep-link audit; 8 bugs fixed, including V2EntityTabShell hash honoring (restored 10+ deep links) | Sequential lead
H — Polish | Cmd+K global search across students / teachers / semesters + nav commands; shadcn Empty states; raw palette colors swapped for semantic tokens | Sequential lead
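The Phase H Cmd+K search can be sketched as a flat command index filtered per keystroke. A hypothetical shape only: the Item type, the sample entries, and the scoring rule are illustrative, not the mockup's actual implementation.

```typescript
// Illustrative Cmd+K command index: students, teachers, semesters,
// and nav commands flattened into one searchable list.
type Item = {
  kind: "student" | "teacher" | "semester" | "nav";
  label: string;
  href: string;
};

const index: Item[] = [
  { kind: "student", label: "Amina Yusuf", href: "/v2/students/1" },
  { kind: "teacher", label: "Ustadh Karim", href: "/v2/teachers/7" },
  { kind: "semester", label: "Spring 2026", href: "/v2/semesters/spring-2026" },
  { kind: "nav", label: "Go to Billing", href: "/v2/billing" },
];

// Case-insensitive substring match; exact-prefix hits rank first.
function search(query: string, items: Item[] = index): Item[] {
  const q = query.trim().toLowerCase();
  if (!q) return [];
  return items
    .filter((i) => i.label.toLowerCase().includes(q))
    .sort(
      (a, b) =>
        Number(b.label.toLowerCase().startsWith(q)) -
        Number(a.label.toLowerCase().startsWith(q)),
    );
}
```

One index across entity types is what makes a single palette work: the UI only needs to render label and kind, then navigate to href.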

Three build patterns earned their keep under real time pressure:

  • Spec-extraction as first deliverable. Every parallel teammate's first commit was a spec checklist: every column, filter, action, and state extracted from their assigned spec section. Left to loose prompts, teammates skip this step. When the Comm teammate hit a rate limit mid-stride, its spec checklist had already landed, and the bleed-through edits were coherent enough for the lead to salvage. The pattern turned out to double as a rate-limit contingency plan.
  • Pre-flight rebase check. Worktree teammates started from a pinned base; before merge, the lead rebased onto main. This caught stale bases on three separate occasions, across Wave 1, Wave 2, and Phase D — load-bearing, not ceremony.
  • Post-commit plan sync. A background agent updated IMPLEMENTATION-PLAN.md after every commit. The durable plan document stayed in sync with reality rather than drifting into a historical artifact halfway through.
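The pre-flight check amounts to a few lines of git. The throwaway-repo demo below is illustrative (branch names, file names, and commit messages are invented, and the project's actual merge script may differ): it pins a teammate branch, advances main past it, then detects and repairs the stale base before merge.

```shell
set -eu
tmp=$(mktemp -d)
cd "$tmp"
git init -q -b main repo && cd repo
git config user.email ci@example.com && git config user.name ci
echo spec > plan.md && git add plan.md && git commit -qm "pinned base"
git branch teammate                       # teammate starts from the pinned base
echo update >> plan.md && git commit -qam "main moves on"

# Pre-flight check before merge: is the teammate branch based on current main?
if [ "$(git merge-base main teammate)" != "$(git rev-parse main)" ]; then
  echo "stale base detected: rebasing teammate onto main"
  git rebase -q main teammate
fi
[ "$(git merge-base main teammate)" = "$(git rev-parse main)" ] && echo "base up to date"
```

The merge-base test is the whole check: if the common ancestor of main and the teammate branch is not main's tip, the base is stale and a rebase is required before merging.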

v2 ships feature-complete against the revised spec. The deferred items in docs/IMPLEMENTATION-PLAN.md are all production-scope (backend End Checklist triggers, real email delivery, Stripe webhooks, TA-facing pre-draft queue) — mockup-appropriate placeholders stand in for each, and every placeholder links back to its spec section.

43 v2 Screens
8 Build Phases
~36h Total Build Time
0 v1 Files Modified
Explore the v2 Mockup

Same bundle serves both: quranflow-admin.pages.dev/ is v1 (reference, frozen), and /v2/ is the current design. See the End Checklist tab, hybrid Assignment Matrix, new Communication domain, and Cmd+K global search.

The Full Pipeline

Capability Map 810 lines — 35+ screens, 10 DB tables, 3 integrations — Thu, Dec 18, 2025
Admin Interview 137 lines — 5 pain points, 9 recommendations, 7 domains proposed — Wed, Mar 11, 2026
Redesign Plan 532 lines — two parallel workstreams defined — Wed, Mar 11, 2026
Framework Analysis WS1 — 6 frameworks analyzed, 6 workflows mapped (1,931 lines) — Wed, Mar 11, 2026
Candidate Specification WS1 — 3 eliminated, 3 hybrid candidates specified (935 lines) — Wed, Mar 11, 2026
Candidate Evaluation WS1 — Enhanced Candidate A wins 30/35 (526 lines) — Wed, Mar 11, 2026
Billing Discovery WS2 — 5 repos audited, 6 workarounds mapped, 6 screens defined (1,489 lines) — Wed, Mar 11, 2026
Complete Specification 48 screens across 9 domains, 11 spec files (~4,300 lines) — Wed, Mar 11, 2026
Admin Mockup 59 components, 35 routes, 13 build phases, 52 deviations found — Wed–Thu, Mar 11–12, 2026
Mock Data Audit 54 submissions added, 20 student stories, 9 cross-reference rules — Sat, Mar 29, 2026
Gap Analysis + Pre-Meeting Framework 9 per-domain gap analyses + 5-category risk framework — Tue, Apr 14, 2026
Stakeholder Review Meeting 90 min with Lejla — 11 decisions, 5 ADRs, Framework v2, revised 12-file spec — Wed, Apr 15, 2026
Lejla's Written Reply 5-step End Checklist canonical; 11/13 defaults confirmed, 2 flipped — Tue, Apr 21, 2026
Working Session 60 min — Reporting, Issue Queue, Repeat Semester resolved; ADR-006 added; ADR-002 amended — Wed, Apr 22, 2026
v2 Mockup Build 43 screens across 9+1 domains, 8 phases (A–H), ~36 hours, v1 untouched — Wed–Thu, Apr 22–23, 2026

Each step's output became the next step's input. Every decision was driven by evidence — workflow data, interview findings, scored evaluations, stakeholder quotes — not intuition. The process produced 50+ source documents across two cycles, all linked above.

The durable artifacts for anyone picking this up next — the contractor who will build against v2, a colleague joining the project, a future reviewer — are: the Framework v2 decision catalog, the six ADRs, the 12-file v2 admin spec, and the live v2 mockup. Everything else on this page traces how those four artifacts got to their current shape.

We'd love your feedback

If you have thoughts on the redesign, questions about the process, or feedback on the mockup — drop us a voice note on WhatsApp or Slack.