Requirements Traceability
The missing layer between planning and code. Structured user stories with ID conventions and a traceability matrix that connects every requirement to its tests and implementation.
Why Requirements Traceability?
The Planning System tells AI how to build things. Requirements traceability tells AI what the system should do — and how to verify it does. Without this layer, AI assistants have no authoritative source to consult when they encounter contradictions between documentation, code, and tests.
The Problem Without Traceability
- Someone asks “how does user login work?” — the answer is scattered across a README, a comment in the auth file, and a test that may or may not be current.
- A test fails. Is it a bug in the code, or did the requirement change? No way to know without digging through git history.
- An AI assistant sees conflicting information in docs vs. code. It guesses which one is correct. Sometimes it guesses wrong.
- A feature is “done” but nobody can verify all the acceptance criteria were actually implemented.
The Solution: Three Linked Artifacts
- User stories in docs/usecases/ — the canonical description of what the system should do, with acceptance criteria.
- Test cases in your test files — each referencing the user story ID they verify.
- Traceability matrix in TRACEABILITY.md — a table mapping US-XXX → TC-XXX → code location.
User Story Template
Each user story lives in its own file under docs/usecases/. The filename includes the ID so you can find any story instantly with a file search.
docs/usecases/US-001-user-login.md — Template
---
id: US-001
title: User Login
status: implemented # draft | implemented | deprecated
priority: critical # critical | high | medium | low
---
# US-001: User Login
## User Story
As a **registered user**, I want to **log in with my email and password**
so that **I can access my dashboard**.
## Acceptance Criteria
1. [ ] User can submit email + password via the login form
2. [ ] Valid credentials redirect to the appropriate dashboard (coach/partner/ascent)
3. [ ] Invalid credentials show an error message without revealing which field is wrong
4. [ ] After 5 failed attempts, the account is rate-limited for 15 minutes
5. [ ] Session persists across browser refreshes (JWT in httpOnly cookie)
6. [ ] Logout clears the session and redirects to the login page
## API Endpoints
- POST /api/auth/signin — Authenticate credentials, return session
- POST /api/auth/signout — Clear session
## Database Entities
- User (id, email, passwordHash, role, failedLoginAttempts, lockedUntil)
## Related Test Cases
- TC-001: Login with valid credentials
- TC-002: Login with invalid password
- TC-003: Login rate limiting after 5 failures
- TC-004: Session persistence across page reload
- TC-005: Logout clears session
## Notes
Password hashing uses bcrypt with cost factor 12.
Role-based redirect: COACH → /dashboard/coach, PARTNER → /dashboard/partner
Key fields in the template:
- Frontmatter: Machine-readable ID, status, and priority. AI tools can filter by status to find what's implemented vs. still in draft.
- Acceptance criteria as checkboxes: Concrete, testable conditions. Each criterion maps to at least one test case.
- API endpoints and entities: Tells AI exactly what code to look at when investigating this story.
- Related test cases: The explicit link from story to test. TC-XXX IDs let you grep from requirement to test in seconds.
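The status filtering described above takes only a few lines of code. Here is a minimal sketch, not a full YAML parser: `parseFrontmatter` and `StoryMeta` are illustrative names, and the function assumes the exact `---`-delimited field layout of the template.

```typescript
// Minimal frontmatter parser for user story files.
// Assumes the template's "---" delimiters and "key: value" lines.
interface StoryMeta {
  id: string;
  title: string;
  status: string;
  priority: string;
}

function parseFrontmatter(markdown: string): StoryMeta | null {
  const match = markdown.match(/^---\n([\s\S]*?)\n---/);
  if (!match) return null;
  const fields: Record<string, string> = {};
  for (const line of match[1].split('\n')) {
    const [key, ...rest] = line.split(':');
    if (!key || rest.length === 0) continue;
    // Strip trailing "# draft | implemented | ..." annotations
    fields[key.trim()] = rest.join(':').split('#')[0].trim();
  }
  return {
    id: fields['id'] ?? '',
    title: fields['title'] ?? '',
    status: fields['status'] ?? 'draft',
    priority: fields['priority'] ?? 'medium',
  };
}
```

Run it over every file in docs/usecases/ and filter on `status` to list what is implemented versus still in draft.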
ID Conventions
Consistent IDs are what make the system searchable. Every story and test case gets a unique ID that appears in the story file, the test file, and the traceability matrix.
- User Story: e.g. US-001, US-042. Appears in docs/usecases/US-001-title.md, test file comments, and TRACEABILITY.md.
- Test Case: e.g. TC-001, TC-042. Appears in test file comments, TRACEABILITY.md, and the user story's "Related Test Cases" section.
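These conventions are also easy to lint. A sketch, assuming the zero-padded three-digit IDs shown in the examples (relax `\d{3}` to `\d+` if you expect more than 999 stories); `isValidStoryFilename` is an illustrative helper name:

```typescript
// ID patterns matching the US-XXX / TC-XXX conventions above.
const STORY_ID = /\bUS-\d{3}\b/;
const TEST_ID = /\bTC-\d{3}\b/;

// A story filename must lead with its ID: US-001-user-login.md
function isValidStoryFilename(name: string): boolean {
  return /^US-\d{3}-[a-z0-9-]+\.md$/.test(name);
}
```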
In test files, reference the user story in a comment at the top of the test:
scripts/tests/suites/auth.ts — Story References
import type { TestSuite } from '../lib/types';
export const authSuite: TestSuite = {
name: 'Authentication',
tests: [
{
// TC-001 — Verifies US-001 AC#1, AC#2
name: 'Login with valid credentials returns session',
endpoint: '/api/auth/signin',
method: 'POST',
expectedStatus: 200,
body: { email: process.env['TEST_EMAIL'], password: process.env['TEST_PASSWORD'] },
validate: (_, body) => {
const b = body as Record<string, unknown>;
return {
pass: typeof b['token'] === 'string' || b['ok'] === true,
message: 'Expected a session token or ok:true in response',
};
},
},
{
// TC-002 — Verifies US-001 AC#3
name: 'Login with wrong password returns 401',
endpoint: '/api/auth/signin',
method: 'POST',
expectedStatus: 401,
body: { email: process.env['TEST_EMAIL'], password: 'wrong-password' },
suggestedFix: 'Check auth handler returns 401 for invalid credentials',
},
],
};
With this convention, you can grep US-001 across the entire codebase and find every test that verifies that story — no manual cross-referencing needed.
Traceability Matrix
The traceability matrix lives at TRACEABILITY.md in the project root. It is the single document that answers “what was built, how do I know it works, and where is the code?”
TRACEABILITY.md — Structure
# Requirements Traceability Matrix
Last updated: 2026-01-15
## How to Read This
- **US-XXX**: User story in docs/usecases/
- **TC-XXX**: Test case in scripts/tests/suites/ or e2e/tests/
- **Status**: implemented | partial | missing
| Story | Title | Test Cases | Code Location | Status |
|-------|-------|------------|---------------|--------|
| US-001 | User Login | TC-001, TC-002, TC-003, TC-004, TC-005 | src/app/api/auth/, src/lib/auth.ts | implemented |
| US-002 | User Registration | TC-010, TC-011 | src/app/api/users/route.ts | implemented |
| US-003 | Password Reset | TC-020 | src/app/api/auth/reset/ | partial |
| US-004 | Role-Based Dashboard | TC-030, TC-031, TC-032 | src/middleware.ts, src/app/dashboard/ | implemented |
## Coverage Summary
- Total stories: 4
- Fully covered: 3 (75%)
- Partially covered: 1 (25%)
- No test coverage: 0 (0%)
The matrix serves three audiences:
- Developers — quickly find the code for any feature without digging through the whole codebase.
- QA / testers — know exactly which tests cover which requirements, and which requirements have gaps.
- AI assistants — have a single starting point when asked about any feature. Read the user story, find the tests, trace to the code.
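The Coverage Summary can be derived from the table itself rather than maintained by hand, so the percentages never drift from the rows. A sketch, assuming the pipe-table layout shown above with Status as the last column (`coverageSummary` is an illustrative name):

```typescript
// Count story rows in a TRACEABILITY.md pipe table by their Status column.
function coverageSummary(matrix: string): {
  total: number;
  implemented: number;
  partial: number;
  missing: number;
} {
  // Story rows start with "| US-..."; header and separator rows do not.
  const rows = matrix.split('\n').filter((l) => /^\|\s*US-\d+/.test(l));
  const statuses = rows.map(
    (r) => r.split('|').map((c) => c.trim()).filter(Boolean).at(-1) ?? ''
  );
  return {
    total: rows.length,
    implemented: statuses.filter((s) => s === 'implemented').length,
    partial: statuses.filter((s) => s === 'partial').length,
    missing: statuses.filter((s) => s === 'missing').length,
  };
}
```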
The Source of Truth Rule
Without this rule, AI assistants face an ambiguity problem: when docs say one thing and code does another, the AI must guess which is intentional. It will sometimes guess wrong, treating a bug as a feature or a feature as a bug.
Priority Order (highest to lowest)
- User story acceptance criteria — what the system must do. If this conflicts with anything else, the story wins.
- Failing tests — a test that verifies a user story acceptance criterion is a bug report, not a test to delete.
- Code behavior — what the system currently does. May be wrong.
- Inline comments and README — may be outdated. Lowest priority.
This rule also clarifies what to do when a requirement changes: update the user story first, then update the tests, then update the code. Never the other way around. The story is the specification; the code is the implementation.
How to Build This (AI Guide)
Step 1: Create the User Stories Folder
Set up docs/usecases/
mkdir -p docs/usecases
# Create the template file
touch docs/usecases/_TEMPLATE.md
# Create your first user story
touch docs/usecases/US-001-user-login.md
Step 2: Write the User Story Template
Save this as docs/usecases/_TEMPLATE.md — copy it for every new story:
docs/usecases/_TEMPLATE.md
---
id: US-XXX
title: Feature Title
status: draft
priority: medium
---
# US-XXX: Feature Title
## User Story
As a **[role]**, I want to **[action]** so that **[benefit]**.
## Acceptance Criteria
1. [ ] ...
2. [ ] ...
3. [ ] ...
## API Endpoints
- METHOD /api/path — description
## Database Entities
- EntityName (relevant fields)
## Related Test Cases
- TC-XXX: Test description
## Notes
Any implementation notes, constraints, or decisions.
Step 3: Create the Traceability Matrix
Create TRACEABILITY.md
# Create at the project root
cat > TRACEABILITY.md << 'EOF'
# Requirements Traceability Matrix
| Story | Title | Test Cases | Code Location | Status |
|-------|-------|------------|---------------|--------|
| US-001 | ... | TC-001 | src/... | draft |
EOF
Step 4: Add Story References to Tests
In every test file, add a comment referencing the user story and acceptance criterion being verified:
Convention for test comments
// TC-001 — Verifies US-001 AC#1 (user can submit email + password)
test('login form submits credentials', async () => {
// ...
});
// TC-002 — Verifies US-001 AC#3 (invalid credentials show error)
test('login shows error for wrong password', async () => {
// ...
});
Step 5: Add the Source of Truth Rule to Your AI Rules
Add this to your .cursor/rules/ or equivalent AI instruction file:
.cursor/rules/requirements.md
# Requirements Traceability
User stories in docs/usecases/ are the canonical source of truth.
When documentation conflicts with code:
- The user story is correct
- The code has a bug
Priority order:
1. User story acceptance criteria (highest)
2. Failing tests that reference a user story
3. Current code behavior
4. Inline comments and README (lowest)
When asked about any feature, start with the user story in docs/usecases/,
trace through TRACEABILITY.md to find tests and code locations.
Step 6: Keep It Updated
- When a requirement changes: update the user story first, then tests, then code.
- When a new feature is built: create the user story before writing code (or at the same time — never after).
- When a test is added: add the TC-XXX reference to the user story and to TRACEABILITY.md.
- Periodically grep for US-XXX in test files to verify the matrix is current.
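That periodic grep can itself be a script that reports drift in both directions. A sketch under stated assumptions: it compares US-XXX IDs found in your test sources against those listed in TRACEABILITY.md, and `matrixDrift` and `findStoryIds` are illustrative names, not part of any tooling described above.

```typescript
// Extract every US-XXX reference from a blob of text.
function findStoryIds(text: string): Set<string> {
  return new Set(text.match(/US-\d{3}/g) ?? []);
}

// Compare story IDs referenced in test sources against the matrix.
function matrixDrift(
  testSource: string,
  matrix: string
): { untracked: string[]; untested: string[] } {
  const inTests = findStoryIds(testSource);
  const inMatrix = findStoryIds(matrix);
  return {
    // Referenced by a test but missing from the matrix
    untracked: [...inTests].filter((id) => !inMatrix.has(id)).sort(),
    // Listed in the matrix but never referenced by a test
    untested: [...inMatrix].filter((id) => !inTests.has(id)).sort(),
  };
}
```

Feed it the concatenated contents of your test suites and the matrix file; a non-empty result in either direction means the matrix needs updating.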