
Project Spec

project-spec -> architecture-design -> domain-develop -> application-develop -> adapter-develop -> observability-develop -> test-develop

Prerequisites: None. This skill is the first step of the workflow; it starts from the user’s vision and business problem.

The most common problem when starting a new project is “jumping straight into writing code.” Writing code with ambiguous Aggregate boundaries leads to expensive refactoring later. Without defining a Ubiquitous Language, team members end up calling the same concept by different names.

The project-spec skill integrates PM-perspective spec writing (vision, user stories, priorities, acceptance criteria) with DDD-perspective domain analysis (Ubiquitous Language, Aggregate boundaries, business rules). This document becomes the input for domain-develop, application-develop, and adapter-develop skills, ensuring that design intent is consistently carried through to the code.

| Phase | Activity | Deliverable |
|---|---|---|
| 1. Vision Collection | Project basic info, users, KPIs, Non-Goals, timeline | Project overview draft |
| 2. Domain Analysis + User Stories | Ubiquitous Language, business rules, user stories (INVEST), acceptance criteria (Given/When/Then) | Language table + rule catalog + stories |
| 3. Scope Decision + Prioritization | Aggregate boundary identification, P0/P1/P2 priorities, milestones | Aggregate candidates + priority table |
| 4. Document Generation | Organize all content into a structured document | 00-project-spec.md |
Typical requests that invoke this skill:

  • “Write the PRD”
  • “Define the requirements”
  • “Plan the project”
  • “Write the spec”
  • “Organize business requirements”
  • “Start the project”

The skill collects the following information through conversation.

Project Basic Information:

  • What is the project name?
  • How would you describe it in one sentence?
  • What business problem are you solving?

User Information:

  • Who are the target users (personas)?
  • What are each persona’s key goals?

Business Information:

  • What are the key success metrics (KPIs)?
    • Leading indicators (e.g., daily active users, feature adoption rate)
    • Lagging indicators (e.g., revenue, retention rate)
  • What are the integration constraints with existing systems?
  • What are the technical constraints? (.NET 10, monolith/microservice, etc.)

Scope Boundaries:

  • What are the Non-Goals? — Features/scope explicitly excluded from this project
  • Are there hard deadlines or external dependencies?

The user does not need to provide all information at once. The skill incrementally collects it through questions.

Extracts domain model candidates and user stories from the collected vision.

Identifies key elements from business descriptions:

  • Key nouns -> Entity/VO candidates (e.g., “model”, “deployment”, “assessment”)
  • Key verbs -> Use case (Command/Query) candidates (e.g., “register”, “approve”, “quarantine”)
  • State changes -> Domain event candidates (e.g., “activated”, “quarantined”)

Results are organized into a table like:

| Korean | English | Definition |
|---|---|---|
| AI Model | AIModel | A trained AI/ML model. The core entity managing lifecycle and risk |
| Model Deployment | ModelDeployment | Deployment of a specific AI model to a production environment. Tracks version, environment, and status |
| Risk Tier | RiskTier | EU AI Act-based model risk level (Minimal, Limited, High, Unacceptable) |
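
The Ubiquitous Language table maps directly onto code naming. A minimal sketch of how these terms could surface as types — TypeScript is used here purely for illustration (the spec's constraints mention .NET, but the shape carries over), and the field choices are assumptions, not part of the spec:

```typescript
// Ubiquitous Language terms as types. Names come from the table above;
// the concrete fields are illustrative assumptions.

type RiskTier = "Minimal" | "Limited" | "High" | "Unacceptable";

interface AIModel {
  id: string;
  name: string;     // business rule: must not be empty
  version: string;  // SemVer, e.g. "1.0.0"
  riskTier: RiskTier;
}

interface ModelDeployment {
  id: string;
  modelId: string;  // references the AIModel Aggregate by ID only
  environment: "staging" | "production";
  status: "Draft" | "PendingReview" | "Active" | "Quarantined" | "Decommissioned";
}

const example: AIModel = {
  id: "m-001",
  name: "fraud-detector",
  version: "1.0.0",
  riskTier: "Minimal",
};
```

Because the table is the single source of naming, a reviewer can diff code identifiers against the language table mechanically.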

Core stories are written for each persona:

As a [persona], I want to [action], in order to [value].

INVEST Criteria Verification:

  • Independent: Independent from other stories
  • Negotiable: Implementation approach is negotiable
  • Valuable: Provides value to users
  • Estimable: Size can be estimated
  • Small: Completable within one sprint
  • Testable: Verifiable with acceptance criteria

For each use case (Command/Query), acceptance criteria are written in Given/When/Then format:

Given: [Precondition]
When: [User action]
Then: [Expected result]

Both success and rejection scenarios are written.

Identified rules are classified by type:

| Rule Type | Description | Example |
|---|---|---|
| Invariant | A condition that must always be true | "Model name cannot be empty" |
| State Transition | Only allowed transitions are possible | "Only Draft -> PendingReview is allowed" |
| Cross-cutting Rule | References multiple Aggregates | "Automatically quarantine Active deployment on Critical incident" |
| Forbidden State | Must be structurally impossible | "Deployment of Unacceptable risk models" |
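
The "State Transition" rule type can be sketched as an explicit transition map plus a guard. This is an illustration, not a finalized rule set — the statuses follow the ModelDeployment lifecycle used elsewhere in this document:

```typescript
// Explicit whitelist of allowed state transitions; anything absent is rejected.

type DeploymentStatus =
  | "Draft" | "PendingReview" | "Active" | "Quarantined" | "Decommissioned";

const allowedTransitions: Record<DeploymentStatus, DeploymentStatus[]> = {
  Draft: ["PendingReview"],                  // "Only Draft -> PendingReview is allowed"
  PendingReview: ["Active"],
  Active: ["Quarantined", "Decommissioned"],
  Quarantined: ["Decommissioned"],
  Decommissioned: [],                        // terminal state
};

function canTransition(from: DeploymentStatus, to: DeploymentStatus): boolean {
  return allowedTransitions[from].includes(to);
}
```

Keeping the map as data (rather than scattered `if` checks) makes the rule catalog and the code reviewable side by side.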

Identifies Aggregate candidates according to Evans’ criteria.

Boundary Decision Criteria:

  1. Transactional consistency — Data changed in the same transaction belongs in the same Aggregate
  2. Invariant scope — The scope of data that invariants must guarantee
  3. Independent lifecycle — Can be created/deleted independently without other Aggregates
  4. Inter-Aggregate references — Reference by ID only (direct object references forbidden)
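
Criterion 4 (ID-only references) can be shown in a few lines. The field names here are illustrative assumptions:

```typescript
// An Aggregate holds only the ID of another Aggregate, never the object itself.

interface ComplianceAssessment {
  id: string;
  modelId: string;  // ID reference to the AIModel Aggregate -- OK
  status: "Initiated" | "InProgress" | "Passed" | "Failed";
}

// Anti-pattern, for contrast: embedding the whole AIModel object would pull it
// into this Aggregate's consistency boundary and transaction.
// interface BadAssessment { model: AIModel; ... }

const assessment: ComplianceAssessment = {
  id: "ca-001",
  modelId: "m-001",
  status: "Initiated",
};
```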

Inter-Aggregate Coordination:

  • Synchronous coordination -> Domain Service (within the same transaction)
  • Asynchronous coordination -> Domain Event + Event Handler (eventual consistency)
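
Asynchronous coordination can be sketched with a deliberately minimal in-memory dispatcher. The event and handler mirror the cross-cutting rule "quarantine Active deployment on Critical incident"; the dispatcher itself is a stand-in, not a prescribed implementation:

```typescript
// Minimal in-memory event dispatch: the incident side publishes, the
// deployment side reacts in its own step (eventual consistency).

interface IncidentReportedEvent {
  type: "IncidentReportedEvent";
  deploymentId: string;
  severity: "Low" | "Critical";
}

type Handler = (e: IncidentReportedEvent) => void;
const handlers: Handler[] = [];

function subscribe(h: Handler): void { handlers.push(h); }
function publish(e: IncidentReportedEvent): void { handlers.forEach((h) => h(e)); }

// Handler on the ModelDeployment side of the boundary.
const quarantined: string[] = [];
subscribe((e) => {
  if (e.severity === "Critical") quarantined.push(e.deploymentId);
});

publish({ type: "IncidentReportedEvent", deploymentId: "d-042", severity: "Critical" });
```

The point is the boundary: neither Aggregate calls the other directly, so each transaction stays small.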

All use cases and user stories are classified by priority:

| Priority | Criteria | MoSCoW Mapping |
|---|---|---|
| P0 | Cannot ship without it | Must Have |
| P1 | Weakens competitiveness without it | Should Have |
| P2 | Differentiates if present | Could Have |

When a feature addition is requested, check these 5 items:

  1. Does this feature directly solve the core problem?
  2. Does this feature provide value without P0?
  3. Can users accept it being deferred to post-launch?
  4. Is the value sufficient relative to implementation cost?
  5. Is it outside the explicitly stated Non-Goals?

Timeline and milestone planning covers:

  • Hard deadlines
  • External dependencies (other teams, third-party APIs, infrastructure)
  • Scope per milestone (Phase 1: P0, Phase 2: P0+P1, …)

All collected information is structured into {context}/00-project-spec.md.

```markdown
# {Project Name} -- Project Requirements Specification
## 1. Project Overview
### Background / Goals / Target Users / Success Metrics (Leading+Lagging) / Technical Constraints
## 2. Non-Goals (What We Won't Do)
## 3. Ubiquitous Language
| Korean | English | Definition |
## 4. User Stories (INVEST)
### Per-persona stories + priorities
## 5. Aggregate Candidates
| Aggregate | Core Responsibility | State Transitions | Key Events |
## 6. Business Rules
### Per-Aggregate rules + cross-cutting rules
## 7. Use Cases + Acceptance Criteria
### Commands / Queries / Event Handlers + Given/When/Then
## 8. Forbidden States
| Forbidden State | Prevention Strategy | Functorium Pattern |
## 9. Priority Summary (P0/P1/P2)
## 10. Timeline / Milestones
## 11. Open Questions (engineering/product/design/legal)
## 12. Next Steps
```

After document generation, the skill guides the next steps:

The project spec is complete.

Next Steps:

  1. Use the architecture-design skill to design the project structure and infrastructure
  2. Use the domain-develop skill to design and implement each Aggregate in detail
  3. Use the application-develop skill to implement use cases

Here is a key summary of a real project’s PRD.

Non-Goals:

  • Model training pipeline management — Separate MLOps platform domain
  • A/B testing platform — Consider after Phase 2
  • Real-time model performance dashboard — Replaced with external monitoring tool integration

| Aggregate | Core Responsibility | State Transitions | Key Events |
|---|---|---|---|
| AIModel | Model lifecycle management | - | RegisteredEvent, RiskClassifiedEvent |
| ModelDeployment | Deployment environment management | Draft -> PendingReview -> Active -> Quarantined -> Decommissioned | ActivatedEvent, QuarantinedEvent |
| ComplianceAssessment | Regulatory compliance assessment | Initiated -> InProgress -> Passed/Failed | PassedEvent, FailedEvent |
| ModelIncident | Model incident/issue tracking | Reported -> Investigating -> Resolved/Escalated | ReportedEvent, ResolvedEvent |

| ID | Story | Priority |
|---|---|---|
| US-001 | As an AI governance manager, I want to register a new AI model, in order to systematically manage risk. | P0 |
| US-002 | As an AI governance manager, I want to register a model in a deployment environment, in order to track operational status. | P0 |
| US-003 | As a compliance officer, I want to conduct a compliance assessment before deployment, in order to comply with the EU AI Act. | P0 |

Model Registration (RegisterModel):

Success Scenario:

Given: A valid model name, version (SemVer), and purpose are ready
When: The manager registers a model
Then: The model is created with Minimal risk level and a RegisteredEvent is published

Rejection Scenario:

Given: The model name is empty
When: The manager attempts to register a model
Then: A "ModelName is required" validation error is returned

| Priority | Use Cases |
|---|---|
| P0 | RegisterModel, CreateDeployment, ReportIncident, InitiateAssessment |
| P1 | SubmitForReview, ActivateDeployment, QuarantineDeployment |
| P2 | Drift detection automation, parallel compliance checks |

| ID | Question | Category | Blocking |
|---|---|---|---|
| Q-001 | When will the external ML monitoring API spec be finalized? | engineering | Non-blocking |
| Q-002 | Is there a possibility of compliance criteria changes based on the EU AI Act enforcement timeline? | legal | Blocking |

Key principles:

  • Start with business language, don’t end with technical language — Ubiquitous Language is the source of code naming
  • Explicitly state Non-Goals to prevent scope creep — Agreeing on “what not to do” is as important as “what to do”
  • User stories are verified against INVEST criteria — Untestable stories must be revised
  • Acceptance criteria include both success and rejection scenarios — Rejection scenarios are the source of domain rules
  • Start from P0 — Cannot ship without P0, P1/P2 are lower priority
  • Aggregate boundaries are based on transactional consistency — Business rules, not data models, determine boundaries
  • Forbidden states are structurally eliminated through the type system — Compile-time guarantees take priority over runtime validation
  • Open Questions are tracked with per-category tagging — Blocking/non-blocking distinction determines whether progress can continue
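
The forbidden-state principle can be made concrete with a type-level sketch. It assumes the RiskTier values from this spec; the helper names are illustrative:

```typescript
// "Deployment of Unacceptable risk models" made structurally impossible:
// the deployment type simply cannot hold the Unacceptable tier.

type RiskTier = "Minimal" | "Limited" | "High" | "Unacceptable";
type DeployableTier = Exclude<RiskTier, "Unacceptable">;

interface Deployment {
  modelId: string;
  tier: DeployableTier; // compile error if "Unacceptable" is assigned directly
}

// Factory narrows untyped input at the boundary; typed callers are already safe.
function createDeployment(modelId: string, tier: RiskTier): Deployment | null {
  if (tier === "Unacceptable") return null;
  return { modelId, tier };
}
```

This is the sense in which compile-time guarantees take priority: the invalid combination is unrepresentable for typed callers, and the single runtime check lives only at the input boundary.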