AAA Cycle: Phase 3 - Apply the Plan and Deliver

📖 8 min read

Phase Overview

Purpose

Execute the agreed plan with discipline while maintaining continuous alignment. This is sustained execution with governance, quality assurance, stakeholder communication, and value delivery.

The Universal Pattern

Regardless of project size or methodology, application follows these steps:

  1. Implement incrementally: Build in small batches
  2. Maintain quality: Test continuously, don’t compromise
  3. Govern architecture: Ensure integrity through reviews
  4. Keep stakeholders aligned: Regular communication and demos
  5. Deploy reliably: Automate and practice
  6. Reflect and improve: Learn from experience

Recursive Application

| Level | Timeframe | What Apply Looks Like |
| --- | --- | --- |
| Program | months | Multiple projects, portfolio governance, quarterly reviews |
| Project | weeks | Team executing sprints, regular demos, project retrospective |
| Sprint | days | Daily development, sprint review, sprint retro |
| Feature | hours | Code, test, review, merge with continuous integration |

At the feature level, Apply happens through CI/CD pipelines. Every commit is an act of applying the technical agreements made during design. The pipeline verifies that agreements are honored; tests are agreements encoded as executable specifications. See CI/CD and Technical Agreement for how AAA operates at the code level.
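
As a minimal sketch of a test-as-agreement (the pricing rule, function name, and values here are hypothetical, not from this guide):

```python
# Hypothetical agreement from design: "a discount never drives a price below zero."
# The test encodes that agreement as an executable specification (pytest style).

def apply_discount(price: float, discount_pct: float) -> float:
    """Apply a percentage discount without ever returning a negative price."""
    return max(price * (1 - discount_pct / 100), 0.0)

def test_discount_applies_agreed_percentage():
    assert apply_discount(100.0, 20.0) == 80.0

def test_discount_never_produces_negative_price():
    # A failure here means the pricing agreement was broken, not just "the build".
    assert apply_discount(10.0, 150.0) == 0.0
```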

Entry & Exit

You start with: Approved architecture and implementation plan from Phase 2

You deliver: Working software in production that meets business objectives


The Core Value: Honoring the Agreement While Learning

Apply is not blind execution; it's disciplined delivery that honors the agreement while maintaining the courage to pause when discovery demands it.

Discovery during implementation is inevitable:

  • Implementation reveals hidden complexity
  • Building the first version surfaces better approaches
  • Edge cases emerge that invalidate assumptions
  • Dependencies appear that change timelines

Discovery Is Not Failure

Discovery is the inevitable outcome of doing real work. Teams that ship on broken assumptions just to avoid "changing the plan" deliver work that misses the mark.

The discipline of Apply is knowing when to:

  • Continue: Discovery confirms the approach; keep executing
  • Adapt: Minor adjustments within the agreed scope and architecture
  • Pause and Realign: Discovery invalidates core assumptions; cycle back to Align or Agree

When to Pause vs. Adapt

| Pause and Realign | Adapt and Continue |
| --- | --- |
| Technical discovery: Assumed approach won’t work | Minor technical adjustments within the architecture |
| Scope discovery: Original scope misunderstood | Small scope clarifications that don’t change core agreement |
| Dependency discovery: Critical dependencies emerge | Implementation details that don’t affect stakeholders |
| Value discovery: Better problem to solve revealed | Performance optimizations within agreed SLOs |

How to handle discovery:

  1. Document what you learned (what assumption broke, what’s now understood)
  2. Assess impact (timeline, cost, scope, quality, risk)
  3. Present options to stakeholders (continue as-is, adapt, or realign)
  4. If realignment needed, cycle back to Align or Agree phases
  5. Update the agreement and communicate changes
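
One lightweight way to support steps 1–3 is a structured discovery record; a sketch in Python, with every field and value illustrative:

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional

class Response(Enum):
    CONTINUE = "continue"   # discovery confirms the approach
    ADAPT = "adapt"         # minor adjustment within the agreed scope
    REALIGN = "realign"     # cycle back to Align or Agree

@dataclass
class DiscoveryRecord:
    broken_assumption: str                        # what we believed during Agree
    new_understanding: str                        # what implementation revealed
    impact: dict = field(default_factory=dict)    # timeline, cost, scope, quality, risk
    options: list = field(default_factory=list)   # choices to present to stakeholders
    decision: Optional[Response] = None           # filled in after stakeholder review

record = DiscoveryRecord(
    broken_assumption="vendor API supports batch writes",
    new_understanding="API only accepts single writes; throughput drops 10x",
    impact={"timeline": "+2 weeks"},
    options=["queue-based workaround", "reduce scope", "realign on approach"],
)
```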

Circuit Breakers

The guidance above describes how to handle discovery, but it leaves the question of when to trigger reassessment open to judgment. Circuit breakers make that decision explicit: specific boundaries that, when crossed, force a pause and reassessment.

Temporal circuit breakers are feature-specific time limits set during Agree. A simple CRUD screen might have a 3-day limit while a complex integration might have a 6-week limit. When the time limit is reached, you stop and reassess rather than pushing through.

Assumption circuit breakers trigger when critical assumptions identified during Agree prove invalid. During shaping, you document assumptions like “the third-party API supports bulk operations” or “the existing database schema can handle this query pattern.” If implementation proves an assumption wrong, the circuit breaker trips.
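
A minimal sketch of both breaker types, assuming the limits and assumptions were captured during Agree (all names, dates, and thresholds are illustrative):

```python
from datetime import date

def temporal_breaker_tripped(started: date, today: date, limit_days: int) -> bool:
    """Temporal breaker: trips when the feature's agreed time box is exhausted."""
    return (today - started).days >= limit_days

# Assumption breakers: each critical assumption from shaping gets an explicit check.
assumptions = {
    "third-party API supports bulk operations": True,   # flip to False when disproved
    "existing schema handles the query pattern": True,
}

def assumption_breaker_tripped(checks: dict) -> bool:
    return not all(checks.values())

tripped = (
    temporal_breaker_tripped(date(2024, 3, 1), date(2024, 3, 25), limit_days=21)
    or assumption_breaker_tripped(assumptions)
)
if tripped:
    # A tripped breaker forces one of the decisions listed below.
    print("Circuit breaker tripped: stop and reassess with stakeholders.")
```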

When a circuit breaker trips:

  • Extend with explicit agreement: If work is close to done and stakeholders agree
  • Reduce scope: Cut features to fit within the boundary
  • Reshape: Return the work to Agree for reshaping
  • Drop: If work proved unviable, stop rather than continuing to invest

For more on circuit breakers, see Shaped Kanban.


Managing External Dependencies

When Dependencies Fail

External dependencies are a primary source of discovery during Apply. Other teams miss commitments, third-party services don’t work as documented, and vendors encounter delays.

Types of dependency failures:

  • Delay: External team pushes their delivery date
  • Capability gap: Delivered dependency doesn’t meet your needs
  • Quality issues: Dependency works but has bugs or performance problems
  • Complete failure: Dependency won’t be delivered at all

Dependency Management

Track dependencies proactively:

  • Maintain a dependency register with: owner, expected delivery, confidence level, fallback plan (a sketch follows this list)
  • Schedule regular check-ins with dependency owners
  • Don’t rely on status reports alone
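
A register needs no special tooling; a minimal sketch whose fields mirror the list above (all entries are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Dependency:
    name: str
    owner: str               # who delivers it
    expected_delivery: str   # date agreed with the owner
    confidence: str          # "high" / "medium" / "low"
    fallback_plan: str       # what we do if it slips

register = [
    Dependency("payments-api v2", "Payments team", "2024-06-15",
               confidence="medium", fallback_plan="keep the v1 adapter behind a flag"),
    Dependency("SSO provider upgrade", "Vendor", "2024-07-01",
               confidence="low", fallback_plan="defer SSO scope to the next phase"),
]

# Anything below high confidence earns a proactive check-in,
# not a wait for the owner's status report.
at_risk = [d.name for d in register if d.confidence != "high"]
print("Schedule check-ins for:", ", ".join(at_risk))
```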

When a dependency fails:

  1. Assess the impact: How does this affect your timeline, scope, and quality?
  2. Identify options: Wait, work around, de-scope, build it yourself, or escalate
  3. Present options to stakeholders: Don’t just report the problem; present choices with trade-offs
  4. Update the agreement: If response changes scope, timeline, or cost, cycle back to Align or Agree

Red Flags

  • No fallback plans: Critical dependencies without contingencies
  • Passive monitoring: Waiting for dependency owners to report problems
  • Hope as strategy: "They'll probably make it" without evidence

Scope Negotiation

The Iron Triangle Reality

When reality forces changes, something has to give. The classic constraints are scope, timeline, and cost.

| Adjustable | Rarely Adjustable |
| --- | --- |
| Scope: Reduce features, defer to future phases | Quality: Cutting corners creates technical debt |
| Timeline: Extend delivery date | Fixed deadlines: Regulatory, contractual, market-driven |
| Cost: Add resources, buy vs. build | Budget caps: When resources truly aren’t available |

Scope Negotiation Framework

  1. Quantify the gap: How much are we short? (days, story points, effort; a worked sketch follows this list)
  2. Present options, not problems: Reduce scope, extend timeline, add resources, or accept quality risk
  3. Recommend an option: Don’t just present choices; provide your recommendation with reasoning
  4. Make trade-offs explicit: “If we cut feature X, we lose capability Y”
  5. Get explicit agreement: Document the decision
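
A worked sketch of step 1 in story points (all numbers are illustrative):

```python
remaining_scope_points = 120    # estimated work left
velocity_points_per_week = 20   # team's demonstrated throughput
weeks_left = 4                  # time until the agreed date

deliverable = velocity_points_per_week * weeks_left   # 80 points
gap = remaining_scope_points - deliverable            # 40 points short

# Options to present: cut ~40 points of scope, extend ~2 weeks (gap / velocity),
# add capacity, or accept quality risk, each with its trade-off made explicit.
print(f"Gap: {gap} points, roughly {gap / velocity_points_per_week:.0f} weeks")
```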

Negotiation Principles

Lead with impact, not excuses:

  • ❌ “We underestimated the complexity”
  • ✅ “The integration revealed requirements we didn’t anticipate. Here are our options.”

Sacrifice quality last: Quality is the easiest thing to sacrifice and the hardest thing to recover. Quantify the risk when stakeholders push to “just get it done.”

Negotiate early: A 10% scope reduction in week 2 is easier than a 30% scope reduction in week 8.

Red Flags

  • Presenting problems without options
  • Silent scope creep: Accepting additions without negotiating
  • Sacrificing quality first: "We'll skip testing to make the date"
  • Late negotiation: Raising scope issues the week before delivery

Core Activities

1. Implementation & Architecture Governance

Build incrementally while maintaining architectural integrity.

Implementation Approach:

  • Work in small, releasable increments
  • Continuously integrate and test changes
  • Gather feedback early and often

CI/CD as Agreement Verification:

At the implementation level, CI/CD pipelines operationalize the AAA discipline. Every commit triggers verification that technical agreements are being honored:

  • Tests verify agreements: Unit tests encode component behavior agreements. Integration tests encode contracts between components. When tests fail, an agreement was broken.
  • Code reviews verify alignment: PR reviews confirm that the implementation matches shared understanding of intent.
  • Security and quality gates verify standards: Pipeline gates enforce the quality and security agreements established during Phase 2.

When CI failures occur, ask “what agreement broke?” rather than just “how do I fix this build?” This question leads to root causes and process improvements, not just symptom fixes.

See CI/CD and Technical Agreement for detailed guidance on how AAA operates at the code level.

Architecture Governance:

  • Architecture Decision Records (ADRs): Document significant decisions as they’re made
  • Architecture Reviews: Weekly or bi-weekly review of significant changes
  • Code Reviews: Review for architectural conformance, not just correctness
  • Tech Stack Governance: Evaluate new libraries/frameworks before adoption

See Governance for detailed guidance.

Red Flags

  • Architecture astronauts: Over-governing, creating bottlenecks
  • No governance: Inconsistent implementation, architectural drift
  • Ignoring technical debt: Until it's unmanageable

2. Continuous Stakeholder Alignment

Maintain alignment throughout implementation as discovery happens.

The human connection established during Align must be maintained throughout Apply. As implementation reveals new information, the team and stakeholders must stay aligned on what’s being built, why it matters, and what trade-offs are being made.

Regular Touchpoints:

  • Sprint/Iteration Reviews: Demo working software, gather feedback
  • Stakeholder Updates: Progress, blockers, risks, budget, timeline
  • Retrospectives: Reflect on what went well, identify improvements

Progress Tracking: Hill Charts Over Percent Complete

Traditional progress tracking (“we’re 80% done”) hides more than it reveals. Teams can be “80% done” for weeks because they’re stuck on the hard part.

Hill charts provide more honest visibility by distinguishing two phases:

Uphill (figuring it out): The team is still discovering unknowns, solving novel problems. Progress feels slow because you’re learning, not just executing.

Downhill (making it happen): The unknowns are resolved. The team knows what to build and is executing.

This distinction matters for stakeholder communication:

  • “We’re uphill on the integration” signals uncertainty
  • “We’re downhill on the UI” signals confidence
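
To make hill positions concrete in a status update, one option is a simple 0–100 scale with 50 as the crest; a sketch with illustrative scopes and positions:

```python
# Hill position on a 0-100 scale: below 50 is uphill (figuring it out),
# 50 and above is downhill (making it happen).
scopes = {
    "third-party integration": 30,   # uphill: unknowns remain
    "UI": 75,                        # downhill: executing known work
}

for scope, position in scopes.items():
    phase = "uphill (uncertain)" if position < 50 else "downhill (confident)"
    print(f"{scope}: {phase}")
```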

Red Flags

  • Communication vacuum: No updates until the end
  • Hiding problems: Not escalating risks/issues early
  • Stakeholders surprised at delivery: Lost alignment during execution

3. Quality Assurance

Ensure quality through continuous testing and validation.

Testing Activities:

  • Test-driven development: Write tests with code
  • Automated testing: Unit, integration, E2E tests in CI pipeline
  • Security testing: SAST, DAST, dependency scanning, penetration testing
  • Performance testing: Load testing, stress testing, SLO validation (see the sketch below)

See Security Testing for detailed guidance.
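
As a minimal sketch of SLO validation (the latency samples and target are hypothetical): compute the p95 of measured latencies and compare it against the agreed objective.

```python
import math

def p95(samples_ms: list[float]) -> float:
    """95th-percentile latency via the nearest-rank method."""
    ordered = sorted(samples_ms)
    rank = math.ceil(0.95 * len(ordered)) - 1  # nearest-rank index (0-based)
    return ordered[rank]

latencies_ms = [120, 95, 180, 210, 140, 160, 280, 130, 110, 150]
SLO_P95_MS = 300  # latency target agreed during Phase 2

assert p95(latencies_ms) <= SLO_P95_MS, "Performance agreement broken"
```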

Quality Gates:

  • Pre-merge: Tests pass, review approved
  • Pre-release: All acceptance criteria met, no critical bugs, security scan clean
  • Pre-production: UAT passed, performance validated, rollback plan tested

Ready for Release

  • All acceptance criteria met
  • Test coverage targets achieved
  • No critical/high-severity bugs
  • Security scan passed
  • Performance meets SLOs
  • UAT completed and approved
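
The checklist can also be encoded as an explicit gate so release readiness is computed rather than debated; a minimal sketch with illustrative statuses:

```python
release_criteria = {
    "all acceptance criteria met": True,
    "test coverage targets achieved": True,
    "no critical/high-severity bugs": True,
    "security scan passed": True,
    "performance meets SLOs": True,
    "UAT completed and approved": False,   # still pending in this example
}

blockers = [name for name, met in release_criteria.items() if not met]
if blockers:
    print("Not ready for release; blocked on: " + ", ".join(blockers))
else:
    print("Ready for release.")
```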

4. Deployment & Operations

Deploy reliably and transition operations smoothly.

For detailed deployment guidance, see the dedicated deployment guides.

Key Principles:

  • Automate everything: deployments, rollbacks, monitoring
  • Deploy frequently: small, frequent deployments reduce risk
  • Use feature flags: decouple deployment from feature release (sketched below)
  • Test rollback regularly: it should be routine, not exceptional
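
A minimal sketch of the feature-flag principle, using an in-memory dict where a real system would use a flag service:

```python
def legacy_checkout_flow(cart: list) -> str:
    return f"legacy checkout for {len(cart)} items"

def new_checkout_flow(cart: list) -> str:
    return f"new checkout for {len(cart)} items"

# Deployed code carries both paths; the flag decides which one users see.
flags = {"new-checkout": False}   # shipped dark, enabled later without a redeploy

def checkout(cart: list) -> str:
    if flags.get("new-checkout", False):
        return new_checkout_flow(cart)   # turn on gradually; turn off instantly
    return legacy_checkout_flow(cart)

print(checkout(["book", "pen"]))  # legacy path until the flag flips
```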

Operations Handoff:

  • Train operations team
  • Provide runbooks for common issues
  • Establish on-call rotation and escalation paths

5. Delivery & Handoff

Complete delivery and transition to ongoing operations.

Final Validation:

  • Complete User Acceptance Testing
  • Validate all acceptance criteria met
  • Confirm SLOs being achieved
  • Get stakeholder sign-off on delivery

Retrospective:

  • Reflect on entire project (not just last sprint)
  • What went well? What didn’t?
  • Capture lessons learned
  • Celebrate team accomplishments

Project Closure:

  • Final project report to stakeholders
  • Archive project artifacts
  • Plan for ongoing enhancements

Delivery Acceptance

  • All must-have requirements implemented
  • Acceptance criteria met and validated
  • SLOs being met in production
  • Documentation complete
  • Operations team trained and ready
  • Stakeholders satisfied with delivery

Red Flags

  • No clear acceptance criteria: Project drags on indefinitely
  • No retrospective: Missing opportunity to learn
  • Ghosting operations: Dev team disappears after launch
