AAA Cycle: Phase 2 - Agree to the Plan

📖 8 min read

Phase Overview

Purpose

Transform aligned understanding into concrete, approved technical plans. Design the solution, validate critical assumptions, and secure commitment on approach and resources.

Agreement without understanding is not agreement; it's compliance waiting to fail.

The Universal Pattern

Regardless of project size or methodology, agreement follows these steps:

  1. Design the solution: How will we build this?
  2. Validate assumptions: Will this approach work?
  3. Define quality standards: How good is good enough?
  4. Set performance targets: What’s the bar for success?
  5. Analyze costs: What’s the total investment and return?
  6. Plan the work: What’s the sequence and effort?
  7. Get commitment: Do we all agree to proceed?

Recursive Application

  • Program (weeks): Full architecture design, comprehensive POC, detailed TCO/ROI
  • Project (days): Component design, targeted spikes, focused cost analysis
  • Sprint (hours): Technical design discussion, quick validation
  • Feature (minutes): Code review validates approach

Entry & Exit

You start with: Approved project charter from Phase 1

You deliver: Approved architecture and implementation plan with resource commitment


Core Activities

1. Architecture Design

Define the system architecture that meets requirements within constraints.

Key Design Decisions:

  • Architectural Characteristics: What quality attributes matter most?
    • Identify 7 characteristics critical to success (performance, scalability, availability, security, maintainability, etc.)
    • Prioritize the top 3; these drive architecture style selection
    • See Architecture Foundations
  • Architectural Style: Monolithic, microservices, serverless, event-driven?
  • Component Boundaries: How do you break the system into pieces?
    • Define responsibilities, interfaces, and data ownership
    • Align boundaries with domain partitioning when possible
  • Integration Patterns: How do components and external systems communicate?
  • Data Architecture: How is data stored and managed?

Document Your Decisions:

  • Architecture Decision Records (ADRs): Context → Decision → Consequences
  • Document WHY, not just WHAT
  • Record alternatives considered and why they were rejected
  • See Architecture Decisions & Leadership

How to Do This Well:

  • Evaluate multiple options before deciding
  • Document trade-offs explicitly for future reference
  • Design for 2x growth, not 100x (YAGNI)
  • Involve senior engineers throughout design
  • Consider operations from the start

Red Flags

  • Resume-driven architecture: Choosing trendy tech, not what fits
  • Over-engineering: Building for scale you'll never need
  • Designing in a vacuum: No team input or buy-in
  • Not documenting decisions: Future teams won't know why

2. Architecture Documentation

Create the minimum documentation needed to achieve genuine agreement on high-risk decisions.

During Agree, diagrams codify decisions and create shared commitment. Use documentation strategically to clarify what you’re agreeing to, not to create comprehensive reference material.

Match documentation to risk:

  • High-risk decisions (deployment model, technology selection, integration points) need more documentation
  • Low-risk decisions can be described in prose or decided during Apply

Use the C4 Model selectively:

  • Level 1 (Context): When multiple systems/teams are involved or boundaries need clarification
  • Level 2 (Container): When agreeing on deployment architecture or operational concerns
  • Level 3 (Component): When maintainability is a top concern or parallel team development requires clear boundaries
  • Level 4 (Code): Almost never during Agree; code evolves too rapidly

Alternatives to diagrams:

  • ADRs capture the “why” behind decisions
  • API specifications (OpenAPI/Swagger) for integration contracts
  • Trade-off tables for comparing alternatives

If a diagram doesn't help someone commit to a specific decision, don't create it.


3. Technical Proof of Concept

Validate critical technical assumptions before full commitment.

When to Build a POC:

  • Using new or unfamiliar technology
  • Complex integration with unclear feasibility
  • Performance requirements that need validation
  • High uncertainty in technical approach

What to Validate:

  • Integration: Can we actually connect? Does their API work as documented?
  • Performance: Can we meet response time targets?
  • Technology Feasibility: Does this framework do what we need?

POC Best Practices:

  • Define clear goals: What specific questions need answering?
  • Time-box it (3-7 days typical)
  • Take shortcuts: It’s throwaway code
  • Document findings and update estimates based on learnings
  • Throw away the code (POC ≠ production)

Red Flags

  • POC becomes production code: Quality shortcuts in production
  • Testing easy things: Not addressing real risks
  • POC drags on indefinitely: No decisions being made

4. Quality & Testing Strategy

Define how you’ll ensure quality throughout development.

Testing Pyramid:

  • Unit Tests (70-80%): Test individual functions/methods
  • Integration Tests (15-20%): Test component interactions
  • End-to-End Tests (5-10%): Test complete workflows
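
The pyramid ratios above can be checked mechanically. A minimal Python sketch, where the suite counts are hypothetical placeholders; plug in the numbers from your own test runner's report:

```python
# Target share of the suite per layer, from the testing pyramid above.
PYRAMID = {"unit": (0.70, 0.80), "integration": (0.15, 0.20), "e2e": (0.05, 0.10)}

def pyramid_report(counts: dict[str, int]) -> dict[str, bool]:
    """Return, per layer, whether its share of tests falls in the target band."""
    total = sum(counts.values())
    return {
        layer: lo <= counts.get(layer, 0) / total <= hi
        for layer, (lo, hi) in PYRAMID.items()
    }

counts = {"unit": 750, "integration": 180, "e2e": 70}  # hypothetical suite
print(pyramid_report(counts))  # all layers within their bands here
```

A report like this works well as a periodic health check; treating it as a hard gate tends to invite gaming the counts.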

Security Testing:

  • SAST on every build, dependency scanning daily
  • DAST in staging, penetration testing before launch
  • See Security Testing

Quality Gates:

  • Code commit: Tests pass, linting passes, review approved
  • Merge to main: All tests pass, coverage target met
  • Release: E2E tests pass, security scan clean, UAT approved
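
The merge gate above is simple enough to encode directly. A sketch in Python, where the coverage threshold and check names are hypothetical, stand-ins for whatever your CI system reports:

```python
COVERAGE_TARGET = 0.80  # hypothetical team target; set your own

def merge_gate(tests_passed: bool, coverage: float, review_approved: bool) -> bool:
    """All conditions must hold before merging to main."""
    return tests_passed and coverage >= COVERAGE_TARGET and review_approved

print(merge_gate(tests_passed=True, coverage=0.85, review_approved=True))  # True
print(merge_gate(tests_passed=True, coverage=0.75, review_approved=True))  # False
```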

5. SLA/SLO Definition

Establish measurable performance and availability targets.

Key Concepts:

  • SLI (Service Level Indicator): Metric measuring service quality (latency, availability, error rate)
  • SLO (Service Level Objective): Target value for an SLI (“95% of requests < 200ms”)
  • SLA (Service Level Agreement): Commitment to customers (usually less aggressive than SLO)
  • Error Budget: Allowable unreliability (a 99.9% availability target leaves a budget of 43.8 minutes of downtime per month)

Best Practices:

  • Use percentiles (95th, 99th), not averages
  • Build in margin: SLO tighter than SLA
  • Make targets visible in dashboards

99.9% availability sounds impressive until you realize it allows 43.8 minutes of downtime per month.
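
The arithmetic behind that number is worth internalizing. A short Python sketch; 30.44 days is the average calendar month (365.25 / 12), which is where 43.8 minutes comes from:

```python
def downtime_budget_minutes(availability: float, days: float = 30.44) -> float:
    """Minutes of allowed downtime per period at a given availability target."""
    return (1 - availability) * days * 24 * 60

for target in (0.99, 0.999, 0.9999):
    print(f"{target:.2%} -> {downtime_budget_minutes(target):.1f} min/month")
# 99.00% -> 438.3 min/month
# 99.90% -> 43.8 min/month
# 99.99% -> 4.4 min/month
```

Note that each extra nine cuts the budget by a factor of ten, which is why the cost of availability rises so steeply.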


6. Planning & Budget

Appetite vs. Estimation

Before diving into work breakdown, consider whether you’re asking the right question.

Bottom-up estimation asks “How long will this take?” This produces guesses that become commitments.

Top-down appetite asks “How much time is this problem worth?” This produces a constraint that forces scope decisions during Agree, not during Apply.

Setting appetite means leadership decides: “This problem is worth 2 weeks, not more.” If you can’t shape a viable solution within the appetite, either the problem needs a bigger appetite or you shouldn’t work on it yet.

For more on this approach, see Shaped Kanban.

Work Breakdown & Cost Analysis

  • Break components into implementable stories/tasks
  • Identify dependencies and sequence work
  • Include time for testing, reviews, rework
  • Add contingency buffer (20-30%)

For detailed TCO and ROI guidance, see Total Cost of Ownership and Return on Investment.
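
Rolling up a work breakdown with the contingency buffer above is a one-liner worth making explicit. A Python sketch, with hypothetical task names and estimates:

```python
def plan_total(estimates_days: dict[str, float], buffer: float = 0.25) -> float:
    """Sum task estimates and add a contingency buffer (default 25%)."""
    return sum(estimates_days.values()) * (1 + buffer)

tasks = {"auth component": 5, "data layer": 8, "integration tests": 3}
print(f"{plan_total(tasks):.1f} person-days including buffer")  # 20.0
```

Applying the buffer to the total, rather than padding each task, keeps individual estimates honest while still protecting the plan.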

Red Flags

  • No developer input: Architect or PM creates plan alone
  • No buffer time: Surprises are inevitable
  • Unrealistic dependencies: Assuming external teams deliver on time

When Stakeholders Disagree

The Reality of Conflicting Priorities

Genuine agreement requires stakeholders to agree with each other, not just with you. In practice, stakeholders often have conflicting priorities:

  • Marketing wants features fast; Security wants comprehensive review
  • Finance wants cost reduction; Operations wants redundancy
  • Product wants scope expansion; Engineering wants technical debt reduction

These conflicts don’t resolve themselves. If you proceed without addressing them, you’re building on a foundation of hidden disagreement that will surface during Apply, usually at the worst possible moment.

Surfacing Hidden Disagreements

Before formal agreement, probe for alignment gaps:

  • Present trade-offs explicitly and watch for reactions
  • Ask each stakeholder to prioritize requirements independently, then compare
  • Ask “What would make this project fail from your perspective?”
  • Ask “What would you cut if we had to reduce scope by 30%?”

Warning signs of hidden disagreement:

  • Stakeholders nod in meetings but raise concerns in private
  • Different stakeholders describe success criteria differently
  • Silence when you ask for objections (silence ≠ agreement)

Facilitating Consensus

When stakeholders disagree, your role is to facilitate resolution, not to pick a winner:

  1. Make trade-offs explicit: “We can optimize for X or Y, but not both. Here’s what each choice costs.”

  2. Escalate when necessary: Some conflicts require someone with authority over both parties to decide.

  3. Use structured decision frameworks: Weighted scoring, time-boxed debate, or pilot-and-measure.

  4. Document dissent: If genuine consensus isn’t possible, document who disagreed and why.
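
The weighted-scoring framework mentioned in step 3 can be sketched in a few lines of Python. The criteria, weights, and scores below are hypothetical; the real value comes from agreeing on the weights with stakeholders before anyone scores the options:

```python
def weighted_score(scores: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of per-criterion scores (e.g. on a 1-5 scale)."""
    return sum(weights[c] * scores[c] for c in weights)

weights = {"time_to_market": 0.40, "security": 0.35, "cost": 0.25}
option_a = {"time_to_market": 5, "security": 2, "cost": 4}  # ship fast
option_b = {"time_to_market": 2, "security": 5, "cost": 3}  # review first

print("A:", weighted_score(option_a, weights))  # 3.7
print("B:", weighted_score(option_b, weights))  # 3.3
```

The output matters less than the conversation it forces: stakeholders who dispute the result usually disagree about the weights, which is exactly the hidden conflict you are trying to surface.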

When Agreement Isn’t Possible

Options when genuine agreement fails:

  1. Escalate to decision authority: Present the conflict for resolution
  2. Document the disagreement and proceed: Record that parties disagree, proceed with explicit risk acceptance
  3. Reduce scope to the agreed portion: Find the subset where agreement exists
  4. Recommend no-go: If the conflict is fundamental enough that success is unlikely

Whatever you choose, don’t pretend agreement exists when it doesn’t. Proceeding with false consensus leads to scope creep, rework, and blame when the hidden conflict surfaces.

Red Flags: Pseudo-Agreement

  • Silence as consent: No objections doesn't mean agreement
  • Rubber-stamp approval: Sign-off without real review
  • Agreement with fingers crossed: Stakeholders who plan to revisit later

Securing Team Buy-In

Why Team Agreement Matters

Stakeholder agreement is necessary but not sufficient. If the development team doesn’t believe in the approach, execution will suffer:

  • Passive resistance manifests as slow progress
  • Technical concerns get suppressed until they become crises
  • The team builds what was specified, not what was intended

The team members are stakeholders too. Their genuine commitment is as essential as any business stakeholder’s.

Building Team Ownership

Involve the team in design, not just estimation:

  • Include senior engineers in architecture discussions
  • Present options and trade-offs, don’t just present conclusions
  • Let them challenge assumptions

Address dissent explicitly:

  • When team members disagree, understand why
  • If you override team concerns, explain your reasoning
  • Document unresolved concerns as risks

Create shared ownership:

  • Team should be able to explain the architecture and why it was chosen
  • Assign architecture areas to team members to own

Red Flags: Team Disengagement

  • Team is silent during design discussions
  • Estimates are given without pushback
  • Technical concerns only surface during implementation
  • Team treats the plan as "your plan" rather than "our plan"

For detailed guidance on team dynamics, see Dev Team Leadership and Team Organization.


Readiness: Transitioning to Apply

Behavioral Indicators of True Agreement

Sign-offs and approved documents don’t guarantee genuine agreement. Before moving to Phase 3, verify that all parties truly understand and commit.

Test agreement with these questions:

  • Can stakeholders explain the architecture approach and why it was chosen?
  • Can the development team walk through the implementation plan and defend the estimates?
  • When asked “What are the biggest risks?”, do stakeholders and team give consistent answers?

You’re ready for Apply when:

  • Architecture design is documented and approved
  • Key design decisions are captured in ADRs
  • Critical technical assumptions are validated (POC complete if needed)
  • Quality and testing strategy is defined
  • SLOs are established with stakeholder buy-in
  • Implementation plan exists with realistic estimates (team-validated)
  • Budget is formally committed
  • Team genuinely believes in the approach
  • Stakeholders agree with each other (conflicts resolved)
  • Dependencies are identified with contingency plans

You’re NOT ready for Apply when:

  • Stakeholders approved but can’t explain what they approved
  • Technical concerns were raised but not resolved
  • POC results were concerning but got rationalized away
  • Team members privately express doubts about feasibility

When in doubt, stay in Agree. The cost of additional validation is far lower than the cost of discovering false agreement during Apply.
