<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://stevenstuartm.com/feed-blog.xml" rel="self" type="application/atom+xml" /><link href="https://stevenstuartm.com/" rel="alternate" type="text/html" /><updated>2026-03-16T15:52:49+00:00</updated><id>https://stevenstuartm.com/feed-blog.xml</id><title type="html">Architecture Insights</title><subtitle>Clarity from complexity. Practical insights on building software architectures that serve real needs and deliver genuine value.</subtitle><author><name>Steven Stuart</name><email>stevenstuartm@gmail.com</email></author><entry><title type="html">Reporting and Production Make Terrible Roommates</title><link href="https://stevenstuartm.com/blog/2026/03/11/reporting-and-production-make-terrible-roommates.html" rel="alternate" type="text/html" title="Reporting and Production Make Terrible Roommates" /><published>2026-03-11T00:00:00+00:00</published><updated>2026-03-11T00:00:00+00:00</updated><id>https://stevenstuartm.com/blog/2026/03/11/reporting-and-production-make-terrible-roommates</id><content type="html" xml:base="https://stevenstuartm.com/blog/2026/03/11/reporting-and-production-make-terrible-roommates.html"><![CDATA[<p>A transactional schema optimizes for write consistency, referential integrity, and the access patterns of the application that owns it. A reporting schema optimizes for read throughput, aggregation, and the access patterns of analysts and dashboards. When both concerns share a single schema, every design decision becomes a negotiation between them, and reporting usually wins because it’s the most visible to leadership and the most painful to change after the fact. These concerns can and should be separated so each model serves the workload it was designed for.</p>

<p>Consider what happens when the analytics team asks for a denormalized <code class="language-plaintext highlighter-rouge">order_summary</code> view on the production database so their dashboards load faster. The DBA obliges, adds a materialized view, and now every schema migration has to account for it. Six months later the team wants to split the <code class="language-plaintext highlighter-rouge">orders</code> table into <code class="language-plaintext highlighter-rouge">orders</code> and <code class="language-plaintext highlighter-rouge">order_line_items</code>, but the view is embedded in 10 dashboard queries and a nightly export job. The refactor stalls, and the production schema fossilizes around a reporting concern.</p>

<p>Not every system needs a separation on day one. A small team with a single database, low reporting complexity, and a schema that’s still fluid can query production directly without meaningful friction. But this distortion is predictable, not surprising. It emerges when reporting consumers multiply, when dashboards become load-bearing, and when schema changes require cross-team coordination. Architects who recognize this trajectory can keep the door open for separation without building the full pipeline prematurely, by resisting the urge to denormalize production schemas for reporting convenience and by keeping reporting access patterns from becoming implicit contracts on the production schema. When the separation does happen, it can be reactive, tapping into what the database already captures, or intentional, making the application responsible for producing reporting-quality records in the write path.</p>

<h2 id="reactive-separation">Reactive Separation</h2>

<h3 id="a-dedicated-reporting-replica">A Dedicated Reporting Replica</h3>

<p>The simplest place to start is to point reporting tools at a read replica of the production database. Many teams already run replicas to distribute query load, so dedicating one to reporting keeps analytical queries from competing with production traffic. No new infrastructure, no async pipeline, no application code changes.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  ┌─────────────┐         ┌──────────────────┐
  │ Application │────────&gt;│  Production DB   │
  └─────────────┘  writes │  (Primary)       │
                          └────────┬─────────┘
                                   │ replication
                                   v
                          ┌──────────────────┐
                          │  Read Replica    │
                          │  (Same Schema)   │
                          └────────┬─────────┘
                                   │ direct queries
                          ┌────────┴─────────┐
                          │ BI / Dashboards  │
                          └──────────────────┘
</code></pre></div></div>

<p>This is a good fit when reporting needs are straightforward, the production schema is close enough to what reporting consumers need, and data that’s a few seconds stale is acceptable. “A few seconds stale” is the optimistic case, though. Heavy analytical queries on the replica can cause replication lag to spike well beyond that, especially during peak reporting windows. Still, it’s the path of least resistance, and for many systems it works well enough that teams never move beyond it.</p>

<p>The replica also serves as the foundation for ETL. Rather than querying the replica live, teams extract data from it on a schedule, transform it into reporting-friendly shapes, and load it into a warehouse or data lake. Same infrastructure, different consumption pattern. Live queries hit the replica directly for near-real-time results while ETL jobs use it as a source for batch aggregation and historical snapshots. Both approaches keep analytical workloads off the primary.</p>
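<p>As a sketch of that batch path, a nightly job might roll the previous day’s orders into a warehouse summary table. The <code class="language-plaintext highlighter-rouge">warehouse.daily_order_summary</code> target and the column names here are illustrative, not a prescribed schema (PostgreSQL syntax):</p>

<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code>-- Nightly ETL step: aggregate yesterday's orders from the replica
-- into a warehouse table. Runs against the replica, never the primary.
INSERT INTO warehouse.daily_order_summary (order_date, order_count, total_revenue)
SELECT created_at::date,
       COUNT(*),
       SUM(total_amount)
FROM orders
WHERE created_at &gt;= CURRENT_DATE - INTERVAL '1 day'
  AND created_at &lt;  CURRENT_DATE
GROUP BY created_at::date;
</code></pre></div></div>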

<p>The replica breaks down, for both live queries and ETL, when reporting needs diverge far enough from the production schema’s shape. Reporting consumers write increasingly complex queries with multiple joins, or they start requesting schema changes to production to make their queries simpler, which is exactly the distortion this post is about. The replica also can’t capture history. It mirrors current state, so if a record changes twice between queries the intermediate state is gone.</p>

<h3 id="change-data-capture">Change Data Capture</h3>

<p>CDC tools like Debezium tap the database’s transaction log and emit changes as events without any application code changes. The application writes normally to whatever schema makes sense, and CDC streams those changes to a separate store. The stream is async by default, and unlike the replica approach, CDC captures every intermediate state change because it reads from the transaction log rather than polling snapshots.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  ┌─────────────┐         ┌──────────────────┐
  │ Application │────────&gt;│  Production DB   │
  └─────────────┘  writes └────────┬─────────┘
                                   │ transaction log
                                   v
                          ┌──────────────────┐
                          │  CDC Connector   │
                          │  (e.g. Debezium) │
                          └────────┬─────────┘
                                   │ change events
                                   v
                          ┌──────────────────┐
                          │  Stream / Queue  │
                          │  (Kafka, Kinesis)│
                          └────────┬─────────┘
                                   │
                     ┌─────────────┴─────────────┐
                     v                           v
            ┌────────────────┐          ┌────────────────┐
            │  Transform (T) │          │  Schema        │
            │  Reshape/Join  │          │  Registry      │
            └───────┬────────┘          └────────────────┘
                    v
            ┌────────────────┐
            │ Reporting Store│
            │ (Warehouse/DL) │
            └────────────────┘
</code></pre></div></div>

<p>CDC’s greatest strength is that it requires no application code changes, no additional transaction overhead, and no new abstractions in the write path. For legacy systems where the risk of changing the write path is too high, or for teams that need separation now and can’t afford to modify every service that writes data, CDC is often the only viable option. It also solves payload completeness for free: the transaction log captures the full row state after each write regardless of whether the application only updated a single field, so downstream consumers never have to wonder whether a missing field means “unchanged” or “removed.”</p>
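<p>Standing up CDC really is mostly configuration. A minimal Debezium connector registration for PostgreSQL looks roughly like the following; the hostnames, credentials, and table list are placeholders, and real deployments also configure snapshot behavior, serialization, and secrets handling:</p>

<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code>{
  "name": "orders-cdc-connector",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "plugin.name": "pgoutput",
    "database.hostname": "prod-db.internal",
    "database.port": "5432",
    "database.user": "cdc_reader",
    "database.password": "${secrets:cdc_reader_password}",
    "database.dbname": "orders_db",
    "topic.prefix": "prod",
    "table.include.list": "public.orders,public.order_line_items"
  }
}
</code></pre></div></div>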

<p>CDC does have limitations.</p>

<p>The first limitation is semantic. CDC events originate from the database layer, so they capture <em>what</em> changed but not <em>why</em> it changed. A row update that represents a customer canceling an order looks identical to a row update that represents a system correcting a data entry error. The database can’t distinguish between them because it only sees the state change, not the business intent. For domains where that distinction matters, like financial ledgers or audit-critical workflows, event sourcing is the appropriate tool because it captures the intent as the primary record.</p>

<p>The second limitation is the absence of a contract boundary. The table structure <em>is</em> the contract, implicitly. When that schema changes, nothing fails at build time. The CDC pipeline either silently emits differently shaped events or breaks at runtime, and reporting consumers discover the problem in production rather than in development. A schema registry can partially close this gap by enforcing compatibility rules at deserialization, but that’s added infrastructure catching incompatibility at runtime rather than at build time.</p>

<p>The third limitation is database dependency. Not every database has a strong CDC story. PostgreSQL and DynamoDB have mature options, but databases with weaker change stream capabilities can push teams toward application-layer alternatives earlier than expected.</p>

<h2 id="intentional-separation">Intentional Separation</h2>

<p>Reactive approaches separate the workload but not the context. They can tell you <em>what</em> changed, but not <em>who</em> changed it or <em>why</em>. That context exists at the application layer when the write happens, and it’s lost the moment the data hits the database unless someone deliberately captures it. An intentional separation takes full advantage of that context while treating reporting and production as equally vital priorities.</p>

<h3 id="the-outbox-pattern">The Outbox Pattern</h3>

<p>The outbox pattern makes the application responsible for producing reporting-quality records. Instead of letting the database schema define the downstream contract implicitly, the application writes a versioned record to an outbox table within the same database transaction as the domain state change. Either both commit or neither does, so consistency is guaranteed. A separate process reads from the outbox and projects into whatever reporting store analytics needs. The application controls the payload shape, the versioning, and the context included in each record.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  ┌─────────────┐
  │ Application │
  └──────┬──────┘
         │ single transaction
         v
  ┌──────────────────────────────────────┐
  │          Production DB               │
  │                                      │
  │  ┌──────────────┐  ┌──────────────┐  │
  │  │ Domain Table │  │ Outbox Table │  │
  │  │ (orders,     │  │ (versioned   │  │
  │  │  customers)  │  │  records)    │  │
  │  └──────────────┘  └──────┬───────┘  │
  └───────────────────────────┼──────────┘
                              │ poll / stream
                              v
                     ┌─────────────────┐
                     │ Relay Process   │
                     │ (reads outbox)  │
                     └────────┬────────┘
                              │ publish
                              v
                     ┌─────────────────┐
                     │ Reporting Store │
                     └─────────────────┘
</code></pre></div></div>

<p><strong>Outbox table via a relational database:</strong></p>

<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">CREATE</span> <span class="k">TABLE</span> <span class="n">outbox</span> <span class="p">(</span>
    <span class="n">id</span>              <span class="nb">BIGINT</span> <span class="k">GENERATED</span> <span class="n">ALWAYS</span> <span class="k">AS</span> <span class="k">IDENTITY</span> <span class="k">PRIMARY</span> <span class="k">KEY</span><span class="p">,</span>
    <span class="n">event_id</span>        <span class="n">UUID</span>          <span class="k">NOT</span> <span class="k">NULL</span> <span class="k">DEFAULT</span> <span class="n">gen_random_uuid</span><span class="p">(),</span>  <span class="c1">-- globally unique, used downstream</span>
    <span class="n">aggregate_type</span>  <span class="nb">VARCHAR</span><span class="p">(</span><span class="mi">100</span><span class="p">)</span>  <span class="k">NOT</span> <span class="k">NULL</span><span class="p">,</span>  <span class="c1">-- e.g. 'Order', 'Customer'</span>
    <span class="n">aggregate_id</span>    <span class="nb">VARCHAR</span><span class="p">(</span><span class="mi">100</span><span class="p">)</span>  <span class="k">NOT</span> <span class="k">NULL</span><span class="p">,</span>  <span class="c1">-- e.g. order ID</span>
    <span class="n">event_type</span>      <span class="nb">VARCHAR</span><span class="p">(</span><span class="mi">100</span><span class="p">)</span>  <span class="k">NOT</span> <span class="k">NULL</span><span class="p">,</span>  <span class="c1">-- e.g. 'OrderCancelled'</span>
    <span class="n">schema_version</span>  <span class="nb">INT</span>           <span class="k">NOT</span> <span class="k">NULL</span><span class="p">,</span>  <span class="c1">-- contract versioning</span>
    <span class="n">occurred_at</span>     <span class="n">TIMESTAMPTZ</span>   <span class="k">NOT</span> <span class="k">NULL</span> <span class="k">DEFAULT</span> <span class="n">now</span><span class="p">(),</span>
    <span class="n">initiated_by</span>    <span class="nb">VARCHAR</span><span class="p">(</span><span class="mi">200</span><span class="p">),</span>            <span class="c1">-- who: user ID, system name</span>
    <span class="n">reason</span>          <span class="nb">VARCHAR</span><span class="p">(</span><span class="mi">500</span><span class="p">),</span>            <span class="c1">-- why: 'customer_request', 'admin_override'</span>
    <span class="n">payload</span>         <span class="n">JSONB</span>         <span class="k">NOT</span> <span class="k">NULL</span><span class="p">,</span>  <span class="c1">-- full state snapshot + context</span>
    <span class="n">published</span>       <span class="nb">BOOLEAN</span>       <span class="k">NOT</span> <span class="k">NULL</span> <span class="k">DEFAULT</span> <span class="k">FALSE</span>
<span class="p">);</span>
</code></pre></div></div>
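<p>The write path then pairs the domain update with an outbox insert in one transaction. A sketch in PostgreSQL syntax, assuming an <code class="language-plaintext highlighter-rouge">orders</code> table with a <code class="language-plaintext highlighter-rouge">status</code> column, with illustrative values:</p>

<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code>BEGIN;

-- Domain state change
UPDATE orders
SET status = 'Cancelled'
WHERE order_id = 'ORD-20260218-4417';

-- Reporting-quality record, committed atomically with the change
INSERT INTO outbox (aggregate_type, aggregate_id, event_type, schema_version,
                    initiated_by, reason, payload)
VALUES ('Order', 'ORD-20260218-4417', 'OrderCancelled', 2,
        'user:jsmith', 'customer_request',
        '{"orderId": "ORD-20260218-4417", "previousStatus": "Confirmed",
          "newStatus": "Cancelled", "totalAmount": 284.50, "currency": "USD"}');

COMMIT;
</code></pre></div></div>

<p>Either both rows commit or neither does; there is no window where the domain changed but the outbox missed it.</p>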

<p><strong>Outbox record via a Kinesis/Kafka stream (JSON envelope):</strong></p>

<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
  </span><span class="nl">"id"</span><span class="p">:</span><span class="w"> </span><span class="s2">"a1b2c3d4-e5f6-7890-abcd-ef1234567890"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"aggregateType"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Order"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"aggregateId"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ORD-20260218-4417"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"eventType"</span><span class="p">:</span><span class="w"> </span><span class="s2">"OrderCancelled"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"schemaVersion"</span><span class="p">:</span><span class="w"> </span><span class="mi">2</span><span class="p">,</span><span class="w">
  </span><span class="nl">"occurredAt"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2026-02-18T14:32:08.771Z"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"initiatedBy"</span><span class="p">:</span><span class="w"> </span><span class="s2">"user:jsmith"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"reason"</span><span class="p">:</span><span class="w"> </span><span class="s2">"customer_request"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"payload"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"orderId"</span><span class="p">:</span><span class="w"> </span><span class="s2">"ORD-20260218-4417"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"customerId"</span><span class="p">:</span><span class="w"> </span><span class="s2">"CUST-8821"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"previousStatus"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Confirmed"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"newStatus"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Cancelled"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"lineItems"</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">,</span><span class="w">
    </span><span class="nl">"totalAmount"</span><span class="p">:</span><span class="w"> </span><span class="mf">284.50</span><span class="p">,</span><span class="w">
    </span><span class="nl">"currency"</span><span class="p">:</span><span class="w"> </span><span class="s2">"USD"</span><span class="w">
  </span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>

<p>Because the application controls the payload, the outbox captures context that reactive approaches cannot: who initiated a change, whether it was a customer action or an admin override or a system timeout, and why it happened. The application has this context at write time, and it’s impossible to recover after the fact.</p>

<p>This also gives the outbox an explicit, versionable contract boundary. The application decides what the downstream record looks like and versions it independently. A breaking change to the outbox record is a code change that has to compile, pass tests, and go through review. If a developer renames a column in the production schema, the outbox record doesn’t change unless someone deliberately updates it. And because the outbox doesn’t rely on transaction log capabilities or vendor-specific change feed APIs, any database that supports transactions supports the pattern.</p>

<p>The outbox does not require a record for every database write. It fires only when a specific entity type has a meaningful state change, and only for the types that reporting cares about. Background jobs updating internal timestamps produce nothing. Deletions, by contrast, do produce a record, because knowing that an entity was removed and who removed it is itself meaningful. This keeps the coupling concentrated in write paths that produce meaningful state transitions, not spread across every query and update in the codebase.</p>
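<p>The relay itself can be a simple polling loop over the outbox table defined above. A sketch of the claim-and-publish step in PostgreSQL syntax, where <code class="language-plaintext highlighter-rouge">FOR UPDATE SKIP LOCKED</code> lets multiple relay workers poll concurrently without claiming the same rows:</p>

<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code>BEGIN;

-- Claim a batch of unpublished records; concurrent workers skip
-- rows already locked by another relay instance.
SELECT id, event_id, event_type, payload
FROM outbox
WHERE published = FALSE
ORDER BY id
LIMIT 100
FOR UPDATE SKIP LOCKED;

-- ... publish the batch to the stream, then mark it done ...

UPDATE outbox
SET published = TRUE
WHERE id IN (/* ids from the batch just published */);

COMMIT;
</code></pre></div></div>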

<p>For most teams that have outgrown a read replica but don’t need full event sourcing, the outbox is my recommendation. It provides intentional separation, explicit contracts, and rich context without the architectural commitment of an append-only event store.</p>

<h3 id="cqrs-and-event-sourcing">CQRS and Event Sourcing</h3>

<p>CQRS (Command Query Responsibility Segregation) formally separates the write model from the read model. The write side accepts commands and persists state. The read side maintains whatever views consumers need, shaped however they need them, updated as fast or as lazily as the use case demands. The two sides share no schema and no storage. What CQRS adds is the explicit acknowledgment that “what happened” and “what is the current state” are different questions that deserve different models. CQRS does not require event sourcing. It can sit in front of a traditional stateful database where the write side persists state normally and the read side maintains separate, denormalized views optimized for queries.</p>

<p>Event sourcing takes this further by changing what the write side stores. Instead of persisting current state and producing reporting records alongside it, every state mutation is recorded as an immutable event, and current state is derived by replaying those events. The event log becomes the source of truth, not the current snapshot. Nothing is overwritten. Every transition is preserved in the order it occurred. Production state is a projection of the event stream, and so is reporting state, and so is any other view you need. If the analytics team changes their requirements six months from now, you replay the same events through a new projection and the full history is there.</p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>                          Commands
                              │
                              v
                     ┌─────────────────┐
                     │   Write Side    │
                     │ (Command Handler│
                     │  + Aggregates)  │
                     └────────┬────────┘
                              │ append events
                              v
                     ┌─────────────────┐
                     │  Event Store    │
                     │  (append-only)  │
                     └────────┬────────┘
                              │ project
                              v
                     ┌─────────────────┐
                     │  Projected      │
                     │  State Tables   │
                     │  (per entity)   │
                     └────────┬────────┘
                              │ query (read side)
              ┌───────────────┼───────────────┐
              v               v               v
     ┌────────────┐  ┌─────────────┐  ┌────────────────┐
     │  Prod API  │  │  Reporting  │  │  Audit /       │
     │  Queries   │  │  Store      │  │  Compliance    │
     └────────────┘  └─────────────┘  └────────────────┘
</code></pre></div></div>

<p><strong>Event store document (append-only, NoSQL):</strong></p>

<div class="language-json highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="p">{</span><span class="w">
  </span><span class="nl">"streamId"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Order-ORD-4417"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"position"</span><span class="p">:</span><span class="w"> </span><span class="mi">4</span><span class="p">,</span><span class="w">
  </span><span class="nl">"eventType"</span><span class="p">:</span><span class="w"> </span><span class="s2">"OrderCancelled"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"occurredAt"</span><span class="p">:</span><span class="w"> </span><span class="s2">"2026-02-18T14:32:07Z"</span><span class="p">,</span><span class="w">
  </span><span class="nl">"payload"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"initiatedBy"</span><span class="p">:</span><span class="w"> </span><span class="s2">"user:jsmith"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"reason"</span><span class="p">:</span><span class="w"> </span><span class="s2">"customer_request"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"previousStatus"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Shipped"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"newStatus"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Cancelled"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"lineItems"</span><span class="p">:</span><span class="w"> </span><span class="mi">3</span><span class="p">,</span><span class="w">
    </span><span class="nl">"totalAmount"</span><span class="p">:</span><span class="w"> </span><span class="mf">284.50</span><span class="p">,</span><span class="w">
    </span><span class="nl">"currency"</span><span class="p">:</span><span class="w"> </span><span class="s2">"USD"</span><span class="w">
  </span><span class="p">},</span><span class="w">
  </span><span class="nl">"metadata"</span><span class="p">:</span><span class="w"> </span><span class="p">{</span><span class="w">
    </span><span class="nl">"correlationId"</span><span class="p">:</span><span class="w"> </span><span class="s2">"req-88a1c"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"causationId"</span><span class="p">:</span><span class="w"> </span><span class="s2">"cmd-cancel-4417"</span><span class="p">,</span><span class="w">
    </span><span class="nl">"userId"</span><span class="p">:</span><span class="w"> </span><span class="s2">"user:jsmith"</span><span class="w">
  </span><span class="p">}</span><span class="w">
</span><span class="p">}</span><span class="w">
</span></code></pre></div></div>

<p><strong>Projected state table (derived from events, used by reporting/ETL):</strong></p>

<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">CREATE</span> <span class="k">TABLE</span> <span class="n">order_projections</span> <span class="p">(</span>
    <span class="n">order_id</span>         <span class="nb">VARCHAR</span><span class="p">(</span><span class="mi">100</span><span class="p">)</span> <span class="k">PRIMARY</span> <span class="k">KEY</span><span class="p">,</span>
    <span class="n">customer_id</span>      <span class="nb">VARCHAR</span><span class="p">(</span><span class="mi">100</span><span class="p">)</span> <span class="k">NOT</span> <span class="k">NULL</span><span class="p">,</span>
    <span class="n">current_status</span>   <span class="nb">VARCHAR</span><span class="p">(</span><span class="mi">50</span><span class="p">)</span>  <span class="k">NOT</span> <span class="k">NULL</span><span class="p">,</span>
    <span class="n">item_count</span>       <span class="nb">INT</span>          <span class="k">NOT</span> <span class="k">NULL</span><span class="p">,</span>
    <span class="n">total_amount</span>     <span class="nb">DECIMAL</span><span class="p">(</span><span class="mi">12</span><span class="p">,</span><span class="mi">2</span><span class="p">)</span> <span class="k">NOT</span> <span class="k">NULL</span><span class="p">,</span>
    <span class="n">currency</span>         <span class="nb">VARCHAR</span><span class="p">(</span><span class="mi">3</span><span class="p">)</span>   <span class="k">NOT</span> <span class="k">NULL</span><span class="p">,</span>
    <span class="n">placed_at</span>        <span class="n">TIMESTAMPTZ</span><span class="p">,</span>
    <span class="n">shipped_at</span>       <span class="n">TIMESTAMPTZ</span><span class="p">,</span>
    <span class="n">cancelled_at</span>     <span class="n">TIMESTAMPTZ</span><span class="p">,</span>
    <span class="n">cancelled_by</span>     <span class="nb">VARCHAR</span><span class="p">(</span><span class="mi">200</span><span class="p">),</span>
    <span class="n">cancel_reason</span>    <span class="nb">VARCHAR</span><span class="p">(</span><span class="mi">500</span><span class="p">),</span>
    <span class="n">last_event_pos</span>   <span class="nb">INT</span>          <span class="k">NOT</span> <span class="k">NULL</span><span class="p">,</span>  <span class="c1">-- tracks replay position</span>
    <span class="n">projected_at</span>     <span class="n">TIMESTAMPTZ</span>  <span class="k">NOT</span> <span class="k">NULL</span> <span class="k">DEFAULT</span> <span class="n">now</span><span class="p">()</span>
<span class="p">);</span>
</code></pre></div></div>
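<p>Applying an event to this table is an update guarded by <code class="language-plaintext highlighter-rouge">last_event_pos</code>, which makes the projection idempotent: replaying an already-applied event is a no-op. A sketch for the <code class="language-plaintext highlighter-rouge">OrderCancelled</code> event above, with illustrative values:</p>

<div class="language-sql highlighter-rouge"><div class="highlight"><pre class="highlight"><code>-- Apply the event at stream position 4. The position guard makes the
-- update safe under replay and at-least-once delivery.
UPDATE order_projections
SET current_status = 'Cancelled',
    cancelled_at   = '2026-02-18T14:32:07Z',
    cancelled_by   = 'user:jsmith',
    cancel_reason  = 'customer_request',
    last_event_pos = 4,
    projected_at   = now()
WHERE order_id = 'ORD-4417'
  AND last_event_pos &lt; 4;
</code></pre></div></div>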

<p>In practice, reporting consumers rarely subscribe to the event stream directly. Event sourcing produces projected state tables, one per entity, where each row represents the current state derived from the event history. Reporting and ETL pull from these projections rather than from raw events. This keeps the event stream internal to the domain, which matters because not everything in the stream is a clean domain event. The projections give reporting consumers a familiar, queryable surface while the event stream retains full history for replay and audit.</p>

<p>This is a good fit for domains where the complete history of state transitions is genuinely valuable, like financial ledgers, audit-critical workflows, or systems where “undo” and “replay” are first-class requirements. The combination of event sourcing and CQRS provides the most complete separation: full history, arbitrary projections, and independent evolution of read and write models.</p>

<p>Most teams should not reach for this combination. Martin Fowler has <a href="https://martinfowler.com/bliki/CQRS.html" target="_blank" rel="noopener noreferrer">warned consistently</a> that CQRS is misapplied far more often than it’s applied well. Many systems fit a CRUD mental model and should stay that way. CQRS should only apply to specific bounded contexts where the read and write access patterns are genuinely different, not across entire applications. Event sourcing compounds the cost: events are immutable and permanent so schema design requires careful thought, aggregate replay gets expensive without snapshotting, and debugging production issues means reasoning about event sequences rather than inspecting current state.</p>

<h2 id="separate-early-or-pay-later">Separate Early or Pay Later</h2>

<p>A read replica is enough to start, but every shortcut that ties these workloads together makes the eventual separation harder. Both production and reporting deserve to be first-class concerns, and treating them that way means decoupling from the schema entirely.</p>

<p>Production databases can now optimize for their inserts and their queries. Dev teams can now deploy and evolve a component’s database as needs are discovered, without asking permission. Reporting teams can now get richer, more contextual insights that are readily available. And the two groups can now stop being at each other’s throats, because they’re no longer competing for the same resource.</p>]]></content><author><name>Steven Stuart</name><email>stevenstuartm@gmail.com</email></author><category term="architecture" /><category term="databases" /><category term="design-patterns" /><category term="distributed-systems" /><category term="data-modeling" /><category term="event-sourcing" /><category term="cqrs" /><summary type="html"><![CDATA[Reporting pressure gradually distorts production schemas until they serve two masters and compromise for both. Separating the workloads lets each model evolve for the consumers it was designed to serve.]]></summary></entry><entry><title type="html">The Measure of a Decision</title><link href="https://stevenstuartm.com/blog/2026/03/05/the-measure-of-a-decision.html" rel="alternate" type="text/html" title="The Measure of a Decision" /><published>2026-03-05T00:00:00+00:00</published><updated>2026-03-05T00:00:00+00:00</updated><id>https://stevenstuartm.com/blog/2026/03/05/the-measure-of-a-decision</id><content type="html" xml:base="https://stevenstuartm.com/blog/2026/03/05/the-measure-of-a-decision.html"><![CDATA[<blockquote class="pull-quote">
<p>Clarity does not simplify truth. It makes truth accessible so that it can be held, tested, and defended by everyone it reaches.</p>
</blockquote>

<p>In a previous post, I explored the idea that creation is the only alternative to dissolution, that progress requires sustained effort against entropy and regression requires nothing. That post settled on what we owe each other: the obligation to search for truth and pass what we find to those who come after us.</p>

<p>But owing each other something and measuring whether we’ve paid the debt are different problems. If the obligation is to make decisions that serve others and move them toward truth, how do we know whether a given decision actually does that? Conviction alone isn’t enough. History is full of people who were certain they were doing good while producing destruction.</p>

<p>What follows is a triad, a set of three criteria that must be fulfilled simultaneously for a decision to have integrity. Not sequentially, not partially. All three, at once, or the decision is compromised.</p>

<h2 id="the-judge-the-servant-the-steward">The Judge, The Servant, The Steward</h2>

<p>Every decision that affects others carries three obligations. These aren’t roles assigned to different people. Every person embodies all three simultaneously. They can be understood as archetypes, as adjectives describing the character of the act, or as nouns describing what the act produces.</p>

<table>
  <thead>
    <tr>
      <th>Archetype</th>
      <th>Adjective</th>
      <th>Noun</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>The Judge</td>
      <td>Just</td>
      <td>Justice</td>
    </tr>
    <tr>
      <td>The Servant</td>
      <td>Dutiful</td>
      <td>Service</td>
    </tr>
    <tr>
      <td>The Steward</td>
      <td>Constructive</td>
      <td>Cultivation</td>
    </tr>
  </tbody>
</table>

<p><strong>The Judge</strong> asks: is this grounded in something true? A just decision derives from an objective standard that can be reached through reason. The judge doesn’t invent the law. The judge discerns and applies it. The moment the judge begins legislating, creating standards that serve the judge’s own position, the criterion is broken.</p>

<p><strong>The Servant</strong> asks: who is this for? A dutiful decision is oriented toward others, not toward the actor’s comfort, reputation, or power. The servant bears the weight of the obligation even when it costs something. Especially when it costs something. The servant may struggle, may question, may resist the weight of the obligation, but chooses to bear it anyway. Service without struggle is just convenience.</p>

<p><strong>The Steward</strong> asks: did something real change? A constructive decision leaves persons or circumstances materially different than before. The steward is entrusted with something that matters and held responsible for its condition. Stewardship isn’t maintenance; it’s cultivation. Things must be better for the steward’s involvement, not merely preserved.</p>

<h2 id="the-simultaneous-constraint">The Simultaneous Constraint</h2>

<p>The triad’s power isn’t in any single criterion. It’s in the requirement that all three hold at the same time within the same act.</p>

<p>Remove one and the act degrades into something recognizable:</p>

<p>A decision that is just and constructive but not dutiful is <strong>tyranny</strong>. It may be grounded in truth and it may change circumstances, but it serves the actor. Empires are built this way.</p>

<p>A decision that is dutiful and constructive but not just is <strong>manipulation</strong>. It may serve others and produce real change, but the change isn’t grounded in truth. Propaganda works this way. So does enabling.</p>

<p>A decision that is just and dutiful but not constructive is <strong>ceremony</strong>. It may be grounded in truth and oriented toward others, but nothing actually changes. This is where empty political decisions land. The language of justice and service is present, the substance is absent. Nothing is cultivated. Nothing is different afterward.</p>

<p>The triad exposes each failure by name. And the failures aren’t hypothetical. Most decisions that cause lasting damage satisfy one or two criteria while missing the third.</p>

<h2 id="the-triad-at-every-scale">The Triad at Every Scale</h2>

<p>Consider a parent. A parent is judge, servant, and steward simultaneously. The parent discerns what is actually good for the child, not what the child wants and not what is convenient. The parent bears the cost of that discernment, waking up at night, sacrificing time, enduring the child’s resistance. And the child is materially different for the parent’s involvement: more capable, more oriented, better equipped to eventually stand on their own.</p>

<p>The child can’t fulfill the triad yet. But the triad is what the child is being shaped toward. The entire arc from obedience to understanding to independent action is the path from being subject to the triad to being capable of embodying it.</p>

<p>The same structure holds for a small business owner. Provision for the family fulfills the servant’s role. But the triad won’t allow the owner to stop there. If the products or services don’t also pass the test, the owner has served their family while failing justice and stewardship toward their customers. The triad doesn’t let you pick your audience. It applies to everyone the decision reaches.</p>

<p>It holds for a teacher, a coach, and anyone with authority or influence over others. The scale changes. The structure doesn’t.</p>

<h2 id="what-anchors-the-definitions">What Anchors the Definitions</h2>

<p>The three criteria only function as a measuring tool if their definitions are stable. Justice must mean something specific and consistent. Service must mean something beyond what the powerful find convenient. Cultivation must be measured against a standard of genuine human flourishing, not productivity or compliance. If the triad requires immutable definitions to function, the question becomes: what provides them?</p>

<p>The definition can’t come from the state, because the state changes with every administration and every shift in power. It can’t come from popular consensus, because consensus is just power distributed differently. It can’t come from the market, because the market optimizes for exchange, not for truth. Any human authority that defines the criteria can also redefine them, and eventually will, when redefinition becomes convenient.</p>

<p>The anchor has to come from somewhere beneath all of that. It has to rest on two kinds of foundation working together: axioms and tenets.</p>

<p><strong>Axioms</strong> are self-evident truths, principles discoverable through reason that don’t require external proof because they are the proof. They function the way axioms function in mathematics: you can’t derive them from something deeper, and every system that depends on them collapses the moment you treat them as negotiable. The claim that human beings possess inherent dignity is an axiom. You don’t prove it by appeal to a higher principle. You recognize it, and everything else follows from it.</p>

<p><strong>Tenets</strong> are the beliefs a culture holds to be true through experience, moral intuition, and accumulated wisdom. They aren’t provable in the way axioms are self-evident, but they carry the weight of what a society has learned about human flourishing across generations. Tenets answer the questions that reason alone can’t settle: what constitutes a good life, what obligations bind a community together, what forms of cultivation are worth pursuing.</p>

<blockquote class="pull-quote">
<p>"We hold these truths to be self-evident, that all men are created equal, that they are endowed by their Creator with certain unalienable Rights, that among these are Life, Liberty and the pursuit of Happiness."<br />— The Declaration of Independence</p>
</blockquote>

<p>Jefferson understood this. He didn’t write “we have decided” or “the state grants.” “Self-evident” means these truths require no external proof. “Unalienable” means no human authority has jurisdiction to revoke them. Both words describe the same claim: that these rights are axiomatic, not granted by any power that could later withdraw them. Remove that claim and the rights become negotiable by definition.</p>

<p>Every enduring society in recorded history has grounded its moral framework in some combination of axioms and tenets, principles it treated as foundational and placed beyond renegotiation. The specific foundations vary enormously, yet the structural role is identical. The foundation provides an immutable reference point against which definitions of justice, service, and cultivation are tested. Not a set of rules imposed from outside, but a bedrock from which the rules derive their meaning and stability.</p>

<p>This isn’t an argument for any particular foundation over another. It’s a structural observation. Different cultures arrive at different axioms and hold different tenets. The claim isn’t that one foundation is superior. The claim is that abandoning the foundation, treating axioms as debatable and tenets as disposable, removes the anchor.</p>

<p>And this is where the most dangerous form of corruption lives. When the foundation is treated as mutable, every criterion can be redefined while the language stays intact. If the state defines justice, the Judge serves the state. If the collective defines service, the Servant serves the mob. If the market defines cultivation, the Steward serves shareholders. The archetypes are still invoked, the language of justice, service, and cultivation is still deployed, but the decisions that pass the corrupted triad bear no resemblance to the ones that would pass the intact version. The definitions shift gradually, the language persists, and by the time anyone notices, the triad has become a tool of power rather than a measure of integrity.</p>

<h2 id="where-corruption-enters">Where Corruption Enters</h2>

<p>The assumption is that the Judge fails first, that corruption begins when justice is redefined. But the more common point of failure is the Steward.</p>

<p>The Judge’s role is constrained by the standard. The Servant’s role is constrained by duty. But the Steward acts on the world, and that action can be directed toward clarity or confusion, unity or division. A Steward who cultivates confusion is still cultivating. On the surface, the criterion appears satisfied. But what they produce isn’t constructive. It’s corrosive.</p>

<p>This is how the foundation gets turned against itself. The person who rejects the axioms and tenets still inherits their moral intuitions from them. Their sense of justice, their concept of service, their understanding of what cultivation means, all of it was shaped by the very foundation they want to discard. They aren’t operating from a new framework. They’re operating from the old one while denying its source. They want to redefine the triad’s criteria using the moral reasoning that the foundation itself provided.</p>

<p>Without education and self-awareness, this happens invisibly. The Steward redefines cultivation while believing they’re improving it, introducing division while speaking the language of progress. The Judge’s standard hasn’t changed and the Servant’s orientation can still be toward others, but the Steward has redirected what “constructive” means. When that obligation to clarity and unity is abandoned, the other two criteria lose their grounding even if their definitions haven’t technically changed.</p>

<p>The defense against this is that the triad is distributed, not hierarchical. Every person embodies all three archetypes. When the majority holds the full triad and understands its foundation, a corrupt actor can be corrected because the collective still measures against the original standard. And when the collective begins to drift, a single person who holds the triad can resist and correct, because they carry the same authority of measurement that everyone shares.</p>

<p>But this defense only works through clarity. Every person must cultivate understanding that is both broad enough to reach everyone it needs to reach and deep enough to hold up under scrutiny. A confused population can’t tell when cultivation has been redirected. A population with access to clear, deep understanding can. Clarity is the immune system of the triad. It doesn’t prevent corruption from being attempted, but it makes corruption visible before it takes hold.</p>

<h2 id="what-follows">What Follows</h2>

<p>Every decision that affects others deserves the triad. Not as a formality, not as a retrospective exercise, but as a live constraint applied in the moment the decision is made. The judge, the servant, and the steward aren’t roles you step into when the stakes feel high enough. They are what every decision demands whether you acknowledge them or not. The only question is whether you measure against all three or let one quietly slip.</p>

<p>But the triad only works if the foundation holds. Axioms and tenets that anchor the definitions of justice, service, and cultivation must be defended with the same rigor they demand of the decisions built on top of them. Drift doesn’t announce itself. It arrives gradually, dressed in the language of progress or pragmatism, and by the time anyone names it, the definitions have already shifted. The defense is clarity: constant, deliberate, and shared. Every person who holds the triad has an obligation to articulate the foundation clearly enough that drift becomes visible before it takes root.</p>

<p>And when drift is detected, the response must come from the triad itself. The same shared criteria that the drift may be undermining are the criteria used to correct it. This is not a contradiction. It is the triad’s deepest strength. A foundation that can diagnose its own corruption, that measures every challenge against justice, service, and cultivation simultaneously, is a foundation worth fighting for. The moment you abandon the triad to address the drift, you’ve already conceded what the drift was after.</p>]]></content><author><name>Steven Stuart</name><email>stevenstuartm@gmail.com</email></author><category term="leadership" /><category term="philosophy" /><category term="growth" /><category term="decision-making" /><summary type="html"><![CDATA[A triad for measuring the integrity of decisions that affect others: Justice, Service, and Cultivation. All three must hold simultaneously, and all three require foundational axioms and tenets that no human authority can redefine.]]></summary></entry><entry><title type="html">Stop Hunting for Sales, Start Farming Chickens</title><link href="https://stevenstuartm.com/blog/2026/02/20/stop-hunting-for-revenue.html" rel="alternate" type="text/html" title="Stop Hunting for Sales, Start Farming Chickens" /><published>2026-02-20T00:00:00+00:00</published><updated>2026-02-20T00:00:00+00:00</updated><id>https://stevenstuartm.com/blog/2026/02/20/stop-hunting-for-revenue</id><content type="html" xml:base="https://stevenstuartm.com/blog/2026/02/20/stop-hunting-for-revenue.html"><![CDATA[<p>If your business relies on one-time sales, every customer is a dead end. The transaction closes, the relationship ends, and you start over. You are hunting: tracking down one target at a time, consuming what you catch, and heading back into the forest for the next one.</p>

<p>Hunting works when the forest is full and you only need to eat today. But forests thin out. The easy targets disappear first, and each successive hunt takes more effort and covers more ground for the same yield. If every customer you acquire has to fund the acquisition of the next one from a single transaction, and acquisition costs don’t decrease as you exhaust the easiest-to-reach segments, the math never balances.</p>

<p>This is the death spiral of one-time sales. Acquisition costs are recurring, but revenue isn’t.</p>

<h2 id="marketing-and-sales-are-the-same-problem">Marketing and Sales Are the Same Problem</h2>

<p>Marketing and sales are the same problem. The goal of one determines the method and outcome of the other.</p>

<p>If your sales model is one-time purchases, your marketing must constantly find fresh customers. Every batch you acquire has to be replaced by the next batch, and you need the money from the current batch to fund finding the next one. When cash flow from one-time sales can’t keep pace with the cost of acquiring new customers, both marketing and sales collapse together. The math can work if you are hunting whales and each transaction is large enough to fund the next hunt, but most businesses aren’t operating in that market.</p>

<p>Subscriptions change the equation entirely. Each acquired customer keeps paying. In month one, 100 subscribers fund your marketing budget. By month six, 500 subscribers fund it. The math compounds in your favor instead of against you, and marketing shifts from desperate acquisition to retention and upsell.</p>

<h2 id="the-expectation-trap">The Expectation Trap</h2>

<p>One-time purchases concentrate all customer expectation into a single transaction. A customer who pays $500 once expects the product to deliver everything they imagined at the moment of purchase. The bar is set at the point of sale, and it only goes up from there. You will struggle to meet what people expect of you when their entire investment happened in one moment.</p>

<p>Subscriptions distribute expectations across time. A customer paying $15 per month evaluates the product continuously against a lower threshold. Each month is an opportunity to deliver value, build trust, and demonstrate improvement. The relationship has room to grow because the stakes of any individual payment are low enough that customers stay through imperfection.</p>

<p>This also creates a natural path to upsell. A customer who started at $15 per month and gradually discovered they needed more is far more receptive to a premium tier than a customer who already paid $500 and feels shortchanged.</p>

<p>The low commitment cuts both ways, and that is a feature. Subscribers can leave, and that is one of the major reasons they sign up in the first place. The low barrier to exit is also a low barrier to entry. “We’ll make it up in volume” is the oldest self-deprecating joke in business, but with subscriptions it is actually true. People come and go as circumstances change, and many come back when circumstances change again. A one-time buyer who leaves is gone forever. A former subscriber is a warm lead who already knows your product.</p>

<h2 id="even-microsoft-figured-this-out">Even Microsoft Figured This Out</h2>

<p>If there was ever a company that could sustain one-time sales indefinitely, it was Microsoft. Office dominated the market for decades as boxed software. Businesses bought bulk licenses and individuals paid once per version. Microsoft had the brand, the distribution, the market share, and the switching costs to keep that model running longer than almost anyone else could.</p>

<p>They abandoned it anyway. The numbers tell you why: when Office 365 subscription revenue first surpassed perpetual license revenue, only about 10% of Office’s total user base was on subscriptions. Ten percent of users on a recurring model generated more revenue than the other 90% buying one-time licenses (<a href="https://www.computerworld.com/article/1714132/microsofts-office-365-subscription-push-pays-off-what-it-means-for-biz.html" target="_blank" rel="noopener noreferrer">Computerworld</a>). That ratio alone tells you everything about why Microsoft made the shift and why the one-time model was already dead before they formally moved away from it.</p>

<p>If Microsoft looked at one-time licenses and decided the model wasn’t working, smaller companies don’t stand a chance trying to make it work.</p>

<h2 id="get-chickens-first">Get Chickens First</h2>

<p>The instinct for many businesses is to chase the big deal. Land the whale, close the enterprise contract, win the marquee client. That is big game hunting, and it has the same problem as all hunting: it’s unpredictable, resource-intensive, and doesn’t compound.</p>

<p>Start with chickens instead. Chickens aren’t glamorous and they don’t make for exciting board presentations, but they produce eggs every day without drama.</p>

<p>In practice, this means offering a cheap or free tier that the majority of users can access with minimal friction. Free for the 80%, affordable subscriptions for most of the rest, and premium pricing for the power users who need more. The cheap tier generates cash flow and builds a user base. The premium tier captures the customers who have already proven they need what you offer.</p>

<p>And chickens don’t just lay eggs; they produce more chickens. Subscribers are better referral sources than one-time buyers for reasons that compound on each other. A one-time buyer disappears after the transaction, but a subscriber stays connected, which means you can involve them in polls, feedback loops, and community conversations that deepen their investment in the product. That involvement creates satisfaction that goes beyond the product itself; people advocate for things they feel part of, not things they bought once.</p>

<p>The low price point also makes referrals frictionless. “Try this, it’s $15 a month” is a far easier recommendation to make than “you should spend $500 on this.” And if you build a community culture around your subscriber base, you create advocates who do your marketing for free because they genuinely want others to share the experience. Your customers become your marketing engine, and that referral channel costs you nothing. Hunting never gets cheaper. Chickens compound.</p>

<p>With chickens producing steady cash flow, you can pursue big game selectively. Enterprise deals and high-value contracts become supplemental wins rather than survival necessities. When you aren’t desperate, you negotiate better, qualify opportunities more carefully, and walk away from deals that don’t fit. Big game is rarer and more valuable precisely because you don’t need it to keep the lights on.</p>

<h2 id="the-long-term-only-arrives-if-the-short-term-doesnt-kill-you">The Long Term Only Arrives if the Short Term Doesn’t Kill You</h2>

<p>One-time sales feel like progress because money arrives in visible chunks. But each sale is a dead end that resets the clock, and the short term is demanding enough to consume every resource you have before the long-term strategy ever materializes.</p>

<p>Not every business can make this shift, and I won’t pretend to understand every nuance of every market well enough to make a universal claim. But for software and repeatable services, the math is unambiguous. Sell it cheap if you must. A $10 subscription that renews every month is worth more than a $200 one-time sale by month 21, and the customer relationship is still alive. The early months will be lean, but a one-time sales business at month 12 is in the same position it was in at month 1 while a subscription business at month 12 has compounding revenue and a customer base that makes the company worth acquiring if you ever want that option. Subscriptions solve the cash flow problem, the marketing sustainability problem, and the customer expectation problem simultaneously. They turn every customer from a dead end into a compounding asset.</p>
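<p>The crossover claim above is easy to verify. A quick Python sketch (the prices are from the paragraph; the no-churn assumption is mine, for illustration):</p>

```python
# Compare cumulative revenue: a $200 one-time sale vs. a $10/month
# subscription, assuming (illustratively) the subscriber never churns.
def crossover_month(one_time: int, monthly: int) -> int:
    """First month in which cumulative subscription revenue
    exceeds the one-time payment."""
    month = 0
    cumulative = 0
    while cumulative <= one_time:
        month += 1
        cumulative += monthly
    return month

print(crossover_month(200, 10))  # 21
```

<p>Under these assumptions the subscriber passes the one-time buyer at month 21, exactly as stated. Churn shortens that horizon, which is why retention, not acquisition, becomes the marketing problem once the model flips.</p>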

<p>Get chickens, let them lay eggs, and then go hunt big game when you can afford to be selective about it.</p>]]></content><author><name>Steven Stuart</name><email>stevenstuartm@gmail.com</email></author><category term="business" /><category term="marketing" /><category term="strategy" /><summary type="html"><![CDATA[One-time sales make every customer a dead end, creating a death spiral where acquisition costs are recurring but revenue isn't. Subscriptions fix the math by compounding cash flow, lowering customer expectations, and funding sustainable marketing.]]></summary></entry><entry><title type="html">How One Screen Holds the Entire Industry Hostage</title><link href="https://stevenstuartm.com/blog/2026/02/17/why-are-we-still-writing-two-mobile-apps.html" rel="alternate" type="text/html" title="How One Screen Holds the Entire Industry Hostage" /><published>2026-02-17T00:00:00+00:00</published><updated>2026-02-17T00:00:00+00:00</updated><id>https://stevenstuartm.com/blog/2026/02/17/why-are-we-still-writing-two-mobile-apps</id><content type="html" xml:base="https://stevenstuartm.com/blog/2026/02/17/why-are-we-still-writing-two-mobile-apps.html"><![CDATA[<p>Frameworks like React Native, Flutter, and MAUI keep promising to end the “write it twice” problem across mobile platforms. One codebase, every platform, native-quality results. Yet every time, the abstraction leaks, and then it floods so fast that bailing water is all you have time to do. I’ve been working with MAUI recently, and the experience crystallized a question I should have asked sooner: why am I not just building a website?</p>

<p>Once you pull that thread, it unravels fast. The web platform’s capability surface is far larger than the industry acknowledges, and nearly everything preventing universal web adoption is inertia, business incentives, or mental models rather than real technical constraints.</p>

<blockquote class="pull-quote">
<p>The web can do the job. One company made sure you'd never trust it to.</p>
</blockquote>

<p>This isn’t an argument that native apps are obsolete or that local executables should disappear. There are good reasons to run code on your own hardware, and the pure thin-client terminal hasn’t arrived yet; maybe it shouldn’t. But when teams default to native without questioning it, they accept costs and constraints on the client side that the backend abandoned years ago.</p>

<h2 id="what-the-web-platform-can-actually-do">What the Web Platform Can Actually Do</h2>

<p>The capabilities list for the modern web is longer than most developers and decision-makers expect. For the typical business application, whether it runs on a phone, a tablet, or a desktop, the web platform already covers the core requirements:</p>

<table>
  <thead>
    <tr>
      <th>Capability</th>
      <th>Web Technology</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Offline support</td>
      <td>Service Workers, Cache API</td>
    </tr>
    <tr>
      <td>Push notifications</td>
      <td>Push API (iOS 16.4+, March 2023)</td>
    </tr>
    <tr>
      <td>Camera, microphone, biometrics</td>
      <td>getUserMedia, WebAuthn/Passkeys</td>
    </tr>
    <tr>
      <td>Payment processing</td>
      <td>Payment Request API (includes Apple Pay)</td>
    </tr>
    <tr>
      <td>Home screen installation</td>
      <td>Web App Manifest, standalone window</td>
    </tr>
    <tr>
      <td>GPU-accelerated graphics and compute</td>
      <td>WebGPU (all major browsers, Nov 2025)</td>
    </tr>
    <tr>
      <td>Peripheral device access</td>
      <td>WebUSB, WebSerial, WebBluetooth, WebHID (Chromium)</td>
    </tr>
    <tr>
      <td>Local file access</td>
      <td>File System Access API, Origin Private File System</td>
    </tr>
    <tr>
      <td>Near-native performance</td>
      <td>WebAssembly, Web Workers</td>
    </tr>
    <tr>
      <td>Real-time communication</td>
      <td>WebRTC</td>
    </tr>
  </tbody>
</table>

<p>That list covers what the vast majority of apps actually do. Most are thin clients over an API: authenticate a user, fetch data, display it, let the user interact with it. The web handles all of that with a single codebase on every platform with a browser, and the deployment model alone should give teams pause. No App Store review cycles, no waiting days for a critical bug fix to clear approval, no separate release pipelines for each platform.</p>

<h2 id="what-genuinely-requires-native">What Genuinely Requires Native</h2>

<p>The web can’t do everything. Some capabilities have no web equivalent and genuinely require native development.</p>

<ul>
  <li><strong>Wearable integration and health data</strong> like Apple Watch complications, Wear OS tiles, HealthKit, and Google Health Connect require platform SDKs with no web alternative</li>
  <li><strong>Advanced augmented reality</strong> using LiDAR scanning, scene understanding, and body tracking exceeds what WebXR currently offers</li>
  <li><strong>Deep OS integration</strong> like Siri Shortcuts, Google Assistant routines, home screen widgets, and inter-app communication remains outside the web’s reach</li>
  <li><strong>True background processing</strong> for geofencing, long-running background jobs, and persistent location tracking requires native APIs</li>
  <li><strong>Specific hardware access</strong> like NFC writing on iOS, advanced camera controls, and screenshot blocking are native-only capabilities</li>
</ul>

<p>This list is relevant, but it’s also narrow. Look at the apps on your phone and the software on your desktop, and count how many actually need any of these features.</p>

<h2 id="cross-platform-frameworks-are-the-wrong-answer">Cross-Platform Frameworks Are the Wrong Answer</h2>

<p>Cross-platform frameworks don’t eliminate the two-codebase problem; they disguise it. React Native’s bridge, Flutter’s rendering engine, and MAUI’s handler pattern each introduce their own category of bugs that don’t exist in either native platform. You haven’t removed the platform differences; you’ve added a third abstraction layer and inherited all three bug surfaces.</p>

<p>The tech debt is unprojectable because you don’t control the framework’s roadmap. When Apple changes iOS, you wait for the framework to catch up. When the framework ships breaking changes, you’re locked into an unplanned upgrade. When a critical bug sits in the issue tracker for months, your only options are workarounds or forks.</p>

<p>The original justification was that specialized native developers are expensive, so share code to reduce cost. AI code generation has collapsed that constraint. A competent developer with AI assistance can now ship Swift or Kotlin without years of platform experience, while every original disadvantage of cross-platform remains.</p>

<h2 id="two-companies-two-arcs">Two Companies, Two Arcs</h2>

<p>To understand why the web hasn’t become the default, it helps to look at how the two most influential companies in software development have traded places.</p>

<p>In the early 2000s, Microsoft was the villain. They owned the desktop, the browser, the runtime, and the development tools, and the DOJ antitrust case in 2001 was about exactly this: using a Windows monopoly to crush Netscape. Apple was the scrappy alternative making beautiful things for creative people, and when the iPhone launched in 2007 it felt like liberation from the carrier-controlled mobile landscape.</p>

<p>Then each company lost something important, and their responses tell you everything.</p>

<p>Microsoft lost mobile, and Windows 8 alienated a growing share of desktop users. Their response was to stop trying to own the screen and instead to compete on the stack. .NET went open source, Visual Studio Code became the most popular editor in the world, they acquired GitHub and kept it open, and Azure now runs more Linux workloads than Windows. The company that once tried to kill Linux now employs more Linux kernel contributors than most Linux companies.</p>

<p>I am still baffled that Microsoft did not walk back its bloated OS and clunky desktop UX as soon as its market assumptions proved so wrong. Windows seems to have gotten worse with each version, with no sign of redemption. Two steps forward and one step back.</p>

<p>Apple very quickly went the other direction. When the iPhone became the dominant computing device, Apple discovered what Microsoft had known in the 1990s: if you control the platform people depend on, you don’t have to compete on openness. You compete on control.</p>

<p>I write .NET code for a living and I choose to do it on a Mac because the experience is genuinely better. Notice what that reveals about both companies though. Microsoft made it possible by building .NET and VS Code to run everywhere. Try the reverse: building an iOS app without a Mac, submitting to the App Store without Xcode, running Swift on Windows with the same support .NET has on macOS. You can’t. Microsoft earns developers by being useful everywhere. Apple captures them by being mandatory.</p>

<p>Apple’s products deserve the loyalty they command. The Mac is excellent, the ecosystem integration is seamless, and users trust the brand for good reasons. That trust is exactly what makes the constraint so effective. When a company makes products this good, people don’t scrutinize the walls. They assume the walls exist for good reasons.</p>

<p>But look at what Apple controls versus what they build. Siri has been outperformed by competitors for over a decade, and it doesn’t matter because Siri doesn’t need to be good; it needs to be on the iPhone. Owning the screen means you don’t have to be the best at anything that runs on it; you just need to be good enough at the thing people hold, and everything else flows through you.</p>

<blockquote class="pull-quote">
<p>Apple doesn't compete on technology. They compete on constraint ownership. The phone is the aperture, and Apple controls the aperture.</p>
</blockquote>

<h2 id="the-walls-apple-built">The Walls Apple Built</h2>

<p>The walls Apple has constructed around iOS are higher than anything Microsoft built around Windows in the 1990s, and they’re more sophisticated because they’re framed as user protection rather than vendor control.</p>

<p>Every browser on iOS must use Apple’s WebKit rendering engine. Chrome on your iPhone isn’t really Chrome. It’s a WebKit skin with Chrome’s UI on top. Firefox, Edge, Brave: all WebKit underneath. This means Apple alone controls what web capabilities exist on every iOS device, regardless of which browser icon a user taps.</p>

<p>In Chrome on Android, web apps can access more than 47 Web APIs, including Bluetooth, NFC, Background Sync, USB, and serial devices. On iOS, none of those APIs are available in any browser. In June 2020, Apple publicly rejected 16 Web APIs citing “privacy and fingerprinting concerns.” Android handles the same APIs with straightforward permission prompts. The privacy argument doesn’t hold up when every other platform manages these capabilities without the problems Apple claims are unsolvable.</p>

<p>Chrome on Android supported push notifications in 2015. iOS didn’t get web push until March 2023, and even then Apple requires users to install the web app to their home screen first. On Android, any website can request push permission.</p>

<p>The EU’s Digital Markets Act forced Apple’s hand on browser engine choice in 2024, but the response was revealing. Rather than comply, Apple attempted to remove PWA support entirely in the EU, converting installed web apps into simple bookmarks. Their justification was “complex security and privacy concerns.” After an open letter gathered over 4,200 signatures and the European Commission sent formal inquiries, Apple reversed the decision within two weeks. Genuine security concerns don’t evaporate under public pressure.</p>

<p>And even after the DMA technically required browser engine choice, as of early 2026 zero browsers have shipped a non-WebKit engine on iOS in the EU. The regulation exists on paper. The monopoly persists in practice.</p>

<p>The financial incentive is straightforward. The App Store generated approximately $27 billion in commissions in 2024 on a 30% cut. Every app that ships as a web app is revenue Apple doesn’t collect. The U.S. Department of Justice made this connection explicit in their March 2024 antitrust lawsuit, which specifically cites the WebKit requirement as part of Apple’s monopoly maintenance strategy.</p>

<p>Android doesn’t have these restrictions. Chrome supports the full suite of web APIs and PWAs work as first-class applications. But it doesn’t matter. No product leader will ship something that doesn’t work on iPhones, and Apple’s users represent the highest-value demographic in every Western market. The most constrained major platform sets the ceiling for what anyone builds.</p>

<h2 id="the-circular-logic-of-users-prefer-native">The Circular Logic of “Users Prefer Native”</h2>

<p>The most common justification for building native apps is market data showing that users spend 88-92% of their mobile time in apps and only 8-12% in browsers. Native retains users at 32% after 90 days compared to 20% for web. The data seems decisive.</p>

<p>But this is a post-hoc fallacy dressed up as market research. Of course the native experience retains users better; it received ten times the investment. Of course users spend more time in apps; they were never given an equivalent web alternative. Native gets the discovery mechanisms, the design talent, and the push notification support. Web gets a fraction of the budget and is treated as a fallback. You cannot measure user preference when one option was deliberately hobbled by the platform owner and underfunded by the developer.</p>

<p>The developer survey data has the same circularity. Flutter and React Native adoption is growing, but these frameworks exist because Apple won’t let the web do what it already does on every other platform. A developer checks iOS web capabilities, finds background sync missing and Bluetooth unavailable, builds native instead, and that decision gets counted as evidence that the web isn’t ready. The constraint creates the behavior that justifies the constraint.</p>

<p>The counterfactual has never been tested at scale because Apple has prevented it. Equivalent web and native experiences have never existed on iOS. The assumption that native is inherently superior has become so embedded that most teams skip straight to “which framework?” without ever stopping at “does this need to be an app?”</p>

<p>The few times the counterfactual has been tested, the results are telling. The Financial Times left the App Store in 2011 and is still web-first over a decade later. Starbucks built a PWA 99.84% smaller than their iOS app and doubled daily active users. But Starbucks kept the native app too, which raises an important question I can’t answer: did they keep it because native was genuinely better, or because no one was willing to ask “why do we still have this?”</p>

<h2 id="the-anxiety-that-predates-mobile">The Anxiety That Predates Mobile</h2>

<p>When the iPhone launched in 2007, Steve Jobs told developers to build web apps. The web genuinely wasn’t ready, and the App Store arrived a year later. But the response to that gap matters more than the gap itself. Rather than rallying behind closing it, the industry built an entirely parallel native ecosystem. This follows a pattern that has repeated since the 1960s: every generation of computing produces a viable thin-client model, and every generation finds reasons to reject it. Mainframe terminals gave way to PCs. Sun’s network computer was technically sound and commercially dead. Chromebooks were dismissed as laptops that couldn’t work offline, even as every application was migrating to the browser. The anxiety is always the same: if computation lives somewhere else, you lose control. Companies that profit from local-first computing have always been happy to amplify that fear.</p>

<p>The backend already completed the thin-client transition. Cloud won decisively; nobody serious argues for on-premises-first anymore. But the frontend is frozen at the same conceptual barrier that existed when the first PC replaced the first terminal. We accepted that our servers are someone else’s computers. We haven’t accepted that our applications could be someone else’s rendering.</p>

<p>Mobile is also the reason the web became capable enough to challenge native at all. Service workers, WebGL, touch APIs, and WebAssembly weren’t inevitable. They were a competitive response to native threatening to make the web irrelevant. The ecosystem that pressured the web into becoming a genuine application platform is now the same ecosystem preventing it from being used as one.</p>

<p>Cloud broke through because no single company controlled the server. The web can’t break through until it works on Apple’s phone, and Apple decides what works on Apple’s phone.</p>

<h2 id="progress-often-comes-by-getting-out-of-its-way">Progress Often Comes by Getting Out of Its Way</h2>

<p>Before writing that shiny new app, ask yourselves: “Do we have a specific, documented constraint that the web platform cannot satisfy?”</p>

<p>For most mobile software needs, the answer is no. The web runs everywhere, deploys instantly, requires no framework intermediary, and its capability surface grows with every browser release. Cross-platform frameworks tried to solve platform fragmentation by adding another platform on top. The web solved it by being the platform that was already there. In Android-dominant markets like India and Southeast Asia, companies like Flipkart and JioSaavn have already proven this works: one codebase, instant deployment, no App Store tax.</p>

<p>The immediate objection is discoverability. People find apps by searching the App Store, so if you’re not in the store, you’re invisible. But most app discovery doesn’t actually happen through store browsing; it happens through web search, social media, ads, and word of mouth. The store is more of a checkout counter than a shopping mall. Google Play already supports Trusted Web Activities, which let PWAs appear as store listings. The Microsoft Store accepts PWAs directly. For enterprise and B2B products, store discovery was never relevant to begin with. The discoverability argument is narrower than it sounds, and it gets narrower every year as deep links, QR codes, and social sharing put users directly into web experiences without a store in between.</p>

<p>The pragmatic strategy might be web-first. Build for the browser as the default platform, and only build native when a specific capability genuinely can’t be delivered through the web. The web app is your product. The native app, if you need one at all, exists only for the features that Apple won’t let the browser handle.</p>

<p>Cost, velocity, and agility shouldn’t be values we only demand from our backend infrastructure. The same expectations that drove the industry from on-premises servers to cloud should apply to how we build and deliver client software. Native apps aren’t going away, and they shouldn’t. But we should be progressing toward both efficiency and sustainability rather than accepting a status quo where one company’s business model determines how the entire industry ships code.</p>]]></content><author><name>Steven Stuart</name><email>stevenstuartm@gmail.com</email></author><category term="architecture" /><category term="web-development" /><category term="mobile" /><category term="pwa" /><category term="cross-platform" /><summary type="html"><![CDATA[The web platform already does what most apps do, but Apple's control of the phone screen keeps the industry building native. The 'users prefer native' narrative is circular logic created by the constraint itself.]]></summary></entry><entry><title type="html">Observability Is Authored, Not Installed</title><link href="https://stevenstuartm.com/blog/2026/02/11/observability-is-authored-not-installed.html" rel="alternate" type="text/html" title="Observability Is Authored, Not Installed" /><published>2026-02-11T00:00:00+00:00</published><updated>2026-02-11T00:00:00+00:00</updated><id>https://stevenstuartm.com/blog/2026/02/11/observability-is-authored-not-installed</id><content type="html" xml:base="https://stevenstuartm.com/blog/2026/02/11/observability-is-authored-not-installed.html"><![CDATA[<p>I have been a part of a dev team where poor observability constantly brought us to a standstill. Not because the tooling was missing, but because the data it collected never carried meaningful context. Alerts fired constantly, so operation teams ignored them, and dashboards existed for every service, but none of them answered the questions that mattered during incidents. Investigations that should have taken minutes took hours. 
It got bad enough that observability failures alone caused significant SLA violations.</p>

<p>We questioned our choice of platforms, dashboards, and alerting rules, yet none of that helped, because the problem was never the tooling. The problem was upstream: our code didn’t know the difference between “I handled this correctly” and “something is actually broken.”</p>

<h2 id="the-classification-problem">The Classification Problem</h2>

<p>Consider a payment processing system. A customer’s card gets declined for insufficient funds. The payment gateway returns a rejection, and the system logs it as an ERROR.</p>

<p>But this is the system working correctly. The card was declined because it should have been declined. Insufficient funds is a handled business case, not an exception. Because it’s logged as an error, though, it shows up in error dashboards, triggers error-rate alerts, and adds to the ambient noise that operators learn to tune out.</p>

<p>Over time, “payment errors” become background radiation. The team knows most of them are just declined cards, so they stop investigating. Then the gateway starts timing out, or a partner pushes a breaking change, and the real problem gets buried. Nobody notices because “payment errors are always high.”</p>

<p>The usual response is to blame the team for ignoring alerts. It is a discipline problem, yes, but the discipline that’s missing is upstream, in the code that treats expected outcomes as errors. Alert fatigue is the predictable consequence.</p>

<p>The fix is upstream of your alerting platform:</p>

<ul>
  <li><strong>Expected success</strong>: The happy path. Logged at DEBUG if at all.</li>
  <li><strong>Expected failure</strong>: Business logic correctly rejecting something, like declined payments, validation failures, or rate limiting. This is INFO, not ERROR.</li>
  <li><strong>Degraded but functional</strong>: The system recovered, but something is wearing thin. Retries succeeding after multiple attempts, response times approaching SLA thresholds, connection pools running hot. This is WARN: not broken yet, but worth watching before it becomes broken.</li>
  <li><strong>Unexpected failure</strong>: Something genuinely went wrong that demands investigation. This is the only category that should be ERROR.</li>
</ul>
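<p>A minimal sketch of what this mapping can look like in code, using Python’s standard <code class="language-plaintext highlighter-rouge">logging</code> module. The outcome names and the <code class="language-plaintext highlighter-rouge">log_charge_outcome</code> helper are illustrative, not a real gateway API:</p>

```python
import logging

logger = logging.getLogger("payments")

def log_charge_outcome(outcome: str, attempts: int, order_id: str) -> int:
    """Map a charge outcome onto the four categories and log it.

    Returns the level used so callers (and tests) can verify the
    classification. Outcome strings here are hypothetical.
    """
    if outcome == "approved" and attempts == 1:
        level = logging.DEBUG    # expected success: the happy path
    elif outcome in ("declined_insufficient_funds", "validation_failed"):
        level = logging.INFO     # expected failure: business rule worked
    elif outcome == "approved" and attempts > 1:
        level = logging.WARNING  # degraded but functional: retries wearing thin
    else:
        level = logging.ERROR    # unexpected failure: demands investigation
    logger.log(level, "charge outcome",
               extra={"outcome": outcome, "order_id": order_id,
                      "attempts": attempts})
    return level
```

<p>The point of returning the level is that classification becomes a testable property of the code, not an accident of whichever log call a developer happened to reach for.</p>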

<p>When the system correctly declines a card for insufficient funds, it’s tempting to log that as WARN because you want the metric reviewed often. But a correctly handled decline is the system working as designed, not degrading. Whether the decline rate is “concerning” is a business question that changes with strategy and context; log levels shouldn’t encode that judgment. Leave business interpretation to reports and dashboards where it can evolve, not to code where it gets baked in and forgotten.</p>

<p>This is one of the places where <a href="/blog/2025/10/29/result-pattern-vs-exceptions-revisited.html">result types earn their keep</a>. When expected failures are returned as typed results rather than thrown as exceptions, the classification is baked into the code’s structure. A declined payment returns a result; a gateway timeout throws an exception. The distinction is explicit at the point where it matters most, and logging infrastructure can respect it without guessing.</p>
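<p>Sketched in Python (the types and names are hypothetical), the structural distinction looks like this: the expected decline is a return value, while the infrastructure fault is an exception:</p>

```python
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Charged:
    transaction_id: str

@dataclass
class Declined:                   # expected failure: a value, not an exception
    reason: str

class GatewayTimeout(Exception):  # unexpected failure: genuinely exceptional
    pass

def charge(card_has_funds: bool, gateway_up: bool) -> Charged | Declined:
    """Return expected outcomes; throw only for the unexpected."""
    if not gateway_up:
        raise GatewayTimeout("no response from payment gateway")
    if not card_has_funds:
        return Declined("insufficient_funds")  # handled business case -> INFO
    return Charged("txn-001")                  # happy path -> DEBUG
```

<p>Callers handle <code class="language-plaintext highlighter-rouge">Declined</code> as ordinary control flow, and only a true fault ever reaches the exception path, so the log-level decision falls out of the type rather than a developer’s judgment at each call site.</p>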

<p>When classification is right, every downstream tool benefits. Dashboards that track error rates become genuine health indicators because errors represent actual unexpected failures, not business logic working as designed. Log queries become surgical because structured errors with proper context let you filter to a specific tenant or operation in minutes. Alerts become actionable because they fire only for conditions that demand investigation.</p>

<p>When classification is wrong, the opposite happens. Alerts fire for expected outcomes, so operators learn to ignore them. Dashboards become decoration because nobody trusts what the numbers represent. Every investigation becomes archaeology because the data that should answer your questions is buried under noise. No monitoring platform compensates for what the code got wrong at the source.</p>

<h2 id="context-is-authored-not-accumulated">Context Is Authored, Not Accumulated</h2>

<p>Getting the classification right is only half of it. The other half is what you include when something does fail.</p>

<p>The instinct is to compensate with volume: write verbose logs everywhere so you’ll have context when you need it. But a trace log is not a dump file. Every bug I’ve seen diagnosed from trace logs involved information that should have already been in the error or warning itself. The problem was never insufficient logging volume; it was that nobody authored the context where it mattered.</p>

<p>What actually solves bugs is understanding what the user did and what they sent, not tracing the code’s internal flow. If your logs carry a correlation key across services (most structured logging libraries support this out of the box) and your errors capture the operation, the input, and what went wrong, you have what you need to reproduce the problem. The approach is the same one that makes event-sourcing systems reliable: capture the context that led to a state so you can replay it. You don’t need to trace every intermediate step if you can reconstruct the scenario from the input and the outcome.</p>

<p>Failures should carry their own context. When an operation fails, the error log should include what was being attempted, what went wrong, and enough identifying information to correlate it. What gets logged must be intentional. You know the domain, so you know the potential inputs, what’s valid, and what’s sensitive. That knowledge lets you author a safe context: enough to reproduce the problem without exposing data that shouldn’t be in a log. If you don’t understand the domain well enough to make that distinction, that’s the source of the problem, not the logging infrastructure. Trace-level logging has its place for diagnosing specific flows when you can toggle it on temporarily, but it shouldn’t be your primary mechanism for understanding what your system did.</p>
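<p>As a sketch of what authored context can mean in practice (the field names and redaction list are illustrative, not a standard schema): one structured entry that carries the operation, a deliberately filtered copy of the input, and a correlation key:</p>

```python
import json

# Domain knowledge decides what never belongs in a log.
SENSITIVE_FIELDS = {"card_number", "cvv", "password"}

def authored_error(operation: str, user_input: dict,
                   error: Exception, correlation_id: str) -> str:
    """Build one self-contained error entry: what was attempted, with
    what input, what went wrong, and a key to correlate it across
    services. Sensitive fields are stripped before anything is written."""
    safe_input = {k: v for k, v in user_input.items()
                  if k not in SENSITIVE_FIELDS}
    return json.dumps({
        "correlation_id": correlation_id,
        "operation": operation,
        "input": safe_input,
        "error": type(error).__name__,
        "detail": str(error),
    })
```

<p>An entry like this is enough to reconstruct the failing scenario on its own, without grepping trace logs for the surrounding execution flow.</p>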

<p>The difference between a useful error and a useless one is whether someone authored the context intentionally or hoped that raw volume would cover it.</p>

<h2 id="the-black-box-test">The Black Box Test</h2>

<p>Classification and context are design decisions, but most developers never test whether their logging actually answers the questions it needs to. One reason is the debugger habit. When something behaves unexpectedly, the instinct is to attach a debugger, set breakpoints, and step through execution rather than read the outputs.</p>

<p>Some organizations extend this habit into production with remote debugging, but that’s a security liability. Direct access to a running container, or any production process, exposes the environment regardless of the layer. You should be observing system outputs, not attaching to live processes.</p>

<p>Production should be a black box. If your default instinct when something breaks is to attach a debugger rather than read the outputs, you’ll never feel the pressure to make those outputs useful. The classification stays sloppy, the context stays thin, and the errors stay vague. Not because you don’t know better, but because you’ve never needed better.</p>

<p>Developers who diagnose from observable behavior, whether testing locally against containerized dependencies or against remote systems, build the discipline naturally. They feel the pain of vague errors and missing context firsthand, and they fix it at the source because they have no other option.</p>

<p>The practical test is straightforward: when something breaks, can you diagnose it from the system’s outputs alone? Or do you need to add logging, redeploy, and wait for it to happen again? If the answer is the latter, your code doesn’t explain itself yet.</p>

<p>That core discipline compounds when builders own what they operate. You don’t log payment declines as errors when you’re the one who gets paged for “high error rate on payment service.” You don’t dump verbose logs instead of authoring context when you’re the one parsing them at 3 AM. The feedback loop between writing code and living with it in production is what makes classification honest, context intentional, and alerts worth waking up for.</p>

<p>Better tooling alone won’t create that loop; only ownership will.</p>]]></content><author><name>Steven Stuart</name><email>stevenstuartm@gmail.com</email></author><category term="observability" /><category term="devops" /><category term="architecture" /><category term="operations" /><summary type="html"><![CDATA[Most observability failures trace back to code that doesn't classify its own behavior. When your system can't distinguish 'handled correctly' from 'actually broken,' no platform can compensate.]]></summary></entry><entry><title type="html">The Most Dangerous Sentence in Software Development</title><link href="https://stevenstuartm.com/blog/2026/02/07/you-cant-realign-if-you-cant-stop.html" rel="alternate" type="text/html" title="The Most Dangerous Sentence in Software Development" /><published>2026-02-07T00:00:00+00:00</published><updated>2026-02-07T00:00:00+00:00</updated><id>https://stevenstuartm.com/blog/2026/02/07/you-cant-realign-if-you-cant-stop</id><content type="html" xml:base="https://stevenstuartm.com/blog/2026/02/07/you-cant-realign-if-you-cant-stop.html"><![CDATA[<p>Something is broken in my approach to problem-solving, and I suspect it’s broken in yours too.</p>

<p>When I’m mid-implementation and a better idea surfaces, my first instinct isn’t to evaluate it; it’s to finish what I’m doing. Not because I’ve weighed the alternatives and made a conscious choice to stay the course. I just can’t seem to stop. The impulse to complete what’s in front of me overrides the signal to reconsider, and by the time I’ve finished, the switching costs are real and the moment for cheap redirection has passed.</p>

<p>In software, the most dangerous sentence is “let me just get this working first.”</p>

<p>It’s rarely pride, just a pre-rational impulse to proceed before changing direction, as though stopping mid-stride carries some invisible cost that continuing doesn’t.</p>

<h2 id="a-bias-faster-than-reflection">A Bias Faster Than Reflection</h2>

<p>This pattern has a name. In aviation, it’s called <a href="https://www.faa.gov/newsroom/safety-briefing/cfit-and-plan-continuation-bias" target="_blank" rel="noopener noreferrer">get-there-itis</a>. In cognitive science, the broader phenomenon is plan continuation bias. <a href="https://skybrary.aero/articles/continuation-bias" target="_blank" rel="noopener noreferrer">Research at NASA’s Ames Research Center</a> found that roughly 75% of tactical decision errors in airline accidents were decisions to continue the original plan despite cues suggesting a different course of action (Orasanu et al., 2001). These weren’t errors of skill or knowledge; they were errors of continuation.</p>

<p>When pilots flying under visual rules encounter weather that requires instruments and press on anyway, the <a href="https://www.aopa.org/training-and-safety/air-safety-institute/accident-analysis/vfr-into-imc/overview" target="_blank" rel="noopener noreferrer">fatality rate is 86%</a> (AOPA Air Safety Institute). These aren’t reckless or untrained pilots; many are experienced and fully aware of the danger.</p>

<p>The mechanism explains why awareness alone isn’t enough. When the original plan was well-justified, subsequent contradictory signals receive less cognitive weight. The better your reason for starting, the harder it becomes to hear the signal telling you to stop. As workload increases and you get deeper into execution, less mental capacity is available to reconsider. The original plan has inertia, and new information has to fight through it.</p>

<h2 id="let-me-just-get-this-working-first">“Let Me Just Get This Working First”</h2>

<p>The idea of finishing first and then reevaluating sounds reasonable. Finish what you’ve started, then evaluate from a position of knowledge rather than speculation. But watch carefully what actually happens. You spend another hour building out the current approach. You write tests around it. Other code starts depending on it. A colleague reviews it and builds understanding of how it works. All the while, you are diving deeper into work premised on an idea that has not yet proven its worth. At that point, “considering the other approach” means throwing away not just your work but the organizational investment in reviewing, understanding, and integrating what you’ve built.</p>

<p>The impulse to finish manufactures the very sunk costs that now appear to justify not switching.</p>

<p>This is the difference between proving and testing. Proving asks “can I make this work?” and the answer is almost always yes given enough effort. Testing asks “should I be making this work?” and that’s the question that actually matters. When someone says “let me just get this working first,” they’re proving, not testing. They want to see their assumption become real before they’ll allow a competing idea to be evaluated. The current assumption gets the full weight of implementation effort while the alternative gets a hypothetical conversation, maybe, later, if there’s time.</p>

<p><a href="https://en.wikipedia.org/wiki/Escalation_of_commitment" target="_blank" rel="noopener noreferrer">Barry Staw’s research on escalation of commitment</a> found something uncomfortable: people who feel personally responsible for the initial decision commit more resources to it when it starts failing, not fewer (Staw, 1976). The instinct isn’t to cut losses. It’s to double down, as though additional effort can retroactively make the original decision correct. In software, this looks like the developer who spends two more days making a questionable approach work rather than spending thirty minutes evaluating whether a different approach would have been simpler from the start.</p>

<h2 id="when-the-herd-feels-like-validation">When the Herd Feels Like Validation</h2>

<p>The individual version of this bias is problematic enough on its own. The group version is worse, and it doesn’t work the way most people assume.</p>

<p>Classic groupthink involves the active suppression of dissent: people silencing themselves or being silenced because the group demands conformity. That happens, but it’s not the most common pattern. The more common version is subtler and more passive. Nobody explicitly validated the direction. Nobody consciously suppressed alternatives. Everyone just assumed that someone else must have validated it, and the group’s momentum itself became evidence that the direction is correct.</p>

<p>This isn’t the bystander effect, where people assume someone else will act. It’s that adding your force to the herd’s direction feels rational because the herd is already moving. If everyone is going this way, someone must have confirmed it’s right. And even if nobody confirmed anything, the sheer mass of collective effort makes the direction feel too established to question.</p>

<p>This compounds with individual plan continuation bias. Each person on the team is locked into “finish what I’m doing” mode while simultaneously treating the group’s momentum as confirmation that the direction is right. The herd moving fast feels like progress. Questioning the direction doesn’t just feel unproductive; it feels like you’re slowing the team down, which in most team cultures carries real social cost.</p>

<p>The result is collective plan continuation bias. Individuals who can’t self-interrupt, operating inside a group that punishes interruption. Each person’s contribution feels small and the aggregate momentum feels like validation. Sometimes the herd is just running.</p>

<h2 id="when-good-advice-reinforces-the-bias">When Good Advice Reinforces the Bias</h2>

<p>The bias runs deep enough that even our corrective wisdom reinforces it. Consider the life lessons most people absorb without questioning.</p>

<p>“Think before you act.” Sound advice, except it assumes the thinking was sound. The bias doesn’t care whether you thought first; it cares that you committed to a direction. Once you’ve thought and decided, the decision has inertia. You thought, you chose, you proceeded, even if the thought was wrong. The problem was never acting without thinking. It’s acting without <em>reconsidering</em>.</p>

<p>“Finish what you started.” This is plan continuation bias repackaged as a character virtue. Discipline means following through, and quitting means weakness. The advice assumes that what you started is worth finishing, which is exactly the question the bias prevents you from asking. Persistence is genuinely valuable when the direction is right. When the direction is wrong, persistence is just the bias wearing a respectable disguise.</p>

<p>“The first step to recovery is admitting you have a problem.” In principle, yes. But notice what the phrasing assumes. “Admitting” implies you already know and just need to say it out loud. The actual first step is <em>recognizing</em> you have a problem, and recognition is exactly what the bias blocks. Those NASA pilots didn’t refuse to admit they were flying into dangerous conditions. They didn’t register it as dangerous in the first place. Recognition is the prerequisite that admitting takes for granted.</p>

<p>Each of these lessons skips past the moment that actually matters, the moment where you stop, reassess, and recognize that the current direction might be wrong. They treat that moment as though it happens automatically, as though thinking, persisting, and acknowledging are the hard parts. They aren’t. Stopping is the hard part, and our collective wisdom doesn’t just fail to address it; it actively discourages it.</p>

<h2 id="the-prerequisite-beneath-process">The Prerequisite Beneath Process</h2>

<p>This is the hardest part to accept. I’ve written extensively about values-driven development, about aligning before committing, about realigning after discovery. I believe that better disciplines produce better outcomes, and they do. But none of it matters if the people inside the process can’t stop long enough to let the process work.</p>

<p>Think about what “realign after discovery” actually requires. When new information surfaces mid-implementation, someone needs to notice it, recognize its significance, stop the current execution, communicate the discovery, and reconsolidate the agreement. Every step in that sequence is an interruption of momentum. At every step, plan continuation bias pulls in the opposite direction: keep going, finish what you started, evaluate later.</p>

<p>Some methodologies make this worse. Scrum’s sprint commitment locks the herd in for two weeks. Daily standups reinforce the current plan by asking everyone to report progress against committed work. Mid-sprint, questioning the direction isn’t just socially expensive; it’s structurally framed as disruption. Sprint reviews happen after two weeks of sunk cost have already accumulated, which means the only official moment for course correction comes when the bias is at its strongest. Scrum doesn’t fail to address plan continuation bias; it amplifies it.</p>

<p><a href="/blog/2025/11/17/shaped-kanban.html" target="_blank" rel="noopener noreferrer">Shaped Kanban</a> comes closest to accounting for this flaw. Circuit breakers assume that initial plans will prove wrong and someone will need permission to stop. Appetite-based time bounds treat abandoning work as a valid outcome rather than a failure. Shaping before commitment means less is invested when the “this isn’t right” signal arrives, giving the signal a chance to compete with momentum. Of the methodologies I’ve examined, it’s the only one that treats the inability to stop as a design constraint rather than a character flaw to overcome.</p>

<p>But even the best structural design can’t fully overcome a pre-rational impulse. Circuit breakers trip at defined boundaries; they don’t catch the continuous stream of smaller discoveries that arrive between them, the “this approach isn’t quite right” signal on a Tuesday afternoon that gets overridden by the impulse to finish before reconsidering.</p>

<p>Most management responses get the direction of the fix wrong by trying to correct the herd first through new intervals, ceremonies, and synchronization points. But herd-level intervals reinforce herd-level momentum. The fix proceeds from the individual outward: when individuals develop the capacity to stop and reconsider, the group benefits naturally. When organizations try to impose that capacity through process without addressing the individual first, they end up with synchronized momentum in the wrong direction.</p>

<h2 id="what-helps-when-awareness-isnt-enough">What Helps When Awareness Isn’t Enough</h2>

<p>If the problem were purely intellectual, knowing about plan continuation bias would prevent it. It doesn’t, because the impulse operates faster than reflection. But awareness is still the starting point, because you can’t build countermeasures for a pattern you haven’t recognized.</p>

<h3 id="reframing-what-stop-means">Reframing what “stop” means</h3>

<p>For most people, stopping feels like failure or waste. You were making progress and now you’re not. The reframe is that evaluation is itself the cheapest possible action, always cheaper than building more on a flawed foundation. This changes the emotional calculus even when the impulse is still there.</p>

<h3 id="keeping-switching-costs-low">Keeping switching costs low</h3>

<p>The impulse to continue draws power from real switching costs that accumulate with every hour of continued execution. The less you’ve invested, the easier it is to hear the signal telling you to change direction. Practices like cheap experiments before commitment, small commits, well-defined interfaces, and feature flags all serve the same purpose by keeping the cost of being wrong low for as long as possible.</p>
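<p>A feature flag is the smallest of these mechanisms. A minimal in-process sketch (the flag store, function names, and discount figure are all illustrative; real systems read flags from config or a flag service):</p>

```python
# Minimal in-process feature flag. A dict stands in for a real flag
# store; everything here is illustrative, not a production design.
FLAGS = {"new_checkout": False}

def legacy_checkout(cart):
    # proven path: stays live while the experiment runs
    return sum(cart)

def new_checkout(cart):
    # experimental path: can be deleted without touching callers
    return round(sum(cart) * 0.95, 2)

def checkout(cart):
    # changing direction is one flag flip, not a coordinated release
    if FLAGS.get("new_checkout"):
        return new_checkout(cart)
    return legacy_checkout(cart)
```

<p>Because the experimental path sits behind a flag, abandoning it costs a deletion rather than a rollback, which is exactly what keeps the "this isn't right" signal cheap to act on.</p>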

<p>This is why shaping work before committing to it matters. When you’ve defined clear boundaries and identified risks upfront, the moment of “this isn’t right” arrives before you’ve built the organizational investment that makes switching feel impossible.</p>

<h3 id="building-external-pause-points">Building external pause points</h3>

<p>Because the impulse operates faster than individual reflection, environmental design matters as much as personal discipline. Structural interrupts that externally force evaluation create pause points that don’t depend on someone having the self-awareness to stop on their own.</p>

<p>Circuit breakers, time boundaries, and explicit checkpoints make “keep going” an active choice rather than the default. When continuation requires justification instead of being automatic, the bias loses some of its power because you’re reasoning about whether to continue rather than just doing it.</p>

<h3 id="stopping-more-often">Stopping more often</h3>

<p>The natural objection is that questioning costs real productivity. Context switching is expensive, and developers invoke this constantly. But how much of that argument is genuine, and how much is the bias protecting itself? Four hours of uninterrupted coding on a misunderstood problem doesn’t produce the right solution. If you couldn’t stop to reconsider in the first place, then your “focused work” wasn’t productive flow; it was the bias running unchecked. Context switching away from something you didn’t understand and weren’t willing to re-examine isn’t losing momentum. It’s gaining perspective.</p>

<p>The <a href="https://en.wikipedia.org/wiki/Pomodoro_Technique" target="_blank" rel="noopener noreferrer">Pomodoro technique</a> was designed for productivity, but it accidentally created exactly the kind of permission structure this bias requires. Every 25 minutes, you stop. Not because something went wrong, but because the rhythm demands it. That forced pause is a moment where “am I still working on the right thing?” can surface without carrying the social or psychological cost that usually prevents reassessment. The number doesn’t matter. What matters is that stopping becomes part of the rhythm rather than an interruption of it.</p>

<p>There is a real cost to random interruption, and that’s exactly the argument that makes “wait until the sprint review” feel reasonable. But two weeks from now is almost always too late. If an individual can create a reassessment moment every half hour, the gap between that and a two-week sprint review reveals how rarely most teams actually pause to reconsider. Structural interrupts at the team level like circuit breakers, checkpoints, and hill chart reviews serve the same purpose at a larger scale by creating moments where questioning is expected rather than disruptive. There’s a difference between <a href="/blog/2025/09/19/rethinking-focus-software-development.html" target="_blank" rel="noopener noreferrer">being present in the moment and being focused to the exclusion of all else</a>. Presence means being receptive to signals while they’re still cheap to act on. Tunnel vision means the signal arrives and can’t get through.</p>

<h2 id="the-foundation">The Foundation</h2>

<p>This is the prerequisite for everything else I believe about building software. You can’t realign after discovery if you can’t stop long enough to receive the discovery. You can’t measure outcomes instead of activity if the impulse to continue makes activity feel like outcomes. You can’t build for change if you can’t change direction yourself.</p>

<p>Every methodology, every discipline, every process improvement I’ve advocated for assumes that when the signal arrives, someone will hear it and act on it. Plan continuation bias is the mechanism that prevents exactly that. Recognizing it, not just intellectually but in the moment, mid-implementation, while writing code that should have been reconsidered an hour ago, is the foundation that makes everything else possible.</p>]]></content><author><name>Steven Stuart</name><email>stevenstuartm@gmail.com</email></author><category term="plan-continuation-bias" /><category term="decision-making" /><category term="leadership" /><category term="cognitive-bias" /><category term="software-development" /><summary type="html"><![CDATA[Every methodology assumes that when discovery arrives, someone will stop and act on it. Plan continuation bias is the pre-rational impulse that prevents exactly that.]]></summary></entry><entry><title type="html">The Advantage of a Turtle with a Pen</title><link href="https://stevenstuartm.com/blog/2026/01/11/the-advantage-of-a-turtle-with-a-pen.html" rel="alternate" type="text/html" title="The Advantage of a Turtle with a Pen" /><published>2026-01-11T00:00:00+00:00</published><updated>2026-01-11T00:00:00+00:00</updated><id>https://stevenstuartm.com/blog/2026/01/11/the-advantage-of-a-turtle-with-a-pen</id><content type="html" xml:base="https://stevenstuartm.com/blog/2026/01/11/the-advantage-of-a-turtle-with-a-pen.html"><![CDATA[<p>People often cite Socrates as proof that the greatest thinkers don’t need to write. He held his entire philosophy in his head and defended it through pure reasoning on the spot. There’s wisdom in this. His constraint forced true internalization: knowledge held so deeply it could be defended in any moment.</p>

<p>But Socrates’ constraint was a choice, not an inability. And ancient philosophy, while profound, could fit in a single mind. Now, even for the average person, our modern world is far too complex for that. The world has grown past what internalization alone can hold.</p>

<p>This is where the turtle with the pen enters the scene.</p>

<p>People with exceptional memory can hold complex reasoning internally. They grasp concepts quickly and move on. They build impressive things and we need them. But their reasoning remains inaccessible to others, and their quick comprehension leaves no map of the territory they crossed.</p>

<p>Those who struggle develop different disciplines. They <em>must</em> write things down to remember. They <em>must</em> ask “why” to understand. They <em>must</em> prioritize ruthlessly because they can’t hold everything at once. These constraints force them to produce artifacts others can learn from and make strategic decisions about what actually matters.</p>

<p>The limitation becomes the mechanism for contribution.</p>

<p>Socrates’ constraint produced the sage. The turtle’s constraint produces civilizations. Both paths lead through discipline, but they scale differently. The sage’s wisdom dies with them unless someone else writes it down. The turtle builds foundations others can extend.</p>

<p>If you’ve ever felt slower than others, needed more time, written everything down while others seemed to just <em>know</em>, there is hope in the turtle’s path. Not despite the struggle, but through it.</p>

<h2 id="externalization-creates-compound-value">Externalization Creates Compound Value</h2>

<p>A brilliant insight that lives only in someone’s head dies with them, or transfers imperfectly through conversation. Written reasoning compounds. It can be referenced, refined, and shared at scale.</p>

<p>The person who writes everything down is building a knowledge base that outlives any single conversation. They’re not compensating for a deficit; they’re building something that scales beyond themselves.</p>

<h2 id="writing-forces-rigor-that-memory-doesnt">Writing Forces Rigor That Memory Doesn’t</h2>

<p>When you write something down, you discover the gaps in your reasoning. “I know this” becomes “wait, do I actually know this, or do I just <em>feel</em> like I know it?”</p>

<p>Memory can store conclusions without preserving the derivation. Writing demands you reconstruct the actual argument.</p>

<p>If you cannot articulate <em>why</em>, the <em>how</em> is lost with you. You might react correctly in familiar situations, but you can’t adapt when conditions change or teach anyone else to navigate novel circumstances.</p>

<p>Oral traditions preserved knowledge for millennia, but civilization could not grow past primitive means into higher abstractions without written proofs and instructions. Mathematics, engineering, law, science: each builds on derivations that must be examined, challenged, and extended. You cannot build a cathedral on “trust me, I know how arches work.”</p>

<p>The person who must slow down, test assumptions, and commit reasoning to a shared medium isn’t suffering a handicap. They’re being forced into the discipline that enables cumulative progress.</p>

<h2 id="the-empathy-gap">The Empathy Gap</h2>

<p>Even when those with exceptional memory write down their thoughts, they tend to disconnect from the audience. They can’t understand why others don’t understand what seems obvious.</p>

<p>Those who struggled have walked the path consciously. They know where the footholds are because they had to find them deliberately. They remember what it felt like to not understand, because that state was recent and real.</p>

<p><strong>The curse of expertise, accelerated.</strong> Everyone eventually forgets what it was like to not know something. But for those with quick comprehension, that forgetting happens almost instantly. They never dwelt in the confusion long enough to map its contours.</p>

<p><strong>Sincere effort is visible.</strong> Readers sense when someone has genuinely wrestled with material versus transmitting from assumed understanding. The struggle leaves traces: careful definitions, anticipated objections, worked examples. These are evidence of someone who knows where the hard parts are.</p>

<p>The missing ingredient isn’t simpler terms. It’s context.</p>

<p>When you struggle to recall and focus, context becomes everything. You can’t hold isolated facts, so you learn to ask “why” and “how are these things related.” You build webs of meaning because you have to. That habit turns out to be exactly what good teaching requires. The person who needed context to learn naturally provides context when they teach.</p>

<p>Understanding emerges from expansion compressed through teaching. The struggle forces both. You expand because you have to; you can’t skip research when things don’t click immediately. You compress because you must; explaining to others is how you retain anything at all.</p>

<h2 id="limited-focus-forces-prioritization">Limited Focus Forces Prioritization</h2>

<p>There’s a parallel pattern with attention. Someone who struggles to focus learns to prioritize ruthlessly and work in increments. They can’t chase every interesting thread, so they develop the discipline to identify what matters most.</p>

<p>Someone who doesn’t struggle tends to build everything at once. They hold the entire system in their head, optimizing locally across all components simultaneously. This produces impressive low-level work, but often with an overconfident view of scope. It’s easy for them to build intricate code and hard for them to focus on business objectives and team dynamics.</p>

<p>Both types are useful. But when it’s time to train and mentor others, you hope the “slow” person was taking notes and making high-level decisions. The person who couldn’t hold everything in their head had to decide what was worth holding. That constraint forced strategic thinking.</p>

<p>There’s another advantage to moving slowly: turtles produce less waste. The turtle measures value and defines success criteria before building. The hare can write a lot of code very fast, but the turtle asks whether that code should exist at all.</p>

<p>Architects and managers should strive to be turtles. The good ones bring clarity and unity. They translate between people who think differently. They provide the context that lets individual brilliance become collective progress.</p>

<h2 id="what-the-turtle-should-learn-from-socrates">What the Turtle Should Learn from Socrates</h2>

<p>There’s something genuinely lost when we can always reference instead of recall.</p>

<p>“I wrote about this somewhere” is not the same as “I know this well enough to defend it now.”</p>

<p>The person who offloads everything to documents can become lazy about understanding. They mistake having access to knowledge for possessing it. They can’t reason on their feet because their reasoning lives in files they’d need to look up.</p>

<p>Socrates couldn’t be lazy. Every belief had to be held deeply enough to defend in real time. That pressure produced a different kind of rigor: knowledge that had become part of him.</p>

<p>The turtle should aspire to this for what matters most. Not everything deserves deep internalization. But the things you value, the principles you build on, the reasoning behind your most important decisions: these should live in you, not just in your documents.</p>

<p>The pen extends reach; it shouldn’t replace depth.</p>

<h2 id="the-discipline-that-matters">The Discipline That Matters</h2>

<p>The real insight isn’t writing versus memory. It’s discipline and value.</p>

<p>Socratic discipline produces the sage who can reason through anything in the moment, powerful for dialogue and teaching. But it doesn’t compound beyond the sage’s presence. When Socrates died, his philosophy survived only because Plato wrote it down.</p>

<p>The turtle’s discipline produces artifacts that outlive any single conversation: documents, architectures, systems that others can learn from and extend. The turtle might not defend every position from memory, but the turtle builds civilizations.</p>

<p>Write to build what scales beyond yourself. Keep close to your mind what truly deserves articulation at any moment. The pen is a tool, not a substitute for understanding.</p>

<p>If you’ve spent your life feeling slower than others, needing more time, writing everything down, asking “why” when others seemed to just <em>know</em>, recognize what you’ve been building. Not despite the struggle, but through it.</p>

<p>Be the turtle with the pen. Build something that lasts, keep close what you value, and be ready to articulate, at any moment, the things that matter most.</p>]]></content><author><name>Steven Stuart</name><email>stevenstuartm@gmail.com</email></author><category term="learning" /><category term="teaching" /><category term="knowledge-sharing" /><category term="career-growth" /><summary type="html"><![CDATA[Those who must write to remember, ask 'why' to understand, and prioritize ruthlessly to focus end up developing the disciplines that let them teach, lead, and build things that outlast them.]]></summary></entry><entry><title type="html">How Shared Libraries Become Shared Shackles</title><link href="https://stevenstuartm.com/blog/2026/01/06/the-false-economy-of-shared-libraries.html" rel="alternate" type="text/html" title="How Shared Libraries Become Shared Shackles" /><published>2026-01-06T00:00:00+00:00</published><updated>2026-01-06T00:00:00+00:00</updated><id>https://stevenstuartm.com/blog/2026/01/06/the-false-economy-of-shared-libraries</id><content type="html" xml:base="https://stevenstuartm.com/blog/2026/01/06/the-false-economy-of-shared-libraries.html"><![CDATA[<p>This is a highly opinionated take on shared libraries and the damage they do to team autonomy and development tempo. Teams deliver value faster and more consistently when they can make decisions, ship changes, and evolve their domains without coordinating across organizational boundaries. Shared libraries erode exactly that independence.</p>

<p>The principle applies anywhere domains and teams need independence, but this post focuses on distributed architectures because that’s where the consequences are most severe. When independently deployable components, owned and operated by different teams, get bound together by shared packages, those packages undermine the very independence the architecture was designed to provide.</p>

<p>After watching costs explode for trivial tasks and critical production updates miss their deadlines in nearly every organization I have observed, I am willing to take a rather “extreme” stance on the subject.</p>

<h2 id="shared-libraries-violate-core-principles">Shared Libraries Violate Core Principles</h2>

<p>Distributing components isn’t just about distributing work. It’s about the Single Responsibility Principle applied at the system level: clear ownership, implementation isolation, and infrastructural independence. These benefits are often implicit in the decision to distribute, but they’re the whole point. The share-nothing principle makes this explicit. Services should be autonomous, independently deployable, and free from implementation coupling. When services share nothing, teams can deploy, scale, and evolve on their own terms, at their own tempo.</p>

<p>Shared libraries violate these principles. They couple teams through shared implementation despite being distributed in name, creating little monoliths that bind development tempo across teams that were meant to operate independently. What’s at stake isn’t code organization; it’s each team’s ability to make decisions, ship changes, and evolve their domain without waiting on teams that have different priorities and different timelines.</p>

<p>Yet the pitch keeps coming: “We have this code in three places. Let’s consolidate it into a shared library. We’ll save time, ensure consistency, and make everyone’s life easier.” It sounds reasonable, it really does, yet it ignores decades of architectural pain and lessons learned. The decision counts only the cost of duplication while ignoring, or badly underestimating, the cost of sharing across teams, domains, and technical boundaries.</p>

<p>There are important distinctions to draw here, like external libraries versus internal ones, SDKs versus shared packages, and whether this applies beyond distributed systems. We’ll address all of those. But first, the costs.</p>

<h2 id="the-costs-nobody-calculates">The Costs Nobody Calculates</h2>

<p>When someone proposes a shared library, they calculate the savings: “This code exists in five services. If we consolidate, we only maintain it once.”</p>

<p>What they don’t always sufficiently calculate:</p>

<p><strong>Version conflicts and upgrade pain.</strong> Five teams (at minimum) now depend on your library. They release on different cadences and at some point one or more teams require a breaking change. Now you’re either maintaining multiple versions indefinitely or forcing upgrades on teams that have other priorities. The “one place to maintain” becomes “one place that blocks everyone.”</p>

<p><strong>Teams blocked waiting for changes.</strong> A team needs functionality the library doesn’t have. They can’t just add it. They need to coordinate with the library owners, get the change approved, wait for a release, and then upgrade. What would have been a two-hour change becomes a two-week dependency chain.</p>

<p><strong>Debugging across boundaries.</strong> When something breaks, the investigation now spans your code and the library code. Your team doesn’t own the library. Maybe they don’t fully understand it. The abstraction that was supposed to simplify their lives has added a layer they have to dig through.</p>

<p><strong>Bloat or fragmentation, pick your poison.</strong> The library starts focused. Then another team needs something slightly different. Then another. The library accumulates features to serve multiple masters, becoming a grab-bag of loosely related functionality coupled together because they share a package, not because they belong together. The disciplined alternative is to split it into many small, focused packages, but that creates its own problem: an entourage of dependencies that each consuming team must track, version, and coordinate with. Instead of one bloated library blocking you, ten focused ones collectively recreate the same burden.</p>

<p><strong>Obscured accountability.</strong> Shared libraries don’t reduce your quality burden; they move it somewhere less visible. If the library has a bug, your service has a bug. Every service still needs its own load testing, chaos testing, penetration testing, and UAT regardless of whether the underlying code is shared or duplicated. The library doesn’t absorb responsibility for your service’s behavior. It just adds a dependency you don’t own and can’t fully verify.</p>

<h2 id="the-cohesion-and-coupling-diagnosis">The Cohesion and Coupling Diagnosis</h2>

<p>If two services genuinely need the same function, you have three possibilities:</p>

<p><strong>It’s a cohesion problem.</strong> That function belongs in one place and should be called, not duplicated. Extract it into a service with an API. Now there’s a clear owner, a clear contract, and no shared implementation coupling consumers together.</p>

<p><strong>It’s a coupling problem.</strong> You’ve drawn your boundaries wrong. The services that “need” the same code are actually more related than you thought. Reconsider where the boundary belongs rather than papering over the boundary violation with a shared dependency.</p>

<p><strong>It’s genuinely independent.</strong> The similarity is coincidental. Both services need to format dates or parse JSON or validate email addresses. Copy the code. Move on. The duplication costs less than the coordination, and the implementations can evolve independently as each service’s needs diverge.</p>
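<p>The third case in code form: two services each keep their own copy of a trivial validator, free to diverge as their needs do. (Service and function names are illustrative, and the regexes are rough approximations, not full RFC 5322 checks.)</p>

```python
import re

# billing_service/validation.py -- local copy, owned by the billing team
def is_valid_billing_email(address: str) -> bool:
    return bool(re.match(r"^[^@\s]+@[^@\s]+\.[^@\s]+$", address))

# notifications_service/validation.py -- independent copy; this team later
# tightened it to reject plus-addressing without asking anyone's permission
def is_valid_notification_email(address: str) -> bool:
    return bool(re.match(r"^[^@+\s]+@[^@\s]+\.[^@\s]+$", address))
```

<p>When the notifications team changed their rule, no version bump rippled through billing. That independence is the payoff of tolerating a few duplicated lines.</p>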

<p>A shared library is almost never the right answer because the problem it solves (duplicated code) rarely justifies the problems it creates (coupling, versioning, blocked teams).</p>

<p>The common rebuttal is “but if there’s a bug, I fix it once and it propagates everywhere.” Consider what code that would actually be in a well-architected distributed system. Cross-cutting concerns like logging, networking, and observability are handled by infrastructure through sidecars and service meshes. Security is already an acknowledged exception. Third-party libraries have their own maintenance cycles. What remains is business logic, and if your business logic is so coupled across services that a single bug requires simultaneous fixes everywhere, you don’t have a sharing problem; you have a boundary problem, which brings you back to the diagnosis above.</p>

<h2 id="dont-reinvent-the-wheel-vs-dont-share-internal-types">Don’t Reinvent the Wheel vs. Don’t Share Internal Types</h2>

<p>There’s a meaningful distinction between using established external libraries and sharing internal abstractions.</p>

<p>Using mature, well-tested libraries for universal problems makes sense. Logging frameworks, HTTP clients, serialization libraries, and authentication middleware exist because these problems are universal and well-understood. Someone else solved them better than you would, and the cost of depending on their solution is low because the solution is stable.</p>

<p>Sharing your internal <code class="language-plaintext highlighter-rouge">CustomerDto</code> across services is different. Sharing your “standard” repository pattern is different. Sharing your domain models between bounded contexts is different. These aren’t universal problems with stable solutions. They’re your internal abstractions, and forcing them on other teams assumes those teams should think the same way you do.</p>

<p>The distinction matters: external libraries abstract universal problems. Internal shared libraries impose your specific mental model on teams that might have legitimately different needs.</p>

<h2 id="sdks-are-different">SDKs Are Different</h2>

<p>There’s also an important distinction between shared libraries and SDKs published for external consumers.</p>

<p>An SDK abstracts what you expose: the public contract of a service or platform. A good SDK earns its existence by encoding integration complexity that would be expensive and error-prone for every consumer to reimplement: orchestrating multi-step workflows, managing state across API calls, handling idempotency, and abstracting version differences. The value isn’t hiding HTTP calls (documentation handles that); it’s centralizing integration logic complex enough to justify the maintenance cost across supported runtimes.</p>

<p>An SDK also has a different lifecycle. The platform is built first; the SDK comes afterward for a different audience. Its development and release cycles are separate from the internal teams building features, because the dynamics with external customers differ from the dynamics between internal teams.</p>

<p>A shared library abstracts how you think internally: your domain models, your patterns, your “standard way” of doing things. It exists because someone decided other teams should think the same way. The shared library serves a governance impulse, not the consumer. And unlike an SDK, it tries to couple internal teams to the same release cycle and the same implementation decisions.</p>

<p>The SDK says: “Here’s how to use our thing.”
The shared library says: “Here’s how you should build your thing.”</p>

<p>One is a service to consumers. The other is an imposition on autonomous teams disguised as help.</p>

<h2 id="your-runtime-already-solved-this">Your Runtime Already Solved This</h2>

<p>The shared library pitch often targets “utility code” that your runtime already provides. If you’re using .NET, the framework gives you HTTP clients, JSON serialization, logging abstractions, dependency injection, and configuration management. Why would you need an internal shared library wrapping <code class="language-plaintext highlighter-rouge">HttpClient</code> when <code class="language-plaintext highlighter-rouge">HttpClient</code> exists and is battle-tested by millions of applications?</p>

<p>The urge to share usually targets exactly this kind of code: wrappers, helpers, and utilities that add a thin layer over framework primitives. But the framework primitives are already shared. They’re already tested. They’re already documented. Your wrapper just adds coordination overhead on top of something that didn’t need wrapping.</p>
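<p>A caricature, but a common one: the shared “utility” whose entire body is a single runtime call. (The wrapper below is a hypothetical example in Python, standing in for the same pattern around <code class="language-plaintext highlighter-rouge">HttpClient</code> or any other framework primitive.)</p>

```python
import json

# shared_utils/serialization.py -- the "shared" wrapper
def serialize(obj):
    # a package, a version, and an owner... wrapped around one stdlib call
    return json.dumps(obj, sort_keys=True)

# Any consumer can call the primitive directly and drop the dependency:
payload = json.dumps({"id": 1, "name": "order"}, sort_keys=True)
```

<p>The two lines do exactly the same work; only one of them requires tracking a version and waiting on another team’s release.</p>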

<p>This varies by ecosystem. For example, Python’s dependency management is notoriously painful, and shared internal libraries compound the problem. You’re coordinating versions across teams in an ecosystem that already struggles with version conflicts. The runtime that makes sharing easiest is often the one where sharing is least necessary.</p>

<h2 id="the-principle-is-broader-than-distribution">The Principle Is Broader Than Distribution</h2>

<p>An obvious question: if shared libraries are a problem in distributed systems, were they also a problem in the modular monolith that preceded them?</p>

<p>Wherever different teams own different domains, yes. In a modular monolith, shared packages between domains still couple teams to the same change cycles. The difference is severity. In a monolith, the blast radius is contained: teams share a deployable and version conflicts manifest as build errors rather than runtime failures. That pain is immediate but manageable. In a distributed system, that same coupling spans deployment pipelines, release cadences, and versioning strategies. A change that would have been a merge conflict in a monolith becomes a multi-team coordination effort with blocked releases and stale dependencies.</p>

<p>Layered architectures sidestep this by design because layers already enforce separation; sharing across layers is a violation of the architecture itself, not a shared library problem. But in domain-oriented architectures, the discipline matters regardless of deployment topology. If Domain A and Domain B need to evolve independently, coupling them through shared implementation undermines that independence whether they’re projects in the same solution or services in different repositories.</p>

<h2 id="no-architecture-style-wants-this">No Architecture Style Wants This</h2>

<p>The shared library pitch assumes that code reuse across boundaries is inherently valuable. But examine any coherent architectural paradigm and the opposite becomes clear.</p>

<p><strong>Layered architecture</strong> separates concerns into distinct layers. If your presentation layer and your data layer share a library, you’ve coupled what you explicitly designed to be independent.</p>

<p><strong>Domain-driven architecture</strong> creates autonomous domains with clear boundaries. If Domain A and Domain B share implementation code, they’re not really autonomous. They’re a distributed monolith with extra steps.</p>

<p><strong>Functional/technical architecture</strong> defines components accessed through explicit interfaces. The behavior should live in a component that others call, not in a library that everyone imports.</p>

<p><strong>Polyglot architectures make it worse.</strong> The shared library pitch assumes a homogeneous technology landscape that rarely exists. If your organization has services in C#, Java, Python, and Go, do you maintain four ports of every shared library and keep them in sync? In polyglot environments, the “shared” library becomes a second-class citizen in every language except the one the authoring team actually uses. The promise of consistency becomes a guarantee of inconsistency across language boundaries.</p>

<h2 id="the-api-client-library-obsession">The API Client Library Obsession</h2>

<p>The most common incarnation of shared library dysfunction is the API client package: a library containing contracts, DTOs, and client code that consumers are expected to import when calling your service. I have never seen this pattern result in anything short of chaos.</p>

<p>The pitch sounds reasonable: “We’ll publish a client library so consumers don’t have to write their own HTTP calls or define their own contracts.” But this solves a problem that doesn’t exist while creating several that do.</p>

<p><strong>Every API should have documentation describing its contracts.</strong> If your API is well-documented with clear schemas, consumers can generate or write their own clients trivially. The documentation is the contract. A client library doesn’t replace documentation; it’s a poor substitute for it.</p>

<p><strong>Every consumer has different needs.</strong> Service A might need three fields from one endpoint. Service B might need ten fields from a different endpoint. Service C might need to call the same endpoint but transform the response differently. When you force everyone to use your client library, you’re imposing your view of how your API should be consumed. But consumers know their own needs better than you do.</p>
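<p>As a sketch of what “write their own client” looks like in practice (the endpoint response shape and field names here are hypothetical), a consumer can define its own narrow contract from the API documentation and map the response onto it in a few lines:</p>

```python
import json
from dataclasses import dataclass

# The consumer's own contract: just the fields this service needs,
# not the producer's full DTO. Field names here are illustrative.
@dataclass(frozen=True)
class OrderSummary:
    order_id: str
    status: str
    total_cents: int

def parse_order(payload: str) -> OrderSummary:
    """Map the producer's documented JSON response onto the consumer's
    own structure, ignoring every field the consumer doesn't use."""
    data = json.loads(payload)
    return OrderSummary(
        order_id=data["id"],
        status=data["status"],
        total_cents=data["totalCents"],
    )

# A hypothetical response body; unknown fields are simply ignored.
raw = '{"id": "ord-42", "status": "shipped", "totalCents": 1999, "internalOnly": true}'
order = parse_order(raw)
```

<p>Because the consumer owns this mapping, fields it doesn’t use can change on the producer’s side without a coordinated release, and no shared package version ever needs bumping.</p>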

<p><strong>Client libraries confuse application concerns with infrastructure concerns.</strong> Teams building client libraries inevitably add caching strategies, retry policies, circuit breakers, and connection pooling configurations. These aren’t client concerns. They’re infrastructure concerns that belong in service meshes, sidecars, and API gateways where they can be configured, observed, and tuned without redeploying applications.</p>

<p>A client library buries these decisions in application code where they’re invisible to operations and impossible to change without a coordinated release across every consumer. The library author predicts traffic patterns and failure modes as if every consumer will behave identically. They won’t.</p>

<p><strong>The absurdity becomes obvious with frontend consumers.</strong> Nobody would publish an npm package for their React app to import API contracts, or a Swift package for iOS. Frontend teams read documentation, call endpoints, and map responses to whatever structures suit their application. Backend services have the same needs. The consumer’s requirements don’t change based on what language they’re written in.</p>

<p>This reflexive reach for client libraries has been conditioned by years of cargo-culting patterns from contexts where they made sense (public cloud SDKs with complex auth flows) into contexts where they don’t (internal services with straightforward REST endpoints). It’s a tax on every consumer and a maintenance burden on every producer, justified by an efficiency that never materializes.</p>

<h2 id="the-governance-theater-problem">The Governance Theater Problem</h2>

<p>Shared libraries often emerge from a governance impulse: “Teams are doing things inconsistently. We need to standardize.”</p>

<p>The instinct isn’t wrong; consistency matters. But shared libraries are governance theater. They create the appearance of consistency without addressing the underlying problem.</p>

<p>If teams are building things inconsistently, ask why. Usually it’s because they don’t share the same understanding of what matters, what the tradeoffs are, and what “good” looks like. That’s an alignment problem. It requires conversation, documentation, and shared values.</p>

<p>Forcing everyone to use the same library doesn’t create alignment. It creates compliance. Teams will use your library and still build inconsistent systems, because the library encodes the output of your thinking, not the thinking itself.</p>

<p>Governance through values: “Here’s why we authenticate this way, here are the tradeoffs, here’s what we’re optimizing for. Align your implementation to these principles.”</p>

<p>Governance through code: “Use this library or you’re non-compliant.”</p>

<p>The first creates alignment while preserving autonomy. Teams understand the principles and can make good decisions in novel situations. The second creates coupling while providing the illusion of alignment. Teams comply without understanding, and the moment they hit a situation the library doesn’t cover, they’re lost.</p>

<h2 id="the-exception-security-protocols">The Exception: Security Protocols</h2>

<p>There’s one domain where shared libraries make sense: security protocols such as ingress handling, service-to-service authentication, and encryption standards.</p>

<p>Why security is different:</p>

<ul>
  <li><strong>The domain is stable and well-understood.</strong> Authentication patterns don’t change week to week. The library doesn’t need constant evolution to serve its consumers.</li>
  <li><strong>The cost of getting it wrong is catastrophic.</strong> Security isn’t a place for teams to make independent decisions and learn from mistakes. The blast radius is too large.</li>
  <li><strong>The surface area is thin and focused.</strong> A good security library does one thing. It’s not a grab-bag of utilities that grows to serve multiple purposes.</li>
  <li><strong>Autonomy isn’t the goal.</strong> You actually want teams to do security the same way. The coupling is a feature, not a bug.</li>
</ul>

<p>Even here, the library should be as minimal as possible. Provide the security primitive and get out of the way. The moment it starts accumulating “helpful” utilities beyond its core purpose, it’s sliding toward the problems that plague other shared libraries.</p>

<h2 id="what-to-do-instead">What to Do Instead</h2>

<p>When you feel the urge to create a shared library, pause and diagnose the actual problem:</p>

<p><strong>If it’s a capability multiple services need:</strong> Build a service, not a library. Expose an API. Now there’s clear ownership, independent deployment, and consumers that can’t get version-locked.</p>

<p><strong>If it’s a pattern you want to standardize:</strong> Write documentation. Explain the principles, the tradeoffs, and the reasoning. Let teams implement the pattern in their own codebases. They’ll understand it better than if they’d just imported your abstraction.</p>

<p><strong>If it’s truly just duplicated code:</strong> Let it be duplicated. The coordination cost of sharing exceeds the maintenance cost of duplication. And the duplicates can evolve independently as needs diverge.</p>

<p><strong>If it’s a security primitive:</strong> Fine. Build the library. Keep it minimal, stable, and focused. Recognize it’s a necessary evil, not a model to emulate.</p>

<p>The shared library is a solution to a problem that rarely exists in the form people imagine. Code duplication isn’t what slows teams down. Coordination overhead is. Obsessing over shared code compliance and version alignment diverts attention from what actually produces consistency: shared understanding of principles, tradeoffs, and what “good” looks like. Teams that understand the reasoning make good decisions without needing a library to make decisions for them.</p>

<p>Share values, and the shared library more often becomes unnecessary.</p>]]></content><author><name>Steven Stuart</name><email>stevenstuartm@gmail.com</email></author><category term="architecture" /><category term="distributed-systems" /><category term="microservices" /><category term="governance" /><summary type="html"><![CDATA[Shared libraries promise reuse and consistency but more often bind team autonomy and development tempo through coupling and coordination overhead. The consistency they claim to provide is better achieved by sharing principles, tradeoffs, and values rather than sharing implementation.]]></summary></entry><entry><title type="html">AI in Practice: The Skill Inversion</title><link href="https://stevenstuartm.com/blog/2025/12/31/ai-in-practice-the-skill-inversion.html" rel="alternate" type="text/html" title="AI in Practice: The Skill Inversion" /><published>2025-12-31T00:00:00+00:00</published><updated>2025-12-31T00:00:00+00:00</updated><id>https://stevenstuartm.com/blog/2025/12/31/ai-in-practice-the-skill-inversion</id><content type="html" xml:base="https://stevenstuartm.com/blog/2025/12/31/ai-in-practice-the-skill-inversion.html"><![CDATA[<h2 id="the-bottleneck-has-moved">The Bottleneck Has Moved</h2>

<p>When code generation was the constraint, technical skill was the differentiator. Developers who could implement faster, debug quicker, and architect more elegantly commanded premium value. Business understanding was nice to have, a soft skill that complemented the hard skills that actually mattered.</p>

<p>AI code generation is dissolving this hierarchy.</p>

<p>The developers I see thriving aren’t the ones who code fastest. They’re the ones who understand what to build, why it matters, and what tradeoffs are acceptable. Technical execution is increasingly commoditized. Business judgment is not.</p>

<h2 id="what-actually-made-me-better">What Actually Made Me Better</h2>

<p>I became a better developer as I became more business-minded. Not because I learned new languages or frameworks, but because I developed a different relationship with the work.</p>

<p>Understanding value and customer alignment changed how I approached every decision:</p>

<ul>
  <li>I stopped building the wrong thing well—the most expensive mistake in software</li>
  <li>I could evaluate tradeoffs against actual value, not abstract “best practices”</li>
  <li>I knew when “good enough” was actually good enough</li>
  <li>I understood the cost of delay—shipping imperfect beats perfecting endlessly</li>
</ul>

<p>Code knowledge tells you <em>how</em>. Business understanding tells you <em>what</em>, <em>why</em>, and <em>whether</em>. AI is getting remarkably good at <em>how</em>. It has no grasp of <em>why</em>.</p>

<h2 id="the-bifurcation">The Bifurcation</h2>

<p>The middle is hollowing out. Two paths are emerging, and they’re diverging, not converging.</p>

<p><strong>The Broad-and-Human Path</strong></p>

<p>This path centers on value judgment, customer alignment, tradeoff navigation, domain expertise, and system thinking. It requires broader context and serves fewer people directly. Becoming more human means developing empathy, judgment, relationships, and context that machines cannot replicate.</p>

<p><strong>The Deep-and-Mechanical Path</strong></p>

<p>This path leads toward R&amp;D, algorithms, performance optimization, novel architectures, security research, and low-level systems. It demands narrower focus and extreme depth. Becoming more mechanical means precision, relentless optimization, and working at the frontier where AI assistance runs out.</p>

<p>For those drawn to this path: the bar rises dramatically. Knowing a language well becomes pushing the boundaries of what’s computationally possible. Implementing algorithms becomes inventing them. Using frameworks becomes building them. Following security practices becomes discovering vulnerabilities and designing novel defenses. You compete globally for positions that require genuine innovation, building the substrates that AI and others build products on. This path demands excellence that few can sustain, but for those who can, it remains valuable precisely because it’s rare.</p>

<p><strong>The Vanishing Middle</strong></p>

<p>Between these paths, roles are losing leverage: the coder who implements specs without questioning them, the integration specialist whose value was knowing API quirks, the framework expert whose depth was a single ecosystem, the ticket-taker who translates Jira stories into pull requests.</p>

<p>These roles don’t vanish overnight. But the leverage shifts. One business-aligned architect with AI assistance can accomplish what previously required a team. Both paths are valid, and neither is easy. But the space between them is compressing.</p>

<h2 id="a-warning">A Warning</h2>

<p>Tool-orientation without value-orientation is increasingly precarious. AI is a better tool-operator than most humans. It doesn’t tire, doesn’t context-switch, doesn’t forget syntax. If your value proposition is “I can use the tools,” you’re competing on terrain where AI has structural advantages.</p>

<p>This isn’t new wisdom. Tool-orientation has always caused friction when divorced from value and alignment. The developers who thrived before AI were already the ones who understood the business context of their work. AI just accelerates the consequences.</p>

<h2 id="the-atrophy-concern-and-why-its-familiar">The Atrophy Concern (and Why It’s Familiar)</h2>

<p>There’s a darker worry beneath the surface: what happens to our skills as we rely on AI?</p>

<p>Right now, senior engineers with decades of hard-won intuition can leverage AI as a force multiplier. They know when AI is confidently wrong. They have the architectural judgment to evaluate generated code. They built debugging instincts from years of suffering through problems manually.</p>

<p>But every time AI handles something, you get a little worse at handling it yourself. Skills require practice to maintain. Outsource the practice and the skill decays. If AI stalls or regresses, will we still have the competence to engineer without it, or even to continue using it at our current level?</p>

<p>This concern is real, but it’s also not new.</p>

<p>Could you build a radio if you needed to? Could you manufacture a car, synthesize medicine, or grow enough food to feed yourself for a month? Civilization is dependency. Specialization is the trade. We gave up self-sufficiency for leverage a long time ago.</p>

<p>Every technological layer creates the same pattern: a new capability emerges, early adopters with pre-existing skills use it as force multiplier, the next generation learns with the tool rather than before it, and the underlying skill becomes specialized knowledge held by few. We don’t mourn that most people can’t forge steel or build semiconductors. We accept that specialists exist and the rest of us build on their work.</p>

<p>AI is another layer in this stack. Some skills will atrophy; that’s the trade. We will lose capabilities; that’s guaranteed. What matters is whether you’re positioning yourself to provide value at the new layer, or holding onto skills being absorbed into the substrate.</p>

<p>The people who thrived weren’t the ones who could build radios. They were the ones who understood what to do with radios.</p>

<h2 id="the-learning-inversion">The Learning Inversion</h2>

<p>This brings us to a question that sounds new but isn’t: how do people develop judgment without grinding through the middle?</p>

<p>The honest answer is that the old path was never as necessary as we pretended. We learned the hard way, spending years on syntax, framework quirks, and theoretical foundations before we were trusted with real decisions. Much of that time was waste. We learned “computer science” when we needed to learn “this job.” We studied theory for hypothetical problems while the actual problems sat waiting.</p>

<p>This was always a trade-versus-theory problem. Traditional education and career paths optimized for theoretical completeness, not practical judgment. “Learn the fundamentals first, apply later” sounds rigorous, but it mostly meant years of gatekeeping before you got to do the work that actually built intuition.</p>

<p>AI doesn’t just hollow out the middle. It offers a way through.</p>

<p>When coding takes a fraction of the time, you can redirect that effort toward what actually matters: the domain, the users, the constraints, the tradeoffs. You still need to understand security, architecture, and system thinking. But you can acquire that knowledge in context, while solving real problems, rather than stockpiling it in advance for scenarios that may never come.</p>

<p>Focus on <em>this</em> job, <em>this</em> problem, <em>this</em> domain. Generalist knowledge accumulates naturally from solving diverse real problems. It doesn’t require years of abstract preparation.</p>

<p>I say this as someone who took the long path. The grind taught me things, but it also taught me how much of it was unnecessary. That perspective is exactly what lets me tell you to skip what we went through. We know which parts mattered because we suffered through the parts that didn’t.</p>

<p>The middle was always a holding pattern, not a destination. AI just makes that visible.</p>

<h2 id="choosing-the-broad-path">Choosing the Broad Path</h2>

<p>If you’re drawn toward the human side of this bifurcation, the answer isn’t to learn more tools or chase the latest framework. The answer is older than AI.</p>

<p><strong>Market yourself, not just your skills.</strong> Skills are inputs. Value is output. Organizations don’t need people who can code; they need people who can solve problems that matter. Position yourself around the problems you solve, not the tools you use.</p>

<p><strong>Become value-oriented, not tool-oriented.</strong> Every technical decision exists in a business context. What are we trying to achieve? For whom? What does success look like? What’s the cost of being wrong? These questions matter more than implementation elegance.</p>

<p>This is the human work that AI cannot do. It requires empathy, judgment, relationship-building, and context that spans conversations, projects, and years. The middle is not a resting place; it’s a transition zone, and it’s narrowing. The tools will keep getting better. Either you’re wielding them toward value, or you’re being replaced by someone who is.</p>]]></content><author><name>Steven Stuart</name><email>stevenstuartm@gmail.com</email></author><category term="ai" /><category term="career" /><category term="architecture" /><category term="leadership" /><summary type="html"><![CDATA[As AI commoditizes code generation, the bottleneck shifts from technical execution to business judgment. The middle is hollowing out: you either go deep-and-mechanical or broad-and-human.]]></summary></entry><entry><title type="html">To Create is to Choose</title><link href="https://stevenstuartm.com/blog/2025/12/24/to-create-is-to-choose.html" rel="alternate" type="text/html" title="To Create is to Choose" /><published>2025-12-24T00:00:00+00:00</published><updated>2025-12-24T00:00:00+00:00</updated><id>https://stevenstuartm.com/blog/2025/12/24/to-create-is-to-choose</id><content type="html" xml:base="https://stevenstuartm.com/blog/2025/12/24/to-create-is-to-choose.html"><![CDATA[<blockquote class="pull-quote">
<p>"To create is to choose."<br />— Hadrian Marlowe, <em>Sun Eater</em> by Christopher Ruocchio</p>
</blockquote>

<p>The fictional character, Hadrian, speaks these words before sacrificing everything for a good he cannot prove. He cannot see the exact outcome. But across his centuries-long life, he has perceived a truth greater than himself, a highest good that has proven itself through experience and reasoning. And that truth demands sacrifice, not for itself, but for all of humanity. He doesn’t have a perfect solution. He chooses anyway, because standing still in the current of causality isn’t rest. It’s dissolution. Creation is the only alternative.</p>

<p>But Hadrian could only make that choice because someone once sacrificed everything to make him into the man he would become. The grand sacrifice requires a foundation, and that foundation is built through the quieter, more common sacrifice of mentoring. We create the people capable of choosing well by first choosing to invest in them.</p>

<p>So much of early childhood is about obedience. Don’t touch the stove. Hold my hand in the parking lot. Stay where I can see you. These aren’t arbitrary rules imposed for control; they’re survival constraints. A child who doesn’t obey may not get the chance to understand why the rules existed. But obedience isn’t the destination. It’s the minimum viable foundation. The parent who only demands compliance fails the child just as surely as the one who skips it entirely. At some point, the child needs to walk on their own. And that transition from obedience to understanding to independent action is where the real work of mentoring begins. This isn’t just about parenting. It’s about any relationship where one person helps another grow: teaching, coaching, mentoring. The pattern is the same, and getting it wrong carries the same consequences.</p>

<p>The perspective I’m sharing here didn’t come from a single place. It crystallized from five sources that arrived at similar structures through different paths: science fiction, Christian apologetics, literary fiction, moral philosophy, and clinical psychology. That convergence is part of why I trust it.</p>

<blockquote class="pull-quote">
<p>Truth isn't just something we search for. It's something that calls us to selfless action for the hope of future generations. What is true often reveals itself through what we're willing to sacrifice for others.</p>
</blockquote>

<h2 id="the-path">The Path</h2>

<p>Growth follows a causal chain. Each stage depends on what came before, and each stage is necessary but insufficient on its own.</p>

<p><strong>Obedience</strong> keeps you alive long enough to form habits and begin walking on your own. It is not understanding, but it creates the safety to eventually develop understanding.</p>

<p><strong>Knowledge</strong> provides orientation. It reminds you of what has been proven true, false, more useful, or less useful. Knowledge shared is not the same as knowledge internalized, and neither is the same as understanding. Without orientation, you wander.</p>

<p><strong>Understanding</strong> guides you toward better decisions. Knowing that something works is different from knowing why it works and when it applies. Understanding bridges that gap, though it cannot guarantee you’ll choose wisely.</p>

<p><strong>Action</strong> externalizes your decisions and tests your internal certainty against reality. You can understand something perfectly in theory and still be wrong. Action is where truth meets consequence.</p>

<p><strong>Reflection</strong> generates insight through honest examination. This is healthier when it includes perspective beyond your own, whether through collective dialogue or paired mentoring rather than solo rumination.</p>

<p><strong>Insight</strong> approaches truth. But it only arrives when the individual commits to gathering perspective beyond their own experience and assumptions.</p>

<p>Skip a step and the structure weakens.</p>

<h2 id="what-is-truth">What is Truth</h2>

<h3 id="what-truth-is-and-isnt">What Truth Is and Isn’t</h3>

<p>Truth exists independent of our perception. Physics didn’t begin when Newton published the Principia; we just finally had language to describe what was always operative. The discovery didn’t create the reality; it revealed it.</p>

<p>And yet, truth’s independent existence isn’t the point; the search for it is.</p>

<p>Truth is a force, always present, given dimension through choice. Our discovery doesn’t create it; our action reveals and maintains it. The honest seeker doesn’t claim to possess absolute truth but remains oriented toward it. The humility isn’t “there is no truth” but “I cannot contain all of it, yet I must still act.”</p>

<p>What is more useful and more good emerges through the search, through ideas tested against each other and proven through time. Not all ideas are equal. The search is how we find out which ones hold.</p>

<p>Even honest religious people will tell you they do not know, and do not need to know, the full thoughts and will of God. The search produces what is more useful and more good. The destination, if there is one, remains beyond full comprehension. And that’s acceptable, because the search itself is what improves us.</p>

<h3 id="the-fallacy-of-real-vs-true">The Fallacy of “Real” vs. “True”</h3>

<p>It’s easy to fall into the trap where the group or individual defines what can be true by what is experienced in the moment. “This is the real world,” people say, as if current circumstance is the arbiter of what’s possible or valid.</p>

<p>This is backwards. What is true can be self-evident or can require rigorous proof through generations. But it is truth that must be made into habit, understood, taught, and most importantly, decided on through action. Again and again, without losing orientation and without forgetting what has been and what could be.</p>

<p>To say that what is “real” defines what is true is to fall into nihilism. It states that there can be no objective truth, and subsequently no objective purpose, given the fickleness of humans and the ever-changing substance through which we experience life. But truth doesn’t need our permission to exist. It doesn’t need to be perceived. It operates whether we acknowledge it or not.</p>

<blockquote class="pull-quote">
<p>Truth exists. What separates growth from stagnation is whether we're still searching for it or assuming we've already found it.</p>
</blockquote>

<h2 id="the-danger-of-assumption">The Danger of Assumption</h2>

<p>Regression doesn’t usually announce itself. It creeps in when truth is assumed rather than searched for.</p>

<p>This often follows from lost historical context. When people forget why principles exist, what failures they emerged from, and what problems they solved, they start treating hard-won wisdom as arbitrary constraint. They discard it. Then they rediscover the failures their ancestors already paid for.</p>

<p>The assumption that current practice is sufficient, that the search is complete, is where regression begins.</p>

<p>The honest approach isn’t “we’ve arrived” but “we’re still searching.” Not because truth doesn’t exist, but because our grasp of it is always partial, always requiring maintenance.</p>

<h2 id="which-story-do-you-prefer">Which Story Do You Prefer?</h2>

<p>Not all ideas are equal. Some have been tested against alternatives and proven through time and collective strife. Others haven’t survived their first contact with reality. The search process itself acts as a filter.</p>

<p>But here’s the harder question: when the evidence doesn’t settle the matter, when you can’t prove which version is true, what then?</p>

<p>This isn’t relativism. Asking “which story do you prefer?” doesn’t mean truth doesn’t matter. It means that when evidence alone can’t settle the question, we must still choose. And that choice reveals something about us.</p>

<p>The version of the story we prefer, the one that accounts for both what has been proven and what we aspire toward, becomes our orientation. It shapes how we act, what we pursue, and what we pass on. What matters isn’t which story is easiest or most comfortable, but which story we’re willing to live by.</p>

<p>Your experience is real, but it’s also limited. The collective search across centuries has been stress-tested across contexts, cultures, and circumstances that no individual could encounter in a single lifetime. That search doesn’t guarantee truth, but it narrows the field. And when you must still choose between remaining possibilities, the honest path forward isn’t paralysis. It’s committing to the story that best accounts for what we know and what we hope for, while remaining open to revision as understanding deepens.</p>

<h2 id="what-we-owe-each-other">What We Owe Each Other</h2>

<p>The chain is fragile because halted progress doesn’t stay halted. It regresses. Standing still isn’t neutral. The current of causality keeps moving whether you paddle or not. Progress requires sustained effort against entropy. Regression requires nothing.</p>

<blockquote class="pull-quote">
<p>Millennia to build, a generation to forget.</p>
</blockquote>

<p>You can learn all there is to learn, and the very next generation can find itself at the beginning with little effort. Creation is the only alternative to dissolution. The choice to keep moving, to keep searching, to keep creating is what keeps us alive, and that choice is what we owe each other.</p>

<p>This debt isn’t abstract. It’s the parent who shifts from “because I said so” to “let me show you what happens.” It’s the teacher who explains not just what to do but why it matters. It’s the mentor who recognizes when obedience has become habit and understanding can begin.</p>

<h2 id="the-convergence-of-sources">The Convergence of Sources</h2>

<p>These ideas didn’t emerge from a single tradition. They arrived from multiple directions at once, which is part of why I trust them.</p>

<p><strong>Christopher Ruocchio’s <em>Sun Eater</em></strong> gives this post its title. Hadrian Marlowe doesn’t act from certainty; he acts from conviction. He perceives a good worth pursuing even though he cannot prove it will matter. He chooses anyway, because choosing is the only alternative to dissolution.</p>

<p><strong>C.S. Lewis’s <em>Screwtape Letters</em></strong> makes the fallacy of “real” versus “true” explicit. A senior demon advises: don’t attack truth directly. Keep the human focused on “real life,” the immediate, the mundane. Make truth feel abstract and impractical. The most effective corruption doesn’t announce itself; it quietly replaces orientation toward truth with absorption in circumstance.</p>

<p><strong>Yann Martel’s <em>Life of Pi</em></strong> poses the question directly. After surviving months at sea, Pi tells investigators two versions of his story: one with animals, one without. Neither can be proven. He asks: “Since it makes no factual difference to you and you can’t prove the question either way, which story do you prefer?” When the investigator chooses the story with animals, Pi responds: “And so it goes with God.” The story we choose reveals who we are.</p>

<p><strong>T.M. Scanlon’s contractualism</strong>, dramatized in <em>The Good Place</em>, reframes ethics from individual virtue to mutual obligation. It shifts from “what is right?” in the abstract to “what do we owe each other?” The choice to create isn’t just about personal meaning; it’s about the debt we carry to those who come after. The mentor owes the mentee orientation. The parent owes the child a foundation to stand on.</p>

<p><strong>Jordan Peterson</strong> argues that meaning comes from responsibility, not comfort. You find purpose by choosing to bear weight that matters. He also warns against ideological possession, where people stop searching and start assuming. The framework becomes a substitute for genuine inquiry, and regression begins even while the person believes they’ve arrived at truth.</p>

<h2 id="to-create-is-to-choose">To Create is to Choose</h2>

<p>The cycle continues: obedience becomes habit, knowledge becomes orientation, understanding becomes better decisions, action tests certainty against reality, reflection generates insight, and honest insight approaches truth.</p>

<p>Then you pass it on. The next generation begins again, not from zero if you did your job, but from wherever you managed to carry them before setting them down to walk on their own.</p>

<p>What is true often reveals itself through what we’re willing to sacrifice for others. The selfless act, the choice made for future generations rather than for ourselves, is how truth becomes visible. It’s how we know what we actually believe.</p>

<p>That’s the debt. That’s the obligation. To create is to choose, and choosing to create for those who come after is how we provide hope to those yet to come as much as for ourselves now.</p>]]></content><author><name>Steven Stuart</name><email>stevenstuartm@gmail.com</email></author><category term="leadership" /><category term="mentoring" /><category term="philosophy" /><category term="growth" /><summary type="html"><![CDATA[Progress requires sustained effort against entropy; regression requires nothing. The choice to create is what we owe each other.]]></summary></entry></feed>