
The Compiler Is Coming

Compilers nullified assembly expertise. Garbage collectors erased years of memory management discipline. The pattern is unmistakable, and it's happening again.

The Pattern of Abstraction

Every major paradigm shift in computing has followed the same arc. A skill set that was painfully honed and highly prized becomes suddenly inconsequential, while entirely new domains of excellence emerge in its place.

Assembly programmers spent years mastering register allocation, instruction scheduling, and the intimate knowledge of how their code mapped to silicon. Then compilers arrived, and that knowledge stopped being the daily work. It didn’t vanish, and understanding how hardware executes instructions remained valuable, but it was no longer what made someone productive. What replaced it was a new layer of concerns: structured programming, type systems, the beginnings of software architecture.

Higher-level languages introduced their own hard-won disciplines: test suites, continuous integration, version control workflows, design patterns. Then memory-managed languages like Java and C# arrived and nullified an entire category of expertise that good developers had built over years of painful experience: tracing malloc calls, thinking about stack growth direction and heap location, guarding against buffer overruns. These were real skills, and they took years to develop. And then the garbage collector made them irrelevant in the environments most developers migrated to.

Each time, the same thing happened: the job didn’t get easier. The challenges took different forms.

The AI-Assisted Paradigm

The current transition follows this pattern exactly. AI coding assistants are less a tool bolted onto the existing paradigm than a shift in the level of abstraction at which engineers operate.

It’s natural to resist this framing. Previous transitions always had practitioners who insisted the new abstraction was inferior, that real engineering meant working closer to the metal and automation would produce worse results than skilled human effort. They were often right that something was lost. They were consistently wrong that the loss mattered more than the gain.

Consider that the compiler didn’t delete software engineering, but rather changed what software engineering meant. Memory management didn’t disappear as a concern. It just moved from the programmer’s daily attention into the runtime, freeing the programmer to focus on higher-order problems. I believe the AI-assisted paradigm is the next iteration of this same move. The only question is how we navigate it well.

What Fades

The skills that fade cluster around anything close to the implementation layer, the moment-to-moment work of writing code.

Data structures and algorithms, as a demonstration of engineering capability, are the clearest example. This is the knowledge learned in undergraduate courses and practiced in coding challenges and whiteboard interviews. It was once important that an engineer could implement a binary search with a twist, or manipulate linked lists with precision. That knowledge hasn’t become false, but it’s no longer the point. It’s sufficient now to understand binary search as a concept and trust the implementation to the AI, just as earlier generations learned to trust register allocation to the compiler. Where you once pulled a function from a library, you now describe the behavior you need and the AI produces it, often making optimization decisions you would have made yourself.
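The gap between concept and implementation is easy to see with binary search itself. The concept fits in a sentence: halve the sorted search range each step. The sketch below (a generic C version, not any particular library’s) carries the details that used to be the tested skill: the half-open range, the boundary updates, the overflow-safe midpoint.

```c
#include <stddef.h>

/* Binary search over a sorted int array.
 * Returns the index of `target`, or -1 if absent. */
int binary_search(const int *a, size_t n, int target) {
    size_t lo = 0, hi = n;               /* half-open range [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2; /* avoids overflow, unlike (lo+hi)/2 */
        if (a[mid] == target)
            return (int)mid;
        else if (a[mid] < target)
            lo = mid + 1;                /* discard lower half, keep mid out */
        else
            hi = mid;                    /* discard upper half, keep mid out */
    }
    return -1;
}
```

Every line after the signature is exactly the kind of off-by-one-prone detail that whiteboard interviews probed, and that now routinely comes from a prompt rather than from memory.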

Syntax-level defensive habits follow the same path. Writing null == variable instead of variable == null so a dropped equals sign triggers a compiler error.1 Caching patterns learned through years of code reviews. Formatting instincts burned into muscle memory. These are the kinds of skills that made someone a good programmer, and they simply don’t apply to working at a higher level of abstraction.
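The Yoda habit guards against a hazard that is real in C: an accidental single = inside a condition compiles cleanly as an assignment, while reversing the operands turns the same slip into a compile error. A minimal illustration (function names are mine, for demonstration only):

```c
/* Demonstrates the bug the habit guards against: a dropped '='
 * turns the comparison into an assignment. The condition is then
 * always true, and `status` is silently clobbered. */
int buggy_check(int status) {
    if (status = 1)    /* meant ==; compiles (with a warning at best) */
        return status; /* always reached, always returns 1 */
    return -1;
}

/* The Yoda form: constant on the left. The same slip, 1 = status,
 * would be a compile-time error instead of a silent bug. */
int yoda_check(int status) {
    if (1 == status)
        return 1;
    return -1;
}
```

Calling buggy_check(0) returns 1 even though 0 was passed in, which is exactly the silent failure the inverted comparison was invented to catch at compile time.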

Real skills, built over years of practice, made irrelevant once the abstraction moved above them.

What Persists

Across these transitions, the engineering component of software engineering survives.

The ability to tell when a design will preclude future designs. Extensible architecture. Small choices with enormous downstream impacts when requirements change. Code smell2 detection, the right tool for the right job. The judgment that a refactor is too risky and the better move is a pragmatic workaround. The instinct that an edge case is lurking beneath a seemingly clean abstraction.

All of this remains essential. Design patterns, DRY, SOLID, caching strategies, data type selection, the trade-offs between CPU and memory, anticipating scale. None of this has been nullified. What’s changed is the medium through which this knowledge expresses itself. It flows through specifications and design documents rather than through code. The engineer’s education and experience inform the design that the AI implements. The knowledge is the same. The output format has changed.

Instincts That Actively Interfere

Some skills from the previous paradigm actively interfere with the new one.

The landscape of AI-assisted development changes so rapidly that it’s difficult to recognize when a best practice has been genuinely superseded and should be let go. Prompt engineering went through short eras in which key techniques (“think step by step,”3 XML tag structures, explicit reasoning directives) were hard-won lessons that had to be unlearned almost immediately when a more capable model was released or the AI harness changed. The instincts you build around working with AI are themselves volatile.

This is disorienting for experienced developers. The implementation domain rewarded accumulated instinct, where the longer you practiced, the better you got, and the improvement was durable. The specification domain, at least in its current early state, sometimes punishes persistence. The technique that worked brilliantly last month may be counterproductive today. Learning new skills is only part of it. You also have to recognize which old skills to release, and accept that some new skills may themselves be temporary.

Footnotes

  1. This defensive pattern is commonly known as a “Yoda condition,” named for the character’s inverted sentence structure. Placing the constant on the left side of the comparison ensures that an accidental single = produces a compile-time error rather than a silent assignment.

  2. M. Fowler, Refactoring: Improving the Design of Existing Code. Reading, MA, USA: Addison-Wesley, 1999. Fowler popularized the term “code smell,” originally coined by Kent Beck, to describe surface-level indicators of deeper structural problems — giving engineers a shared vocabulary for intuitions they already had.

  3. T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa, “Large language models are zero-shot reasoners,” in Advances in Neural Information Processing Systems, vol. 35, 2022, pp. 22199–22213. The discovery that appending “Let’s think step by step” to a prompt dramatically improved LLM reasoning performance was a watershed moment in prompt engineering — and one of the first techniques to be rendered unnecessary by subsequent model improvements.