Ode to My Motherboard: What We Lose Moving Up the Abstraction Stack

Every abstraction we adopt saves time and loses something. Here's what we give up when we stop touching the hardware — and why it still matters.

John Lane · 2026-01-21 · 6 min read

I keep a dead motherboard on the bookshelf in my office. It's a dual-socket server board from maybe 2009 — blown capacitors, one scorch mark, most of the VRM missing. It is completely useless and I have no intention of getting rid of it. Every so often someone asks why it's there, and the honest answer is that it reminds me of something I do not want to forget.

This is a piece about what we give up as we climb the abstraction stack, and why, after 23 years of running infrastructure, I think some of the things we're losing are worth mourning even as we admit the trade is usually worth it.

The direction of travel is up, and it's mostly good

Let me say this clearly before I get accused of nostalgia for its own sake: the movement of computing up the abstraction stack has been, on balance, a huge win. Bare metal became virtual machines. VMs became containers. Containers became managed services. Managed services became serverless. At each step, more engineers could do more things with less concern for the layers below.

The benefits are real. A team of five can now run workloads that would have required a team of fifty in 2005. Availability has improved. Deployment velocity has improved by orders of magnitude. Developers can ship a working service in an afternoon that would have taken weeks to provision physically. None of this is bad, and anyone arguing for a return to the old ways is selling something other than engineering.

But every abstraction has a cost, and the costs tend to show up in places the marketing materials do not mention. Here is what I think we are losing, and why the dead motherboard stays on my shelf.

We're losing the physical intuition

When you rack hardware for a living, you develop a feel for what a computer actually is. You know what a server sounds like when a fan is about to fail. You know what a drive sounds like when it's dying — that specific click, the one you never forget after the first time you hear it. You know how hot the back of a cabinet should be, and what it means when a particular row of LEDs is blinking wrong. You know the smell of a shorted power supply. None of this is romantic. All of it is diagnostic information that your body learned to process without ever being explicitly taught.

The engineers coming up now will not have that intuition, and it is not because they're worse engineers. It's because they never had the chance to stand in a cold aisle at 1 AM listening to a rack. The physical layer is now somebody else's problem — usually a hyperscaler's, and the hyperscaler's techs are not the ones writing the software that runs on the hardware. The feedback loop between "what the hardware is doing" and "what the software is experiencing" has been cut, and diagnostic information has become purely telemetric. This is fine right up until the telemetry lies, and then suddenly you're debugging a performance problem that is actually a failing NIC nobody will let you look at.

We're losing the mental model of cost

When you buy a physical server, you have an intuitive sense of what it costs. You know the rack, the power, the cooling, the warranty, the depreciation schedule. Cost is concrete. You can point to it. When you provision compute in a cloud console, cost becomes abstract — a meter that ticks up in the background, payable in 30 days, hidden in a bill that nobody reads in detail. The result is that engineering decisions get made without any felt sense of the financial consequences, and the bills arrive and surprise everyone and cost optimization becomes its own specialty, because nobody had the instinct that would have prevented the overspend in the first place.
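
To make the contrast concrete, here is the kind of back-of-envelope arithmetic the physical model made automatic. A minimal sketch; every rate below is a hypothetical placeholder, not anyone's real pricing:

```python
# Back-of-envelope cost arithmetic. Every rate here is a hypothetical
# placeholder, not real pricing from any provider.

HOURS_PER_MONTH = 730  # average hours in a calendar month

# The cloud meter: one always-on instance, billed by the hour.
instance_rate = 0.50  # $/hour, hypothetical
cloud_monthly = instance_rate * HOURS_PER_MONTH
print(f"One instance: ${cloud_monthly:,.0f}/month, ${cloud_monthly * 12:,.0f}/year")

# The physical model: every line item has a name and a number.
server_price = 12_000      # purchase price, hypothetical
amortization_months = 36   # depreciation schedule
colo_monthly = 250         # rack space, power, cooling; hypothetical
physical_monthly = server_price / amortization_months + colo_monthly
print(f"One owned server: ${physical_monthly:,.0f}/month, every input visible")
```

The point is not which number is smaller. The point is that the second calculation forces you to name every line item, while the first hides them all behind a meter.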

I am not arguing that physical infrastructure is cheaper; sometimes it is, sometimes it isn't. I am arguing that the mental model was cleaner. You knew what you owned and what it cost to run. Now we mostly don't, and the people who do, the FinOps practitioners, are doing archaeology on the consumption of engineers who had no idea what their choices cost.

We're losing the discipline of constraint

Abstraction removes constraints, and constraints are how engineers learn judgment. When you have 128 GB of RAM in a physical server, you know exactly how much you have, and you make choices accordingly. When you have "as much memory as you want, charged by the GB per hour," you stop making those choices, and something atrophies.

The best engineers I have ever worked with were shaped by constraint. They learned to be parsimonious because they had to be. They optimized queries because the database was on a single disk and the disk was the bottleneck. They cached results because network round trips were expensive in ways they could measure. They wrote efficient code because inefficient code didn't fit. These habits produce better software even in environments where the constraints have been abstracted away: the abstraction didn't remove the underlying physics; it just hid them. And engineers who learned under constraint produce software that runs meaningfully better at scale than the software of engineers who never faced those limits, even when both are writing to the same modern APIs.

We're losing the craft of assembly

There is something specific that gets lost when you stop building a computer from parts. Choosing a motherboard involves a hundred small decisions: chipset, memory topology, PCIe lane allocation, power delivery, thermal design. Assembling a server (setting up RAID, flashing firmware, configuring the BMC, validating memory, running burn-in) is a craft. It takes time, it rewards attention to detail, and it produces, at the end, a thing that you understand completely because you put it together yourself.
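
Even the monitoring half of a burn-in run is educational. As a minimal sketch, assuming a Linux host (sensor names and availability vary by board), the kernel exposes its temperature sensors under /sys/class/hwmon, and watching those numbers climb under load is what "thermal design" feels like from the software side:

```python
# Poll Linux hwmon temperature sensors during a burn-in run.
# Assumes a Linux host; which sensors exist varies by motherboard.
import glob
import time

def read_temps():
    """Return {sensor_path: degrees_C} for every hwmon temperature input."""
    temps = {}
    for path in glob.glob("/sys/class/hwmon/hwmon*/temp*_input"):
        try:
            with open(path) as f:
                millidegrees = int(f.read().strip())  # kernel reports millidegrees C
        except (OSError, ValueError):
            continue  # a sensor can vanish or return junk mid-read
        temps[path] = millidegrees / 1000.0
    return temps

if __name__ == "__main__":
    while True:
        for sensor, celsius in sorted(read_temps().items()):
            print(f"{celsius:6.1f} °C  {sensor}")
        print("---")
        time.sleep(5)
```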

I am not sentimental about doing this work at scale. Standing up a hundred servers by hand was brutal and nobody misses it. But doing it once in a while — building a single machine carefully, holding each component, understanding why each choice was made — is a form of education that cannot be replicated by reading documentation. The engineers who started their careers building their own computers have a specific kind of grounding. It's not essential to their work, but it shows up in the quality of their mental models, in the questions they ask, and in the kinds of bugs they're able to diagnose.

What to do about it

I don't think we should go back. Going back is not an option, and it wouldn't be the right choice even if it were. But I do think there are a few things worth doing if you want to keep some of what we're losing.

Let your engineers touch hardware at least once. If you run a private cloud or a colo footprint, take your senior engineers to the facility. Let them rack a server. Let them hear the fans. Let them trace a cable from a switch to a host. This does not have to be efficient. It has to happen at least once per career.

Make the bills visible. Put cloud costs in front of engineers, tagged by service, updated daily. Do not leave cost information to the finance team. The engineers making the decisions should see the consequences within a week.
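
What that can look like in practice, as a minimal sketch: this one assumes AWS, the boto3 Cost Explorer client, and a cost-allocation tag named "service", all of which are assumptions to adapt to your own stack:

```python
# Pull yesterday's cloud spend grouped by a cost-allocation tag and print it
# where engineers will see it. Assumes AWS credentials are configured in the
# environment and that resources carry a tag named "service".
import datetime

import boto3

def daily_cost_by_service_tag():
    ce = boto3.client("ce")  # AWS Cost Explorer
    today = datetime.date.today()
    yesterday = today - datetime.timedelta(days=1)
    resp = ce.get_cost_and_usage(
        TimePeriod={"Start": yesterday.isoformat(), "End": today.isoformat()},
        Granularity="DAILY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "service"}],
    )
    for group in resp["ResultsByTime"][0]["Groups"]:
        tag_value = group["Keys"][0]  # formatted as "service$<value>"
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{tag_value:40s} ${amount:10.2f}")

if __name__ == "__main__":
    daily_cost_by_service_tag()
```

Where the output goes (chat, a dashboard, a morning email) matters less than the cadence. Daily numbers build instinct; quarterly bills build resentment.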

Teach the stack, not just the top of it. When you onboard an engineer, spend an hour walking through what happens from the moment a packet hits the NIC to the moment a response is sent. Most engineers coming out of bootcamps have never been given that walkthrough, and it is the fastest way to build the intuition they are missing.
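
You don't need elaborate tooling for that walkthrough; even a toy server gives the story concrete hooks. A minimal sketch, with comments marking what the kernel and the NIC are doing beneath each call:

```python
# A toy TCP echo server, annotated with what happens below each call.
# A prop for the NIC-to-response walkthrough, not production code.
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("0.0.0.0", 9000))  # kernel reserves the port in its TCP tables
srv.listen()                 # kernel now completes handshakes on our behalf

while True:
    # Before accept() returns: a frame hits the NIC, is DMA'd into a ring
    # buffer, an interrupt fires, the kernel TCP stack validates checksums
    # and finishes the handshake, and the connection lands in the listen
    # socket's accept queue.
    conn, addr = srv.accept()
    with conn:
        # recv() just copies bytes the kernel already reassembled from
        # segments sitting in the socket's receive buffer.
        data = conn.recv(4096)
        # sendall() copies into the kernel's send buffer; segmentation,
        # retransmission, and the NIC's transmit ring are not our problem.
        conn.sendall(data)
```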

Keep your own dead motherboard. Metaphorically, at least. Keep something on your desk that reminds you the magical cloud is still, at the end of the chain, a building full of silicon drawing real electricity to do real work. The abstractions are convenient, but they are not the thing itself.

The thing itself still matters. That's what the motherboard is for.
