It’s just not worth buying computers at the moment.

In our present market, two turbulent forces entwine to null the value of anything less than monumental improvements: Brexit and memory price gouging. And to qualify the clickbaity title: specifically, I mean technical computing in the UK.

Lies, damn lies, and marketing

In an earlier post I described how, although an interesting technical observation, the idea of a doubling in actual performance has been falsely perpetuated by marketing types. A 10-20% improvement is more realistic; however, incremental improvements have at least been improvements, and a new machine has been a worthwhile investment over renewing maintenance on an old machine. Plus we like shiny new machines…

But there are many ways of measuring performance, and for many workloads even a 10% generational improvement is a falsehood.


To test my title hypothesis, consider SPECfp. This is a computer benchmark designed to test floating-point performance; however, it differs from Linpack in that it more accurately represents scientific/technical computing applications. These tend to be very data-orientated and often push entire system bandwidths to their limits moving data on and off the CPUs.

I collected the published data and, using R, extracted a comparable set of statistics for generations of Intel Xeon E5 CPUs, then grouped them by their E5 SKU number to compare generation-on-generation performance. Anyone who has benchmarked AMD for performance applications will naturally know why I’m only looking at Intel…
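The grouping step can be sketched as follows. This is illustrative only: the analysis in the post was done in R against published SPECfp results, whereas the SKU names and scores below are made-up placeholders, not real benchmark numbers.

```python
# Group benchmark scores by E5 SKU number and compute
# generation-on-generation improvement for each SKU.
# All scores below are hypothetical placeholders.
scores = {
    "2690": {"v1": 100.0, "v2": 118.0, "v3": 131.0, "v4": 142.0},
    "2660": {"v1": 90.0, "v2": 105.0, "v3": 116.0, "v4": 125.0},
}

generations = ["v1", "v2", "v3", "v4"]

for sku, by_gen in scores.items():
    for prev, curr in zip(generations, generations[1:]):
        gain = (by_gen[curr] / by_gen[prev] - 1) * 100
        print(f"E5-{sku} {prev} -> {curr}: {gain:+.1f}%")
```

With real data, the same shape of loop makes the shrinking per-generation gain visible SKU by SKU.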

The chart below depicts those E5 numbers which occur in all four of the most recent generations being considered. There was an uneven time gap between generations, so for further comparison I have also plotted this data against time. A trend of decreasing improvements can be seen.

OK, yes, I’m exaggerating, but not by much

In fairness to Intel, using these SKUs as the comparison point is a bit of a simplification. They will say we shouldn’t compare based on the E5 label, but whichever way we look at it the same patterns emerge. The chart below takes the highest-performing CPU of each generation (including CPUs not represented on the earlier chart) and plots them against time.

I’ve taken the opportunity to join in the debunking of the “doubling in performance every two years” myth here by also plotting where that doubling in performance would have led. I give this chart the alternative title: “Where marketing think computer performance has been going compared to where it has actually been going”.


The chart below plots the performance of all E5-2600 CPUs, including those which do not occur in all generations, for a fuller comparison agnostic to the names these products are given. Again the diminishing returns are apparent.
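The gap between the marketing trajectory and reality is easy to put in numbers. Compounding a doubling per generation against the more realistic ~10% per generation (the rough figure from this post, not a measured value) over four generations:

```python
# Compare the "doubling" trajectory against ~10% per generation,
# compounded over four generation steps.
doubling = 2.0 ** 4        # marketing: 2x each generation
realistic = 1.10 ** 4      # observed: ~10% each generation
print(f"doubling:  {doubling:.1f}x")   # 16.0x
print(f"realistic: {realistic:.2f}x")  # 1.46x
```

Sixteen-fold versus less than one-and-a-half-fold: that is the distance between the two lines on the chart above.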


The Intimidating Shadow of Ivy Bridge

Returning to my hypothesis, I’d like to zero in specifically on the Ivy Bridge (v2) CPUs launched at the tail end of 2013. They were initially priced at a premium; however, as prices settled into 2014, many more were bought. Machines sold with three years’ maintenance are pretty standard in IT, and so a significant number of machines now up for maintenance renewal or replacement are Ivy Bridge.

Comparing the highest SKU of each generation, we see only a 12% increase in real performance from Ivy Bridge to Broadwell. Comparing matching SKUs across generations, we typically see around a 22% improvement.

This is most worrying because, with current memory prices and currency exchange rates, servers typically cost 20-25% more than they did 8-9 months ago.
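Put those two numbers together and even the more generous 22% SKU-for-SKU gain is roughly cancelled by the price rise. Illustrative arithmetic using this post’s round figures:

```python
# Price-performance of a new server relative to an Ivy Bridge-era
# purchase, using the rough figures quoted in this post.
perf_gain = 1.22    # ~22% SKU-for-SKU performance improvement
price_rise = 1.25   # ~25% higher server price than 8-9 months ago
ratio = perf_gain / price_rise
print(f"price-performance vs Ivy Bridge era: {ratio:.2f}x")  # 0.98x
```

In other words, pound for pound you can end up with slightly *less* performance than the machine you are replacing.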

Memory cost per GB did decrease from Ivy Bridge until recently, plus we now have DDR4, and SSDs are a more sensible option. But if you have a higher-end Ivy Bridge server falling off warranty, it’s just not worth replacing it right now. Buy maintenance instead and hope Skylake is better.

The Myths and Marketing of Moore’s Law

Moore’s Law won’t end. Even when it ends it won’t end.

The Law observes that, with developments in technology over time, more components can be crammed into an integrated circuit. However, transistors are getting so small that current leakage becomes a greater issue. In short, this means there needs to be a certain amount of empty space between transistors for them to work predictably, and without predictability you can’t build computers. This “empty space” (dark silicon) means that even if we were to make transistors infinitely small, there would still be a finite limit on how many we could fit on a chip.

For electrical transistors at least, the current wording of Moore’s Law is ending. I won’t prophesy a paradigm shift to optical or quantum computers to take up the next leg; although they are on the way, they will not arrive in time. It won’t end for a much simpler reason…

What’s this doubling business?

The idea of a doubling in “performance” was always a myth. Even in the frequency-scaling heyday we saw diminishing returns, but a doubling in something sure was a good reason to buy a new computer. With recent CPU architectures we’ve only been seeing a ~10% increase in performance for a die shrink and ~20% for a full microarchitecture redesign, which is why for many system owners the hardware refresh cycle can be five or more years.

Why it won’t end:

It’s not a law governing what will happen but an observation of what has happened. The prospect of selling computers funds innovation in IT, so marketeers will simply adapt the law to observe something else. We old hats know this won’t be the first time. The real-world implication of Moore’s Law is that you buy a new computer every few years, which is why, though the wording may change, The Law will continue. And the myth of doubling with it.