The Myths and Marketing of Moore’s Law

Moore’s Law won’t end. Even when it ends, it won’t end.

The Law observes that the number of components that can be crammed into an integrated circuit grows as fabrication technology develops over time. However, transistors are now getting so small that current leakage becomes a serious issue. In short, this means a certain amount of empty space is needed between transistors for them to behave predictably, and without predictability you can’t build computers. This “empty space” (dark silicon) means that even if we could make transistors infinitely small, there would still be a finite limit on how many we could fit on a chip.

For electrical transistors at least, the current wording of Moore’s Law is ending. I won’t prophesy a paradigm shift to optical or quantum computers to take up the next leg; although they are on the way, they will not arrive in time. It won’t end for a much simpler reason…

What’s this doubling business?

The idea of a doubling in “performance” was always a myth. Even in the frequency-scaling heyday we saw diminishing returns, but a doubling in something sure was a good reason to buy a new computer. With recent CPU architectures we’ve only been seeing a ~10% increase in performance for a die shrink and ~20% for a full microarchitecture redesign, which is why for many system owners the hardware refresh cycle can stretch to five or more years.
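
To put rough numbers on that (purely illustrative: it uses the ~10% and ~20% figures above and assumes one die shrink plus one redesign land every couple of years, roughly the old tick-tock cadence), here is why a doubling now takes about five years rather than two:

```java
// Back-of-the-envelope estimate: if each roughly two-year cycle brings one
// die shrink (~10%) and one microarchitecture redesign (~20%), how long does
// an actual doubling in performance take? The figures are the rough ones
// quoted above, not measurements.
public class DoublingEstimate {
    public static void main(String[] args) {
        double perCycle = 1.10 * 1.20;                            // ~1.32x every ~2 years
        double cyclesToDouble = Math.log(2) / Math.log(perCycle);
        System.out.printf("Gain per cycle: %.2fx%n", perCycle);
        System.out.printf("Years to double: %.1f%n", cyclesToDouble * 2); // ~5 years
    }
}
```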

Why it won’t end:

It’s not a law governing what will happen but an observation of what has happened. The prospect of selling computers funds innovation in IT, so marketeers will simply adapt the law to observe something else. We old hats know this won’t be the first time. The real-world implication of Moore’s Law is that you buy a new computer every few years, which is why, though the wording may change, The Law will continue. And the myth of doubling with it.

time to take Java seriously again?

Like many Computer Science graduates, I’d say Java was the first language I really learnt. Sure, I’d dabbled in C and VB, but Java is where I first wrote meaningful code beyond the examples in the textbook. And, again like many Computer Science graduates, I turned my back on Java pretty soon after that.

My experience in video game programming, as well as my current day job around research computing (although not in a programming capacity), both involve squeezing every drop out of the hardware, which sadly leaves little space for Java. In both, code written in fast, low-level languages is optimised to exploit the hardware it will run on.

The ongoing data analytics and machine learning revolution, surely the most exciting area in IT at the moment, is bringing with it a data-centric approach of which we should all take note. The need is not to get the most out of your hardware but to get the most out of your data, as quickly and continuously as possible to retain your advantage.

Spark, for example, is written in Scala, which compiles to Java bytecode to run on the Java Virtual Machine, which itself finally runs on the hardware. Furthermore, many Spark applications are themselves written in a different language such as R or Python, which then has to interface with Spark in turn. That is a lot of layers of abstraction, each adding overhead that would be shunned by performance-orientated programmers.
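
To make the bottom of that stack concrete, here is a minimal sketch of a Spark job written directly for the JVM in Java (assuming Spark’s SQL module is on the classpath; the app name and input file are placeholders):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import static org.apache.spark.sql.functions.*;

public class WordCountSketch {
    public static void main(String[] args) {
        // The session runs on the JVM, which is also where Spark's Scala core
        // ends up, so a Java (or Scala) job skips the extra R/Python bridge.
        SparkSession spark = SparkSession.builder()
                .appName("word-count-sketch")
                .master("local[*]")            // local JVM, for illustration only
                .getOrCreate();

        // Read lines, split them into words and count them; Spark plans and
        // optimises this on the JVM whichever front-end language submitted it.
        Dataset<Row> lines = spark.read().text("input.txt");
        lines.select(explode(split(col("value"), "\\s+")).alias("word"))
             .groupBy("word")
             .count()
             .show();

        spark.stop();
    }
}
```

Write the equivalent job in PySpark or SparkR and it first crosses an inter-process bridge before reaching the same JVM machinery shown here.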

Yet when I look at these stacks I instead see wonderful things being done and begin to see past my preconceptions.

I’m also seeing containers grow in prominence, and they are a natural fit for Java development. With S2I (source-to-image) builds, developers can seamlessly inject their code from their Git repository into a Docker image and deploy that straight onto a managed system.
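
As a rough sketch of that workflow (the repository URL, builder image and output tag below are placeholders, not a recommendation), the standalone s2i tool does it in one command:

```
# Build a runnable image straight from a Git repository (placeholders in <>).
s2i build https://github.com/<user>/<repo> <java-builder-image> my-java-app
```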

Whilst C++ will remain the norm for mature, performance-orientated applications, hypothesis testing and prototyping to yield quick results is giving an extra life to Java.