The Sport of Programming


I’ve long been envious of cryptographers for the public challenges that bodies such as GCHQ have been setting for many years. I have always wanted to jump in, but cryptography is not an area I have an interest in, and the barrier to entry for me has just been too high. That is why I was delighted to see a competition in an area I do have some knowledge of: data analytics.

The Data Science Challenge, fronted by the UK Government’s Defence Science and Technology Laboratory, promised to give ordinary members of the public the chance to play with “representative” defence data. Two competitions were set: a text classification and a vehicle detection competition. Both took the format of providing a training data set to create a model, with scores based on predictions made for an unlabelled test data set.


The text classification competition involved detecting the topics of Guardian articles from their content, whilst the vehicle detection competition involved detecting and classifying vehicles appearing in satellite images. I saw this as an excellent opportunity to practise two technologies I had not used much before: Spark and TensorFlow.

How’d I do?

Good. Tragically, as the user area of the website had already been taken down by the time of writing this retrospective, I can’t check my final standings; however, I entered both competitions and from memory finished just outside the top 20 in each (of 500-800ish entrants per competition).

Which I’m pretty happy with. I noted that the top 10 in each competition did not enter both competitions, so I’m pleased that my skill-set is general enough to pick up new (to me) technologies quickly and perform reasonably well, even if not quite matching those specialising in a particular area.

How I did it – Text Classification

I had been starting to learn Apache Spark in the run-up to this competition, as R was proving too difficult to parallelise efficiently for large data sets, and thought it a natural fit here. I found the map-reduce aspect of Spark easy to pick up; it’s very similar to the functional programming and lambda calculus I studied at university many years ago, which further goes to show nothing’s really new in IT. Even neural networks aren’t too far evolved from the hyper-heuristics of 10+ years past.

My solution was based on a comment from Dr Hannah Fry in the BBC4 documentary The Joy of Data, which I had watched a few weeks earlier, where she summarised that the less frequently a word is used, the more information it carries. For each topic I conducted a word count and compared the frequency with which a word was used in the topic with the frequency with which it was used outside the topic. The words which saw the most significant increase in usage frequency were then used to classify topics.

I found setting thresholds on the number of distinct articles a word appeared in to be key, as this prevented words used many times in a small number of articles from being selected as over-fitted keywords. Once the keywords for each topic were identified, it was easy to count them across all articles, which reduced the problem to simple classification on numerical data.
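Stripped of Spark, the idea can be sketched in plain Python. The helper name, the smoothing constant, and the toy data below are my own illustrative choices, not the competition code:

```python
from collections import Counter

def topic_keywords(articles, topic, min_doc_count=2, top_n=2):
    """Pick keywords whose usage frequency rises most inside a topic.

    articles: list of (topic, text) pairs. Thresholding on the number of
    distinct articles a word appears in guards against words over-used
    in a handful of articles selecting over-fitted keywords.
    """
    in_counts, out_counts, doc_counts = Counter(), Counter(), Counter()
    in_total = out_total = 0
    for t, text in articles:
        seen = set()
        for w in text.lower().split():
            if t == topic:
                in_counts[w] += 1
                in_total += 1
            else:
                out_counts[w] += 1
                out_total += 1
            seen.add(w)
        doc_counts.update(seen)  # one count per distinct article
    scores = {}
    for w, c in in_counts.items():
        if doc_counts[w] < min_doc_count:
            continue  # word appears in too few distinct articles
        in_freq = c / in_total
        out_freq = (out_counts[w] + 1) / (out_total + 1)  # smoothed
        scores[w] = in_freq / out_freq
    return [w for w, _ in sorted(scores.items(), key=lambda kv: -kv[1])[:top_n]]
```

Counting the chosen keywords in each article then reduces the problem to classification on numerical features, as described above.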

I experimented with a range of models, including random forests and multiple-variable linear regression; extreme gradient boosting showed the best accuracy.

At this point I was still quite far off the pace set by the leaders, so I extended my solution to also use bigrams (sequential pairs of words). This took a little more effort, particularly as punctuation now had to be accounted for whereas previously it could all be stripped, but a fun coding session later I was up and running.
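A minimal sketch of bigram extraction that respects punctuation; the exact clause-splitting rule here is an illustrative assumption rather than my competition code:

```python
import re

def bigrams(text):
    """Extract sequential word pairs, resetting at punctuation so a pair
    never spans a sentence or clause boundary (previously punctuation
    could simply be stripped, since single words don't span anything)."""
    pairs = []
    for clause in re.split(r"[.,;:!?()]", text.lower()):
        words = clause.split()
        pairs.extend(zip(words, words[1:]))
    return pairs
```

For example, `bigrams("Hello world. Foo bar")` yields the pairs ("hello", "world") and ("foo", "bar"), but never ("world", "foo").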

There are obviously far more pairs of words than there are words, and this is where I met the computational limitations of my machine. Memory was manageable, but I needed more compute to do more analysis on bigrams, and further trigrams. The majority of my code was Spark using pyspark, so moving on to AWS would have been fairly simple, but two driving forces made me stop there:

  1. There’s another competition and I really want to do both
  2. I’m a cheapskate and don’t want to pay AWS

How I did it – Vehicle Detection

Basically, I hacked something together with TensorFlow and did surprisingly well.

This was far from anything I had done before, but I consider myself a well-rounded programmer and was keen to take up the challenge. I wrote my dissertation many years ago on computer vision and feature detection, and so had some understanding of image processing, but had not yet touched neural networks.

With time now of the essence, since I had spent too long on the first competition, I dived into some tutorials and worked backwards. In my eyes the problem became: find out what I can do, then hammer that into a format that answers the question.

I’d previously dabbled in a Kaggle digit-recognition competition and used this as the starting point; however, it was RStudio’s TensorFlow tutorial that really got me up and running. With a little code modification to account for three colour channels I was able to pass in image “chips” labelled with what they contained (if anything; random untagged chips were also used) and use those to train a softmax model, and then a multilayer ConvNet, both using a range of different chip sizes and chip spacings to find a good balance.
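The chipping step can be sketched with NumPy; the function name and the fixed-stride logic are illustrative assumptions, not the exact code I used:

```python
import numpy as np

def make_chips(image, chip_size, spacing):
    """Cut an H x W x 3 image into square chips of chip_size pixels,
    stepping by `spacing` pixels (chip size and chip spacing being the
    two knobs to balance). Returns the chips and their top-left
    coordinates, so detections can be mapped back to the source image."""
    chips, coords = [], []
    h, w, _ = image.shape
    for y in range(0, h - chip_size + 1, spacing):
        for x in range(0, w - chip_size + 1, spacing):
            chips.append(image[y:y + chip_size, x:x + chip_size])
            coords.append((y, x))
    return np.stack(chips), coords
```

With spacing smaller than the chip size the chips overlap, which trades more training/inference work for fewer vehicles straddling a chip boundary.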

An example source image, and two chips containing vehicles (not to scale)

As a beginner I started with the CPU-only version of TensorFlow but quickly moved to the GPU-accelerated version using NVIDIA’s cuDNN library. Wow, the improvement was staggering: the training stage was just over 7 times faster using my modest GTX 960M (4GB version) than using just my i7-6700HQ.

Closing Thoughts

I enjoyed the challenge, but there were a couple of points which let it down. Firstly, the promise of playing with representative defence data was totally exaggerated; the data was Guardian articles and Google satellite images of a UK city. It was nice to get the data in an easily machine-processable format, but this data is already publicly accessible via HTML and APIs.

Secondly, although building a community was a stated goal, the competition was not set up to facilitate that. The leaderboard was limited to viewing the top 10, and the community forums already seem to have been taken down. Hopefully they can learn from Kaggle and its thriving community here.

But I am very satisfied with how close I came to the winners in each competition and look forward to the next round. Time to see what else I can do with my growing Spark and neural network knowledge.



It’s just not worth buying computers at the moment.

In the present market, two turbulent forces entwine to null the value of anything less than monumental improvements: Brexit and memory price gouging. To qualify the clickbaity title, I specifically mean for technical computing in the UK.

Lies, damn lies, and marketing

In an earlier post I described how, although an interesting technical observation, the idea of doubling in actual performance has been falsely perpetuated by marketing types. A 10-20% improvement is more realistic; however, incremental improvements have at least been improvements, and a new machine has been a worthwhile investment over renewing maintenance on an old machine. Plus we like shiny new machines…

But there are many ways of measuring performance, and for many workloads even a 10% generational improvement is a falsehood.


To test my title hypothesis, consider SPECfp. This is a computer benchmark designed to test floating-point performance; however, it differs from Linpack in that it more accurately represents scientific/technical computing applications. These tend to be very data-orientated and often push entire system bandwidths to their limits moving data on and off the CPUs.

I collected published SPECfp results and, using R, extracted a comparable set of statistics for generations of Intel Xeon E5 CPUs, grouping them by their E5 SKU number to compare generation-on-generation performance. Anyone who has benchmarked AMD for performance applications will naturally know why I’m only looking at Intel…

The chart below depicts those E5 numbers which occur in all four of the most recent generations being considered. There was an uneven time gap between generations, so for further comparison I have also plotted this data against time. A trend of decreasing improvements can be seen.

OK, yes, I’m exaggerating, but not by much

In fairness to Intel, using these SKUs as the comparison point is a bit of a simplification. They would say we shouldn’t compare based on their E5 label, but whichever way we look at it the same patterns emerge. The chart below takes the highest-performing CPU of each generation (including CPUs not represented on the earlier chart) and plots them against time.

I’ve taken the opportunity to join in the debunking of the “doubling in performance every two years” myth here by also plotting where that doubling in performance would have led to. I give this chart the alternative title: “Where marketing think computer performance has been going compared to where it has actually been going”.

Where marketing think computer performance has been going compared to where it has actually been going

The chart below plots the performance of all E5-2600 CPUs, including those which do not occur in all generations, for a fuller comparison agnostic of the names these products are given. Again the diminishing returns are apparent.


The Intimidating Shadow of Ivy Bridge

Returning to my hypothesis, I’d specifically like to zone in on the Ivy Bridge (v2) CPUs launched in the tail end of 2013. They were initially priced at a premium; however, as prices settled into 2014, many more were bought. Machines sold with 3-year maintenance are pretty standard in IT, and so a significant number of machines up for maintenance renewal or replacement are Ivy Bridge.

Comparing the highest SKU of each generation, we see only a 12% increase in real performance from Ivy Bridge to Broadwell. Comparing like-for-like SKUs over those generations, we typically see around a 22% improvement.

This is most worrying, as with current memory prices and currency exchange rates servers typically cost 20-25% more than they did 8-9 months ago.
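A quick back-of-the-envelope calculation shows why this matters, using 12% more performance and a 22% price rise as illustrative mid-range figures from above:

```python
# Illustrative figures: ~12% more performance (Ivy Bridge top SKU to
# Broadwell top SKU) at ~22% higher server prices than 8-9 months ago.
perf_gain = 1.12
price_rise = 1.22

# Performance per pound relative to the older purchase: a value below
# 1.0 means you now get less compute for your money than before.
perf_per_pound = perf_gain / price_rise
print(round(perf_per_pound, 3))  # roughly 0.918, i.e. ~8% worse value
```

On these numbers the new generation is actually a worse deal per pound than the machine it replaces.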

Memory cost per GB did decrease from Ivy Bridge until recently, plus we now have DDR4 and SSDs are more sensibly priced. But if you have a higher-end Ivy Bridge server falling off warranty, it’s just not worth replacing right now. Buy maintenance instead and hope Skylake is better.

The Compute Landscape at the Beginning of 2017

For years the IT industry has accepted Intel as the only viable option. At the beginning of Intel’s reign the consensus was: “yeah, Intel CPUs are way better than anyone else’s, let’s buy lots”. But now the feeling is: “oh, another incremental upgrade from Intel. What’s AMD up to? Ah, still nothing. Fine, buy more Intel…”

To be fair, it is mean of me to wail on Intel for AMD’s failure/refusal to compete; Intel have still been innovating, just not at the rate we became accustomed to in the competitive years. 2017 looks to be an interesting year, a year we all get more choice.

Beyond Kidz wiv Graphics Cardz

NVIDIA have been pushing really hard for years now to establish themselves beyond gaming. Their GPU hardware offers excellent performance, but despite the whole CUDA ecosystem NVIDIA created to support their products, few made the leap. Incrementally faster horses were fine, and we could all get on with our work.

Deep/machine learning is beginning to revolutionise IT. It’s stretching out beyond academia into more and more commercial uses; soon, if you do not have an analytics strategy, you will not be competitive. This is an excellent area for GPU accelerators: many machine learning applications involve a number of parallel computations that grows with the amount of data, and “big data” applications exploit scale-out designs beautifully.

Intel position their Phi co-processors (and lately Knights Landing processors) as a competitor to NVIDIA GPUs, but without significant direction no one really knows what to do with a large number of inferior Xeon cores in one box. Our E5 Xeons are often not at 100% utilisation; there’s little benefit in moving to a platform with less memory per core and less network bandwidth per core.

After years of unchallenged dominance, Intel are emitting the Field of Dreams aura of “if we build it, they will come”. This works for Xeon E5 chips, as no one’s building anything else. But with NVIDIA building and aggressively supporting users’ moves to their platform, accelerator users are flocking to NVIDIA, leaving Phi and Knights Landing dead on the side of the road.

Are AMD about to ante up?

You’d think that, as Intel have been cramming more and more cores into a box, AMD should have been quite competitive; until recently AMD were exceeding Intel in this metric. But their architecture is such that two “cores” share a floating-point unit. This makes it not too dissimilar to Intel’s Hyper-Threading, where two virtual cores also time-share a physical core. Both get good utilisation out of their execution units, but in most fair comparisons Intel outperforms AMD.

AMD have been viewed as a cheaper “also-ran”. With the major exception of cloud providers, most of the industry has been moving to do more with less hardware, and even many cloud providers are using Intel (often E3s stacked high and sold cheap).

Intel have been coasting. The time is right for AMD to get back in the game. PCIe Gen 4.0 along with a refreshed microarchitecture could offer great potential for high-bandwidth applications: bandwidth between CPUs and accelerators, memory and the network.

Choice is Good

I’m speculating somewhat on AMD’s next platform and whether it will be any good, but NVIDIA certainly are well placed for 2017. The announcement of their Pascal architecture last year was a game-changer for accelerators, the excitement of which we are still feeling. And IBM’s opening of their historically proprietary POWER platform into the OpenPOWER Foundation opens the gates for more competitive POWER systems to break through.

I see more going on in compute now than there has been for years.

Hacking Tennis for lulz and profit

As with many tech nerds, although employed in a specific area of IT I like to dabble in others in my free time. My most recent dabbling has been in data science. Although I say “science” I’m afraid my intentions are less noble than the word implies. I’m more interested in exploiting data for profit.

Odds of that?

Were I a bookmaker setting odds, I could simply guesstimate the probability of an outcome, knock a bit off for my “fee”, and offer those odds to my punters. But where’s the profit if no one backs the loser?

The bookies have an awful lot of information at their disposal that they can use to balance a book. For example, they know which teams and sports stars are popular with punters, and will have a reasonable idea of how many bets to expect when they offer any given odds. Were I setting odds, I would be more interested in predicting how many people will take my odds, and for what stakes, than in the messier business of predicting the outcome of a sporting event.

My goal as a bookmaker would be to make as much money as possible, as reliably as possible. I would not be at all interested in “gambling”. I suspect larger bookmakers already do this, which would leave an interesting inefficiency in the market ripe for exploiting: odds are representative of the punters’ expectation of the outcome, not the probability of the outcome.

Why Tennis?

I like tennis. Well I don’t watch tennis, but if I were to I think I’d like it. Tennis is an ideal candidate sport for odds profiteering for a number of reasons:

  1. Singles tennis is a simple competition between two players, without group dynamics or the summing of component parts to account for
  2. It’s enjoyed by many for the sport itself, meaning a wide range of data is publicly available for fans’ enjoyment, unlike horse racing where useful data is behind a paywall
  3. Underdogs win fairly regularly: in 2016, nearly 28% of matches were won by the underdog[1]

I see predicting which underdogs win as a good area to make money. I theorise there are unsupported, relatively unknown players whom few punters want to back; bookies will incentivise backing these players with higher-paying odds to balance their book and remove the gambling element.

I have been exploring this area with machine learning algorithms with promising results.

First Pass

As a proof of concept I used publicly available datasets and simulated predicting the 2016 season. I used an out-of-band validation technique where, for a given day, only data from previous days was used to train the model, and the model was then used to predict that day. In my implementation training the model was the bottleneck, so to shorten the runtime I tested three days at once, meaning the second and third tested days would be using an “outdated” model. I was careful to avoid leakage and deemed this an acceptable compromise, as it could only make results worse[2].
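In outline, the scheme looks like this. It’s a plain-Python sketch: `train` and `predict` are hypothetical stand-ins for the R model, and `matches` is a toy list of (day, features, outcome) records:

```python
def walk_forward(matches, train, predict, batch_days=3):
    """Out-of-band validation: for each batch of days, train only on
    strictly earlier days, then predict the batch. Testing three days
    per model shortens the runtime at the cost of the later days in a
    batch using a slightly stale model; crucially, no future data ever
    reaches the training set, so there is no leakage."""
    matches = sorted(matches, key=lambda m: m[0])
    days = sorted({m[0] for m in matches})
    predictions = []
    for i in range(0, len(days), batch_days):
        batch = set(days[i:i + batch_days])
        history = [m for m in matches if m[0] < min(batch)]
        if not history:
            continue  # nothing earlier to train on yet
        model = train(history)
        for m in matches:
            if m[0] in batch:
                predictions.append((m, predict(model, m[1])))
    return predictions
```

The compromise is visible in the loop: one `train` call serves `batch_days` days of predictions, so only the first day of each batch sees a fully up-to-date model.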

I implemented some very simple features based on the easily available data (mostly game-win percentage per set, and comparisons with the competitor) and used these to train a predictive model in R to calculate a rough probability of the underdog winning, using only data that would have been available before each match.

This probability is combined with the betting odds to calculate a theoretical “average” return[3] for backing the underdog, based on my assigned probability.
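Concretely, with decimal odds the theoretical average return per unit staked is just the model’s probability multiplied by the odds. A sketch (the footnoted caveats about what an “average” means still apply):

```python
def expected_return(p_underdog, decimal_odds):
    """Theoretical average return per unit staked on the underdog:
    with probability p the stake pays out the decimal odds, otherwise
    the stake is lost (paying out 0)."""
    return p_underdog * decimal_odds
```

For example, a 40% chance at decimal odds of 3.0 gives an average return of 1.2, so backing it clears any threshold below 1.2.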

The Results

My results were very promising indeed. If you back every underdog you lose: some come in, but not enough to recoup the other lost stakes. But if you were to back every underdog my model estimated to have a theoretical return greater than 1.0, then you would make a profit.

The plot below illustrates the profit made and the number of bets made based on setting the threshold in different places.
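The sweep behind such a plot can be sketched as follows; the helper and the record layout are illustrative, with `bets` pairing each match’s estimated average return with its decimal odds and result:

```python
def profit_by_threshold(bets, thresholds):
    """For each threshold, back every underdog whose estimated average
    return exceeds it, staking one unit per bet. Profit is winnings
    minus lost stakes; also report how many bets were made, since a
    high threshold can leave too few bets to trust the result."""
    results = {}
    for t in thresholds:
        n_bets = 0
        profit = 0.0
        for est_return, decimal_odds, underdog_won in bets:
            if est_return > t:
                n_bets += 1
                profit += (decimal_odds - 1.0) if underdog_won else -1.0
        results[t] = (profit, n_bets)
    return results
```

Raising the threshold filters out marginal bets, but as noted below, it also shrinks the sample towards “gambling” territory.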


The trick to maximising profit is deciding where to set the threshold for which underdogs you back. This is a conundrum, as it is very dangerous to set the threshold for a predictive model using data after the fact.

My biggest criticism of the results is the small number of bets found worth making. Setting the threshold at 1.5 results in only 200 matches identified as worth betting on across the whole year, and only 36 of these come in. The odds were high enough to recoup losses, but these small quantities seem too much like “gambling” and vulnerable to fluctuation. With the limitation of only one reality to test outcomes in, it is unfortunately impossible to know whether this is the good or bad end of possible outcomes.

What next?

I am pleased with the direction of my results but do not believe them conclusive enough to put into production. I only used a small number of “features” to train my model and believe there is more valuable mining that can be done here.

The major bottleneck in my experiments was the time it took my computer to train the model in R. The winter holidays have been a good time to do this: not only have I had time off work to write my code, but also time with family away from my computer, allowing it to work whilst I don’t.

To make real progress I need more throughput. I do have experience in C++, but limited access to good machine learning algorithm implementations in it. Learning Spark seems like a good way forward: benchmarks I’ve seen place it well ahead of R, and its scale-out parallel design would allow me to add more cheap hardware if I see more good results.

Plus I may be looking for a new job in Data Science / Big Data in the near future and Spark is the feather to have in your cap right now.



[1] By Bet365’s odds: 734 of 2626 recorded matches (three were excluded for not having odds available).

[2] I’d argue “could” should be read as “should” if this were written by someone else.

[3] Warning, don’t discuss philosophy with a computer guy: A theoretical average where the same match is played a number of times simultaneously in which different results are possible. Assumes “fate” isn’t a thing but also that instances are finite.

The Myths and Marketing of Moore’s Law

Moore’s Law won’t end. Even when it ends it won’t end.

The Law observes that more components can be crammed into an integrated circuit as technology develops over time. However, transistors are getting so small that current leakage becomes a greater issue. In short, this means there needs to be an amount of empty space between transistors for them to work predictably, and without predictability you can’t build computers. This “empty space” (dark silicon) means that even if we were to make transistors infinitely small, there would still be a finite limit on how many we could fit on a chip.

For electrical transistors at least, the current wording of Moore’s Law is ending. I won’t prophesy a paradigm shift to optical or quantum computers to take up the next leg; although on the way, they will not arrive in time. It won’t end for a much simpler reason…

What’s this doubling business?

The idea of doubling in “performance” always was a myth. Even in the frequency-scaling heyday we saw diminishing returns, but a doubling in something sure was a good reason to buy a new computer. With recent CPU architectures we’ve only been seeing ~10% increases in performance for a die shrink and ~20% for a full microarchitecture redesign, which is why for many system owners the hardware refresh cycle can be five or more years.
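To put numbers on that gap, consider a five-year refresh cycle. The cadence of two die shrinks and one redesign is an illustrative assumption; the percentages are the ones quoted above:

```python
# Compounding modest generational gains over an assumed five-year cycle
# (two ~10% die shrinks plus one ~20% redesign), versus the mythical
# doubling of performance every two years.
realistic = 1.10 * 1.10 * 1.20   # ~1.45x actual improvement
mythical = 2.0 ** (5 / 2)        # ~5.66x if doubling every two years

print(round(realistic, 2), round(mythical, 2))
```

Even compounded over a whole refresh cycle, the realistic gains land well under a single doubling, let alone two and a half.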

Why it won’t end:

It’s not a law governing what will happen but an observation on what has happened. The prospect of selling computers funds innovation in IT, so marketeers will just adapt the law to observe something else. We old hats know this won’t be the first time. The real-world implication of Moore’s Law is that you buy a new computer every few years, which is why, though the wording may change, The Law will continue. And the myth of doubling with it.

Time to take Java seriously again?

Like many Computer Science graduates, Java was the first language I’d say I really learnt. Sure, I’d dabbled in C and VB, but Java is where I first wrote meaningful code beyond examples from the textbook. Again like many Computer Science graduates, I turned my back on Java pretty soon after that.


My experience in video game programming, as well as my current day job around research computing (although not in a programming capacity), both feature squeezing every drop out of the hardware, which sadly leaves little space for Java. In both, code written in fast, low-level languages is optimised to exploit the hardware it will run on.


The ongoing data analytics and machine learning revolution, surely the most exciting area in IT at the moment, is bringing with it a data-centric approach of which we should all take note. The need is not to get the most out of your hardware but to get the most out of your data, as quickly and continuously as possible to retain your advantage.

Spark, for example, is written in Scala, which compiles into Java bytecode to run on the Java Virtual Machine, which itself finally runs on the hardware. Furthermore, many Spark apps are themselves written in a different language such as R or Python, which have to first interface with Spark. This is a lot of layers of abstraction, each adding overheads which would be shunned by performance-orientated programmers.


Yet when I look at these stacks I instead see wonderful things being done and begin to see past my preconceptions.

I’m also seeing containers, a natural fit for Java development, grow in prominence. With S2I (source-to-image) builds, developers can seamlessly inject their code from their Git repository into a Docker image and deploy that straight onto a managed system.

Whilst C++ will remain the norm for mature performance-orientated applications, hypothesis testing and prototyping to yield quick results is giving an extra life to Java.

Killed the car for a faster horse? Not me.

I was very happy for Microsoft to continue with their original Xbox One plans. I just wasn’t going to buy it. At that point it stopped being my problem and started being Microsoft’s problem. Now enthusiasts who were in favour of Microsoft’s original direction are blaming the army of commentards, of which I was one, for halting progress, but we must remember it was Microsoft’s decision to U-turn.

Steve Jobs shows off the white iPhone 4 at the 2010 Worldwide Developers Conference (Photo credit: Wikipedia)

Apple, under the masterful leadership of the late Steve Jobs, were a fantastic example of a company saying: we’re going to do something new, something a bit different, something we think is better, and we hope you’ll come with us. People did. Apple, and in particular Steve Jobs, were genuine thought leaders.

Companies do not have a fundamental right to our money. Most of us live in fairly free countries where personal choice is paramount, and that extends to what we choose to spend our money on. You can produce something radically different but if you want us to buy it, we must want to buy it. That is where Microsoft failed.

I can see the argument for online games with massive worlds which evolve even when you’re not playing, games in which my save has an impact on my friends and we can interact directly and indirectly. That is very exciting, and I can see games like this being huge. But if you make a game like that, then you are making an online game, and no one expects an online game to work offline.

Mandating a sign-in, even if only once every 24 hours, to play purely offline games such as Peggle is nothing other than intrusive DRM.

To those who would blame me, and those like me, for killing progress: remember, all we could ever have done is not buy it. Microsoft pulled the trigger.