Was WhatsApp overpriced?

As is to be expected, in the aftermath of the WhatsApp acquisition, there has been a lot of talk about how it was overvalued and Facebook paid too much. $19 billion is a lot of cash, however you look at it.

But did Facebook overpay? Possibly.

If we look back through history, a lot of big acquisitions have been decried as overvalued, sometimes rightly, sometimes not. When Facebook bought Instagram for $1 billion there was a similar uproar, but things have worked out OK, and $1 billion is looking more and more like a pretty good price. When Google bought YouTube there was an uproar over the valuation too, but no one is now saying it wasn't a good purchase. Incidentally, Google paid around $40 per active user when they bought YouTube, which is a slightly higher per-user cost than Facebook just paid for WhatsApp, at about $38 per user (although it's not really a relevant comparison in this case, as you could argue that Google's projected monetisation model for YouTube was clearly different from Facebook's for WhatsApp).

Facebook should have just built it

This is also a common line of argument. With $19 billion to play with, they could easily dedicate a decent dev team to building something awesome, then market the hell out of it, possibly even incentivising user installs (if they wanted 450 million users, they could give the first 450 million installs $5 each! Disclaimer: app installed != active user).

My suspicion is if Facebook had built the app/platform themselves, it would have bombed.

Facebook as a company are becoming less and less focused, which is to be expected as their product becomes so multi-faceted, and they are apparently getting worse at building fast and well. Dedicating an internal team to build it would be the opposite of WhatsApp's approach - 32 engineers, 450 million users, 99.9% uptime - numbers that were only possible because they were so single-minded in their focus on the core of the product. You think an internal Facebook team could achieve those stats if they built the platform? I think that might be a stretch. You think they would have been able to push back on advertising? On integration with the FB graph? On integration with existing FB messaging? Every integration point is a potential pain point, and every pain point is a potential outage.

They also wouldn't be helped by their existing brand - people are wary of FB. Between the NSA scandal, the ongoing privacy policy changes and their *apparent* general disregard for their users, I think people would be inclined to go to WhatsApp over a FB app. As mentioned before, these days your phone is a social platform/graph in its own right, so the FB core product doesn't really have much to entice users away from something like WhatsApp. Sure, Google+ couldn't get the FB masses to budge, but there is no intrinsic value in FB that would really enhance an app platform like WhatsApp. In fact it's most likely the opposite: it's WhatsApp's simplicity and absence of additional clutter that makes it so appealing. It just works.

FB have promoted their messenger app pretty hard, and it hasn't been hugely successful. They have also recently had to retire their email system. My gut feel is that an overly aggressive push for a home-rolled WhatsApp competitor would just serve to irk even more people and drive them away from their offering.

Stopping Google

So this is where it gets interesting.

WhatsApp apparently turned down a $10 billion bid from Google, and finally chose to go with FB even when Google matched the $19 billion offered.

Despite both being formidable tech giants, you might think there isn't that much real competition in the marketplace between Google and Facebook - Facebook email failed to make a dent in the world of email just as Google+ failed to make a real dent in Facebook's world of social networking. But, in my humble opinion, Google with WhatsApp could have been a game changer. For real.

Let's think about this:
  • Google have Android - the most pervasive and widely used mobile OS available, on a range of devices including relatively low-powered, affordable ones (critical in developing markets)
  • Google have strong working relationships with several hardware manufacturers through their Nexus hardware range, but now also have skin in the game, having bought up part of Motorola, and are producing their own (surprisingly good) handsets
  • Google have bought Nest (home automation) - more on that below..

Google have the software, hardware and market dominance in the mobile space. Can you think of a reason Google couldn't/wouldn't take on the mobile carriers? They are building more and more infrastructure around the US, and in heavily populated areas is there any reason they couldn't shun the carriers' traditional networks and create ad hoc mesh networks - using the actual mobile devices to power the network? Sure, it wouldn't work everywhere, but couple that with traditional mobile infrastructure and it looks more and more like a viable option.

Now let's consider: 
  • Google has the infrastructure, software and hardware to take on traditional carriers
  • They have viable monetisation models across their products that make cost not a big deal; coupled with falling back on a mesh network where available, they could potentially offer people free mobile plans - have an Android device and a Google account? Then call/surf/message for free, no ongoing costs
  • Tied in with Nest home automation, that is a pretty promising value proposition

This is all well and good, but why should Facebook care if Google move into the mobile carrier space? As I have said before, your phone as a social platform/graph means that traditional social networks like Facebook, which currently have the market advantage of being the incumbents, start to lose that edge - and if your social graph can be built up from phone numbers/contacts, then things start to look more and more worrying for them. That's what WhatsApp would have offered Google: a social graph built on phone numbers as identities, plus the ability to message pretty much anyone else in the world, regardless of device/OS/carrier.

You think there wouldn't be a massive rush towards Google/Android if they started offering free mobile plans? If they offer that, I think it's game over. And WhatsApp would have been one step towards that.


ColdFusion: StructCopy & Magic Structs

Quick post about a ColdFusion oddity I came across this week, whilst attempting to use CF's built-in StructCopy function.

ColdFusion has two primary mechanisms to clone a Struct (that's a map, to Java folk): StructCopy() and Duplicate(). StructCopy is a shallow copy, whereas Duplicate is a full-blown deep copy - so if you are attempting to clone a complex nested struct then Duplicate is probably the function for you (although, beware, as you might expect it comes with a performance penalty!).
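As a quick sketch of the difference (my own illustrative example, not from the original post - the struct literal syntax assumes CF9+):

```cfml
<cfscript>
original = { name = "rob", nested = { count = 1 } };

shallow = structCopy( original );   // top-level keys copied, nested structs still shared
deep    = duplicate( original );    // everything recursively copied

shallow.nested.count = 99;  // also changes original.nested.count - shared reference!
deep.nested.count    = 42;  // original is untouched - a fully independent copy
</cfscript>
```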

What I actually wanted to do was clone the URL scope (which is, for all intents and purposes, just a Struct of key/value pairs of the query string params in the URL). I wanted to clone the current URL scope so I could alter the copy (add additional params etc.) without actually affecting the URL scope itself (normal FP-type stuff). As the URL scope is always just going to be a struct of String key/value pairs (Strings being immutable), I figured the shallow-copy StructCopy function would do: the struct would always be a simple single-level struct, and all key/values would be Strings, so no changes to the copy should affect the original URL scope.

Oh no. It doesn't work.

To be fair to CF, the URL scope isn't a straightforward Struct - it is actually a ColdFusion URLScope object - but it masquerades as a Struct most of the time: writeDump()'ing it labels it as a Struct, and passing it to a function that requires a Struct argument is no problem.

Here is some example code:
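(The original snippet isn't reproduced here, so below is my reconstruction of what it showed - assuming the page was requested with a "rob" parameter in the query string.)

```cfml
<cfscript>
// e.g. page requested as mypage.cfm?rob=1&foo=bar
clonedUrl = structCopy( url );      // intended: a shallow copy of the URL scope
structDelete( clonedUrl, "rob" );   // remove a key from the *copy* only

writeDump( url );        // "rob" is gone here too!
writeDump( clonedUrl );  // structCopy silently handed back the URLScope itself
</cfscript>
```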

In the above scenario, after the structDelete both Structs output as the same thing - the key "rob" has been removed from both. Or rather, from just the one: the cloned struct isn't cloned at all, it's just the original URL scope again.

To me, that sucks. Really.

Like I said, I get that URL is not really a true struct, so I don't blame CF for not wanting to play nicely (although duplicate( url ) will work) - but returning the URL scope itself? Not cool.

As I see it, there are a few options for what CF could do when it doesn't want to StructCopy:
  • Throw an exception. To me, this is the best option. Everyone knows where we are, and really it is an exception - if we are saying that StructCopy truly cannot copy a URLScope object, then it's an exception.
  • Return an empty struct - not ideal, but again it forces handling of this case - kind of StructCopy saying, look guys, I tried to copy but failed, so here's an empty struct.

In no circumstance is it cool to just silently return the URLScope object. It appears to work correctly (you get what looks like a struct back, with all the same keys/values as your original struct, so all good, right?) but actually provides the exact opposite of the functionality you want. Dangerous.

I was lucky that the code I was writing flagged up the errors immediately in an obvious fashion - but if you are cloning just to avoid an unusual edge case, you may be in for an unpleasant surprise.


Year of Code: Can it be that it was all so simple

I'm not sure if we should be pleased with the UK Government's Year of Code or not. Sure, it's a train wreck: it has been a shambles in terms of organisation and PR - sometimes spectacularly so, like when the well-connected but not so well-informed "Director of Coding" appeared on Newsnight - and, from what we have seen, it will undoubtedly be poorly executed.

But on the plus side, the initiative is there. The government have recognised the importance of the tech industry to the country and its economy, which has got to be a good thing?

Every now and then the Government surprises us with positive, forward-thinking initiatives, such as David Cameron embracing and pushing Silicon Roundabout (London's answer to Silicon Valley) and spending time and energy on the tech agenda for the city and country, or their design principle guidelines. And I think this opportunity has a lot of potential.

Did Lottie Dexter (the founder/director of Year of Code) make an ass of herself on Newsnight? Sure. But that is really just about her being poorly prepared. Were her comments about learning to code in a day worrying, possibly betraying a deeper lack of understanding of the task at hand? Sure. But all she needs is people around her who do understand these things - I have no problem at all with Lottie Dexter not knowing how to code; the non-tech CEO with tech co-founders is a classic pattern that has been commonplace in Silicon Valley for years. It's great that she says she wants to learn to code, and it's undoubtedly good PR for the initiative, but that should be all it is - PR. She is not the CTO or Head of Engineering or anything else vaguely tech-related. And the good news is she has a lot of smart tech folk on board who will have a deep understanding of tech and code - take a look at some of the board members.

The list goes on - a lot of smart tech folks.

Really, though, I would have liked to have seen more. I would have liked to see the government take a leaf out of Obama's book and hand it over to the startup ecosystem altogether - Obama completely handed over his election campaign tech to a bunch of great tech startup guys and left them to it, and it turned out pretty great (that's a long read, but well worth it). Really incentivise and provide the means for startup folks to be creative with some ideas. If there is an industry that is just waiting to be disrupted, it's got to be education - and whilst the government won't be opening up the whole education system for innovation and disruption any time soon, these changes to the tech curriculum are a real opportunity to start testing the waters with alternative and innovative approaches to education and to inspiring children to become life-long learners.

On the whole, it's a positive step. The government has recognised that the tech curriculum is important and has an initiative with lots of smart folks on board. The main concern I have is the tech curriculum itself and what happens with it - all this emphasis on learning to code, and curriculums that get kids making e-cards. I have my doubts that coding or low-level computing should be taught at primary school level; actual programming would probably best be started at comprehensive/secondary school level. If you want a good grounding in tech, teaching kids at primary school to create e-cards is probably not the way to do it. Teaching basic logic and reasoning would, in my opinion, be a much better grounding. And I'm not talking Formal Logic - I mean just thinking through puzzles, thinking rationally to work things out. It may sound odd, but I would love to see us start teaching kids from a young age about logic and problem solving through play. Lego, Brio, etc. are great tools for learning to think logically whilst also being creative.

As a thought experiment, how about we give a bunch of primary school classes a Brio train set. Get kids in to teams, give them each a bunch of track and set the following objectives:
  • Make a train track that uses all the pieces (give them lots of pieces, including the various track switching pieces and merging/splitting pieces)
  • Design a track whereby every piece of track can be driven down in either direction without ever taking a train off the track (this is actually quite hard - tracks I build with my son quite often end up with segments where, once you take a particular junction, you are destined to follow the same route for ever)
  • Move the teams around and get the kids to try to find flaws in each other's designs, as per the point above.

One exercise doesn't make a curriculum, but it's an approach - it takes rational thinking, team work and problem solving, and hopefully it is engaging and fun. Getting children to enjoy learning and problem solving is a good grounding for going on into tech - after all, being able to design train tracks to a particular specification, and to identify flaws in them, is not that different from designing code. Understanding the use of track switches and thinking through the different implications/paths of multiple switches/junctions is exactly the same as understanding and thinking through conditional routing in code, just without the syntax. I might be missing something on this "design an e-card" curriculum, but Brio is already sounding a whole lot better.

The re-think of the tech curriculum offers an ideal opportunity to shake up the way we think about education: to take a step away from the traditional test-focused, memorising-of-information approach, and a step towards a more engaging one that also teaches children the joys of learning.


Observations on the WhatsApp Acquisition

Yesterday's acquisition of WhatsApp is a great example of a few trends throughout the industry:

1) The importance of product

WhatsApp currently has 450 million active users - double that of Twitter - and is still growing at an insane rate (which probably hurts even more given Twitter's recent growth struggles). Why is that? Sure, in no small part it is because they are the first major company to really disrupt and take on the SMS/MMS monopoly - why ever pay for text messages through your provider if you can send pictures/chats to anyone in the world for (essentially) nothing? - but their success in becoming that major player is down to their single focus on the core of their product: they wanted to provide mobile messaging, and that's what they focused on. As users grew they could have spun out into gaming/video chat/desktop clients/etc., but they stayed focused on their core product and did it well.

It is telling that founder Jan had this note stuck on his desk:
No ads!
No games!
No gimmicks!

Also, choosing to monetise by annual subscription rather than in-app adverts showed dedication to the statements above - having used pricing as a way to throttle growth to a rate they could manage, they grew confident that the model would work and didn't need to detract from their product with gimmicks or ads.

What's more, by focusing on the simplicity and usability of their core product, they could also focus on the technical stability of their platform - I doubt that, had they spun out, they would have been able to maintain their impressive tech stats: 32 engineers, 450 million active users, 99.9% uptime. Twitter's "Fail Whale" was infamous during their period of growth, as their platform couldn't scale to handle it, but despite even more accelerated growth, WhatsApp's simplicity has allowed them to maintain 99.9% uptime. Failing to keep the platform stable could easily have cost them their position in the market, and they would not be signing the deal they have.

2) The importance of mobile

Mobile is the platform of growth for the future. We are, and will continue to see, more and more mobile-first (if not mobile-only) startups. Instagram, Snapchat, WhatsApp - the three biggest recent startup exits - all mobile-focused.

And should we be surprised? Let's have a look:
  • On average, people use their phones 150 times a day
  • There are currently ~1 billion people using smart phones
  • But there are still ~6 billion using feature phones - so huge room for growth

Smartphone usage has exploded, and other mobile computing is growing as traditional computing devices decline. There is a huge market opportunity in new mobile users, and for many parts of the world, mobile is providing the first experience of the internet.

What's more, the mobile ecosystem is offering a new opportunity for startups to take on the social incumbents. For a long time, Facebook has had the crown of social and could not be toppled by others' efforts, primarily because it's where people are. It's where people's photos are. It's where people's social graphs are. Sure, Facebook's design is kinda clunky now, and kinda ugly, with terrible ads all over the place, but who is going to try to convince their entire social graph to move? Exactly. But the mobile ecosystem offers something different: your mobile device contains your contacts/address book, your photos, your videos - more and more, your mobile device is becoming the social platform that really holds your social graph.

Mobile is the future.


Raspberry Pi: Let's light up the (hello) world

There's a light, that shines, special for you, and me..

Soldering complete and Occidentalis installed, it is time to start learning how all this stuff works. Plus check that my soldering isn't so bad that it has destroyed my Cobbler.

From a quick bit of googling I started with the HelloWorld project:

Hello World: Python

There are a tonne of Python hello-world tutorials and examples out there, so I won't dwell on this.
  1. From the shell (I set up a wifi connection and then ssh on to the Pi so I can work from my normal desktop - I also installed vim, but you can use your preferred editor) run: 

        vim helloworld.py

    In the editor enter the following:
        print("Hello World")

    Exit and save the file (:wq)
  2. From the command line run
        sudo python helloworld.py
That's it. You will see the Hello World output in the shell. Yay.

Hello World with Lights On

So, the above was trivially easy and kinda pointless - but I did actually go through that, really just to make sure python was behaving and so I had a working baseline to go from.

So next I wanted to do a Hello World equivalent with hardware - the simplest thing seemed to be a script that, instead of printing hello world, just switched on a light briefly.

This assumes you have followed the same path as me: opted for Occidentalis and soldered your Pi Cobbler (the basics are still really the same even if you're not using a cobbler and are just breaking out your Pi using a GPIO ribbon cable).

  1. If you opted not to install the Occidentalis OS (a modded Wheezy OS packaged with extras for hardware work), then you will need to install some extra bits and pieces (you may already have them installed, but running this won't do any harm - it will just print a message saying it's already there):
        sudo apt-get install python-dev
        sudo apt-get install python-pip
        sudo pip install RPi.GPIO

  2. Connect your Pi to the cobbler using the provided ribbon cable - plug the cobbler into your breadboard so it straddles the break down the middle (See photo below of the final setup of my breadboard for this experiment)
  3. Using a standard jumper lead, connect the ground on the cobbler to the -ve power rail on the breadboard (in this case mine is the purple-ish rail as you can see in the photo, the ground on the cobbler will be labelled GND). This is always best practice.
  4. Connect a resistor from the Pi Cobbler pin 23 to one of the free holes not linked to the cobbler (the resistor is needed to limit the current - without one you will just blow your LEDs)
  5. Connect the -ve leg of the LED (the shorter of the two legs) to the -ve power rail that we plugged the GND into, and the longer +ve leg into a hole in line with the resistor we plugged in (again, see photo below for the final setup - although ignore the surplus resistor, that's not doing anything, I just forgot to remove it for the photo)

Now that the hardware is all connected, we need to update our simple hello world script to control the light. So again, run vim hellolightworld.py and enter the following script:
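(The original listing isn't reproduced here, so below is a sketch of the sort of script involved - it assumes the LED is wired to GPIO pin 23 as described above, and that the RPi.GPIO package is installed.)

```python
# hellolightworld.py - briefly light an LED wired to GPIO 23
import time

import RPi.GPIO as GPIO  # pre-installed on Occidentalis, else via pip

GPIO.setmode(GPIO.BCM)    # Broadcom pin numbering (matches the cobbler labels)
GPIO.setup(23, GPIO.OUT)  # pin 23 drives our LED

GPIO.output(23, True)     # light it up...
time.sleep(1)             # ...briefly...
GPIO.output(23, False)    # ...and off again

GPIO.cleanup()            # release the pins so other scripts can use them
```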

Now save and exit the script, then go to the shell and run sudo python hellolightworld.py and watch that little sucker flicker on briefly!


Raspberry Pi & Adafruit Cobbler: Getting Started

It's yours, the world in the palm of your hand..

As mentioned last time, I had to get my hands dirty and do some soldering. I haven't really done any soldering since my early teens, and even then I can't really remember what or why I was soldering - I just remember having a consciousness of what soldering was and the experience. Anyway, needless to say, I was pretty useless at it - not helped by the fact that I was using a soldering iron from the 70's that was undoubtedly designed to fix washing machines, with solder to match. The pins on the Cobbler were actually closer together than the diameter of the solder I started with, which made precision soldering pretty hard.

To give you an idea of the size of the cobbler, here is a photo pre-assembly:

I bought the Adafruit Pi Cobbler to make prototyping easier, and a lot of the tutorials/articles around the web suggested they were a good idea, and being an electronics rookie I dutifully did what I was told.
I also noticed when I was in Maplin recently that there is now a slightly cheaper version of the cobbler available. I don't know about its quality, but in shape/size/design it looks pretty similar to the Adafruit cobbler. The advantage is that you can just stroll into your local Maplin and grab one for not much more than a fiver (that is, five English pounds). The red one above is the non-branded one, the blue one is the Adafruit version.

After almost doing irreparable damage to the cobbler with my antique soldering iron and solder, I also resorted to a cheap hobbyist soldering iron and solder from Maplin. It cost less than a tenner (£10) and made a huge difference. Not drinking whisky whilst soldering on the second run possibly also made a difference as well, but that theory would need to be further tested.

All in all it took me about two to three hours, over two evenings, to get the thing soldered (there are 52 pins that need soldering), and considering I haven't soldered for probably 15-20 years (and that was as a young boy, so probably just soldering wire ends with my dad or something) it went reasonably well.

Lessons Learned

1. Don't use solder/soldering irons from the 70's. Washing machine repair evidently requires less precision than modern electronics.
2. Don't drink whisky whilst soldering (not proven, but I definitely saw a decline in soldering quality on the first night).
3. Soldering irons get hot - touching them to test if they have cooled down/heated up is not advised.


Raspberry Pi: My Shopping List for getting started

As mentioned, I am completely new to electronics stuff really so I have been trying to get everything sorted and ready to get started. Coming from a Java/JVM dev background, I'm going to try and write up my experiences and things I think are different/difficult etc.

One of the first things I did, having read a fair few tutorials/blogs on the topic, was work out all the components I needed/wanted to buy. Most of the components are really cheap, and I really didn't want to be caught short mid-development, finding out I had to go out and buy something (a problem that never arises in my usual development process - the most I have to do is go on the web and look for something).

Before I get started, here are the things I bought, which might be useful if planning on following the other posts.

They are all Amazon affiliate links - which means if you buy via these links Amazon gives me some cash, but it won't cost you anything.



Spring 4: XML to @Annotation Configuration

I got 99 problems, but an XML based security configuration aint one..

I have long preferred Spring (or generally Java) annotations over XML config for my development (I concede the point that XML configuration allows a central place to understand all Controllers etc., but with modern IDEs this info can still be viewed centrally even when using annotations).

A while ago I made the switch to pure code configuration for my Spring webapps, with the exception of the web.xml (a dependency on Tomcat 7 that I didn't want to enforce just yet) and the security config - which Spring didn't support. With Spring 4, Spring have announced that they will now support security config as code, which is great news!

XML Configuration

Below is an example of a typical XML based configuration.
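(The XML itself isn't reproduced here, so the sketch below is my reconstruction of a typical Spring Security 3.x namespace config matching the description that follows - URL paths and the bean/class names are placeholders.)

```xml
<http use-expressions="true">
    <intercept-url pattern="/**" access="permitAll" />
    <form-login login-page="/login"
                login-processing-url="/j_spring_security_check"
                authentication-failure-url="/login?error=true" />
    <logout logout-url="/logout" />
</http>

<authentication-manager>
    <authentication-provider user-service-ref="userService">
        <password-encoder hash="md5" base64="true" />
    </authentication-provider>
</authentication-manager>

<beans:bean id="userService" class="com.example.MyUserDetailsService" />
```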

This basically turns on security for the webapp, it:
  • Adds a single intercept URL rule (in this case, our rule says all URLs are open to everyone - so not that secure! We would likely add more URL intercept rules above this as we have more URLs we want to secure - these rules are read top to bottom and fall through until a match is found, so permitAll must be the lowest rule found)
  • Defines a login form - including location of the form, the submission target URL, login failure redirect URL and the logout URL. Spring will automatically handle all these. If you submit a form to the login-processing-url then Spring will intercept it and attempt to authenticate - you don't need a specific controller handler defined.
  • Defines an authentication manager - this will be used to authenticate submitted requests. In this case, I have overridden the standard Spring User Service with my own implementation (a common requirement, as you will usually want to authenticate against users in your own DB etc.)

The config is all pretty simple really - the only unpleasantness is the requirement to have XML!

Code Configuration

And here is the config in code using the latest and greatest Spring dependencies.
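(The listing isn't reproduced here, so below is a sketch of what such a Java config looks like with the Spring Security 3.2-era APIs - the class and bean names are my placeholders.)

```java
@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Autowired
    private MyUserDetailsService userService; // our custom User Service

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .csrf().disable() // development only - do NOT disable in production!
            .authorizeRequests()
                .antMatchers("/**").permitAll()
                .and()
            .formLogin()
                .loginPage("/login")
                .failureUrl("/login?error=true")
                .and()
            .logout()
                .logoutUrl("/logout");
    }

    @Override
    protected void registerAuthentication(AuthenticationManagerBuilder auth) throws Exception {
        auth.userDetailsService(userService).passwordEncoder(passwordEncoder());
    }

    @Bean
    public PasswordEncoder passwordEncoder() {
        return new BCryptPasswordEncoder();
    }
}
```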

Let's have a look at what is going on here.
  • I am @Autowiring in my custom User Service class - this will be used in the Authentication manager later.
  • The configure() method is where we set the URL intercept rules and login form - it should be pretty self-explanatory if you are familiar with the old-fashioned XML config (we will come back to the CSRF config shortly - but note that it is NOT recommended to disable it! That is just for development)
  • The registerAuthentication() method sets up the Authentication Manager using our User Service implementation. Whilst setting it up, we also define a password encoder - note that in the code config I am using the BCrypt password encoder, whilst in the XML I am using a base64 MD5 hash - but whichever encoder you want can be configured in the same fashion, using the secondary method to instantiate and return it (although you probably shouldn't be using MD5 over BCrypt.. really, you shouldn't).

Observations and Gotchas

As you can see in the above code example, I am explicitly disabling CSRF protection. From Spring 3.2 onwards, the security layer provides CSRF protection by default - so if it is not disabled, you have to provide a CSRF token to prove your request is legit. Should you need to disable it, you can use the above syntax - but obviously CSRF protection is built in to help you out, so probs best not to disable it on a prod system!

The Spring Security docs detail how to include tokens in requests for your webapp. It's really pretty easy to do.
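For reference, in a JSP the token just ends up as a hidden form field - something like this (Spring exposes the token as the _csrf request attribute):

```jsp
<%-- inside your login <form>: pass the CSRF token back with the POST --%>
<input type="hidden" name="${_csrf.parameterName}" value="${_csrf.token}" />
```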

The second point to note is one that caught me out. Previously in Spring, when submitting a form to authenticate, the default field names had to be j_username and j_password. If you are hoping to switch the config out on an existing webapp, you have to make sure you update your login form with the new field names (with Spring 3.2 and XML config you still need the j_* names - it's only with Spring 3.2 and Java config that the defaults change to username and password).

As always, the code is on GitHub - feel free to check it out.


Handsome - A ColdFusion Dashboard

A while ago I came across a dashboard webapp made by Shopify called Dashing. It's a fairly simple concept: a web dashboard using Ruby and some slick JS libraries to create your own custom dashboard with stats/graphs/etc. The Shopify guys had made it for showing their own stats internally and had open-sourced it.

I had a look around, and it looked pretty nice, and I was interested in how they had put it together, so decided to have a go at making my own version of it.

Obviously the real work is in making all the different widget adaptors so they can be re-used and fed any data needed, to make it highly customisable - the kind of work that is only really worthwhile when working on an actual product with real data/use-cases.

You can see an example of the Shopify guys library over on Heroku (when it's running) - and their code is on their GitHub.

The version I built was originally built with ColdFusion 10 and a bunch of JavaScript libraries, including backbone.js. Here are some screenshots - it's pretty simple, just some drag & drop components with pretty colours and widgets: so far an RSS reader widget, a line chart, a gauge and a basic text widget (latest tweet).

My code is all here. My demo is here (the demo is actually the same code dropped into a quick Spring app, as I don't have CF hosting) - running on AppFog - when it's running. I actually wrote this some time last year, but figured I might as well jam it up here with some screenshots.


JVM Memory Management Primer: Groovy PermGen the Prequel

I am shortly going to be writing a post about managing PermGen memory with Groovy in production, but before getting to that this is a primer/reminder on some key parts of JVM memory management.

Out of Memory: Heap Space

Throughout dev on a JVM project it is not uncommon to see an OOM (Out of Memory) error along the way. The most common will be a "Heap Space" OOME. The Heap is the part of memory where the JVM stores data/objects created by your application - for example, if you create a class called "User" and then create an instance of it, that instance will be placed in the Heap. This class of OOM is more common because it can happen whenever you attempt to load too much data, or if your application grows during dev and you haven't configured the max heap size (e.g. if you have a low max heap size defined and load a lot of data into your application, you may see this error).
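To make this concrete (MyApp here is a hypothetical app name), you can surface the error quickly by capping the max heap when launching:

```shell
# cap the max heap at 16 MB; an app that keeps more than that alive will fail with
#   java.lang.OutOfMemoryError: Java heap space
java -Xmx16m MyApp
```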

The Heap is split into two** sections (also called generations) - the Young (sometimes called the nursery) and the Old (sometimes called the tenured generation). The JVM uses a generational Garbage Collection approach:

  • Young Generation - This is where most newly instantiated objects are placed. As many objects are relatively short lived, many are born and then die in this space. This space is collected more frequently and quickly.  It is itself normally split into two generations - Eden & Survivor - these just represent the age of the object, and as an object survives more GCs it is promoted up; after a set threshold it is promoted into the..
  • Old/Tenured Generation - This space stores objects that have outlived the young generation. These are assumed to be long lived objects; GC of this generation is less frequent and takes a lot longer (as it has to inspect all objects).

There are several GC strategies that handle GC of the heap, and can be tuned for your application (depending on what is important to you) - you can read more on the Oracle docs about strategies.
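As a sketch of what picking a strategy looks like (these are the standard HotSpot flag names - which one suits you depends on whether you care more about throughput or pause times):

```shell
java -XX:+UseSerialGC ...        # simple single-threaded collector, small heaps
java -XX:+UseParallelGC ...      # multi-threaded "throughput" collector
java -XX:+UseConcMarkSweepGC ... # concurrent low-pause collector (CMS)
```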

** In some versions/JVMs PermGen space is also part of the heap, so that would make it three. Also the Young generation is sometimes split further, but for a high level review of OOM Heap Space - we will consider two

Out of Memory: PermGen

The other class of OOME you might see is a PermGen OOM.  By and large this is rarer than the heap space errors, as the Permanent Generation (PermGen) is the part of memory that stores Classes. In our User class example, the instances of the User class live on the heap, but the Class itself is stored in the PermGen (in this example there may be any number of instances of the User class - one per user! - but there should only be one version of the Class). Using standard Java, PermGen won't often be a problem, as there should be a static number of total classes for your application, and as long as you haven't set the PermGen size too low this shouldn't happen. With the use of dynamic languages, though, PermGen errors become more frequent, as languages like Groovy dynamically create adhoc classes.
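To make the heap/PermGen split concrete, here is a minimal sketch (the class names are just for illustration) showing that however many instances you create on the heap, they all share the single Class object:

```java
public class PermGenDemo {

    static class User { }

    static boolean sameClass() {
        User a = new User(); // instance on the heap
        User b = new User(); // another heap instance
        // Both instances point at the same Class object (held in PermGen)
        return a.getClass() == b.getClass();
    }

    public static void main(String[] args) {
        System.out.println(sameClass()); // prints "true"
    }
}
```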

Whilst PermGen stores classes rather than object data, it still obeys the simple principle of GC: If a class no longer has any references to it, then that class could be subject to GC; whilst there are still objects that hold reference to a class then it cannot be GC'd.

A final important thing to note is that, as the norm on the JVM is a static number of classes, the default GC strategy is to not collect PermGen - so if you are doing anything that involves adding more classes to that memory, you will inevitably see a memory leak.
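If you do need to mitigate this (e.g. when running Groovy in production), the usual starting point is to raise the PermGen ceiling and enable class unloading in the collector - a hypothetical set of flags might be:

```shell
# Real HotSpot flags; the 256m size is an arbitrary example
java -XX:MaxPermSize=256m \
     -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled \
     -jar myapp.jar
```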

Next post I will go into detail about a few Groovy memory gotchas I have come across after 18 months of production Groovy code.


Eclipse & LESS - Better Development time with Incremental Builds

All source code for this app is on GitHub at the moment - feel free to fork/download/etc

So in one of my recent experiments I decided to start playing with LESS (CSS pre-processing, letting you use variables etc. in CSS code, which has been long needed). It's all actually relatively simple to code with (although you can do some nice, powerful stuff with it) if you are coming from a development background, so I found myself quickly getting to grips with basics like defining variables for colour schemes (very basic stuff, but sooooo much better than having to maintain masses of CSS and search for hex codes every time you want to tweak the colour scheme..).
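As a small, made-up example of the kind of thing I mean - define the colour scheme once and derive the rest (the variable and selector names are hypothetical; lighten() is a built-in LESS colour function):

```less
// hypothetical colour-scheme variables
@brand-primary: #2a9fd6;
@brand-muted: lighten(@brand-primary, 30%);

.navbar {
  background-color: @brand-primary;
}
.navbar .tagline {
  color: @brand-muted;
}
```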

However, I found myself wanting to use LESS on new/existing web app projects which were largely Spring MVC/Java based apps, which was more problematic. LESS requires pre-processing/compilation to generate the CSS, which is kind of a pain in the ass when prototyping web app projects - being used to just altering some CSS and hitting refresh in the browser (at most, having to re-load the Tomcat server inside Eclipse), having to re-generate the full LESS/CSS files every time anything changed was not something I was going to be doing.

Thankfully, having done some digging, I managed to get it all set up and working like a charm - every time I changed the LESS, Eclipse performed an incremental build (like it does for other compiled code such as Java) and voila! I could just change the LESS and press F5 in the browser.  Here's what I used:


  • Eclipse
  • Maven
  • LESS
  • WRO4J Maven plugin

Directory Structure

As I am using maven, I am of course following the standard /src/main/webapp convention - so that is assumed here.
In the webapp root, I created a new "less" folder alongside the "css" folder - this holds all my *.less files - I won't go into detail about LESS here, but assume you will only include valid, compiling LESS files here (non-compiling LESS files will cause you a headache).

As you can see, in my normal "css" folder I have only included vendor CSS files - I will not be placing any custom CSS here (all valid CSS is also valid LESS, so even if I want to just write CSS, I can do so in my "less" directory).

Maven Configuration

The incremental build will be performed by the use of a Maven plugin. This will also mean that any formal full build/packaging that you do will also include the LESS compile step.

The below configuration simply defines a "group" that will attempt to compile and the target folders we will generate the artefacts to (more on that later).
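The pom.xml plugin block looked something like this (a sketch from memory - the version number and folder paths are examples, so adjust to your project):

```xml
<plugin>
  <groupId>ro.isdc.wro4j</groupId>
  <artifactId>wro4j-maven-plugin</artifactId>
  <version>1.7.0</version>
  <executions>
    <execution>
      <phase>generate-resources</phase>
      <goals>
        <goal>run</goal>
      </goals>
    </execution>
  </executions>
  <configuration>
    <targetGroups>web-all</targetGroups>
    <wroManagerFactory>ro.isdc.wro.maven.plugin.manager.factory.ConfigurableWroManagerFactory</wroManagerFactory>
    <contextFolder>${basedir}/src/main/webapp/</contextFolder>
    <wroFile>${basedir}/src/main/webapp/WEB-INF/wro.xml</wroFile>
    <extraConfigFile>${basedir}/src/main/webapp/WEB-INF/wro.properties</extraConfigFile>
    <cssDestinationFolder>${project.build.directory}/${project.build.finalName}/css/</cssDestinationFolder>
  </configuration>
</plugin>
```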

We also define a little more configuration regarding what we want WRO (Web Resource Optimizer) to do:

/WEB-INF/wro.properties defines exactly what optimization we want to perform - in this case I have selected some pre & post processing steps:
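Something along these lines (the processor aliases are standard wro4j ones - less4j does the actual LESS compilation, cssMin the minification):

```properties
preProcessors=lessCssImport
postProcessors=less4j,cssMin
```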

/WEB-INF/wro.xml defines the location of the source files:
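For example - a single group pulling in everything under the "less" directory (the group name is what the plugin's targetGroups setting refers to):

```xml
<groups xmlns="http://www.isdc.ro/wro">
  <group name="web-all">
    <css>/less/*.less</css>
  </group>
</groups>
```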

The above config will make Eclipse perform the incremental builds on your LESS files as well as generate a CSS file when you perform a regular Maven build.  As can be seen below, the Eclipse incremental build has generated a web-all CSS file in the target "css" directory.

Including Generated CSS

The simple part of this is of course to include it in the webpage header as an import - Despite it not being in your /src/main/webapp/css folder, you can still reference it the same as the static vendor stuff we have included as we know it will be generated there for us (either as part of full app build/deploy or incrementally in Eclipse).
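The import itself is nothing special - just point at the location the CSS will be generated to (the filename assumes a group called "web-all"):

```html
<link rel="stylesheet" type="text/css" href="css/web-all.css" />
```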

There is also another gotcha at this point - You have to check that your Eclipse will be deploying the contents of your m2e-wtp directory to your webapp. This should be happening by default, but you can check (and fix if necessary) by right-clicking on your project in Eclipse and going to "properties".  In the left hand menu, select "Deployment Assembly" - this will show the directories/libraries that will be deployed as part of your application (obviously this only takes effect if you are deploying the app to a server inside of Eclipse).

As you can see below, my Eclipse defaults to deploy /target/m2e-wtp/web-resources to the root of the webapp. If yours doesn't, just add an entry for this and everything should work!


If you are struggling to generate the LESS files in the m2e directory, then there is a chance you have a compilation problem with your LESS files.  With the incremental build, if LESS compilation fails then it fails silently - for full details you should run the full Maven build to see the errors logged.


Spring MVC 4.0

I have had a post coming for some time on my thoughts/experience in starting to play with the changes that come as part of the latest major upgrade to the Spring suite.

There are lots of changes, which were announced at last year's Spring One event - including a flashy new spring.io website and several new additions to the framework such as Spring Boot (which they describe as an opinionated way to start building Spring apps - basically out of the box components that reduce work, but configured to Spring's tastes and conventions).

The interesting parts for me are in Spring MVC (as that is where I do most of my Spring dev) - most notably, the ability to write Spring apps in pure Groovy (as I do a lot of Groovy dev, and once you get used to the luxuries of its collection-based closure functions it's hard to go back!) and also the ability to configure the Spring Security stuff entirely programmatically (my previous attempt at a pure code-configured Spring app was thwarted only by having to use XML configuration for the security).

I am yet to convert an app to Groovy, but I have created a basic web app using the Spring 4.0 Milestone releases and converted the security config to code.  Full source code of the webapp is on GitHub.
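For reference, the code-based security config ends up looking roughly like this - a minimal sketch using the Spring Security 3.2 Java config API that ships alongside Spring 4.0 (the URL patterns here are made up):

```java
import org.springframework.context.annotation.Configuration;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;

@Configuration
@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    // Replaces the old <http> XML namespace configuration
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http
            .authorizeRequests()
                .antMatchers("/", "/signup").permitAll() // hypothetical public pages
                .anyRequest().authenticated()
                .and()
            .formLogin()
                .loginPage("/login")
                .permitAll();
    }
}
```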

I will stick that up on GitHub soon, and when I do I will write about it in more detail (I also played around with LESS on that project and configured Maven/Eclipse to build the LESS/CSS files nicely, which I will also write up soon).

As it happens, I also stuck the demo app up on cloud hosting service AppFog.com (given that CloudFoundry is no longer with us in its free form.. which kinda sucks, as that was really nice). It's by no means a complete app, and doesn't really do anything - it just lets you connect your Facebook/Twitter accounts and see all your contacts etc - and as you can see, there is a lot of filler text and the LogOut link is always in the navbar etc.. But anyway, it's here, for now..

Spring 4 App homepage

Spring 4 App Dashboard



Android ORM - GreenDAO - Simplifying DB access in your app

I am currently working on a new Android app and, as usual, it requires a considerable amount of DB interaction.  I would say DB interaction isn't that difficult in Android - you can create a few simple classes to set up your schema and then DAO functions for the various domain objects you want to map to the DB.


Just a pain in the ass.

Coming from working with a variety of Java ORMs (Hibernate et al), this is really all just boilerplate that is painful to write, so this time around I decided to look for an Android ORM. The first result on Google was greenDAO, which claims to be used in over 10 million installed apps (AppBrain lists it as being used in Pinterest and Zynga apps) - which seemed like a pretty good reason to take a look.

So I started taking a look at the code on GitHub. The first thing I noticed was that it wasn't the traditional ORM I expected - really it's more of a code generator than an ORM like Hibernate. I was expecting to just define some POJO entities, maybe marked up with annotations or config, that did all the magic - but what actually happens is you create a "generator" project that defines your domain entities, and that then produces the DAO and domain object classes, generated directly into your normal Android app.

Does generated code always suck?

I saw the word generator and alarm bells started to ring - I have worked with, and on, projects that generate code and it so often seems to descend into pain - often needing to slightly tweak the generated code, which then results in painful re-generation, or just walking away from the generator altogether and being left having to manually maintain the generated code.

However, I had a look at the generated code and it was actually quite nice - not that different from what I would have written - and what swung it for me was the performance. I had briefly pondered building a more Hibernate-style lightweight ORM for Android (I love an annotation), but then I thought about the performance impact - reading annotations on POJOs at runtime, even if cached, was likely to have a pretty big footprint and would suck for people with older/less powerful devices.

So actually, removing the pain of having to create all the DAO objects etc, whilst not having a prohibitive performance impact made this a pretty strong choice.

Let's get down to business, don't got no time to play around, what is this..

So let's have a look at how we generate the code - it's actually relatively easy to programmatically configure your entities for generation.
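A sketch of what the generator project looks like (the package name and output path are hypothetical; Schema, Entity and DaoGenerator are greenDAO's generator library classes):

```java
import de.greenrobot.daogenerator.DaoGenerator;
import de.greenrobot.daogenerator.Entity;
import de.greenrobot.daogenerator.Schema;

public class MyDaoGenerator {
    public static void main(String[] args) throws Exception {
        // schema version 1; generated classes go in a hypothetical package
        Schema schema = new Schema(1, "com.example.app.db");

        Entity people = schema.addEntity("People");
        people.addIdProperty();
        people.addStringProperty("name").notNull();
        people.addStringProperty("emailAddress");

        Entity email = schema.addEntity("Email");
        email.addIdProperty();
        email.addStringProperty("subject");
        email.addStringProperty("body");

        // writes the DAO & domain classes straight into the Android project
        new DaoGenerator().generateAll(schema, "../MyApp/src");
    }
}
```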

The above is an example of how you can configure two simple domain entities (each of which will map to a table - this is like defining a Hibernate @Entity POJO). In this simplified, hypothetical design, we have two real domain entities, "Email" and "People" - emails can be sent to multiple people, and people can receive multiple emails.

As you can see, we define a very simple People entity and then programmatically add three columns - an ID, a name and an email address. Simple - and as you can see (or imagine), the API provides programmatic constructs to configure not-null, column types, etc.

All in all, so far it has been a good experience - it has removed a certain amount of the pain from developing the DAO/DB layer boilerplate and let me concentrate on the important stuff about the app. However, one gripe I do have is that it doesn't currently support modelling many-to-many relationships - the best you can do is create an entity to model what is effectively a join table, then have one-to-many relationships from either side of your "real" many-to-many relationship to that join entity.
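In generator terms, the join-table workaround looks something like this (again a sketch - addToMany and the property builders are greenDAO generator API; the entity and column names are hypothetical):

```java
// assuming Email and People entities already defined on the schema
Entity emailPeople = schema.addEntity("EmailPeople");
emailPeople.addIdProperty();
Property emailId = emailPeople.addLongProperty("emailId").getProperty();
Property peopleId = emailPeople.addLongProperty("peopleId").getProperty();

// one-to-many from each side of the "real" relationship to the join entity
email.addToMany(emailPeople, emailId);
people.addToMany(emailPeople, peopleId);
```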

Again, in the above example, we try to manage the many-to-many relationship between people and emails (emails can have many people as recipients, and people can be recipients of many emails). As mentioned, we have had to model this join explicitly with an EmailPeople entity, and then add a one-to-many relationship from each of People/Email to our new join entity.  This is no real headache to do, but is undoubtedly going to be a performance hit - the underlying code will presumably be cleverly efficient in retrieving all the EmailPeople entities for a given email (say, when getting all recipients of an email) - but from that list we then need an efficient way to load all the People entities linked to that list of EmailPeople entities.