Technical aptitude testing & recruiting

Having spent a reasonable amount of time on both sides of the interview table, plus having co-founded NerdAbility, it may come as little surprise that I am pretty opinionated on the topic.

It's a pretty divisive topic, and a lot of people feel quite strongly about it, but here's my opinion anyway..


Tech tests

First up is the question of whether or not potential candidates should have to take some kind of tech test. Personally I like them. But with a few caveats:

  1. They should be easy. 
    This might sound counterintuitive, but I prefer a relatively simple tech test. In reality, I'm not really convinced that you gain that much from the more complex tests, and at worst you probably just get false negatives and end up mistakenly ruling out great candidates. I have seen tests that take days (unit testing/mocking/designing/coding/reviewing/refactoring etc.) and if anything they put candidates off, and as mentioned, probably don't provide much info.

    Something nice and simple - let's say a modest estimate of an hour all in - is probably about right. As ranted about by many folk, famously Jeff Atwood, even a basic FizzBuzz programming test will rule out a lot of people (a sketch of FizzBuzz follows this list).

    Furthermore, when I review test submissions I don't really care whether they work - what I am really looking at is the coding style and overall approach: class structuring, use of nice libraries/core functionality/data structures, unit tests. If you have a simple test that should take no more than an hour to code, then candidates really don't have any excuse not to make their best effort with how the code is structured, unit tested etc - so you can set the bar pretty high.
  2. They shouldn't be timed
    Timing the tests just blurs the lines - if you are timing the tests then you have to lower the bar. With a simple, non-timed test you can say you will rule out people who haven't submitted unit tests, for example; but if you set a time limit then you have to excuse people - and you will inevitably find yourself saying things like

    "Well, sure its 200 lines of code all in the main() method, and variable names like 'tmpString', and it probably doesn't work, and it's not unit-tested.. but maybe they were rushed for time.. maybe we should bring them in..? "

    It happens, the bar slips lower and lower, and eventually the test is serving no purpose other than ruling out those people who just can't be bothered.
  3. They should be core technology concepts
    There is no point having tests that test specific domain knowledge or expect experience beyond core language competencies. Even if your business is in a very specific niche, you are going to do much better hiring great tech folk if you test core competencies rather than specific libraries/tools/technologies.
  4. They should be done beforehand
    On-site testing adds a different dimension to the test - whether on a whiteboard or on a machine, there are other variables that can end up being a distraction. On a whiteboard, candidates can end up worrying over exact method signatures and missing semicolons (but a whiteboard is much preferable to actual coding - in my opinion, you should never expect a candidate to bring a laptop, and asking a candidate to use another machine/OS/IDE is also fraught with potential distractions).
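For reference, the FizzBuzz task is usually stated as: print the numbers 1 to 100, printing "Fizz" for multiples of three, "Buzz" for multiples of five, and "FizzBuzz" for multiples of both. A minimal Java solution looks something like:

public class FizzBuzz {
    public static void main(String[] args) {
        for (int i = 1; i <= 100; i++) {
            if (i % 15 == 0) {
                System.out.println("FizzBuzz");   // divisible by both 3 and 5
            } else if (i % 3 == 0) {
                System.out.println("Fizz");
            } else if (i % 5 == 0) {
                System.out.println("Buzz");
            } else {
                System.out.println(i);
            }
        }
    }
}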



What's the point?

As mentioned, I am not a big believer in using the tests to really measure whether someone is a great programmer. For me, they serve two simple purposes (well, three actually, but I will mention the third later):
  1. They can be bothered. They are actually interested enough in the role and the company to invest their own time and energy. This will rule out a few people who are just machine-gunning resumes out to lots of companies blindly, or those who are just after some interview practice.
  2. They have demonstrated an understanding of core technology approaches/patterns - simple things like single responsibility, unit testing (use of good assertions, sensible test cases, messaging etc) and class organisation show that they have actually spent a reasonable amount of time programming and keep up with technology. A nice little example of this, that I like to see, is use of Java's Collections/Arrays convenience methods (assuming you are testing Java!) - Arrays.asList( "1", "2", "3" ) makes declaring explicit lists easier (nice for testing etc) and shows a knowledge of core Java stuff (a quick sketch follows below).
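As a quick illustration of the kind of thing I mean in a submission (the WordSplitter class and its split method are made up for the example):

import static org.junit.Assert.assertEquals;

import java.util.Arrays;
import java.util.List;

import org.junit.Test;

public class WordSplitterTest {

    @Test
    public void shouldSplitSentenceIntoWords() {
        // Arrays.asList gives a one-line declaration of the expected list
        List<String> expected = Arrays.asList("hello", "world");
        assertEquals(expected, WordSplitter.split("hello world"));
    }
}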


So all we know so far is that they can be bothered, and that they have a good understanding of/interest in program design and architecture - not much that can indicate whether or not they are an awesome developer.



The interview

This is where the test really comes into its own.  I think using the test to drive the tech interview is really a great way to go.

You can walk through the code and ask about design decisions, and with a little extra thought you can easily push into variations of the test - how would they handle other constraints? - continuing through varying degrees of complexity. If you are consistent with your tests, you get a consistent sliding scale on which to compare candidates - you know exactly what point each candidate got to on the scale of questions. See this article as a great example of this technique more generally (it's also fun/good practice to try working through the problems the author poses before reading the answers, to see how far you get).

This approach of starting very easy and working up a sliding scale of difficulty is a common practice used by big co's like Google et al.



An example

Here's an approach I quite like

Tech test: the problem

Given a webpage address, find the most common word on the page.

This tests core technology concepts - good usage of a HashMap (or similar) data structure for counting words, for example - and is extensive enough to need some proper unit testing, but small enough to complete relatively quickly.
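A rough sketch of the shape a solution might take (deliberately naive - it counts the raw page content, markup and all, and a real submission would want proper tokenisation and unit tests around it):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.HashMap;
import java.util.Map;

public class CommonWordFinder {

    public static String mostCommonWord(String pageAddress) throws Exception {
        Map<String, Integer> counts = new HashMap<>();
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(new URL(pageAddress).openStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                // naive tokenisation on non-word characters
                for (String word : line.toLowerCase().split("\\W+")) {
                    if (!word.isEmpty()) {
                        Integer current = counts.get(word);
                        counts.put(word, current == null ? 1 : current + 1);
                    }
                }
            }
        }
        // single pass over the map to find the highest count
        String mostCommon = null;
        int bestCount = 0;
        for (Map.Entry<String, Integer> entry : counts.entrySet()) {
            if (entry.getValue() > bestCount) {
                bestCount = entry.getValue();
                mostCommon = entry.getKey();
            }
        }
        return mostCommon;
    }
}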


Interview: follow-up

There are a few follow-up questions that can be used to further probe the candidate's understanding:
  1. What would you need to change if I wanted the most common word on a whole website (e.g. Wikipedia)? This can go into how to crawl webpages and potential pitfalls, if that is a relevant area, but otherwise can go into challenges regarding the amount of information that needs to be stored - e.g. if you have limited memory, how can you store the counts?
  2. If I wanted the top 5 most common words, how would you change it? This is interesting as there are a variety of solutions, and unless they go straight for the optimal solution, you can keep asking whether they can think of a better approach.

    For example, they might just keep track of the top 5 words during the counting, which is pretty efficient, but less flexible when "top 5" becomes "top X" words;

    Alternatively, they may just count all the words, then implement a comparator for the Map entries, sort them all, and take the top X - which is flexible, but is always going to be O(n log n) performance (comparison sorting is always at best n log n);

    Another approach is to use a heap (PriorityQueue in Java): heapify the counted set (heapify can be completed in O(n) time), then just take the top X elements from the queue. As X is a constant that doesn't depend on the size of the dataset, and the X polls at O(log n) each are lower order than the linear time to heapify the data upfront, overall performance is O(n). A sketch of this approach follows this list.

    You can also follow up this question with further questioning about performance and Big-O - if that's something that you think is interesting/relevant for the position - which it might not be..
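To make the heap approach concrete, here is a rough sketch (names are illustrative). The Comparable wrapper is there so the PriorityQueue(Collection) constructor can be used, which builds the heap in O(n) - adding elements one at a time via offer() would be O(n log n) instead:

import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

public class TopWords {

    // wraps a word and its count so entries order themselves by descending count
    private static class WordCount implements Comparable<WordCount> {
        final String word;
        final int count;

        WordCount(String word, int count) {
            this.word = word;
            this.count = count;
        }

        @Override
        public int compareTo(WordCount other) {
            return Integer.compare(other.count, this.count);   // max-heap behaviour
        }
    }

    public static List<String> topWords(Map<String, Integer> counts, int x) {
        List<WordCount> entries = new ArrayList<>();
        for (Map.Entry<String, Integer> entry : counts.entrySet()) {
            entries.add(new WordCount(entry.getKey(), entry.getValue()));
        }
        PriorityQueue<WordCount> heap = new PriorityQueue<>(entries);   // O(n) heapify
        List<String> top = new ArrayList<>();
        for (int i = 0; i < x && !heap.isEmpty(); i++) {
            top.add(heap.poll().word);   // each poll is O(log n), done a constant X times
        }
        return top;
    }
}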



Whatever test you choose, as long as you have some sensible and interesting questions to follow up with, I think it makes for a pretty productive process and in many ways is optimized for the candidate making the best impression they can.


Spring MVC & custom routing conditions

I have recently been building a Spring MVC app (well, actually using the Spring Boot project - which is quite nice in parts, and crazy frustrating in others - but the underlying mechanics are the same). The application is actually a re-build of another application, so it has involved a lot of playing with and exploring Spring source code to try and replicate the app's functionality like-for-like.

One of the first things I found was that Spring doesn't really cater for the concept of a single app running on different sub-domains. I assume the thinking is that you would build separate applications for different sub-domains, but in this case we just have a single app.

There are two main speed-bumps I have come across so far:
  1. Controller routing based on subdomain
  2. Security considerations based on subdomain

Basically, as you can probably imagine, once you have subdomains on a single web application, the URL path is no longer unique (e.g. http://automateddeveloper.blogspot.com/ is clearly not the same as http://blogspot.com/ )


In the rest of this post I will look at the routing element; I will do another post later in the week about the Spring Security stuff (I haven't solved all of that yet - but I got far enough to make it work).


Understanding the subdomain

The first thing you need to sort out is a common and consistent way to determine the subdomain of any given request. There are a variety of ways to do this - for example, you could parse it from the request in Apache and set a header so your app doesn't have to worry about it, or you could just parse it from the request object's server name. I will assume you have some bean/service/helper class to do this everywhere (although for now we only need it in one place).


An annotation

First up, we need a new annotation that we can easily apply to a Controller, just like we would use @RequestMapping. The nice built-in Spring RequestMapping handling allows you to define a URL path (plus other bits and pieces) to map a request to a given controller & method - what we want is to also specify the subdomain element of the requested URL.

Defining an annotation is simple:
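A minimal sketch of what it might look like:

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Target(ElementType.TYPE)               // class level only, as discussed below
@Retention(RetentionPolicy.RUNTIME)     // must be visible at runtime for the mapping to read it
public @interface Subdomain {
    String[] value();                   // one or more subdomains the controller should handle
}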

For the sake of simplicity in this example, I will only allow it to be used at class level (so no method-based subdomain routing - but that will be pretty easy to add once you have understood the rest of the post). You will also note that the value is defined as a String array; this allows us to define mappings to multiple subdomains if needed.


The mapping condition

So, that was easy. Obviously, at this point the annotation doesn't actually do anything - you can add it to all the controllers you like, but it won't actually make any difference to your request routing.

To get our new annotation involved, we can implement something called a RequestCondition. This is exactly what it sounds like: Spring lets you implement additional conditions that must be satisfied for a request to be mapped to a given handler.

The condition could be based on any logic you like, but in our case we simply need to check for the annotation and then examine the value provided. If the annotation value matches our incoming request, then the condition is met - easy! Returning the condition indicates to Spring that the condition has been met; returning null indicates that the condition is not met.
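A minimal sketch of such a condition (here the subdomain is parsed straight from the request's server name - really you would delegate to whatever shared helper you settled on earlier):

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

import javax.servlet.http.HttpServletRequest;

import org.springframework.web.servlet.mvc.condition.RequestCondition;

public class SubdomainRequestCondition implements RequestCondition<SubdomainRequestCondition> {

    private final Set<String> subdomains;

    public SubdomainRequestCondition(String... subdomains) {
        this.subdomains = new HashSet<>(Arrays.asList(subdomains));
    }

    @Override
    public SubdomainRequestCondition combine(SubdomainRequestCondition other) {
        // if conditions were also defined at method level, the method-level one would win
        return other;
    }

    @Override
    public SubdomainRequestCondition getMatchingCondition(HttpServletRequest request) {
        // e.g. "blog" from blog.example.com - really this would use the shared helper
        String subdomain = request.getServerName().split("\\.")[0];
        return subdomains.contains(subdomain) ? this : null;   // null = condition not met
    }

    @Override
    public int compareTo(SubdomainRequestCondition other, HttpServletRequest request) {
        // prefer the more specific condition (fewer subdomains matched)
        return this.subdomains.size() - other.subdomains.size();
    }
}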


Adding the condition to the mapping handler

Usually, we would use the standard Spring RequestMappingHandlerMapping to handle all the routing of requests based on the URLs, but now we need to also ask Spring to consider our new custom condition from above.

This is a simple case of extending the normal RequestMappingHandlerMapping class and adding our new condition as a custom condition.  Luckily, this is really easy:
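A sketch of the extension (the class name is illustrative; you will also need to register this in your configuration so it is used in place of the default handler mapping):

import org.springframework.core.annotation.AnnotationUtils;
import org.springframework.web.servlet.mvc.condition.RequestCondition;
import org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerMapping;

public class SubdomainAwareRequestMappingHandlerMapping extends RequestMappingHandlerMapping {

    @Override
    protected RequestCondition<?> getCustomTypeCondition(Class<?> handlerType) {
        // look for our @Subdomain annotation on the controller class
        Subdomain subdomain = AnnotationUtils.findAnnotation(handlerType, Subdomain.class);
        return (subdomain != null) ? new SubdomainRequestCondition(subdomain.value()) : null;
    }
}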

All we are doing is checking to see if the handler class (our controller) has our new @Subdomain annotation, and if it does, we register our new custom condition for consideration.



Basically..

That's really all there is to it - we can then decorate our controller classes with @Subdomain("subdomain") and have the routing of requests handled for us.

As you may have noticed, this is a pretty nice pattern for any kind of custom routing you might want to use - the same template could be used for routing by any request info/header or any user info (e.g. routing requests to different controllers based on the user's logged-in role etc).


Christmas Sandwiches & an Android app


Following some banter on Twitter last week about Christmas sandwiches (we have something of a tradition of eating, reviewing and ranking as many Christmas sandwiches as we can find in the build-up to Christmas), I set a goal of creating a mobile app to help with this before the end of the week.

So, over the course of three nights (mostly commuting time) I built and launched a Christmas sandwich rating app. The app is basic: it lets people review sandwiches, read other people's reviews, and see the average rating given to each sandwich.



It is an Android app that uses parse.com as the cloud backend to allow users to view other users' reviews. At some point in the future, when I get the chance, I will put the code on GitHub and write another post about what is going on, but in the meantime..

Get the app on the Play store

Enjoy!






Twitter API & Smartphone as your identity

At Twitter's Flight conference this week, they announced a couple of new toys for developers: Fabric and Digits.

Fabric is a new platform for building mobile apps

The Fabric platform is made of three modular kits that address some of the most common and pervasive challenges that all app developers face: stability, distribution, revenue and identity.


It seems everyone has to be building a platform these days, but really it sounds like Twitter - having previously shut developers out of their API/platform - has realised that actually letting devs onto their platform (now that it includes advertising APIs, thanks to their MoPub acquisition a while ago) is going to be profitable for them.

Maybe it will be well received, and the way the market changes, maybe people will go for it - but I wouldn't be building an app based around a Twitter platform given their track record. The company is a lot older and more mature now (and they probably released their API too early last time around), but it still seems like quite a risk to have something as central as login/revenue tied to Twitter (I don't like third-party logins anyway..).



Anyway, the much more interesting product they released was Digits, which allows sign-in with your mobile number - an idea that, as I have ranted before, I love, and it seems like a really smart move by Twitter: both inasmuch as it moves Twitter into that space where your smartphone is your social identity/network, and because it offers a lot more value than just another platform that serves ads and supports login etc.


Amazon & the circle of life

Last week, Amazon announced the opening of their first bricks & mortar store, planned for central New York, not far from Macy's department store. The move is an attempt to provide some of their customers with traditional face-to-face customer service.

If you have been following other recent Amazon announcements and expansions, you will be familiar with their other recent moves:

Same day delivery - Amazon has been a dominant power in e-commerce for a long time, and given the small margins they operate on, and their willingness to be loss-leaders on some products to drive business elsewhere, it's going to be difficult for any newcomer to genuinely compete; as Amazon continue to spread into all forms of goods, they are also slowly making it harder for shops to operate in a specialist vertical. However, there are always going to be times when convenience and the ability to have something immediately trumps price, so there was always going to be a market share that Amazon would lose - a lot of the time, people will pay a small percentage increase in price to have the product in their hands in a few hours. Same day delivery will reduce that: there will still be the premium cost of the delivery service, but customers can have products (almost) instantly. There are some significant logistical challenges in doing this, but being in a position to do it also opens up another opportunity..

Fresh groceries (having been doing non-perishable groceries for some time). One of the biggest challenges to Amazon rolling out fresh grocery deliveries more widely (currently only available in parts of North America) is building the capacity to enable fast, same day delivery of fresh goods, so the goods can leave a chilled warehouse and be with the customer in a short time. What is a short time, exactly? It seems from existing supermarket delivery services that most people are happy with same day delivery (or at least fixed day delivery - e.g. the goods leave the warehouse on the same day they arrive with the customer), using refrigerated delivery vehicles. That seems simple enough for an individual case - in the UK, Amazon could ship from their large Swansea warehouse in the morning and deliver the goods to most parts of the UK the same day. The inevitable problem comes when this starts to scale up: as soon as even a small percentage of the UK want to order their weekly fresh groceries from Amazon, the problem gets a lot tougher, and Amazon would need a large, distributed warehouse/delivery infrastructure to enable this kind of efficient, same day grocery delivery. The kind of infrastructure that existing supermarkets like Tesco or Sainsbury's already have, having long since built it to provide a similar level of stock control and service to their supermarkets.


It's no secret that Amazon returns very little profit to their shareholders, and continues to plough the majority of their revenue into new business and further expansion - a large part of the investment going into expanding their delivery and warehouse capacity and distribution.

Scaling same day delivery is a hard problem to solve, if only because the solution is really just more distributed warehouses. Amazon have tried to ease the scaling problem with Amazon Lockers and local partner shops that can take delivery, so goods can then be picked up by the customer at a convenient time (a local shop that a customer can collect their goods from on the way home from work, for example), but their move to bricks and mortar is really just another step into a long-established industry.

Is Amazon really disrupting groceries/shopping? Or are they just competing against the usual retail giants? Neither same/fixed day delivery nor fresh grocery delivery is a new feature, and providing both online and physical stores is also not a new model - so really all we are seeing is another retail powerhouse slowly marching onwards and upwards through a fairly conventional business plan**. Maybe the only thing of interest is that it is doing it in a different order (Tesco scaled from bricks and mortar, to online shopping, to same day delivery; Amazon are simply starting from online and expanding to the others).


So I guess we will need to wait for drone-delivery to see any real innovation in the retail space.



** Conventional business plan for the retail/groceries aspect of the business, that is - there is still lots of innovation and interesting work that Amazon are doing with mobile devices, cloud services, video delivery/production etc.


Graph data structures - Searching

Previously we looked at an introduction to graph data structures and designed a very basic Graph implementation. I also mentioned that the main thing we would likely want to do with a graph is search/explore it.


There are two primary tools that we will use to explore graphs - these are basic computer science concepts, and should be familiar to everyone who has studied computer science at uni and faced graphs before.


Depth First Search (DFS) 

The concept behind DFS is that, given a starting point or root node, we search as deep as we can down one route before backtracking (e.g. select a neighbour of the root node, visit that neighbour, then select a neighbour of that node and visit it - continuing until we reach a node we cannot follow any more edges from, and then backtracking up the graph, considering alternate neighbours at each step).




DFS is pretty simple to implement and naturally uses a Stack data structure to keep track of the backtracking (the easiest way to do this is to solve it recursively, using the implicit call stack).

Below is a simple Java implementation of DFS using recursion to handle the backtracking.
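Something like the following minimal sketch - to keep it self-contained, it works against a plain Map-based adjacency list standing in for the Graph class from the introduction post:

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class DepthFirstSearch {

    // node -> list of neighbouring nodes (a stand-in for the Graph class)
    private final Map<String, List<String>> adjacencyList;

    public DepthFirstSearch(Map<String, List<String>> adjacencyList) {
        this.adjacencyList = adjacencyList;
    }

    public List<String> search(String root) {
        List<String> visited = new ArrayList<>();
        visit(root, visited, new HashSet<String>());
        return visited;
    }

    // the recursion is our implicit stack - unwinding it is the backtracking
    private void visit(String node, List<String> visited, Set<String> seen) {
        if (!seen.add(node)) {
            return;                       // already visited via another route
        }
        visited.add(node);
        List<String> neighbours = adjacencyList.get(node);
        if (neighbours != null) {
            for (String neighbour : neighbours) {
                visit(neighbour, visited, seen);
            }
        }
    }
}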







Breadth First Search (BFS)

This search takes the different approach of looking as wide as possible before moving down a level. For example, we first visit all the immediate neighbours of the root, then visit each of their immediate neighbours in turn, working through the graph one level at a time.



(image also from wikipedia)



BFS is also fairly simple, and pretty close to DFS, but it uses a Queue (FIFO) structure rather than a stack, so nodes closest to the root are visited first. Below is example code for BFS implemented in Java.
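Again, a minimal sketch against the same Map-based adjacency list used in the DFS example above:

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class BreadthFirstSearch {

    private final Map<String, List<String>> adjacencyList;

    public BreadthFirstSearch(Map<String, List<String>> adjacencyList) {
        this.adjacencyList = adjacencyList;
    }

    public List<String> search(String root) {
        List<String> visited = new ArrayList<>();
        Set<String> seen = new HashSet<>();
        Deque<String> queue = new ArrayDeque<>();   // FIFO queue drives the level-by-level order
        queue.add(root);
        seen.add(root);
        while (!queue.isEmpty()) {
            String node = queue.poll();             // take from the head of the queue
            visited.add(node);
            List<String> neighbours = adjacencyList.get(node);
            if (neighbours != null) {
                for (String neighbour : neighbours) {
                    if (seen.add(neighbour)) {      // only queue nodes we haven't seen before
                        queue.add(neighbour);
                    }
                }
            }
        }
        return visited;
    }
}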




Graph data structures - An introduction

With most problems it is largely pretty clear which data structure is going to be appropriate to use - if you just care about storing a list and iterating over it, then you probably want an array-based structure; if you particularly care about the elements being unique, you could look at a Set; if you want a key-value dictionary structure, then you can use a Map.

Graph data structures are no different, and are very applicable to a subset of problems - once you are familiar with graphs and the common algorithms, it becomes quite easy to quickly identify a problem as a graph problem.


The basics

A graph has two main elements:
  • Node - a given data point in the graph
  • Edge - a connection that joins any two Nodes. A graph can be "directed" or "un-directed" - this simply determines whether the Edge goes both ways or is purely one way. 

A graph is a data structure that stores a set of connected elements - the easiest way to understand it is with a real-world example, the most famous probably being the social network graph. If you think about your profile on any popular social network (Facebook/LinkedIn/etc), your profile is a Node in the graph, and each of your friendships/connections is an Edge to another Node in the graph.

A while ago, a Facebook intern created a visualisation of the Facebook social graph around the world - you can't really make out the individual Nodes/Edges, but you get the idea.


Facebook's graph is un-directed: when you become friends with another Node in the graph, the relationship goes both ways - you are their friend and they are yours.

Twitter, however, is a directed graph - once you follow someone an Edge is created between your Node and theirs, but they don't automatically follow you as well, so the Edge has direction.


If you are a LinkedIn user, you may have noticed whilst browsing another user's profile a widget saying something like

"X of your connections can introduce you to someone who knows Y"


What LinkedIn is telling you is that the shortest path between your Node and Y's Node in the graph is 3 Edges (3 "hops" - following an Edge between nodes is often called a "hop"). To be able to do this, LinkedIn searches the social graph to discover the shortest path (and the number of unique paths of that shortest distance) between you and the other user.


Graph representations

There are two primary graph representations:
  • Adjacency Matrix - This is a matrix/2-D array that captures the relationship between every node. Every node is mapped against the X and Y axes, and the value in the intersecting cell determines if there is an Edge between the nodes. E.g. if we wanted to know if there was an edge between Node "4" and Node "13", we would look at matrix[4][13] - the value there would tell us. Normally a value of 1 represents an edge, but other values can be used (for example, if it is a weighted graph the values could represent the weights, or if it is a directed graph it could use -ve/+ve values to represent direction). This representation is good for "dense" graphs.
  • Adjacency List - This representation is simply a List of all Edges and a List of all Nodes. This is a simple representation and is more memory-efficient for "sparse" graphs.

For the code samples here, I will focus on the Adjacency List representation


Graph representation - Java

Below is a very rudimentary implementation of a Graph class in Java. It uses an Adjacency List representation and will be used in later examples I go through.
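A sketch of that rudimentary implementation - an un-directed graph, with the adjacency list held as a map from each node to its neighbours:

import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class Graph<T> {

    // adjacency list: every node mapped to the list of nodes its edges connect it to
    private final Map<T, List<T>> adjacencyList = new HashMap<>();

    public void addNode(T node) {
        if (!adjacencyList.containsKey(node)) {
            adjacencyList.put(node, new ArrayList<T>());
        }
    }

    // un-directed: the edge is recorded in both directions
    public void addEdge(T from, T to) {
        addNode(from);
        addNode(to);
        adjacencyList.get(from).add(to);
        adjacencyList.get(to).add(from);
    }

    public List<T> getNeighbours(T node) {
        List<T> neighbours = adjacencyList.get(node);
        return (neighbours == null) ? Collections.<T>emptyList() : neighbours;
    }
}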


Below is a sample unit test setup that shows how a simple graph can be initialised:
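Something like the following (using JUnit, with a trivial four-node graph):

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertTrue;

import org.junit.Before;
import org.junit.Test;

public class GraphTest {

    private Graph<String> graph;

    @Before
    public void setup() {
        graph = new Graph<String>();
        graph.addEdge("A", "B");
        graph.addEdge("A", "C");
        graph.addEdge("B", "D");
    }

    @Test
    public void rootHasTwoNeighbours() {
        assertEquals(2, graph.getNeighbours("A").size());
    }

    @Test
    public void edgesAreUndirected() {
        assertTrue(graph.getNeighbours("D").contains("B"));
    }
}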



In the next post I will go through basic techniques for searching and exploring graph spaces, as well as a post looking at how to solve the LinkedIn shortest path recommendation problem.






Quick Sort - A Java implementation

And now the same for a Quick Sort I implemented. Normally Quick Sort also runs in O(n log n) time, but it's worth noting that the implementation below just uses the first element as the pivot value, which is not an optimal pivot (it performs very badly on partially sorted lists, for example), so I will leave it to you to think about better ways to choose the pivot value.
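A sketch of that implementation (first element as pivot, as noted):

public class QuickSort {

    public static void sort(int[] values) {
        quickSort(values, 0, values.length - 1);
    }

    private static void quickSort(int[] values, int low, int high) {
        if (low >= high) {
            return;                                 // zero or one element - already sorted
        }
        int pivotIndex = partition(values, low, high);
        quickSort(values, low, pivotIndex - 1);     // recursively sort either side of the pivot
        quickSort(values, pivotIndex + 1, high);
    }

    // partitions around the first element - the non-optimal pivot choice discussed above
    private static int partition(int[] values, int low, int high) {
        int pivot = values[low];
        int boundary = low;                         // end of the "less than pivot" region
        for (int i = low + 1; i <= high; i++) {
            if (values[i] < pivot) {
                swap(values, ++boundary, i);
            }
        }
        swap(values, low, boundary);                // move the pivot into its final position
        return boundary;
    }

    private static void swap(int[] values, int a, int b) {
        int tmp = values[a];
        values[a] = values[b];
        values[b] = tmp;
    }
}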




MergeSort - A Java implementation

I wrote a Java implementation of Merge Sort a little while ago, just for fun really. I was just about to close the file, noticing it was still open in my IDE, and thought I might as well just post it here quickly. It might be interesting for someone alongside the Merge Sort analysis I previously wrote up.
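A minimal sketch of a top-down merge sort along those lines:

import java.util.Arrays;

public class MergeSort {

    public static int[] sort(int[] values) {
        if (values.length <= 1) {
            return values;                  // base case: a single element is already sorted
        }
        int mid = values.length / 2;
        int[] left = sort(Arrays.copyOfRange(values, 0, mid));
        int[] right = sort(Arrays.copyOfRange(values, mid, values.length));
        return merge(left, right);
    }

    // merges two already-sorted arrays into one sorted result
    private static int[] merge(int[] left, int[] right) {
        int[] result = new int[left.length + right.length];
        int i = 0, j = 0;
        for (int k = 0; k < result.length; k++) {
            if (j >= right.length || (i < left.length && left[i] <= right[j])) {
                result[k] = left[i++];      // take the head of the left list
            } else {
                result[k] = right[j++];     // take the head of the right list
            }
        }
        return result;
    }
}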




Tech cheat sheets - Maps

Also sometimes called an associative array, symbol table or dictionary - Maps are collections of key-value pairs, and are probably one of the most common data structures you might come across day-to-day.

HashMap

The most commonly used Map implementation in Java is probably the HashMap. The HashMap makes use of the equals() and hashCode() methods from Java's Object API.

The basic premise is that the HashMap has a collection of "buckets", each of which can hold several objects. When an object is added to a HashMap, the hashCode() method on the key is used to select the bucket, and the object is then added to that bucket. For retrieval, it's the same process - hashCode() is used to determine the bucket, then the entries in the bucket are inspected and the equals() method is used to find the match. In Java's HashMap, the bucket is essentially implemented as a linked list (not a LinkedList - but each Entry<K,V> has a pointer to the next entry).

The obvious implication of this is that performance depends largely on well-designed equals() and hashCode() methods. For example, you could design a hashCode() method that always returned the constant 1 - which would be legal, as the Java contract is that if two Objects are equal() then they must have the same hashCode(), but two Objects with the same hashCode() do not need to be equals() - however, it would mean all entries get put in a single bucket.
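A quick illustration of that degenerate case (the Coordinate class is made up for the example):

public class Coordinate {

    private final int x;
    private final int y;

    public Coordinate(int x, int y) {
        this.x = x;
        this.y = y;
    }

    @Override
    public boolean equals(Object other) {
        if (!(other instanceof Coordinate)) {
            return false;
        }
        Coordinate that = (Coordinate) other;
        return x == that.x && y == that.y;
    }

    // legal (equal objects do get equal hash codes) but terrible: every key
    // lands in the same bucket, so lookups degrade from O(1) to O(n)
    @Override
    public int hashCode() {
        return 1;
    }

    // a sensible version would spread keys across buckets, e.g.:
    // return 31 * x + y;
}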


The hashCode() method

Designing a good hash code implementation is very important - for performance (see this stackoverflow discussion on the performance impact of large HashMaps with poor hashCode implementations), but also for correctness: if your hash code is erroneous then your HashMaps might just not work, and you may insert objects into your Map and never be able to retrieve them (if hashCode() doesn't return consistent values, for example, an object could be placed in one bucket, and then when trying to retrieve it a different hashCode() is generated, so a different bucket is searched).

If you know the complete key set, and it fits into the Integer range (hashCode() returns an int), then you could design a perfect hashing algorithm that allows every unique key to have its own bucket, guaranteeing O(1) time for insert/retrieve. In practice this is quite unlikely, though, so ideally you want to design for as even a spread across buckets as possible.


Performance

Due to the dependency on the implementation of the objects used as keys, and on the data set, the worst case and best case performance vary considerably.

Search/insert/remove - all these operations suffer the same problem: in the best/average case they can be done in constant O(1) time, but in the worst case (all elements in one bucket) performance drops to linear O(n).

In practice, HashMaps are usually more efficient than search trees and other lookups, which is why they are very commonly used.


Tech cheat sheets - Stacks & queues

A Stack data structure is a Last-In-First-Out (LIFO) list. Java does have a legacy Stack<T> class, but the recommended structure is the Deque (an interface with both array-based and linked implementations).

A Queue data structure is simply the opposite: a First-In-First-Out (FIFO) structure. The current Java recommendation is also to use the Deque (normally pronounced "deck", if you were interested, and standing for Double-Ended Queue).

Having read the discussion of ArrayList vs LinkedList, many of the same considerations apply - but given the common use-pattern of stacks/queues, the different implementations make sense.


Deque

Java's Deque implements the Queue interface, and can be used as either a Queue or a Stack, offering methods appropriate for either use.
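For example, a single ArrayDeque can play either role:

import java.util.ArrayDeque;
import java.util.Deque;

public class DequeExamples {

    public static void main(String[] args) {
        // used as a Stack (LIFO): push and pop both work on the head
        Deque<String> stack = new ArrayDeque<>();
        stack.push("first");
        stack.push("second");
        System.out.println(stack.pop());    // "second" - last in, first out

        // used as a Queue (FIFO): offer at the tail, poll from the head
        Deque<String> queue = new ArrayDeque<>();
        queue.offer("first");
        queue.offer("second");
        System.out.println(queue.poll());   // "first" - first in, first out
    }
}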


ArrayDeque vs LinkedList

Similar to ArrayList, the array-based implementation (ArrayDeque) is the most popular and, by and large, the most recommended implementation to use.

Based on what we already know about ArrayList and LinkedList, and what we know about Stack vs Queue behaviour, there would seem to be a natural use for each (e.g. LinkedList seems like a good option for Stack/LIFO use - we can easily push by adding new objects to the front of the list, and pop by removing from the head of the list - both operations O(1) - compared to the cost of adding to the front of an ArrayList, which requires a lot of copying).

However, the ArrayDeque implementation uses a circular array - so no copying is required when adding/removing at either end, and those operations run in (amortised) constant O(1) time - whereas the LinkedList implementation carries a slight performance overhead, using additional memory to create "nodes" for each object in the list.







iWatch & wearable tech

Much has been made lately of Apple's announcements of the iPhone 6 and their new Apple Watch, and really, I'm pretty late to the party on this one. I don't normally comment much on Apple announcements, but this time I fancied a rant.


Personally, it's not to my tastes. I know the strap is interchangeable, and you can change the watch face screen, but I'm not a fan. In part because I am more of a fan of classic watch design (so really, some of the photos of the Moto 360 look closer to what I would want a smartwatch to look like), but generally, it feels a little garish, and a bit like something designed 5 years ago - the curved, almost bubble-like glass shape of the device, the square watch face.



I'm not sure what it is.

Maybe it's because it feels at odds with current web design trends towards flat design. Maybe it's because the iPhone matured/evolved from its original curved shape to the current sharper, flatter design, and this still seems to hark back to the original iPhone.

Anyway, as an aside, I think if I was going to spend hundreds on a flash-y digital watch, I would quite like this one:



Sure, I can't check emails on it, but it looks nice.  But then I'm not really someone who should be commenting on style and fashion, so will stick to tech trends..


Is wearable tech the next big thing?


Honestly, I think probably not - not for the time being, at least. I'm sure some smart people will work the market out eventually, but with the current incarnations of smartwatches, I don't think the market is really there.


So, here's the thing - I'm not really sure what the point of the Apple Watch is (and I guess that is what needs to be cracked before the market can take off). I think it's going to face the same challenges that tablets have faced - it needs to grow up and work out what it is. It needs to understand what its purpose is and find its niche - it's not going to be good enough to be the same as a smartphone, just in a different form.


Let's have a look at the markets:

Smartphones:
  • Defined and fairly standard "upgrade-cycles" - the expectation of upgrading devices every 12-24 months is fairly established in the West, and this is both driving existing customers to newer, better devices and slowly migrating existing feature phone users to smartphones.
  • Everyone has one - at its core, as a phone/communication device, it provides that roaming communication functionality, and there is at least one per person (not shared).
  • Convenience - easy to carry and roam.

Tablets:
  • No real defined upgrade-cycle (in terms of device contracts) and too early to see patterns being established - optimistically will fall back to traditional PC cycles of approximately every 5 years
  • Not per-person devices - you might typically expect everyone in an average household to have a (smart)phone, but probably only one or two tablets per house.
  • It doesn't solve any real problem - sure, it's a little better for watching movies than a smartphone, but apps, browsing, emails and other comms are not noticeably better. Further, for tasks that might need greater control or screen real estate - like creating spreadsheets or presentation slides - people still fall back on regular PCs. There needs to be a purpose/task/app that is made for a tablet-sized device - where tablets solve a problem that only they can. As of yet, we're still waiting.

Smartwatches:
  • Still yet to see upgrade cycles - will be interesting if carriers/manufacturers try to tie these into existing mobile contract structuring. It will make adoption even harder if they try to sell it with a 1-2 year lifecycle for sure.
  • Aimed to be per-person, as a sidekick to your smartphone - but if all it offers is your smartphone in a slightly different form, it's again going to be a tough sell. Sure, it's slightly more convenient than getting your phone out of your pocket, but that seems like a dubious USP to base an entire market on.
  • It doesn't solve a problem - again, there needs to be a task/area/job where the smartwatch is the answer, where it fills a need that simply can't be filled by a smartphone (or can be, but is a pain in the ass).



For a change, let's have a look at some smartphone/tablet data:


Tablet sales struggle: Apple iPad growth projections by quarter

(Source: Computer World: As tablet growth slows, Apple may face a year-long iPad sales contraction )

Just focusing on Apple for the moment, Benedict Evans presents some interesting data analysis of their recent sales/revenue numbers. Firstly, we see that iPad sales have flattened out and basically settled where they are for the last two years, whilst iPhones have continued to see growth year on year:

http://ben-evans.com/benedictevans/2014/4/25/ipad-growth
(Source: Benedict Evans - iPad growth - Apple's trailing 12months sales)


More generally, if we look at the comparison of sales across PC vs Android/Apple smartphones, we see that PCs have levelled out, but the smartphone continues to see huge growth:

http://ben-evans.com/benedictevans/2014/4/25/ipad-growth
(Source: Benedict Evans - iPad growth - General shipping - PCs vs Smartphones)




It really looks like the smartphone market is continuing to surge. With standard upgrade-cycles, and low end Android smartphones becoming more widely accessible, this trend seems set to continue.

On the other hand, until the tablet market works out its purpose and finds a niche, I think it will continue to stagnate with fairly flat growth. 

It feels to me like a similar fate awaits smartwatches: until someone comes up with a compelling problem that the watch form factor solves, I think they will struggle to see big growth in the market.





BBQ Sauce

Continuing the recipe theme, I also created a bbq sauce last summer. It's a sweet, tomato-based recipe, and it went down pretty well when I served it.


Ingredients


  • 400ml tomato sauce (I just used Sainsbury's own brand)
  • 50ml Southern Comfort (optional)
  • 1 table spoon yellow mustard (just normal hot-dog mustard, like French's)
  • 1 table spoon chilli powder
  • 80g sugar
  • 60ml cider vinegar
  • 1 teaspoon garlic powder
  • 60ml Worcestershire sauce
  • 1 teaspoon smoked paprika

Just slam the ingredients in a saucepan for a while and reduce to a bbq-sauce consistency.
(to be honest, you can knock out a quick "cheat's" bbq sauce in two minutes that will go down pretty well - as above, but reduce the ketchup to roughly 250ml and the sugar to about 60g, then just a few glugs of cider and a few of Worcestershire sauce, and mix it up - tweak the ingredients to taste and it should be ok!)




Here it is on some chicken legs..



Tech cheat sheets - Lists & arrays

In Java, arrays and lists are an ordered collection of non-unique elements. They are probably one of the most common data structures you might have come across in your day-to-day programming.

Array vs List

In most cases, in Java you are more likely to use Lists over arrays - Lists provide more functionality as part of the API than the array does, so given the option, most people will use a list.

The two most common use cases for an array in Java are:
  1. An array of primitive types - Java generics only support object references, although autoboxing reduces the need for this, as you can still insert int values into a List<Integer>, for example.
  2. Micro optimization in performance critical systems

In most other cases, people generally use Lists, as the List interface offers more convenient functionality and further control over the type of List:


ArrayList

ArrayList is a simple array-based implementation of the List interface.

Performance

add( T item ) - Adding a single element to an ArrayList using this add method will just add the element at the end of the list, which is very cheap - O(1)

add( T item, int index) - Adding a single element to an ArrayList at a specified position is less performant, as it needs to copy all elements to the right of the specified position, so this is more expensive and runs in linear time - O(n)

remove( T item ) - Similar to adding at a specified position, this is less performant  as it involves an array copy (plus, if we remove a specific Object rather than an item at a given position, it still potentially needs to access all items in the list). Again, linear time - O(n)

set/get( int index ) - This is very cheap in an arraylist, as it is just backed by an array, so can be looked up in constant O(1)


LinkedList

LinkedList in Java is a doubly linked list implementation of the List interface (e.g. every element in the list stores a pointer to the previous & next elements in the list - and these pointers are used to access/traverse the list).

Performance

add( T item ) - Adding a single element to a LinkedList using this add method will also just add the element at the end of the list same as the ArrayList, which is very cheap - O(1)

add( T item, int index) - Again, adding at a given position (** using this method!) is more expensive: insertion in a linked list is cheap, as the pointers just need to be adjusted, but you have to traverse the list to find the position, which puts you back to running in linear time - O(n)

remove( T item ) - Again, this method has the same performance/issues as the above add at position method, having to traverse the list to find the element to remove. So again, runs in linear time - O(n)

set/get( int index ) - as we need to traverse the list to find the position, this potentially touches every element in the list, so it runs in linear time - O(n)


** As you will note from the above summary, ArrayList is better for the get & set methods, with equal performance for the add/remove methods. However, LinkedList does have the benefit of being able to use an Iterator to add/delete elements in constant O(1) time - e.g. if you are iterating a list and are already at the position you wish to insert/delete at, then it is very cheap - see the JavaDocs. A sketch of this follows below.
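A small sketch of that iterator-based insertion:

import java.util.Arrays;
import java.util.LinkedList;
import java.util.List;
import java.util.ListIterator;

public class IteratorInsertExample {

    public static void main(String[] args) {
        List<String> list = new LinkedList<>(Arrays.asList("a", "b", "d"));

        // once the iterator is positioned, add() is O(1) for a LinkedList -
        // no traversal or array copying is needed at that point
        ListIterator<String> it = list.listIterator();
        while (it.hasNext()) {
            if ("d".equals(it.next())) {
                it.previous();          // step back so the insert lands before "d"
                it.add("c");
                break;
            }
        }
        System.out.println(list);       // [a, b, c, d]
    }
}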



Conclusion


By and large, for most simple List cases (not considering wanting to use Sets, Queues, Stacks etc) the most common choice is ArrayList as it offers, generally, the best performance.

If you know you need to have a very large list, and know that you will always be inserting new elements towards the head of the list, then LinkedList may be a better alternative - if you only ever insert at the start of a list, LinkedList will perform in constant O(1) time, whereas ArrayList will take linear O(n) time.

There are further implementations of the List interface in Java, such as Stack and Vector. I will look at some of those in other posts.


Cream money management - A Side project

Cash rules everything around me, it's all about the money, dollar dollar bill yo



I have for some time been bemoaning the state of online banking in the UK offered by the major banks I have banked with. It feels like they just can't be bothered - they are confident that no-one is going to disrupt them so they just don't make any effort.

My current bank makes it almost impossible to manually pay off your own credit card online, and all it really offers is a list of transactions against an account - the only good thing it offers is the ability to download your transactions.

It's frustrating. They have so much information, but provide so little.  So I told my wife I would make us an app to make this better.

Features of the app are:

  • create multiple accounts to manage together
  • upload statement/transaction lists as exported from online bank providers
  • automatically categorise and tag as many imported transactions as possible based on a set of rules
  • using sensible full text search, attempt to categorise and tag any remaining transactions based on other similar transactions that have been tagged
  • allow manual tagging/categorisation of transactions (that will then feed back into later imports of transactions etc)
  • link together transaction groups - identify recurring transactions and group them so they can be automatically categorised and tagged, and also provide alerts/warnings on changes in payments (e.g. if a recurring transaction suddenly increases in cost, it likely suggests that a fixed price deal has come to an end, so the system identifies this and alerts the user to the change)
  • easy filtering and cutting up of data based on category, tags, date, price, description etc.. basically anything
  • tonnes of charts to show the different cuts of data and interesting points



Best free stock images

As internet speeds generally continue to increase, the trend to have full screen background images continues.  I like to have full screen images on splash screens and landing pages, so here are some of the best (free) image resources I have found online:


  • unsplash.com - 10 new, completely free, high-quality photos every 10 days. Lots of great photography, with nice landscape/outdoors stuff (archive)
  • Death to the stock photo - an email based subscription, where original high quality shots are sent to you via email
  • The Pattern Library - I love this, not stock photography, but an art project of background textures/patterns. Just scroll down to check them out. This is my favourite.
  • Subtle Patterns - Similar to above, not stock photos, but nice, free to use textures. Simpler than the pattern library, but lots of nice backgrounds to use.
  • Free Images (formerly stock.xchng) - if you can ignore the site design & adverts, and wade through the 90s-style stock graphics, there are still a lot of nice images to use.
  • Pic Jumbo - More high quality, large images






On to the next one: 2014's quick dry rub


Something a little different this time.

Last year, most of my bbq involved my variation on a Kansas City dry rub (I will dig out the recipe and post that sometime). A few weeks ago, I decided to make a new dry rub for this summer - but I was in the mood for something more herb-y. Initially I planned to experiment with jamming in some oregano, thyme etc - but in the end, on discovering I didn't really have any of these things to hand, and finding a jar of this:

It was due to expire later this year, so I decided to cheat and just stick that in, and see how it worked out (ingredients listed were: Sage, Marjoram, Thyme, Oregano, Parsley, Basil  - no mention of ratios though).

It was pretty good - I based the measurements purely on the amount of the mixed herbs I had left, and made about a jar.


Ingredients


  • 3 table spoons dark brown sugar
  • 1 table spoon salt
  • 1 table spoon smoked paprika
  • 4 table spoons Sainsbury's mixed herbs
  • 1/2 table spoon garlic powder
  • 1/2 table spoon onion powder
  • 1/2 teaspoon all spice
  • 1 table spoon light brown sugar



Basically, just measure the ingredients in a bowl, mix them up and stick them in a jar.

It tasted pretty good - a nice mix of sweet and herb-y. I have since used it on an ad hoc roast potato/tomato bake as well, which worked pretty well (generally, I have found most dry rubs work well as ad hoc seasoning for potato wedges/chips/etc.) and of course on the mandatory bbq'd chicken (left half the new rub, right half the old Kansas City variation):


Creating a Java URL shortener

A long time ago I posted a brief article about using Google's URL shortening API to easily create shortened URLs for your application (e.g. if you wanted to post stuff server-side to Twitter and cared about character usage, you could fire your URL over to Google's API and just use the result).

However, recently I have had to think about how to shorten URLs myself - and really, it's pretty easy to implement.


Creating a unique, repeatable identifier for a URL
I think a lot of people's first instinct might be to go for hashing the URL string - this isn't a good idea for a few reasons though:
  • Length - most normal hashing algorithms (md5/sha-*) produce long strings, which kind of goes against the point of a URL shortener
  • Unique-ness - obviously, if this is going to be a URL identifier then it needs to be unique, and hashes by their very nature are not unique - which means you would need to handle the scenario where a URL produces an already-used hash and generate an alternative
  • Look-up - as hashes are not (easily) reversible, you would need to look up the URL using the hash as the DB key - which may not be ideal given a very large set of URLs (imagine the number of URLs bitly.com has)


Thankfully there is a viable, easy solution available.


Let's first think about our database structure for persisting our URLs - in the simplest case we could probably get by with two columns:
  • id (DB generated sequence ID)
  • url - text field to capture the URL value

Generating the identifier from a DB
  1. Now, if you provide a String URL value, your code just needs to insert it into the table; this will create the row and the unique ID.
  2. Next, fetch that unique numeric ID and convert it to base-62 (rather than normal base-10, this allows 0-9, a-z, A-Z as characters). This gives you an identifier in the form of "1jLPSIv", and also provides a massive ID space that fits into relatively few characters (which will form part of the shortened URL) - 6 characters of base-62 gives 62^6 possible unique combinations (56,800,235,584 in total). A sketch of the conversion follows below.
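The base-62 conversion itself is just repeated division - a minimal sketch:

public class Base62 {

    private static final String ALPHABET =
            "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

    // converts a DB sequence id into its base-62 string representation
    public static String encode(long id) {
        if (id == 0) {
            return "0";
        }
        StringBuilder encoded = new StringBuilder();
        while (id > 0) {
            encoded.insert(0, ALPHABET.charAt((int) (id % 62)));
            id /= 62;
        }
        return encoded.toString();
    }

    // converts a shortened identifier back to the numeric DB id for look-up
    public static long decode(String identifier) {
        long id = 0;
        for (char c : identifier.toCharArray()) {
            id = id * 62 + ALPHABET.indexOf(c);
        }
        return id;
    }

    public static void main(String[] args) {
        String identifier = encode(103119489480L);      // e.g. a DB-generated sequence id
        System.out.println(identifier);
        System.out.println(decode(identifier));         // round-trips back to the original id
    }
}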

You may not want to directly leak the DB IDs in the URL, in which case you can easily salt the IDs as you see fit.



Bit.ly example

Looking at bit.ly shortening URLs, it would appear that they follow the same pattern. I shortened two of my previous blog post URLs one after another, and the URLs generated were as follows:

bit.ly/1oYgrsG

bit.ly/1tR63Zb

If we look at those two URL identifiers, they look a lot like base-62, so let's convert them to base-10 assuming base-62 (and let's look at base-64 as well, just for fun - they may be using that):

URL - Base64>Base10 - Base62>Base10
1oYgrsG - 3685493160710 - 103119489480
1tR63Zb - 3690751293019 - 107587946123

As you can see, they look reasonable - there's a chance that there were a few URLs created between my two, but I suspect there is a reasonable amount of salting going on, so it is not just a one-by-one increment (although interestingly, if you take a bit.ly URL and just increase the last character by one - assuming base-62 - then it will likely give you another valid URL, so you can have your own game of manual-webpage-roulette!).


Algorithm analysis & complexity: MergeSort - part 1

I have just signed up for the Coursera Algorithm Design & Analysis course from Stanford. I have been meaning to take it up for a while but kept missing when it was running. As it happens, this time I have all but missed it, with the final assessments due already, but I have signed up and am going through the material and lectures on my own.

I am only on week one so far, but it is all good - the lecturer seems good and articulates the content well (so far the presentation and sophistication of the course is better than the Scala & FP course and the Logic courses I have previously done on Coursera).

After I completed the Functional Programming & Scala course, I wrote up some notes on some of the techniques and problems in Groovy and thought it might be fun to write up some groovy notes from the algos course.

Week one covers an introduction to algorithm analysis, divide and conquer (with merge sort) and some other stuff.


Mergesort - Analysis

Mergesort has four key steps:
  1. Split the list into two halves
  2. Recursively sort the first half
  3. Recursively sort the second half
  4. Merge the two sorted halves
This is a well-known example of the divide & conquer paradigm - the recursive sorting continues to recurse and divide the list into two halves until sorting is trivial (e.g. the base case of one item in each list).


Let's look at the merge step:

(pseudocode taken from the course)
result = output [length = n]
leftList = 1st  sorted array [n/2]
rightList = 2nd  sorted array [n/2]
i = 1

j = 1
for k = 1 to n
  if leftList(i) < rightList(j) 

    result(k) = leftList(i)
    i++ 

  else [rightList(j) < leftList(i)]
    result(k) = rightList(j)
    j++
end



The above should be fairly straight forward to understand:
  • result stores our final merged list, and will be length n (number of elements in starting array)
  • leftList/rightList - these are the two inputs to the merge, and are simply the two halves of the list that need to be merged. This assumes that both lists are already sorted prior to merging
  • i/j - These are pointers to what we consider the "head" of each list. Rather than removing items from lists we will simply keep track of a pointer for each list to track how many items of each list have been merged
  • We then iterate through all "n" items, comparing the "head" of our two lists, the smaller value getting added to our result list


Complexity analysis of merge step:

In the course, the lecturer states that the worst case running time = 4n + 2, which he then further simplifies to 6n (this simplification is simply because n must always be at least 1, so 4n + 2 will always be no worse than 6n).

Let's look at how we get to that figure (we will then later look at what the asymptotic run time is). This figure is simply the number of operations that need to be executed when this algorithm is run so let's count those:

result = output [length = n]
leftList = 1st  sorted array [n/2]
rightList = 2nd  sorted array [n/2]
i = 1

j = 1
In the above, the only operations actually executed are the assignments of i & j (the other lines are just describing the function input/output). So that costs us 2 operations (constant, regardless of how big "n" is).

for k = 1 to n
  if leftList(i) < rightList(j) 

    result(k) = leftList(i)
    i++ 

  else [rightList(j) < leftList(i)]
    result(k) = rightList(j)
    j++
end



In the above, we have one condition check/increment of "k" in the for statement (executed every iteration, so this will be done "n" times); we then have an IF conditional check, which will also be done in every iteration (so this is another "n" operations); then, depending on which branch of that condition is executed, there are always a further two operations executed (the assignment of the head to the result and the increment of the head pointer) - another two operations executed "n" times.

So the above block executes 4 operations for each of the "n" iterations, and the first block executes 2 operations, giving:

4n + 2

Seems straightforward enough. Really though, this is just a fairly simplified piece of pseudocode, and there is a bunch of additional code needed to handle other stuff, edge cases etc (e.g. what happens if you have added all of the left list to the result and only have the right list left? Obviously that can just be appended to the result, but the above code doesn't handle it). Then there are further questions about different language implementations that might need more operations to do this (not to mention differences when it gets compiled down to machine code) - we will get more into this later when we talk about asymptotic performance. For the time being we will just say the run time (worst case) for merging two lists is 4n + 2.


Now, let's look at recursively sorting..

This part of the algorithm is also relatively simple: we just keep recursing through the list, splitting it into two halves until each half is just one element (one element is already sorted!), then we can merge those lists easily.

So, if we are recursing through and splitting the list into two halves, how many times will we have to perform the merge?

Let's step through this, with an example input list of 8 elements (n=8)
  1. One list of 8 elements
  2. Two lists of 4 elements
  3. Four lists of 2 elements
  4. Eight lists of 1 element
this is just a basic binary tree - see the illustration below taken from wikipedia:


As you can see in the above tree, it splits down into individual elements, then it re-combines them using the merge technique. To work out the complexity of the algorithm we need to know two things:
  1. How deep is our tree going to be?
  2. What is the runtime at a given depth of the tree (e.g. for a given depth "j" in the tree, what is the cost of re-merging all the lists at that level)?

The first question is relatively easy: as it is a binary tree, we know the depth is simply going to be log2(n) + 1. How do we know that? The log of a number is simply how many times it needs to be divided by 2 (assuming 2 is the logarithm base) before the number is <= 1 - which is exactly what we are doing when we recursively divide our lists in half until we reach single-element lists.

E.g. the log2(8) = 3 (8/2=4; 4/2=2; 2/2=1)



So now let's calculate the running time for any given level of the tree. Let's consider a list of length "n", and we want to know a few things:

How many merge operations do we need to perform for depth "j" of a list "n"?
In a similar way to how we calculated the depth, we can work out the number of merges (or the number of leaves at that level of the tree). If you have halved the lists at each step 0..j, then the number of leaves will be 2^j.

E.g. the first level down, after just halving the original list, we have j=1, so expect 2^1 = 2 leaves.
Second level down, we again halve each of those two lists; we have j=2, so expect 2^2 = 4 leaves.

Note that j cannot exceed the depth of the tree, which we already know.


What is the complexity of each of these merge operations?
We know the complexity of a merge from our earlier analysis of the merge step as 6m (where m is the size of the halved lists - not the original value "n" of the starting list size), and we know we have 2^j of these merges, which means that at any level of the tree, the run time performance is:

2^j * 6m

Now the value of "m" will also depend on the value of "j" - as we get further down the tree we have a greater number of merges to perform, but the lists themselves are getting smaller (so the actual value of "m" is decreasing). So what is the value of m at level "j" of the tree? We can work that out in a similar fashion: we know the original "n" (the size of the starting list) has been halved "j" times, so it is simply:

m = n/2^j

So now we can combine those and resolve the algebra. We can see that the 2^j terms cancel each other out (intuitively enough - we are increasing the number of merges, but at the same time, and at the same rate, reducing the size of the lists to be merged):

2^j * 6 * (n/2^j) = 6n




What is the overall complexity of the tree?
So we now know the run time of any given level of the tree (the same 6n regardless of the level) and we know the depth of the tree (the number of levels):

6n * (log2(n) + 1) = 6n*log2(n) + 6n
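
For completeness, the same calculation written as an explicit sum over all log2(n)+1 levels of the tree, each level costing 6n:

$$\sum_{j=0}^{\log_2 n} 2^j \cdot 6 \cdot \frac{n}{2^j} \;=\; \sum_{j=0}^{\log_2 n} 6n \;=\; 6n(\log_2 n + 1) \;=\; 6n\log_2 n + 6n$$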

So there we have it - we have calculated the worst case running time for merge sort. For our n=8 example, that gives an upper bound of 6*8*(3+1) = 192 operations.


There are a few further things that we will need to consider in later posts: in one post I will take a quick look at asymptotic analysis (i.e. "Big-Oh" notation) and what it means; in another I will look at a Groovy implementation and its analysis.









Groovy bug - Stackoverflow calling super methods

I recently stumbled upon a bug in the version of Groovy that I was working with (Groovy 2.1.5). It's a strange one that seems to have been solved in Groovy 2.2, but I thought I would post it here in case it is useful to anyone else.

The problem is demonstrated in the code below - the exact trigger seems to be an inheritance structure more than two classes deep, where the method return type changes.
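
The failing shape is along these lines - a minimal sketch with illustrative class names, where the key ingredients are a hierarchy three classes deep, the return type narrowing from def (Object) to String part-way down, and each override calling super:

    class Grandparent {
        def doStuff() { "grandparent" }
    }

    class Parent extends Grandparent {
        // the declared return type changes from def (Object) to String here
        String doStuff() { "parent -> " + super.doStuff() }
    }

    class Child extends Parent {
        // on Groovy 2.1.5 this super call dispatches back to Child.doStuff()
        String doStuff() { "child -> " + super.doStuff() }
    }

    println new Child().doStuff()   // expected: child -> parent -> grandparent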



The result of running the above code in Groovy 2.1.5 is a StackOverflowError - it seems as though Groovy just recursively calls the doStuff() method on the Child class until it blows the stack, rather than calling the method on the parent class. Running on Groovy 2.2.0+ appears to work correctly.

I posted the question on stackoverflow.com, and it was suggested that it was maybe related to this bug, which has a very reassuring resolution comment:

looks like the issue was fixed somewhen between 2.2.2 and 2.3.0


Yo: One reason it doesn't suck, completely


There has been a lot of talk about Yo recently. Initially the talk seemed to be driven by the fact that this seemingly pointless app had raised $1 million in angel funding (a good way to generate publicity and hype, I guess, if you happen to have a milli lying around). A lot of people pointed to this as further proof of the tech bubble they had long been predicting. Others just talked about how crappy it was (albeit indirectly).

At a glance, neither the app nor the funding round seems particularly interesting. The app sounds distinctly like a novelty that has only generated interest (and therefore users) on account of the funding, and seems destined to be a blip in the tech-history books much like Chat Roulette (although I can't really see it hitting the heights of Chat Roulette - at least people still make the occasional joke about Chat Roulette; give it three months and no one will be talking about Yo). The funding is less interesting when you hear the full story: according to Forbes, the app was created by a chap called Or Arbel after his former boss, Moshe Hogeg, asked him to make an app so he could buzz his PA without having to call. When Arbel then switched the idea up a little, Hogeg led the funding round - so a guy invested in what was basically his own idea, which probably wasn't the world's hardest pitch.

Context-based messaging

One reason cited for the app not being that lame is that it is actually trying to fill a gap by providing context-based messaging - e.g. when you ping someone a "yo", they already know the context of the message, so you don't need to say anything else.

I don't agree - at least with the examples cited so far, it still seems pretty lame. The main example that has been used is the World Cup: you could subscribe to WORLDCUP by yo-ing them, and then you get a "yo" back every time a goal is scored. Which honestly doesn't sound that great. I have been following the World Cup pretty avidly, but getting intermittent messages saying someone scored doesn't sound useful, and the main problem is that I would then have to launch another app to actually get the context (e.g. which team scored). There are lots of other solutions that provide simple notifications including the context.


Some other examples:

Wanna say "good morning"? just Yo. 
Wanna say "Baby I'm thinking about you"? -- Yo. 
"I've finished my meeting, come by my office" -- Yo. 
"Are you up?" -- Yo.

These all make sense, but what next? How do you continue the conversation? How do you confirm or agree that all-important context of the message? All these simple "yo"s drive users to other apps - which means more effort and more taps - so why bother starting the conversation in Yo at all? Why not just use WhatsApp/SMS/Facebook/GChat/etc from the start?


Ok, so clearly Context-Free messaging is a bad idea..

However, in my opinion there is a glimmer of hope for the app - the question for me is really just whether it has enough runway to execute before it becomes another novelty app in the history books.

A few years back, just before Twitter went all dick-ish with their API and started locking out the entire developer community and third-party eco-system, there were a few good articles discussing an alternative business model for Twitter: rather than becoming a media company (which it has become - rich content including pics/videos, and trying to drive all eyeballs onto twitter.com or the official apps rather than third-party clients), it should become a global messaging system - it had the infrastructure to be a massive pub-sub/notification system. (I think at the time there was a better article that I read, but I can't find the link now, so that one will have to do - if anyone thinks they know the one I mean then please add it to the comments!)


This is a space that I think Yo could step up and fill. And it's possible that they are thinking the same - they have already announced their API, and have suggested some simple notification systems that could utilise it - in which case, it could become a really interesting platform.


So who knows - there is potential for it to become something pretty neat. Odds are it will just end up a passing gimmick though.





The Idiot Box: Disrupting TV

There has been a lot written in recent months about the changing role of content on the web and on devices, particularly with Apple's acquisition of Beats (things just ain't the same for gangsters) - and Benedict Evans recently questioned whether content is actually still king. I think we can agree he makes a good point: when it comes to music, it is no longer a USP, it's just expected. Between YouTube, SoundCloud, Spotify, Google's Play services etc, there is no reason people can't just stream any music they like, fairly seamlessly, and switch between providers/apps just as easily. Apple had invested massively in iTunes, but the iTunes buy-to-download model isn't what people want any more (hence buying Beats - actually for their streaming service, maybe).


A more interesting area is television, where content is still a key factor: Netflix, Amazon Instant Video (formerly LoveFilm) etc are compared largely on their content - as services there is little between them otherwise. So really it's no surprise to see all the usual big players getting involved in the market: Google (Chromecast etc), Amazon, and Apple (Apple TV).

Within 5 years I think we will see a massive shift in viewing patterns - with pretty much all new TVs sold today being web-enabled, I think it's inevitable that people will move to all-on-demand services rather than being dependent on scheduled programming. Within a further 5 years I wouldn't be surprised if scheduled programming was all but dead and gone.

We recently bought a NowTV box - at just £10 it's a pretty low barrier to web-enabling your TV - and we have pretty much switched over entirely to on-demand services; despite having a PVR, we still go for the on-demand options.


I think there will be some interesting things that come from this:

Platform Fragmentation

I think this is the biggest problem facing the market at the moment, and to me it looks like a massive opportunity. Currently there is a lot of fragmentation across platforms: PlayStation, Xbox, Android, iOS, TVs (Sony, Samsung, etc) all have their own platforms, and any on-demand service that wants to offer an app on every platform needs work from either the platform owner or the service provider. If it's the platform owner, then the provider loses control of UI/UX, app features and consistency across platforms; if it's the service provider, then they have a lot of work to do to support the different platforms, and they will inevitably have to decide whether or not to bother with each one.

At the moment in the UK, if you buy a web-enabled device you can't guarantee that it will have the basic UK free-to-air on-demand services (BBC, ITV, Channel 4, Channel 5). When I bought the NowTV it didn't have a Channel 4 app, and it still doesn't have ITV (and that's a platform backed by BSkyB, which is a fairly large organisation). There are already some Sony TVs that are no longer supported, where provider apps are no longer being developed or maintained.

Android and iOS aside, all platforms will suffer this problem - that is, until someone comes along with a platform standard/OS that is open and can be re-used across devices. Given Android's prevalence and Google's investment in TV, it would seem the best-placed candidate to tackle that, but let's see!


What is driving content production?

With scheduled TV there are quiet times - early hours of the morning, weekday daytimes - and content is created specifically for them. Providers each have to fill their schedules for these hours, so this content is commissioned, bought and run. However, if scheduling were to end and it was all on-demand, would people still make this content? I'm sure there will still be demand for some of it, but the audience who watch it just because it's on will obviously diminish. Students in the UK have a long tradition of watching daytime TV, whether it be Countdown, Diagnosis Murder or Quincy - but in the era of on-demand, why would they seek out this content?



Ideas

So this is just a quick note on some things that I am personally finding really interesting right now.

Education 

As mentioned, I think this is going to be cracked soon, and maybe Khan Academy will do it. Either way, I think whoever does it will need to be a "full stack" startup. I have thought about trying some things - like a GitHub-type system for open-sourcing education materials and resources, or creating open standards for curricula and educational texts - but they have always just been tools or things around the periphery. I don't think I have the resources right now to be thinking full stack!


Android first

Android is the most pervasive mobile OS (yes, there are some caveats about the stats, such as numbers coming from China, and the value of the customer compared to iOS), and Google continue to widen their reach (recently purchasing Nest etc), so I think we are going to see our first truly Android-first apps. If Instagram was built today, would it be iOS first? Probably, but I think that landscape is changing, and coupled with the following areas this could be a big one. If I'm building a mobile app, it's going to be Android first. Silicon Valley is under-invested in Android, and there is also growing appetite for it.



Smartphone as your social graph

I have mentioned this one a few times here. Smartphone usage continues to increase (even whilst tablet sales falter) as more and more existing mobile users across the world upgrade. Further to this, your smartphone is really where your social graph is: your address book has your contacts in it, your phone number gives you a unique identifier, and, as WhatsApp proved, this gives mobile-first startups great power to disrupt the social network incumbents. It could be argued that Google+ was never going to usurp Facebook as king of the social networks because people are lazy and essentially creatures of habit - if all your friends are on Facebook, why try to convince your entire network to switch to G+? But the smartphone takes away that power, as your network is on your device, not on a particular platform. Couple that with the fact that so many people use their phones as cameras, and most of our photos/videos are on the device too.

Television

I was recently talking with some colleagues about the future of TV, and I speculated that in 5-10 years we may see the end of scheduled television, with everything on-demand only. This would leave some interesting questions/problems:

  • One big problem as I see it is the fragmentation of device software - if you are creating an on-demand app, you need to think about Android, iOS, browsers, Xbox and PlayStation, not to mention all the TV manufacturers that have their own software running on their web-enabled TVs. This means that at the moment, if you buy a new web-enabled TV or set-top box, the on-demand apps may not be available, and may not be consistent. I think this could be a good market for Android, and I wouldn't be surprised if they make some bold moves in this area (yes, bolder than Chromecast) - it would make sense for them to link up with TV manufacturers to become the de-facto TV OS (it would benefit from the Android app eco-system, even if it would mean a LOT more work for app developers to support another range of screen sizes).
  • Another interesting implication of such a switch is whether we would see a decline in produced content. At the moment a lot of content is created specifically for the quieter times of the viewing schedule (Mon-Fri afternoons, early mornings, etc) - what might be considered "filler" - but we might see a decline in this, as consumers will have complete control over what they watch, and there won't be any watch-it-because-it's-on mentality.
I think it's an interesting area where the battles are really heating up, with big players in the content production/distribution space (Netflix/Amazon/YouTube) as well as Google/Apple etc taking on the incumbents in hardware. Just today I saw a link to a site called Glass that is dedicated to ongoing conversation about this topic.


Sal Khan has changed my mind

This lecture is great - it's by Sal Khan of Khan Academy, and he talks about how Khan Academy started and some of its goals.

I had previously started writing an article about the current raft of tech-education startups (Khan Academy, Udacity, Coursera, etc) and how I didn't think they were really in a place to disrupt education in the UK/US. I thought they were great at providing lectures/materials to everyone with a web connection, but I didn't see how they were going to change the education systems here.

I thought they were all focused on providing a tech solution, and I didn't really think that technology could replace human-led education - without the personal engagement and stimulation to encourage individual learning, it would inevitably lead to distraction and local optima in knowledge. But the things that Khan Academy are doing with schools are really exciting.

Well worth a listen.
