
Amazon & the circle of life

Last week, Amazon announced plans to open their first bricks & mortar store, in central New York not far from Macy's department store. The move is an attempt to provide some of their customers with traditional face-to-face customer service.

If you have been following other recent Amazon announcements and expansions, you will be familiar with their other recent moves:

Same day delivery - Amazon has been a dominant power in e-commerce for a long time, and given the small margins they operate on, and their willingness to be loss-leaders on some products to drive business elsewhere, it's going to be difficult for any newcomer to genuinely compete. As Amazon continues to spread into all forms of goods, it's also slowly making it harder for shops to operate in a specialist vertical. However, there are always going to be times when convenience, and the ability to have something immediately, trump price, so there was always going to be a market share that Amazon would lose. A lot of the time people will pay a small percentage increase in price to have the product in their hands in a few hours. Same day delivery will reduce that - again, there will be the premium cost of the delivery service, but customers can have products (almost) instantly. There are some significant logistical challenges to doing this, but being in a position to do it also opens up another opportunity:

Fresh groceries (having been doing non-perishable groceries for some time) - One of the biggest challenges to Amazon rolling out fresh grocery deliveries more widely (currently only available in parts of North America) is building the capacity to enable fast, same day delivery of fresh goods, so the goods can leave a chilled warehouse and be with the customer in a short time. What is a short time exactly? It seems from existing supermarket delivery services that most people are happy with same day delivery (or at least fixed day delivery - i.e. the goods leave the warehouse on the same day they arrive with the customer), using refrigerated delivery vehicles. This seems simple enough for an individual case - in the UK, Amazon could ship from their large Swansea warehouse in the morning and deliver the goods to most parts of the UK the same day. The inevitable problem comes when this starts to scale up. As soon as even a small percentage of the UK wants to order their weekly fresh groceries from Amazon, the problem gets a lot tougher, and Amazon would need a large, distributed warehouse/delivery infrastructure to enable this kind of efficient, same day grocery delivery. This is the kind of infrastructure that existing supermarkets like Tesco or Sainsbury's already have, having long since built it to provide a similar level of stock control and service to their supermarkets.


It's no secret that Amazon returns very little profit to their shareholders, and continues to plough the majority of their revenue into new business and further expansion - a large part of the investment going into expanding their delivery and warehouse capacity and distribution.

Scaling same day delivery is a hard problem to solve, if only because the solution is really just more distributed warehouses. Amazon have tried to ease the scaling problem with Amazon lockers and local partners/shops that can take delivery of goods, which the customer can then pick up at a convenient time (a local shop that a customer can collect their goods from on the way home from work, for example), but their move to bricks and mortar is really just another step into a long-established industry.

Is Amazon really disrupting groceries/shopping? Or are they just competing against the usual retail giants?  Neither same/fixed day delivery nor fresh grocery delivery is a new feature. Providing both online and physical stores is also not a new model, so really all we are seeing is another retail powerhouse slowly marching onwards and upwards with a fairly conventional business plan**. Maybe the only thing of interest is that it is doing it in a different order (Tesco scaled from bricks and mortar, to online shopping, to same day delivery; Amazon are simply starting from online and expanding to the others).


So I guess we will need to wait for drone-delivery to see any real innovation in the retail space.



** Conventional business plan for the retail/groceries aspect of the business - there are still lots of innovative and interesting things that Amazon are doing with mobile devices, cloud services, video delivery/production etc.

iWatch & wearable tech

Much has been made lately of Apple's latest announcements of the iPhone 6 and their new Apple Watch, and really, I'm pretty late to the party on this one.  I don't normally comment much on Apple announcements, but this time I fancied a rant.


Personally, it's not to my taste. I know the strap is interchangeable, and you can change the watch face screen, but I'm not a fan.  In part because I am more of a fan of classic watch design (so really, some of the photos of the Moto-360 look closer to what I would want a smartwatch to look like), but generally, it feels a little garish, and a bit like something designed 5 years ago: the curved, almost bubble-like glass shape of the device, the square watch face.



I'm not sure what it is.

Maybe it's because it feels at odds with current web design trends towards flat design. Maybe it's because the original iPhone matured/evolved from its original curved shape to the current sharper, flatter design, and this still seems to hark back to the original iPhone design.

Anyway, as an aside, I think if I was going to spend hundreds on a flashy digital watch, I would quite like this one:



Sure, I can't check emails on it, but it looks nice.  But then I'm not really someone who should be commenting on style and fashion, so I will stick to tech trends.


Is wearable tech the next big thing?


Honestly, I think probably not. Not for the time being at least. I'm sure some smart people will work the market out eventually, but for the time being, and with the current incarnations of smartwatches, I don't think the market is really there.


So, here's the thing - I'm not really sure what the point of the Apple Watch is (and I guess that is what needs to be cracked before the market can take off). I think it's going to face the same challenges that tablets have faced - it needs to grow up and work out what it is. It needs to understand what its purpose is and find its niche - it's not going to be good enough to be the same as a smartphone in a different form.


Let's have a look at the markets:

Smartphones:
  • Defined and fairly standard "upgrade-cycles" - the expectation and standard of upgrading devices every 12-24 months is fairly established in the west, and this is both driving existing customers to newer, better devices and slowly migrating existing feature phone users to smartphones.
  • Everyone has one - at its core, as a phone/communication device it provides roaming communication functionality, and there is at least one per person (not shared)
  • Convenience - easy to carry and roam.

Tablets:
  • No real defined upgrade-cycle (in terms of device contracts) and too early to see patterns being established - optimistically will fall back to traditional PC cycles of approximately every 5 years
  • Not per-person devices - You might be safe to typically expect everyone in an average household to have a (smart)phone, but probably only one/two tablets per house.
  • It doesn't solve any real problem - sure, it's a little better for watching movies than a smartphone, but apps, browsing, email and other comms are not noticeably better. Further, for tasks that might need greater control or screen real estate - like creating spreadsheets or presentation slides, for example - people still fall back on using regular PCs. There needs to be a purpose/task/app that is made for a tablet-sized device - where tablets solve a problem that only they can. As of yet, we're still waiting.

Smartwatches:
  • Still yet to see upgrade cycles - will be interesting if carriers/manufacturers try to tie these into existing mobile contract structuring. It will make adoption even harder if they try to sell it with a 1-2 year lifecycle for sure.
  • Aimed to be per-person, as a sidekick to your smartphone - but if all it offers is your smartphone in a slightly different form, it's again going to be a tough sell. Sure, it's slightly more convenient than getting your phone out of your pocket, but that seems like a dubious USP to base an entire market on.
  • It doesn't solve a problem - again, there needs to be a task/area/job where the smartwatch is the answer. Where it fills a need that simply can't be filled by a smartphone (or can be, but is a pain in the ass).



For a change, let's have a look at some smartphone/tablet data:


Tablet sales struggle: Apple iPad growth projections by quarter

(Source: Computer World: As tablet growth slows, Apple may face a year-long iPad sales contraction )

Just focusing on Apple for the moment, Benedict Evans presents some interesting data analysis of their recent sales/revenue numbers. Firstly, we see that iPad sales have flattened out and basically settled where they are for the last two years, whilst iPhones have continued to see growth year on year:

http://ben-evans.com/benedictevans/2014/4/25/ipad-growth
(Source: Benedict Evans - iPad growth - Apple's trailing 12months sales)


More generally, if we look at the comparison of sales across PC vs Android/Apple smartphones, we see that PCs have levelled out, but the smartphone continues to see huge growth:

http://ben-evans.com/benedictevans/2014/4/25/ipad-growth
(Source: Benedict Evans - iPad growth - General shipping - PCs vs Smartphones)




It really looks like the smartphone market is continuing to surge. With standard upgrade-cycles, and low end Android smartphones becoming more widely accessible, this trend seems set to continue.

On the other hand, until the tablet market works out its purpose and finds a niche, I think it will continue to stagnate with fairly flat growth. 

It feels to me like a similar fate awaits smartwatches: until someone comes up with a compelling problem that the watch form factor solves, I think they will struggle to see big growth in the market.




Algorithm analysis & complexity: MergeSort - part 1

I have just signed up for the Coursera Algorithm Design & Analysis course from Stanford. I have been meaning to take it up for a while but kept missing when it was running. As it happens, this time I have all but missed it, with the final assessments due already, but I have signed up and am going through the material and lectures on my own.

I am only on week one so far, but it is all good - the lecturer seems good and articulates the content well (so far the presentation and sophistication of the course are better than the Scala & FP course and the Logic courses I have previously done on Coursera).

After I completed the Functional Programming & Scala course, I wrote up some notes on some of the techniques and problems in Groovy, and thought it might be fun to write up some Groovy notes from the algos course.

Week one covers an introduction to algorithm analysis, divide and conquer (with merge sort) and some other stuff.


Mergesort - Analysis

Mergesort has four key steps:
  1. Split the list into two halves
  2. Recursively sort the first half
  3. Recursively sort the second half
  4. Merge the two sorted halves
This is a well-known example of the divide & conquer paradigm: the recursive sorting continues to recurse and divide the list into two halves until sorting is trivial (i.e. the base case of one item in each list).


Let's look at the merge step:

(pseudocode taken from the course)
result = output [length = n]
leftList = 1st sorted array [n/2]
rightList = 2nd sorted array [n/2]
i = 1
j = 1
for k = 1 to n
  if leftList(i) < rightList(j)
    result(k) = leftList(i)
    i++
  else [rightList(j) < leftList(i)]
    result(k) = rightList(j)
    j++
end



The above should be fairly straightforward to understand:
  • result stores our final merged list, and will be length n (the number of elements in the starting array)
  • leftList/rightList - these are the two inputs to the merge, and are simply the two halves of the list that need to be merged. This assumes that both lists are already sorted prior to merging
  • i/j - these are pointers to what we consider the "head" of each list. Rather than removing items from the lists, we simply keep a pointer for each list to track how many of its items have been merged
  • We then iterate through all "n" items, comparing the "heads" of our two lists, with the smaller value being added to our result list
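The merge step above is easy to turn into runnable code. Here is a minimal sketch in Python (chosen just for brevity - the pseudocode itself is language-neutral); note that, unlike the course pseudocode, it also handles the case where one list runs out before the other:

```python
def merge(left_list, right_list):
    """Merge two already-sorted lists into one sorted list."""
    result = []
    i = j = 0  # "head" pointers into each list (0-based in Python)
    # Compare the heads of the two lists, taking the smaller value each time
    while i < len(left_list) and j < len(right_list):
        if left_list[i] < right_list[j]:
            result.append(left_list[i])
            i += 1
        else:
            result.append(right_list[j])
            j += 1
    # One list may be exhausted first - append whatever remains of the other
    result.extend(left_list[i:])
    result.extend(right_list[j:])
    return result

print(merge([1, 3, 5], [2, 4, 6]))  # [1, 2, 3, 4, 5, 6]
```

The i/j pointers play exactly the role described above: nothing is removed from the input lists, we just track how far through each one we are.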


Complexity analysis of merge step:

In the course, the lecturer states that the worst case running time = 4n + 2, which he then further simplifies to 6n (this simplification holds because n must always be at least 1, so 4n + 2 will never be worse than 4n + 2n = 6n).

Let's look at how we get to that figure (later we will look at what the asymptotic run time is). This figure is simply the number of operations that need to be executed when this algorithm is run, so let's count those:

result = output [length = n]
leftList = 1st sorted array [n/2]
rightList = 2nd sorted array [n/2]
i = 1
j = 1
In the above, the only operations actually executed are the assignments of i & j (the other lines are just describing the function input/output). So that costs us 2 operations (constant, regardless of how big "n" is).

for k = 1 to n
  if leftList(i) < rightList(j)
    result(k) = leftList(i)
    i++
  else [rightList(j) < leftList(i)]
    result(k) = rightList(j)
    j++
end



In the above, we have one operation checking/incrementing "k" in the for statement (executed every iteration, so this is done "n" times); we then have an IF conditional check, which is also done in every iteration (another "n" operations); then, whichever branch of that condition is executed, there are always a further two operations (the assignment of the head to result and the increment of the head pointer) - another two operations executed "n" times.

So the above block executes 4 operations for each of the "n" iterations. The first block executes 2 operations, giving:

4n + 2

Seems straightforward enough.  Really though, this is just a fairly simplified piece of pseudocode, and there is a bunch of additional code needed to handle other stuff, edge cases etc. (e.g. what happens if you have added all of the left list to the result and only have the right list left? Obviously that can just be appended to the result, but the above code doesn't handle this). Then there are further questions about different language implementations that might need more operations to do this (not to mention differences when it gets compiled down to machine code) - we will get more into this later when we talk about asymptotic performance. For the time being, we will just say the run time (worst case) for merging two lists is 4n + 2.


Now, let's look at the recursive sorting.

This part of the algorithm is also relatively simple: we just keep recursing through the list, splitting it into two halves, until each half of the list is just one element (one element is already sorted!), then we can merge those lists easily.
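Putting the recursion together with the merge step, the whole algorithm can be sketched end-to-end. Again this is a rough Python illustration, not the Groovy version promised for a later post:

```python
def merge_sort(lst):
    """Recursively split the list in half, sort each half, then merge."""
    if len(lst) <= 1:               # base case: one element is already sorted
        return lst
    mid = len(lst) // 2
    left = merge_sort(lst[:mid])    # recursively sort the first half
    right = merge_sort(lst[mid:])   # recursively sort the second half
    # merge the two sorted halves, exactly as in the merge step
    result, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] < right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    return result + left[i:] + right[j:]

print(merge_sort([8, 3, 5, 1, 7, 2, 6, 4]))  # [1, 2, 3, 4, 5, 6, 7, 8]
```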

So, if we are recursing through and splitting the list into two halves, how many times will we have to perform the merge?

Let's step through this, with an example input list of 8 elements (n=8)
  1. One list of 8 elements
  2. Two lists of 4 elements
  3. Four lists of 2 elements
  4. Eight lists of 1 element
This is just a basic binary tree - see the illustration below, taken from Wikipedia:


As you can see in the above tree, it splits down into individual elements, then it re-combines them using the merge technique. To work out the complexity of the algorithm we need to know two things:
  1. How deep is our tree going to be?
  2. What is the runtime at a given depth in the tree? (e.g. for a given depth "j" in the tree, what is the cost of re-merging all the lists at that level?)

The first question is relatively easy: as it is a binary tree, we know the depth is simply going to be log2(n)+1. How do we know that? The log of a number is simply how many times it needs to be divided by 2 (assuming 2 is the logarithm base) before the number is <= 1 - which is exactly what we are doing when we recursively divide our lists in half until we reach the single-element lists.

E.g. the log2(8) = 3 (8/2=4; 4/2=2; 2/2=1)
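That relationship is easy to sanity-check in code - a quick sketch (assuming, as in the example, that n is a power of two):

```python
import math

def tree_depth(n):
    """Depth of the mergesort recursion tree: log2(n) + 1 levels."""
    return int(math.log2(n)) + 1

# n = 8 halves as 8 -> 4 -> 2 -> 1: log2(8) = 3 halvings, so 4 levels
print(tree_depth(8))  # prints 4
```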



So now let's calculate the running time for any given level of the tree.  Let's consider a list of length "n" - we want to know a few things:

How many merge operations do we need to perform at depth "j" for a list of size "n"?
In a similar way to how we calculated the depth, we can work out the number of merges (the number of leaves at that level of the tree).  If we have halved the lists at each step 0..j, then the number of leaves will be 2 to the power j.

E.g. at the first level down, after just halving the original list, we have j=1, so we expect 2 power 1 = 2 leaves.
At the second level down, we again halve each of those two lists; we have j=2, so we expect 2 power 2 = 4 leaves.

Note that j cannot exceed the depth of the tree, which we already know.


What is the complexity of each of these merge operations?
We know the complexity of a merge from our earlier analysis of the merge step: 6m (where m is the size of the halved lists - not the original value "n" of the starting list size), and we know we have 2 to the power j of these merges, which means that at any level of the tree, the run time performance is:

(2 power j) * (6m)

Now the value of "m" will also depend on the value of "j" - as we get further down the tree we have a greater number of merges to perform, but the lists are also getting smaller (so the actual value of "m" is decreasing). So what is the value of m when we are at level "j" of the tree?  We can work that out in a similar fashion - we know the original "n" (size of the starting list) has been halved "j" times, so it is simply:

(n)/(2 power j)

So now we can combine those and resolve the algebra. We can see that the 2 power j terms cancel each other out (intuitively this makes sense, as we are increasing the number of merges but at the same time, and at the same rate, reducing the size of the lists to be merged):

(2 power j) * 6 * (n/(2 power j)) = 6n
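We can sanity-check this cancellation numerically - a small sketch (again assuming n is a power of two):

```python
def level_cost(n, j):
    """Total merge cost at depth j: (2^j) merges, each costing 6 * (n / 2^j)."""
    merges = 2 ** j        # number of merges (leaves) at depth j
    m = n // (2 ** j)      # size of each merged list at depth j
    return merges * 6 * m  # (2 power j) * 6m

# For n = 8, every level of the tree costs the same: 6n = 48
print([level_cost(8, j) for j in range(4)])  # prints [48, 48, 48, 48]
```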




What is the overall complexity of the tree?
So we now already know the complexity of any given level of the tree (independent of the depth) and we know the depth of the tree (number of levels):

6n * (log2(n) + 1)  =  6n log2(n) + 6n

So there we have it - we have calculated the worst case running time for the merge sort.


There are a few further things that we will need to consider in later posts: In one post I will have a quick look at asymptotic analysis (e.g. "Big-Oh" notation) and what this means; in another post I will look at a Groovy implementation of it and the analysis of that.