Cream money management - A Side project

Cash rules everything around me, it's all about the money, dollar dollar bill y'all



I have for some time been bemoaning the state of online banking in the UK offered by the major banks I have banked with. It feels like they just can't be bothered - they are confident that no-one is going to disrupt them so they just don't make any effort.

My current bank makes it almost impossible to manually pay off your own credit card online, and all it really provides is a list of transactions against an account - the only good thing it offers is the ability to download your transactions.

It's frustrating. They have so much information, but provide so little.  So I told my wife I would make us an app to make this better.

Features of the app are:

  • create multiple accounts to manage together
  • upload statement/transaction lists as exported from online bank providers
  • automatically categorise and tag as many imported transactions as possible based on a set of rules
  • using sensible full text search, attempt to categorise and tag any remaining transactions based on other similar transactions that have been tagged
  • allow manual tagging/categorisation of transactions (that will then feed back into later imports of transactions etc)
  • link together transaction groups - identify recurring transactions and group them so they can be automatically categorised and tagged, and also provide alerts/warnings on changes in payments (e.g. if a recurring transaction suddenly increases in cost, it likely suggests that a fixed-price deal has come to an end, so the system identifies this and alerts the user to the change)
  • easy filtering and cutting up of data based on category, tags, date, price, description etc. - basically anything
  • tonnes of charts to show the different cuts of data and interesting points


Best free stock images

As internet speeds continue to increase, the trend for full-screen background images continues. I like to have full-screen images on splash screens and landing pages, so here are some of the best (free) image resources I have found online:


  • unsplash.com - 10 new, completely free, high-quality photos every 10 days. Lots of great photography, with nice landscape/outdoors stuff (archive)
  • Death to the Stock Photo - an email-based subscription, where original, high-quality shots are sent to you
  • The Pattern Library - I love this, not stock photography, but an art project of background textures/patterns. Just scroll down to check them out. This is my favourite.
  • Subtle Patterns - Similar to above, not stock photos, but nice, free to use textures. Simpler than the pattern library, but lots of nice backgrounds to use.
  • Free Images (formerly stock.xchng) - If you can ignore the site design and adverts, and wade through the 90s-style stock graphics, there are still a lot of nice images to use.
  • Pic Jumbo - More high quality, large images





On to the next one: 2014's quick dry rub

Something a little different this time.

Last year, most of my bbq cooking involved my variation on a Kansas City dry rub (I will dig out the recipe and post it sometime). A few weeks ago, I decided to make a new dry rub for this summer - but was in the mood for something more herb-y. Initially I planned to experiment with jamming in some oregano, thyme etc. - but in the end, on discovering I didn't really have any of these things to hand, I found a jar of this:

It was due to expire later this year, so I decided to cheat and just stick that in, and see how it worked out (ingredients listed were: Sage, Marjoram, Thyme, Oregano, Parsley, Basil  - no mention of ratios though).

It was pretty good - I based the measurements purely on the amount of mixed herbs I had left, and it made about a jarful.


Ingredients


  • 3 tablespoons dark brown sugar
  • 1 tablespoon salt
  • 1 tablespoon smoked paprika
  • 4 tablespoons Sainsbury's mixed herbs
  • 1/2 tablespoon garlic powder
  • 1/2 tablespoon onion powder
  • 1/2 teaspoon allspice
  • 1 tablespoon light brown sugar



Basically, just measure the ingredients in a bowl, mix them up and stick them in a jar.

It tasted pretty good - a nice mix of sweet and herb-y. I have since used it on an ad hoc roast potato/tomato bake as well, which worked pretty well (generally, I have found most dry rubs work well as ad hoc seasoning for potato wedges/chips/etc.) and of course on the mandatory bbq'd chicken (left half the new rub, right half the old Kansas City variation):

Creating a Java URL shortener

A long time ago I posted a brief article about using Google's URL shortening API to easily create shortened URLs for your application (e.g. if you wanted to post stuff server side to twitter and cared about character usage, you could fire your URL over to Google's API and just use the result).

However, I recently had to think about how to shorten URLs myself and, really, it's pretty easy to implement.


Creating a unique, repeatable identifier for a URL
I think a lot of people's first instinct might be to go for hashing the URL string - this isn't a good idea for a few reasons though:
  • Length - most normal hashing algorithms (MD5/SHA-*) produce long strings, which rather goes against the point of a URL shortener (see the quick demo after this list)
  • Uniqueness - obviously, if this is going to be a URL identifier then it needs to be unique, and hashes by their very nature are not unique - which means you would need to handle the scenario where a URL produces an already-used hash and generate an alternative
  • Look-up - as hashes are not (easily) reversible, you would need to look up the URL using the hash as the DB key - which may not be ideal given a very large set of URLs (imagine the number of URLs bitly.com has)
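
To make the length point concrete, here is a small illustrative Java snippet (the URL is just a placeholder) showing that even MD5 - one of the shorter digests - already produces 32 hex characters, longer than many of the URLs you would want to shorten:

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

public class HashLengthDemo {
    public static void main(String[] args) throws Exception {
        String url = "https://example.com/some/page"; // placeholder URL
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        byte[] digest = md5.digest(url.getBytes(StandardCharsets.UTF_8));
        // Render the 16-byte digest as hex - 32 characters, hardly "short"
        String hex = String.format("%032x", new BigInteger(1, digest));
        System.out.println(hex + " (" + hex.length() + " chars)");
    }
}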


Thankfully there is a viable, easy solution available.


Let's first think about our database structure for persisting our URLs - in the simplest case we could probably get by with two columns:
  • id (DB generated sequence ID)
  • url - text field to capture the URL value

Generating the identifier from a DB
  1. If you provide a String URL value, your code just needs to insert it into the table; this creates the row and the unique ID.
  2. Next, fetch that unique numeric ID and convert it to base-62 (rather than normal base-10, this allows 0-9, a-z and A-Z as characters). This gives you an identifier of the form "1jLPSIv", and also provides a massive ID space in relatively few characters (which will form the shortened URL) - 6 characters of base-62 give 62^6 = 56,800,235,584 possible unique combinations.

You may not want to leak the DB IDs directly into the URL, in which case you can easily salt the IDs as you see fit.
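
To make that concrete, here is a minimal, hedged Java sketch of the base-62 conversion (the alphabet ordering here is my own choice - other services may well order theirs differently):

public class Base62 {
    private static final String ALPHABET =
            "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";
    private static final int BASE = ALPHABET.length(); // 62

    // Convert a positive DB sequence ID into its base-62 string form
    public static String encode(long id) {
        if (id == 0) return String.valueOf(ALPHABET.charAt(0));
        StringBuilder sb = new StringBuilder();
        while (id > 0) {
            sb.append(ALPHABET.charAt((int) (id % BASE)));
            id /= BASE;
        }
        return sb.reverse().toString();
    }

    // Convert a base-62 identifier back into the numeric DB ID
    public static long decode(String shortId) {
        long id = 0;
        for (char c : shortId.toCharArray()) {
            id = id * BASE + ALPHABET.indexOf(c);
        }
        return id;
    }

    public static void main(String[] args) {
        long dbId = 12345L;                   // e.g. a DB-generated sequence ID
        String shortId = encode(dbId);
        System.out.println(shortId);          // the short identifier for the URL
        System.out.println(decode(shortId));  // round-trips back to 12345
    }
}

Resolving a shortened URL is then just a primary-key look-up: convert the identifier back to the numeric ID and fetch the URL row by ID.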



Bit.ly example

Looking at bit.ly shortened URLs, it would appear that they follow the same pattern. I shortened two of my previous blog post URLs one after the other, and the generated URLs were as follows:

bit.ly/1oYgrsG

bit.ly/1tR63Zb

If we look at those two URL identifiers, they look a lot like base-62, so let's convert them to base-10 assuming base-62 (and let's look at base-64 as well, just for funsies, in case they are using that):

URL ID - Base-64 -> Base-10 - Base-62 -> Base-10
1oYgrsG - 3685493160710 - 103119489480
1tR63Zb - 3690751293019 - 107587946123

As you can see, they look reasonable - there's a chance that a few URLs were created between my two, but I suspect there is a reasonable amount of salting going on, so it is not just a one-by-one increment. (Although interestingly, if you take a bit.ly URL and just increase the last character by one - assuming base-62 - it will likely give you another valid URL, so you can have your own game of manual-webpage-roulette!)

Algorithm analysis & complexity: MergeSort - part 1

I have just signed up for the Coursera Algorithm Design & Analysis course from Stanford. I have been meaning to take it for a while but kept missing when it was running. As it happens, this time I have all but missed it too, with the final assessments already due, but I have signed up anyway and am going through the material and lectures at my own pace.

I am only on week one so far, but it is all good - the lecturer seems good and articulates the content well (so far the presentation and sophistication of the course are better than the Scala & FP course and the Logic courses I have previously done on Coursera).

After I completed the Functional Programming & Scala course, I wrote up some notes on some of the techniques and problems in Groovy, and thought it might be fun to write up some Groovy notes from the algos course too.

Week one covers an introduction to algorithm analysis, divide and conquer (with merge sort) and some other stuff.


Mergesort - Analysis

Mergesort has four key steps:
  1. Split the list into two halves
  2. Recursively sort the first half
  3. Recursively sort the second half
  4. Merge the two sorted halves
This is a well known example of the divide & conquer paradigm: the recursive sorting continues to divide the list into halves until sorting is trivial (the base case of one item per list).


Let's look at the merge step:

(pseudocode taken from the course)
result = output [length = n]
leftList = 1st sorted array [n/2]
rightList = 2nd sorted array [n/2]
i = 1
j = 1
for k = 1 to n
  if leftList(i) < rightList(j)
    result(k) = leftList(i)
    i++
  else [rightList(j) < leftList(i)]
    result(k) = rightList(j)
    j++
end



The above should be fairly straightforward to understand:
  • result stores our final merged list, and will be length n (number of elements in starting array)
  • leftList/rightList - these are the two inputs to the merge, and are simply the two halves of the list that need to be merged. This assumes that both halves are already sorted prior to merging
  • i/j - These are pointers to what we consider the "head" of each list. Rather than removing items from lists we will simply keep track of a pointer for each list to track how many items of each list have been merged
  • We then iterate through all "n" items, comparing the "head" of our two lists, the smaller value getting added to our result list


Complexity analysis of merge step:

In the course, the lecturer states that the worst case running time is 4n + 2, which he then simplifies to 6n (this simplification works because n must always be at least 1, so 4n + 2 will never be worse than 6n).

Let's look at how we get to that figure (we will later look at the asymptotic run time). This figure is simply the number of operations that need to be executed when the algorithm runs, so let's count them:

result = output [length = n]
leftList = 1st sorted array [n/2]
rightList = 2nd sorted array [n/2]
i = 1
j = 1
In the above, the only operations actually executed are the assignments of i and j (the other lines just describe the function's input/output), so that costs us 2 operations (constant, regardless of how big "n" is).

for k = 1 to n
  if leftList(i) < rightList(j)
    result(k) = leftList(i)
    i++
  else [rightList(j) < leftList(i)]
    result(k) = rightList(j)
    j++
end



In the above, we have one operation checking/incrementing "k" in the for statement (executed every iteration, so this is done "n" times); we then have the IF conditional check, also executed every iteration (another "n" operations); then, whichever branch of the condition is taken, a further two operations are always executed (the assignment of the head element to result, and the increment of the head pointer) - another two operations executed "n" times.

So the above block executes 4 operations for each of the "n" iterations, and the first block executes 2 operations, giving:

4n + 2

Seems straightforward enough. Really though, this is a fairly simplified piece of pseudocode, and a bunch of additional code is needed to handle edge cases (e.g. what happens once you have added all of the left list to the result and only the right list remains? Obviously it can just be appended to the result, but the above code doesn't handle that). There are also questions about different language implementations that might need more operations to do this (not to mention differences when it gets compiled down to machine code) - we will get more into this later when we talk about asymptotic performance. For the time being we will just say the worst case run time for merging two lists is 4n + 2.
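
For reference, here is a rough Java version of the merge step (my own sketch, not the course's code), including the copying of leftover elements that the pseudocode glosses over:

// Merge two already-sorted arrays into one sorted result (illustrative sketch)
static int[] merge(int[] left, int[] right) {
    int[] result = new int[left.length + right.length];
    int i = 0, j = 0;
    for (int k = 0; k < result.length; k++) {
        // Take from the left while it has elements and its head is <= the right's
        if (i < left.length && (j >= right.length || left[i] <= right[j])) {
            result[k] = left[i++];
        } else {
            result[k] = right[j++];
        }
    }
    return result;
}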


Now, let's look at recursively sorting..

This part of the algorithm is also relatively simple: we just keep recursing through the list, splitting it into two halves, until each half of the list is just one element (one element is already sorted!), and then we can merge those lists easily.
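
In code, the recursion is equally brief - again a hedged Java sketch, re-using the merge function above:

// Recursively sort an array with mergesort (sketch; assumes merge() from above)
static int[] mergeSort(int[] list) {
    if (list.length <= 1) return list;  // base case: a single element is sorted
    int mid = list.length / 2;
    int[] left = mergeSort(java.util.Arrays.copyOfRange(list, 0, mid));
    int[] right = mergeSort(java.util.Arrays.copyOfRange(list, mid, list.length));
    return merge(left, right);          // merge the two sorted halves
}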

So, if we are recursing through and splitting the list into two halves, how many times will we have to perform the merge?

Let's step through this with an example input list of 8 elements (n=8):
  1. One list of 8 elements
  2. Two lists of 4 elements
  3. Four lists of 2 elements
  4. Eight lists of 1 element
This is just a basic binary tree - see the illustration below, taken from Wikipedia:


As you can see in the above tree, it splits down into individual elements, then it re-combines them using the merge technique. To work out the complexity of the algorithm we need to know two things:
  1. How deep is our tree going to be?
  2. What is the runtime of a given depth in the tree (e.g. for a given depth "j" in the tree, what is the cost of re-merging all the lists at that level)?

The first question is relatively easy: as it is a binary tree, we know the depth is simply going to be log2(n)+1. How do we know that? The log of a number is simply how many times it needs to be divided by 2 (assuming 2 is the logarithm base) before the number is <= 1 - which is exactly what we are doing when we recursively divide our lists in half until we reach the single-element lists.

E.g. the log2(8) = 3 (8/2=4; 4/2=2; 2/2=1)



So now let's calculate the running time for any given level of the tree. Let's consider a list of length "n"; we want to know a few things:

How many merge operations do we need to perform at depth "j" of a list of length "n"?
In a similar way to how we calculated the depth, we can work out the number of merges (the number of leaves at that level of the tree). If we have halved the lists at each step 0..j, then the number of leaves at level j is 2^j.

E.g. at the first level down, after just halving the original list, we have j=1, so expect 2^1 = 2 leaves.
At the second level down, we again halve each of those two lists: j=2, so expect 2^2 = 4 leaves.

Note that j cannot exceed the depth of the tree, which we already know.


What is the complexity of each of these merge operations?
We know the complexity of a merge from our earlier analysis: 6m (where m is the size of the halved lists at that level - not the original size "n" of the starting list), and we know we have 2^j of these merges, which means that at any level of the tree the run time performance is:

2^j * 6m

Now the value of "m" also depends on "j" - as we get further down the tree, we have a greater number of merges to perform, but the lists are also getting smaller (so the actual value of "m" is decreasing). So what is the value of m when we are at level "j" of the tree? We can work that out in a similar fashion - we know the original "n" (the size of the starting list) has been halved "j" times, so it is simply:

m = n / 2^j

So now we can combine those and resolve the algebra. We can see that the 2^j terms cancel each other out (intuitively so, as we are increasing the number of merges at exactly the same rate as we are reducing the size of the lists to be merged):

2^j * 6 * (n / 2^j) = 6n




What is the overall complexity of the tree?
So we now know the complexity of any given level of the tree (independent of the depth), and we know the depth of the tree (the number of levels):

6n * (log2(n) + 1) = 6n log2(n) + 6n

So there we have it - we have calculated the worst case running time for merge sort. For a concrete example, with n = 8 that gives 6*8*log2(8) + 6*8 = 144 + 48 = 192 operations.


There are a few further things that we will need to consider in later posts: In one post I will have a quick look at asymptotic analysis (e.g. "Big-Oh" notation) and what this means; in another post I will look at a Groovy implementation of it and the analysis of that.








Groovy bug - Stackoverflow calling super methods

I recently stumbled upon a bug in the version of Groovy that I was working with (Groovy 2.1.5). It's a strange one that seems to have been solved in Groovy 2.2, but I thought I would post it here in case it is useful to anyone else.

The problem is demonstrated in the code below; the exact trigger seems to be an inheritance structure more than 2 classes deep, where the method return type changes.
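
A minimal sketch of the kind of hierarchy that triggers it (the class names, other than Child, and the method bodies are illustrative):

class Parent {
    Parent doStuff() {
        println "Parent.doStuff()"
        return this
    }
}

class Child extends Parent {
    @Override
    Child doStuff() {      // note the narrowed (covariant) return type
        super.doStuff()    // on Groovy 2.1.5 this re-dispatches to Child.doStuff()
        return this
    }
}

class GrandChild extends Child { }

new GrandChild().doStuff()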



The result of running the above code in Groovy 2.1.5 is a StackOverflowError - it seems as though Groovy just recursively calls the doStuff() method on the Child class until it errors, rather than calling the method on the parent class. Running on Groovy 2.2.0+ appears to work correctly.

I posted the question on stackoverflow.com, and it was suggested that it was maybe related to this bug, which has a very reassuring resolution comment:

looks like the issue was fixed somewhen between 2.2.2 and 2.3.0