Spring-Security: Different AuthenticationEntryPoint for API vs webpage

This is just a real quick post on a little bit of Spring that I came across today. It's a very simple thing but, in my opinion, beautiful in its simplicity.

I found myself working on some Spring-Security stuff, in an app where I needed to define my AuthenticationEntryPoint (I am in the process of adding the security stuff, so this is not done yet).  Simple enough - normally in config you can just add it to the exception handling setup. However, this time I wanted to define two different entry points: one for when a user attempts to access an API (JSON) and one for normal site pages.

It's not unusual to have an API baked into an app (maybe under /api/** etc), and the ideal behaviour would be to return an appropriate HTTP status code for the API (401) plus a JSON payload, while for a normal web page the user would be bounced to the login page before continuing.


Having dealt with this split for error handling, controller routing and security elsewhere, I assumed I would have to implement a custom AuthenticationEntryPoint, chuck in a few IF statements checking the logged-in user and requested URL, and either redirect or respond with the appropriate status. However, Spring has us covered with its DelegatingAuthenticationEntryPoint - which is exactly what it sounds like, and super simple to use.  Probably best demonstrated with the code (because it's just that simple!)
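Here's a rough sketch of the idea in Java (the /api/** path and class names here are illustrative, and the real config was written in Groovy, but the shape is the same):

import java.util.LinkedHashMap;

import org.springframework.context.annotation.Bean;
import org.springframework.http.HttpStatus;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.web.AuthenticationEntryPoint;
import org.springframework.security.web.authentication.DelegatingAuthenticationEntryPoint;
import org.springframework.security.web.authentication.HttpStatusEntryPoint;
import org.springframework.security.web.authentication.LoginUrlAuthenticationEntryPoint;
import org.springframework.security.web.util.matcher.AntPathRequestMatcher;
import org.springframework.security.web.util.matcher.RequestMatcher;

@EnableWebSecurity
public class SecurityConfig extends WebSecurityConfigurerAdapter {

    @Override
    protected void configure(HttpSecurity http) throws Exception {
        // Wire the delegating entry point into the normal exception handling setup
        http.authorizeRequests().anyRequest().authenticated()
                .and()
                .exceptionHandling().authenticationEntryPoint(delegatingEntryPoint());
    }

    @Bean
    public AuthenticationEntryPoint delegatingEntryPoint() {
        // API requests get a bare 401 response...
        LinkedHashMap<RequestMatcher, AuthenticationEntryPoint> entryPoints = new LinkedHashMap<>();
        entryPoints.put(new AntPathRequestMatcher("/api/**"),
                new HttpStatusEntryPoint(HttpStatus.UNAUTHORIZED));

        // ...everything else gets bounced to the login page
        DelegatingAuthenticationEntryPoint entryPoint = new DelegatingAuthenticationEntryPoint(entryPoints);
        entryPoint.setDefaultEntryPoint(new LoginUrlAuthenticationEntryPoint("/login"));
        return entryPoint;
    }
}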

In our normal configure method we just set the entry point as usual, but the DelegatingAuthenticationEntryPoint is simply initialised with a map of RequestMatcher to AuthenticationEntryPoint (the original project was in Groovy, so the Map definition is nice and terse - it's slightly more verbose in Java, as in the sketch above).  The RequestMatcher can be any implementation you like, but of course simple path matchers will probably work fine. For the AuthenticationEntryPoint there are also lots of really nice Spring implementations - including the two used above, which provide exactly what I need.


This genuinely elicited an "awww yeah" from me.

Sentiment analysis of stock tweets

Having previously wired up a simple Spring app with Twitter to consume their tweet stream relating to last year's Rugby World Cup - mostly just to experiment with the event-driven programming model in Spring and Reactor - I thought, on a whim, why not see if I can find some nice sentiment analysis tools to analyse the tweets? Then, rather than just counting the number of tweets about a given topic, I could also analyse whether they were positive or not.


Now, that probably sounded like a fairly glib comment. And to be honest, it was: sentiment analysis is very hard, and the last time I looked most efforts were not up to much. Added to that, to make it actually effective, you need some pretty specific training data - for example, if you had a model trained on this blog and then tried to apply it to another sort of text - say tweets - then it's most likely not going to perform well.  Tweets are particularly different, as people use different language, grammar and colloquialisms on Twitter (in part due to the 140-character limit) compared to normal writing.

But still, I had my laptop on my commute home on the train, so I figured why not see if there are any simple sentiment analysis libraries that I could just drop in and run the tweets through.  Sure, the resulting scores would likely be way off, but it would be an interesting experiment to see how easy it was (and, if it worked, whether we could then find a decent training set to re-train the model so it was more accurate at analysing tweets).


A quick google later and I came across Stanford's Core NLP (Natural Language Processing) library, via the snappily titled "Twitter Sentiment Analysis in less than 100 lines of code!" (which seemed just as flippant as my original suggestion, so seemed like a good fit!).  Surprisingly, it was actually just as easy as I had hoped it might have been! The libraries are readily available in the Maven repo, come with a pre-trained model (albeit trained on film reviews) and are written in Java.  A lot of the code is taken from the approach outlined in the above article and the Stanford Core NLP sample class, but it's pretty simple, and I managed to process a few thousand tweets last night having set it all up on my commute, and analyse their sentiment (producing wildly inaccurate sentiment scores - but who's to know, right?!)

(I switched to streaming stock related tweets - mostly just so I could include references to Eddie Murphy in Trading Places)

Updating our dependencies

I will skip the normal app setup and Twitter connection stuff, as I was just building this on top of the app I had previously built for the RWC (which already connected to the Twitter streaming API and persisted info to Redis).

All we need to do here is add the two Stanford dependencies - you can see I also added a dependency for Twitter's open-source library. This provides tweet cleanup/processing stuff, and is really just used here to extract "cashtags" (like a hashtag, but starting with a $, used on Twitter to indicate stock symbols, e.g. $GOOGL etc).
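For reference, the extra dependencies in build.gradle look something like this (the versions here are indicative - just use whatever is current):

compile 'edu.stanford.nlp:stanford-corenlp:3.6.0'
compile 'edu.stanford.nlp:stanford-corenlp:3.6.0:models'
compile 'com.twitter:twitter-text:1.13.0'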


Spring configuration

Next up, as we are using Spring it's super easy to add the configuration so we can let Spring manage our Stanford NLP objects and inject them into the service class that will have the code to analyse the sentiment:
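Something along these lines (the StanfordCoreNLP and Extractor classes come from the two libraries; the config class itself is just a sketch):

import java.util.Properties;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.twitter.Extractor;

import edu.stanford.nlp.pipeline.StanfordCoreNLP;

@Configuration
public class NlpConfig {

    @Bean
    public StanfordCoreNLP stanfordCoreNLP() {
        // The sentiment annotator needs the tokenize/ssplit/parse annotators to have run first
        Properties props = new Properties();
        props.setProperty("annotators", "tokenize, ssplit, parse, sentiment");
        return new StanfordCoreNLP(props);
    }

    @Bean
    public Extractor twitterExtractor() {
        // twitter-text's Extractor - used to pull cashtags (e.g. $GOOGL) out of tweet text
        return new Extractor();
    }
}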

Now we have told Spring to manage the main Stanford class we need and the simple Twitter Extractor class. For the StanfordCoreNLP class we are passing in some properties for what text analysis we want to use (this can usually be done with a properties file, but I was feeling lazy so did it programmatically - you can see details of which annotators are available here: http://stanfordnlp.github.io/CoreNLP/annotators.html )


Next up, based on the code examples we have seen, we need a little bit of code to analyse a piece of text and return a score, so I created a simple Spring service called SentimentService that I later wire into my event listener.
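Here's a rough sketch of that service, following the same pattern as the article linked above - take the sentiment of the longest sentence as the score for the whole tweet (note the Stanford annotation class names can vary slightly between CoreNLP versions):

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;

import edu.stanford.nlp.ling.CoreAnnotations;
import edu.stanford.nlp.neural.rnn.RNNCoreAnnotations;
import edu.stanford.nlp.pipeline.Annotation;
import edu.stanford.nlp.pipeline.StanfordCoreNLP;
import edu.stanford.nlp.sentiment.SentimentCoreAnnotations;
import edu.stanford.nlp.trees.Tree;
import edu.stanford.nlp.util.CoreMap;

@Service
public class SentimentService {

    @Autowired
    private StanfordCoreNLP pipeline;

    // Returns a score from 0 (very negative) to 4 (very positive) for the given text,
    // based on the sentiment of its longest sentence
    public int analyse(String text) {
        int mainSentiment = 0;
        int longest = 0;
        Annotation annotation = pipeline.process(text);
        for (CoreMap sentence : annotation.get(CoreAnnotations.SentencesAnnotation.class)) {
            Tree tree = sentence.get(SentimentCoreAnnotations.SentimentAnnotatedTree.class);
            int sentiment = RNNCoreAnnotations.getPredictedClass(tree);
            String sentenceText = sentence.toString();
            if (sentenceText.length() > longest) {
                mainSentiment = sentiment;
                longest = sentenceText.length();
            }
        }
        return mainSentiment;
    }
}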

That's mostly it really: in my event listener, instead of just persisting the tweet along with its labels, I also run the analysis and save the score.


 (Analysis of a couple thousand tweets - an average score plus number of tweets for each symbol)

As always, all the code is available on GitHub, so feel free to fork it and play yourself (and if you manage to find a training set to accurately analyse tweets then let me know!)

Mobile: Web vs Native (again)

It seems like by now pretty much everyone has weighed in on the native vs web conversation for mobile development, and seeing as I haven't yet, I thought why not? The age-old question being: should you make a native mobile app or just go with a standard responsive website? (Or just use some common technology to wrap up your mobile website in an app.)

The reason I started thinking about this again was because an article was circulated at work which seemed to be saying that you should do both native and web, as they have different strategic values. Whilst I largely disagree with that, I think the underlying point being made was that there are different reasons for going down either path.

A quick caveat:

I will get this out of the way up front - I'm not talking about any scenario where you are in a very mobile-centric business. If you want to make use of phone hardware like the camera etc, or you are specifically a mobile-first type app, or one that is intrinsically linked with the mobile as an identity, then yes: native is the only option.  This discussion is more for normal existing businesses that might (or might not) already have a website and are at the point of inflection of whether to build a native app or not.

--- 

So, native or responsive web?

My general rule of thumb is as follows: Don't go native.

You would be right in thinking that's a fairly sweeping rule. But I think it's probably fair - and why's that? I think there are two primary concerns that make the cost and effort of building a native app not worthwhile:

1. Discoverability

The web is a great place for discovering things - just the word "web" portrays it quite nicely: from any given point there are undoubtedly loads of little threads (I'm talking links) that could be followed at no real cost. Assuming it's not a dodgy looking link, the barrier to prevent someone following a link is practically non-existent. So you read a nice featured article about a new company/product on a blog, at the end of the article they link to their site, you click it. I mean, why not, right? If it ends up looking crappy the back button saves you, and what have you lost? A few seconds? Easy.

This is something that is clearly missing in the mobile app ecosystem - assuming you aren't some behemoth of a company that millions of people want (or need) to interact with, a mobile app isn't going to help increase your customer base. Sure, it might give you a richer experience for the handful (I'm talking in web scale here) of customers - but it's not going to help grow your customer base, it will cost you time and money to produce/maintain, and that's before you start having to work on directing existing customers to your mobile app.

So again, if you are a bank, who has millions of customers who have specific, regular needs to transact with you, or if you are a hot new social-local-sharing company then sure, go for that enhanced UX.  But an app is only going to be any good if the users already have the intent to transact with you. If a user is browsing the web on a mobile, your conversion rate is going to be a lot higher with a link through to a responsive website than to an app download.


2. Scalability

For me, this is the deal breaker. I'm not talking technical scalability - whether your servers will be able to withstand the undoubted, soon-to-be-approaching mass of people who will rush out and download your app as soon as it's released (see my previous point) - I'm talking customer-to-app scalability.

Imagine you're a regular business - you have thousands of happy customers, maybe your website even gets tens or hundreds of thousands of uniques a day - so let's build an app, right?

The problem is, there is a limit to how many apps a user will have on their phone - on a standard Android phone you will likely have two screens' worth of app icons when you unbox it, so there is a limit to how many more apps they will install. Again, this is where it is not like the web: visiting a website is free, but there is a much greater barrier to installing and keeping an app (let alone using it) - and when the phone is getting full, or sluggish, or the user wants a bit more space to download a show from BBC iPlayer, then the apps that aren't regularly used are going to get the chop.

When you are competing with limited resources and the big players - Facebook, messaging, banks, geo-based stuff (maps, Uber etc) - then it's hard to make a compelling case for the phone user to keep or even install the app.  That competition makes it even harder to convert your loyal web customers to mobile - and weighed up against the fact that you could make an awesome (and consistent) responsive web experience, the choice looks clearer to me.


Benedict Evans says you should build an app if people are going to put your app on their homescreen, which seems like pretty sound advice - and given the size of a phone homescreen, that makes for fairly few companies.


Final caveat:

If you are building the app because it came out of a side-project organically, or a hackathon or something similar, then by all means - there's lots of fun to be had and lessons to be learnt in building, testing and launching a mobile app - so if you don't mind the potential cost then definitely go for it!



Spring, Reactor & Event driven programming on the JVM

Following my previous outing playing with the Spring-Boot microservices stuff, I once again found myself looking through some of the Spring libraries and came across the Reactor integration stuff.  It looked interesting, and I thought I would have a quick look at the asynchronous event-driven model.

As always with Spring Boot, it was super simple to get an app up and running - it's worth noting that the app doesn't really get that much into the async processing side (or really expose scenarios with potential benefits/pitfalls of the approach) - but it gets an app up and running pretty easily.


Event streaming

Obviously, to get started I needed some kind of event source (I could have just stubbed out some code to randomly create events in the system, but I wanted something more real). With the modern web and big data, we have a tonne of events that we could use. I went for the obvious choice of the Twitter streaming API - as it exposes a streaming API I could just connect to it and, on receipt of any tweet, push an event onto my EventBus for any interested parties to process.

The basics of connecting to the Twitter streaming API are pretty simple using Spring-Boot and Spring-Social - I just created a Spring-Boot webapp that exposed a simple page to connect to Twitter (OAuth via Spring-Social) and then, on connection, connected to the streaming API and started listening.
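For reference, opening the stream with Spring-Social looks roughly like this - filter limits the stream to tweets matching a search term, while sample gives you the 1% stream (the wrapper class here is illustrative; the listener it takes is the interesting bit and is covered further down):

import java.util.Collections;
import java.util.List;

import org.springframework.social.twitter.api.StreamListener;
import org.springframework.social.twitter.api.Twitter;

public class TweetStreamer {

    private final Twitter twitter;
    private final StreamListener listener;

    public TweetStreamer(Twitter twitter, StreamListener listener) {
        this.twitter = twitter;
        this.listener = listener;
    }

    public void stream(String searchTerm) {
        List<StreamListener> listeners = Collections.singletonList(listener);
        // filter(...) opens the streaming connection, limited to tweets matching the term;
        // sample(listeners) would give the 1% "firehose" instead
        twitter.streamingOperations().filter(searchTerm, listeners);
    }
}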




First I just connected to the sample "firehose" stream, which is supposed to be 1% of the total Twitter stream (I've seen it reported that there are 500 million tweets a day, so you'd be looking at about 5 million random tweets a day). But I decided to consume the tweet events to pull out data about the ongoing Rugby World Cup (England 2015), so I switched to the filtered stream, limiting it to references to the World Cup.

The filtered stream provided a reasonable number of tweets, and I think whilst I was running it during the England vs Australia game I processed ~500,000 rugby-related tweets.


Configuring reactor

Getting reactor up and running is really easy with Spring:
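This is more or less the standard Spring Boot + Reactor (2.x) setup - an Environment and an EventBus bean:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import reactor.Environment;
import reactor.bus.EventBus;

@Configuration
public class ReactorConfig {

    @Bean
    public Environment env() {
        return Environment.initializeIfEmpty().assignErrorJournal();
    }

    @Bean
    public EventBus createEventBus(Environment env) {
        // THREAD_POOL dispatcher here - other options include the RingBuffer (LMAX disruptor) pattern
        return EventBus.create(env, Environment.THREAD_POOL);
    }
}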

The EventBus configuration is interesting, as it has options to use different patterns including the LMAX pattern - in this case I just went with a standard thread pool approach.

The tweet-eater

To be honest, the use of Reactor was overkill for this experiment, as the Spring-Social API allows you to define listeners for the streaming APIs which have a tweet-handling method, much like an event-consuming interface. But as it was just an experiment I continued, and simply used that listener to push the events onto the event bus.
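A sketch of that listener (the "tweets" topic name and the class name are illustrative):

import org.springframework.social.twitter.api.StreamDeleteEvent;
import org.springframework.social.twitter.api.StreamListener;
import org.springframework.social.twitter.api.StreamWarningEvent;
import org.springframework.social.twitter.api.Tweet;

import reactor.bus.Event;
import reactor.bus.EventBus;

public class TweetEventListener implements StreamListener {

    private final EventBus eventBus;

    public TweetEventListener(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    @Override
    public void onTweet(Tweet tweet) {
        // Wrap the tweet in a Reactor Event and push it onto the bus for any interested consumers
        eventBus.notify("tweets", Event.wrap(tweet));
    }

    @Override
    public void onDelete(StreamDeleteEvent deleteEvent) { }

    @Override
    public void onLimit(int numberOfLimitedTweets) { }

    @Override
    public void onWarning(StreamWarningEvent warningEvent) { }
}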

As you can see above, it's pretty simple - the listener just creates the event object and pushes it onto the EventBus.  So, with that done, and the basic Spring-Social connection set up to listen to the stream, we have a nice flow of events being pushed (at quite a high rate!) onto the EventBus. Now we just need something to consume those events.


Event consumers

The first consumer I created was just a very basic logging consumer - all it did was count events and then log the numbers:
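Roughly like this, registering against the same "tweets" topic used above (the logging interval is arbitrary):

import static reactor.bus.selector.Selectors.$;

import java.util.concurrent.atomic.AtomicLong;

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.social.twitter.api.Tweet;

import reactor.bus.Event;
import reactor.bus.EventBus;
import reactor.fn.Consumer;

public class LoggingTweetConsumer implements Consumer<Event<Tweet>> {

    private static final Logger LOG = LoggerFactory.getLogger(LoggingTweetConsumer.class);
    private final AtomicLong count = new AtomicLong();

    public LoggingTweetConsumer(EventBus eventBus) {
        // Subscribe this consumer to all events published on the "tweets" topic
        eventBus.on($("tweets"), this);
    }

    @Override
    public void accept(Event<Tweet> event) {
        long total = count.incrementAndGet();
        if (total % 1000 == 0) {
            LOG.info("Processed {} tweets", total);
        }
    }
}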

Pretty simple


Next up, I created a basic consumer to inspect the tweets, identify the rugby teams mentioned and persist the data to Redis - this was also pretty easy, as the Redis integration works pretty simply and Twitter created a set of standard team hashtags for the competition. So I just mapped those to countries, and set up my consumer to check for them:
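A sketch of the idea - the hashtag-to-country map is trimmed to a few entries here, and the Redis key naming is illustrative:

import static reactor.bus.selector.Selectors.$;

import java.util.HashMap;
import java.util.Map;

import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.social.twitter.api.Tweet;

import reactor.bus.Event;
import reactor.bus.EventBus;
import reactor.fn.Consumer;

public class TeamCountingConsumer implements Consumer<Event<Tweet>> {

    private static final Map<String, String> TEAM_TAGS = new HashMap<>();
    static {
        // Standard team hashtags for the tournament, mapped to countries (abbreviated here)
        TEAM_TAGS.put("#eng", "England");
        TEAM_TAGS.put("#aus", "Australia");
        TEAM_TAGS.put("#wal", "Wales");
    }

    private final StringRedisTemplate redisTemplate;

    public TeamCountingConsumer(EventBus eventBus, StringRedisTemplate redisTemplate) {
        this.redisTemplate = redisTemplate;
        eventBus.on($("tweets"), this);
    }

    @Override
    public void accept(Event<Tweet> event) {
        String text = event.getData().getText().toLowerCase();
        // Increment a simple per-country counter for every team hashtag mentioned in the tweet
        TEAM_TAGS.forEach((tag, country) -> {
            if (text.contains(tag)) {
                redisTemplate.opsForValue().increment("tweets:" + country, 1);
            }
        });
    }
}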

I quickly stuck some lipstick on the interface to display number of tweets per country and we were off!



Like I said, it's only barely scratching the surface of async event based programming, but it shows how easy it is to get the EventBus up and running with Spring.

As always, the code for the full app is over on GitHub - if you just clone the project and add in your own Twitter API keys in the config then you should just be able to build the JAR and run it directly.

Spring-Boot & Netflix OSS - An adventure into microservices

Honestly, I still need convincing on microservices.

I can see that they make a compelling argument compared to a monolithic application, but I think I need to get my head around some of the challenges they face - the first one that comes to mind being how to effectively define the microservice boundaries, as it seems to me a lot of the applications I have worked with are monolithic precisely because those boundaries are so blurred.


Anyway, I wanted to do some tech stuff, so decided to start building out an application using the microservice architectural pattern and Spring Boot seemed like a good place to get started.

This is very much a work in progress, and I am continuing to progress through different aspects of the application and at the moment there is very little actual code written (in part that is due to the simplicity that Spring-Boot provides).  All code is being kept up to date in GitHub so feel free to have a look at that.


There are lots of great blogs covering this stuff already, so I won't re-cover their work; the following article gives a great write-up of the Netflix OSS and the Spring integration which is worth reading:

http://callistaenterprise.se/blogg/teknik/2015/04/10/building-microservices-with-spring-cloud-and-netflix-oss-part-1/

(Image from: Building Microservices with Spring-Cloud and Netflix OSS)


Getting started: A service registry - Eureka

One of the first things that is needed is a central service registry to allow service discovery - this is not a new concept to microservices and is an approach used by SOA.  Straight out of the box, Spring-Boot provides integration with Netflix's OSS application Eureka, which provides this.  I opted to have a dedicated application for my registry (code can be seen here) and it really is as simple as adding the relevant dependencies to the build.gradle file, adding an @EnableEurekaServer annotation to our application config, then a simple config file defining the server port/name etc, and it's done!  You can just run gradle assemble in that project to build the JAR file, then run java -jar [the new JAR file] and the application will spin up - you should then be able to go to http://localhost:1111 and you will see the Eureka dashboard (with no microservices registered of course).
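The whole registry application really is just this (class name illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.eureka.server.EnableEurekaServer;

@SpringBootApplication
@EnableEurekaServer
public class ServiceRegistryApplication {

    public static void main(String[] args) {
        SpringApplication.run(ServiceRegistryApplication.class, args);
    }
}

Plus a few properties along the lines of server.port=1111, eureka.client.register-with-eureka=false and eureka.client.fetch-registry=false, so the registry runs on the expected port and doesn't try to register with itself.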



My first microservice

So, I had Eureka up and running, but it was looking pretty lonely with no services registered.

A microservice in Spring is also very simple - as really, all it is is a simple web application that runs in its own process with a limited domain - so spinning up a Spring Boot MVC RESTful webapp with a single controller/endpoint is enough to get me a microservice (even just a tweet would do it..)
So we can create our new microservice to do anything we like; in my case I created a QuoteService (the application is slowly evolving into an insurance engine).  Just having the standalone app isn't helping much, though, so we need to add some configuration to tell the service to register with our Eureka server - this will make our new microservice discoverable by other services wanting to use it.

Again this is quite simple: we need to tell our application it should try and register with Eureka, and we should add the config to do so:
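On the service side that's just an annotation on the application class, along these lines (class name illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

@SpringBootApplication
@EnableDiscoveryClient
public class QuoteServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(QuoteServiceApplication.class, args);
    }
}

Together with properties along the lines of spring.application.name=quote-service and eureka.client.serviceUrl.defaultZone=http://localhost:1111/eureka/, so the service knows what to call itself and where the registry lives.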

You can see that we simply annotate our application config in Java, and then add some properties that define where the Eureka server is hosted - and that's basically it.

Now if we build the project JAR and start it up again (and we still have our Eureka service registry running) then after 30 seconds or so you should see the Quote-Service registered and ready to use.


On to the next one.. 

Now, we have a microservice, and we have a registry that makes it discoverable, but still - just one microservice is pretty lonely. So next I created another dummy RESTful microservice, this time called ProductService which just followed the same pattern as the first.

Once that was started up, the Eureka dashboard started looking a bit happier with the two services registered - the obvious next challenge is seamless interaction between the two: splitting the services into their own processes is all well and good, but meaningless if you can't easily integrate them.  The way I look at it is that when reading the application code of a service (or an application using microservices) it should just look like a normal application with service classes - there shouldn't be any fanfare around the fact that my service class actually gets the data from a dedicated microservice over HTTP/AMQP rather than getting it directly from the DB in the traditional way.


So, still just stubbing out the endpoints, I updated my QuoteService endpoint to make a call to my ProductService, and then just jammed that response into the JSON response I was returning anyway:
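A sketch of that controller (the endpoint shape and stubbed values are illustrative - the point is just that it reads like any other controller):

import java.util.HashMap;
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class QuoteController {

    @Autowired
    private ProductService productService;

    @RequestMapping("/quote/{productId}")
    public Map<String, Object> getQuote(@PathVariable String productId) {
        Map<String, Object> quote = new HashMap<>();
        quote.put("productId", productId);
        quote.put("price", 100);  // still just stubbed data at this point
        // Pull in the product details from the Product microservice and jam them into the response
        quote.put("product", productService.getProduct(productId));
        return quote;
    }
}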

As you can see, from this point it could be a standard controller in a normal monolithic application: we are just calling a method on our autowired ProductService class and returning that.


So the really interesting part is in the ProductService class - at the moment this isn't a really elegant, abstracted class yet, so there is still some boilerplate, but that has the advantage of making it clear what is going on:
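Something like this (it assumes a @LoadBalanced RestTemplate bean is defined elsewhere in the config, and the property name used to inject the service name is illustrative):

import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class ProductService {

    @Value("${product.service.name:PRODUCT-SERVICE}")
    private String productServiceName;

    @Autowired
    private RestTemplate restTemplate;  // a @LoadBalanced RestTemplate, so service names resolve via Eureka

    @SuppressWarnings("unchecked")
    public Map<String, Object> getProduct(String productId) {
        // The "host" here is just the registered service name - Ribbon/Eureka resolve it to a real instance
        return restTemplate.getForObject(
                "http://" + productServiceName + "/product/" + productId, Map.class);
    }
}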

As you can see, it's just making a REST call to the Product microservice and returning the response cast as a Map - but the really nice part is that the service URL is just the service name (in this case "PRODUCT-SERVICE", which is injected into the class), and with the RestTemplate annotated with Spring-Cloud's @LoadBalanced, that microservice will be looked up in Eureka (and load balanced if there is more than one PRODUCT-SERVICE running).

So our setup is starting to take shape now - we have two microservices, both registered with Eureka and able to interact with each other in a fairly clean, loosely coupled way.


Don't push me, 'cos I'm close to the edge..

As your microservices start to proliferate, you will get different levels of service granularity, and undoubtedly you won't just want to expose all your microservices as a public API.  One option would be to create a RESTful application that defines the nicely named endpoints you want to expose, and then use the standard integration described above to call the underlying services.

Fortunately, there is an easier way - Netflix provides a library called Zuul that can simply be configured to map URL patterns to defined service names (again looked up in the Eureka service registry).  Much like Eureka, this is super easy to set up and just needs an annotation and the config again:
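The edge server itself is just another tiny Spring Boot app (class name illustrative):

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.zuul.EnableZuulProxy;

@SpringBootApplication
@EnableZuulProxy
public class EdgeServerApplication {

    public static void main(String[] args) {
        SpringApplication.run(EdgeServerApplication.class, args);
    }
}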

And the config is pretty easy to understand:
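Something along these lines in the edge server's config (the paths are illustrative - each route just maps a URL pattern to a service name that Zuul resolves via Eureka):

zuul:
  routes:
    quotes:
      path: /quotes/**
      serviceId: quote-service
    products:
      path: /products/**
      serviceId: product-service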

As you can see, we just define service names against URL patterns.

Now once all our apps are up and running, and the microservices are registered on Eureka, you have a single API interface to start interacting with the services (rather than having to access each service on its designated port etc).


Conclusion

So that's as far as I have got - I wired up the QuoteService to MongoDB so the data all gets persisted there (and have added a get-quote endpoint which gets the same data back from Mongo), and I am starting to wire up the ProductService with JPA.  So far it's been enjoyable, and things are making more sense than when I started - but there are a few questions still:

  • It seems like there is still duplication of service names throughout the different projects - for example the ProductService name ("product-service" - case insensitive) is proliferated throughout: the service itself defines it, the QuoteService needs to know the name of the service, the Zuul edge server needs to know the name, etc.  I guess this is unavoidable as these are intrinsic dependencies, but it still seems a bit flaky.
  • It feels like the service classes could be factored out - our ProductService class that allows HTTP REST interactions with the Product microservice would likely need to be re-used across all applications/microservices that want to use the Product microservice.

NerdAbility - A presentation

I recently had to give a presentation about NerdAbility, so for fun, here are the slides I put together. It was just an intro to the product, where the idea came from and what it was trying to solve, followed by a brief jump into one or two interesting elements of the project from a tech point of view.