Machine Learning with AWS & Scala

Recently, in an attempt to start learning React, I began building an akka-http backend API as a starting point. I quickly got distracted building the backend and ended up integrating with both the Twitter streaming API and AWS' Comprehend sentiment analysis API - which is what this post is about.

Similar to an old idea, where I built an app consuming tweets about the 2015 Rugby World Cup, this time my app consumed tweets about the FIFA World Cup in Russia - splitting tweets by country and recording sentiment for each one (and so a rolling average sentiment for each team).


Overview

The premise was simple:

  1. Connect to the Twitter streaming API (aka the firehose), filtering on world cup related keywords
  2. Pass the body of the tweet to AWS Comprehend to get the sentiment score
  3. Update the in-memory store of stats (count and average sentiment) for each country

In terms of technology used:
  1. Scala & Akka-Http
  2. Twitter4s Scala client
  3. AWS Java SDK

As always, all the code is on Github - to run it locally, you will need a Twitter dev API key (add an application.conf as per the readme on the Twitter4s github) and an AWS key/secret - the code will look for credentials stored locally, but you can also just set them in environment variables before starting. The free tier supports up to 50,000 Comprehend API requests in the first 12 months - and as you can imagine, plugging this directly into Twitter can result in lots of calls, so make sure you restrict it (or at least monitor it) before you leave it running!


Consuming Tweets

Consuming tweets is really simple with the Twitter4s client - we just define a partial function that will handle the incoming tweet. 
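The embedded example isn't shown here, but a minimal sketch of the shape of that code - assuming Twitter4s' TwitterStreamingClient, with hypothetical stand-ins for the country parsing, sentiment and stats pieces - looks something like this:

import com.danielasfregola.twitter4s.TwitterStreamingClient
import com.danielasfregola.twitter4s.entities.Tweet
import com.danielasfregola.twitter4s.entities.streaming.StreamingMessage

object TweetStream extends App {
  val streamingClient = TwitterStreamingClient()

  // Hypothetical stand-ins for the real country parsing, sentiment and stats code
  def parseCountries(text: String): Seq[String] =
    Seq("England", "Poland", "Senegal").filter(text.contains)
  def detectSentiment(text: String): Int = 0 // see the Comprehend sketch below
  def updateStats(country: String, sentiment: Int, text: String): Unit =
    println(s"$country -> $sentiment")

  // The partial function invoked for every message on the stream - we only
  // care about actual tweets and ignore the other streaming message types
  def processTweet: PartialFunction[StreamingMessage, Unit] = {
    case tweet: Tweet =>
      val sentiment = detectSentiment(tweet.text)
      parseCountries(tweet.text).foreach(updateStats(_, sentiment, tweet.text))
  }

  // Start the stream, filtering on world cup related keywords
  streamingClient.filterStatuses(tracks = Seq("worldcup"))(processTweet)
}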

The other functions for parsing countries/teams are excluded for brevity - and you can see it's quite simple: for each inbound tweet we make a call to the sentiment service (we will look at that later), then pass it with the additional data to our update service, which stores it in memory. You will also see it is ridiculously easy to start the Twitter streaming client filtering by keywords.


Detecting Sentiment

Because I wanted to be able to stub out the sentiment analysis without being tied to AWS, you will notice I am using a self-type annotation on my Twitter class above, which requires a SentimentModule to be passed in at construction - I am using a simple cake pattern to manage all my dependencies here. In the Github repo there is also a Dummy implementation, which just picks a random number for the score, so you can still see the rest of the API working - but the interesting part is the AWS integration:
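The actual integration code isn't reproduced here, but a sketch with the AWS Java SDK's Comprehend client gives the idea - the SentimentModule/SentimentService cake-pattern shape mirrors the description above, while the member names and the way the score is collapsed to -100..100 are assumptions:

import com.amazonaws.auth.DefaultAWSCredentialsProviderChain
import com.amazonaws.regions.Regions
import com.amazonaws.services.comprehend.AmazonComprehendClientBuilder
import com.amazonaws.services.comprehend.model.DetectSentimentRequest

case class SentimentResult(rating: String, score: Int)

trait SentimentModule {
  def sentimentService: SentimentService

  trait SentimentService {
    def detect(text: String): SentimentResult
  }
}

trait AWSSentimentModule extends SentimentModule {
  override lazy val sentimentService = new SentimentService {
    // Credentials are resolved from the usual places (env vars, ~/.aws/credentials etc.)
    private val client = AmazonComprehendClientBuilder.standard()
      .withCredentials(DefaultAWSCredentialsProviderChain.getInstance())
      .withRegion(Regions.EU_WEST_1)
      .build()

    override def detect(text: String): SentimentResult = {
      val result = client.detectSentiment(
        new DetectSentimentRequest().withText(text).withLanguageCode("en"))
      // Collapse Comprehend's POSITIVE/NEGATIVE/NEUTRAL/MIXED result and its
      // per-label confidence scores into a crude -100..100 number
      val score = ((result.getSentimentScore.getPositive -
        result.getSentimentScore.getNegative) * 100).toInt
      SentimentResult(result.getSentiment, score)
    }
  }
}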
Once again, the SDK makes the integration really painless - in my code I am simplifying the actual results down to a much cruder Positive/Neutral/Negative rating (plus a numeric score from -100 to 100).

The AWSCredentials class is the bit that is going to look in the normal places for an AWS key.


Storing and updating our stats

So now we have our inbound tweets and a way to assess their sentiment score - I then set up a very simple akka actor to manage the state, storing the API data in memory (if you restart the app, the store is reset and the API stops serving data).
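The actor code isn't embedded here, but a minimal sketch of the idea (the message protocol and class names are hypothetical) might look like:

import akka.actor.{Actor, ActorSystem, Props}

// Hypothetical message protocol for the stats actor
case class RecordSentiment(country: String, score: Int)
case object GetStats
case class CountryStats(count: Int, averageSentiment: Double)

class StatsActor extends Actor {
  // In-memory store - lost if the app restarts
  private var stats = Map.empty[String, CountryStats]

  override def receive: Receive = {
    case RecordSentiment(country, score) =>
      val current = stats.getOrElse(country, CountryStats(0, 0.0))
      val newCount = current.count + 1
      val newAverage = (current.averageSentiment * current.count + score) / newCount
      stats += country -> CountryStats(newCount, newAverage)

    case GetStats =>
      sender() ! stats
  }
}

// val statsActor = ActorSystem("app").actorOf(Props[StatsActor], "stats")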

Again, very simple out-of-the-box stuff for akka, but it allows easy and thread-safe management of the in-memory data store. I also track a rolling list of the last twenty tweets processed, which is managed by a second, almost identical, actor.


The results

I ran the app during several games; below are some sample outputs from the API. The response from the stats API is fairly boring reading (just numbers), but the example tweets show a positive and a neutral tweet correctly identified (apologies for the expletives in the tweet about Poland - I guess that fan wasn't too happy about being beaten by the Senegalese!). You will also notice that the app captures the countries mentioned, which exposes one flaw of the design: in the negative tweet from the Polish fan after losing to Senegal, the sentiment is correctly identified, but we have no way to determine the subject - as both teams are mentioned, the app naively assigns it as a negative tweet for both teams, whereas on reading it is clearly negative with regards to Poland (I wasn't too concerned for my experiment, of course, just an observation worth noting).

Sample tweet from the latest-tweets API:

Sample response from the stats API:

When I finally did get around to starting to learn React, I just plugged in the APIs and paid no attention to styling, which is a roundabout way of apologising for the horrible appearance of the screenshot below (I'm really sorry about the CSS gradient)!






API Conf 2018 - Product Management for Engineers

Last week I attended and spoke at the 2018 API Conference in Berlin.

Having written about the topic before, my talk was titled: Your API as a Product - Thinking like a Product Manager (really aimed at engineers/architects/technologists).

It was recorded, so hopefully I will be able to share the video in the future, but the thrust of the talk was based around the concept that we are all building products on some level. Even if we don't have direct input into a commercial product that we might be writing code for, we all have some output: code, bug reports, designs etc. So if we are all building products, and those products all have users and therefore a user experience (your code, bug reports and design docs will be used by others, or maybe by you yourself), it makes sense to try and learn from the discipline of Product Management, as it is focussed on building better products and better user experiences for the end users.

Slides are below, and really your best bet is to scroll down to the references section at the end and start watching the real Product Managers' talks.



Generic Programming with Scala & Shapeless part 2

Last year I spent some time playing with, and writing about, Scala & Shapeless - walking through the simple example of generating random test data for a case class.

Recently, I have played some more with Shapeless, this time with the goal of generating React (javascript) components for case classes. It was a very similar exercise, but this time I made use of the LabelledGeneric object so I could access the field names - I thought I'd revisit the topic here and talk a bit about the internals of what is going on.


Getting started

As before, I had to define implicits for the simple types I wanted to be able to handle, and the starting point is of course accepting a case class as input.
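The gist itself isn't shown here, but based on the description below, a sketch of that entry point - with ComponentGenerator simplified to produce a String, and the React-specific third implicit omitted - looks roughly like this:

import shapeless.{HList, LabelledGeneric}

// Simplified type class: generate a (React) component for a value of type T
trait ComponentGenerator[T] {
  def generate(value: T): String
}

object ComponentGenerator {
  implicit def caseClassToGenerator[A, Repr <: HList](
      implicit generic: LabelledGeneric.Aux[A, Repr],
      gen: ComponentGenerator[Repr]): ComponentGenerator[A] =
    new ComponentGenerator[A] {
      // Convert the case class to its labelled generic representation (an
      // HList that retains the field names) and delegate to the HList generator
      def generate(value: A): String = gen.generate(generic.to(value))
    }
}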

So there are a few interesting things going on here:

First of all, the method is parameterised with two types: caseClassToGenerator[A, Repr <: HList]. A is simply going to be our case class type, and Repr is going to be a Shapeless HList.

Next up, we are expecting several implicit method arguments (in the original code there is also a third implicit, used purely for the React side of things - it is omitted from the sketch above and can be skipped if the method handles everything itself):

implicit generic: LabelledGeneric.Aux[A, Repr], gen: ComponentGenerator[Repr],

Now, as this method's purpose is to handle the input of a case class, and as we are using Shapeless, we want to make sure that from that starting input we can transform it into an HList, so we can then deal with the fields one by one (in other words, this is the first step in converting a case class to a generic list that we can then handle element by element). In this setting, the second implicit argument asks the compiler to check that we have also defined an appropriate ComponentGenerator (my custom type for generating React components) that can handle the generic HList representation (it's no good being able to convert the case class to its generic representation if we then have no means to actually process a generic HList).

Straightforward so far?

The first implicit argument is a bit more interesting. Functionally, all LabelledGeneric.Aux[A, Repr] is doing is asking the compiler to make sure we have an implicit LabelledGeneric instance that can handle converting between our parameter A (the case class input type) and Repr (the HList representation). This implicit means that if we try to pass some type A to this method, the compiler will check that we have a Shapeless LabelledGeneric that can handle it - if not, we get a compile error.

But things get more interesting if we look at what the .Aux is doing!


Path dependent types & the Aux pattern

The best way to work out what is going on is to just jump into the Shapeless code and have a dig. I will use Generic as an example, as it's a simpler case, but it's the same for LabelledGeneric:
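Abbreviated from the Shapeless source, the trait looks roughly like this:

trait Generic[T] {
  // The generic representation type for T (e.g. an HList for a case class)
  type Repr

  // Convert between the concrete type and its generic representation
  def to(t: T): Repr
  def from(r: Repr): T
}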

That's a lot simpler than I expected to find, to be honest. As you can see from the above, there are two types involved: the trait parameter T and the inner type Repr, and the Generic trait is just concerned with converting between these two types.

The inner type, Repr, is what is called a path dependent type in Scala. That is, the type is dependent on the actual instance of the enclosing trait or class. This is a powerful mechanism in Scala (but one that can also catch you out, if you are in the habit of defining classes within other classes or traits). It is an important detail for our Generic here: it could be given any parameter T, so the corresponding HList could be anything, but this makes sure it must match the given case class T - that is, the Repr is dependent on what T is.

To try and get our head around it, let's take a look at an example:
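Using a hypothetical case class in the REPL, we see something like:

import shapeless.{::, HNil, Generic}

case class Person(name: String, age: Int)

val gen = Generic[Person]
// gen: Generic[Person]{ type Repr = String :: Int :: HNil }

val repr = gen.to(Person("Rob", 30))
// repr: String :: Int :: HNil = Rob :: 30 :: HNil

val person = gen.from(repr)
// person: Person = Person(Rob,30)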

Cool - as we expected, in our Generic example the type Repr has been defined to match the HList representation of our case class. It makes sense that we want the transformed output HList to have its own specific type (based on whatever input it was transforming), but it would be a real pain to have to actually define that as a type parameter in the class along with our case class type, so it uses this path-dependent type approach.

So, we still haven't got any closer to what this Aux type is doing, so let's dig into that...


The Aux Pattern

We can see from our code that the Aux type takes two parameters: firstly A - the parameter that our Generic will take - but it also takes the parameter Repr, which we know (or at least can guess) corresponds to the path dependent type defined nested inside the Generic trait.

The best way to work out what is going on is to take a look at the Shapeless code!
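Abbreviated from the Generic companion object in Shapeless:

object Generic {
  type Aux[T, Repr0] = Generic[T] { type Repr = Repr0 }

  def apply[T](implicit gen: Generic[T]): Aux[T, gen.Repr] = gen

  // ... materialisation macros etc.
}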

As we can see, the Aux type (this is defined within the Generic object) is just an alias for a Generic[T], where the inner path-dependent type is defined as Repr - they have a pretty decent explanation of what is going on in the comments, so I will reproduce that here:

(that is abbreviated for the more relevant bits - they have even more detail in the comments that can be read).

That pretty nicely sums up the Aux pattern - it allows us to essentially promote the result of a type-level computation to a higher-level type parameter. It can be used wherever we want to reason about path dependent types, but this is a common use for the pattern.



So that's all I wanted to get into for now - you can see the code here, and hopefully, with this overview and the earlier Shapeless overview, you can get an understanding of what the LabelledGeneric stuff is doing and how Shapeless is helping me generate React components.


JAXLondon 2017: Agile Machine Learning [VIDEO]

Last October a colleague and I gave a talk at the JAXLondon Conference about Machine Learning in an agile, commercial environment (I then also gave the talk again in November in Munich at the W-JAX Conference).




The video of the talk is now available - the first half (and end section) is mostly softer stuff, where I talk about lessons learnt from doing ML research in a commercial environment, and the middle section is my colleague, Sumanas, talking about how Word2Vec works, with some demos of using it in an interesting application!




In related conference speaking news, I will be in Berlin next month for the API Conference to talk about Product Management and API Design: https://apiconference.net/api-design-documentation/your-api-as-a-product-thinking-like-a-product-manager/




Monster Dash - Making an Android game with my son


At the start of last year, 2017, I set a resolution, of sorts, to build a mobile app with my older boy. He was just getting into playing games on mobile phones and tablets, and lots of them were simple side-scrolling platform games, where your character just had to run and avoid minor obstacles and perils.

Needless to say, whilst I managed to start it, it wasn't until after Easter this year that I actually completed it. This posed one problem: over a year on from the idea, my boy was playing far more sophisticated games, so when I pitched the idea of making a mobile game together, his plans went far beyond the simple side-scrolling platform game I had in mind! I was a bit unsure whether he would be totally underwhelmed by the finished product, but in the end, having seen his creation come to life, he was thoroughly pleased.



When we finally had something working (albeit fairly primitive) he took it into school as a show-and-tell, and surprisingly, the other children were all very impressed - I assume just because of the feat of making a game, despite it paling in comparison to the actual games they undoubtedly all played.





Anyway, the source code for the game is all on Github (maybe one day we will make a game and publish it, so he can see that process and actually have other people play his game) and can be found here: https://github.com/robhinds/monster-dash. I can't take much credit for the legwork on this one though - knowing that the game concept I was after was very simple, I figured there would be a how-to lurking somewhere on the internet, and sure enough we stumbled upon this: http://williammora.com/a-running-game-with-libgdx-part-1 - if you want to have a go, I would recommend following William's series of articles explaining the hows and whats. I modified (read: mangled) his source code a bit, simplified parts and added flourishes here and there, but it's very similar in theory.



To be honest, the hardest part was cleaning up the images - he drew them, then I snapped them with my phone, used a selection of online tools to remove the backgrounds and chop them up for the animations, then loaded them into the app.

My boy really enjoyed working on it, and still asks if we can make another app, with even grander ideas, so I highly recommend it!



We need to talk about AI

Ethical and Regulatory questions facing AI





Regardless of area of expertise, most of us are probably already aware of the momentum around Artificial Intelligence (AI). Between self driving cars, home assistants (Alexa, Google Home, et al) and the growing capabilities of our mobile devices there is no escaping the ever looming presence of AI in our lives.

Furthermore, it seems unlikely that this will slow down anytime soon. A recent Narrative Science study found that AI adoption grew by 60% in the last year with 61% of organisations having reported to have implemented AI within their business, and a Gartner report predicted that by 2020 85% of customer interactions will be managed without human intervention.

But despite this growth, there is still a question mark over whether, and if so how, the field should be regulated. Having been brought up on decades of sci-fi about AI going rogue and robots enslaving the human race, it feels like there is both fear of this possible future and scepticism that such fears are only the stuff of movies. Elon Musk has famously warned of the future risks of AI: “I think we should be very careful about artificial intelligence. If I had to guess at what our biggest existential threat is, it’s probably that”, whilst others, including Mark Zuckerberg, have downplayed the claims of doomsday scenarios as irresponsible.

So what's the big deal? AI already permeates so many aspects of life and business, but consider for a moment that these technologies may be used to control autonomous cars on public roads, determine people's credit scores or suitability for a job, detect illness, or even inform policing and judicial decision making - it is pretty clear that we should have a good understanding of these technologies, and clear systems of accountability and control in place. In all these examples, getting a decision wrong has the potential to ruin lives, yet there is still limited regulation, control or even understanding of the algorithms, the data and their usage.

A common analogy is with other heavily regulated industries: big pharma companies can't release drugs without thorough testing and approval, yet several big tech companies have already started testing autonomous vehicles on public roads with limited regulatory controls. That's not to say that they have had a completely free pass - there are varying levels of regulation depending on the region. Arizona has long promoted itself as an AI-friendly state to try to attract business from big tech, making it as easy as possible for companies to test self-driving cars with minimal regulatory friction, and it recently saw the first fatality from a self-driving car.

In its 2017 report, the AI Now Institute recommended that AI be outright banned from use in high-risk areas such as criminal justice, healthcare, welfare and education, with further measures for other domains - which, given the potential impact of errors in these domains, seems like a fairly sensible starting point.



Uncertainty and the unknown


One key aspect that is especially troubling is the lack of understanding of both the data and the underlying technology. This isn't necessarily a surprise - we have computers being trained on millions of data points, to the point of being able to outperform humans at their tasks, so it follows that both the inner workings and the end results can be beyond easy comprehension.

This problem has been demonstrated by several high profile mishaps from large tech companies, showing that even companies with a wealth of resources and technical expertise in the domain can be caught out - such as Microsoft's AI chatbot Tay, which quickly became racist when released into the wild. Clearly Microsoft had neither intended nor envisaged that end result. Similarly, when Google Translate revealed gender bias in pairing “he” with “hardworking” and “she” with “lazy”, it clearly wasn't an intentional or foreseen behaviour, but it eventually revealed itself with wider usage.



Understanding where bias in AI comes from


To get a better understanding of where these biases and blind spots come from, let's take a look at how AI learns. Broadly speaking, there are three primary approaches to training AI: supervised, unsupervised and reinforcement learning.

Unsupervised learning is where the AI is fed very large amounts of raw data - for example an entire corpus of fictional texts - and it is left to work out patterns or groupings. That is, it doesn’t know a right or wrong answer, but can identify related things from the dataset and group them together (for example, AI reading popular fiction might group together terms such as “batman” and “wonder woman”, but it would have no knowledge of what these terms actually mean).

Supervised learning is where the AI is fed very large amounts of marked up data - that is, for each input, it also gets passed the expected output. An example of this is if you had a large set of photos (say Google Photos) which are pre-tagged with descriptions of what is in the photo, the dataset could be used to train an AI to identify contents of a photo.

Reinforcement learning is similar to supervised learning in as much as the algorithm gets information about whether or not it is performing well (like knowing the answer for a given input), but this comes as a feedback loop, and it works more like a trial-and-error approach to learning (there might be a general fitness function that the algorithm uses to determine whether its response to a given input has been successful, adjusting its response for the next cycle). The simplest example is something like AlphaGo/AlphaZero, where an algorithm learns to play a game like Go or chess by trial and error, getting feedback on its attempts from the game itself.

Both supervised and unsupervised learning require vast amounts of data to accurately train AI, which leads us to one of the primary challenges for building fair and ethical AI: sourcing the data to train on. AI is dependent on these huge datasets and becomes finely tuned to all the details and subtle underlying patterns, regardless of whether we are aware of them or not - and, as we will see, getting objective, raw datasets of sufficient magnitude is rife with challenges.



Institutional bias


Similar to the concept of Conway's Law, which states “any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure”, the data we generate in our actions, conversations and interactions as a society or organisation will naturally reflect the values, beliefs and structure of that society (or organisation). There is an intrinsic and inescapable subjectivity in all big data, best described by Lisa Gitelman in her book Raw Data is an Oxymoron:

Objectivity is situated and historically specific; it comes from somewhere and is the result of ongoing changes to the conditions of inquiry, conditions that are at once material, social, and ethical

A simple example of this could be in criminal statistics: if a police force stop-and-searches a particular demographic more heavily than others, then that will be reflected in the numbers, and that cultural subjectivity influences the dataset. This subjectivity will then naturally carry over to, and likely be amplified by, the trained AI as it becomes finely tuned to the data (an example of this was seen where software used to inform sentencing decisions relied on data that had institutional bias, which resulted in a racial bias in the risk assessment - strengthening the AI Now report's proposal of banning AI use in these areas).



Finding complete & representative data


Compounding this problem is the fact that researchers working in AI face the challenge of finding datasets that are big enough and permitted for such use - these can be hard to come by, meaning they often make do with incomplete or skewed datasets. For example, the popular community discussion web site Reddit makes its vast historic dataset publicly available, which is a rich source of natural text and conversation, and makes for a very tempting dataset for engineers and researchers to take advantage of. However, Reddit is a very specific subset of the internet, and of the real world demographic, meaning that whilst there is undoubtedly a lot that can be learnt from that wealth of data, any AI trained on it will be heavily subjective.

There have been several reports finding that these incomplete or skewed data sets just further add to the bias. The 2017 AI Now report said:

data can easily privilege socioeconomically advantaged populations, those with greater access to connected devices and online services

This is to be expected when you think about it - always-connected people with mobile devices will naturally generate a lot more data than those without easy access to computers. On a very simple level, the core regular users of Reddit, for example, will likely have access to mobile devices or at the very least to computers and the internet - which rules out large parts of the population - not to mention the inclination to partake in the online community.

There are also other challenges intrinsic to the way AI currently works: if we have a dataset where a particular demographic is only reflected in 1% of the data, then the AI could claim to achieve 99% accuracy whilst being completely inaccurate for all of that 1% minority. Furthermore, we know that there is a strong relationship between the amount of training data and the accuracy of AI, so even in the scenario where we have a perfect representation of the population, by definition all minority groups will have a smaller selection of data points to train on, so inevitably the performance of the AI for minority groups will fare worse.

Finally let’s consider again that we have a huge, rich dataset (the idea scenario), and we try to intentionally exclude sensitive features that might explicitly encode bias: race, gender, age, etc. There are still loads of data points that may still act as a indirect proxy to these features, so even without including gender, age and sex in the input data, it is easy to see how these features can get encoded in other data points such as names, location, interests, communication style. This makes it even harder to detect and prevent bias in our datasets.

There is no objectivity in big data.



How can we address the problem?


Some of these examples might be clearer cases of existing bias that we need to address in training our AI, but a tougher challenge is how we can address the more subtle biases hidden in the cultural subjectivity that we might not even be aware of. We all carry opinions and biases that subconsciously affect our attitudes towards things - but if we are not consciously aware of those, how can we ensure that developers training AI have the foresight to engineer around them?

This highlights one often recommended approach to tackling the problem: a greater emphasis on diversity in the teams building AI - both diversity of individual identities and cross-functional teams. Statistically and broadly speaking, AI is often developed by teams of engineers with limited diversity, which results in a limited range of views when thinking about the dataset and about what goals are optimised for in the training process. The 2017 AI Now report recommended:

“stakeholders in the AI field should release data on the participation of women, minorities and other marginalised groups within AI research and development.”

Aside from trying to recognise subtle bias in the data, we also need to consider that the objective norm - what we consider to be acceptable at the moment - is changing. Going back to Lisa Gitelman's quote: “Objectivity is situated and historically specific”. If you took a dataset from even just two decades ago, it's not hard to imagine that AI trained on it would have unacceptable biases, because societal norms and general attitudes to race, gender, identity, etc. have changed significantly since then.

As a simple example, take the motor insurance industry. For decades, insurance companies identified young male drivers as a particularly high accident risk and traditionally charged much higher premiums for that demographic - a widely accepted approach, and one based in statistics: young male drivers were statistically more likely to have an accident behind the wheel. But then, in 2012, EU gender discrimination regulation came into effect that prevented companies from charging men more than women, so insurers have now stopped using that categorisation for pricing, despite the data being available. If that were AI, it would need to be re-trained on a modified dataset, with gender probably removed from the data and thought put into other data points that would also need to be removed (names, for example, might very easily be a broad proxy for gender). Whilst this is a simpler example, as it's a binary change in legislation with clear requirements, there are also more gradual shifts in attitude where things become a lot fuzzier - like the changes in attitudes to race, gender and sexuality over the last thirty years.

We previously discussed the idea that even if we exclude socially salient data points, such as gender, those features can still get encoded via other proxies in the data, and this change in EU regulation and its effect on the insurance industry provides an interesting case study of exactly that phenomenon. An article written in the Guardian following the EU ruling explained that, despite the ruling meaning insurers couldn't charge more because a driver was male, male premiums have actually increased in comparison to female premiums since. The reasoning they provide is that rather than classifying on the crude data point of gender, the system instead places greater importance on a wider set of data points, and it turns out that these other data points are really just acting as encoded proxies (they list car size, occupation and vehicle modifications). The article makes the observation that MoneySupermarket released a study showing that 8 of the worst 10 occupations for drink/drug-driving incidents were in the building trade, with midwives being the least likely to have a drink/drug-driving offence - the suggestion being that the building trade is predominantly male, and midwives predominantly female.



It certainly seems to me like there are still lots of challenges as to how we can foresee potential problems and how to tackle them. A key starting point will be ensuring teams working in the area have a good understanding of the dataset they are working with: where it comes from, any inherent bias or blind spots and which of the data points might need modifying or weighting due to their contextual/social salience. This will need to be driven through agreed best practices and AI development standards from organisations like AI Now and from academia, as well as a need for appropriate regulatory controls (although these face their own challenges, which I will discuss in a later article).

I also believe that these challenges mean an even greater need for diversity in the teams - both in terms of the race, background, gender etc. of the team members, and also in cross-functional membership: not just engineers, but people working closely with the domain experts for the specific field.




Photo credits:

Heading Photo by Alex Knight on Unsplash
Anonymous person Photo by Andrew Worley on Unsplash


Serverless with AWS Lambda & Scala

About a year ago, I started looking at AWS's serverless offering, AWS Lambda. The premise is relatively simple: rather than a full server that you manage and deploy your docker/web servers to, you just define a single function endpoint, map that to the API Gateway, and you have an infinitely* scalable endpoint.

The appeal is fairly obvious - no maintaining or upgrading servers, fully scalable, and pay-per-second-of-usage (so no cost for AWS Lambda functions you have defined whilst they are not being called). I haven't looked into the performance of the JVM based Lambda functions, but my assumption is that there will be potential performance costs if your function isn't frequently used, as AWS will have to start up the function, load its dependencies etc., so depending on your use case it would be advisable to do some performance benchmarking before putting it into production use.

When I first looked into AWS Lambda a year ago, it was less mature than it is today, and dealing with input/output JSON objects required annotating POJOs, so I decided to start putting together a small library to make it easier to work with AWS Lambda in a more idiomatic Scala way - using Circe and its automatic encoder/decoder generation with Shapeless. The code is all available on Github.

Getting Started

To deploy on AWS I used a framework called Serverless - this is a really easy framework to setup serverless functions on a range of cloud providers. Once you have followed the pre-requisite install steps, you can simply run:

serverless create --template aws-java-gradle 

This will generate a Java (JVM) based Gradle project template, with a YML configuration file in the root that defines your endpoints and function calls. If you look in the src folder, you will also see the classes for a very simple function that you can deploy to check your Lambda works as expected (you should also take the time at this point to log in to your AWS console and have a look at what has been created in the Lambda and API Gateway sections). You should now be able to curl your API endpoint (or use the serverless CLI with a command like: serverless invoke -f YOUR_FUNCTION_NAME -l).

ScaLambda - AWS Lambda with idiomatic Scala

Ok, so we have a nice simple Java based AWS Lambda function deployed and working - let's look at moving it to Scala. As you build an API in this way, you will need to define endpoints that can receive inbound JSON as well as return fixed JSON structures. AWS provides its inbuilt de/serialisation support, but inevitably you will have a type that needs further customisation of how it is de/serialised (UUIDs maybe, custom date formats etc.) - there are a few nice libraries that can handle this stuff, and Scala has some nice ways to simplify it.

We can simply upgrade our new Java project to a Scala one (either convert the build.gradle to an sbt file, or just add Scala dependency/plugins to the build file as is) and then add the dependency:



We can now update the input/output classes so they are just normal Scala case classes:
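The snippet isn't reproduced here, but the input/output types for the example function would be as simple as the following (TestOutput is referenced later in the post; TestInput and the fields are assumptions):

case class TestInput(name: String)
case class TestOutput(message: String)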


Not a huge change from the POJOs we had, but it is both more idiomatic and also means you can use case classes that you have in other existing Scala projects/libraries elsewhere in your tech stack.

Next we can update the request handler - this will also result in quite similar looking code to the original generated Java code, but it will be in Scala and will be backed by Circe and its automatic JSON encoder/decoder derivation.
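The handler code itself isn't embedded here, but a sketch of its shape, based on the description below (Controller, handleRequest and ApiResponse come from the post; the default component names and exact signatures are assumptions):

class TestController extends Controller[TestInput, TestOutput]
    with DefaultExceptionHandler
    with DefaultResponseSerializer {

  // handleRequest receives the already-decoded input case class and
  // returns the output wrapped in an ApiResponse
  override def handleRequest(input: TestInput): ApiResponse[TestOutput] =
    ApiResponse.success(TestOutput(s"Hello, ${input.name}"))
}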



You will see that, similar to the AWS Java class, we define generic type parameters for the class - the input case class and the output case class - and then simply implement the handleRequest method, which takes the input class and returns the output response.

You might notice the return type is wrapped in the ApiResponse class - this is simply an alias for a Scala Either[Exception, T], which means if you need to respond with an error from your function you can just return an exception rather than the TestOutput. To simplify this, there is an ApiResponse companion object that provides success and failure methods:
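A sketch of that alias and companion, as described:

object ApiResponse {
  type ApiResponse[T] = Either[Exception, T]

  def success[T](body: T): ApiResponse[T] = Right(body)
  def failure[T](error: Exception): ApiResponse[T] = Left(error)
}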

All the JSON serialisation/de-serialisation will use Circe's auto-derived codecs, which rely on Shapeless - if you use custom types that cannot be automatically derived, you can just define implicit encoders/decoders for your types and they will be used.

Error handling

The library also has support for error handling - as the ApiResponse class supports returning exceptions, we need to map those exceptions back to something that can be returned by our API. To support this, the Controller class that we have implemented for our Lambda function expects (via self-type annotations) to be provided with an implementation of the ExceptionHandlerComponent trait and of the ResponseSerializerComponent trait.

Out of the box, the library provides a default implementation of each of these, but they can easily be replaced with custom implementations for any special exception handling required:
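The internals of the trait aren't shown in this post, so the member name and signature below are assumptions, but a custom implementation might look something like:

trait CustomExceptionHandlerComponent extends ExceptionHandlerComponent {
  override val exceptionHandler = new ExceptionHandler {
    // Map exceptions to an HTTP status code and message (assumed signature)
    def handle(error: Exception): (Int, String) = error match {
      case _: NoSuchElementException   => (404, "Not found")
      case _: IllegalArgumentException => (400, "Bad request")
      case _                           => (500, "Internal server error")
    }
  }
}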

Custom response envelopes

We mentioned above that we also need to provide an implementation of the ResponseSerializerComponent trait. A common pattern in building APIs is the need to wrap all response messages in a custom envelope or response wrapper - we might want to include status codes or additional metadata (paging, rate limiting etc.) - this is the job of the ResponseSerializerComponent. The default implementation simply wraps the response inside a basic response message with a status code included, but this could easily be extended or changed as needed.
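As an illustration only (the envelope shape here is hypothetical, not the library's default), a custom wrapper with extra metadata might look like:

import io.circe.Encoder
import io.circe.generic.semiauto.deriveEncoder

// Wrap every response body with a status code and room for extra
// metadata (paging, rate limits etc.)
case class ResponseEnvelope[T](status: Int, meta: Map[String, String], body: T)

object ResponseEnvelope {
  implicit def encoder[T: Encoder]: Encoder[ResponseEnvelope[T]] =
    deriveEncoder[ResponseEnvelope[T]]
}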

Conclusion

The project is still in the early stages of exploring what is possible with AWS Lambda, but hopefully it is starting to provide a useful approach to idiomatic Scala with AWS Lambda functions, allowing re-use of error handling and serialisation so you can just focus on the business logic required for the function.

