FaaS in Action with AWS + Serverless Framework for Java Developers

Serverless is becoming popular, and Function as a Service (FaaS) is one of the main trends in the serverless world, so being familiar with this technology is a really good addition to a developer's profile. From the very beginning of serverless, though, the community preferred JavaScript as the programming language. The reason is that serverless was initially aimed at less compute-intensive, more I/O-intensive services, where JavaScript performs well, and as a result JavaScript came to dominate the serverless world. But the way we use serverless and utilize its computational power has changed, and there is nothing wrong with a Java developer trying out FaaS and building comprehensive services. Therefore, I will walk through a practical example of developing an API in Java and deploying it to the AWS cloud.

There are a few prerequisites to follow along:

  • Basic knowledge of developing an API
  • A basic idea about AWS

You can do this exercise within the AWS Free Tier, so there won't be any cost. If you don't have an account already, feel free to create one by following the link https://portal.aws.amazon.com/billing/signup

Before we start, I'll list the services and technologies we will be using here.

  1. Serverless Framework - This is an open-source CLI tool that helps you develop, deploy, debug, and secure your services. It works on top of AWS CloudFormation and sets up all the necessary AWS services for us automatically.
  2. AWS CloudFormation - This helps us keep track of the resources/resource groups in AWS.
  3. AWS Lambda - Lambda is the actual compute service we are going to use to build and run our FaaS services. It has a lot of advantages, such as auto-scaling, support for different languages, a large community, and maturity among serverless technologies. I am not going to cover a lot of theory here, as this article focuses on the purely practical side of FaaS. If you want to learn more, feel free to check the AWS docs.
  4. AWS S3 - S3 is an object storage service in which we are going to store our code artifacts.
  5. Amazon API Gateway - This is a fully managed serverless service that we will use to expose our HTTP endpoints and manage them on top of Lambda. It's reliable and beginner-friendly, but feel free to use any API gateway based on your preference.
  6. AWS CloudWatch - CloudWatch is used to monitor the resources and create events.

Everything starts with a design, so let's look at the high-level architecture of what we are going to build.

As a first step, you need to set up the Serverless CLI on your local machine. Since the Serverless Framework is well documented, I highly recommend following this doc to set it up:

https://www.serverless.com/framework/docs/getting-started/

Now we are going to create the project. To do that, execute the following command in your CLI.
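
Assuming we scaffold from the standard aws-java-maven template (an assumption on my part; pick whichever Java template you prefer, and check the Serverless docs for the templates available in your CLI version), the command looks something like this:

    serverless create --template aws-java-maven --path myservice
    cd myservice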

The source code used here can also be found on GitHub: https://github.com/vithushanms/aws-serverless-java-demo.git

Now we should be able to see the project "myservice" if we open it up in an IDE/text editor. We can see the source as well as serverless.yml. This serverless.yml is the most important part of our service, because it is where we specify all the infrastructure-related and function-related configurations. In the generated template you'll notice that most of the lines are commented out; they are there as a reference for how to work with Serverless. We will replace them with our own configuration in a while. Before that, let's have a look at the source.
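
For orientation, the layout looks roughly like this (a sketch assuming the Maven template; your generated project may contain a few additional files):

    myservice/
    ├── pom.xml
    ├── serverless.yml
    └── src/main/java/com/serverless/
        ├── ApiGatewayResponse.java
        ├── Handler.java
        └── Response.java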

In the com.serverless package we can see three Java files. Let's explore them one by one.

The first thing we are going to look at is Handler.java. If you are familiar with MVC, this is the replacement for the C (controller) in FaaS, but it's slightly different from a controller. Here you use an individual handler for each endpoint, and each handler implements the RequestHandler interface from the AWS SDK. The handleRequest method is the place where we decide our response. After the implementation, this class name and method name will be configured as a function in the serverless.yml file (explained in the latter part of this article).

Let's take a deep dive into the implementation given in the example. The handleRequest method takes two parameters: the first one is the request itself and the second one is the context. You might be wondering what this context is. Well, if you look at the high-level architecture of what we are developing, you will notice that all our requests come to the API Gateway first and are then passed to Lambda through the proxy integration. So at the Lambda level it's not an HTTP context anymore; it's a gateway context, since the gateway can manipulate the request in its own way. The elements added by the API Gateway are handed to Lambda via this gateway context. Similarly, if requests come directly to Lambda, it will be a Lambda context. I hope that makes sense; if you need further details, feel free to refer to the docs by clicking here.

The given template uses the request body parameter from the input to create a Response from the pre-defined response model (the Response.java file), and then builds the gateway response. Most of you might be confused about why there are two response models. The reason is that, since we are using the API Gateway, it has a standard model in which it accepts the response from Lambda; only then can it pass the response to the client properly. That model is what ApiGatewayResponse.java is, and it comes with a Builder too. This Builder makes our life easier when constructing a gateway response, so we can keep ApiGatewayResponse.java as it is and reuse it. As you can see in the given template, our response model becomes the body of the gateway response, and it is serialized inside the builder.
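
Here is a rough sketch of what that handler looks like (simplified from the generated template, so your version may differ slightly, for example by including a logger):

    package com.serverless;

    import java.util.Collections;
    import java.util.Map;

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;

    // One handler class per endpoint. With the proxy integration, the API Gateway
    // event arrives as a Map, and we answer with the template's ApiGatewayResponse model.
    public class Handler implements RequestHandler<Map<String, Object>, ApiGatewayResponse> {

        @Override
        public ApiGatewayResponse handleRequest(Map<String, Object> input, Context context) {
            // The pre-defined response model (Response.java) becomes the body of the gateway response.
            Response responseBody = new Response("Go Serverless v1.x! Your function executed successfully!", input);

            // The builder serializes the body into the structure API Gateway expects from Lambda.
            return ApiGatewayResponse.builder()
                    .setStatusCode(200)
                    .setObjectBody(responseBody)
                    .setHeaders(Collections.singletonMap("X-Powered-By", "AWS Lambda & serverless"))
                    .build();
        }
    }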

Now let's tweak a few things according to our preference. Let's say we need an endpoint where we pass the firstName, lastName, and the year of birth, and it gives us back the full name and the age. I guess it's good to start with the response model.

Here, I have modified Response.java in a way that serves our purpose. It might not be the best way to do it, but it is fine just to try this out. Now let's add a request DTO and modify our handleRequest method too.
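
A sketch of one way this could look (the class and field names here are my own assumptions for illustration; the actual code in the linked repository may differ):

    // RequestDto.java - the incoming JSON body (hypothetical field names)
    package com.serverless;

    public class RequestDto {
        private String firstName;
        private String lastName;
        private int birthYear;

        // Getters and setters are required so Jackson can deserialize the body.
        public String getFirstName() { return firstName; }
        public void setFirstName(String firstName) { this.firstName = firstName; }
        public String getLastName() { return lastName; }
        public void setLastName(String lastName) { this.lastName = lastName; }
        public int getBirthYear() { return birthYear; }
        public void setBirthYear(int birthYear) { this.birthYear = birthYear; }
    }

    // Response.java - reshaped to carry the full name and the age
    package com.serverless;

    public class Response {
        private final String fullName;
        private final int age;

        public Response(String fullName, int age) {
            this.fullName = fullName;
            this.age = age;
        }

        public String getFullName() { return fullName; }
        public int getAge() { return age; }
    }

    // Handler.java - handleRequest reworked to read the body and build the new response
    package com.serverless;

    import java.time.Year;
    import java.util.Collections;
    import java.util.Map;

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import com.fasterxml.jackson.databind.ObjectMapper;

    public class Handler implements RequestHandler<Map<String, Object>, ApiGatewayResponse> {

        private static final ObjectMapper MAPPER = new ObjectMapper();

        @Override
        public ApiGatewayResponse handleRequest(Map<String, Object> input, Context context) {
            try {
                // In the proxy integration the raw JSON body arrives under the "body" key.
                RequestDto request = MAPPER.readValue((String) input.get("body"), RequestDto.class);

                String fullName = request.getFirstName() + " " + request.getLastName();
                int age = Year.now().getValue() - request.getBirthYear();

                return ApiGatewayResponse.builder()
                        .setStatusCode(200)
                        .setObjectBody(new Response(fullName, age))
                        .build();
            } catch (Exception e) {
                // Anything that fails to parse gets a 400 with a simple message body.
                return ApiGatewayResponse.builder()
                        .setStatusCode(400)
                        .setObjectBody(Collections.singletonMap("message", "Invalid request body"))
                        .build();
            }
        }
    }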

It was not a huge change, though. In addition to this, you can use a repository interface or other interfaces via dependency injection to build your response as well. Anyway, I am not going to load this article with a lot of buzzwords; rather, I'll write a separate article about enterprise-level applications and serverless best practices. Now let's have a look at the serverless.yml file.

If you are familiar with YAML files, that will be a benefit, but there is nothing complex here. I just removed everything from the template and added only the basic configuration, as shown below. In the first section we set the name of the service. Then we configure the provider section with the cloud provider, runtime, stage, and the region where we are going to deploy. The package section gives the path to the deployment artifact according to the packaging configuration; I am going to use it as it is, but feel free to modify the packaging if you wish. Next we have the functions section, where we declare the function configurations. Here we can add multiple functions one by one at the right indentation and bundle them into the service, as well as set the name of each function according to our preference (the function name is hello in this example). handler is where we specify the path of the handler in the {package name}.{class name} format. events configures the trigger points for that function; in our case it's only an HTTP event through the API Gateway. Also, keep in mind this is just the bare minimum configuration needed to deploy. In real-world serverless applications, you will be building a lot of infrastructure components from this yml.
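
A minimal serverless.yml along those lines could look like this (the runtime version, region, artifact path, and HTTP path are assumptions; adjust them to match your pom.xml output and your preferences):

    service: myservice

    provider:
      name: aws
      runtime: java11
      stage: dev
      region: us-east-1

    package:
      artifact: target/hello-dev.jar

    functions:
      hello:
        handler: com.serverless.Handler
        events:
          - http:
              path: hello
              method: post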

Now it's time to think about deployment. You may have a question in your mind: how is the local CLI tool going to identify your AWS account and deploy to it correctly when you haven't configured any credentials yet? That's a fair question, so let's do the credentials setup now. I am not going to explain it here, as the instructions are a bit lengthy; instead, I'll point you to the well-documented guide. Click here to navigate there.
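
For reference, one common way to register credentials with the Serverless Framework is the config credentials command (the linked guide also covers alternatives such as AWS CLI profiles and environment variables):

    serverless config credentials --provider aws --key <ACCESS_KEY_ID> --secret <SECRET_ACCESS_KEY>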

Once you are done with the credentials setup, you can run the following command in your terminal.
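
Since the Java function is deployed as a jar, the project has to be built before deploying. Assuming the Maven setup used here, that would be something like:

    mvn clean install
    serverless deploy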

After a successful deployment, you can view all the resources in your AWS Management Console as well.

Now let's test our deployed service. There are two ways of testing it: the first is executing the invoke command in the Serverless CLI, and the other is using a client or API testing tool. You can click here to learn more about invoke. I am going to stick to an API testing tool.
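
If you prefer the command line over a GUI tool, a request against the endpoint printed in the deploy output would look roughly like this (the URL is a placeholder, and the field names follow the request DTO sketched earlier):

    curl -X POST \
      https://<api-id>.execute-api.us-east-1.amazonaws.com/dev/hello \
      -H "Content-Type: application/json" \
      -d '{"firstName": "John", "lastName": "Doe", "birthYear": 1990}'

The response body should contain the full name and the calculated age.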

Congratulations! If you were able to follow along, you have built your first FaaS application. I hope this will be a good start. As a piece of advice, try a few real-world use cases by yourself, and if you are new to the AWS ecosystem, this is a good way to begin your AWS journey as well.

I'll be writing another article soon on this topic for enterprise-level applications. Follow me on LinkedIn to get instant updates. Feel free to reach out to me any time.

Have a great journey ahead!!
