Deploying a prototype with JHipster

In my previous post, I looked at the minimal infrastructure for a hobbyist webapp in JHipster. Now I want to look at the process for putting a prototype into production.

What do I mean here by a prototype? I mean a simple first cut of a production site. It needs to be simple, while achieving the standards required of a professional site. It also needs to be better documented and more reproducible than a one-person hobby site: I should be able to hand over the code and documentation to another developer and never have them call me for help.

The background to this post is a requirement to produce a simple platform for a local company. The work needs to be supportable by other people, as I don’t want to be the only person who can work on this. This project provided a good opportunity to look at moving beyond the hobby-site model.

Ultimately, the changes made on top of what was done last time are: adding continuous integration, planning zero-downtime deployment, and sketching out a roadmap for the future.

Platform

In the previous post I discussed the trade-offs between AWS and Digital Ocean. AWS has powerful infrastructure, but it is only worth taking on when that power saves more than the additional time needed to manage the infrastructure. It also requires a commitment to ongoing platform costs at a point when the long-term budget and timescales are not yet set. For the same reason, I dismissed the idea of a managed database, preferring the quick and cheap option of installing MySQL on the server.

For this initial prototype set-up, I think we are still at the stage where Digital Ocean has the lead. However, one of the downsides of this is losing some resilience in the system, so good monitoring needs to be in place.

Jenkins

As this is a professional site, the deployments need to be accountable – which means the builds need to be repeatable. JHipster provides support for setting up continuous integration, and this is so simple that it is inexcusable not to use it. By using Jenkins for production builds we can be sure these are done correctly, with no danger of files outside version control polluting the project.
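For reference, the set-up is a sub-generator run from the project root; it asks which CI tool you are using and creates the matching build configuration (a Jenkinsfile, in this case):

jhipster ci-cd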

I added a new Digital Ocean server using their Jenkins tutorials and soon had a CI server up and running. The first time I tried this, I had a few problems with the machine running out of memory, but assigning some swap space as virtual memory fixed this.
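In case anyone hits the same problem, this is a sketch of the standard recipe for adding a swap file to an Ubuntu server (the 4G size is an assumption; adjust to suit your droplet):

sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# make the swap file permanent across reboots
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab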

A basic change deployment process

A deployment process should be completely automated – that way you can be sure the process takes place the same way every time. However, if the process has not been automated, it does at least need to be documented.

If the process is not automated at the start of a project, then some critical thought needs to be given to when this will happen. As more features get added, automating the deployment only gets more complicated. It's technical debt: over the years, days can be lost to manual deployments, while the time is never found to fix things there and then.

At this stage I am assuming a single production server, configured as in my previous post, i.e. with an Apache server to handle HTTPS connections and proxy requests to the application server.

The deployment process should be clearly documented so that anyone working with the site can find it. That process will look something like this:

  • Create a ticket, summarising the changes that need to be made. You might not use Jira, but some sort of tracking system should be used, to provide a trail of events.
  • Define a series of (preferably automated) tests that will define whether the changes are successful.
  • Create a branch, add the tests and make the changes. Database updates must be backwards-compatible and work with both the new and the current version of the application (for example, take care when removing columns).
  • If a code reviewer is available, they should review the changes before they are merged. If everything seems fine, the branch should be rebased against master then merged.
  • The changes are picked up by Jenkins and a production jar is produced.
  • The production jar is started on a new port via a command-line option (-Dserver.port=XXXX). If database changes are included in Liquibase, they will be applied at this point.
  • The Apache configuration is amended to point to this new version, and Apache is then reloaded (see the sketch after this list).
  • The old application server is stopped.
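As a sketch of the switch-over itself, assuming the old instance runs on port 8080, the new build sits at /opt/app/app-new.jar and the proxy lines live in a single Apache site file (all names, paths and ports here are illustrative):

# start the new build on a spare port
nohup java -Dserver.port=8081 -jar /opt/app/app-new.jar > /var/log/app-new.log 2>&1 &

# wait until the new instance responds before switching traffic
until curl -sf http://127.0.0.1:8081/ > /dev/null; do sleep 5; done

# point the Apache proxy at the new port, then check and reload
sudo sed -i 's/127.0.0.1:8080/127.0.0.1:8081/' /etc/apache2/sites-available/example-le-ssl.conf
sudo apachectl configtest && sudo systemctl reload apache2

# stop the old instance (assumes it was started with -Dserver.port=8080)
pkill -f 'server.port=8080'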

We could use a load balancer for the deployments, which currently costs $10.80 a month on Digital Ocean. This would offer several more options for deployment, but would also add costs and take more time to set up.

As we are using JWT, users don't notice the switch of servers. There are issues with JWT, such as the difficulty of revoking tokens when we need to end a session, but that is not a problem in this case.

Recovery

One of the most interesting trade-offs in these systems is between preparing for every possible error and dealing with the issues that actually occur. Getting a site close to 100% uptime is incredibly expensive: for example, what if the provider suffers an outage? Should you be able to fall back to another cloud provider? The time and money might be better spent on getting things out there and exploring user responses.

I would suggest there are two important things to consider:

  • Given the time/money trade-off, what are acceptable SLAs for this simple site?
  • If the whole system were to be deleted, can the site be rebuilt in a sensible amount of time? How much data would be lost?

What’s missing

The above outlines what I would need for a minimum viable site deployment. There is some distance between this and what I would expect from a fully-featured production webapp. I will cover these in future posts:

  • A full continuous integration pipeline, including quality assurance tools such as Sonar and FindBugs.
  • A more nuanced git branching strategy.
  • Spreading the site across multiple hosts, increasing the robustness and allowing scaling.
  • Capacity and error logging/alerting. This needs to be persistent, and to immediately communicate serious problems. Ideally, load spikes can be responded to automatically.
  • Better database recovery planning.
  • Clearer deployment tracking, so we can identify which version of the application a bug report occurred against.

The other thing to consider is at what point the savings would justify moving across to AWS.

Conclusion

The changes between this type of application and a hobbyist version are minimal, and JHipster's support for getting continuous integration running is a great help. The main changes involve tightening up the process, so that other people can become involved in the project as needed.

Deploying a hobby site with JHipster

This is the first in a series of posts looking at JHipster deployment. This post considers the most basic deployment. Later posts will look at more maintainable set-ups, including continuous integration and deployment pipelines.

For me, one of the most exciting things about JHipster is that it makes it feasible to build hobby websites on a full-stack Java/Javascript platform. Basic functionality can be added within a few hours, and pretty good user administration comes as standard.

But, for any web application, getting from a feature-complete version to a live application can be a problem. There are several things to consider, such as reliability, monitoring and security. This post looks into the minimal configuration required for a hobby site. I'm planning to maintain it as a work-in-progress, so if you have any questions or suggestions, please leave a comment below.

What do we mean by a hobby website?

By ‘hobby website’, I mean something that is created and run for fun. Since it's not intended to make money, the requirements for uptime and stability are lower than for a professional site, and we are probably taking a few shortcuts in terms of process. We want it to chug along by itself while we check in on it at weekends; we assume we will get pinged for errors, but we won't respond to them immediately. And we probably want to spend more time working on code rather than infrastructure.

Indeed, I suspect most people considering a hobby site are not going to run full test suites. While proper testing is more efficient in the long run, and essential on large or collaborative projects, some people enjoy the immediacy of seat-of-the-pants development. Certainly, the reputational risks of a hobby website are lower. I'm not saying that it's correct to avoid setting up a proper process with continuous delivery, but for most people that's not what they want from a project they work on in their spare time.

Platform

Unless you have a very generous employer, you're going to need somewhere to host the application, and in the long term this will cost something. If you're eligible for the free tier on AWS, or GCP's free offerings, that's a great help; otherwise you're looking at a paid option. While a lot of companies offer virtual servers, two popular options are Amazon's AWS and Digital Ocean.

While AWS is a powerful and sophisticated platform, it's probably overkill for most hobby sites. Setting up an AWS account involves a number of subtleties, and the additional features can be expensive. So, while I would tend towards the power of AWS for a commercial application, I favour Digital Ocean for hobby sites and simple deployments. Digital Ocean's virtual servers are about half the price of an Amazon server ($5 a month for the lowest level, $10 for something a little more powerful). Since we're talking about sites that aren't going to make money, reducing costs is probably good.

Getting a server up and running on Digital Ocean is straightforward, and they provide great step-by-step tutorials. The first step is the basic server set-up. Digital Ocean now offers a managed database service, but it's also easy to install MySQL and set up a new user.
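The MySQL part comes down to a handful of commands (the database name, user and password here are placeholders):

sudo apt install mysql-server
sudo mysql_secure_installation

# then, from the mysql prompt (sudo mysql):
CREATE DATABASE myapp;
CREATE USER 'myapp'@'localhost' IDENTIFIED BY 'choose-a-strong-password';
GRANT ALL PRIVILEGES ON myapp.* TO 'myapp'@'localhost';
FLUSH PRIVILEGES;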

Setting up email is best done through a third-party supplier. Mailgun's SMTP service is easy to configure with JHipster, and offers up to 10,000 emails a month for free.
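As a sketch, the relevant section of application-prod.yml would look something like this (the username and password come from the Mailgun dashboard; the values here are placeholders):

spring:
  mail:
    host: smtp.mailgun.org
    port: 587
    username: postmaster@mg.example.com
    password: your-mailgun-smtp-password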

Running the app

There are various options for running an app on a server, and Spring Boot's documentation has a section on the deployment and installation of Spring Boot apps. The main takeaway from this is: do not run the Spring Boot application as root.

I’ve actually been quite lazy. Rather than setting up a service, I ran my application from the command line and backgrounded it. I wouldn’t do this on an app I was producing professionally – but I have a monitoring service on the app and it’s stable enough for my needs.
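For the record, the lazy version is a one-liner (the jar name and log file are illustrative; a proper systemd service would be the more respectable choice):

nohup java -jar app.jar --spring.profiles.active=prod > app.log 2>&1 &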

One other thing: the Spring Boot app should be listening on port 8080, since we're not going to have it communicating directly with the internet. Instead, we will use an Apache proxy to handle HTTPS.

HTTPS

HTTPS is a non-negotiable part of any web project these days. You can run a web application over HTTP but:

  • Browsers will give lots of warnings and this provides a horrific user experience (and quite rightly so!)
  • You’re risking users sending sensitive data in the clear. Many users have horrible password strategies, and reuse the same one for multiple sites. So, an error on your site risks exposing other accounts and causing catastrophic loss for that user. While you aren’t responsible for other people’s bad password management, eliminating unnecessary risks is good.

Spring Boot supports HTTPS, but it's a bit of a hassle to set up multiple connectors so that an app can listen on both HTTP and HTTPS, with HTTP requests redirecting to a secure connection. Given the efficiency of the EFF's Certbot scripts, it's easier to set up Apache as a reverse proxy. In addition, terminating the HTTPS connections with Apache makes sense, as Apache is better at managing HTTPS than Spring Boot.

Once again, Digital Ocean offers some useful tutorials on installing and setting up Apache and using Certbot to set up HTTPS. Certbot configures the Apache instance to redirect HTTP requests to HTTPS.
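On Ubuntu, that process boils down to something like this (package names vary slightly between releases, and example.com is a placeholder for your domain):

sudo apt install certbot python3-certbot-apache
sudo certbot --apache -d example.com -d www.example.com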

The basic set-up in the tutorials is for a static site, but it's a simple matter to update this for a proxied application. First, from the command line, enable some additional Apache modules:

sudo a2enmod proxy
sudo a2enmod headers
sudo a2enmod proxy_http

Then, finally, remove the DocumentRoot directive from the SSL site configuration and add in the lines to proxy requests to port 8080:

ProxyPreserveHost On
RequestHeader set X-Forwarded-Proto https
RequestHeader set X-Forwarded-Port 443
ProxyPass / http://127.0.0.1:8080/
ProxyPassReverse / http://127.0.0.1:8080/
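After making the changes, check the configuration and reload Apache:

sudo apachectl configtest
sudo systemctl reload apache2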

Preventing Terrible Things

Whenever anything is put live on the Internet, thought needs to be given to the risks and costs involved. It’s impossible to be 100% safe, but obvious problems need to be prevented. We don’t want our hobby website to ever give us a horrible sinking feeling that something has gone very, very wrong. So it’s worth thinking about the things that could go wrong with any application exposed to the internet:

  • Running up massive bills with a hosting provider through our account with them being compromised.
  • Leaking user data (which has potential legal ramifications).
  • The host being compromised, allowing it to be used for nefarious activities.
  • Losing user data (which might annoy people, causing them to annoy us).

The measures discussed in this post should mitigate most of these risks, but you should take time to consider things such as data and password security to make sure you’re comfortable with the application you’re running.

One thing to be aware of is that all dependencies should be regularly upgraded, to make sure the latest versions are used. JHipster provides an upgrade tool for this.
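Like the CI set-up, this is a sub-generator run from the project root; it regenerates the application on a dedicated branch and merges the result back in:

jhipster upgrade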

This is a good point to remind you that care must be taken with the secrets stored in the JHipster application's application-prod.yml file. Do not carelessly commit these secrets to git and then push them to a public repo. It sounds obvious, but JHipster adds this file to the repository automatically.
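One mitigation is to keep the secret itself out of the file and supply it at start-up instead; Spring's relaxed binding maps environment variables onto properties, so something along these lines should work (the variable name is derived from JHipster's base64-secret property and is worth checking against your generated configuration):

export JHIPSTER_SECURITY_AUTHENTICATION_JWT_BASE64_SECRET='your-secret-here'
java -jar app.jar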

Database Backups

Any application which creates and maintains data needs some sort of backup. A manual solution is going to be more hassle than it's worth, as is a full up-to-the-second solution such as would be used on a professional site.

One option is to pay a little extra to use the new managed (Postgres) database service on Digital Ocean, and let them handle backups. I've gone for a quick-and-dirty daily backup using automysqlbackup, with the resulting files rsynced automatically to another server. It means a catastrophic failure could wipe out up to 24 hours of data, but the solution was something I could set up in 15 minutes, and I am OK with the risk.
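As a sketch, assuming automysqlbackup writes to its default directory and the remote machine accepts passwordless SSH from a backup user, the whole thing is an apt install plus one cron entry:

sudo apt install automysqlbackup

# /etc/cron.d/offsite-backup (paths and host are illustrative)
30 3 * * * root rsync -a /var/lib/automysqlbackup/ backup@backup-host:/backups/myapp/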

Monitoring

Given the simplicity of the app I’m working with, I’ve not bothered producing persistent monitoring of the application beyond the logs on the server. If, for some reason, the Digital Ocean server vanishes, these will be gone. I’m OK with that.

However, I do want to know if, for any reason, the Java application server falls over. I'm using Uptime Robot's free offering to email me if the app's homepage stops responding. This is enough for the first version of my application.

Thank you to Laurence Barry and Alex Tawse for their feedback on a draft of this post.