Deploying a prototype with JHipster

In my previous post, I looked at the minimal infrastructure for a hobbyist webapp in JHipster. Now I want to look at the process for putting a prototype into production.

What do I mean here by a prototype? I mean a simple first cut of a production site. It needs to be simple, while achieving the standards required of a professional site. It also needs to be better documented and reproducible than a one-person hobby site: I should be able to hand over the code and documentation to another developer and never have them call me for help.

The background to this post is a requirement to produce a simple platform for a local company. The work needs to be supportable by other people, as I don’t want to be the only person who can work on this. This project provided a good opportunity to look at moving beyond the hobby-site model.

Ultimately, the changes made on top of last time’s set-up are: adding continuous integration, planning zero-downtime deployment, and sketching out a roadmap for the future.

Platform

In the previous post I discussed the trade-offs between AWS and Digital Ocean. AWS has powerful infrastructure, but it is only worth taking on when its benefits outweigh the additional time needed to manage that infrastructure. It also demands a commitment to ongoing platform costs at a point when long-term budget and timescales are not yet set. For the same reason, I dismissed the idea of a managed database, preferring the quick and cheap option of installing MySQL on the server.

For this initial prototype set-up, I think we are still at the stage where Digital Ocean has the lead. However, one of the downsides of this is losing some resilience in the system, so good monitoring needs to be in place.

Jenkins

As this is a professional site, the deployments need to be accountable – which means the builds need to be repeatable. JHipster provides support for setting up continuous integration, and this is so simple that it is inexcusable not to use it. By using Jenkins for production builds we can be sure these are done correctly, with no danger of files outside version control polluting the project.
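For reference, generating the CI configuration is a one-liner with JHipster’s sub-generator. Run from the project root, it asks which CI system you use and, for Jenkins, produces a Jenkinsfile for the build:

# run in the JHipster project root and select Jenkins when prompted
jhipster ci-cd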

I added a new Digital Ocean server using their Jenkins tutorials and soon had a CI server up and running. The first time I tried this, I had a few problems with the machine running out of memory, but adding some swap space fixed this.

A basic change deployment process

A deployment process should be completely automated – that way you can be sure the process takes place the same way every time. However, if the process has not been automated, it does at least need to be documented.

If the process is not being automated at the start of a project, then some critical thought needs to be given to when this will happen. As more features get added, automating the deployment only gets more complicated. It’s technical debt: over the years, days can be lost to manual deployments, while an hour can never be found to fix things there and then.

At this stage I am assuming a single production server, configured as in my previous post, i.e. with an Apache server to handle HTTPS connections and proxy requests to the application server.

The deployment process should be clearly documented so that anyone working with the site can find it. That process will look something like this (a sketch of the switchover commands follows the list):

  • Create a ticket, summarising the changes that need to be made. You might not use Jira, but some sort of tracking system should be used, to provide a trail of events.
  • Define a series of (preferably automated) tests that will define whether the changes are successful.
  • Create a branch, add the tests and make the changes. Database updates must be backwards-compatible, working with both the new and current version of the application (for example, take care when removing columns).
  • If a code reviewer is available, they should review the changes before they are merged. If everything seems fine, the branch should be rebased against master then merged.
  • The changes are picked up by Jenkins and a production jar is produced.
  • The production jar is started on a new port via a command-line option (-Dserver.port=XXXX, placed before -jar). If database changes are included in the Liquibase changelogs, they will be applied at this point.
  • The Apache configuration is amended to point to this new version, and Apache is then reloaded.
  • The old application server is removed.

We could use a load balancer for the deployments, which currently costs $10.80 a month on Digital Ocean. This would offer several more options for deployment, but would also add costs and take more time to set up.

As we are using JWT, users don’t notice the switch of servers. There are issues with JWT, such as the difficulty of revoking tokens when we need to end a session, but that is not a problem in this case.

Recovery

One of the most interesting trade-offs in these systems is between preparing for every possible error and dealing with the issues that actually occur. Getting a site close to 100% uptime is incredibly expensive – for example, what if the provider suffers an outage? Should you be able to fall back to another cloud provider? Time and money might be better spent on getting things out there and exploring user responses.

I would suggest there are two important things to consider:

  • Given the time/money trade-off, what are acceptable SLAs for this simple site?
  • If the whole system were to be deleted, can the site be rebuilt in a sensible amount of time? How much data would be lost?

What’s missing

The above outlines what I would need for a minimum viable site deployment. There is some distance between this and what I would expect from a fully-featured production webapp. I will cover these in future posts:

  • A full continuous integration pipeline, including quality assurance tools such as Sonar and FindBugs.
  • A more nuanced git branching strategy.
  • Spreading the site across multiple hosts, increasing the robustness and allowing scaling.
  • Capacity and error logging/alerting. This needs to be persistent, and to immediately communicate serious problems. Ideally, load spikes can be responded to automatically.
  • Better database recovery planning.
  • Clearer deployment tracking, so we can identify which version of the application a bug report occurred against.

The other thing to consider is at what point the savings would justify moving across to AWS.

Conclusion

The changes between this type of application and a hobbyist version are minimal, but JHipster supports us in getting continuous integration running, which is a great help. The main change is tightening up the process, so that other people can become involved as easily as possible.

Deploying a hobby site with JHipster

This is the first in a series of posts looking at JHipster deployment. This post considers the most basic deployment. Later posts will look at more maintainable set-ups, including continuous integration and deployment pipelines.

For me, one of the most exciting things about JHipster is that it makes it feasible to build hobby websites on a full-stack Java/Javascript platform. Basic functionality can be added within a few hours, and pretty good user administration comes as standard.

But, with all web applications, getting from a feature-complete version to a live application can be a problem. There are several things to consider, such as reliability, monitoring and security. This post looks into the minimal configuration required for a hobby site. I’m planning to maintain it as a work-in-progress, so if you have any questions or suggestions, please leave a comment below.

What do we mean by a hobby website?

By ‘hobby website’, I mean something that is created and run for fun. Since it’s not intended to make money, the requirements for uptime and stability are lower than for a professional site, and we are probably taking a few shortcuts in terms of process. We want it to chug along by itself while we check in at weekends; we assume we will get pinged about errors, but we won’t respond to them immediately. And we probably want to spend more time working on code rather than infrastructure.

Indeed, I suspect most people considering a hobby site are not going to run full test suites. While proper testing is more efficient in the long run, and essential on large or collaborative projects, some people enjoy the immediacy of seat-of-the-pants development. Certainly, reputational risks are lower when doing a hobby website. I’m not saying that it’s correct to avoid setting up a proper process with continuous delivery – but for most people that’s not what they want from a project they work on in their spare time.

Platform

Unless you have a very generous employer, you’re going to need somewhere to host the application, and in the long term this will cost something. If you’re eligible for the free tier on AWS, or GCP’s free offerings, that’s a great help; otherwise you’re looking at a paid option. While a lot of companies offer virtual servers, two popular options are Amazon’s AWS and Digital Ocean.

While AWS is a powerful and sophisticated platform, it’s probably overkill for most hobby sites. Setting up an AWS account involves a number of subtleties, and the additional features can be expensive. So, while I would tend towards the power of AWS for a commercial application, I favour Digital Ocean for hobby sites and simple deployments. Digital Ocean’s virtual servers are about half the price of an Amazon server ($5 a month for the lowest level, $10 for something a little more powerful). Since we’re talking about sites that aren’t going to make money, reducing costs is probably good.

Getting a server up-and-running on Digital Ocean is straightforward, and they provide great step-by-step tutorials. The first step is the basic server set-up. Digital Ocean seem to be offering a managed database service, but it’s also easy to install MySQL and set up a new user.

Setting up email is best done through a third-party supplier. Mailgun’s SMTP service is easy to configure with JHipster, and offers up to 10,000 emails a month for free.
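As a sketch, the SMTP settings end up in application-prod.yml, something like the following. The host and port are Mailgun’s standard SMTP details, while the credentials here are placeholders:

spring:
  mail:
    host: smtp.mailgun.org
    port: 587
    username: postmaster@mg.example.com
    password: your-mailgun-smtp-password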

Running the app

There are various options for running an app on a server, and Spring Boot’s documentation has a section on deployment and installation of Spring Boot apps. The main takeaway from this is: do not run the Spring Boot application as root.

I’ve actually been quite lazy. Rather than setting up a service, I ran my application from the command line and backgrounded it. I wouldn’t do this on an app I was producing professionally – but I have a monitoring service on the app and it’s stable enough for my needs.
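For the record, ‘backgrounding it’ amounts to something like the sketch below. The user and paths are assumptions, and a proper systemd service would be the more respectable route:

# create an unprivileged user so the app doesn't run as root
sudo useradd --system --create-home myapp

# run the jar as that user, detached from the terminal
sudo -u myapp bash -c 'nohup java -jar /home/myapp/app.jar \
    --spring.profiles.active=prod > /home/myapp/app.log 2>&1 &'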

One other thing – the Spring Boot app should listen on port 8080, since we’re not going to have it communicating directly with the internet. Instead, we will use an Apache proxy to handle HTTPS.

HTTPS

HTTPS is a non-negotiable part of any web project these days. You can run a web application over HTTP but:

  • Browsers will give lots of warnings and this provides a horrific user experience (and quite rightly so!)
  • You’re risking users sending sensitive data in the clear. Many users have horrible password strategies, and reuse the same one for multiple sites. So, an error on your site risks exposing other accounts and causing catastrophic loss for that user. While you aren’t responsible for other people’s bad password management, eliminating unnecessary risks is good.

Spring Boot supports HTTPS, but it’s a bit of a hassle to set up multiple connectors so that an app can listen on both HTTP and HTTPS, with HTTP requests redirected to a secure connection. Given the efficiency of the EFF’s Certbot scripts, it’s easier to set up Apache as a reverse proxy. In addition, terminating the HTTPS connections with Apache makes sense, as Apache is better at managing HTTPS than Spring Boot.

Once again, Digital Ocean offers some useful tutorials on installing and setting up Apache and using Certbot to set up HTTPS. Certbot configures the Apache instance to redirect HTTP requests to HTTPS.

The basic set-up in the tutorials is for a static site, but it’s a simple matter to update this. First, from the command line, enable some additional Apache modules:

sudo a2enmod proxy
sudo a2enmod headers
sudo a2enmod proxy_http

Finally, remove the DocumentRoot directive from the SSL virtual host configuration and add the lines to proxy requests through to port 8080:

ProxyPreserveHost on
RequestHeader set X-Forwarded-Proto https
RequestHeader set X-Forwarded-Port 443
ProxyPass / http://127.0.0.1:8080/
ProxyPassReverse / http://127.0.0.1:8080/

Preventing Terrible Things

Whenever anything is put live on the Internet, thought needs to be given to the risks and costs involved. It’s impossible to be 100% safe, but obvious problems need to be prevented. We don’t want our hobby website to ever give us a horrible sinking feeling that something has gone very, very wrong. So it’s worth thinking about the things that could go wrong with any application exposed to the internet:

  • Running up massive bills with a hosting provider through our account with them being compromised.
  • Leaking user data (which has potential legal ramifications)
  • The host being compromised, allowing it to be used for nefarious activities
  • Losing user data (which might annoy people, causing them to annoy us)

The measures discussed in this post should mitigate most of these risks, but you should take time to consider things such as data and password security to make sure you’re comfortable with the application you’re running.

One thing to be aware of is that all dependencies should be upgraded regularly to make sure the latest versions are used. JHipster provides an upgrade sub-generator for this.
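The upgrade itself is one command, run from the project root; it regenerates the application on a dedicated git branch and merges the result back in:

jhipster upgrade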

This is a good point to remind you that care must be taken with the secrets stored in the JHipster application’s application-prod.yml file. Do not carelessly commit these secrets to git and then push them to a public repo. It sounds obvious, but JHipster automatically adds this file to the repository.
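One way to reduce the risk, sketched below, is to keep placeholder values in the committed file and supply the real secrets at startup via environment variables, relying on Spring Boot’s relaxed binding. The variable names here are illustrative – check your application-prod.yml for the exact keys:

# real values live only on the server, not in git
export SPRING_MAIL_PASSWORD='the-real-smtp-password'
export JHIPSTER_SECURITY_AUTHENTICATION_JWT_SECRET='the-real-jwt-secret'
java -jar myapp.jar --spring.profiles.active=prod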

Database Backups

Any application which creates and maintains data needs some sort of backup. A manual solution is going to be more hassle than it’s worth, as is a full up-to-the-second solution such as would be used on a professional site.

One option is to pay a little extra to use the new managed (Postgres) database services on Digital Ocean, and let them handle backups. I’ve gone for a quick-and-dirty daily backup using automysqlbackup, with the resulting files rsynced automatically to another server. It means a catastrophic failure could wipe out up to 24 hours of data, but the solution was something I could set up in 15 minutes, and I am OK with the risk.
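The whole arrangement is a package install plus a cron entry. The destination server and paths below are assumptions, with /var/lib/automysqlbackup being the package’s default output directory on Ubuntu:

# installs a daily cron job that dumps each database
sudo apt-get install automysqlbackup

# root's crontab - copy the dumps to a second server each morning
30 5 * * * rsync -a /var/lib/automysqlbackup/ backup@othership.example.com:/backups/mysql/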

Monitoring

Given the simplicity of the app I’m working with, I’ve not bothered producing persistent monitoring of the application beyond the logs on the server. If, for some reason, the Digital Ocean server vanishes, these will be gone. I’m OK with that.

However, I do want to know if, for any reason, the Java application server falls over. I’m using Uptime Robot’s free offering to email me if the app’s homepage stops responding. This is enough for the first version of my application.

Thank you to Laurence Barry and Alex Tawse for their feedback on a draft of this post.

Setting up an online radio station

Recently, I was set an interesting challenge: set up an Internet radio station. I knew very little about this but was relieved to find out it was easier than expected.

Having reviewed the different options, the best one seemed to be icecast2, which actually backs a lot of commercial services. Getting the basics up and running was relatively straightforward, with a couple of gotchas.

Virtual Server

I used a Digital Ocean server to host the radio station. These are reasonably-priced servers, with 1TB of transfer at the basic level. The physical hypervisors have about 1Gbps of bandwidth. In a 4-year-old forum post, it was suggested that “You should expect and plan on 300Mbps of available bandwidth (up and down) in order to plan your deployment and get the most out of your services.”

This covers a decent number of users, and hitting this limit would be a nice problem to have. A potential solution in that case would be finding a way to relay the radio through a more powerful Icecast server. As far as bandwidth use went, looking at someone else’s calculations suggested that this would not be a problem in my case. So, I set up a basic Digital Ocean server to see how it would cope.

The server I went for was a little beefier than the basic model, with 2 CPUs and 3TB of transfer – far more than was needed. Within a few minutes I had a server set up.

Icecast

Installing Icecast on Ubuntu is easy using the standard repositories. During testing we had a number of issues that seemed to be caused by not using HTTPS. In the end, it turned out that these were actually caused by using Ogg rather than MP3 for the streams. This was a good thing to discover, as getting HTTPS support working was quite a chore.

I found a good guide to setting up Icecast and Liquidsoap on the Linux Journal site: Creating an Internet Radio Station with Icecast and Liquidsoap, and it’s worth following this closely.

However, the repository versions of Icecast do not include SSL support. Adding it requires building Icecast from source, using the SSL option. Unfortunately, installing from source doesn’t do a great job of setting up the software locations and services.

I ended up settling on a slightly weird way of installing Icecast: using apt-get to do the configuration, then compiling and installing the binary from source.

I was fortunate to find some helpful documentation for compiling Icecast with SSL. However, this needed a little tinkering, mainly due to dependencies. The commands I needed were:

sudo apt-get update
sudo apt install git gcc build-essential libcurl4-openssl-dev libxslt1-dev libxml2-dev libogg-dev libvorbis-dev libflac-dev libtheora-dev libssl-dev autoconf libtool pkgconf
mkdir src
cd src/
git clone --recursive https://git.xiph.org/icecast-server.git
cd icecast-server; ./autogen.sh
./configure --with-curl --with-openssl
make install
mkdir /etc/icecast2
mkdir /var/log/icecast2

I then copied the pre-prepared Icecast config file into /etc/icecast2, along with the pre-prepared SSL certificate file. Then there were some last bits of set-up: I had to explicitly enable the Icecast daemon in /etc/default/icecast2 to declare I had changed the passwords (done via copying across the config).

chown icecast2:icecast /etc/icecast2/icecast.*
systemctl enable icecast2
systemctl start icecast2

At this point, the Icecast server was visible on port 8000. I had to do a little work to ensure that the correct XSL files were being pointed to, but it was fairly obvious (from a 500 error) that this needed changing.
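For anyone hitting the same 500 error: the relevant settings are in the <paths> section of icecast.xml, which after a source install will typically need to point at the /usr/local tree (the exact locations depend on your ./configure prefix):

<paths>
    <webroot>/usr/local/share/icecast/web</webroot>
    <adminroot>/usr/local/share/icecast/admin</adminroot>
</paths>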

Liquidsoap

While Icecast handles the streaming, a second application is needed to generate those streams, and this is the role of Liquidsoap. Again, the basics were provided by some helpful documentation online, this time from the Linux Journal guide mentioned above.

adduser liq
gpasswd -a liq sudo
apt-get install opam
su - liq
opam init
# say yes to changing profile
eval $(opam config env)
opam install depext
opam depext taglib mad lame vorbis cry ssl samplerate magic opus liquidsoap
opam install taglib mad lame vorbis cry ssl samplerate magic opus liquidsoap
sudo mkdir /var/log/liquidsoap
sudo chown liq:liq /var/log/liquidsoap/
mkdir /home/liq/archive
mkdir /home/liq/playlist

I then copied across the Liquidsoap config file and tested it. Liquidsoap complained about a fallible source, as there was not yet a fallback file in place: the script I’m using has a test.mp3 file defined in the configuration that can be used when all other sources fail.
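A minimal sketch of such a script, with the host, password, mount point and paths as placeholder assumptions – the fallback operator drops to the emergency file whenever the playlist fails:

# a playlist source with an emergency fallback file
radio = fallback(track_sensitive=false,
                 [playlist("/home/liq/playlist/playlist.m3u"),
                  single("/home/liq/test.mp3")])

# stream the result to the local Icecast server as MP3
output.icecast(%mp3, host="localhost", port=8000,
               password="hackme", mount="stream", radio)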

Other tasks

I needed to copy some other people’s SSH keys to the server to allow them to work on it. Then I needed to set up Apache and configure the firewall. Digital Ocean provide instructions for this, which are comprehensive and include firewall management. Once basic Apache was set up, I created the new vhost for the station, transferred the pre-prepared configuration files, and tested that the configuration was successful.

Pointing the DNS to the new server is simple enough, but there are also SSL certificates needed for both Apache and Icecast. The simplest way to get these is via Certbot:

sudo add-apt-repository universe
sudo apt-get update
sudo apt-get install certbot python-certbot-apache
certbot --apache

There is also a certificate needed for Icecast, as some players are unhappy with unencrypted streams. This can be created by concatenating the Let’s Encrypt certificate and key:

cat /etc/letsencrypt/live/pilgrimradio.info/fullchain.pem /etc/letsencrypt/live/pilgrimradio.info/privkey.pem > icecast.pem
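The combined file is then referenced from icecast.xml, with an SSL-enabled listen socket alongside the plain one. The port number here is a matter of choice:

<paths>
    <ssl-certificate>/etc/icecast2/icecast.pem</ssl-certificate>
</paths>
<listen-socket>
    <port>8443</port>
    <ssl>1</ssl>
</listen-socket>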

Other things

I used Uptime Robot to set up a basic monitoring service for the site. Putting together the appropriate Liquidsoap script took some work, but the main site had some great examples.

We needed a website to send people to, with links to the streams and audio players. This was easy enough to do, with the Icecast admin screen providing example HTML for linking to the streams and embedding audio players.

I could write another whole post about scheduling and organising content, but that is for another day. The above is a rundown of some of the issues I faced. Every set-up is going to run into its own problems, but I’m happy to try answering any questions left in the comments.

The TDD Lie

The requirements for pretty much every developer job I see these days include Test-Driven Development. Which is exciting – everyone recognises the importance of unit tests.

In my previous post I talked about how often companies claim to do agile but don’t succeed in practice. And I see the same thing happening with test-driven development.

It’s easy to tell which companies want to use TDD but don’t in practice: ask what percentage coverage they have for their code.

In its purest sense, TDD demands that tests are written before any new code. If you’re coding in this way (as suggested in Kent Beck’s Test Driven Development book), your coverage should be close to 100%. Of course, there are other ways one might approach TDD, such as writing business tests before the code. This is definitely better than adding a few tests at the end of a project.

But, if you’re tracking the coverage and have chosen a figure less than 100%, then a choice is being made about which code is not covered. I’ve never seen a standard coverage threshold work out well. The obvious indication of this is how often the coverage of projects sits just over the required threshold.

I’m a great believer in high coverage (while acknowledging its limitations). And if you don’t grow your code with the tests, you tend to end up with code that is difficult to add tests to afterwards.

If you’re listing TDD in job requirements and not practising it, then it’s worth asking why not. A lot of the time it’s a fundamental limitation of the company: we know we should have tests, but we’re rushing. It’s the same problem that many companies have with agile – these things are obviously a good idea, but difficult to do in practice.

As Bob Martin pointed out: “You know what you believe by observing yourself in a crisis. If in a crisis you follow your disciplines, then you truly believe in those disciplines. On the other hand, if you change your behaviour in a crisis, then you don’t truly believe in your normal behaviour” (from chapter 11 of The Clean Coder).

If you know you should be using TDD but aren’t, then you need to ask why not. And, if you can’t fix that, stop lying that you’re doing TDD – admit what you’re actually doing and focus on improving that.

“We’re agile, but…” talk notes

I spoke at Brighton Lean Agile last week, stepping in at short notice to cover for a friend. The event was great fun and I enjoyed the discussions I had with everyone. Rather than share the slides, I thought I’d do a quick blog post, allowing me to add some context to the bullet points.

Photo by @timrabjohns

Introduction

The talk discussed how agile doesn’t always work, and what can be done about it.

Overview

As Churchill put it: “it has been said that agile is the worst form of project methodology except for all those other forms that have been tried from time to time.”

While I might sound cynical about agile in what follows, I still believe in it. I think that the principles it promotes, such as regular reviews, tailoring methods to the team etc. are incredibly important. At the same time, a good waterfall project beats a bad agile one.

Agile is:

  • Simple
  • Flexible
  • Popular

While agile is detailed and subtle, there are core principles that are easily applied. It is flexible – there are accounts of it being applied to many industries and sizes of team. And it is popular – a large number of people have experience of it.

However, most companies face problems with the implementation, even with fundamental aspects. Every IT job ad I see these days has agile as a requirement, but listing something in a set of job requirements is not the same as understanding or implementing it.

Issues with the daily meeting

I believe in the principle that ‘the way you do anything is the way you do everything’. I’ve seen a lot of issues with the daily meeting, and feel that many wider issues with agile could be alleviated by getting this right. The stand-up is particularly useful for experimentation since it happens daily, and only lasts 15 minutes (in theory!). Note how ‘briefing scenes’ turn up in movies and TV shows a lot – this meeting should also be compelling, informative and entertaining. Problems seen include:

  • The stand-up is boring
  • Too many people in the stand-up
  • The stand-up is not relevant to the entire team
  • The stand-up is at the wrong time
  • The stand-up shifts with a manager’s schedule
  • The stand-up becomes a reporting tool
  • Some developers think agile is silly

Some things I’ve tried

  • Balancing co-ordination vs social vs reporting
  • Adding initial reports to slack (particularly useful across timezones)
  • Changing who runs the standup (making team members feel more invested in agile)
  • Enforcing promptness
  • Tracking who is talking and who isn’t
  • Asking what we gain from it
  • Being high energy and entertaining

Most of the points above are self-explanatory. I think it’s important to look at what an organisation needs from the daily meeting, and to focus on providing that without distraction or fluff (for example, if you do need people to report to management, find a more effective way to do that). The energy is also important – attendees need to think about their contribution and how to make it effective.

Estimation Problems

  • The estimates are already set (by a team member or management) and planning is a sham
  • Low expectations – developers aren’t pushed
  • No theme to the sprint
  • Team members don’t understand some stories (specialisations)
  • Nobody cares about the estimates – the code is produced late without any investigation
  • Translation is made between story points and time (worse when compared between teams, or when the translation being used is not explicit)
  • Estimates aren’t believed by management
  • Pressure is applied to reduce estimates: “But how long if you try harder…”
  • Estimates are given as self-protection – padding or straight-out refusal to play
  • Punishing bad news
  • The team are told how, not what and why, meaning they estimate that method rather than considering wider goals
  • Boring

In the discussion we talked a little about how teams refusing to give estimates is usually a symptom of a larger problem. Another issue is estimates given as negotiation rather than best-effort prediction (i.e. padding for safety). Punishment for failing to hit previous estimates often leads to this sort of padding.

I concluded this section by saying that estimation sessions were something I often wished to cancel, possibly in favour of the most informed or influential person doing the estimates themselves. One participant said they had abandoned estimates in favour of sizing stories in a predictable manner.

One important thing to consider is whether the information in the estimates will change anything. If not (for example, schedule and scope are already set) then there may be little point spending time working out how long something will actually take.

When agile doesn’t fit the business

  • External factors (deadlines, releases, tradition) restrict agile
  • Product owner issues:
    • No time to do the role
    • Don’t want to do the role (the product owner is the most senior person who can’t avoid doing it)
    • Product owner has no power (proxy product owners)
  • Too much long-term planning – looking months ahead rather than supporting the team’s current work
  • Committing to futures that might not happen
  • Disconnect between stories and business
  • No idea how much a story costs to produce
  • Team composition shifting so you can’t learn anything
  • Technical debt is ever-growing
  • Multiple developer commitments outside project
  • Agile not adopted/understood/respected above the team level
  • The project is scheduled with no slack for developers being called away by emergencies.
  • Planning based on what people would like to happen, rather than taking note of what will happen
    • An example of this is ‘schedule compression’ – the deadline doesn’t move, but everyone resolves to try harder and be more efficient.
  • Not warning people about coming problems in the hope that someone else announces a serious issue first

Why aren’t Things Better?

I made two suggestions – the first was a reference to Sturgeon’s Law, which claims that “90% of everything is crap”. Maybe agile is difficult, everyone is doing their best, and we just have to be happy that things are working better than they would without agile.

My other suggestion was looking at cargo cults. ‘Cargo cult’ is a pejorative term in agile, but this usage betrays a lack of sympathy for, and understanding of, the actual cargo cults.

Cargo cults were seen on a number of Melanesian islands, particularly after World War 2, where the residents built planes and airstrips from local materials, supposedly in the hope of cargo planes landing on the island. It’s given as an example of people imitating something without understanding the principles.

However, the idea that cargo cults are foolish is a racist and limited one. Indeed, the ideas that they are about the arrival of cargo and that people actually expect planes to land are probably misapprehensions. For those involved, there are other reasons for setting out these airstrips. Western anthropologists have potentially had a blind-spot here, their own obsession with material goods causing them to miss these deeper motives.

In fact, these cargo cults are a success, if you understand why they have been set up. They are even effective on the very terms for which they have been mocked, since the cults have brought anthropologists and TV documentary crews to the islands in question. But to see why they succeed, you need to have sympathy and empathy with why cargo cults exist.

Sabotage

A lot of agile implementations seem to feature a sort of sabotage, where people involved do things that are directly against their best interests.

I closed by suggesting that, when an organisation is doing things that are obviously wrong and harmful to agile, there is often a deeper, more compelling system in place. This then formed the basis of a final discussion section.

How to Build a Webapp in Java

tldr; building a webapp is complicated – particularly if you’re doing it right. JHipster offers a promising solution.

I was a little shocked recently when I realised that I didn’t know how to make a web-application.

It’s not like I’m inexperienced. I’ve been working on the web since 2000; but my job involves focussing on specific areas. Even when helping with strategy, I have very little input into the front-end. Being responsible for an application going live and doing it all myself are very different things.

Part of the problem is how intimidating front end development has become, with flurries of Javascript frameworks, UI libraries and strange pre-compilers. Angular has two versions that are so different as to be incompatible, but with similar names. Tutorials date quickly and beginners can’t tell if they are current. There is a huge amount of knowledge needed to build a web application.

Now, obviously, as the project website says, Spring Boot “makes it easy to create stand-alone, production-grade Spring based applications that you can ‘just run’”. My initial impulse was to cobble something together from Spring and Thymeleaf, which would allow me to get something live. But this is an unsatisfying solution. Server-generated template pages don’t feel particularly powerful, even with a good MVC framework. They have several problems:

  1. They tend to have code for the view tangled up with the application, with no clear boundary
  2. Coupling the templates to the server-side code misses the separation of architectures like REST. Adding, say, a mobile client would involve a lot of hassle to get the two different front ends working.
  3. More modern architectures are available for front-end. Using REST and angular allows rapid development of flexible front ends, less coupled to the backend architecture.

Another question that arose was user management and security. This is one of those things that every site needs, but it tends to be tedious to code (as well as inviting errors). Any decent user management system needs account creation, password resetting, different roles and so on. For most projects, these are going to be the same. My basic assumption was that Spring Security would be useful, but that is really a framework for creating a security framework, and there are no obvious mature implementations. Setting up email account confirmation is a drag.

I weighed up a few options for authentication, particularly since I wanted one that supported a stateless web-server. After speaking to a friend, JWT seemed to be the way to go, but this still seemed like a lot of work. And I’d not even started grappling with the whole front-end stack.

I started working in a new office recently and a colleague suggested JHipster as a solution to my problems. I’d heard about the technology via InfoQ but hadn’t paid it a lot of attention. Part of this was the name, which sounded insincere and jokey; and my cursory look suggested it was ‘just’ Angular with Spring Boot. I’d missed something very interesting.

According to the JHipster mini-book, JHipster allows Java devs to “be hip again”. The knowing irony of the framework’s name is irritating, but this idea of being ‘hip’ is good. Web development has changed a great deal, and JHipster provides a simple way to get started with up-to-date technologies, combining bootstrap, angular (or react) and Spring boot. The project started back in October 2013, with the first release in December of that year.

JHipster is similar to Ruby on Rails, providing rapid development of a stack based upon entity definitions. The ‘Hipster’ part of the name refers to the need to keep up with modern tools. JHipster produces the plumbing and boilerplate for a whole stack, including a variety of best-of-breed tools. It also adds a basic user-management system for free. JHipster prepares multiple files for each entity:

  • Database tables
  • Liquibase changes
  • JPA entity definitions
  • Spring data repository
  • Spring rest controller
  • Angular route, controller and service
  • HTML page

The best way to get a flavour of how JHipster works is to look at Matt Raible’s 16-minute demo of building a blog application, Get Started with JHipster 5.

Full-stack development is hard work, even with JHipster – you need to be familiar with both Spring and Angular. But it does put you closer than ever before to putting a full-stack application live using best-of-breed technology.

Speaking at Brighton Lean Agile on 9th Jan 2019

Due to a last-minute drop-out, I’m speaking at Brighton Lean Agile on January 9th. The talk is called “We’re agile, but…” and looks at some of the ways I’ve seen agile go astray in implementations.

Despite the simplicity and popularity of Agile, it often goes wrong in practice. Drawing on 15 years’ experience of Agile as developer and PM, James Burt will lead a discussion on why agile sometimes fails, and what can be done about it.

The session is currently full but there is a waiting list, as people sometimes drop out as the event approaches. Refreshments are provided by the sponsor, Bright Interactive.

Book review: Beyond Java by Bruce A Tate

Tldr; things looked bleak for Java in the mid-noughties, but the language and eco-system have undergone massive changes, allowing Java to thrive and survive.

My new job has a daily commute, which means time to read. I’m focussing on tech books and recently picked up Bruce A Tate’s Beyond Java. I originally read this in July 2006, the year after it was published. The book described why Java was doomed, with Tate saying in the introduction, “Java’s reign will end. It’s not a question of if but when.” (pviii). While the predictions have turned out to be wrong, it is interesting to re-read the book and see how Java escaped its fate.

Revisiting the mid-noughties was fascinating in itself: a time when AJAX was just emerging, Web 2.0 was the next big thing and folksonomies would change everything. It was an environment that caused obvious problems for Java. Its flaky text-processing was just no good for large XML files, and Java was forced to deal with a lot of XML. In addition, Java tended to be used for the middleware on top of databases, which it was not particularly suited for (p52).

Even with early versions of Spring and Hibernate, making Java applications was a chore, with overwhelming amounts of boilerplate and XML config. Even with ORMs, Java required you to work with both entities and database code, and to keep them consistent.

Tate talked about this difficulty in getting software running, and all the tools that were needed. Compared with PHP, Java felt slow to develop in, and its advantages weren’t sufficient compensation. It did seem that Java was on the ropes; I remember dropping out of Java to do simple text tasks. Ruby on Rails and similar frameworks promised the chance to get applications running quicker.

The book is a pessimistic one. Tate saw the problems as unsolvable and, given Java’s history and trajectory, this was a fair assumption to make. Tate looked at Java critically and suggested other platforms developers should investigate. Reading the book with hindsight, one can see Tate’s predictions have failed, but the discussion is still interesting.

In the 12-or-so years since Tate was writing, Java has managed a comeback. Decent dependency management from maven and gradle has eradicated the nightmare of setting classpaths. The language has been made more expressive with new features, such as the functional constructs in Java 8. And the growing fashion for opinionated configuration has also made things easier – getting Spring Boot running is incredibly easy. The culture of the language has changed for the better too. In the past, checked exceptions proliferated everywhere, and now people tend more towards throwing runtime exceptions and handling everything at a single sensible point.

One particularly strong improvement is the Spring framework. At the time Tate was writing, Spring was a mess of XML configuration and cryptic errors. With annotations, it has become easy to use and unobtrusive. New platforms such as Spring Boot allow developers to release production-ready features quickly. At the same time, Java/Spring offers a mature platform, where code can be taken to production and supported at massive scale. Ruby on Rails’ promises have faltered, with Twitter famously moving to Java because of scaling issues. The XML problem has disappeared as JSON takes over as a less robust but easier-to-use standard. I don’t miss the days of choosing between SAX and DOM parsers for XML.

Around the time Tate was writing, a debate raged about variable typing: strong vs weak, static vs dynamic. Java’s static, strong typing felt too rigid for many people. This debate seems to have reached an interesting conclusion: Javascript has been reined in by TypeScript, while Java 10 has introduced the var keyword.

(I’ve always seen the inflexibility of Java as a positive. While it’s less expressive than languages such as Python or, um, Perl, the code it produces tends to look very similar. It’s a good language for non-rock-stars – who massively outnumber the sort of people capable of producing cryptic code given the slightest chance.)

At the time of writing, Tate saw Java’s fate as connected with that of Sun, wondering what might happen if Sun disappeared and who might take over Java. The resulting situation, with Oracle running Java (and MySQL!), was hard to predict but has worked out well – even with the occasional fake-news scare that Oracle would start charging for Java.

Tate discussed a number of contenders for Java’s throne. Perl has lost popularity over the last decade, with Python offering a more structured (and readable) alternative. Perl, like PHP, was a great tool for its time but ultimately a ladder to better things. The book makes a quick mention of functional languages: Clojure seems to have come and gone, but Scala and Kotlin (first released in 2011) seem to be gaining traction. Ruby itself is much less popular. Java seems to have been saved by the flexibility of the JVM, which offers more powerful options for those that need them.

For me, the most interesting discussion was around state in web servers, and the problems it brings. Tate talks about continuation servers as an upcoming opportunity. This is another debate that has been settled, with REST winning out over stateful alternatives. I worked with Seam for a time – it was an interesting idea, but so alien to the way most developers worked that its subtleties tended to be fought with rather than made use of.

One problem that Tate raised which is still present is the approachability of Java. Even in the innocent days of 2005, the simplest useful Java application required use of a large number of tools. This issue is present in a lot of fields, including front-end dev work, where increasing sophistication has made things much more complicated. There are solutions to this, with JHipster being a promising one, but these work to hide the complexity. Just because the tools are set up for you doesn’t mean they aren’t there and can’t go wrong in strange and unpredictable ways.

EDIT: Tom Hume pointed out that the discussion above doesn’t include Android, which was originally released in 2008 and has become hugely popular, with over 2 billion users. As someone who focuses on backend dev, I didn’t engage with this, but Android has contributed significantly to Java’s survival. Having said that, Kotlin is becoming more popular in Android, threatening Java’s dominance on that platform. Interestingly, Kotlin is failing to see significant uptake in backend dev, despite recently becoming compatible with Spring.

If I had more time, I’d also look at the JVM platform as distinct from Java. While Groovy and Clojure have faltered, Scala (launched 2004) and Kotlin (launched 2011) are both popular JVM languages.

Openlayers and Angular

I recently set up a simple Openlayers page, which proved to be an interesting experience. As mentioned in that post, one problem I had was working with so many unfamiliar technologies. It’s a long time since my first tinkerings with Javascript, and I’ve paid little attention to it. Since then, front-end development has exploded into a variety of frameworks and libraries that interact in strange and spooky ways. Node! Typescript! Less! Angular! Jquery!

In the long term, I want to get back into full-stack development; but in the short term, I just want to put a mapping site live. I’ve settled on JHipster as a platform, since it makes managing the database and middle tier incredibly easy, as well as setting up a lot of the Angular infrastructure. The problem is that almost all the Openlayers examples I can find are written with JQuery.

I found a few clues on what to do, such as a Stackoverflow post on How do I dynamically add popups to markers using OpenLayers 5 in Angular 4?, but my meagre experience of Angular meant I couldn’t get the Popper library this used to work. But the answer gave me enough of a clue to get the syntax and code set up for a very basic implementation of Openlayers popups within Angular. The work is done within the controller, and it builds and executes correctly.

It’s far from perfect – because the component is embedded within another page, the pop-up doesn’t appear on the feature; rather, it is placed a short distance above. There is also a problem with the click detection being flaky – the window has to be resized for this to work reliably.

But the good thing is, I now have a proof of concept for Openlayers working within JHipster. As scrappy as it is, I do have the two working together now. It’s going to be ugly, but I have enough to build a basic application with.

First steps with Openlayers

tldr; first steps in setting up an Openlayers map with markers loaded from a GeoJSON file

I spent a chunk of this weekend wrestling with writing software. Which was a good way to relax after a late night dancing at Is that all there is to a Disco? Saturday was a slow day: writing code, watching Escape Plan 2 with Rosy Carrick (not a good film) and getting to grips with Openlayers.

Most of the time I learn about programming through toying with examples. While this is easy in a subject I know well, it can prove more difficult in new areas. Working with Javascript libraries can be particularly difficult since it’s hard to tell which versions particular examples are for – with Openlayers there are several mutually incompatible versions with very different APIs. They say you should only do one new thing at a time, and learning Openlayers was held back by my poor Javascript skills. I also needed to get to grips with the particular way the mapping geometries worked.

But I managed to beat it all into submission and produce a working demo. I wanted to load some data from GeoJSON and display location markers on an Openstreetmap map. It’s not very much, but it is something I can later add to a larger project.

Introduction

According to Wikipedia, Openlayers is “an open source JavaScript library for displaying map data in web browsers as slippy maps”. It supports multiple map vendors and different data formats. The main project site says that version 5.2 is the latest, and has links to tutorials, examples and API documentation.

Basic Example

The quickstart page shows how to set up a simple Openlayers map. CSS and HTML are used to create a page-wide div that can hold the map. The Javascript to create and display the map is simple:

var map = new ol.Map({
  target: 'map',
  layers: [
    new ol.layer.Tile({
      source: new ol.source.OSM()
    })
  ],
  view: new ol.View({
    center: ol.proj.fromLonLat([37.41, 8.82]),
    zoom: 4
  })
});

This creates a map object and renders it into the ‘map’ div. A single layer is set up, with OSM (OpenStreetMap) as the source. The view allows the map’s centre, resolution and rotation to be set. Note that zoom level 0 is fully zoomed out.

As the tutorial explains, a layer is a visual representation of data from a source. OpenLayers has four basic types of layers:

  • Tile – Renders sources that provide tiled images in grids that are organized by zoom levels for specific resolutions.
  • Image – Renders sources that provide map images at arbitrary extents and resolutions.
  • Vector – Renders vector data client-side.
  • VectorTile – Renders data that is provided as vector tiles.

The centre of the view is set using a conversion from longitude and latitude. This is one of the confusions with OpenStreetMap: the map uses a co-ordinate system called the Spherical Mercator projection (known as EPSG:3857), while the usual longitude/latitude co-ordinates are held in a projection called EPSG:4326.

Adding Markers

The next stage was finding a way to add markers to the map. A lot of the examples for this were using different versions of Openlayers. The one that I ended up adapting was a recent tutorial on mediarealm.

This piece of code gave me enough of a clue to be able to set up a layer with a marker:

function add_map_point(lat, lng) {
  var vectorLayer = new ol.layer.Vector({
    source: new ol.source.Vector({
      features: [new ol.Feature({
        geometry: new ol.geom.Point(ol.proj.transform([parseFloat(lng), parseFloat(lat)], 'EPSG:4326', 'EPSG:3857'))
      })]
    }),
    style: new ol.style.Style({
      image: new ol.style.Icon({
        anchor: [0.5, 0.5],
        anchorXUnits: "fraction",
        anchorYUnits: "fraction",
        src: "https://upload.wikimedia.org/wikipedia/commons/e/ec/RedDot.svg"
      })
    })
  });
  map.addLayer(vectorLayer);
}

We create a new vector layer whose source contains all of the features. This can then display each marker as a small red dot.

GeoJSON

Ultimately, I wanted to use some GeoJSON to provide a set of features. This proved to be tricky, as I had features in longitude/latitude co-ordinates, and couldn’t see how to have the import take that into account. Fortunately, it’s easy to iterate over objects in Javascript, so I simply looped over the GeoJSON, making the conversion one object at a time before adding each feature to the layer.

var featureArray = [];
for (var feature of placedata) {
  featureArray.push(new ol.Feature({
    geometry: new ol.geom.Point(ol.proj.transform([parseFloat(feature.longitude), parseFloat(feature.latitude)], 'EPSG:4326', 'EPSG:3857')),
    name: feature.name,
    description: feature.description
  }));
}

Once loaded, this feature array can be dropped into a vector layer and added to the map. The source of the placedata array can be changed – I started out with a hardcoded version, then moved to one from a REST endpoint. I created a quick REST server using the Spring Boot REST example as a basis, changing it to return place data.

Loading the data from GeoJSON turned out to be the fiddliest part of this because of the conversions between different projections.

Pop-ups

The last thing I wanted to do was add pop-ups to the markers. There is an example of this in the Openlayers workshop. The way it works is simple – an onClick event for the map is added, which checks whether there is a feature at the point in question. There’s a lot of code to the example, but it was easy enough to amend and get working.

Summary

It took a little longer than planned, but I managed to set up a simple HTML page that displayed OpenStreetMap tiles, with markers provided by a REST endpoint.