
Refactoring and microservices

In recent cloud projects, I keep seeing the same Spring application anti-pattern. There are controllers for a number of REST endpoints, and each REST endpoint calls a separate class which carries out the business logic for that action. The problem is that such classes can easily grow to a thousand lines or more – the anti-pattern sometimes referred to as ‘god classes’ – and I’ve often seen single methods over a hundred lines long. Code is sometimes extracted to private methods within these classes, which can obscure the fact that there is a single execution flow hundreds of lines long. Adding unit tests means writing long, repetitive tests with complicated set-up to provide coverage for branches deep within these classes. These complicated tests then make it difficult to refactor the code.

This problem comes from applying sensible principles in the wrong way. We have the Controller logic separate from the Business logic, and the Model managed by Spring Data classes. It’s a rough MVC pattern – and Spring makes this separation very easy. The problem is that the Controller logic is usually trivial, just an annotation that might as well have been put on the Service class. It’s this Service class that really needs to be split into smaller classes.
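
To make the shape of the problem concrete, here’s a hypothetical sketch – OrderController, OrderService and the request/result types are invented names, not code from any of these projects. The controller is a single annotation’s worth of logic, while everything interesting accumulates in one class:

```java
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;

// The controller adds nothing beyond routing: the annotations might as
// well sit on the service itself.
@RestController
public class OrderController {

    private final OrderService orderService;

    public OrderController(OrderService orderService) {
        this.orderService = orderService;
    }

    @PostMapping("/orders")
    public OrderResult submit(@RequestBody OrderRequest request) {
        // All validation, calculation and persistence happens inside this
        // one call - the class behind it is where the thousand lines live.
        return orderService.submit(request);
    }
}
```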

One of the promises of microservices is that they should be nimble, something that can be quickly built and replaced. But such large classes produce microservices which are, basically, tiny monoliths. The complex tests act as a drag on refactoring, making the services little tangles of legacy code.

The Single Responsibility Principle is the sort of thing that comes up in interviews as one of the SOLID Principles, and I’ve never heard anyone argue that it’s a bad thing. Which makes it all the stranger that it does not seem to be applied in practice. Everyone seems to agree that god classes are a bad thing, yet they keep being written.

One answer here, which I’ve proposed before, is to use TDD properly. This is the ideal way to solve the problem, preventing it from happening by applying best practice. In his recent book, Modern Software Engineering, Dave Farley suggests that proper use of TDD avoids this sort of coupled code:

The strongest argument against TDD that I sometimes hear is that it compromises the quality of design and limits our ability to change code, because the tests are coupled to the code. I have simply never seen this in a codebase created with “test-first TDD.” It is common—I’d say inevitable—as a result of “test-after unit testing,” though. So my suspicion is that when people say “TDD doesn’t work,” what they really mean is that they haven’t really tried TDD, and while I am sure that this is probably not true in all cases, I am equally certain that it is true in the majority and so a good approximation for truth.

The other potential solution is to enforce good class design with method size limits in quality-checking tools such as SonarQube. This restricts developer autonomy in an unpleasant manner, although it is better than the alternative of unmaintainable code. Farley suggests using tools to reject any method of more than a certain number of lines or parameters. He writes:

I will establish a check in the continuous delivery deployment pipeline, in the “commit stage,” that does exactly this kind of test and rejects any commit that contains a method longer than 20 or 30 lines of code. I also reject method signatures with more than five or six parameters. These are arbitrary values, based on my experience and preferences with the teams that I have worked on.

There are actually good arguments for this: as Farley points out, “Most optimizers in compilers simply give up trying once the cyclomatic complexity of a block of code exceeds some threshold”. But the most important thing here is that such limits force people out of writing procedural, linear code for business actions, and into decomposing those actions into single-responsibility classes. There are ways to write poor code within these constraints, but it’s not so easy to do.
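
Returning to the invented example above, this is roughly what those limits push you towards – the god service reduced to coordinating small collaborators (OrderValidator, PricingCalculator and the rest are illustrative names), each of which can be unit-tested in isolation:

```java
import org.springframework.stereotype.Service;

// Each collaborator owns one responsibility, so no method here - or in
// the classes it delegates to - needs to approach the 20-30 line limit.
@Service
public class OrderService {

    private final OrderValidator validator;
    private final PricingCalculator pricing;
    private final OrderRepository repository;

    public OrderService(OrderValidator validator,
                        PricingCalculator pricing,
                        OrderRepository repository) {
        this.validator = validator;
        this.pricing = pricing;
        this.repository = repository;
    }

    public OrderResult submit(OrderRequest request) {
        validator.validate(request);
        Price price = pricing.priceFor(request);
        Order order = Order.from(request, price);
        return OrderResult.of(repository.save(order));
    }
}
```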


The complexity of a simple Java project

I recently built a small prototype using Spring Boot. It wasn’t much of a site, just two forms and two reports. As it was only being demonstrated locally, there was no need to worry about deployment and hosting. What’s fascinating is how many different technologies were used in such a simple piece of development:

  • Java 8
  • Git
  • Gradle
  • Spring Boot
  • Spring Security
  • Spring Data
  • Flyway
  • JPA
  • MySQL
  • HTML
  • Thymeleaf
  • Angular
  • CSS
  • Bootstrap
  • REST
  • Webjars
  • JUnit
  • Mockito
  • JaCoCo

Obviously, it’s great how easy it is to use so many different technologies together: building software and handling dependencies through Gradle is much easier than doing it by hand, and using Spring Data to handle persistence saves a huge amount of time and configuration.
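
As a sketch of that last point – Customer and the finder method are invented names – a Spring Data repository is just an interface; the query implementation is derived from the method name at runtime:

```java
import java.util.List;
import org.springframework.data.jpa.repository.JpaRepository;

// Customer is an ordinary @Entity class (not shown). No DAO class or
// SQL is written by hand: Spring Data generates both.
public interface CustomerRepository extends JpaRepository<Customer, Long> {
    List<Customer> findByLastName(String lastName);
}
```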

Having said that, a full-stack demo in Java has involved a huge number of technologies – 19 in total. Not all of these require deep competency (and in some, like Angular, I’d like to know a lot more). Some, like Flyway or JaCoCo, could be ignored – although I’d rather have them in place early than add them later.

When I first learned Java it was relatively straightforward. My first job required Java, HTML, CVS, make, JSP, JDBC and MySQL. I wouldn’t want to return to that era – the applications I produce now are far more sophisticated than anything we could have done back in 2000. But I do wonder how easy it is for new developers to gain confidence and competence in so many tools. Consider how much harder it would be to get a simple site up-and-running in Java compared with PHP.

Which poses an interesting question – how does a new developer get to the point where he or she can produce something interesting? And how can we know that we are using each of these 19 technologies competently and securely?


Deploying sites quickly on Spring Boot

According to the author Thomas Mann, “A writer is someone for whom writing is more difficult than it is for other people”. And, working as a tech-lead, I found myself turning into someone for whom writing Java was more difficult than for other people. When a piece of software might need supporting for years there are a lot of things to consider.

One of the attractions of Spring Boot is that it offers a way to get sites live very quickly. In practice, a production environment presents so many potential issues that a Spring Boot deployment still requires a significant amount of work. A simple hobby site… should, in theory, be very quick.

So, I’m going to try to put a Spring Boot site live and see how long it takes. The site will be simple – a map of Brighton where users can add points of interest. I’m going to use Heroku to simplify deployment and cobble together some JavaScript.

I’m not racing to get this live, but taking this at a leisurely pace – and the hours won’t be contiguous. Let’s see how it goes.

Hour 1

  • Set up a new git repo.
  • 0c7b180 – Created a basic Spring Boot project, using the ‘Accessing JPA Data with REST’ guide as a template. Rather than storing Person objects, I used PlaceOfInterest objects (see the sketch after this list). Within 15 minutes I could write to and read from an in-memory JPA datastore.
  • 2efcede Added some static pages to the Spring Application, which I had lying around already.
  • 1cad492 Next was setting up Heroku (which meant retrieving an ancient login). The free service should suffice for the basics. Heroku have a guide to deploying Spring Boot applications. This didn’t mention the Procfile, which is described in a post by Nicholas Paul Smith. Testing the app through curl showed it persisting data, albeit to an in-memory database.
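
A minimal sketch of what that first hour produces – the entity’s fields are assumed, since the post doesn’t list them. Spring Data REST turns the repository interface into a full set of CRUD endpoints without any controller code:

```java
// PlaceOfInterest.java - the entity stored in place of the guide's Person.
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

@Entity
public class PlaceOfInterest {

    @Id
    @GeneratedValue
    private Long id;
    private String name;      // assumed fields: the post doesn't list them
    private double latitude;
    private double longitude;

    // getters and setters omitted for brevity
}
```

```java
// PlaceOfInterestRepository.java - Spring Data REST exposes this under
// /places with GET, POST, PUT and DELETE, which is what curl was testing.
import org.springframework.data.repository.PagingAndSortingRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

@RepositoryRestResource(collectionResourceRel = "places", path = "places")
public interface PlaceOfInterestRepository
        extends PagingAndSortingRepository<PlaceOfInterest, Long> {
}
```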

So, in the first hour, I have a Spring Boot project running on Heroku. The next step is to add a persistent Postgres database.

Hour 2

Hour 2 was spent in Emporium Cafe, struggling to connect the Spring Boot application through Hibernate to a local Postgres database. Since I was trying to do things fast, this was something of a hack-and-slash effort – for example, resorting to creating a hibernate_sequence object manually. Setting up Postgres was a bit of a drag, as I’m used to MySQL.

The database URL was passed via a JDBC_URL environment variable, as I’ll need this for Heroku – a sketch of that wiring follows below. At the end of the hour I had a local application writing to the Postgres database. The next session will start with tidying up the source code that was added and committing it.

  • 179356f – Added a basic SQL file. This was generated by Hibernate with the addition of a sequence. This needs to be tidied up.
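
For the JDBC_URL wiring mentioned above, a minimal sketch assuming a standard Spring Boot configuration class (the post doesn’t show how the variable is actually consumed; pointing spring.datasource.url at it would work equally well):

```java
import javax.sql.DataSource;
import org.springframework.boot.jdbc.DataSourceBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Builds the DataSource from the JDBC_URL environment variable, so the
// same build runs against a local Postgres and the Heroku-provided one.
@Configuration
public class DataSourceConfig {

    @Bean
    public DataSource dataSource() {
        return DataSourceBuilder.create()
                .url(System.getenv("JDBC_URL"))
                .build();
    }
}
```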

Hour 3 (and a bit)

  • 30b5346 Started by committing the database work from the day before.
  • 4fdc2c7 Added a new file to keep track of what needs doing for this site.
  • 2a217b0 I added a new layer to the map, which loaded tab-separated data from a new controller (sketched below). We’re using OpenLayers 2 for expediency, as this simple format is not available in OpenLayers 3. This was a rather frustrating process: the markers were not displaying, but there was no error. This commit has a single marker appearing about 400 miles south of Ghana.
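
A hypothetical sketch of that tab-separated endpoint, reusing the PlaceOfInterest types sketched earlier – OpenLayers 2’s Text layer expects a header row followed by one tab-separated lat/lon/title line per marker:

```java
import org.springframework.http.MediaType;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Serves every point of interest in the simple tab-separated format
// that an OpenLayers 2 Text layer can load directly.
@RestController
public class PoiTsvController {

    private final PlaceOfInterestRepository repository;

    public PoiTsvController(PlaceOfInterestRepository repository) {
        this.repository = repository;
    }

    @GetMapping(value = "/poi.tsv", produces = MediaType.TEXT_PLAIN_VALUE)
    public String pointsOfInterest() {
        StringBuilder tsv = new StringBuilder("lat\tlon\ttitle\n");
        for (PlaceOfInterest poi : repository.findAll()) {
            tsv.append(poi.getLatitude()).append('\t')
               .append(poi.getLongitude()).append('\t')
               .append(poi.getName()).append('\n');
        }
        return tsv.toString();
    }
}
```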

The next session will start with working out why OpenLayers was not displaying the marker where I expected it, in Sussex, England. This was annoying enough to keep me looking into it for another 15 minutes, so hour 3 was a little longer than planned. As usual with mapping bugs, the problem was a failure to set the projection.

Hour 4

A slightly shorter hour, to make up for overrunning the previous night.

  • ba69d68 Added the projection to the POI layer, and generated the TSV file from the database. This means that the markers on the map are now connected to Postgres.
  • 74b718d Set the app up on Heroku – which required changing the database URL value and, for some reason, reinstalling some Heroku command-line plugins, which slowed things down. I also had to add the database objects to the external database.
  • 1a436bc Cosmetic changes, including a title for the page and removing the default favicon – although this is still appearing for some reason.

At the end of the fourth hour, I have a site running on a remote server that can read and update a database to display locations. Given my slowness setting up Postgres and fixing the OpenLayers issue, it looks like a basic CRUD site could be put live in a couple of hours. Next up: adding a form to the page for new data.

Hour 5

At this point, I’m into playing with JavaScript, setting up a form to submit data. Clicking on the map now updates a form, and submitting the form adds entries to the database. It’s a little clunky – the new object doesn’t appear on the map without refreshing; the form looks ugly; and there is an error handler firing on the AJAX request. A slightly slow hour, getting to grips with JavaScript, but the main user-facing functionality is now there.

  • 4452c5d Added a form to the site so that new places-of-interest can be submitted.

To Be Continued?

There are a few obvious things to do next. The TSV file should only show visible points, the page needs tidying and I’d like to use a proper domain in place of the Heroku-supplied one. I’m very pleased with the speed and simplicity of progress so far.