
Java Infrastructure Part 7 – Adding coverage checking

Coverage testing is considered an essential part of development nowadays, but I don’t think many people reflect deeply enough on what it involves, or why they are doing it.

Coverage tools measure how much of the code is exercised by the tests that are run and, broadly, a higher number is better. But coverage doesn’t tell you how good the tests are. You can have problems such as high coverage from redundant tests, code being exercised without being tested properly, and unmaintainable test code. Sometimes coverage is used as a substitute for actually understanding unit testing.

The other important question is how high the coverage should be. 100% coverage is extremely hard to achieve, so much so that many people suggest it is a waste of time. Having said that, all other levels of coverage are somewhat arbitrary. Some places I’ve worked have aimed for an ‘appropriate level of coverage’, but never had the time to review and enforce it.

Good coverage is difficult to add retrospectively; easily testable code needs to be written in a certain way. Michael Feathers’ book Working Effectively with Legacy Code is still an excellent guide to salvaging untested code, despite being almost twelve years old. Actually implementing Feathers’ recommendations takes a care and diligence that few people bother with. It’s better to aim for excellent coverage from the start.

The good thing about having very little code in our new project is that we can get our coverage up to 100%. Adding Jacoco is a simple matter of putting a single line into the build.gradle file:

apply plugin: 'jacoco'

The Jacoco reports can then be produced locally with the command ./gradlew clean test jacocoTestReport.

The test case covered the Greeter class’s greet method, but not the main method. Perhaps controversially, I’ve chosen to remove this method rather than add a test for it – this was originally used as a test harness and that behaviour is not needed when we have the unit tests. And less code means less to keep track of.

Something interesting happens when we add the Jacoco reports to Jenkins. Initially I’ve added them to the main job for the project. This doesn’t matter while the project is small, but will become a problem later on: we want to maintain a fast response to errors, and the current run already takes about 30 seconds, which is on the outside edge of acceptability.

A new post-build action for Jacoco

A problem comes with the results for the build. No coverage results are produced, and an error appears in the Jenkins output, although the build itself does not fail: java.io.IOException: Incompatible version 1006

Jacoco results are not working

The problem here is that the Jenkins Jacoco plugin doesn’t work with all versions of Jacoco, and the default version Gradle uses is too old for it. This can be solved by forcing version 0.7.5 of Jacoco in the build file.
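A minimal sketch of the pin, using the jacoco extension that the Gradle plugin provides (the exact version string may differ):

jacoco {
    // Pin the Jacoco tool version so the exec files it writes
    // match the format the Jenkins Jacoco plugin expects.
    toolVersion = '0.7.5.201505241946'
}

With the version pinned, the results are now working: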

The Jacoco coverage report in Jenkins

(The branch coverage is at zero because the code currently has no branches.)

The problem with adding this line to the build file is that it is a significant piece of technical debt. We have one tool’s version restricted due to compatibility with another. This may cause problems if we introduce a third tool requiring a specific version, and we have to keep track of when the restriction can be removed. It doesn’t take long for the purity of a greenfield development to disappear.

The commit for the latest version is 2f7307d. Next it’s time to look at adding Spring Boot to the project.


Java Infrastructure Part 6 – Unit testing

I’ve been using JUnit for about fifteen years. In that time it has become a central part of Java development. While that’s great, unit testing is still problematic. Most people agree that automated testing is necessary, but its exact form is more controversial.

There are two main candidates for a unit testing framework, JUnit and TestNG. These packages have very different aims – in Senior Developer interviews I would sometimes ask about the difference between them. I like the question, as there are a lot of right answers and the chosen response says a lot about the developer.

The simplest answer is that they are broadly similar. They do have a different argument order in their assertions, which has led to TestNG providing a separate class that follows the JUnit assertion order. Some companies I’ve worked for have solved the problem of choosing between JUnit and TestNG by going for both – and then used both argument orders for TestNG.
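To make the difference concrete, here is a sketch (the class is illustrative): JUnit’s Assert takes the expected value first, TestNG’s Assert takes the actual value first, and TestNG’s AssertJUnit class restores the JUnit order.

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class AssertionOrderExample {
    @Test
    public void greetsTheWorld() {
        String actual = "Hello world";
        // JUnit order: expected first, actual second.
        assertEquals("Hello world", actual);
        // The TestNG equivalent reverses the arguments:
        // org.testng.Assert.assertEquals(actual, "Hello world");
        // ...while org.testng.AssertJUnit keeps the JUnit order.
    }
}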

There’s a good JUnit/TestNG comparison on Mkyong.com, which points out that the main difference between the two is that JUnit lacks some of the features included in TestNG. But the best answer (which I never received) is that the philosophies of the two are very different. JUnit is a unit testing framework, and as such does not encourage certain practices – for example, having tests that need to run in a particular order. JUnit 5 Alpha was recently announced, and the new features don’t appear to violate this goal.

There are often problems with what unit testing actually is. Unit tests should be small, independent, deterministic and low-level. They should not have any direct dependencies on files, databases or the underlying OS (random number generation, current time, etc). Tests at higher levels are important too, but these should be kept clearly separate. By not following these rules, developers risk producing slow, brittle unit tests that require complicated set-up. Good unit tests actually force the code to be written in a more testable way, reducing dependencies and using smaller elements.

Adding tests to the project is simple enough. There is a new class, GreeterTest, and a few lines in the build.gradle file. The changes to the build include a JUnit dependency, which in turn means adding a link to the central maven repository. Gradle is written so that dependencies can be added with little trouble, but this does bear thinking about a little. How does one know that the files being downloaded have not been tampered with? Are we accessing these repositories efficiently? This will be discussed in detail later, but meanwhile we need to add a note to the TODO page.
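The build changes were probably along these lines – a sketch in the Gradle style used elsewhere in this series, with an illustrative JUnit version:

repositories {
    // The central maven repository, needed to fetch the JUnit jars.
    mavenCentral()
}

dependencies {
    // JUnit is only needed to compile and run the tests.
    testCompile 'junit:junit:4.12'
}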

Some years back, I gave a talk on unit testing. It was about an hour long, and still only scratched the surface. While unit testing is easy, it introduces a lot of issues around how a project will work. I’ve seen company after company get tangled in unit tests. Simply adding JUnit to a project is not enough; consistency and a real rigour are needed around how tests are used.

An example of this is the use of set-up methods in the tests. These become complicated, and end up with test base classes and subclasses, making fixing tests a chore. Better to have the set-up in each individual test method, and if that becomes unwieldy, to examine our object model. A broken test can then be read from start to finish and understood on a single IDE screen. Yes, it produces duplication, but the aims of test code are very different to those of production code.
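As a sketch of the style I mean (the test here is illustrative, though it matches the Greeter class from part 2):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class GreeterTest {
    @Test
    public void greetsTheNamedUser() {
        // All the set-up sits inside the test itself, so a broken
        // test reads from top to bottom with nothing hidden away in
        // @Before methods or parent classes.
        assertEquals("Hello world", Greeter.greet("world"));
    }
}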

Introducing unit testing also requires the processes and infrastructure to support it. We’ve added Jenkins to the project, and broken unit tests will cause the build to ‘break’, to turn red. Continuous integration relies on tests running quickly, to allow a tight feedback loop for developers. Over time, slow builds become slower, and never quite get fixed. Also, the rules about not committing to a broken build need to be taken seriously. Too often, companies have unit test suites that break in specific ways, and developers are forced to understand when a broken build matters and when it is acceptable. This is far too confusing. Using unit testing means following certain rules and methods of development.

Adding JUnit to Jenkins is relatively simple. Make sure that the JUnit plugin is installed, then edit the build for the project to publish the JUnit results. The build will now fail if the tests fail, and full details can be seen within Jenkins.

JUnit test results published in Jenkins

The latest commit is a3b7fa1. In the next part, we’ll be looking at adding coverage checking to the project.

 


Java Infrastructure Part 5 – Introducing Jenkins

An interesting effect of writing a series of posts like this is how it clarifies your thinking. I originally planned to introduce a continuous integration server after Javadoc, JUnit and so on. But, as I’ve researched and thought about this, I’ve decided that a continuous integration server is a fundamental tool for development. It should be at the heart of any project.

Good development requires automation. Rather than carry out any steps manually, we should automate them from the start. I’ve known colleagues who saw Jenkins as the powerhouse of an organisation: one could have hundreds of jobs, not just passively monitoring repositories to run builds, but promoting code, running reports and even deploying software. Jenkins provides the plugins and the framework for a finely-grained permissions system, based on specific tasks rather than on all the underlying grants and credentials needed.

The problem with CI is that it takes a significant amount of investment and commitment to put in place retrospectively. An organisation that is able to deploy code manually may not feel excited about spending time and energy just to simplify those builds, even when deployments become unwieldy enough to prevent growth. CI also requires discipline – it takes a lot of courage to stop a large organisation until failing unit tests or transitory broken builds are resolved. It’s far easier to carry on with a broken system that seems to work than to push towards an efficient, modern build.

Jenkins runs inside its own application server, separate from the software being built. It is available for download from http://jenkins-ci.org/, where there is a Java Web Archive available. The current version is 1.650 and, as discussed in the last part, we need to note this for later use as we scale up.

We need to introduce and document a new environment variable here, JENKINS_HOME, specifying the location where Jenkins stores its internal files. A major issue with Jenkins is that it doesn’t do a good job of separating code from configuration. This poses the question of how to run, maintain and restore Jenkins instances. I will avoid the question of restoration just now – I suspect it will be much simpler after virtualisation is introduced.

The command to run Jenkins is simple: java -jar jenkins.war. The server can then be accessed at its default location, http://localhost:8080/. Running Jenkins on a local machine is not really satisfactory in the long term, but will do for now.
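Assuming a directory has been set aside for its files, a first start looks something like this:

> export JENKINS_HOME=/path/to/jenkins-home
> java -jar jenkins.war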

Some initial configuration is required. Again, for the time being, this is system specific and can be found by clicking Manage Jenkins then Configure System. We can point to the current JDK or download a new one. Location of the JDK to be used is another ambiguity that must be dealt with.

I will skip over some of the steps here – there are many good tutorials about Jenkins available, including a very useful O’Reilly book, which I have been using as a reference. The main steps I followed were:

  1. Install the git plugin (version 2.4.2)
  2. Install the gradle plugin (version 1.24)
  3. Install the blue/green balls plugin. By default, Jenkins has its successful builds shown as blue. The Jenkins blog notes that this plugin is in the top ten – and also points out that the red/blue colour scheme is a Japanese thing.

Having set up the basic environment, we add a new freestyle project to build our code. We use gradlew, with both the clean and build targets.

We test the build by running the jar, and that seems to work just fine.

Happy green build

So, there we have it, a slightly clunky local build of Jenkins. I wouldn’t say that this Jenkins set-up is particularly good. However, even with those limitations, it provides a heartbeat for the upcoming stages of the project. If you’d like any more detail on steps that I’ve skipped over, please leave a comment and I’ll edit the text.

The latest commit on github is 392d98e.


March’s Brighton Java – Scala Types


On Wednesday it was Brighton Java, where Richard Dallaway gave a preview of his QCon session, Types working for you, not against you. Richard has spoken at Brighton Java twice before (an Introduction to Scala and Lambda Expressions in Java 8) and it was good to welcome him back.

While I don’t know Scala at all, the talk was a fascinating insight into the language. By focussing on types, the talk gave an idea of Scala’s power and flexibility. Richard described Scala as an ‘unopinionated language’, allowing for a great freedom of style. It was also interesting to learn a little about Monoids and the ways they could be used.

The talk reminded me that I really need to look into Scala more.  While I recently became excited about groovy, I found this quote from James Strachan, the language’s designer: “if someone had shown me the Programming in Scala book… back in 2003 I’d probably have never created Groovy”. Richard listed some good starting points for investigating Scala – see the threads here.

Wednesday was the first Brighton Java meeting held at the Brandwatch HQ. We had a record number of signups, with 50 people turning up on the night. I’m pleased to see this much engagement from Brighton’s Java community.

On a personal level, I’m delighted with how Brighton Java has grown. The audience has increased massively since the first meeting back in August 2012. The ongoing support of Brandwatch and the Skiff have been fantastic. I’ve also really appreciated the help of my co-organisers Luke and James. Things have really got going since they came on board.

Now to organise our April events. Yes, for the first time we’re going to be doing two events in a month. Let’s see how this goes.



Java Infrastructure Part 4 – The Build System

It’s about time we added a build tool to the project. It’s possible to create jars by hand, but that soon becomes time-consuming and error prone. Having a repeatable build process launched with a single command is pretty much essential to doing anything interesting with software.

Over the years I’ve used make, ant, maven and gradle. The one of these I like least is ant. It seems to produce massive, thousand-line monstrosities that are unreadable and inscrutable. And while ivy is fairly similar to maven’s dependency management, it doesn’t seem as natural to me. Having said that, maven can also get unwieldy, with simple builds that get out of hand.

I’ve not used Gradle a great deal, but it seems an obvious choice. A significant reason is its success – Gradle is the standard tool for Android Studio and Spring. Popularity is often under-rated as a reason for choosing tools or frameworks, but it means examples and expertise are easier to find. There may be many good reasons for choosing lesser-used frameworks, but knowing there is a vibrant community around a platform is a major plus.

However, I’m still cautious about Gradle. I’ve found some of the plugins I’ve used unhelpful, with the missing options harder to find than they were with maven. I also find the documentation focuses too much on how to do certain tasks rather than explaining the underlying concepts and assumptions. On top of that is a growing suspicion that Groovy may result in scripts that are write-only, impossible to read back later on, just like Perl scripts used to be.

(There’s an example in the documentation of the power of dynamically-generated tasks and their potential for chaos. The script

4.times { counter ->
    task "task$counter" << {
        println "I'm task number $counter"
    }
}

creates four tasks, which can then be called as

> gradle -q task1
I'm task number 1

I can see some powerful uses for this, but I can also see myself struggling to work out where on earth a failing task comes from.)

Despite some teething problems with the Artifactory plugin at work, I’ve enjoyed using Gradle so far. I love groovy for its concision and charm and there’s an optimism to using a new tool, particularly when the documentation explains how much better it is. It may turn out that maven would be a better choice but, because we’re working on infrastructure rather than code, we should have a lot more freedom to change things later.

Gradle uses the same concept of convention over configuration as maven. Past experience tells me that it’s easier to work with the grain of such tools than to fight them, so we will move our source directories from src/ to src/main/java/ in line with this.

Because we’ve used the standard directory layout, the initial build script is extremely simple. In fact, it’s just a single line in our initial build.gradle file:

apply plugin: 'java'

Running the command ‘gradle build’ results in the jar file being built. Nice and straightforward – but I feel a slight sense of nervousness that so much happens with a single command. For example, if we had not moved the source directories, gradle would still happily produce a jar file, just one with nothing in it.
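Had we kept the sources in src/, the build need not have silently produced an empty jar – Gradle can be pointed at a non-standard layout explicitly. A sketch:

sourceSets {
    main {
        java {
            // Tell Gradle to look in src/ rather than src/main/java/.
            srcDirs = ['src']
        }
    }
}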

Introducing a new tool means something else to track. As well as noting the current version in the readme and todo files, Gradle also offers a mechanism for reducing the risk of different versions being used – the Gradle wrapper. This is a script that checks whether the required version of Gradle is available on the local machine. If not, that version is downloaded and stored locally. This requires us to add a new wrapper task to the script, then execute the gradle wrapper command.

task wrapper(type: Wrapper) {
    gradleVersion = '2.11'
}

The wrapper adds several new files – gradlew and gradlew.bat scripts, as well as a jar file and configuration in the gradle/wrapper folder. This is intended to be committed to git, so that anyone building the project in future can use the correct version of gradle via the gradlew command. That version is downloaded and stored centrally so that it can be used by other gradlew scripts as needed.

However, this convenience introduces a new issue, one we will face again when we introduce dependency management: how do we make sure that the code we download is safe? There’s an interesting discussion of risk in a post called How to Take over the computer of any Java developer. Basically, we need to make sure that the code we download has not been tampered with.

A basic level of security is provided by the distributionSha256Sum property, which is added to gradle-wrapper.properties and checks that the zip file downloaded from http://services.gradle.org/distributions/gradle-2.11-bin.zip is the one expected. Of course, this in itself requires finding “the SHA-256 hash of a known Gradle distribution”. We’d probably be OK in trusting the (HTTP) download, but this isn’t really good enough. It’s going to be added to the TODO list, and dealt with after we’ve looked at dependency management.
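For reference, the pairing in gradle-wrapper.properties looks something like this – the checksum value here is a placeholder, not the real hash:

distributionUrl=http\://services.gradle.org/distributions/gradle-2.11-bin.zip
distributionSha256Sum=<sha-256 hash of gradle-2.11-bin.zip>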

The latest git commit is cd8e97a. In the next part we’ll look at adding a continuous integration server.


Java Infrastructure Part 3 – A problem with compilation

In theory, compiling our Java class is straightforward: drop into the command line, use the javac command, then test by running the main method.

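The commands were along these lines – a sketch assuming the src/ layout from part 2:

> mkdir classes
> javac -d classes src/com/riddlefox/greeting/Greeter.java
> java -cp classes com.riddlefox.greeting.Greeter
Hello world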

Which is fine on my laptop – I now know I can compile the code and run it. But problems can arise as the code in question becomes more complex, or if it needs to run on other machines. The latter is a certainty – putting aside failure of this laptop, I want to run this code on a server at some point. (That is, unless I decide to develop directly on the production machine. That seems such an appalling idea that I find myself wondering whether there is some bizarre case to be made for it.)

Using java -showversion reveals that I am using 1.8.0_72. The latest version at the moment is Version 8 Update 73, which was released on February 5th 2016 – I’m writing this on the 21st. There are two problems here.

  1. How do I make sure that this code is always handled with a consistent version of the Java SDK? I don’t want to risk inconsistent behaviour between different machines.
  2. How do I make sure that I am running the latest version of the SDK? Looking at the release notes for version 73 to see the differences between it and the version I’m using, I notice that there are some security patches that I’m not taking advantage of.

This problem will occur with every tool that is used. A similar problem will occur when we start adding some dependencies to the software, but we will deal with that separately.

There is also a certain amount of configuration that remains implicit when I am running on a single machine. Right now this doesn’t matter much, but these sorts of problems become a nightmare as the software grows – what are my environment variables? What is the underlying OS? It would only take a few minutes for a developer to set up a new machine to run this code now, but as we add databases, continuous integration etc, that becomes more difficult.

Consistency sounds like an obscure problem (and is low-risk for the Java SDK), but when it does arise, it’s vicious. You don’t want a bug on the server that can’t be easily spotted on development machines. If the development and production environments are the same then every bit of work carried out confirms that the code works as it should.

What are some options for dealing with these issues?

  • Document a target environment fully and allow people to follow that as closely as they want/need to. There are still problems when doing this, but it’s more than a lot of companies bother with.
  • Use a bespoke local machine build image – the question then becomes how to keep existing machines in sync with this. Over time, the machines diverge from the original image, or the image needs updating. This can be complicated by machines needing special builds for testing etc.
  • Find a way to develop cleanly using docker/vagrant or similar. The code is executed and possibly compiled within VMs. These can be rebuilt every time.

Build images and documentation are both useful first steps, but ultimately, the VM-based solutions feel right. At this point, I am going to put in some TODOs to cover this. It’s unsatisfactory, as there are now 5 of them (compared to just 11 lines of code, 5 of which are blank or single characters). This issue needs to be dealt with soon, but I want to put in a bit more structure to make this easier. However, as a stop-gap we should also note our current build environment in the README file.

The latest git revision is eb62957. In the next part, I’ll be adding an automated build tool.


Java Infrastructure Part 2 – A ‘simple’ Java class

Even though we’re focusing on infrastructure, we should have some code to play with. We’ll use a simple Java class, one that merely says hello, and build everything else around it. We will put this onto a server, with (eventually!) a pipeline to deploy it, and a user management system so it can say hello to specific users; but we’re not going to add any functionality beyond greetings until that is all working.

Our first version of the class is this:

package com.riddlefox.greeting;

public class Greeter {
   public static void main (String[] args) {
       System.out.println(greet("world"));
   }

   public static String greet(String name){
       return "Hello " + name;
   }
}

And we create this in the src/com/riddlefox/greeting folder. It’s a simple class that would work in almost any version of Java. But it already makes a lot of assumptions. These aren’t necessarily problems, as long as we’re aware of them. What can we say about this first class and what it implies about the project?

  1. We have placed this into a fairly uninformative package/folder structure. At this point, there isn’t any need for sub-modules, particularly when there are no other files to distinguish it from. A src folder in the project root and a package name of com.riddlefox.greeting are probably good enough for now.
  2. I haven’t added any Javadoc. Arguably, the class is too simple to need it yet, and I want to avoid the sort of Javadoc that simply repeats the method definition. We’ll add a TODO about adding in Javadoc later on, when the project is a little larger and the method might be used without access to the source.
  3. I have misgivings around the use of the static keyword. It’s mainly there to make the main method concise. In terms of a single class this probably isn’t the end of the world, but if we add much more code such issues of style will become important.
  4. I also have misgivings around the name. As the joke goes, there are only two hard problems in computer programming, and I’m avoiding one of these hard problems. Again, as a project grows naming becomes more important.
  5. String management is an issue here. Changing the strings requires recompiling the class. There is also no means of internationalisation. These can both be added to the TODO list.

The issue of String management is a difficult one. It’s good to be able to update the strings on an application without redeploying, particularly when you have a monolith that takes time to deploy. However, it also adds a level of obfuscation to the code. If the application is easy to update, then redeploying it might not be a problem.
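If we did want the strings outside the code, the standard Java route is a ResourceBundle. A sketch, assuming a greetings.properties file on the classpath containing greeting=Hello {0}:

import java.text.MessageFormat;
import java.util.ResourceBundle;

public class Greeter {
    private static final ResourceBundle BUNDLE =
            ResourceBundle.getBundle("greetings");

    public static String greet(String name) {
        // The template lives in greetings.properties, so the text can
        // change without recompiling; locale-specific variants
        // (greetings_fr.properties etc.) give internationalisation.
        return MessageFormat.format(BUNDLE.getString("greeting"), name);
    }
}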

I’ve spent so long talking about this class that I am not going to actually compile it yet – that can wait until the next part, which will be entirely devoted to compilation. In the meantime, the latest version of the repository is on github.

PS – If you don’t know the joke, there are said to be two hard problems in computer science: cache invalidation, naming things, and off-by-one errors.


Java Infrastructure Part 1 – Version Control

If your code isn’t in version control, then it doesn’t really exist. It’s too easy to lose the code on a single computer – a hard drive failure, or maybe a mistaken rm command. And it’s also easy to make a change that breaks something and not be able to get back to a working version. At those times a stored copy is a lifesaver. The basic requirement of a Version Control System is to keep code safe. But, over the years it’s become much more than that.

I’ve used a lot of different VCSs – SourceSafe, RCS, CVS, SVN and git. At my first job, we used a series of network folders, with a Lotus Notes DB to keep track of who was working on each file. Different versions of the folders identified the different development environments, from development through to live. Promoting the site would involve copying code from one set of folders to the next.

At first, version control is about making sure the software is safe and providing a history. This in itself makes it invaluable. But tools like SVN and git make collaboration easier. Of all the tools I’ve used, git is the first one that I’ve loved.

A lot of the tutorials on git treat it like a distributed SVN. This may be helpful in getting started, but soon leads to confusion. My favourite tutorial is Git from the Bottom Up, which discusses git in terms of the objects it uses internally. That makes it much easier to understand: git is a time machine, allowing you to open up alternative timestreams, recombine them, and do it all without opening up paradoxes. As long as you understand what you’re doing.

Git adds a lot of great features. Git bisect is great for finding where bugs were introduced. Git stash is great for when you need to change what you’re working on. Git detects moved files better than SVN does. But the best thing about git is the branching model. Rather than branching being something difficult, as it can be in SVN, git treats branching as something that should be commonplace.
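As an example, a bisect session takes only a handful of commands – here using the project’s first commit as the known-good point:

> git bisect start
> git bisect bad
> git bisect good 94d34c6

Git then repeatedly checks out a commit between the two; mark each one good or bad and it homes in on the commit that introduced the bug.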

The big problem with version control systems is how you fit them into the company’s working methods. Git enables people to collaborate effectively, but it also provides challenges. This is a topic that deserves a whole post of its own. The Death of Continuous Integration is an excellent talk on the topic by Steve Smith.

So, before we write any code for our new project, we need to set up a repository. Github is a convenient place to host these archives, and that’s where I’ll be putting the java-infrastructure code.


Our repository is created with a Java gitignore file and a readme. To that I am going to add a single file, a TODO. The Readme provides a quick overview of what this project is for. The TODO is a simple reference to track things that need to be added to this project. The first item in this file is a note to add a better issue tracking system.

It’s not much of a project yet, but at least we know anything we add is safe in a repository. The current state can be found as commit 94d34c6.

Next up: writing some Java code.


Java Infrastructure Part 0 – The Long Way

I’ve been programming for a long time and worked with a lot of different companies. I’ve seen a range of architectures, organisations and processes. I started coding before the Agile Manifesto was signed, so I’m old enough to remember that projects were sometimes still successful under waterfall – but that’s another story.

Writing classes and putting applications online are easy enough. Most companies are working on well-understood problems. Despite this, two issues tend to emerge. The first is maintenance. A lot is written about refactoring and managing software, but it rarely works in practice. No matter how clever the devs, code tends to end up more complicated than it needs to be, and change becomes difficult.

The second issue is linked to the first, and that is infrastructure. It’s easy enough to write a new piece of code and put it live. It’s so easy that a lot of people focus on writing features for a new application. Deployment tends to be figured out in the closing weeks of the project. After all, the first deployment is relatively straightforward. The problem comes as things grow more complicated.

Once a piece of software is live and has users, it’s hard to switch deployment strategies. The Internet is now sufficiently established that it’s not appropriate to shut down the system every time you need to make a change. The first few deployments are simple, quick. As the system grows, it takes longer and longer to redeploy, by which time there are a lot of other things competing for attention.

Adding infrastructure to a large project is a challenge. One doesn’t want to risk breaking those obscure sections of config files, placed there to handle one specific situation. Obsolete sections are left in the config because nobody is quite sure if a line does something or not. In the end, only one or two long-established developers are able to change the infrastructure. After they leave things become even more difficult.

What I want to do with this series – both on my blog and on github – is to build up a generic piece of software with simple Java code, but to build a rich infrastructure around it. I think there is a lot to learn from this – and at the end I’ll have a good base to work from with my own future projects. I hope to learn about making infrastructure flexible, which, as I’ve said above, is a rare thing.

Java is easy, but being a professional developer requires much more: version control, continuous delivery/deployment, build management, monitoring, IDEs, logging frameworks, email management etc, etc. This is what I’m going to focus on.

Some years back, I studied deconstruction for my MA. At the start of the course, the Professor read a short Kafka story, The Next Village:

“My grandfather used to say: Life is astoundingly short. To me, looking back over it, life seems so foreshortened that I scarcely understand, for instance, how a young man can decide to ride over to the next village without being afraid that – not to mention accidents – even the span of a normal happy life may fall far short of the time needed for such a journey.”

After reading those sixty-six words, the Professor sighed. “We could spend all ten weeks on that piece”. I’m not planning to be quite that meticulous, but this is going to be quite detailed. Based on the notes I’ve made so far, no code gets compiled until Part 3.


Deploying sites quickly on Spring Boot

According to the author Thomas Mann, “A writer is someone for whom writing is more difficult than it is for other people”. And, working as a tech-lead, I found myself turning into someone for whom writing Java was more difficult than for other people. When a piece of software might need supporting for years there are a lot of things to consider.

One of the attractions of Spring Boot is that it offers a way to get sites live very quickly. In practice, a production environment offers so many potential issues that a Spring Boot deployment still requires a significant amount of work. A simple hobby site… should, in theory, be very quick.

So, I’m going to try to put a Spring Boot site live and see how long it takes. The site will be simple – a map of Brighton where users can add points of interest. I’m going to use heroku to simplify deployment and cobble together some Javascript.

I’m not racing to get this live, but taking this at a leisurely pace – and the hours won’t be contiguous. Let’s see how it goes.

Hour 1

  • Set up a new git repo.
  • 0c7b180 – Created a basic Spring Boot project, using Accessing JPA Data with REST as a template. Rather than storing Person objects, I used PlaceOfInterest objects. Within 15 minutes I could write and read from an in-memory JPA datastore.
  • 2efcede Added some static pages to the Spring Application, which I had lying around already.
  • 1cad492 Next was setting up Heroku (which meant retrieving an ancient login). The free service should suffice for the basics. Heroku have a guide to deploying Spring Boot applications. This didn’t mention the Procfile, which is described in a post by Nicholas Paul Smith. Testing the app through curl, it is persisting data, albeit to an in-memory database.

So, in the first hour, I have a Spring Boot project running on heroku. The next step is to add a persistent postgres database.
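Following that guide, the entity and repository were presumably little more than the following sketch – the field names are my guesses, not the actual code:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.springframework.data.repository.CrudRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

@Entity
public class PlaceOfInterest {
    @Id
    @GeneratedValue
    private Long id;

    private String name;
    private double latitude;
    private double longitude;

    // getters and setters omitted
}

// Spring Data REST exposes CRUD operations on this repository over HTTP.
@RepositoryRestResource
interface PlaceOfInterestRepository extends CrudRepository<PlaceOfInterest, Long> {
}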

Hour 2

Hour 2 was spent in Emporium Cafe, struggling to connect the Spring Boot application through Hibernate to a local PostgreSQL database. Since I was trying to do things fast, this was something of a hack-and-slash effort – for example, resorting to creating a hibernate_sequence object manually. Setting up Postgres was a bit of a drag, as I’m used to MySQL.

The database URL was passed via a JDBC_URL environment variable as I’ll need this for Heroku. At the end of the hour I had a local application writing to the postgres database. The next session will start with tidying up the source code that was added and committing it.
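The Spring Boot side of that would look roughly like this in application.properties – the property names are the standard Spring Boot ones, with the URL deferred to the environment:

# JDBC_URL is resolved from the environment, locally and on Heroku.
spring.datasource.url=${JDBC_URL}
spring.datasource.driver-class-name=org.postgresql.Driver
# The schema is managed by hand (see the SQL file below), so
# Hibernate is told not to generate it.
spring.jpa.hibernate.ddl-auto=none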

  • 179356f – Added a basic SQL file. This was generated by Hibernate with the addition of a sequence. This needs to be tidied up.

Hour 3 (and a bit)

  • 30b5346 Started by committing the database work from the day before
  • 4fdc2c7 Added a new file to keep track of what needs doing for this site.
  • 2a217b0 I added a new layer to the map, which loaded tab-separated data from a new controller. We’re using Openlayers 2 for expediency, as this simple format is not available in Openlayers 3. This was a rather frustrating process: the markers were not displaying, but there was no error. This commit has a single marker appearing about 400 miles south of Ghana.

The next session will start with working out why Openlayers is not displaying the marker where I expected, in Sussex, England. This was annoying enough to keep me looking into it for another 15 minutes, so hour 3 was a little longer than planned. As usual with mapping bugs, the problem was a missing projection setting.

Hour 4

A slightly shorter hour, to make up for overrunning the previous night.

  • ba69d68 Added the projection to the POI layer, and generated the TSV file from the database. This means that the markers on the map are now connected to Postgres.
  • 74b718d Set the app up on Heroku – which required changing the database url value; and, for some reason, having to reinstall some heroku command line plugins, which slowed things down. I also had to add database objects to the external database.
  • 1a436bc Cosmetic changes, including a title for the page and removing the default favicon – although this is still appearing for some reason.

At the end of the fourth hour, I have a site running on a remote server that can read/update a database to display locations. Given my slowness setting up postgres and fixing the Openlayers issue, it looks like a basic CRUD site could be put live in a couple of hours. Next up: adding a form to the page for new data.

Hour 5

At this point, I’m into playing with Javascript, setting up a form to submit data. Clicking on the map now updates a form, and submitting the form adds entries to the database. It’s a little clunky – the new object doesn’t appear on the map without refreshing; the form looks ugly; and there is an error handler firing on the AJAX request. A slightly slow hour, getting to grips with Javascript, but the main user-facing functionality is now there.

  • 4452c5d Added a form to the site so that new places-of-interest can be submitted.

To Be Continued?

There are a few obvious things to do next. The TSV file should only show visible points, the page needs tidying and I’d like to use a proper domain in place of the Heroku-supplied one. I’m very pleased with the speed and simplicity of progress so far.