Categories: brightonjava

March’s Brighton Java – Scala Types


On Wednesday it was Brighton Java, where Richard Dallaway gave a preview of his QCon session, Types working for you, not against you. Richard has spoken at Brighton Java twice before (an Introduction to Scala and Lambda Expressions in Java 8) and it was good to welcome him back.

While I don’t know Scala at all, the talk was a fascinating insight into the language. By focussing on types, the talk gave an idea of Scala’s power and flexibility. Richard described Scala as an ‘unopinionated language’, allowing for a great freedom of style. It was also interesting to learn a little about Monoids and the ways they could be used.

The talk reminded me that I really need to look into Scala more. While I recently became excited about Groovy, I found this quote from James Strachan, the language’s designer: “if someone had shown me the Programming in Scala book… back in 2003 I’d probably have never created Groovy”. Richard listed some good starting points for investigating Scala – see the threads here.

Wednesday was the first Brighton Java meeting held at the Brandwatch HQ. We had a record number of signups, with 50 people turning up on the night. I’m pleased to see this much engagement from Brighton’s Java community.

On a personal level, I’m delighted with how Brighton Java has grown. The audience has increased massively since the first meeting back in August 2012. The ongoing support of Brandwatch and the Skiff has been fantastic. I’ve also really appreciated the help of my co-organisers Luke and James. Things have really got going since they came on board.

Now to organise our April events. Yes, for the first time we’re going to be doing two events in a month. Let’s see how this goes.


Categories: infrastructure

Java Infrastructure Part 4 – The Build System

It’s about time we added a build tool to the project. It’s possible to create jars by hand, but that soon becomes time-consuming and error-prone. Having a repeatable build process launched with a single command is pretty much essential to doing anything interesting with software.

Over the years I’ve used Make, Ant, Maven and Gradle. The one I like least is Ant: it seems to produce massive, thousand-line monstrosities that are unreadable and inscrutable. And while Ivy is fairly similar to Maven’s dependency management, it doesn’t feel as natural to me. Having said that, Maven can also get unwieldy, with simple builds growing out of hand.

I’ve not used Gradle a great deal, but it seems an obvious choice. A significant reason is its success – Gradle is the standard tool for Android Studio and Spring. Popularity is often under-rated as a reason for choosing tools or frameworks, but it means examples and expertise are easier to find. There may be good reasons for choosing lesser-used frameworks, but knowing there is a vibrant community around a platform is a major plus.

However, I’m still cautious about Gradle. I’ve found some of the plugins I’ve used unhelpful, with missing options harder to track down than they were with Maven. I also find the documentation focuses too much on how to do certain tasks rather than explaining the underlying concepts and assumptions. On top of that is a growing suspicion that Groovy may result in scripts that are write-only: impossible to read back later on, just as Perl scripts used to be.

(There’s an example in the documentation of the power of dynamically-generated tasks and their potential for chaos. The script

4.times { counter ->
    task "task$counter" << {
        println "I'm task number $counter"
    }
}

creates four tasks, which can then be called as

> gradle -q task1
I'm task number 1

I can see some powerful uses for this, but I can also see myself struggling to work out where on earth a failing task comes from.)

Despite some teething problems with the Artifactory plugin at work, I’ve enjoyed using Gradle so far. I love Groovy for its concision and charm, and there’s an optimism to using a new tool, particularly when the documentation explains how much better it is. It may turn out that Maven would be a better choice but, because we’re working on infrastructure rather than code, we should have a lot more freedom to change things later.

Gradle uses the same convention-over-configuration approach as Maven. Past experience tells me that it’s easier to work with the grain of such tools than to fight them, so we will move our source directories from src/ to src/main/java/ in line with the convention.

Because we’ve used the standard directory layout, the initial build script is extremely simple. In fact, it’s just a single line in our build.gradle file:

apply plugin: 'java'

Running the command ‘gradle build’ results in the jar file being built. Nice and straightforward – but I feel a slight sense of nervousness that so much happens with a single command. For example, if we had not moved the source directories, Gradle would still happily produce a jar file, just one with nothing in it.
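
A quick way to catch that failure mode is to list the jar’s contents – ‘jar tf’ does this from the command line, and the equivalent is easy in plain Java. The sketch below is purely illustrative and not part of the project; the jar path argument is whatever Gradle produced, e.g. build/libs/java-infrastructure.jar.

import java.util.jar.JarEntry;
import java.util.jar.JarFile;

// Illustrative sanity check: print every entry in a built jar, so an
// accidentally-empty artifact is obvious at a glance. The jar's path
// is passed as the first argument.
public class JarCheck {

    public static void main(String[] args) throws Exception {
        try (JarFile jar = new JarFile(args[0])) {
            jar.stream()
                    .map(JarEntry::getName)
                    .forEach(System.out::println);
        }
    }
}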

Introducing a new tool means something else to track. As well as noting the current version in the README and TODO files, we can use a mechanism Gradle provides to reduce the risk of different versions being used: the Gradle wrapper. This is a script that checks whether the required version of Gradle is available on the local machine. If not, the version is downloaded and stored locally. This requires us to add a new wrapper task to the build script, then execute the ‘gradle wrapper’ command.

task wrapper(type: Wrapper) {
    gradleVersion = '2.11'
}

The wrapper adds several new files – gradlew and gradlew.bat scripts, as well as a jar file and configuration in the gradle/wrapper folder. This is intended to be committed to git, so that anyone building the project in future can use the correct version of Gradle via the gradlew command. This version is downloaded and stored centrally so that it can be used by other gradlew scripts as needed.

However, this convenience introduces a new issue, one we will face again when we introduce dependency management: how do we make sure that the code we download is safe? There’s an interesting discussion of risk in a post called How to Take over the computer of any Java developer. Basically, we need to make sure that the code we download has not been tampered with.

A basic level of security is provided by the distributionSha256Sum property, which is added to gradle-wrapper.properties and checks that the zip file downloaded from http://services.gradle.org/distributions/gradle-2.11-bin.zip is the one expected. Of course, this in itself requires finding “the SHA-256 hash of a known Gradle distribution”. We’d probably be OK trusting the (HTTP) download, but this isn’t really good enough. It’s going to be added to the TODO list, and dealt with after we’ve looked at dependency management.
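
For illustration only – once distributionSha256Sum is set, the wrapper performs this check itself – here is roughly what computing and comparing a SHA-256 hash looks like in plain Java. Both arguments (the path of the downloaded file and the expected hash in hex) are placeholders:

import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.security.MessageDigest;

// Illustrative only: hash a downloaded file and compare the result
// with a hash published by a trusted source.
public class ChecksumCheck {

    public static void main(String[] args) throws Exception {
        MessageDigest digest = MessageDigest.getInstance("SHA-256");
        try (InputStream in = Files.newInputStream(Paths.get(args[0]))) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                digest.update(buffer, 0, read);
            }
        }
        StringBuilder actual = new StringBuilder();
        for (byte b : digest.digest()) {
            actual.append(String.format("%02x", b));
        }
        if (!actual.toString().equalsIgnoreCase(args[1])) {
            throw new IllegalStateException("Checksum mismatch: got " + actual);
        }
        System.out.println("Checksum OK");
    }
}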

The latest git commit is cd8e97a. In the next part we’ll look at adding a continuous integration server.

Categories: infrastructure

Java Infrastructure Part 3 – A problem with compilation

In theory, compiling our Java class is straightforward: drop into the command line, use the javac command, then test by running the main method.

[Screenshot: compiling the class with javac and running it at the command line]

Which is fine on my laptop – I now know I can compile the code and run it. But problems can arise as the code in question becomes more complex, or if it needs to run on other machines. The latter is a certainty – putting aside failure of this laptop, I want to run this code on a server at some point. (That is, unless I decide to develop directly on the production machine. That seems such an appalling idea that I find myself wondering whether there is some bizarre case to be made for it.)

Using java -showversion reveals that I am using 1.8.0_72. The latest version at the moment is Version 8 Update 73, which was released on February 5th 2016 – I’m writing this on the 21st. There are two problems here.

  1. How do I make sure that this code is always handled with a consistent version of the Java SDK? I don’t want to risk inconsistent behaviour between different machines.
  2. How do I make sure that I am running the latest version of the SDK? Looking at the release notes for version 73 to see the differences between it and the version I’m using, I notice that there are some security patches I’m not taking advantage of.

This problem will occur with every tool that is used. A similar problem will occur when we start adding some dependencies to the software, but we will deal with that separately.

There is also a certain amount of configuration that remains implicit when I am running on a single machine. Right now this doesn’t matter much, but these sorts of problems become a nightmare as the software grows – what are my environment variables? What is the underlying OS? It would only take a few minutes for a developer to set up a new machine to run this code now, but as we add databases, continuous integration and so on, it becomes more difficult.

Consistency sounds like an obscure problem (and is low-risk for the Java SDK), but when it does arise, it’s vicious. You don’t want a bug on the server that can’t be easily spotted on development machines. If the development and production environments are the same then every bit of work carried out confirms that the code works as it should.

What are some options for dealing with these issues?

  • Document a target environment fully and allow people to follow that as closely as they want/need to. There are still problems when doing this, but it’s more than a lot of companies bother with.
  • Use a bespoke local machine build image – the question then becomes how to keep existing machines in sync with it. Over time, the machines diverge from the original image, or the image needs updating. This can be complicated by machines needing special builds for testing etc.
  • Find a way to develop cleanly using Docker/Vagrant or similar. The code is executed and possibly compiled within VMs. These can be rebuilt every time.

Build images and documentation are both useful first steps, but ultimately, the VM-based solutions feel right. At this point, I am going to put in some TODOs to cover this. It’s unsatisfactory, as there are now 5 of them (compared to just 11 lines of code, 5 of which are blank or single characters). This issue needs to be dealt with soon, but I want to put in a bit more structure to make this easier. However, as a stop-gap we should also note our current build environment in the README file.
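
Alongside the README note, a blunt fail-fast check is also possible. This is a sketch of the idea rather than something the project will necessarily adopt – the pinned version string is simply what my laptop currently reports, and would really belong in configuration:

// Illustrative fail-fast check: refuse to run if the JVM isn't the
// version the code was last tested against.
public class EnvironmentCheck {

    private static final String EXPECTED_JAVA_VERSION = "1.8.0_72";

    public static void main(String[] args) {
        String actual = System.getProperty("java.version");
        if (!EXPECTED_JAVA_VERSION.equals(actual)) {
            System.err.println("Expected JVM " + EXPECTED_JAVA_VERSION
                    + " but found " + actual);
            System.exit(1);
        }
        System.out.println("JVM version OK: " + actual);
    }
}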

The latest git revision is eb62957. In the next part, I’ll be adding an automated build tool.

Categories: infrastructure

Java Infrastructure Part 2 – A ‘simple’ Java class

Even though we’re focusing on infrastructure, we should have some code to play with. We’ll use a simple Java class, one that merely says hello, and build everything else around it. We will put this onto a server, with (eventually!) a pipeline to deploy it, and a user management system so it can say hello to specific users; but we’re not going to add any functionality beyond greetings until that is all working.

Our first version of the class is this:

package com.riddlefox.greeting;

public class Greeter {
    public static void main(String[] args) {
        System.out.println(greet("world"));
    }

    public static String greet(String name) {
        return "Hello " + name;
    }
}

And we create this in the src/com/riddlefox/greeting folder. It’s a simple class that would work in almost any version of Java. But it already makes a lot of assumptions. These aren’t necessarily problems, as long as we’re aware of them. What can we say about this first class and what it implies about the project?

  1. We have placed this into a fairly uninformative package/folder structure. At this point, there isn’t any need for sub-modules, particularly as there are no other files to distinguish this one from. A src folder in the project root and a package name of com.riddlefox.greeting are probably good enough for now.
  2. I haven’t added any Javadoc. Arguably, the class is too simple to need it yet, and I want to avoid the sort of Javadoc that simply repeats the method definition. We’ll add a TODO about adding Javadoc later on, when the project is a little larger and the method might be used without access to the source.
  3. I have misgivings around the use of the static keyword. It’s mainly there to make the main method concise. In terms of a single class this probably isn’t the end of the world but, if we add much more code, such issues of style will become important.
  4. I also have misgivings around the name. As the joke goes, there are only two hard problems in computer programming, and I’m avoiding one of these hard problems. Again, as a project grows naming becomes more important.
  5. String management is an issue here. Changing the strings requires recompiling the class. There is also no means of internationalisation. These can both be added to the TODO list.

The issue of String management is a difficult one. It’s good to be able to update the strings on an application without redeploying, particularly when you have a monolith that takes time to deploy. However, it also adds a level of obfuscation to the code. If the application is easy to update, then redeploying it might not be a problem.
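
To make that TODO concrete, here is roughly what the greeter would look like with its strings externalised – a sketch only, assuming a greetings.properties file on the classpath containing a line such as greeting=Hello {0}:

package com.riddlefox.greeting;

import java.text.MessageFormat;
import java.util.Locale;
import java.util.ResourceBundle;

// Sketch of externalised strings: the greeting template lives in
// greetings.properties (with greetings_fr.properties and so on for
// other locales) rather than being compiled into the class.
public class Greeter {

    public static void main(String[] args) {
        System.out.println(greet("world", Locale.getDefault()));
    }

    public static String greet(String name, Locale locale) {
        ResourceBundle bundle = ResourceBundle.getBundle("greetings", locale);
        return MessageFormat.format(bundle.getString("greeting"), name);
    }
}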

I’ve spent so long talking about this class that I am not going to actually compile it yet – that can wait until the next part, which will be devoted entirely to it. In the meantime, the latest version of the repository is on github.

PS – If you don’t know the joke, there are said to be two hard problems in computer science: cache invalidation, naming things, and off-by-one errors.

Categories: infrastructure

Java Infrastructure Part 1 – Version Control

If your code isn’t in version control, then it doesn’t really exist. It’s too easy to lose the code on a single computer – a hard drive failure, or maybe a mistaken rm command. And it’s also easy to make a change that breaks something and not be able to get back to a working version. At those times a stored copy is a lifesaver. The basic requirement of a Version Control System is to keep code safe. But, over the years it’s become much more than that.

I’ve used a lot of different VCSs – SourceSafe, RCS, CVS, SVN and git. At my first job, we used a series of network folders, with a Lotus Notes DB to keep track of who was working on each file. Different versions of the folders identified the different development environments, from development through to live. Promoting the site would involve copying code from one set of folders to the next.

At first, version control is about making sure the software is safe and providing a history. This in itself makes it invaluable. But tools like SVN and git make collaboration easier. Of all the tools I’ve used, git is the first one that I’ve loved.

A lot of the tutorials on git treat it like a distributed SVN. This may be helpful in getting started, but soon leads to confusion. My favourite tutorial is Git from the Bottom Up, which discusses git in terms of the objects it uses internally. That makes it much easier to understand: git is a time machine, allowing you to open up alternative timestreams, recombine them, and do it all without opening up paradoxes. As long as you understand what you’re doing.

Git adds a lot of great features: git bisect is great for finding where bugs were introduced; git stash helps when you need to change what you’re working on; and git detects moved files better than SVN does. But the best thing about git is the branching model. Rather than branching being something difficult, as it can be in SVN, git treats it as something that should be commonplace.

The big problem with version control systems is how you fit them into the company’s working methods. Git enables people to collaborate effectively, but it also provides challenges. This is a topic that deserves a whole post of its own. The Death of Continuous Integration is an excellent talk on the topic by Steve Smith.

So, before we write any code for our new project, we need to set up a repository. GitHub is a convenient place to host these repositories, and that’s where I’ll be putting the java-infrastructure code.


Our repository is created with a Java .gitignore file and a README. To that I am going to add a single file, a TODO. The README provides a quick overview of what this project is for. The TODO is a simple reference to track things that need to be added to this project. The first item in this file is a note to add a better issue-tracking system.

It’s not much of a project yet, but at least we know anything we add is safe in a repository. The current state can be found as commit 94d34c6.

Next up: writing some Java code.

Categories: infrastructure

Java Infrastructure Part 0 – The Long Way

I’ve been programming for a long time and worked with a lot of different companies. I’ve seen a range of architectures, organisations and processes. I started coding before the Agile Manifesto was signed, so I’m old enough to remember that projects were sometimes still successful under waterfall – but that’s another story.

Writing classes and putting applications online are easy enough. Most companies are working on well-understood problems. Despite this, two issues tend to emerge. The first is maintenance. A lot is written about refactoring and managing software, but it rarely works in practice. No matter how clever the devs, code tends to end up more complicated than it needs to be, and change becomes difficult.

The second issue is linked to the first, and that is infrastructure. It’s easy enough to write a new piece of code and put it live. It’s so easy that a lot of people focus on writing features for a new application. Deployment tends to be figured out in the closing weeks of the project. After all, the first deployment is relatively straightforward. The problem comes as things grow more complicated.

Once a piece of software is live and has users, it’s hard to switch deployment strategies. The Internet is now sufficiently established that it’s not appropriate to shut down the system every time you need to make a change. The first few deployments are simple and quick but, as the system grows, redeployment takes longer and longer – by which time there are a lot of other things competing for attention.

Adding infrastructure to a large project is a challenge. One doesn’t want to risk breaking those obscure sections of config files, placed there to handle one specific situation. Obsolete sections are left in the config because nobody is quite sure if a line does something or not. In the end, only one or two long-established developers are able to change the infrastructure. After they leave things become even more difficult.

What I want to do with this series – both on my blog and on github – is to build up a generic piece of software with simple Java code, but to build a rich infrastructure around it. I think there is a lot to learn from this – and at the end I’ll have a good base to work from with my own future projects. I hope to learn about making infrastructure flexible which, as I’ve said above, is a rare thing.

Java is easy, but being a professional developer requires much more: version control, continuous delivery/deployment, build management, monitoring, IDEs, logging frameworks, email management etc, etc. This is what I’m going to focus on.

Some years back, I studied deconstruction for my MA. At the start of the course, the Professor read a short Kafka story, ‘The Next Village’:

“My grandfather used to say: Life is astoundingly short. To me, looking back over it, life seems so foreshortened that I scarcely understand, for instance, how a young man can decide to ride over to the next village without being afraid that – not to mention accidents – even the span of a normal happy life may fall far short of the time needed for such a journey.”

After reading those sixty-six words, the Professor sighed. “We could spend all ten weeks on that piece”. I’m not planning to be quite that meticulous, but this is going to be quite detailed. Based on the notes I’ve made so far, no code gets compiled until Part 3.

Categories: springboot

Deploying sites quickly on Spring Boot

According to the author Thomas Mann, “A writer is someone for whom writing is more difficult than it is for other people”. And, working as a tech-lead, I found myself turning into someone for whom writing Java was more difficult than for other people. When a piece of software might need supporting for years there are a lot of things to consider.

One of the attractions of Spring Boot is that it offers a way to get sites live very quickly. In practice, a production environment presents so many potential issues that a Spring Boot deployment still requires a significant amount of work. A simple hobby site… should, in theory, be very quick.

So, I’m going to try to put a Spring Boot site live and see how long it takes. The site will be simple – a map of Brighton where users can add points of interest. I’m going to use Heroku to simplify deployment and cobble together some Javascript.

I’m not racing to get this live, but taking this at a leisurely pace – and the hours won’t be contiguous. Let’s see how it goes.

Hour 1

  • Set up a new git repo.
  • 0c7b180 – Created a basic Spring Boot project, using Accessing JPA Data with REST as a template. Rather than storing Person objects, I used PlaceOfInterest objects (a sketch of what that looks like follows this list). Within 15 minutes I could write and read from an in-memory JPA datastore.
  • 2efcede Added some static pages to the Spring Application, which I had lying around already.
  • 1cad492 Next was setting up Heroku (which meant retrieving an ancient login). The free service should suffice for the basics. Heroku have a guide to deploying Spring Boot applications. This didn’t mention the Procfile, which is described in a post by Nicholas Paul Smith. Testing the app through curl, it was persisting data, albeit to an in-memory database.
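
For anyone following along, the entity and repository look roughly like this. It’s a reconstruction based on the Accessing JPA Data with REST guide rather than the actual commit – the package and field names are my guesses:

package com.riddlefox.poi;   // hypothetical package name

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;

// A guessed reconstruction of the entity, following the guide's Person example.
@Entity
public class PlaceOfInterest {

    @Id
    @GeneratedValue(strategy = GenerationType.AUTO)
    private Long id;

    private String name;
    private double latitude;
    private double longitude;

    // getters and setters omitted for brevity
}

The repository lives in its own file, and Spring Data REST exposes CRUD endpoints for it without any controller code:

import org.springframework.data.repository.CrudRepository;
import org.springframework.data.rest.core.annotation.RepositoryRestResource;

@RepositoryRestResource
public interface PlaceOfInterestRepository extends CrudRepository<PlaceOfInterest, Long> {
}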

So, in the first hour, I have a Spring Boot project running on Heroku. The next step is to add a persistent Postgres database.

Hour 2

Hour 2 was spent in Emporium Cafe, struggling to connect the Spring Boot application through Hibernate to a local Postgres database. Since I was trying to do things fast, this was something of a hack-and-slash effort – for example, resorting to creating a hibernate_sequence object manually. Setting up Postgres was a bit of a drag, as I’m used to MySQL.

The database URL was passed via a JDBC_URL environment variable as I’ll need this for Heroku. At the end of the hour I had a local application writing to the postgres database. The next session will start with tidying up the source code that was added and committing it.

  • 179356f – Added a basic SQL file. This was generated by Hibernate with the addition of a sequence. This needs to be tidied up.
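
One way of wiring that up explicitly is a small configuration class. This is a sketch of the mechanism rather than what the commit actually did (Spring Boot can achieve the same through application.properties), and DataSourceConfig is a name I’ve invented:

package com.riddlefox.poi;   // hypothetical, as above

import javax.sql.DataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.datasource.DriverManagerDataSource;

// Sketch: build the datasource from the JDBC_URL environment variable
// so the same binary works locally and on Heroku. DriverManagerDataSource
// is unpooled, which is fine for a hobby site.
@Configuration
public class DataSourceConfig {

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource ds = new DriverManagerDataSource();
        ds.setDriverClassName("org.postgresql.Driver");
        // e.g. jdbc:postgresql://localhost:5432/poi?user=poi&password=secret
        ds.setUrl(System.getenv("JDBC_URL"));
        return ds;
    }
}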

Hour 3 (and a bit)

  • 30b5346 Started by committing the database work from the day before.
  • 4fdc2c7 Added a new file to keep track of what needs doing for this site.
  • 2a217b0 I added a new layer to the map, which loaded tab-separated data from a new controller. We’re using Openlayers 2 for expediency, as this simple format is not available in Openlayers 3. This was a rather frustrating process: the markers were not displaying, but there was no error. This commit has a single marker appearing about 400 miles south of Ghana.

The next session will start with working out why Openlayers is not displaying the marker where I expected it, in Sussex, England. This was annoying enough to keep me looking into it for another 15 minutes, so hour 3 was a little longer than planned. As usual with mapping bugs, the problem was a projection that hadn’t been set.

Hour 4

A slightly shorter hour, to make up for overrunning the previous night.

  • ba69d68 Added the projection to the POI layer, and generated the TSV file from the database. This means that the markers on the map are now connected to Postgres.
  • 74b718d Set the app up on Heroku – which required changing the database URL value and, for some reason, reinstalling some Heroku command-line plugins, which slowed things down. I also had to add database objects to the external database.
  • 1a436bc Cosmetic changes, including a title for the page and removing the default favicon – although this is still appearing for some reason.

At the end of the fourth hour, I have a site running on a remote server that can read/update a database to display locations. Given my slowness setting up postgres and fixing the Openlayers issue, it looks like a basic CRUD site could be put live in a couple of hours. Next up: adding a form to the page for new data.

Hour 5

At this point, I’m into playing with Javascript, setting up a form to submit data. Clicking on the map now updates a form, and submitting the form adds entries to the database. It’s a little clunky – the new object doesn’t appear on the map without refreshing; the form looks ugly; and there is an error handler firing on the AJAX request. A slightly slow hour, getting to grips with Javascript, but the main user-facing functionality is now there.

  • 4452c5d Added a form to the site so that new places-of-interest can be submitted.

To Be Continued?

There are a few obvious things to do next. The TSV file should only show visible points, the page needs tidying and I’d like to use a proper domain in place of the Heroku-supplied one. I’m very pleased with the speed and simplicity of progress so far.

Categories: openstreetmap

Finding Ley-Lines with PostGIS

I’ve recently been playing with PostGIS. This post will summarise a simple attempt to manipulate data and draw it on top of OpenStreetMap. I wanted to produce a Brighton version of Steven Kay’s Pub Ley Lines. As you will see, the outcome was less interesting than the process.

What follows is, basically, a How-to-Draw-An-Owl tutorial. It summarises the steps I usually look up and is intended to share with a few specific people. If you find yourself here via Google and want more information, leave a comment and I will add more detail.

1 – Download data from OSM

The first thing I needed was the data from OpenStreetMap, which contains pub locations among its points-of-interest. There are a number of options. Downloading direct from OSM failed when I last tried it, but I had an older version of the data available.

2 – Set up PostGIS and osm2pgsql

I’d previously installed PostgreSQL and PostGIS on my laptop, but somehow the installation had become broken and couldn’t easily be repaired or uninstalled. I should fix this, but I wanted to get on with this experiment. I’ve been meaning to set up AWS for some time, and using a micro instance on Amazon allowed me to get a version of PostGIS running very quickly.

I created a new EC2 micro instance based on the basic ubuntu instance, ran ‘sudo apt-get update’ followed by ‘sudo apt-get install osm2pgsql’ and I had everything I needed.

AWS is awesome, and I love being able to run up an instance for a small task and throw it away once I’m done.

3 – Create PostGIS user and table

Setting up a new user on Postgres is certainly less of a hassle than doing it on MySQL, but there are a few gotchas – such as needing to add a line to the /etc/postgresql/9.3/main/pg_hba.conf file. Also, when creating a new database, remember to enable PostGIS with the command “CREATE EXTENSION POSTGIS”.

4 – Load the OSM data in Postgres

Having transferred my PBF file to the AWS instance using scp, I could then load the data. The command here is a little different to the one I used previously because of this server’s limitations:

osm2pgsql -U ley -d ley  --slim --cache-strategy sparse --number-processes 4 brighton.pbf

5 – Create a table containing all of the ley lines

I pretty much followed the recipe given by Steven Kay here.

6 – Extract the data

This time, rather than a tab-separated format, I selected the WKT (well-known text) format for all the ley-lines with more than 8 pubs. I’m sure there are better ways to extract this. The query used was:

psql -U ley -w ley -c "COPY (select st_astext(st_transform(geom,4674)) from leys where ct> 8) TO STDOUT WITH CSV" > lines.txt

This may not be the most portable format for the data but I can bully it into something Openlayers can use.

7 – Create a page to display the data

Since the data is constant and the page will be low-traffic, I am using OSM via static files to display the results. The linestring WKT representations of the leys have been copied into the javascript file rather than being loaded from a file. All quick-and-dirty, but it has worked. The source is on github and the results are online.

Categories: continuousdelivery

An Enterprise Java Hello World

The ‘hello world’ program is of great importance to developers. It’s usually the first thing written when using a new language or framework, and pretty much the simplest thing you can do: output 12 characters (assuming a newline). Writing the hello world program in C, the first time I’d written a compiled program, was an incredible moment for me: I could make the computer do something. This simple idea has an entry on Wikipedia and a list of examples.

Hello World is supposed to be simple, and there are a few jokes about Java EE hello world programs mocking the framework for being long-winded and unwieldy – see, for example, item 8 in The top Java viral jokes of 2014. In its defence, the contents page shown in the article covers a lot more than code, and is intended to get someone up and running from a bare-bones structure. But Java definitely doesn’t have the concision of Python’s print "Hello, world!"
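
For comparison, the plain Java version is hardly enterprise-grade suffering, but it shows the ceremony the jokes riff on:

// Everything Java requires before it will print a single line.
public class HelloWorld {

    public static void main(String[] args) {
        System.out.println("Hello, world!");
    }
}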

One of the proud boasts for Spring Boot was how simple it was, with an example application that fitted into a single tweet (the link includes instructions for running it):

[Image: the tweet-sized Spring Boot example application]

As easy as it is to get a Spring Boot application working, this is only part of the development life-cycle. By focussing on the output, it’s easy to miss a lot of the non-functional aspects of an application. You can deploy a new piece of software to a server without making it easy for developers to work with. This is particularly dangerous with non-technical stakeholders, who only see these non-functional requirements indirectly. It’s hard to prioritise infrastructure against features and bug-fixes, and such requirements are also incredibly difficult to fit into an application after it goes live.

I’ve recently set up a new application. The initial project was produced using Spring Initializr but, rather than start cutting code, I’ve been thinking about what else I need for a basic application. The essentials include:

  • Source control – and, preferably, some sort of branching and versioning strategy
  • Continuous integration and related processes to make sure that new commits don’t break tests
  • A deployment process allowing the same binary to be deployed to immutable servers, preferably using some sort of container or virtualisation, with the binary produced once and stored in a centralised location
  • A means of externalising configuration for different environments
  • Some sort of monitoring and log management

It’s easy to drop a Spring Boot jar file onto a server and run it, but that’s not going to work in the long term, and the easiest time to put these infrastructural items in place is at a project’s start. The more complicated things become, the harder it is to add them in.

In short: an application isn’t just the software you are writing: it’s also the infrastructure that you put around it. In order to release and maintain an application you need to do a certain amount of work beforehand. Your hello-world application isn’t ready for production until these things are done.

Categories: brightonjava

Brighton Java – Continuous Deployment

[Photo: I had a break this month as Mr. Stanier hosted the meeting]

Last Wednesday we had Brighton Java’s March event. It was another good turn-out, with about 35 people coming along to hear Jose Baena talk about his experience of continuous delivery.

Hearing about other people’s experiences with introducing a technology is incredibly valuable. The talk was followed by a discussion, chaired by James Stanier. We’ve not often used this format but it drew out some interesting discussion points.

Jose’s presentation was great (especially the hypnotic footage of an apple-slicing machine), with some useful suggestions on how to get Maven, Nexus, Ansible and Jenkins working together – with Jenkins acting as the driving force. There was also a detailed explanation of the importance of versioning.

The discussion underlined something I’ve been thinking about for a while – that things like continuous delivery need to be put in place early on, and that these sorts of infrastructural things are hard to retrofit. But that’s a story for another post.

Dan Chalmers has also posted a response to the meet-up: Continuous Deployment and Developers on-call. Dan does a good job of explaining the issues around making developers responsible for their code. I still think this is important, but making it work in practice is a subtle, difficult problem.