
Java Infrastructure Part 6 – Unit testing

I’ve been using JUnit for about fifteen years. In that time it has become a central part of Java development. While that’s great, unit testing is still problematic. Most people agree that automated testing is necessary, but its exact form is more controversial.

There are two main candidates for a unit testing framework: JUnit and TestNG. These packages have very different aims – in Senior Developer interviews I would sometimes ask about the difference between them. I like the question as there are a lot of right answers, and the chosen response says a lot about the developer.

The simplest answer is that they are broadly similar. They do, however, take their assertion arguments in a different order, which has led to TestNG providing a separate class that follows the JUnit assertion order. Some companies I’ve worked for have solved the problem of choosing between JUnit and TestNG by going for both – and then used both argument orders for TestNG.
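
To make the difference concrete, here’s a minimal sketch – the Greeter class and its greet() method stand in for whatever is being tested:

import org.junit.Assert;
import org.testng.AssertJUnit;

public class AssertionOrderExample {

    public void junitOrder() {
        // JUnit: expected value first, actual value second
        Assert.assertEquals("Hello, world!", new Greeter().greet());
    }

    public void testngOrder() {
        // TestNG: actual value first, expected value second
        org.testng.Assert.assertEquals(new Greeter().greet(), "Hello, world!");
    }

    public void testngWithJunitOrder() {
        // TestNG's compatibility class restores the JUnit argument order
        AssertJUnit.assertEquals("Hello, world!", new Greeter().greet());
    }
}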

There’s a good JUnit/TestNG comparison on Mkyong.com which points out that the main difference between the two is that JUnit lacks some of the features included in TestNG. But the best answer (which I never received) is that the philosophies of the two are very different. JUnit is a unit testing framework, and as such does not encourage certain practices – for example, having tests that need to run in a particular order. JUnit 5 Alpha was recently announced, and the new features don’t appear to violate this goal.

There is often confusion about what unit testing actually is. Unit tests should be small, independent, deterministic and low-level. They should have no direct dependencies on files, databases or the underlying OS (random number generation, the current time, etc.). Tests at higher levels are important too, but these should be kept clearly separate. By not following these rules, developers risk producing slow, brittle unit tests that require complicated set-up. Good unit tests actually force the code to be written in a more testable way, reducing dependencies and using smaller elements.
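
As an illustration of the ‘current time’ point, here’s a minimal sketch of injecting a clock so that tests stay deterministic – this class is hypothetical, not part of the project:

import java.time.Clock;
import java.time.Instant;
import java.time.LocalTime;
import java.time.ZoneOffset;

// The clock is injected rather than obtained via LocalTime.now(),
// so a test can supply a fixed clock and get a repeatable result.
public class TimeOfDayGreeter {
    private final Clock clock;

    public TimeOfDayGreeter(Clock clock) {
        this.clock = clock;
    }

    public String greet() {
        return LocalTime.now(clock).getHour() < 12 ? "Good morning" : "Good afternoon";
    }

    public static void main(String[] args) {
        // In production code: the system clock.
        System.out.println(new TimeOfDayGreeter(Clock.systemDefaultZone()).greet());

        // In a unit test: a fixed clock, so the result never varies.
        Clock nineAm = Clock.fixed(Instant.parse("2016-03-01T09:00:00Z"), ZoneOffset.UTC);
        System.out.println(new TimeOfDayGreeter(nineAm).greet()); // always "Good morning"
    }
}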

Adding tests to the project is simple enough. There is a new class, GreeterTest, and a few lines in the build.gradle file. The changes to the build include a JUnit dependency, which in turn means adding a link to the central Maven repository. Gradle is written so that dependencies can be added with little trouble, but this does bear thinking about a little. How does one know that the files being downloaded have not been tampered with? Are we accessing these repositories efficiently? This will be discussed in detail later, but meanwhile we need to add a note to the TODO page.
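
The build.gradle additions look something like this sketch – the JUnit version shown is an assumption, so check the commit for the exact lines:

repositories {
    mavenCentral()
}

dependencies {
    testCompile 'junit:junit:4.12'
}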

Some years back, I gave a talk on unit testing. It lasted about an hour, and still only scratched the surface. While unit testing is easy to start with, it raises a lot of issues around how a project will work. I’ve seen company after company get tangled in unit tests. Simply adding JUnit to a project is not enough; consistency and real rigour are needed around how tests are used.

An example of this is the use of set-up methods in tests. These become complicated, and end up spawning test subclasses and helper classes, making fixing tests a chore. Better to have the set-up in each individual test method, and if that becomes unwieldy, to examine our object model. This means that a broken test can be read from start to finish and understood on a single IDE screen. Yes, it produces duplication, but the aims of test code are very different from those of production code.
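
As a minimal sketch of that style (the Greeter constructor and greet method here are hypothetical):

import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class GreeterTest {

    @Test
    public void greetsInEnglishByDefault() {
        // All set-up is local to the test, so it reads top to bottom.
        Greeter greeter = new Greeter("Hello");
        assertEquals("Hello, Alice!", greeter.greet("Alice"));
    }

    @Test
    public void greetsInFrenchWhenConfigured() {
        // Duplicated set-up, deliberately – each test stands alone.
        Greeter greeter = new Greeter("Bonjour");
        assertEquals("Bonjour, Alice!", greeter.greet("Alice"));
    }
}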

Introducing unit testing also requires the processes and infrastructure to support it. We’ve added Jenkins to the project, and broken unit tests will cause the build to ‘break’, to turn red. Continuous integration relies on tests running quickly, to allow a tight feedback loop for developers. Over time, slow builds become slower, and never quite get fixed. Also, the rules about not committing to a broken build need to be taken seriously. Too often, companies have unit test suites that break in specific ways, and developers are forced to understand when a broken build matters and when it is acceptable. This is far too confusing. Using unit testing means following certain rules and methods of development.

Adding JUnit to Jenkins is relatively simple. Make sure that the JUnit plugin is installed, then edit the build for the project to publish the JUnit results. The build will now fail if the tests fail, and full details can be seen within Jenkins.
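
For reference, a sketch of the job configuration – the report path is where Gradle writes its JUnit XML at this version, so check it against the actual build:

Post-build Actions → Publish JUnit test result report
    Test report XMLs: build/test-results/*.xml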

[Screenshot: the Jenkins JUnit plugin]

The latest commit is a3b7fa1. In the next part, we’ll be looking at adding coverage checking to the project.

 


Java Infrastructure Part 5 – Introducing Jenkins

An interesting effect of writing a series of posts like this is how it clarifies your thinking. I originally planned to introduce a continuous integration server after Javadoc, JUnit and so on. But, as I’ve researched and thought about this, I’ve decided that a continuous integration server is a fundamental tool for development. It should be at the heart of any project.

Good development requires automation. Rather than have any steps carried out manually, we should automate them from the start. I’ve known colleagues who saw Jenkins as the powerhouse of an organisation: one could have hundreds of jobs, not just passively monitoring repositories to run builds, but promoting code, running reports and even deploying software. Jenkins provides the plugins and the framework for a fine-grained permissions system based on specific tasks, rather than on all the underlying grants and credentials those tasks need.

The problem with CI is that it takes a significant amount of investment and commitment to put in place retrospectively. An organisation that is able to deploy code manually may not feel excited about spending time and energy just to simplify those builds, even when deployments become unwieldy enough to prevent growth. CI also requires discipline – it takes a lot of courage to stop a large organisation until failing unit tests or transitory broken builds are resolved. It’s far easier to carry on with a broken system that seems to work than to push towards an efficient, modern build.

Jenkins runs inside its own application server, separate from the software being built. It is available for download from http://jenkins-ci.org/, where there is a Java Web Archive available. The current version is 1.650 and, as discussed in the last section, we need to note this for later use as we scale up.

We need to introduce and document a new environment variable here, JENKINS_HOME, specifying the location where Jenkins stores its internal files. A major issue with Jenkins is that it doesn’t do a good job of separating code from configuration. This poses the question of how to run, maintain and restore Jenkins instances. I will avoid the question of restoration just now – I suspect it will be much simpler after virtualisation is introduced.

The command to run Jenkins is simple: java -jar jenkins.war. The server can then be accessed at its default location, http://localhost:8080/. Running Jenkins on a local machine is not really satisfactory in the long term, but it will do for now.
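
Combining this with the JENKINS_HOME variable mentioned above, launching looks something like this (the directory is just an example):

> JENKINS_HOME=~/jenkins_home java -jar jenkins.war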

Some initial configuration is required. Again, for the time being, this is system-specific and can be found by clicking Manage Jenkins, then Configure System. We can point to the current JDK or download a new one. The location of the JDK to be used is another ambiguity that must be dealt with.

I will skip over some of the steps here – there are many good tutorials about Jenkins available, including a very useful O’Reilly book, which I have been using as a reference. The main steps I followed were:

  1. Install the git plugin (version 2.4.2)
  2. Install the gradle plugin (version 1.24)
  3. Install the blue/green balls plugin. By default, Jenkins shows successful builds as blue. The Jenkins blog notes that this plugin is in the top ten – and also points out that the red/blue colour scheme is a Japanese thing.

Having set up the basic environment, we add a new freestyle project to build our code. We use gradlew, with both the clean and build tasks.
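
The build step itself, as a sketch (field names as they appear in the Gradle plugin):

Build → Invoke Gradle script
    Use Gradle Wrapper: yes
    Tasks: clean build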

We test the build by running the jar, and that seems to work just fine.

Happy green build

So, there we have it, a slightly clunky local build of Jenkins. I wouldn’t say that this Jenkins set-up is particularly good. However, even with these limitations, it provides a heartbeat for the upcoming stages of the project. If you’d like any more detail on steps that I’ve skipped over, please leave a comment and I’ll edit the text.

The latest commit on GitHub is 392d98e.


March’s Brighton Java – Scala Types


On Wednesday it was Brighton Java, where Richard Dallaway gave a preview of his QCon session, Types working for you, not against you. Richard has spoken at Brighton Java twice before (an Introduction to Scala and Lambda Expressions in Java 8) and it was good to welcome him back.

While I don’t know Scala at all, the talk was a fascinating insight into the language. By focussing on types, the talk gave an idea of Scala’s power and flexibility. Richard described Scala as an ‘unopinionated language’, allowing for a great freedom of style. It was also interesting to learn a little about Monoids and the ways they could be used.

The talk reminded me that I really need to look into Scala more. While I recently became excited about Groovy, I found this quote from James Strachan, the language’s designer: “if someone had shown me the Programming in Scala book… back in 2003 I’d probably have never created Groovy”. Richard listed some good starting points for investigating Scala – see the threads here.

Wednesday was the first Brighton Java meeting held at the Brandwatch HQ. We had a record number of signups, with 50 people turning up on the night. I’m pleased to see this much engagement from Brighton’s Java community.

On a personal level, I’m delighted with how Brighton Java has grown. The audience has increased massively since the first meeting back in August 2012. The ongoing support of Brandwatch and the Skiff have been fantastic. I’ve also really appreciated the help of my co-organisers Luke and James. Things have really got going since they came on board.

Now to organise our April events. Yes, for the first time we’re going to be doing two events in a month. Let’s see how this goes.



Java Infrastructure Part 4 – The Build System

It’s about time we added a build tool to the project. It’s possible to create jars by hand, but that soon becomes time-consuming and error-prone. Having a repeatable build process launched with a single command is pretty much essential to doing anything interesting with software.

Over the years I’ve used Make, Ant, Maven and Gradle. The one of these I like least is Ant: it seems to produce massive, thousand-line monstrosities that are unreadable and inscrutable. And while Ivy is fairly similar to Maven’s dependency management, it doesn’t seem as natural to me. Having said that, Maven can also get unwieldy, with simple builds that get out of hand.

I’ve not used Gradle a great deal, but it seems an obvious choice. A significant reason is its success – Gradle is the standard tool for Android Studio and Spring. Popularity is often under-rated as a reason for choosing tools or frameworks, but it means examples and expertise are easier to find. There may be many good reasons for lesser-used frameworks, but knowing there is a vibrant community around a platform is a major plus.

However, I’m still cautious about Gradle. I’ve found some of the plugins I’ve used unhelpful, with the missing options harder to find than they were with maven. I also find the documentation focuses too much on how to do certain tasks rather than explaining the underlying concepts and assumptions. On top of that is a growing suspicion that Groovy may result in scripts that are write-only, impossible to read back later on, just like Perl scripts used to be.

(There’s an example in the documentation of the power of dynamically-generated tasks and their potential for chaos. The script

4.times { counter ->
    task "task$counter" << {
        println "I'm task number $counter"
    }
}

creates four tasks, which can then be called as

> gradle -q task1
I'm task number 1

I can see some powerful uses for this, but I can also see myself struggling to work out where on earth a failing task comes from.)

Despite some teething problems with the Artifactory plugin at work, I’ve enjoyed using Gradle so far. I love Groovy for its concision and charm, and there’s an optimism to using a new tool, particularly when the documentation explains how much better it is. It may turn out that Maven would be a better choice but, because we’re working on infrastructure rather than code, we should have a lot more freedom to change things later.

Gradle uses the same principle of convention over configuration as Maven. Past experience tells me that it’s easier to work with the grain of such conventions than to fight the tool, so we will move our source directory from src/ to src/main/java/ in line with this.
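
For reference, the layout the java plugin expects – the test directory will matter when unit tests are added in a later part:

src/main/java/    production sources
src/test/java/    test sources
build/libs/       where the built jar ends up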

Because we’ve used the standard directory layout, the initial build script is extremely simple. In fact, it’s just a single line in our initial build.gradle file:

apply plugin: 'java'

Running the command ‘gradle build’ results in the jar file being built. Nice and straightforward – but I feel a slight sense of nervousness that so much happens with a single command. For example, if we had not moved the source directories, Gradle would still happily produce a jar file, just one with nothing in it.

Introducing a new tool means something else to track. As well as noting the current version in the readme and TODO files, we can use the mechanism Gradle offers for reducing the risk of mismatched versions – the Gradle wrapper. This is a script that checks whether the required version of Gradle is available on the local machine. If not, that version is downloaded and stored locally. This requires us to add a new wrapper task to the script, then execute the gradle wrapper command.

task wrapper(type: Wrapper) {
    gradleVersion = '2.11'
}
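
With that task in place, the wrapper is generated once and then used for builds:

> gradle wrapper
> ./gradlew build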

The wrapper adds several new files – the gradlew and gradlew.bat scripts, as well as a jar file and configuration in the gradle/wrapper folder. These are intended to be committed to git, so that anyone building the project in future can use the correct version of Gradle via the gradlew command. The distribution itself is downloaded and stored centrally, so that it can be used by other gradlew scripts as needed.

However, this convenience introduces a new issue, one we will face again when we introduce dependency management: how do we make sure that the code we download is safe? There’s an interesting discussion of risk in a post called How to Take over the computer of any Java developer. Basically, we need to make sure that the code we download has not been tampered with.

A basic level of security is provided by the distributionSha256Sum property, which is added to gradle-wrapper.properties and checks that the zip file downloaded from http://services.gradle.org/distributions/gradle-2.11-bin.zip is the one expected. Of course, this in itself requires finding “the SHA-256 hash of a known Gradle distribution”. We’d probably be OK trusting the (HTTP) download, but this isn’t really good enough. It’s going to be added to the TODO list, and dealt with after we’ve looked at dependency management.
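
As a sketch, the property sits alongside the wrapper’s existing settings – the hash here is a placeholder rather than the real checksum, which would need to come from a trusted source:

# gradle/wrapper/gradle-wrapper.properties
distributionUrl=http\://services.gradle.org/distributions/gradle-2.11-bin.zip
distributionSha256Sum=<sha-256 hash of gradle-2.11-bin.zip>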

The latest git commit is cd8e97a. In the next part we’ll look at adding a continuous integration server.