Categories
programming

Lifelong learning in software development

I always wanted to have a Brighton Java talk on lifelong learning. The techniques, tools and fashions in software development are constantly changing – how do you keep on top of this churn while doing an often-stressful job and maintaining the family and social life you have a right to? I never found anyone to give this talk, and now I don’t need to, as a fantastic talk by Trisha Gee called Becoming a Fully Buzzword Compliant Developer has been released on InfoQ.

In the talk, Gee sets out a simple step-by-step approach to learning and career development, without ignoring how expectations around this affect often-excluded groups in IT. (Remember: asking for an interview candidate’s GitHub profile makes a lot of assumptions about your potential employees and their free time.)

Gee also looks at how quickly new ideas are absorbed into the mainstream of development; how to discover and assess new buzzwords; the value of real-world meetups (like Brighton Java!); and the importance of “Enough knowledge to blag your way through a conversation in the pub.”

Sometimes, when discussing training and development outside work, developers point out that there is no requirement for, say, HR staff or office managers to do their job as part of a hobby. But look at it another way: modern careers last half a century or so. You want to find a way of engaging with software development and lifelong learning that is exciting enough that you want to keep doing it. Even maintaining the simplest hobby site can help you keep up with the latest new things.

There is also a flip-side to Gee’s talk which is less often discussed – how little learning enterprise Java developers can get away with, using the slow pace of change to avoid any self-development. Mobile and JS devs tend to engage much more because their careers depend on it (an Android dev who does not use the latest APIs will soon be unemployable). New ideas and tools can take a long time to filter through to the enterprise, and the downside is that it can take even longer to get them right – TDD, agile and microservices suffer in enterprises due to the lack of an engaged, interested and playful attitude among devs and managers.

Categories
programming

The TDD Lie

The requirements for pretty much every developer job I see these days include Test Driven Development. Which is exciting – everyone recognises the importance of unit tests.

In my previous post I talked about how often companies claim to do agile but don’t succeed in practice. And I see the same thing happening with test-driven development.

It’s easy to tell which companies want to use TDD but don’t do it in practice: ask what percentage coverage they have for their code.

In its purest sense, TDD demands that tests are written before any new code is written. If you’re coding in this way (as suggested in Kent Beck’s Test Driven Development book) your coverage should be close to 100%. Of course, there are other ways one might approach TDD, such as writing business tests before the code. This is definitely better than adding a few tests at the end of a project.
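As a minimal sketch of that test-first rhythm (JUnit 4, with a hypothetical PriceCalculator that doesn’t exist until the test demands it):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class PriceCalculatorTest {

    // Written first: this won't even compile until PriceCalculator
    // exists, and won't pass until the discount rule is implemented.
    @Test
    public void appliesTenPercentDiscountToLargeOrders() {
        PriceCalculator calculator = new PriceCalculator();
        assertEquals(180.0, calculator.discountedPrice(200.0), 0.001);
    }
}

Grow every class this way and coverage approaches 100% naturally, because code only exists where a test asked for it.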

But if you’re tracking the coverage and have chosen a figure less than 100%, then a choice is being made about which code is not covered. I’ve never seen a standard coverage threshold work out well. The obvious indication of this is how often the coverage of projects sits just over the required threshold.

I’m a great believer in high coverage (while acknowledging its limitations). And if you don’t grow your code alongside the tests, you tend to end up with code that is difficult to add tests to afterwards.

If you’re listing TDD in job requirements and not practising it, then it’s worth asking why not. A lot of the time it’s a fundamental limitation of the company – we know we should have tests, but we’re rushing. It’s the same problem many companies have with agile – these things are obviously a good idea, but difficult to do in practice.

As Bob Martin pointed out, “You know what you believe by observing yourself in a crisis. If in a crisis you follow your disciplines, then you truly believe in those disciplines. On the other hand, if you change your behaviour in a crisis, then you don’t truly believe in your normal behaviour” (from chapter 11 of The Clean Coder).

If you know you should be using TDD but aren’t, then you need to ask why not. And, if you can’t fix that, stop lying that you’re doing TDD – admit what you’re actually doing and focus on improving that.

Categories
programming

How to Build a Webapp in Java

tl;dr: building a webapp is complicated – particularly if you’re doing it right. JHipster offers a promising solution.

I was a little shocked recently when I realised that I didn’t know how to make a web application.

It’s not like I’m inexperienced – I’ve been working on the web since 2000 – but my job involves focussing on specific areas. Even when helping with strategy, I have very little input into the front-end. Being responsible for an application going live and doing it all myself are very different things.

Part of the problem is how intimidating front-end development has become, with flurries of Javascript frameworks, UI libraries and strange pre-compilers. Angular has two versions that are so different as to be incompatible, but which have similar names. Tutorials date quickly, and beginners can’t tell whether they are current. There is a huge amount of knowledge needed to build a web application.

Now, obviously, as the project website says, Spring Boot “makes it easy to create stand-alone, production-grade Spring based applications that you can ‘just run’”. My initial impulse was to cobble something together from Spring and Thymeleaf, which would allow me to get something live. But this is an unsatisfying solution. Server-generated template pages don’t feel particularly powerful, even with a good MVC framework. They have several problems:

  1. They tend to have code for the view tangled up with the application, with no clear boundary
  2. Coupling the templates to the server-side code misses the separation offered by architectures like REST. Adding, say, a mobile client would involve a lot of hassle to get the two different front ends working.
  3. More modern architectures are available for the front-end. Using REST and Angular allows rapid development of flexible front ends, less coupled to the backend architecture.

Another question that arose was user management and security. This is one of those things that every site needs, but which tends to be tedious to code (as well as inviting errors). Any decent user-management system needs account creation, password resetting, different roles and so on. For most projects, these are going to be the same. My basic assumption was that Spring Security would be useful here, but that is really a framework for creating a security framework, and there are no obvious mature implementations. Setting up email account confirmation is a drag.

I weighed up a few options for authentication, particularly since I wanted one that supported a stateless web server. After speaking to a friend, JWT seemed to be the way to go, but this still looked like a lot of work. And I’d not even started grappling with the whole front-end stack.
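For a flavour of the work involved, here is a minimal sketch of issuing and verifying such a token with the jjwt library (the key handling and expiry are illustrative assumptions; a real system needs proper key management and refresh logic):

import java.util.Date;

import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;

public class TokenService {

    private final String secret; // assumption: an injected, base64-encoded key

    public TokenService(String secret) {
        this.secret = secret;
    }

    // Issued after a successful login; the server keeps no session state.
    public String issueToken(String username) {
        return Jwts.builder()
                .setSubject(username)
                .setExpiration(new Date(System.currentTimeMillis() + 3600_000)) // 1 hour
                .signWith(SignatureAlgorithm.HS256, secret)
                .compact();
    }

    // Called on each request; throws if the token is invalid or expired.
    public String verifyToken(String token) {
        return Jwts.parser()
                .setSigningKey(secret)
                .parseClaimsJws(token)
                .getBody()
                .getSubject();
    }
}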

I started working in a new office recently and a colleague suggested JHipster as a solution to my problems. I’d heard about the technology via InfoQ but hadn’t paid it much attention. Part of this was the name, which sounded insincere and jokey. And my cursory look had suggested it was ‘just’ Angular with Spring Boot. I’d missed something very interesting.

According to the JHipster mini-book, JHipster allows Java devs to “be hip again”. The knowing irony of the framework’s name is irritating, but this idea of being ‘hip’ is a good one. Web development has changed a great deal, and JHipster provides a simple way to get started with up-to-date technologies, combining Bootstrap, Angular (or React) and Spring Boot. The project started back in October 2013, with the first release in December of that year.

JHipster is similar to Ruby on Rails, providing rapid development of a stack based upon entity definitions. The ‘Hipster’ part of the name refers to the need to keep up with modern tools. JHipster produces the plumbing and boilerplate for a whole stack, including a variety of best-of-breed tools, and it adds in a basic user-management system for free. JHipster prepares multiple files for each entity:

  • Database tables
  • Liquibase changes
  • JPA entity definitions
  • Spring Data repository
  • Spring REST controller
  • Angular route, controller and service
  • HTML page

The best way to get a flavour of how JHipster works is to look at Matt Raible’s 16-minute demo of building a blog application, Get Started with JHipster 5.

Full-stack development is hard work, even with JHipster – you need to be familiar with both Spring and Angular. But this does put you a lot closer than ever before to putting a full-stack application live using best-of-breed technology.

Categories
books programming

Book review: Beyond Java by Bruce A Tate

tl;dr: things looked bleak for Java in the mid-noughties, but the language and ecosystem have undergone massive changes, allowing Java to survive and thrive.

My new job has a daily commute, which means time to read. I’m focussing on tech books and recently picked up Bruce A Tate’s Beyond Java. I originally read this in July 2006, the year after it was published. The book described why Java was doomed, with Tate saying in the introduction, “Java’s reign will end. It’s not a question of if, but when” (p. viii). While the predictions have turned out to be wrong, it is interesting to re-read the book and see how Java escaped its fate.

Revisiting the mid-noughties was fascinating in itself: a time when AJAX was just emerging, Web 2.0 was the next big thing and folksonomies would change everything. It was an environment that caused obvious problems for Java. Its flaky text-processing was just no good for large XML files, and Java was forced to deal with a lot of XML. In addition, Java tended to be used for the middleware on top of databases, which it was not particularly suited for (p52).

Even with early versions of Spring and Hibernate, making Java applications was a chore, with overwhelming amounts of boilerplate and XML config. And even with ORMs, Java required you to work with both entities and database code, and to keep them consistent.

Tate talked about the difficulty of getting software running, and all the tools that were needed. Compared with PHP, Java felt slow to develop in, and its advantages weren’t sufficient compensation. It did seem that Java was on the ropes – I remember dropping out of Java to do simple text tasks. Ruby on Rails and similar frameworks promised the chance to get applications running quicker.

The book is a pessimistic one. Tate saw the problems as unsolvable and, given Java’s history and trajectory, this was a fair assumption to make. Tate looked at Java critically and suggested other platforms developers should investigate. Reading the book with hindsight, one can see Tate’s predictions have failed, but the discussion is still interesting.

In the twelve-or-so years since Tate was writing, Java has managed a comeback. Decent dependency management from Maven and Gradle has eradicated the nightmare of setting classpaths. The language has been made more expressive with new features, such as the functional constructs in Java 8. The growing fashion for opinionated configuration has also made things easier – getting Spring Boot running is incredibly easy. The culture of the language has changed for the better too: in the past, checked exceptions proliferated everywhere; now people tend towards throwing runtime exceptions and handling everything at a single sensible point.
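As a small illustration of that added expressiveness, compare the old anonymous-class boilerplate with its Java 8 equivalent (a throwaway example, sorting strings by length):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class Expressiveness {
    public static void main(String[] args) {
        List<String> names = new ArrayList<>(Arrays.asList("Gosling", "Tate", "Bloch"));

        // Pre-Java 8: an anonymous inner class just to compare two strings.
        Collections.sort(names, new Comparator<String>() {
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        });

        // Java 8: the same comparison as a one-line lambda.
        names.sort((a, b) -> Integer.compare(a.length(), b.length()));
    }
}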

One particularly strong improvement is the Spring framework. At the time Tate was writing, Spring was a mess of XML configuration and cryptic errors. With annotations, it has become easy to use and unobtrusive. New platforms such as Spring Boot allow developers to roll out production-ready features quickly. At the same time, Java/Spring offers a mature platform, where code can be taken to production and supported at massive scale. Ruby on Rails’ promises have faltered, with Twitter famously moving to Java because of scaling issues. The XML problem has disappeared as JSON takes over as a less robust but easier-to-use standard. I don’t miss the days of choosing between SAX and DOM parsers for XML.

Around the time Tate was writing, a debate raged about variable typing: strong vs weak, static vs dynamic. Java’s static, strong typing felt too rigid to many people. This debate seems to have reached an interesting conclusion: Javascript has been reined in by TypeScript, while Java 10 has introduced the var keyword.

(I’ve always seen the inflexibility of Java as a positive. While it’s less expressive than languages such as Python or, um, Perl, the code it produces tends to look very similar. It’s a good language for non-rock stars – who massively outnumber the sort of people capable of producing cryptic code given the slightest chance.)

At the time of writing, Tate saw Java’s fate as connected with that of Sun, wondering who might take over Java if Sun disappeared. The resulting situation, with Oracle running Java (and MySQL!), was hard to predict but has worked out well – even with the occasional fake-news scare that Oracle would start charging for Java.

Tate discussed a number of contenders for Java’s throne. Perl has lost popularity over the last decade, with Python offering a more structured (and readable) alternative. Perl, like PHP, was a great tool for its time but ultimately a ladder to better things. The book makes a quick mention of functional languages: Clojure seems to have come and gone, but Scala and Kotlin (first released in 2011) seem to be gaining traction. Ruby itself is much less popular. Java seems to have been saved by the flexibility of the JVM, which offers more powerful options for those that need them.

For me, the most interesting discussion was around state in web servers, and the problems it causes. Tate talked about continuation servers as an upcoming opportunity. This is another debate that has been settled, with REST winning out over stateful alternatives. I worked with Seam for a time – it was an interesting idea, but so alien to the way most developers worked that its subtleties tended to be fought with rather than made use of.

One problem that Tate raised which is still present is the approachability of Java. Even in the innocent days of 2005, the simplest useful Java application required a large number of tools. This issue is present in a lot of fields, including front-end dev work, where increasing sophistication has made things much more complicated. There are solutions, with JHipster a promising one, but these work by hiding the complexity. Just because the tools are set up for you doesn’t mean they aren’t there and can’t go wrong in strange and unpredictable ways.

EDIT: Tom Hume pointed out that the discussion above doesn’t include Android, which was originally released in 2008 and has become hugely popular, with over 2 billion users. As someone who focuses on backend dev, I didn’t engage with this, but Android has contributed significantly to Java’s survival. Having said that, Kotlin is becoming more popular in Android, threatening Java’s dominance on that platform. Interestingly, Kotlin is failing to see significant uptake in backend dev, despite recently becoming compatible with Spring.

If I had more time, I’d also look at the JVM platform as distinct from Java. While Groovy and Clojure have faltered, Scala (launched in 2004) and Kotlin (launched in 2011) are both popular JVM languages.

Categories
openstreetmap programming

First steps with OpenLayers

tl;dr: first steps in setting up an OpenLayers map with markers loaded from a GeoJSON file.

I spent a chunk of this weekend wrestling with writing software. Which was a good way to relax after a late night dancing at Is that all there is to a Disco? Saturday was a slow day: writing code, watching Escape Plan 2 with Rosy Carrick (not a good film) and getting to grips with OpenLayers.

Most of the time I learn about programming by toying with examples. While this is easy in a subject I know well, it can prove more difficult in new areas. Working with Javascript libraries can be particularly difficult, since it’s hard to tell which versions particular examples are for – with OpenLayers there are several mutually incompatible versions with very different APIs. They say you should only do one new thing at a time, and learning OpenLayers was held back by my poor Javascript skills. I also needed to get to grips with the particular way the mapping geometries worked.

But I managed to beat it all into submission and produce a working demo. I wanted to load some data from GeoJSON and display location markers on an OpenStreetMap map. It’s not very much, but it is something I can later add to a larger project.

Introduction

According to Wikipedia, OpenLayers is “an open source JavaScript library for displaying map data in web browsers as slippy maps”. It supports multiple map vendors and different data formats. The main project site says that version 5.2 is the latest, and links to documentation and examples.

Basic Example

The quickstart page shows how to set up a simple OpenLayers map. CSS and HTML are used to create a page-wide div that can hold the map. The Javascript to create and display the map is simple:

var map = new ol.Map({
    target: 'map',
    layers: [
        new ol.layer.Tile({
            source: new ol.source.OSM()
        })
    ],
    view: new ol.View({
        center: ol.proj.fromLonLat([37.41, 8.82]),
        zoom: 4
    })
});

This creates a map object and renders it into the div with the id ‘map’. A single layer is set up, with OSM (OpenStreetMap) as the source. The view allows the map’s centre, resolution and rotation to be set. Note that zoom level 0 is fully zoomed out.

As the tutorial explains, a layer is a visual representation of data from a source. OpenLayers has four basic types of layers:

  • Tile – Renders sources that provide tiled images in grids that are organized by zoom levels for specific resolutions.
  • Image – Renders sources that provide map images at arbitrary extents and resolutions.
  • Vector – Renders vector data client-side.
  • VectorTile – Renders data that is provided as vector tiles.

The centre of the view is set using a conversion from longitude and latitude. This is one of the confusing things about working with OpenStreetMap: it uses a co-ordinate system called the Spherical Mercator projection (known as EPSG:3857), while the usual longitude/latitude co-ordinates are held in a projection called EPSG:4326.

Adding Markers

The next stage was finding a way to add markers to the map. A lot of the examples for this were written for different versions of OpenLayers. The one that I ended up adapting was a recent tutorial on mediarealm.

This piece of code gave me enough of a clue to be able to set up a layer with a marker:

function add_map_point(lat, lng) {
    var vectorLayer = new ol.layer.Vector({
        source: new ol.source.Vector({
            features: [new ol.Feature({
                geometry: new ol.geom.Point(
                    ol.proj.transform([parseFloat(lng), parseFloat(lat)], 'EPSG:4326', 'EPSG:3857')
                )
            })]
        }),
        style: new ol.style.Style({
            image: new ol.style.Icon({
                anchor: [0.5, 0.5],
                anchorXUnits: "fraction",
                anchorYUnits: "fraction",
                src: "https://upload.wikimedia.org/wikipedia/commons/e/ec/RedDot.svg"
            })
        })
    });
    map.addLayer(vectorLayer);
}

We create a new vector layer whose source holds the features. The layer then displays each feature as a marker, using a small red dot as the icon.

GeoJSON

Ultimately, I wanted to use some GeoJSON to provide a set of features. This proved tricky, as my features were in longitude/latitude co-ordinates, and I couldn’t see how to make the import take that into account. Fortunately, it’s easy to iterate over objects in Javascript, so I simply looped over the GeoJSON, converting one object at a time before adding it to the layer.

var featureArray = [];
for (const feature of placedata) {
    featureArray.push(new ol.Feature({
        geometry: new ol.geom.Point(
            ol.proj.transform([parseFloat(feature.longitude), parseFloat(feature.latitude)], 'EPSG:4326', 'EPSG:3857')
        ),
        name: feature.name,
        description: feature.description
    }));
}

Once loaded, this feature array can be dropped into a vector layer and added to the map. The source of the placedata array can be changed – I started out with a hardcoded version, then added one from a REST endpoint. I created a quick REST server using the Spring Boot REST example as a basis, changing it to return place data.

Loading the data from GeoJSON turned out to be the fiddliest part of this because of the conversions between different projections.

Pop-ups

The last thing I wanted to do was add pop-ups to the markers. There is an example of this in the OpenLayers workshop. The way it works is simple: an onClick event for the map is added, which checks whether there is a feature at the point in question. There’s a lot of code in the example, but it was easy enough to amend and get working.

Summary

It took a little longer than planned, but I managed to set up a simple HTML page that displayed OpenStreetMap tiles, with markers provided by a REST endpoint.

Categories
programming

Microservices for the monolith

I found this post in my drafts folder. It’s about three years old, and I was going to delete it; but I had a quick read and there were a few things that are still worth saying.

It dates back to when there was a lot of excitement about microservices and I was researching them for an employer. Ultimately, the technical demands of maintaining microservices were too much for a company that had bigger problems – but the principles involved (microservices as service-oriented architecture done right) were incredibly useful.

Foremost among these useful principles was the idea of treating all interactions as inherently asynchronous and unreliable. The principles in the Fallacies of Distributed Computing are a useful caution for almost all projects. Although I’m not convinced Conway’s Law is either valid or useful.

Now that we’ve got big data out of the way, it looks as if the next big hype will be microservices. The term has been around for a while, most notably in a January 2013 talk by James Lewis of Thoughtworks. The idea started to become popular around February 2014, with Martin Fowler publishing a post on the subject in March. Discussion is gathering, with a conference, Mu-Con, planned for the end of November and an O’Reilly book on the topic due for publication in March 2015.

As with big data, there is a similar lack of clarity about the definition. How tiny is micro? Definitions have ranged from a couple of screens of code to a small amount of code that a single developer can understand in a few days. James Lewis suggested about 1000 lines.

Regardless of the problematic definition, obvious characteristics of such services are emerging, with JSON and REST the forerunners for the protocols. Netflix have been open about their successes with this architecture, releasing some amazing tools and documentation.

A purist microservice architecture is not going to be helpful for most companies. In his early presentation, James Lewis mentioned the importance of Conway’s Law. Microservices require highly skilled developers, clearly defined structure and rigorous processes. Most companies are unlikely to have the structure, processes and calibre of developer to make full use of these architectures, let alone convert their existing architecture to fit this new way of working.

Despite this, microservices are still relevant to every Java developer, even if you’re working with a monolithic ball-of-mud architecture. It’s a rare system that doesn’t need to integrate with some external service. Microservices teach us to treat any external system as potentially unreliable and incompetent. If the service takes too long to reply, how does that affect our SLAs?

Netflix are so confident of the resilience of their server ecosystem that they have introduced the Chaos Monkey, which turns off services randomly during business hours to make sure that service continues uninterrupted. Failures are going to happen, so you should treat them as a fact of life. Your system might be up 99.9% of the time, but it’s the 0.1% that gets remembered in your annual review.

The tools Netflix have produced to deal with these issues are designed to work with thousands of small services, but something like Hystrix is usable for a single integration. You need to be asking: how might this fail? When should I time out? And you can get this power with just a few Spring annotations.
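As a sketch of what that looks like with Spring Cloud Netflix’s annotation support (the client, URL and fallback here are illustrative assumptions, and the application would also need Hystrix enabled):

import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;
import org.springframework.web.client.RestTemplate;

@Service
public class RecommendationClient {

    private final RestTemplate restTemplate = new RestTemplate();

    // Wraps the remote call in a circuit breaker: if the call fails or
    // times out, Hystrix invokes the fallback instead of propagating the error.
    @HystrixCommand(fallbackMethod = "defaultRecommendations")
    public String fetchRecommendations(String userId) {
        return restTemplate.getForObject(
                "http://recommendations/api/users/{id}", String.class, userId);
    }

    // Called when the remote service is failing: a safe, boring default.
    public String defaultRecommendations(String userId) {
        return "[]";
    }
}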

These tools and architectures will be essential for every Java developer. When I first started writing software, everyone knew how important automated tests were, but test harnesses were difficult to produce. The creation of JUnit made tests simple to write, and this has altered the way Java software is developed, enabling refactoring, continuous delivery and TDD.

Testing was important before JUnit, and there are developers who still don’t use automated tests, but JUnit has revolutionised Java development. The software I am writing now is considerably more sophisticated than what I was writing 14 years ago. In part that is down to new tools like unit testing and Spring.

The tools designed for microservices are simple enough that they should be understood by every developer. The questions faced by Netflix should be considered for any integration: how do I handle failure? What do I do if this service is not available?

The scale and size of the systems being built nowadays are incredible, and this has been enabled by the range and quality of the tools that have been open-sourced. Every developer needs to understand the tools created for microservices. Simple architectures should be as reliable as complicated ones.

Categories
programming

The Complexity of Java

Java has become much more complicated over the years. I started working with the language back in 2000. I’d been a database developer for a couple of years, working with Oracle, but wanted to create more general applications. I learned enough to pass my first Java job interview with two books: Laura Lemay’s Teach Yourself Java in 21 Days (which had been recently revised for Java 2!) and the first half of Wrox’s Professional Java Server Programming.

Together, these two books contained most of the Java knowledge I needed to do my job. I quickly picked up a lot of other things, like CVS and Unix, but Java was definitely a lot simpler back then. I would say that the Java I needed as a professional developer back then included:

  • Core Java
  • JDBC
  • MySQL
  • HTML
  • CVS

The builds were done through makefiles, if they were scripted at all. It was fairly easy for a new developer to get working professionally. I mean, two books contained most of the information you needed – along with information on how to do graphics, animation and applets. And the Wrox book also found time to cram in chapters on esoterica like Jini and JavaSpaces. You could learn a lot of Java in 21 days.

(I wish I still had my old copies of these books. Living in Brighton involved moving frequently between small rooms and a lot of books had to be abandoned).

Over the last 15 years, Core Java has become more complicated. The addition of things like generics and lambdas was much needed, but it has made the language more complex. And the basic skills a developer needs at the places I’ve worked recently cover much more ground:

  • Core Java
  • Eclipse or equivalent IDE
  • Hibernate
  • MySQL
  • HTML
  • XML/JSON
  • REST
  • JUnit and a mocking framework
  • git
  • Maven
  • Spring

The applications that can be built with modern Java are impressive, and far beyond the scale of what would have been possible in 2000. But I think it would be impossible now to write any large-scale Java application without a decent IDE, and Java is much more complicated than before.

A lot has improved too, and it’s great to escape the horrors of classpath config, which has disappeared in favour of easier options. But the point remains: I know a fair few people who learned to code under their own steam and ended up with successful careers. I imagine that is much more difficult nowadays. Back in 2000, applets were an easy way to learn to code, and you could get going with Notepad and a compiler. Modern Java is probably not a good beginner’s language.

Categories
programming

Discipline vs. scheduling

One of the best statements I’ve read on developer discipline (which for me includes testing, documentation, etc.) came from Robert C. Martin:

“You know what you believe by observing yourself in a crisis. If in a crisis you follow your disciplines, then you truly believe in those disciplines. On the other hand, if you change your behaviour in a crisis, then you don’t truly believe in your normal behaviour” (from chapter 11 of The Clean Coder)

There’s a slight subtlety here, in that you can sometimes gain time by dropping process, but this gain quickly evaporates as technical debt builds up. And, if you believe in these disciplines, you will schedule time to make up for this. As Martin explains:

“If you follow the discipline of Test Driven Development in non-crisis times but abandon it during a crisis, then you don’t really trust that TDD is helpful. If you keep your code clean during normal times but make messes in a crisis, then you don’t really believe that messes slow you down. If you pair in a crisis but don’t normally pair, then you believe pairing is more efficient than non-pairing. Choose disciplines that you feel comfortable following in a crisis. Then follow them all the time. Following these disciplines is the best way to avoid getting into a crisis.”

Discipline is often abandoned due to scheduling pressure. Martin discusses how to respond to such pressure in the second chapter of the Clean Coder. Developers often give an estimate only to be pressured to produce the output more quickly. The temptation is to give in to this pressure and promise to “try”. This is dangerous:

“If you are not holding back some energy in reserve, if you don’t have a new plan, if you aren’t going to change your behavior, and if you are reasonably confident in your original estimate, then promising to try is fundamentally dishonest. You are lying. And you are probably doing it to save face and to avoid a confrontation.”

Overtime is one response to this, but it can easily spiral out of control, since this is one of the few places where leverage can be applied. It also suffers from diminishing returns. There is good evidence that working more than 40 hours a week over a long time is harmful to projects.

So how can projects speed up? By the time an unrealistic deadline has become solid, it’s usually too late. Deadlines are often particularly problematic in scrum: it takes time for a scrum team to settle into a cadence where its work-rate becomes predictable. If the scope and team size have been fixed, then there is no way to hit the deadline without distorting the process.

What should you do when you have an urgent deadline that looks unachievable? In such a situation, the failure happened before development began. Most times, this is not recognised until the developers are at work – resulting in crunch time and estimates made to fit the deadline. The best thing to do is to deal with the current project as best you can, and to look at future projects, making sure they are not scheduled without a good idea of the development work needed.

Categories
programming

The best answer to the factorial code interview question

Whenever I interview a developer, I always ask them to write code on a whiteboard. Nothing too complicated – I expect everyone uses an IDE these days and the candidate is also probably feeling a little nervous.

The big interview cliché is asking for a method producing the Fibonacci sequence. Even with telephone screening, this still eliminates more candidates than it should. A well-prepared interview candidate should have practised that one already, which means it’s still a useful test. Whether or not someone can do it, there are lots of interesting follow-up questions.

Some candidates get flustered trying to understand the Fibonacci sequence. I’d expect most people to know it already, but I wouldn’t want to reject a potentially excellent candidate for lacking a bit of maths knowledge. So sometimes I would simply ask for a function to produce factorials.

There are several different ways to do this, the main choice being whether or not the solution is recursive. A simple solution would look something like this:

public int fact(int n)
{
    return n <= 1 ? 1 : n * fact(n - 1);
}

One obvious follow-up question to this is what the limits of the function are – using an int means you’ll have overflow problems quite quickly. Does the developer know the class to use to avoid this?
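The class in question is presumably java.math.BigInteger; an iterative sketch that avoids the overflow by growing as needed:

import java.math.BigInteger;

public class Factorial {

    // fact(13) already overflows an int, but BigInteger has no fixed limit.
    public static BigInteger fact(int n) {
        BigInteger result = BigInteger.ONE;
        for (int i = 2; i <= n; i++) {
            result = result.multiply(BigInteger.valueOf(i));
        }
        return result;
    }
}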

The question simply asks for a piece of code. I would have given bonus points to any developer who mentioned testing before writing. But the perfect response would have been a candidate saying that the code would differ depending on its intended use.

Someone on GitHub has produced an enterprise version of another interview classic, Enterprise FizzBuzz. Obviously that is going too far. But there are considerations for even the simplest piece of professional code:

  • Who else needs to work on this? What documentation/commenting is required?
  • How and where will it be deployed?
  • Is any error checking or exception handling required? In the example above – which I would have accepted as correct – there is no handling for the obvious overflow error.
  • Is something this simple suitable for the intended use? For example, a large number of similar requests might be better handled with some sort of cache.

All of which is a complicated answer to a simple question. A dev raising these issues in an interview would still need to produce the code – but the discussion that followed would be very different.

Thomas Mann once claimed that “A writer is someone for whom writing is more difficult than it is for other people.” In the same way, the more I think about development, the harder it seems. A factorial example might seem almost insultingly simple – but it’s possible to have a very complicated conversation about it.

Categories
programming

How not to do terrible things with scheduled jobs

Every system I’ve worked on has had scheduled jobs. Regular tasks need to be automated, particularly if they need to run in the middle of the night. Often these jobs are used for billing. For example, a job might need to:

  • Find all customers who need subscription renewals
  • Bill each customer
  • Update that customer’s records, allowing them to continue accessing a service
  • Email a billing confirmation

Writing a job like that is easy. The problem comes with remembering all the different things that might go wrong. Some developers are good at this, but others are optimistic, happy-go-lucky souls who never consider the terrible things that might happen. I thought it would be useful to make a list of the sort of questions I ask myself when thinking about a scheduled job:

  • What happens if it fails to run? How do I find out it has failed (i.e. where do we see the effects of the job not running)?
  • What happens if there is a problem? Who gets notified? How do they know what to do next?
  • If there is a problem with processing one of the records, does the job continue? What if every record is failing? Do we give up or keep going?
  • If a run of the job doesn’t happen, does the job run as normal on the next execution? Does it catch up on the previously missed work? Should it?
  • Can I run the job manually if I need it to? How should this be done? Who should be allowed to do it? How do they know when they should do this?
  • What happens if the job runs more often than it should? What if it’s running once a minute rather than once a day?
  • What happens if the job is taking too long? How can I tell if the job has failed, is paused, or is just taking a really long time?
  • What if an execution of the job is still going on when it’s time for the next execution to begin? Can the different instances of the job interact safely? Does the job check whether it is already running?
  • What happens if two instances of the job are running simultaneously?
  • What happens if the job fails halfway? Can it be restarted safely?
  • What happens if the data used by the job is changed by another process? For example if a user cancels their subscription after the job has started?
  • What happens if one of the steps fails? Email and Billing are often third party systems. Which order should the events happen to ensure the safest failure?
  • What is the worst possible thing this job could do if it were to go wrong?

Not all these questions are relevant to every job. Writing code to handle every eventuality can sometimes be more expensive than clearing up the mess when things go wrong. And, obviously, a lot of the issues are handled by frameworks. But it’s worth running through these sorts of questions before writing any code, as in the sketch below. Thinking about terrible things, and knowing what will happen, produces more robust code.
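As a minimal sketch of a few of these guards, assuming Spring’s @Scheduled and some hypothetical collaborators (the interfaces, names and schedule are illustrative, not a definitive implementation):

import java.util.concurrent.atomic.AtomicBoolean;
import java.util.List;

import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Hypothetical collaborators, sketched as minimal interfaces.
interface BillingService { void charge(String customerId); }
interface SubscriptionStore {
    List<String> findDueForRenewal();
    void markRenewed(String customerId);
}

@Component
public class SubscriptionRenewalJob {

    // Guards against overlapping runs within this one JVM; a clustered
    // deployment would need a shared lock (e.g. a database row).
    private final AtomicBoolean running = new AtomicBoolean(false);

    private final SubscriptionStore store;
    private final BillingService billing;

    public SubscriptionRenewalJob(SubscriptionStore store, BillingService billing) {
        this.store = store;
        this.billing = billing;
    }

    @Scheduled(cron = "0 0 2 * * *") // 02:00 every day
    public void renewSubscriptions() {
        if (!running.compareAndSet(false, true)) {
            return; // previous run still in progress: skip, don't overlap
        }
        try {
            for (String customerId : store.findDueForRenewal()) {
                try {
                    billing.charge(customerId);    // bill first...
                    store.markRenewed(customerId); // ...then record success
                } catch (Exception e) {
                    // One bad record shouldn't kill the whole run: log or
                    // alert here and carry on. Counting consecutive failures
                    // and aborting would guard against a systemic fault.
                }
            }
        } finally {
            running.set(false);
        }
    }
}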