Last Saturday, while waiting to be picked up by my friends Joh and Simon, I was reading the original paper on Conway’s Law. It is one of the reference points of microservices and I wondered exactly what it said. How had Conway proved his Law? Talking about this in the car, Joh and Simon said I should read Laurent Bossavit’s Leprechauns of Software Engineering.
In this small book (which is a good thing – too many software books are much larger than they should be) Bossavit investigates the common-sense things that ‘everybody knows’ about software development and finds that many of them are based on hearsay and poor citation. The book is a sort of Bad Science for computer programmers, an exercise in what Bossavit refers to as “epistemic hygiene”. Some of the myths examined are the idea of the 10x programmer, the idea that bugs cost more the later they are found, and the problems of the waterfall method.
I love this sort of research. I spent an hour during my MA investigating the often-quoted story that Tristan Tzara caused a riot by reading random poetry. Since my essay was on William Burroughs and detournement, I barely managed to fit the research into a footnote, but the work was fascinating.
Bossavit wants to train software developers to be more sceptical, and outlines his method. Of course, the big flaw with a book like this is that it demands a lot from the reader. Without going back and reading the original sources for myself, I can’t be sure that Bossavit’s claims aren’t hearsay themselves. But, even without doing that work, there is value in the critical doubt that it stirs up.
For me, the most interesting part of the book was the discussion of waterfall, and the suggestion that the attacks on waterfall are based on a straw-man. While I love agile, and feel that it produces more humane projects, a well-run traditional project can be more effective than a poorly run agile one. Indeed, one of the problems with agile is that failed projects are often dismissed as ‘not being properly agile’. As Bossavit writes: “Software engineering is a social process, not a naturally occurring one – it therefore has the property that what we believe about software engineering has causal impacts on what is real about software engineering.”
Brighton Java has settled into its traditional Christmas break. And it’s great that we’re established enough to have a tradition! When we started in 2012 it was difficult to get going but this year things have taken off. We have 260 people in the group and our last session was full with very little promotion.
A lot of this is due to the sponsorship we’ve had. The Skiff provide us with a great venue, and we’ve had a great deal of support from Brandwatch – the promise of pizza and drinks definitely draws people in. I’ve also had some help from James Stanier and Luke Whiting with organisation and planning.
It’s now time to start planning the 2015 events. Next year I’d like to try to have an event every month. We had hoped to have a Hack Day as part of the Brighton Science Festival. That wasn’t possible, but I hope we can arrange something similar later in the year. I’m also hoping to bring in more students from the universities, as well as some academic speakers. I’d also like to see some smaller, more technical workshop events.
This is an exciting time for Java, and the claims that it was dying or “21st century COBOL” are quietening down, replaced by excitement over new JVM languages, microservices and the possibility of finally getting some long-promised features in Java 9. I’m very excited about the talks and events to come from Brighton Java in 2015.
So, thank you to the Skiff, Brandwatch, everyone who came to the talks and of course to all the speakers, who for 2014 were:
James Burt: Solving poetry with Java
Laurence Barry: Agile: Mad, Bad, and Dangerous
Andrew Walters: Hazelcast: Distributed Computing, Simplified
Douglas Hoskins: Cross-platform mobile development
James Stanier: Spring Boot: cool stuff, really fast
I’ve enjoyed the sessions, and am grateful to all the speakers – I think we’ve provided a varied and up-to-date range of talks. If you’d like to be involved in 2015, please get in touch.
The next session will be on Wednesday February 4th 2015, at the Skiff. Details and signup will be via the meetup group. I will be speaking about applying microservice techniques to monoliths and my colleague Danielle Ashley will discuss an inappropriate Scala project.
(PS – I’ve set up a LinkedIn group for people who like that sort of thing)
Every system I’ve worked on has had scheduled jobs. Regular tasks need to be automated, particularly if they need to run in the middle of the night. Often these jobs are used for billing. For example, a job might need to:
Find all customers who need subscription renewals
Bill each customer
Update that customer’s records, allowing them to continue accessing a service
Email a billing confirmation
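The steps above could be sketched roughly as follows. This is a minimal illustration, not a real system: Customer, Billing and the confirmation method are invented stand-ins for whatever your domain provides.

```java
import java.util.ArrayList;
import java.util.List;

// A minimal sketch of the nightly renewal job described above.
// All names here are hypothetical stand-ins, not a real API.
public class RenewalJob {
    record Customer(String id, boolean due) {}

    // Stand-in for a (possibly third-party) billing system.
    interface Billing { boolean charge(Customer c); }

    private final Billing billing;

    RenewalJob(Billing billing) { this.billing = billing; }

    // Returns the ids of customers who were successfully renewed.
    List<String> run(List<Customer> customers) {
        List<String> renewed = new ArrayList<>();
        for (Customer c : customers) {
            if (!c.due()) continue;              // 1. find customers needing renewal
            if (!billing.charge(c)) continue;    // 2. bill each customer
            renewed.add(c.id());                 // 3. update the customer's records (stubbed)
            sendConfirmation(c);                 // 4. email a billing confirmation
        }
        return renewed;
    }

    private void sendConfirmation(Customer c) {
        System.out.println("Billed " + c.id());  // stand-in for an email gateway
    }
}
```

The happy path really is this simple, which is exactly why the questions below matter: none of the failure handling is visible yet.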
Writing a job like that is easy. The problem comes with remembering all the different things that might go wrong. Some developers are good at this, but others are optimistic, happy-go-lucky souls who never consider all the terrible things that might happen. I thought it would be useful to make a list of the sort of things I ask myself when thinking about a scheduled job:
What happens if it fails to run? How do I find out it has failed (ie, where do we see the effects of the job not running?)
What happens if there is a problem? Who gets notified? How do they know what to do next?
If there is a problem with processing one of the records, does the job continue? What if every record is failing? Do we give up or keep going?
If a run of the job doesn’t happen, does the job run as normal on the next execution? Does it catch up on the previously missed work? Should it?
Can I run the job manually if I need it to? How should this be done? Who should be allowed to do it? How do they know when they should do this?
What happens if the job runs more often than it should? What if it’s running once a minute rather than once a day?
What happens if the job is taking too long? How can I tell if the job has failed, is paused, or is just taking a really long time?
What if an execution of the job is still going on when it’s time for the next execution to begin? Can the different instances of the job interact safely? Does the job check whether it is already running?
What happens if two instances of the job are running simultaneously?
What happens if the job fails halfway? Can it be restarted safely?
What happens if the data used by the job is changed by another process? For example if a user cancels their subscription after the job has started?
What happens if one of the steps fails? Email and billing are often third-party systems. In which order should the steps happen to ensure the safest failure?
What is the worst possible thing this job could do if it were to go wrong?
Not all these questions are relevant to every job. Writing code to handle every eventuality can sometimes be more expensive than clearing up the mess when things go wrong. And, obviously, a lot of the issues are handled by frameworks. But it’s worth running through questions like these before writing any code. Thinking about terrible things, and knowing what will happen, produces more robust code.
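A couple of the questions above (overlapping runs, and whether to give up when every record is failing) can be answered mechanically in code. Here is a rough sketch with invented names; note that the AtomicBoolean guard only protects a single JVM, so a real deployment would need a shared lock (a database row, say) instead.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicBoolean;

// A sketch of two defences: a guard so two instances of the job cannot
// overlap, and per-record error handling that gives up after too many
// failures. Names are illustrative, not a real framework API.
public class GuardedJob {
    private final AtomicBoolean running = new AtomicBoolean(false);

    interface Step { void process(String record) throws Exception; }

    // Returns the records processed; an empty list if another run was in progress.
    List<String> run(List<String> records, Step step, int maxFailures) {
        if (!running.compareAndSet(false, true)) {
            return List.of();                    // another instance is already running
        }
        try {
            List<String> done = new ArrayList<>();
            int failures = 0;
            for (String r : records) {
                try {
                    step.process(r);
                    done.add(r);
                } catch (Exception e) {
                    // one bad record doesn't stop the run...
                    if (++failures >= maxFailures) {
                        break;                   // ...but systematic failure does
                    }
                }
            }
            return done;
        } finally {
            running.set(false);                  // always release the guard
        }
    }
}
```

Even this toy version makes the ordering question concrete: the caller decides what a “step” is, so the safest sequence of billing, record update and email still has to be chosen deliberately.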
I followed up my two days at µCon with the LJC Open Conference. The third early start in a row was made a little more difficult by a late night – I gave a talk in Eastbourne the night before – but the sessions kept me alert. Rather than µCon’s deep focus on one area, the Openconf was an opportunity to learn about many different things, with introductions to Spock and Cassandra, as well as a good discussion about Akka, actors and other methods of distributed computing.
I don’t have a use for Cassandra yet, but it was good to learn more. In his talk, Chris Batey made an interesting point that abstracting out underlying data storage architecture can be a bad thing, since it hides the particular nature of that layer. For example, in Cassandra it is important to be aware of the partition key’s meaning. The slides are online.
The lightning talks were excellent, including a spoof on devops and consulting by Daniel Bryant, who coined the term QAOps, as well as a report on mob programming. I also caught Daniel Bryant’s presentation on microservices, which I’d missed at µCon, having chosen the other track.
It was a good day out and excellent value. I’m very glad that I was able to go.
The second day of µCon proved to be as interesting as the first. I have another huge list of things to investigate, and there are some things I need to get underway at work tomorrow. As I said in my post about day 1, the most important thing about microservices is, for me, the discussion of the problems and patterns of modern web-development. Pretending that monolithic applications work is no longer an option.
The relationship of microservices to SOA came up several times, with people questioning whether this was just a re-branding or rehabilitation of SOA.
Another good question emerging from microservices is why we still can’t produce satisfactory architecture after all this time (Conway’s Law dates back to 1968). We have a cycle of new techniques becoming common, then standardised, then growing bloated and cluttered until someone takes a clean-sheet approach and we start again.
The concept of cohesion is one that I haven’t paid enough attention to; (micro)services should be ‘loosely coupled, highly cohesive’. It was also good to be reminded that logical and physical architecture do not need to be the same. I don’t think we pay enough attention to things as simple as package structure.
Jeppe Cramon pointed out that “a service that only deals with data is a database,” and we already have APIs for that. Why wrap that in REST?
One thing that’s difficult for me personally is getting out of the database transaction mindset. Cross-service transactions are a bad thing, but are likely to become prevalent as large software providers make it easier for people to do things the wrong way.
Jakub Korab had an important lesson for developers working with monoliths: if you touch any external system then you’re working with distributed computing. Monolithic architecture is an assumption that has worked (more-or-less) for a long time, but it’s not really true. All developers should have an awareness of the CAP theorem and the Fallacies of Distributed Computing.
The only major problem I had with µCon was the lack of female presenters. If a subject has no female practitioners with something to say, then the topic may not be ready for a conference. Otherwise it was an excellent two days. I’ve already booked a ticket for the 2015 event.
I spent today in London, attending the first day of µCon, a conference dedicated to microservices. A lot of delegates seemed to be in the same position as me – working with a monolithic application that needs separating. Microservices are certainly not the answer to my problems, but many of the techniques and architectures will help me. The most relevant session was Chipping Away at a Monolith by Mark Landeryou and Steve McDonald from SagePay, which gave me a lot to think about.
Some other immediate responses:
There was a great deal of pragmatism. As enthusiastic as people were about microservices, there was an air of caution rather than triumphalism. This is a good solution in some situations but certainly not a silver bullet.
Many microservice architectures have focussed on HTTP/REST, but some of the other options (pub/sub messaging, thrift) are interesting.
Conway’s Law is a constant reference point, and I need to sit down and read the original paper. I find myself wondering recently why we even organise programmers into IT departments rather than place them in the business where they are needed.
The Unix Philosophy came up a lot too, with one speaker describing Unix pipes as the most perfect API ever invented.
My main concern with microservices is that they are an ideal. Most people have good intentions but still somehow find themselves dealing with difficult codebases. Greenfield development is rare and temporary. Maybe we need conferences focussed on monolith programming – but I expect that would be a hard sell.
Another early start tomorrow for day 2. It’s going to take me a few days to read through the notes I made properly (21 pages so far), but I’m looking forward to another day of talks.
I’ve been writing drafts of technical posts recently. I was going to put them on my personal site, but they didn’t seem to fit there. So I decided to start a whole new site for my technical writing.
Over the last year or so I’ve been playing with a lot of fascinating technology: microservices, Spring Boot, AWS, Jenkins, REST and so on. For a while I’d been bored by programming but I now feel enthusiasm again. I’m full of ideas for things to learn and services that I want to build, and wanted a space where I could write about what I’d learned and what I was thinking.
I’m off to µCon on Thursday, then the LJC Open Conference on Saturday, both of which I’m sure will give me a lot to write about. There are various new technologies I want to play with, like Vagrant. I’ve been thinking about how the lessons from microservices apply to the world of monoliths. Programming has become massively more complicated over the years I’ve been doing it, but that is starting to feel exciting.