Categories
books programming

A review of Dave Farley’s Modern Software Engineering

My colleague Luke Punnett recently recommended Dave Farley’s book ‘Modern Software Engineering’. While it’s not quite a classic, it’s a superb summary of the state of the art in software development. Anyone writing enterprise software should read this, and ideally follow the book’s advice.

Farley attempts to build a foundation for software engineering as a discipline using the scientific method. Writing code is formalised as a series of experiments, set within the process of ‘characterise, hypothesise, predict and experiment’: “Software engineering is the application of an empirical, scientific approach to finding efficient, economic solutions to practical problems in software.”

What was most valuable about this book for me was getting a glimpse of how a very good and experienced software engineer approaches his work (Farley mentions a couple of times that he was involved in the LMAX exchange project).

There are several topics on which the book is particularly strong. Farley approaches agile from a fresh angle, renewing my faith in it – a faith that had been ground down by SAFe and ritualised agile processes. Farley is also excellent on test-driven development, arguing that TDD is not about producing code coverage, but a method of working that produces “a pressure … to write code that is more testable”. Farley then argues that testable code has the same attributes as code that is easy to maintain. There is also some excellent discussion of the pros and cons of microservices, arguing that their main strength is in allowing smaller, more focussed teams.

Robert C Martin’s Clean Code feels like it has reached the end of its life. People are less comfortable with some of the guidelines it proposes. Farley’s book is, I think, an excellent replacement. It is short and well-argued, and sets out a clear case for its recommendations. Some of the things Farley proposes – TDD, microservices, test automation etc – are still controversial in some companies. Hopefully this book will help towards their wider adoption.

Categories
Uncategorized

Developer Experience

I’ve written a post on the Mindera company blog, Why Everyone Needs to be Thinking About Developer Experience. It’s a topic I don’t think gets addressed enough. Too many companies erect unnecessary barriers to their developers producing good work.

Developer Experience is the idea of ensuring that developers’ tools, practices and working environment are as good as they can be to support their job.

It emerged from considering User Experience (UX) for developer-focused products. Some companies use these ideas to assess their internal platforms and processes. It’s an idea that shouldn’t seem radical — improving developers’ experience increases the speed and quality of their work.

Read more on the Mindera blog…

Categories
Uncategorized

Time to stop lying about TDD

I’ve just had a post published on the Mindera company blog, It’s Time to Stop Lying About TDD. This post came from a frustration with both companies and developers using the term TDD (Test-Driven Development) when what they really mean is that they have a code-coverage threshold. Adding coverage without using TDD removes many of the benefits of the practice. TDD is hard: it requires practice to get good at, and few developers give it the time needed.
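
To make the distinction concrete, here’s a minimal sketch of the TDD rhythm in Java, using JUnit 5 and a toy kata of my own choosing (this example is mine, not from the blog post). The test comes first and fails, then drives the simplest code that passes; coverage falls out as a side effect rather than being the goal:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

// Red: these tests are written before FizzBuzz exists, so they fail first.
class FizzBuzzTest {

    @Test
    void multiplesOfThreeBecomeFizz() {
        assertEquals("Fizz", FizzBuzz.convert(3));
    }

    @Test
    void otherNumbersPassThrough() {
        assertEquals("1", FizzBuzz.convert(1));
    }
}

// Green: the simplest code that makes the tests pass. The pressure towards
// small, testable units comes from writing the test first.
class FizzBuzz {
    static String convert(int number) {
        return number % 3 == 0 ? "Fizz" : String.valueOf(number);
    }
}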

Many of the CVs that I see as a Java interviewer for Mindera mention test-driven development (TDD). It’s in most of the job roles I see advertised too. But I am fairly certain that most people are not actually doing TDD.

How do I know this? Because when we do live-coding exercises, very few candidates write tests, let alone do them before they add any code. It’s the same with the code samples sent in — very few of them include tests. Maybe these developers are using TDD, but they are not demonstrating it.

Read more on the Mindera blog…

Categories
books remote working

Effective Remote Work: A Review

We’re now deep into the second year of what’s been referred to as “The Great Remote Work Experiment”. March 2020’s lockdowns came in suddenly, with many companies treating them as a temporary inconvenience. The long-term effects of new working practices are still not obvious.

James Stanier’s second book, Effective Remote Work, is an important one, providing the sort of clear ground-rules for remote working which would have been incredibly useful at the pandemic’s start. The book is humane and thoughtful, looking at how to work productively while protecting one’s mental and physical health.

My personal belief is that a properly run remote team will always be more effective than a co-located team. Too many companies rely on having people in the same room to deal with complexity rather than looking at managing things properly. Remote work also gives employees control over their lives, allowing them to live anywhere, and to have a schedule that fits in with their life. Remote employees don’t lose hours each week in commuting to open-plan vanity offices. Most importantly, people who cannot manage a 9–5 in a physical office due to childcare, caring responsibilities or poor health are welcomed back into work by flexible remote companies.

While I’ve read a few books about remote working, Stanier’s is an obvious leader since it has such clear practical guidelines. These recommendations are all based on hard data where possible (such as a comparison of research on reading vs listening speeds). Stanier’s book is aimed at any remote worker, regardless of seniority — while management staff have more leverage to introduce new practices, Stanier also shows how even junior staff can produce changes, addressing the small areas of the company that they can influence.

The two most useful ideas for me were the spectrum of synchronous/asynchronous communication, and the ‘golden rule’. Stanier looks at the continuum of media available, from in-person meetings to formal documentation, and how best to use asynchronous media that don’t rely on people being around at the same time. Stanier’s golden rule is that if some employees are remote, everyone should be treated as such. No more meetings where the remote staff stare at a diagram on a whiteboard that they can barely see. This should not need stating — but too often I’ve worked with offshore teams who were treated as if they were second-class. Since the pandemic, most people hopefully have a better understanding of the challenges of remote working.

Another topic the book discusses in detail is onboarding, which Stanier describes as “a lens that we can continually apply to everything we do, all the time”. People always need to do new things, and the more effective the training is, the more easily anyone can switch to new work. (One worrying effect of pandemic remote working is a reduction in roles for more junior staff, which some have thought might be down to concerns about training staff with less work experience).

Effective Remote Work also has an entertaining writing style, and contains the best simile I’ve read in years: “sneaky like a small mouse in a trenchcoat with sunglasses on”.

I think the remote working revolution has only just started. While there are companies treating remote working as a temporary inconvenience, others have made the transition successfully and will begin overtaking less flexible competitors — in output, hiring and growth. Stanier’s book provides a clear framework for how companies and employees can prosper over the next few years.

Categories
Uncategorized

An online talk on JHipster

As part of the Mindera virtual open day, I gave a short talk on JHipster. I’m a great enthusiast for this tool and the way it enables application prototypes to be generated quickly.

Categories
Uncategorized

DDD Conference, October 2021

Last weekend I attended the DDD conference (Developer! Developer! Developer!) in Nottingham, going along with Mindera and helping to run our company stand. I can’t remember the last time I was at a conference and I’ve missed these sorts of events. The day as a whole was well-run, with food, clear covid protocols and some great talks. Plus, working on the stand gave me a chance to talk to lots of people.

There was a good range of talks, with some interesting clashes to choose between. My first one, Lex Lofthouse’s ‘Design for Developers’, was one of those talks where you emerge at the end feeling like an expert on the subject. The idea of design was broken down into four principles: hierarchy, proximity, contrast and balance, with examples of each.

Typography has always been a mystery to me, but Lex explained some simple principles before suggesting various supporting websites like fonts.google.com for typefaces and fontjoy.com and type-scale.com to work out how to use them. Similar resources are available for colour palettes, such as coolors.co. It’s amazing to have these sorts of online resources available, providing expert guidance for beginners. The talk also included discussions of using photos and illustrations, and quick introductions to grids and atomic design.

The second talk was Stephen Haunts talking about social engineering techniques. It was both fascinating and terrifying, particularly around how sophisticated many of the techniques used by fraudsters were becoming, with IVR and sound effects being used to convince people they were being called by their bank. 

After an excellent vegan lunch I attended a short talk on Open Banking vs DeFi by Tom Morton, which discussed how the two compete, looking at blockchains as a sort of finance counter-culture. It was interesting to see how Open Banking has produced a wealth of personal finance managers, but very few new payment options, and Tom complained about the effects of the Visa/Mastercard duopoly – and indeed, there are recent examples of that market position being used to enforce the payment networks’ moral/political standards.

My final talk was Dylan Beattie on how there is no such thing as plain text. In an hour this went from the Cooke and Wheatstone telegraph (the earliest form of text encoding) through to Unicode and emojis. The session was filled with both trivia and useful information. I learned why Ctrl+C and Ctrl+D work as they do in terminals, and how the Danish Spelling Reform of 1948 changed the alphabetical ordering of identical words. Understanding how character sets are built up leads to some powerful insights, and Dylan described an impressive bug he’d solved around the appearance of weird characters. Another excellent talk on which to end the day.

Categories
java

Java Peaked With Version 7

Next week sees the launch of Java 17. This is the eighth release since Java moved to its six-monthly release cycle. This means nine releases have taken place in the same time that it took to go from version 6 to 7 (about four and a half years).

But, personally, I think the best version of Java is Java 7. The first time I said this I was joking, having just wrestled with some very complicated code that someone had written using streams. But the more I think about this, the more certain I am that this is right.

I can understand the pressure to keep adding features to improve Java. I remember the noughties, when everyone said Java was going to be replaced by newer, cooler languages. Java might be verbose and clunky but it’s also consistent. Pre-Java 8, there tended to be only one or two ways of doing things, and code would look relatively similar between different companies. Being a simple language, it was easy for developers to follow what was happening. There was less space for clever code that junior developers couldn’t understand. 

Java 8 was exciting, providing new paradigms for Java. But it has also made code more idiosyncratic, and it’s now easier to write obfuscated code. Compare it with Perl, a language that was intentionally designed to be expressive: it’s not used in many large-scale systems.
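
As a contrived sketch of what I mean (my own example, so take it with a pinch of salt), here’s the same grouping logic written both ways. The stream version is shorter, but the Java 7-style loop is easier for a junior developer to step through in a debugger:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class GroupWords {

    // Java 8 style: compact, but the mechanics are hidden in the collector.
    static Map<Integer, List<String>> byLengthWithStreams(List<String> words) {
        return words.stream()
                .map(String::toLowerCase)
                .collect(Collectors.groupingBy(String::length));
    }

    // Java 7 style: verbose, but every step is explicit.
    static Map<Integer, List<String>> byLengthWithLoops(List<String> words) {
        Map<Integer, List<String>> groups = new HashMap<>();
        for (String word : words) {
            String lower = word.toLowerCase();
            List<String> group = groups.get(lower.length());
            if (group == null) {
                group = new ArrayList<>();
                groups.put(lower.length(), group);
            }
            group.add(lower);
        }
        return groups;
    }
}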

Sometimes Java is frustrating, and the pressure to add new features is understandable. But Kotlin and Scala have set out their stalls as advanced languages on the JVM. They’re compatible with Java too, so there’s a strong argument for keeping Java as the dull, boring option. I might want to use Scala or Kotlin in my own projects, but where I’m collaborating with multinational teams working agilely (which, in practice, means no documentation) I like my Java code as simple as it can be.

Categories
programming

What Do We Mean By Technical Debt?

(Image: a cat, serving as a metaphor for technical debt)

Recently, I’ve been thinking a lot about technical debt. It’s a great metaphor for how time is lost in programming projects but, like all metaphors, you have to be aware of where it stops working.

For example, when people compare national debts to household debts, this ignores how national banks control the supply of money, and that countries run into different problems to households when they’re over-extended.

Technical debt is, basically, time saved now which costs more later – taking tactical shortcuts. It can usefully be compared to other kinds of corporate debt, providing leverage for growth that would be impossible otherwise. The problem is that people don’t seem to track how much technical debt they’ve accrued and need to ‘pay back’. I’ve worked at a number of places where dealing with inefficient processes that were set up in a hurry took up a substantial amount of day-to-day work.

Having seen companies struggling with debt, I wonder if technical debt should be treated more like personal or household debt. Advice to people who are struggling with debt generally focusses on three points:

  1. Reduce outgoings to focus on paying off the debt
  2. Pay off the highest-interest debts first
  3. Do not save or invest anything until any costly debts are dealt with.

The software equivalent of expensive credit card debt is any inefficiencies or manual steps in repeated processes. If your regular work is taking longer than it should, there’s less time available for new things. So, like a household in debt, don’t try to invest in new things before sorting these existing inefficiencies. Fix the automated tests so that releases no longer need manual checks. Automate the entire release process. Improve testing to prevent disruptive production bugs.

Technical debt is a good thing when it’s used to provide leverage for growth. But, if you’re not tracking this debt accurately then maybe you should treat technical debt more like household debt, and prioritise reducing it to a manageable level. There’s no point investing in new features when you’re paying high interest on broken processes.

Categories
serverless

First steps in serverless

I’m starting a new job next month where I’ll be using AWS Lambda. In preparation, I’ve been cramming on the topic. The main resource I’ve used is O’Reilly’s Programming AWS Lambda, and I’m enjoying learning from an actual physical book with an animal on the cover.
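
For context, the basic shape of a Java Lambda function is pleasingly small: a class implementing the RequestHandler interface from the aws-lambda-java-core library. A minimal sketch (the class name and greeting logic are my own invention):

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

// AWS Lambda instantiates this class and calls handleRequest for each event.
public class GreetingHandler implements RequestHandler<String, String> {

    @Override
    public String handleRequest(String name, Context context) {
        context.getLogger().log("invoked with: " + name);
        return "Hello, " + name;
    }
}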

Here’s a quick summary of some of the other sources I’ve been looking at:

  • Mike Roberts’ Serverless Architectures post is massive, and full of really useful discussion. This includes: a comparison between serverless and stored procedures (vendor lock-in, difficulty testing and versioning); the value of reduced time to market; environmental benefits of serverless; and the challenges of integration testing.
  • Gunnar Morling produced a good InfoQ talk, Serverless Search for my blog, which discusses AWS Lambda used for a Lucene-based blog search. Morling uses Quarkus to avoid lock-in, and also suggests this gets around the cold-start problem. He also suggested Funqy as a vendor-independent abstraction for serverless code. Morling points out that serverless has a smaller attack surface, but looked in detail at dealing with a ‘denial of wallet’ attack.
  • Bruce Schneier discussed The Misaligned Incentives for Cloud Security, warning that a few large providers are making technical decisions for millions of users, and that security problems such as data breaches affect their customers more than they affect the providers themselves.
  • Guy Podjarny talks about the security issues in greater detail in Serverless security: What’s left to protect. He points out that one still needs to consider dependency vulnerabilities. While security permissions in serverless can be very granular, there is also a risk of this sprawling. Podjarny makes a number of suggestions including having critical and non-critical functionality in different accounts or regions.
  • Serverless and Chatbots: A Match Made in the Cloud by Gillian Armstrong was focussed on chatbots, but had a good overview of a lambda-based platform in production. Armstrong also noted that while lambdas scale very quickly, other parts of an infrastructure such as datastores might not.
  • A 2020 article, Why the Serverless Revolution Has Stalled, takes a more cynical approach, looking at four potential issues: limited programming languages; vendor lock-in; performance; inability to replace monolithic applications. Some of these issues have been solved by some teams, but all these points are worth considering.
  • Cloud study by the writer Robin Sloan discusses his use of cloud functions to provide simple support for running his newsletter. His solution to the cold start problem is, he admits, not best practice, but works for him: “Instead of deploying each of my functions as Actually Different cloud functions, I’ve rolled them up into one “mega function”—really almost a tiny app.” This solves a lot of issues for this small piece of functionality, not least that it fails fast: “if something isn’t working, nothing is working”.
  • Another post on cold starts suggested reducing the artefact size and had a good discussion of using pings to keep services live (there’s a sketch of the ping pattern after this list).
  • Operational Best Practices #serverless talked about how serverless limits the amount of code an enterprise needs, and that FaaS and BaaS can both help speed up development, particularly early in the process: “You get to rent engineers from Google, AWS, Pagerduty, Pingdom, Heroku, etc for much cheaper than if you hired them in-house — if you could even get them, which you probably can’t because talent is scarce.”
  • That piece also contains a stern warning: “there is no such thing as having the luxury of not having to understand how your storage systems work. Queries will get slow, and you’ll need to be able to figure out why and fix them. You’ll hit scaling cliffs where suddenly a perfectly-usable app just starts timing everything out because of that extra second of latency coming from … The more you understand about your storage system (and the more you stay in the lane of how it was intended to be used), the happier you’ll be.”
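
Here’s the keep-warm ping pattern mentioned above, as a hypothetical sketch: a scheduled trigger (an EventBridge rule, say) sends a synthetic ping event every few minutes, and the handler returns early so the container stays warm between real invocations. The payload shape and class name are my own assumptions:

import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;

import java.util.Map;

// Hypothetical keep-warm handler: scheduled pings short-circuit before any
// real work is done, keeping the container alive between genuine requests.
public class KeepWarmHandler implements RequestHandler<Map<String, Object>, String> {

    @Override
    public String handleRequest(Map<String, Object> event, Context context) {
        if (Boolean.TRUE.equals(event.get("ping"))) {
            return "warm";
        }
        // ... the function's actual work would go here ...
        return "done";
    }
}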

Using serverless for hobby projects does look attractive. But, having tried to get S3 and IAM working on AWS, I’d be reluctant to suggest that to anyone – particularly given the financial perils of AWS.

Categories
openstreetmap

Using PostGIS on Docker to Find Ley-Lines

Back in 2015, I wrote a blog post about finding ley-lines with PostGIS, based on some work by Steven Kay.

I described the outline of how to produce ley-lines as a How-to-Draw-An-Owl tutorial, since I skipped over a lot of the complex points. I was setting up my temporary PostGIS instances on AWS and there was a certain amount of faff involved in sorting out database access and structure. This post is an improvement on those methods.

(There are still bits I’m not explaining in full – mainly, how to display the ley line – but you only need a little bit of HTML/Javascript to take this all the way through. If you need more details, please leave a comment and I’ll get in touch.)

I’ve returned to this project for some recent writing, and the technical aspects are much easier using docker. In fact, the process can be broken down into eight steps, which I’ve listed below. You’ll need to be comfortable with a command line, have docker installed, and be willing to play about a little. But the process is relatively straightforward:

1. Start up a PostGIS instance from a computer with docker installed:

docker run --name some-postgis -e POSTGRES_PASSWORD=mysecret -d postgis/postgis

2. Open a terminal in the docker instance:

docker exec -it some-postgis /bin/bash

3. From inside the docker instance, install the OSM tools:

apt update
apt install osm2pgsql

4. From the host computer, download some OSM data and copy the file into the container. There are various options for downloading this data but I tend to use the Geofabrik server.

docker cp ~/east-sussex-latest.osm.pbf some-postgis:postgres

Inside the container, the OSM file can then be loaded into PostGIS using the command:

osm2pgsql east-sussex-latest.osm.pbf -U postgres

5. Back in our terminal inside the container, start up psql:

psql -U postgres

6. Now, here is the bit where I send you to another tutorial to get the ley lines SQL. These have been made available as a github gist by Steven Kay. Cutting and pasting the commands will get a fairly decent result, but I recommend having a play to get the best out of it.

7. You can now use another query to get the KML (Keyhole Markup Language) format of the lines. This format is fairly portable, and I tend to plonk it in a simple HTML page. The larger the “ct” value, the more items are connected by the ley.

SELECT (ST_AsKML(geom)) as geom from leys where ct = 7;

8. And this is the hand-waving step: displaying your KML lines on an actual map. Basically, you need to amend a simple OpenLayers example. If that leaves you stuck, leave a comment and I’ll get back to you (commenter email addresses are not visible on the actual site).

The script from Steven Kay is great for producing candidate lines. The only problem is that it tends to follow clusters of pubs on the same street. The restriction that the lines must begin and end at a site while being at least 20km long prevents short local lines being plotted. But it would also be good to add an extra predicate to judge lines by how spaced-out the locations are. That’s something I need to work on in future.