Categories
brightonjava

James Stanier’s Remote Asynchronous Working Talks

Brighton Java has been on hiatus during the pandemic. A lot of groups have gone online, but I was reluctant to do that, since most technical jobs already involve too many video calls these days. There is a lot of excellent material online and, for me, local meet-ups are all about being there in person. I wanted to make sure that any remote event we ran justified people’s time and attention.

Our first event this year, in association with Silicon Brighton, is a series of remote and asynchronous talks by James Stanier. James has been a supporter of Brighton Java for a long time and recently published Become an Effective Software Engineering Manager. On Wednesdays in September he has been delivering a series of talks on remote asynchronous working.

So far, the first three talks in the series of five have been posted on YouTube:

  1. Positive & Negative Effects of the Pandemic on the Tech Industry
  2. The Distributed Working Mindset
  3. Staying Connected by Communicating Deliberately

In his first talk, James looked at the current situation, and how the pandemic has forced people into home working. While times are difficult, there is the possibility of producing useful long-term change. Remote work is now more prevalent, and some companies are becoming more flexible than before (Brandwatch, where James is SVP Engineering, has gone remote-first).

James divided the responses to remote work into three groups – those who wanted to be in an office full time, those who wanted to be remote at all times, and those who wanted a sustainable mix of the two. Companies that can be flexible about people’s needs (for example, childcare and health) open up a wider pool of talent.

I worked remotely with ribot, mixing my time between home, office, hotels, client sites and airports. Since the pandemic, I’ve been at home only, and I can see a definite advantage to having a base office. But I have also collaborated successfully with entirely remote colleagues, some of whom I never met in person. I believe that flexibility is vital in the modern world.

James himself is fully remote, and his experiences with this led into his second talk, on the distributed working mindset. This is the idea that if one colleague is remote, then companies should behave as if all of them are. James looked at a number of strategies around this. One example here was meetings: if anyone is remote, then everyone should interact via screens so there is a level playing field.

The practice of ‘broadcasting’ is also worth encouraging. This is the idea of leaving a public written trace of all smaller group discussions, particularly the reasons for any decisions made. This sort of asynchronicity removes the need to be present at specific times. Recording meetings and providing good documentation makes companies more resilient. For example, someone who is ill can catch up easily.

Something else James suggested, which should be done more often, is the use of ‘spotters’ in meetings, to draw out people who have not spoken. I also think more companies would benefit from looking at how to make more of the scrum ceremonies asynchronous. I’ve always found the timing of a daily stand-up to be a point of friction.

The third talk looked at Staying Connected through Deliberate Communication. One thing lost through remote working is spontaneous interaction, the serendipity of casual in-person conversations. It’s not just a case of adding a burden of socialising through Zoom. (I’ve pretty much stopped all non-work video calls.)

James’s suggestions here included setting up informal one-to-one mentoring, and he described how this had been done at Brandwatch. He also made a strong case for connecting people (asynchronously) through their passions, whether cooking, pets or exercise. Apparently teams at Brandwatch are competing to cover the distance between the Brighton and Boston offices. James also suggests explicit agenda-free “coffee break” get-togethers as part of the work day.

One of the things I love about these talks is that their form matches their message. James has allowed the audience to choose topics for future talks, as well as answering specific questions. Some of these have been very useful, such as how to handle colleagues who don’t turn on video, or tools for technical white-boarding.

The fourth talk will be live on Wednesday afternoon, on the topic of managing remote meetings. If you would like a reminder about it, please check the Meetup Page.

Categories
infrastructure programming

Why Use Couchbase?

Back in the Noughties, when I first started web programming, data storage choices were straightforward. Your options were limited to RDBMS systems (Oracle if there was a budget, MySQL otherwise); if you needed to store binary data, then you could use the file system; and, in some cases, where the data was read-only, maybe you’d use a CSV file. Life was simple.

Back then, when I first heard the term ‘NoSQL’, I dismissed it. It’s never good to define something as what it isn’t, and the lack of a structured query language didn’t sound that compelling. But, over the years, NoSQL datastores have become essential, with some of them not being promoted as databases as such. The first one I used extensively was Lucene, which I didn’t really think of as a datastore. (Arguably, it isn’t strictly a datastore, but that’s another discussion.)

Now there is a wide range of choices, each with their own specific use cases. I was recently tasked with looking into Couchbase, and the first question to answer is: why use Couchbase at all?

For an overview of Couchbase we can turn to a LinkedIn blog post on Couchbase’s evolution:

Couchbase is a highly scalable, distributed data store that plays a critical role in LinkedIn’s caching systems. Couchbase was first adopted at LinkedIn in 2012, and it now handles over 10 million queries per second with over 300 clusters in our production, staging, and corporate environments. Couchbase’s replication mechanisms and high performance have enabled us to use Couchbase for a number of mission-critical use cases at LinkedIn.

Wikipedia provides a good summary in their Comparison of Structured Storage software. We can see that Couchbase is a document storage solution, similar to MongoDB, but adding high availability functionality.

Couchbase started as a memcached replacement, adding in features like persistence, replicas, and cluster resizability. Its use as a backend to LinkedIn has demonstrated its potential in large deployments, with LinkedIn having, at one point, “over 2,000 hosts running Couchbase in production with over 300 unique clusters”. Or there were the 100 hosts used for Draw Something – 2 billion drawings were stored, at a rate of up to 3000 per second.

One of the interesting problems with learning a lot of modern technologies is that their potential only really comes out at scale. Speaking as a developer, I would be hard pushed to find a reason to use Couchbase over Mongo unless the use case involved master/client on mobile, or a website I expected to scale massively. But it is easy to get started with a basic Couchbase site thanks to JHipster and Docker.

There are clear instructions online for getting going with JHipster, and you can have a working Couchbase application within about an hour, even with no JHipster experience. The basic steps are:

  • Install JHipster
  • Start JHipster and run through the basic application creation options. It’s easiest to work with a monolithic application if you’re new to JHipster. Make sure to pick Couchbase as the database, but otherwise the defaults will work well enough.
  • In the newly created application folder, go to src/main/docker and type the command ‘docker-compose -f couchbase.yml up’
  • In the main JHipster folder, use the command ‘./gradlew’ to build and run the Spring Boot application.
  • The application can then be viewed at http://localhost:8080. I had to use Chrome to get this working successfully.

The basic JHipster application, with no customisation, includes a basic user management system. The Couchbase Docker instance can be accessed at http://127.0.0.1:8091/, username ‘Administrator’, password ‘password’.

Clicking through to the ‘Buckets’ option on the left-hand side menu shows the different data partitions available. Clicking on the ‘Documents’ link for the partition we have created shows the basic user data that has been added.

This is not much of an application, but by following the JHipster instructions for creating new entities, we get CRUD options for new pieces of data. While this produces a relatively simple application, JHipster has produced an entire stack, including Spring Data Couchbase. The work so far could be customised to provide a full application, or used as a working example of how to integrate Couchbase into a Spring Boot application. (One advantage of JHipster is that the application produced can be developed subsequently without any reference to, or use of, JHipster.)
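JHipster generates all of the persistence code for you, but as a rough sketch of what a Spring Data Couchbase repository looks like (BlogPost and its repository are hypothetical names for illustration, not JHipster output):

// BlogPost.java - a hypothetical entity stored as a JSON document in Couchbase.
import org.springframework.data.annotation.Id;
import org.springframework.data.couchbase.core.mapping.Document;

@Document
public class BlogPost {

    @Id
    private String id;
    private String title;

    public String getId() { return id; }
    public String getTitle() { return title; }
}

// BlogPostRepository.java - Spring Data derives the query from the method name.
import java.util.List;
import org.springframework.data.couchbase.repository.CouchbaseRepository;

public interface BlogPostRepository extends CouchbaseRepository<BlogPost, String> {
    List<BlogPost> findByTitle(String title);
}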

Categories
infrastructure

Microservice Lessons For Monoliths

InfoQ recently published a new talk by Adrian Cockroft, Managing Failure Modes in Microservice Architectures.

It’s a good talk, but I think the title is a little restrictive. Many of the problems with microservices are problems with all computer systems – it’s just that microservices punish mistakes more brutally.

In a recent job interview I was asked if I could deal with the specific demands of microservices when (the interviewer felt) much of my experience involved monoliths or small groups of services. My response was that principles such as loose coupling, monitoring, and resilience are needed in all systems.

It’s a rare system that has no external dependencies. I once worked with a monolithic system that made a call to Salesforce as part of the login process. When Salesforce went down, users could not log into the system. The issue was obvious – we had to manage failures in the external dependency. Microservices, by involving many more dependencies, force people to engage with this, or else suffer massive disruption and downtime.
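As a rough illustration (mine, not from the talk), here is one way to keep a non-essential dependency like that off the critical path, using only the standard library. The call gets a timeout and a fallback value instead of being allowed to block the login:

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class LoginService {

    // Hypothetical external call, e.g. fetching CRM data during login.
    private String fetchCrmProfile(String userId) {
        return "profile-for-" + userId; // in reality, a remote call that may hang
    }

    public String login(String userId) {
        // If the dependency hasn't answered within 2 seconds, or it throws,
        // fall back to a default and let the login succeed anyway.
        String profile = CompletableFuture
                .supplyAsync(() -> fetchCrmProfile(userId))
                .completeOnTimeout("no-profile-available", 2, TimeUnit.SECONDS)
                .exceptionally(t -> "no-profile-available")
                .join();
        return "logged in: " + userId + " (" + profile + ")";
    }
}

In practice, a circuit-breaker library such as Resilience4j or Hystrix gives you the same idea plus failure-rate tracking, so repeated failures stop hitting the dependency at all.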

One of the most interesting things about Cockroft’s talk was his observation about the problem with disaster recovery:

Your switching processes, and code, and practices are not well-tested. You’ve got an unreliable switch between your primary and backup datacenter. You might as well just not have the backup datacenter.

The answer here is to make the failure systems part of the normal workings; to be constantly switching between datacenters, for example.

There was also an interesting related point about the chaos monkey. This was not just about testing resilience:

We were enforcing autoscaling. We wanted to be able to scale down. If you think of an autoscaler scaling up and scaling down, to scale down, it has to be able to kill instances. The Chaos Monkey was there to enforce the ability to scale down horizontally scaled workloads. That was actually what it was for. It was to make sure you didn’t put stateful machine, stateful workloads in autoscalers. Then you can have this badge of honor gamified a bit. “My app survived all of this chaos testing, and it’s running in this super high availability environment, and your app didn’t. Do you mean your app’s not important?” You can gamify it a bit.

Many of the principles discussed in relation to massive companies such as Netflix are needed by everyone. The good thing about this is that the tools being produced empower companies of all sizes. Microservices enforce a more mature attitude to failure; but failures occur in systems of all sizes.



Categories
brightonjava

Brighton Java 2020

We’ve just announced the first two events for Brighton Java in 2020. On Tuesday January 28th, we have a talk on Serialization Vulnerabilities in Java from Joe Beeton, sponsored by Amex.

The talk looks at how Java applications can be vulnerable to serialization attacks, and how they can be protected. The talk will be useful to Java developers of all experience levels. Joe has done some interesting work around this topic, which I’m looking forward to hearing more about.

In February, we have Luke Whiting talking on Kubernetes in the Data Center: Theory vs Reality. This talk will be a little different to the usual format. Luke will start with a talk he gave at the start of a project, following it up with the lessons learned in hindsight.

We’re also putting together future events, including a collaboration with Silicon Brighton and an exciting speaker for April. But we’re always on the lookout for new speakers, and welcome people of all levels – introductory talks on a topic are just as useful as expert sessions. If you’ve not spoken before, we’re happy to help you prepare. Either leave a comment, or email james@orbific.com.

I’m also hoping to do some practical first steps sessions. The JVM ecosystem feels more exciting than ever right now, but it’s sometimes hard to work out how to begin with new topics. Hopefully, we can provide a collaborative environment for this. More soon.

Categories
projectmanagement

Low Inventory development

How many open tickets do you have in your tracking software?
And how many tickets were closed in the last six months?

In a lot of places I’ve worked, the relative size of these two numbers meant it would take years to clear all the open tickets. When the company has been going a long time, there are even sometimes tickets relating to issues that were fixed years before, often as the side-effect of another change.

These other tickets turn up in searches and need to be considered, if only in a small way, when new work is prioritised. While it’s useful to have long-term plans and roadmaps, I am not convinced that the ticketing system is the right place for them.

The more tickets you have, the more temptation there is to throw things in the backlog that need doing one day. Things get lost, and it’s hard to be certain you’re working on the most important thing.
If something is not going to get done in the next year, there’s no point tracking it.

What about bugs and live issues?
The best platforms I’ve worked on have had a zero-defect policy. If something is worth fixing, it should be fixed as soon as possible. If you’re prepared to live with it, and you’re not going to get round to fixing it any time soon… why bother recording it?

What about technical debt?
If your technical debt is causing problems, you’ll be able to figure out the most important thing to focus on. The rest of those things that you might get onto next year… still might not be a big enough problem to actually solve next year.

Massive backlogs are a weight we don’t need to carry. Tickets you’d love to do but won’t are clutter. They’re self-delusion.

Do some new year tidying up: delete the tickets you’re not going to work on, and focus on what you actually can do.

Categories
Uncategorized

My visit to J-o-X (Java Oxford meetup)

Last week, I visited J-O-X, the Oxford Java user group, to give a talk on JHipster and to meet the organisers.

I’ve been running the Brighton Java group since 2012, when I founded it with David Pashley. The group has ebbed and flowed over the years and, together with the current organising team, we’re always looking at ways to improve it. Visiting different user groups is a part of this. It’s a chance to see what has worked for them, and what improvements we can make.

The other good thing about visiting Oxford was finally being able to visit the Pitt-Rivers museum, which was as strange and as wonderful as I’d been told. I saw lots of amazing things, including one figure I wrote a story about, based on a photograph; the story was later printed as a postcard promoting the Quick Fictions app.

I don’t think I gave the best delivery of my JHipster talk, being slightly spooked by technical issues that made me decide against the live demo. But people seemed to enjoy it, and I had some interesting questions, as well as some interesting discussions afterwards. I discovered that many of the people attending worked for the company Diffblue and had PhDs – it could have been a slightly intimidating crowd, but everyone was very friendly.

The other speaker was Pascal Kesseli, who spoke about Diffblue’s JBMC tool (also on GitHub). It’s a bounded model checker that attempts to find bugs in software. Based on the demo, this looks like a fascinating technology, with the potential to find bugs in software that are not obvious, even after unit testing.

Thank you to Haybrook IT Resourcing Limited for organising the event, and to Diffblue for their sponsorship.

Categories
Uncategorized

Reflections on a workshop

At the start of last week, I ran an API Workshop for Brighton Java. It was a fun event, and people enjoyed it. But nothing is ever perfect, and I wanted to note some potential improvements.

The workshop had been designed to be freeform. As outlined in the previous post, I wanted to provide a space where people could play with ideas. Everyone made some progress towards an idea, but future sessions could be much more engaging. The main changes to make are:

  • There needs to be a series of wins at different levels. The first stage would be getting the basic example working, then perhaps amending that, and then converting the code for new functionality. Obviously, not all attendees will manage all of this, but everyone should come out of the workshop having achieved something.
  • While I had provided a working example and asked people to bring along a laptop and IDE, not everyone found it straightforward to get this example working. Java is never going to achieve the ‘write once, run anywhere’ nirvana that Sun promised. Some people didn’t have a compatible JDK on their machine; others were more comfortable with Android and wanted to convert the example. One possibility is using VMs as a fallback, but that brings its own issues.
  • A workshop that was already potentially complicated had the additional challenge of picking an idea to work on. I should have prepared a set of possible examples, which people could be inspired by, or even reproduce.

Running an effective workshop in a couple of hours is challenging, which is why good facilitators are in demand and highly paid (and why it takes years of training to become a teacher). But I’m very excited about what I could achieve with some more preparation on the points above. It’s definitely a session I’d like to run again.

Categories
Uncategorized

Fun with APIs

The next Brighton Java session is a workshop about building small applications by combining APIs (it’s September 23rd, details and signup on Meetup). I think everyone will be able to produce something interesting in a couple of hours, despite Java’s reputation for being heavyweight (although we might have to skip error handling and testing).

There is a good resource of public APIs on GitHub. The workshop will involve picking two of these APIs and combining them (I’m old enough to remember this being referred to as a mashup). There are particularly interesting possibilities involving time and location, but as a practice example, I combined Kanye.REST‘s random Kanye quotes with cat pictures.

(There is an API endpoint returning random APIs, so I guess you could even produce a service to suggest mashups…)

I’m going to prepare a full worksheet for the event, but in the meantime, here are the simple steps I followed to produce a very simple API:

Set up basic project

I downloaded a Spring Boot guide application to use as a basis and imported it into IntelliJ. This gave me a build system with no need to mess around. When importing the project, I made sure to use external sources for the build tooling, which set up all the libraries appropriately.

Add dependencies

The example I worked with used REST, so I added the Jersey libraries to Gradle.

// Jersey artifacts should share a version, to avoid runtime clashes.
compile 'org.glassfish.jersey.containers:jersey-container-servlet:2.28'
compile group: 'org.json', name: 'json', version: '20190722'
compile 'org.glassfish.jersey.inject:jersey-hk2:2.28'
compile 'org.glassfish.jersey.media:jersey-media-json-jackson:2.28'

EDIT – 23/9 – Jersey is probably not the best library to use for REST APIs. Spring includes RestTemplate, which also has some great testing support in TestRestTemplate. However, it does not seem to have a user-agent set, which upsets a lot of servers. So I am going to use Jersey for this demonstration to make the code simpler, although I would probably use RestTemplate if I was going into production. Thanks to David Pashley for raising and discussing this issue with me.
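For reference, here is roughly what the quote call would look like with RestTemplate, setting a User-Agent header explicitly to keep picky servers happy. This is a sketch (it reuses the Quote mapping class from the next step, and I haven’t tested it against these endpoints):

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

public class QuoteClient {

    public String fetchQuote() {
        RestTemplate restTemplate = new RestTemplate();
        HttpHeaders headers = new HttpHeaders();
        // Some public APIs reject requests that have no User-Agent header.
        headers.set(HttpHeaders.USER_AGENT, "demo-mashup/0.1");
        HttpEntity<Void> request = new HttpEntity<>(headers);
        ResponseEntity<Quote> response = restTemplate.exchange(
                "https://api.kanye.rest", HttpMethod.GET, request, Quote.class);
        return response.getBody().getQuote();
    }
}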

Add classes to manage responses

Next I made some classes to map the responses. This produces a coupling between the endpoint and the code. I’m not sure if there is a way to make this more flexible, but as this is a quick-and-dirty demo, it will do.

import com.fasterxml.jackson.annotation.JsonProperty;

public class Quote {

    @JsonProperty("quote")
    public String quote;

    public Quote() { }

    public String getQuote() {
        return quote;
    }
}
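The controller in the next step also needs a matching CatPic class, which the original listing doesn’t show. It would look something like this, assuming the cat-picture endpoint returns its image URL in a ‘file’ field:

import com.fasterxml.jackson.annotation.JsonProperty;

public class CatPic {

    // The JSON field name is an assumption based on the endpoint's response.
    @JsonProperty("file")
    public String location;

    public CatPic() { }

    public String getLocation() {
        return location;
    }
}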

Add the basic code

I then edited the HelloController class within the downloaded Spring Boot project, and wrote some quick code for calling the APIs. This is a very long way from production quality, with no error management, but it does the job:

package hello;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.MediaType;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

    @RequestMapping("/")
    public String index() {
        String quote;
        String location;
        try {
            Client client = ClientBuilder.newClient();

            // Fetch a random quote from the Kanye.REST API.
            WebTarget quoteTarget = client.target("https://api.kanye.rest");
            quote = quoteTarget.request(MediaType.APPLICATION_JSON_TYPE)
                    .get(Quote.class).getQuote();

            // Fetch a random cat picture URL.
            WebTarget catTarget = client.target("https://aws.random.cat/meow");
            location = catTarget.request(MediaType.APPLICATION_JSON_TYPE)
                    .get(CatPic.class).getLocation();
        } catch (Throwable t) {
            // Quick-and-dirty error handling for the demo.
            return t.getMessage();
        }
        return "<h1>" + quote + "</h1><img src=\"" + location + "\"/>";
    }
}

At this point the code can be executed from the command line with the command ‘gradle bootRun’.


Categories
programming

Lifelong learning in software development

I always wanted to have a Brighton Java talk on lifelong learning. The techniques, tools and fashions in software development are constantly changing – how do you keep on top of this churn while doing an often-stressful job and maintaining the family and social life you have a right to? I never found anyone to give this talk, and now I don’t need to, as a fantastic talk by Trisha Gee called Becoming a Fully Buzzword Compliant Developer has been released on InfoQ.

In the talk, Gee sets out a simple step-by-step approach to learning and career development, without ignoring how expectations around this impact often-excluded groups in IT. (Remember: asking for interview candidates’ GitHub profiles makes a lot of assumptions about your potential employees and their free time.)

Gee also looks at how quickly new ideas are absorbed into the mainstream of development; how to discover and assess new buzzwords; the importance of real-world meetups (like Brighton Java!); and the importance of “Enough knowledge to blag your way through a conversation in the pub.”

Sometimes, when discussing training and development outside work, developers point out that there is no requirement for, say, HR staff or office managers to do their job as part of a hobby. But look at it another way: modern careers last half a century or so. You want to find a way of engaging with software development and lifelong learning that is exciting enough that you actually want to do it. Even maintaining the simplest hobby site can help you to keep up with the latest exciting new things.

There is also a flip-side to Gee’s talk which is less often discussed – how little learning Java enterprise developers can get away with, using the slow pace of change to avoid any self-development. Mobile and JS devs tend to engage much more because their careers depend on it (an Android dev who does not use the latest APIs will soon be unemployable). New ideas and tools can take a long time to filter through to the enterprise. The downside of this is that it can take even longer to get them right – TDD, agile and microservices suffer in enterprises due to the lack of an engaged, interested and playful attitude among devs and managers.

Categories
infrastructure

Deploying a prototype with JHipster

In my previous post, I looked at the minimal infrastructure for a hobbyist webapp in JHipster. Now I want to look at the process for putting a prototype into production.

What do I mean here by a prototype? I mean a simple first cut of a production site. It needs to be simple, while achieving the standards required of a professional site. It also needs to be better documented and reproducible than a one-person hobby site: I should be able to hand over the code and documentation to another developer and never have them call me for help.

The background to this post is a requirement to produce a simple platform for a local company. The work needs to be supportable by other people, as I don’t want to be the only person who can work on this. This project provided a good opportunity to look at moving beyond the hobby-site model.

Ultimately, the changes made on top of what was done last time are adding continuous integration; planning zero-downtime deployment; and sketching out a roadmap for the future.

Platform

In the previous post I discussed the trade-offs between AWS and Digital Ocean. AWS has powerful infrastructure, but it is only worth taking on when what it saves outweighs the additional time needed to manage that infrastructure. There also needs to be a commitment to pay the ongoing platform costs, when long-term budget and timescales are not yet set. For the same reason, I dismissed the idea of a managed database, preferring the quick and cheap option of installing MySQL on the server.

For this initial prototype set-up, I think we are still at the stage where Digital Ocean has the lead. However, one of the downsides of this is losing some resilience in the system, so good monitoring needs to be in place.

Jenkins

As this is a professional site, the deployments need to be accountable – which means the builds need to be repeatable. JHipster provides support for setting up continuous integration, and this is so simple that it is inexcusable not to use it. By using Jenkins for production builds we can be sure these are done correctly, with no danger of files outside version control polluting the project.

I added a new Digital Ocean server using their Jenkins tutorials and soon had a CI server up and running. The first time I tried this, I had a few problems with the machine running out of memory, but assigning some virtual memory fixed this.

A basic change deployment process

A deployment process should be completely automated – that way you can be sure the process takes place the same way every time. However, if the process has not been automated, it does at least need to be documented.

If the process is not being automated at the start of a project, then some critical thought needs to be given as to when this will happen. As more features get added, performing an automated deployment only gets more complicated. It’s technical debt: over years, days can be lost on something like manual deployments, while time can’t be found to fix things there and then.

At this stage I am assuming a single production server, configured as in my previous post, ie with an Apache Server to handle HTTPS connections and proxy to the application server.

The deployment process should be clearly documented so that anyone working with the site can find it. That process will look something like this:

  • Create a ticket, summarising the changes that need to be made. You might not use Jira, but some sort of tracking system should be used, to provide a trail of events.
  • Define a series of (preferably automated) tests that will define whether the changes are successful.
  • Create a branch, add the tests and make the changes. Database updates must be backwardly-compatible and work with both the new and current version of the application (for example, take care removing columns).
  • If a code reviewer is available, they should review the changes before they are merged. If everything seems fine, the branch should be rebased against master then merged.
  • The changes are picked up by Jenkins and a production jar is produced.
  • The production jar is started using a new port via a command-line option (-Dserver.port=XXXX). If database changes are included in Liquibase, they will be applied at this point. Before switching traffic, check that the new instance is healthy (see the sketch after this list).
  • The Apache configuration is amended to point to this new version. Apache is then restarted.
  • The old application server is removed.
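A minimal sketch of that health check, using the JDK’s built-in HTTP client. The port and path are assumptions: JHipster exposes the Spring Boot Actuator endpoints under /management by default, and here the new instance is presumed to be running on port 8081:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class HealthCheck {

    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8081/management/health"))
                .timeout(Duration.ofSeconds(5))
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        // Only repoint Apache once the new instance reports itself as up.
        if (response.statusCode() == 200 && response.body().contains("UP")) {
            System.out.println("New instance healthy: safe to switch Apache over.");
        } else {
            System.out.println("New instance not ready: " + response.body());
            System.exit(1);
        }
    }
}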

We could use a load balancer for the deployments, which currently costs $10.80 a month on Digital Ocean. This would offer several more options for deployment, but these would also add costs and take more time to set up.

As we are using JWT, users don’t notice the switch of servers: tokens are validated against a signing key rather than against server-side session state, so either instance can accept them. There are issues with JWT, such as the difficulty of revoking tokens when we need to end a session, but that is not a problem in this case.

Recovery

One of the most interesting trade-offs in these systems is between preparing for the possibility of error and dealing with the issues that actually occur. Getting a site close to 100% uptime is incredibly expensive – for example, what if the provider suffers an outage? Should you be able to fall back to another cloud provider? Time and money might be better spent on getting things out there and exploring user responses.

I would suggest there are two important things to consider:

  • Given the time/money trade-off, what are acceptable SLAs for this simple site?
  • If the whole system were to be deleted, can the site be rebuilt in a sensible amount of time? How much data would be lost?

What’s missing

The above outlines what I would need for a minimum viable site deployment. There is some distance between this and what I would expect from a fully-featured production webapp. I will cover these in future posts:

  • A full continuous integration pipeline, including quality assurance tools such as Sonar and FindBugs.
  • A more nuanced git branching strategy.
  • Spreading the site across multiple hosts, increasing the robustness and allowing scaling.
  • Capacity and error logging/alerting. This needs to be persistent, and to immediately communicate serious problems. Ideally, load spikes can be responded to automatically.
  • Better database recovery planning.
  • Clearer deployment tracking, so we can identify which version of the application a bug report occurred against.

The other thing to consider is at what point the savings would justify moving across to AWS.

Conclusion

The changes between this type of application and a hobbyist version are minimal, and JHipster’s support for getting continuous integration running is a great help. The main work is in tightening up the process, so that other people can become involved as easily as possible.