Categories
infrastructure

Microservice Lessons For Monoliths

InfoQ recently published a new talk by Adrian Cockcroft, Managing Failure Modes in Microservice Architectures.

It’s a good talk, but I think the title is a little restrictive. Many of the problems with microservices are problems with all computer systems – it’s just that microservices punish mistakes more brutally.

In a recent job interview I was asked if I could deal with the specific demands of microservices when (the interviewer felt) much of my experience involved monoliths or small groups of services. My response was that principles such as loose coupling, monitoring, and resilience are needed in all systems.

It’s a rare system that has no external dependencies. I once worked with a monolithic system that made a call to Salesforce as part of the login process. When Salesforce went down, users could not log into the system. The issue was obvious – we had to manage failures in the external dependency. Microservices, by involving many more dependencies, force people to engage with this, or else suffer massive disruption and downtime.
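The shape of that fix is worth spelling out, because it applies to monoliths just as much as microservices. The sketch below is an illustration rather than the code we actually wrote – the helper class, the two-second timeout and the fallback value are all invented for the example – but the idea is the same: treat the external call as optional, bound it with a timeout, and degrade gracefully when it fails.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

public class ExternalCallGuard {

  // Call an external dependency with a timeout, returning a fallback value if
  // the call fails or hangs, so that the core flow (e.g. login) still succeeds.
  public static <T> T callWithFallback(Supplier<T> externalCall, T fallback, long timeoutMillis) {
    try {
      return CompletableFuture.supplyAsync(externalCall)
          .get(timeoutMillis, TimeUnit.MILLISECONDS);
    } catch (Exception e) {
      // The dependency is down or slow: log it and carry on with the fallback
      return fallback;
    }
  }

  public static void main(String[] args) {
    // Hypothetical usage: the Salesforce lookup becomes optional decoration on login
    String accountStatus = callWithFallback(
        () -> { throw new RuntimeException("Salesforce is down"); },
        "unknown",
        2000);
    System.out.println("Continuing login with account status: " + accountStatus);
  }
}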

One of the most interesting things about Cockcroft’s talk was his observation about the problem with disaster recovery:

Your switching processes, and code, and practices are not well-tested. You’ve got an unreliable switch between your primary and backup datacenter. You might as well just not have the backup datacenter.

The answer here is to make the failover path part of the normal workings of the system – for example, by constantly switching between datacenters.

There was also an interesting related point about the Chaos Monkey. This was not just about testing resilience:

We were enforcing autoscaling. We wanted to be able to scale down. If you think of an autoscaler scaling up and scaling down, to scale down, it has to be able to kill instances. The Chaos Monkey was there to enforce the ability to scale down horizontally scaled workloads. That was actually what it was for. It was to make sure you didn’t put stateful machine, stateful workloads in autoscalers. Then you can have this badge of honor gamified a bit. “My app survived all of this chaos testing, and it’s running in this super high availability environment, and your app didn’t. Do you mean your app’s not important?” You can gamify it a bit.

Many of the principles discussed in relation to massive companies such as Netflix are needed by everyone. The good thing about this is that the tools being produced empower companies of all sizes. Microservices enforce a more mature attitude to failure; but failures occur in systems of all sizes.



Categories
brightonjava

Brighton Java 2020

We’ve just announced the first two events for Brighton Java in 2020. On Tuesday January 28th, we have a talk on Serialization Vulnerabilities in Java from Joe Beeton, sponsored by Amex.

The talk looks at how Java applications can be vulnerable to serialization attacks, and how they can be protected. The talk will be useful to Java developers of all experience levels. Joe has done some interesting work around this topic, which I’m looking forward to hearing more about.

In February, we have Luke Whiting talking on Kubernetes in the Data Center: Theory vs Reality. This talk will be a little different to the usual format. Luke will start with a talk he gave at the start of a project, following it up with the lessons learned in hindsight.

We’re also putting together future events, including a collaboration with Silicon Brighton and an exciting speaker for April. But we’re always on the lookout for new speakers, and welcome people of all levels – introductory talks on a topic are just as useful as expert sessions. If you’ve not spoken before, we’re happy to help you prepare. Either leave a comment, or email james@orbific.com.

I’m also hoping to do some practical first-steps sessions. The JVM ecosystem feels more exciting than ever right now, but it’s sometimes hard to work out how to begin with new topics. Hopefully, we can provide a collaborative environment for this. More soon.

Categories
projectmanagement

Low Inventory Development

How many open tickets do you have in your tracking software?
And how many tickets were closed in the last six months?

In a lot of places I’ve worked, the relative size of these two numbers meant it would take years to clear all the open tickets. When the company has been going a long time, there are even sometimes tickets relating to issues that were fixed years before, often as the side-effect of another change.

These other tickets turn up in searches and need to be considered, if only in a small way, when new work is prioritised. While it’s useful to have long-term plans and roadmaps, I am not convinced that the ticketing system is the right place for them.

The more tickets you have, the more temptation there is to throw things in the backlog that need doing one day. Things get lost, and it’s hard to be certain you’re working on the most important thing.
If something is not going to get done in the next year, there’s no point tracking it.

What about bugs and live issues?
The best platforms I’ve worked on have had a zero-defect policy. If something is worth fixing, it should be fixed as soon as possible. If you’re prepared to live with it, and you’re not going to get round to fixing it any time soon… why bother recording it?

What about technical debt?
If your technical debt is causing problems, you’ll be able to figure out the most important thing to focus on. The rest – the things you might get onto next year – still might not be big enough problems to be worth actually solving next year.

Massive backlogs are a weight we don’t need to carry. Tickets you’d love to do but won’t are clutter. They’re self-delusion.

Do some new year tidying up: delete the tickets you’re not going to work on, and focus on what you actually can do.

Categories
Uncategorized

My visit to J-o-X (Java Oxford meetup)

Last week, I visited J-O-X, the Oxford Java user group, to give a talk on JHipster and to meet the organisers.

I’ve been running the Brighton Java group since 2012, when I founded it with David Pashley. The group has ebbed and flowed over the years and, together with the current organising team, we’re always looking at ways to improve the group. Visiting different user groups is a part of this. It’s a chance to see what has worked for them, and what improvements we can make.

The other good thing about visiting Oxford was finally being able to visit the Pitt-Rivers museum, which was as strange and as wonderful as I’d been told. I saw lots of amazing things, including a figure that I wrote a story about, based on a photo; the story was later printed as a postcard promoting the Quick Fictions app.

I don’t think I gave the best delivery of my JHipster talk, being slightly spooked by technical issues that made me decide against the live demo. But people seemed to enjoy it, and I had some good questions, as well as some interesting discussions afterwards. I discovered that many of the people attending worked for the company Diffblue and had PhDs – it could have been a slightly intimidating crowd, but everyone was very friendly.

The other speaker was Pascal Kesseli, who spoke about Diffblue’s JBMC tool (also on github). It’s a Bounded Model Checker that attempts to find bugs in software. Based on the demo, this looks like a fascinating technology, with the potential to find bugs in software that are not obvious, even after unit testing.

Thank you to Haybrook IT Resourcing Limited for organising the event, and to Diffblue for their sponsorship.

Categories
Uncategorized

Reflections on a workshop

At the start of last week, I ran an API Workshop for Brighton Java. It was a fun event, and people enjoyed it. But nothing is ever perfect, and I wanted to note some potential improvements.

The workshop had been designed to be freeform. As outlined in the previous post, I wanted to provide people with a space where they could play with ideas. Everyone made some progress towards an idea, but future sessions could be much more engaging. The main changes to make are:

  • There needs to be a series of wins at different levels. The first stage would be getting the basic example working, then perhaps amending that, and then converting the code for new functionality. Obviously, not all attendees will manage all of this, but everyone should come out of the workshop having achieved something.
  • While I had provided a working example and asked people to bring along a laptop and IDE, not everyone found it straightforward to get this example working. Java is never going to achieve the ‘write once, run everywhere’ nirvana that Sun promised. Some people didn’t have a compatible JDK on their machine; others were more comfortable with Android and wanted to convert the example. One possibility is using VMs as a fallback, but that brings its own issues.
  • A workshop that was already potentially complicated had the additional challenge of picking an idea to work on. I should have prepared a set of possible examples, which people could be inspired by, or even reproduce.

Running an effective workshop in a couple of hours is challenging, which is why good facilitators are in demand and highly paid (and why it takes years of training to become a teacher). But I’m very excited about what I could achieve with some more preparation on the points above. It’s definitely a session I’d like to run again.

Categories
Uncategorized

Fun with APIs

The next Brighton Java session is a workshop about building small applications by combining APIs (it’s September 23rd, details and signup on Meetup). I think everyone will be able to produce something interesting in a couple of hours, despite Java’s reputation for being heavyweight (although we might have to skip error handling and testing).

There is a good resource of public APIs on github. The workshop will involve picking two of these APIs and combining them (I’m old enough to remember this being referred to as a mashup). There are particularly interesting possibilities involving time and location, but as a practice example, I combined Kanye.REST’s random Kanye quotes with cat pictures.

(There is an API endpoint returning random APIs, so I guess you could even produce a service to suggest mashups…)

I’m going to prepare a full worksheet for the event, but in the meantime, here are the simple steps I followed to produce a very simple API:

Set up basic project

I downloaded a Spring Boot guide application to use as a basis and imported this into IntelliJ. This gave me a build system with no need to mess around. When importing the project, I made sure to use external sources for the build tooling, which set up all the libraries appropriately.

Add dependencies

The example I worked with used REST, so I added the Jersey libraries to gradle.

compile 'org.glassfish.jersey.containers:jersey-container-servlet:2.25.1'
compile group: 'org.json', name: 'json', version: '20190722'
compile 'org.glassfish.jersey.inject:jersey-hk2:2.28'
compile 'org.glassfish.jersey.media:jersey-media-json-jackson:2.22'

EDIT – 23/9 – Jersey is probably not the best library to use for REST APIs. Spring includes RestTemplate, which also has some great testing support in TestRestTemplate. However, it does not seem to set a user-agent by default, which upsets a lot of servers. So I am going to use Jersey for this demonstration to keep the code simpler, although I would probably use RestTemplate if I was going into production. Thanks to David Pashley for raising and discussing this issue with me.
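If you do end up using RestTemplate, one way around the user-agent issue is to set the header explicitly on each request. This is only a sketch – the class name and the user-agent string are invented for illustration, and it reuses the Quote class defined in the next section:

import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.HttpMethod;
import org.springframework.http.ResponseEntity;
import org.springframework.web.client.RestTemplate;

public class QuoteClient {

  public String fetchQuote() {
    RestTemplate restTemplate = new RestTemplate();

    // Some servers reject requests without a User-Agent, so set one explicitly
    HttpHeaders headers = new HttpHeaders();
    headers.set(HttpHeaders.USER_AGENT, "api-workshop-demo");

    ResponseEntity<Quote> response = restTemplate.exchange(
        "https://api.kanye.rest",
        HttpMethod.GET,
        new HttpEntity<>(headers),
        Quote.class);

    return response.getBody().getQuote();
  }
}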

Add classes to manage responses

Next I made some classes to map the responses. This produces a coupling between the endpoint and the code. I’m not sure if there is a way to make this more flexible, but as this is a quick-and-dirty demo, it will do.

import com.fasterxml.jackson.annotation.JsonProperty;

public class Quote {

  @JsonProperty("quote")
  public String quote;

  public Quote() { }

  public String getQuote() {
    return quote;
  }
}
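The controller in the next section also uses a CatPic class for the cat picture endpoint, which I haven’t reproduced here. It follows exactly the same pattern – something like the sketch below, assuming the endpoint returns the image URL in a field called file (the field name is my guess at the mapping, not copied from the original code):

import com.fasterxml.jackson.annotation.JsonProperty;

public class CatPic {

  // maps the "file" field of the JSON response onto a location property
  @JsonProperty("file")
  public String location;

  public CatPic() { }

  public String getLocation() {
    return location;
  }
}

A more flexible alternative would be to skip the mapping classes entirely and read the responses into Jackson’s JsonNode, at the cost of losing the typed fields.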

Add the basic code

I then edited the HelloController class within the downloaded Spring Boot project, and wrote some quick code for calling the APIs. This is a very long way from production quality, with no error management, but it does the job:

package hello;

import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.client.WebTarget;
import javax.ws.rs.core.MediaType;

import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class HelloController {

  @RequestMapping("/")
  public String index() {
    String quoteText;
    String catPicUrl = "";
    try {
      Client client = ClientBuilder.newClient();

      WebTarget quoteTarget = client.target("https://api.kanye.rest");
      Quote quote = quoteTarget.request(MediaType.APPLICATION_JSON_TYPE).get(Quote.class);
      quoteText = quote.getQuote();

      WebTarget catPicTarget = client.target("https://aws.random.cat/meow");
      CatPic catPic = catPicTarget.request(MediaType.APPLICATION_JSON_TYPE).get(CatPic.class);
      catPicUrl = catPic.getLocation();
    } catch (Throwable t) {
      return t.getMessage();
    }
    return "<h1>" + quoteText + "</h1><img src=\"" + catPicUrl + "\"/>";
  }
}

At this point the code can be executed from the command line with the command ‘gradle bootRun’.


Categories
programming

Lifelong learning in software development

I always wanted to have a Brighton Java talk on lifelong learning. The techniques, tools and fashions in software development are constantly changing – how do you keep on top of this churn while doing an often-stressful job and maintaining the family and social life you have a right to? I never found anyone to give this talk, and now I don’t need to, as a fantastic talk by Trisha Gee called Becoming a Fully Buzzword Compliant Developer has been released on InfoQ.

In the talk, Gee sets out a simple step-by-step approach to learning and career development, without ignoring how expectations around this impact often-excluded groups in IT. (Remember: asking for an interview candidate’s github profile makes a lot of assumptions about your potential employees and their free time.)

Gee also looks at how quickly new ideas are absorbed into the mainstream of development; how to discover and assess new buzzwords; the importance of real-world meetups (like Brighton Java!); and the importance of “Enough knowledge to blag your way through a conversation in the pub.”

Sometimes, when discussing training and development outside work, developers point out that there is no requirement for, say, HR staff or office managers to do their job as part of a hobby. But look at it another way: modern careers last a half-century or so. You want to find a way of engaging with software development and lifelong learning that genuinely excites you. Even maintaining the simplest hobby site can help you to keep up with the latest exciting new things.

There is also a flip-side to Gee’s talk which is less often discussed – how little learning enterprise Java developers can get away with, relying on the slow pace of change to avoid any self-development. Mobile and JS devs tend to engage much more because their careers depend on it (an Android dev who does not use the latest APIs will soon be unemployable). New ideas and tools can take a long time to filter through to the enterprise. The downside is that it can take even longer to get them right – TDD, agile and microservices suffer in enterprises due to the lack of an engaged, interested and playful attitude among devs and managers.

Categories
infrastructure

Deploying a prototype with JHipster

In my previous post, I looked at the minimal infrastructure for a hobbyist webapp in JHipster. Now I want to look at the process for putting a prototype into production.

What do I mean here by a prototype? I mean a simple first cut of a production site. It needs to be simple, while achieving the standards required of a professional site. It also needs to be better documented and more reproducible than a one-person hobby site: I should be able to hand over the code and documentation to another developer and never have them call me for help.

The background to this post is a requirement to produce a simple platform for a local company. The work needs to be supportable by other people, as I don’t want to be the only person who can work on this. This project provided a good opportunity to look at moving beyond the hobby-site model.

Ultimately, the changes made on top of what was done last time are adding continuous integration; planning zero-downtime deployment; and sketching out a roadmap for the future.

Platform

In the previous post I discussed the trade-offs between AWS and Digital Ocean. AWS has powerful infrastructure, but it is only worth taking on when the savings on infrastructure outweigh the cost of the additional time needed to manage it. There also needs to be a commitment to pay the ongoing platform costs, which is hard to make when the long-term budget and timescales are not yet set. For the same reason, I dismissed the idea of a managed database, preferring the quick and cheap option of installing MySQL on the server.

For this initial prototype set-up, I think we are still at the stage where Digital Ocean has the lead. However, one of the downsides of this is losing some resilience in the system, so good monitoring needs to be in place.

Jenkins

As this is a professional site, the deployments need to be accountable – which means the builds need to be repeatable. JHipster provides support for setting up continuous integration, and this is so simple that it is inexcusable not to use it. By using Jenkins for production builds we can be sure these are done correctly, with no danger of files outside version control polluting the project.

I added a new Digital Ocean server using their Jenkins tutorials and soon had a CI server up and running. The first time I tried this, I had a few problems with the machine running out of memory, but assigning some virtual memory fixed this.

A basic change deployment process

A deployment process should be completely automated – that way you can be sure the process takes place the same way every time. However, if the process has not been automated, it does at least need to be documented.

If the process is not being automated at the start of a project, then some critical thought needs to be given as to when this will happen. As more features get added, setting up automated deployment only gets more complicated. It’s technical debt: over the years, days can be lost on manual deployments, while the time to fix things there and then is never found.

At this stage I am assuming a single production server, configured as in my previous post, i.e. with an Apache server to handle HTTPS connections and proxy requests to the application server.

The deployment process should be clearly documented so that anyone working with the site can find it. That process will look something like this:

  • Create a ticket, summarising the changes that need to be made. You might not use Jira, but some sort of tracking system should be used, to provide a trail of events.
  • Define a series of (preferably automated) tests that will define whether the changes are successful.
  • Create a branch, add the tests and make the changes. Database updates must be backwardly compatible and work with both the new and current versions of the application (for example, take care when removing columns).
  • If a code reviewer is available, they should review the changes before they are merged. If everything seems fine, the branch should be rebased against master and then merged.
  • The changes are picked up by Jenkins and a production jar is produced.
  • The production jar is started using a new port via a command line option (-Dserver.port=XXXX). If database changes are included in liquibase, they will be applied at this point.
  • The Apache configuration is amended to point to this new version. Apache is then restarted.
  • The old application server is removed.

We could use a load balancer for the deployments, which currently costs $10.80 a month on Digital Ocean. This would offer several more options for deployment, but would also add cost and take more time to set up.

As we are using JWT, users don’t notice the switch of servers. There are issues with JWT, such as the difficulty of revoking tokens when we need to end a session, but that is not a problem in this case.
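The reason this works is that the session state lives in the signed token rather than on a particular server: any instance that holds the signing secret can validate a token issued by another instance. Below is a minimal sketch of the idea using the jjwt library – the class name and the secret value are invented for illustration, and this is not the JHipster-generated code itself:

import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;

public class TokenChecker {

  // base64-encoded secret shared by every application instance (placeholder value)
  private static final String SECRET_KEY = "bXktc2hhcmVkLXNlY3JldA==";

  // Any instance configured with the same secret can validate the token,
  // so a user's session survives the switch between application servers.
  public static String usernameFrom(String token) {
    Claims claims = Jwts.parser()
        .setSigningKey(SECRET_KEY)
        .parseClaimsJws(token)
        .getBody();
    return claims.getSubject();
  }
}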

Recovery

One of the most interesting trade-offs in these systems is between preparing for every possible error and dealing with the issues that actually occur. Getting a site close to 100% uptime is incredibly expensive – for example, what if the provider suffers an outage? Should you be able to fall back to another cloud provider? Time and money might be better spent on getting things out there and exploring user responses.

I would suggest there are two important things to consider:

  • Given the time/money trade-off, what are acceptable SLAs for this simple site?
  • If the whole system were to be deleted, can the site be rebuilt in a sensible amount of time? How much data would be lost?

What’s missing

The above outlines what I would need for a minimum viable site deployment. There is some distance between this and what I would expect from a fully-featured production webapp. I will cover these in future posts:

  • A full continuous integration pipeline, including quality assurance tools such as Sonar and findbugs.
  • A more nuanced git branching strategy.
  • Spreading the site across multiple hosts, increasing the robustness and allowing scaling.
  • Capacity and error logging/alerting. This needs to be persistent, and to immediately communicate serious problems. Ideally, load spikes can be responded to automatically.
  • Better database recovery planning.
  • Clearer deployment tracking, so we can identify which version of the application a bug report occurred against.

The other thing to consider is at what point the savings would justify moving across to AWS.

Conclusion

The changes between this type of application and a hobbyist version are minimal, and JHipster supports us in getting continuous integration running, which is a great help. The main changes are about tightening up the process, so that other people can become involved as easily as possible.

Categories
infrastructure

Deploying a hobby site with JHipster

This is the first in a series of posts looking at JHipster deployment. This post considers the most basic deployment. Later posts will look at more maintainable set-ups, including continuous integration and deployment pipelines.

For me, one of the most exciting things about JHipster is that it makes it feasible to build hobby websites on a full-stack Java/Javascript platform. Basic functionality can be added within a few hours, and pretty good user administration comes as standard.

But, for all web applications, getting from a feature-complete version to a live application can be a problem. There are several things to consider, such as reliability, monitoring and security. This post looks into the minimal configuration required for a hobby site. I’m planning to maintain it as a work-in-progress, so if you have any questions or suggestions, please leave a comment below.

What do we mean by a hobby website?

By ‘hobby website’, I mean something that is created and run for fun. Since it’s not intended to make money, the requirements for uptime and stability are lower than for a professional site, and we are probably taking a few shortcuts in terms of process. We want it to chug along by itself while we check in with it at weekends; we assume we will get pinged for errors, but we won’t respond to them immediately. And we probably want to spend more time working on code rather than infrastructure.

Indeed, I suspect most people considering a hobby site are not going to run full test suites. While proper testing is more efficient in the long run, and essential on large or collaborative projects, some people enjoy the immediacy of seat-of-the-pants development. Certainly, reputational risks are lower when doing a hobby website. I’m not saying that it’s correct to avoid setting up a proper process with continuous delivery – but for most people that’s not what they want from a project they work on in their spare time.

Platform

Unless you have a very generous employer, you’re going to need somewhere to host the application and in the long term, this will cost something. If you’re eligible for the free tier on AWS, or GCP’s free offerings, that’s a great help, otherwise you’re looking at a paid option. While a lot of people offer virtual servers, two popular options are Amazon’s AWS and Digital Ocean.

While AWS is a powerful and sophisticated platform, it’s probably overkill for most hobby sites. Setting up an AWS account involves a number of subtleties, and the additional features can be expensive. So, while I would tend towards the power of AWS for a commercial application, I favour Digital Ocean for hobby sites and simple deployments. Digital Ocean’s virtual servers are about half the price of an Amazon server ($5 a month for the lowest level, $10 for something a little more powerful). Since we’re talking about sites that aren’t going to make money, reducing costs is probably good.

Getting a server up-and-running on Digital Ocean is straightforward, and they provide great step-by-step tutorials. The first step is the basic server set-up. Digital Ocean seem to be offering a managed database service, but it’s also easy to install MySQL and set up a new user.

Setting up email is best done through a third-party supplier. Mailgun’s SMTP service is easy to configure with JHipster, and offers up to 10,000 emails a month for free.

Running the app

There are various options for running an app on a server, and Spring Boot’s documentation has a section on deployment and installation of Spring Boot apps. The main takeaway from this is do not run the Spring Boot application as root.

I’ve actually been quite lazy. Rather than setting up a service, I ran my application from the command line and backgrounded it. I wouldn’t do this on an app I was producing professionally – but I have a monitoring service on the app and it’s stable enough for my needs.

One other thing – the Spring Boot app should be listening on port 8080, since we’re not going to have it communicating directly with the internet. Instead, we will use an Apache proxy to handle HTTPS:

HTTPS

HTTPS is a non-negotiable part of any web project these days. You can run a web application over HTTP but:

  • Browsers will give lots of warnings and this provides a horrific user experience (and quite rightly so!)
  • You’re risking users sending sensitive data in the clear. Many users have horrible password strategies, and reuse the same one for multiple sites. So, an error on your site risks exposing other accounts and causing catastrophic loss for that user. While you aren’t responsible for other people’s bad password management, eliminating unnecessary risks is good.

Spring Boot supports HTTPS, but it’s a bit of a hassle to set up multiple connectors so that an app can listen on both HTTP and HTTPS, with HTTP requests redirecting to a secure connection. Given the efficiency of the EFF’s Certbot scripts, it’s easier to set up Apache as a reverse proxy. In addition, terminating the HTTPS connections with Apache makes sense, as Apache is better at managing HTTPS than Spring Boot.

Once again, Digital Ocean offers some useful tutorials on installing and setting up Apache and using Certbot to set up HTTPS. Certbot configures the Apache instance to redirect HTTP requests to HTTPS.

The basic set-up in the tutorials is for a static site, but it’s a simple matter to update this. First, from the command line, enable some additional Apache modules:

sudo a2enmod proxy
sudo a2enmod headers
sudo a2enmod proxy_http

Then, finally, remove the DocumentRoot directive from the ssl.conf configuration and add in the lines to proxy HTTP requests to port 8080:

ProxyPreserveHost on
RequestHeader set X-Forwarded-Proto https
RequestHeader set X-Forwarded-Port 443
ProxyPass / http://127.0.0.1:8080/
ProxyPassReverse / http://127.0.0.1:8080/

Preventing Terrible Things

Whenever anything is put live on the Internet, thought needs to be given to the risks and costs involved. It’s impossible to be 100% safe, but obvious problems need to be prevented. We don’t want our hobby website to ever give us a horrible sinking feeling that something has gone very, very wrong. So it’s worth thinking about the things that could go wrong with any application exposed to the internet:

  • Running up massive bills with a hosting provider through insecurity in our user account with them.
  • Leaking user data (which has potential legal ramifications)
  • The host being compromised, allowing it to be used for nefarious activities
  • Losing user data (which might annoy people, causing them to annoy us)

The measures discussed in this post should mitigate most of these risks, but you should take time to consider things such as data and password security to make sure you’re comfortable with the application you’re running.

One thing to be aware of is that all dependencies should be regularly upgraded to make sure the latest versions are used. The JHipster upgrade tool is provided for this.

This is a good point to remind you that care must be taken with the secrets stored in the JHipster application’s application-prod.yml file. Do not carelessly commit these secrets to git and then push to a public repo. Sounds obvious, but JHipster automatically adds this file to the repository.

Database Backups

Any application which creates and maintains data needs some sort of backup. A manual solution is going to be more hassle than it’s worth, as is a full up-to-the-second solution, such as would be used on a professional site.

One option is to pay a little extra to use the new managed (Postgres) database services on Digital Ocean, and let them handle backups. I’ve gone for a quick-and-dirty daily backup using automysqlbackup with the files produced rsynced automatically to another server. It means a catastrophic failure could wipe out up to 24 hours of data, but the solution was something I could set up in 15 minutes, and I am OK with the risk.

Monitoring

Given the simplicity of the app I’m working with, I’ve not bothered producing persistent monitoring of the application beyond the logs on the server. If, for some reason, the Digital Ocean server vanishes, these will be gone. I’m OK with that.

However, I do want to know if, for any reason, the Java application server falls over. I’m using Uptime Robot’s free offering to email me if the app’s homepage stops responding. This is enough for the first version of my application.

Thank you to Laurence Barry and Alex Tawse for their feedback on a draft of this post.

Categories
infrastructure

Setting up an online radio station

Recently, I was set an interesting challenge: set up an Internet radio station. I knew very little about this but was relieved to find out it was easier than expected.

Having reviewed the different options, the best one seemed to be icecast2, which is actually the backing for a lot of commercial services. Getting the basics up and running was relatively straightforward, with a couple of gotchas.

Virtual Server

I used a Digital Ocean server to host the radio station. These are reasonably-priced servers, with 1TB of transfer at the basic level. The physical hypervisors have about 1Gbps of bandwidth. In a 4-year-old forum post, it was suggested that “You should expect and plan on 300Mbps of available bandwidth (up and down) in order to plan your deployment and get the most out of your services.”

This covers a decent number of users, and hitting this limit would be a nice problem to have. A potential solution in that case would be finding a way to peer the radio through a more powerful icecast server. As far as bandwidth use went, looking at someone else’s calculations suggested that this would not be a problem in my case. So, I set up a basic Digital Ocean server to see how that would cope.
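As a rough sanity check (assuming 128kbps MP3 streams): 300Mbps of bandwidth works out at roughly 300,000kbps ÷ 128kbps ≈ 2,300 concurrent listeners, and one listener-hour at 128kbps is about 57.6MB (16kB/s × 3,600s), so 1TB of monthly transfer covers somewhere in the region of 17,000 listener-hours a month. Both figures are well beyond what this station needs.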

The server I went for was a little beefier than the basic model, with 2 CPUs and 3TB of transfer – far more than was needed. Within a few minutes I had a server set up.

Icecast

Installing icecast is easy on Ubuntu using the standard repositories. During testing we had a number of issues that seemed to be caused by not using HTTPS. In the end, it turned out that these were actually caused by using Ogg rather than MP3 for the streams. This was a good thing to discover, as getting HTTPS support working was quite a chore.

I found a good guide to setting up Icecast and Liquidsoap on the Linux Journal site: Creating an Internet Radio Station with Icecast and Liquidsoap, and it’s worth following this closely.

However, the repository versions of Icecast do not include SSL support. Adding this requires building icecast from source, using the SSL option – and installing from source doesn’t do a great job of setting up the software locations and services.

I ended up settling on a slightly weird way of installing icecast, by using apt-get to do the config, then compiling and installing the binary from source.

I was fortunate to find some helpful documentation for compiling icecast with SSL. However, this needed a little tinkering, mainly due to dependencies, and the commands I needed were:

sudo apt-get update
sudo apt install git gcc build-essential libcurl4-openssl-dev libxslt1-dev libxml2-dev libogg-dev libvorbis-dev libflac-dev libtheora-dev libssl-dev autoconf libtool pkgconf
mkdir src
cd src/
git clone --recursive https://git.xiph.org/icecast-server.git
cd icecast-server; ./autogen.sh
./configure --with-curl --with-openssl
make install
mkdir /etc/icecast2
mkdir /var/log/icecast2

I then copied the pre-prepared icecast config file into /etc/icecast2, along with the pre-prepared SSL certificate file. Then there were some last bits of set-up. I had to explicitly enable the icecast daemon in /etc/default/icecast2 to declare I had changed the passwords (done via copying across the config).

chown icecast2:icecast /etc/icecast2/icecast.*
systemctl enable icecast2
systemctl start icecast2

At this point, the icecast server would be visible on port 8000. I had to do a little work to ensure that the correct XSL files were being pointed to, but it was fairly obvious (from a 500 error) that this needed changing.

Liquidsoap

While icecast handles the streaming, a second application is needed to generate those streams, and this is the role of Liquidsoap. Again, the basics of this were provided by some helpful documentation online, this time from Linux Journal.

adduser liq
gpasswd -a liq sudo
apt-get install opam
su - liq
opam init
# say yes to changing profile
eval $(opam config env)
opam install depext
opam depext taglib mad lame vorbis cry ssl samplerate magic opus liquidsoap
opam install taglib mad lame vorbis cry ssl samplerate magic opus liquidsoap
sudo mkdir /var/log/liquidsoap
sudo chown liq:liq /var/log/liquidsoap/
mkdir /home/liq/archive
mkdir /home/liq/playlist

I then copied across the liquidsoap config file, and tested it – liquidsoap complained about a fallible source, as there was not yet a fallback file in place. The script I’m using has a test.mp3 file defined in the configuration that can be used when all other sources fail.

Other tasks

I needed to copy some other people’s SSH keys to the server to allow them to work on it. Then I needed to set up Apache and configure the firewall. Digital Ocean provide instructions for this, which are comprehensive and include firewall management. Once basic Apache was set up, I created the new vhost for the station and transferred the pre-prepared configuration files. Then I tested that this configuration was successful.

Pointing the DNS to the new server is simple enough, but there are also SSL certificates needed for both Apache and icecast. The simplest way to set these up is via certbot:

sudo add-apt-repository universe
sudo apt-get update
sudo apt-get install certbot python-certbot-apache
certbot --apache

There is also a certificate needed for icecast, as some players are unhappy with unencrypted streams.

cat /etc/letsencrypt/live/pilgrimradio.info/fullchain.pem /etc/letsencrypt/live/pilgrimradio.info/privkey.pem > icecast.pem

Other things

I used Uptime Robot to set up a basic monitoring service for the site. Putting together the appropriate liquidsoap script took some work, but the main site had some great examples.

We needed a website to send people to, with links to the streams and audio players. This was easy enough to do, with the icecast admin screen providing example HTML for linking to the streams and embedded audio players. The related Apache server needs to have HTTPS set up, and Digital Ocean provide a simple Apache SSL tutorial using certbot.

I could write another whole post about scheduling and organising content, but that is for another day. The above is a rundown of some of the issues I faced. Every set-up is going to run into its own problems. I’m happy to try answering any questions left in the comments, though.