Categories
infrastructure

Setting up an online radio station

Recently, I was set an interesting challenge: set up an Internet radio station. I knew very little about this but was relieved to find out it was easier than expected.

Having reviewed the different options, the best one seemed to be Icecast2, which actually backs a lot of commercial services. Getting the basics up and running was relatively straightforward, with a couple of gotchas.

Virtual Server

I used a Digital Ocean server to host the radio station. These are reasonably-priced servers, with 1TB of transfer at the basic level. The physical hypervisors have about 1Gbps of connectivity. In a 4-year-old forum post, it was suggested that “You should expect and plan on 300Mbps of available bandwidth (up and down) in order to plan your deployment and get the most out of your services.”

This covers a decent number of users, and hitting this limit would be a nice problem to have. A potential solution in that case would be finding a way to peer the radio through a more powerful Icecast server. As far as bandwidth use went, someone else’s calculations suggested that this would not be a problem in my case. So, I set up a basic Digital Ocean server to see how it would cope.
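
As a rough sanity check: a 128kbps MP3 stream works out at about 16KB/s, or just under 60MB per listener-hour. At that bitrate, 300Mbps of bandwidth supports over 2,000 concurrent listeners, and 1TB of monthly transfer equates to roughly 17,000 listener-hours – plenty for a small station.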

The server I went for was a little beefier than the basic model, with 2 CPUs and 3TB of transfer – far more than was needed. Within a few minutes I had a server set up.

Icecast

Installing Icecast on Ubuntu is easy using the standard repositories. During testing we had a number of issues that seemed to be caused by not using HTTPS. In the end, it turned out that these were actually caused by using Ogg rather than MP3 for the streams. This was a good thing to discover, as getting HTTPS support working was quite a chore.

I found a good guide to setting up Icecast and Liquidsoap on the Linux Journal site: Creating an Internet Radio Station with Icecast and Liquidsoap, and it’s worth following this closely.

However, the repository versions of Icecast do not include SSL support. Adding it requires building Icecast from source with the SSL option. Unfortunately, installing from source doesn’t do a great job of setting up the software locations and services.

I ended up settling on a slightly odd way of installing Icecast: using apt-get to set up the configuration and services, then compiling and installing the binary from source.

I was fortunate to find some helpful documentation for compiling Icecast with SSL. However, it needed a little tinkering, mainly around dependencies. The commands I needed were:

sudo apt-get update
sudo apt install git gcc build-essential libcurl4-openssl-dev libxslt1-dev libxml2-dev libogg-dev libvorbis-dev libflac-dev libtheora-dev libssl-dev autoconf libtool pkgconf
mkdir src
cd src/
git clone --recursive https://git.xiph.org/icecast-server.git
cd icecast-server; ./autogen.sh
./configure --with-curl --with-openssl
sudo make install
sudo mkdir /etc/icecast2
sudo mkdir /var/log/icecast2

I then copied the pre-prepared Icecast config file into /etc/icecast2, along with the pre-prepared SSL certificate file. Then there were some last bits of set-up: I had to explicitly enable the icecast daemon in /etc/default/icecast2 to declare that I had changed the passwords (done by copying across the config).
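
The relevant flag in /etc/default/icecast2 looks like this – the packaged init script refuses to start the daemon until it is set, which is how you confirm the default passwords have been changed:

ENABLE=true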

chown icecast2:icecast /etc/icecast2/icecast.*
systemctl enable icecast2
systemctl start icecast2

At this point, the Icecast server was visible on port 8000. I had to do a little work to ensure that the config pointed to the correct XSL files, but it was fairly obvious (from a 500 error) that this needed changing.
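
For a source build with the default prefix, the relevant section of icecast.xml looks something like this (the exact paths depend on where make install put the files):

<paths>
    <basedir>/usr/local/share/icecast</basedir>
    <webroot>/usr/local/share/icecast/web</webroot>
    <adminroot>/usr/local/share/icecast/admin</adminroot>
</paths>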

Liquidsoap

While Icecast handles the streaming, a second application is needed to generate those streams, and this is the role of Liquidsoap. Again, the basics were provided by the Linux Journal guide mentioned above.

adduser liq
gpasswd -a liq sudo
apt-get install opam
su - liq
opam init
# say yes to changing profile
eval $(opam config env)
opam install depext
opam depext taglib mad lame vorbis cry ssl samplerate magic opus liquidsoap
opam install taglib mad lame vorbis cry ssl samplerate magic opus liquidsoap
sudo mkdir /var/log/liquidsoap
sudo chown liq:liq /var/log/liquidsoap/
mkdir /home/liq/archive
mkdir /home/liq/playlist

I then copied across the liquidsoap config file and tested it – liquidsoap complained about a fallible source, as there was not yet a fallback file in place. The script I’m using defines a test.mp3 file in the configuration that is used when all other sources fail.
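
A minimal sketch of that pattern – the host, password and mount name here are placeholders rather than the real config:

# play the playlist; fall back to the test file if it fails
radio = fallback(track_sensitive=false,
  [playlist("/home/liq/playlist"), single("/home/liq/test.mp3")])
# send the result to Icecast as an MP3 stream
output.icecast(%mp3, host="localhost", port=8000,
  password="hackme", mount="stream.mp3", radio)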

Other tasks

I needed to copy some other people’s SSH keys to the server to allow them to work on it. Then I needed to set up Apache and configure the firewall. Digital Ocean provide instructions for this, which are comprehensive and include firewall management. Once basic Apache was set up, I created the new vhost for the station, transferred the pre-prepared configuration files, and tested that the configuration was successful.

Pointing the DNS to the new server is simple enough, but there are also SSL certificates needed for both Apache and Icecast. The simplest way to sort these is via certbot:

sudo add-apt-repository universe
sudo apt-get update
sudo apt-get install certbot python-certbot-apache
certbot --apache

There is also a certificate needed for Icecast, as some players are unhappy with unencrypted streams. It can be built by concatenating the Let’s Encrypt certificate and key:

cat /etc/letsencrypt/live/pilgrimradio.info/fullchain.pem /etc/letsencrypt/live/pilgrimradio.info/privkey.pem > icecast.pem
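
Icecast then needs to be told about the certificate and given an SSL listen socket; something like this in icecast.xml (8443 is just a port choice):

<paths>
    <ssl-certificate>/etc/icecast2/icecast.pem</ssl-certificate>
</paths>
<listen-socket>
    <port>8443</port>
    <ssl>1</ssl>
</listen-socket>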

Other things

I used Uptime Robot to set up a basic monitoring service for the site. Putting together the appropriate liquidsoap script took some work, but the main site had some great examples.

We needed a website to send people to, with links to the streams and audio players. This was easy enough to do, with the Icecast admin screen providing example HTML for linking to the streams and embedding audio players. The related Apache server needs HTTPS set up, and Digital Ocean provide a simple Apache SSL tutorial using certbot.
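
The embed itself is just a standard HTML5 audio element pointing at a stream URL – the mount name and port here are illustrative:

<audio controls>
    <source src="https://pilgrimradio.info:8443/stream.mp3" type="audio/mpeg">
</audio>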

I could write another whole post about scheduling and organising content, but that is for another day. The above is a rundown of some of the issues I faced. Every set-up is going to run into its own problems, but I’m happy to try answering any questions left in the comments.

Categories
programming

The TDD Lie

The requirements for pretty much every developer job I see these days include Test Driven Development. Which is exciting – everyone recognises the importance of unit tests.

In my previous post I talked about how often companies claim to do agile but don’t succeed in practice. And I see the same thing happening with test-driven development.

It’s easy to tell which companies want to use TDD but don’t in practice: you ask what percentage coverage they have for their code.

In its purest sense, TDD demands that tests are written before any new code. If you’re coding in this way (as suggested in Kent Beck’s Test Driven Development book) your coverage should be close to 100%. Of course, there are other ways one might approach TDD, such as writing business tests before the code. This is definitely better than adding a few tests at the end of a project.

But, if you’re tracking the coverage and have chosen a figure less than 100%, then a choice is being made about which code is not covered. I’ve never seen a standard coverage threshold work out well. The obvious indication of this is how often the coverage of projects sits just over the required threshold.

I’m a great believer in high coverage (while acknowledging its limitations). And if you don’t grow your code with the tests, you tend to end up with code that is difficult to add tests to afterwards.

If you’re listing TDD in job requirements and not practising it, then it’s worth asking why not. A lot of the time it’s a fundamental limitation of the company – we know we should have tests, but we’re rushing. It’s the same problem that many companies have with agile – these things are obviously a good idea, but difficult to do in practice.

As Bob Martin pointed out, “You know what you believe by observing yourself in a crisis. If in a crisis you follow your disciplines, then you truly believe in those disciplines. On the other hand, if you change your behaviour in a crisis, then you don’t truly believe in your normal behaviour” (from Chapter 11 of The Clean Coder).

If you know you should be using TDD but aren’t, then you need to ask why not. And, if you can’t fix that, stop lying that you’re doing TDD – admit what you’re actually doing and focus on improving that.

Categories
projectmanagement

“We’re agile, but…” talk notes

I spoke at Brighton Lean Agile last week, stepping in at short notice to cover for a friend. The event was great fun and I enjoyed the discussions I had with everyone. Rather than share the slides, I thought I’d do a quick blog post, allowing me to add some context to the bullet points.

Photo by @timrabjohns

Introduction

The talk discussed how agile doesn’t always work, and what can be done about it.

Overview

As Churchill put it: “it has been said that agile is the worst form of project methodology except for all those other forms that have been tried from time to time.”

While I might sound cynical about agile in what follows, I still believe in it. I think the principles it promotes – regular reviews, tailoring methods to the team, and so on – are incredibly important. At the same time, a good waterfall project beats a bad agile one.

Agile is:

  • Simple
  • Flexible
  • Popular

While agile is detailed and subtle, there are core principles that are easily applied. It is flexible – there are accounts of it being applied to many industries and sizes of team. And it is popular – a large number of people have experience of it.

However, most companies face problems with their implementation, even with fundamental aspects. Every IT job ad I see these days has agile as a requirement. But listing something in a set of job requirements is not the same as understanding or implementing it.

Issues with the daily meeting

I believe in the principle that ‘the way you do anything is the way you do everything’. I’ve seen a lot of issues with the daily meeting, and feel that many wider issues with agile could be alleviated by getting this right. The stand-up is particularly useful for experimentation since it happens daily, and only lasts 15 minutes (in theory!). Note how ‘briefing scenes’ turn up in movies and TV shows a lot – this meeting should also be compelling, informative and entertaining. Problems seen include:

  • The stand-up is boring
  • Too many people in the stand-up
  • The stand-up is not relevant to the entire team
  • The stand-up is at the wrong time
  • The stand-up shifts with a manager’s schedule
  • The stand-up becomes a reporting tool
  • Some developers think agile is silly

Some things I’ve tried

  • Balancing co-ordination vs social vs reporting
  • Adding initial reports to Slack (particularly useful across timezones)
  • Changing who runs the standup (making team members feel more invested in agile)
  • Enforcing promptness
  • Tracking who is talking and who isn’t
  • Asking what we gain from it
  • Being high energy and entertaining

Most of the points above are self-explanatory. I think it’s important to look at what an organisation needs from the daily meeting, and to focus on providing that without distraction or fluff (for example, if you do need people to report to management, find a more effective way to do that). The energy is also important – attendees need to think about their contribution and how to make it effective.

Estimation Problems

  • The estimates are already set (by a team member or management) and planning is a sham
  • Low expectations – developers aren’t pushed
  • No theme to the sprint
  • Team members don’t understand some stories (specialisations)
  • Nobody cares about the estimates – the code is produced late without any investigation
  • Translation is made between story points and time (worse when compared between teams or the translation being used is not explicit)
  • Estimates aren’t believed by management
  • Pressure is applied to reduce estimates: “But how long if you try harder…”
  • Estimates are given as self-protection – padding or straight-out refusal to play
  • Punishing bad news
  • The team are told how, not what and why, meaning they estimate that method rather than considering wider goals
  • Boring

In the discussion we talked a little about how a team refusing to give estimates is usually a symptom of a larger problem. Another issue is estimates as negotiation rather than best-effort prediction (i.e. padding for safety). Punishment for failing to hit previous estimates often leads to this sort of padding.

I concluded this section by saying that estimation sessions were something I often wished to cancel, possibly in favour of the person who was most informed/influential doing the estimates themselves. One participant said they had abandoned estimates in favour of sizing stories in a predictable manner.

One important thing to consider is whether the information in the estimates will change anything. If not (for example, schedule and scope are already set) then there may be little point spending time working out how long something will actually take.

When agile doesn’t fit the business

  • External factors (deadlines, releases, tradition) restrict agile
  • Product owner issues:
    • No time to do the role
    • Don’t want to do the role (the product owner is the most senior person who can’t avoid doing it)
    • Product owner has no power (proxy product owners)
  • Too much long-term planning – looking months ahead rather than supporting the team’s current work
  • Committing to futures that might not happen
  • Disconnect between stories and business
  • No idea how much a story costs to produce
  • Team composition shifting so you can’t learn anything
  • Technical debt is ever-growing
  • Multiple developer commitments outside project
  • Agile not adopted/understood/respected above the team level
  • The project is scheduled with no slack for developers being called away by emergencies.
  • Planning based on what people would like to happen, rather than taking note of what will happen
    • An example of this is ‘schedule compression’ – the deadline doesn’t move, but everyone resolves to try harder and be more efficient.
  • Not warning people about coming problems in the hope that someone else announces a serious issue first

Why aren’t Things Better?

I made two suggestions – the first was a reference to Sturgeon’s Law, which claims that “90% of everything is crap”. Maybe agile is difficult, everyone is doing their best, and we just have to be happy that things are working better than they would without agile.

My other suggestion was looking at cargo cults. This is a pejorative term in agile, but this betrays a lack of sympathy/understanding of the actual cargo cults.

Cargo cults were seen on a number of Melanesian islands, particularly after World War 2, where the residents built planes and airstrips from local materials, supposedly in the hope of cargo planes landing on the island. It’s given as an example of people imitating something without understanding the principles.

However, this idea that cargo cults are foolish is a racist and limited one. Indeed, the idea that they are about the arrival of cargo and that people actually expect planes to land are probably misapprehensions. For those involved, there are other reasons for setting out these airstrips. Western anthropologists have potentially had a blind-spot in their own obsession with monetary goods and missed these deeper motives.

In fact, these cargo cults are a success, if you understand why they have been set up. They are even effective on the very terms for which they have been mocked, since the cults have brought anthropologists and TV documentary crews to the islands in question. But to see why they succeed, you need to have sympathy/empathy with why cargo cults exist.

Sabotage

A lot of agile implementations seem to feature a sort of sabotage, where people involved do things that are directly against their best interests.

I closed by suggesting that, when an organisation is doing things that are obviously wrong and harmful to agile, there is often a deeper, more compelling system in place. This then formed the basis of a final discussion section.

Categories
programming

How to Build a Webapp in Java

tldr; building a webapp is complicated – particularly if you’re doing it right. JHipster offers a promising solution.

I was a little shocked recently when I realised that I didn’t know how to make a web-application.

It’s not like I’m inexperienced. I’ve been working on the web since 2000; but my job involves focussing on specific areas. Even when helping with strategy, I have very little input into the front-end. Being responsible for an application going live and doing it all myself are very different things.

Part of the problem is how intimidating front end development has become, with flurries of Javascript frameworks, UI libraries and strange pre-compilers. Angular has two versions that are so different as to be incompatible, but with similar names. Tutorials date quickly and beginners can’t tell if they are current. There is a huge amount of knowledge needed to build a web application.

Now, obviously, as the project website says, Spring Boot “makes it easy to create stand-alone, production-grade Spring based applications that you can ‘just run’”. My initial impulse was to cobble something together from Spring and Thymeleaf, which would have allowed me to get something live. But this is an unsatisfying solution. Server-generated template pages don’t feel particularly powerful, even with a good MVC framework. They have several problems:

  1. They tend to have code for the view tangled up with the application, with no clear boundary
  2. Coupling the templates to the server-side code misses the separation of architectures like REST. Adding, say, a mobile client would involve a lot of hassle to get the two different front ends working.
  3. More modern architectures are available for the front end. Using REST and Angular allows rapid development of flexible front ends, less coupled to the backend architecture.

Another question that arose was user management and security. This is one of those things that every site needs but that tends to be tedious to code (as well as inviting errors). Any decent user management system needs account creation, password resetting, different roles and so on. For most projects, these are going to be the same. My basic assumption was that Spring Security would be useful, but that is really a framework for creating a security framework, and there are no obvious mature implementations. Setting up email account confirmation is a drag.

I weighed up a few options for authentication, particularly since I wanted one that supported a stateless web-server. After speaking to a friend, I settled on JWT as the way to go, but this still seemed like a lot of work. And I’d not even started grappling with the whole front-end stack.

I started working in a new office recently and a colleague suggested JHipster as a solution to my problems. I’d heard about the technology via InfoQ but hadn’t paid it a lot of attention. Part of this was the name, which sounded insincere and jokey; and my cursory look suggested it was ‘just’ Angular with Spring Boot. I’d missed something very interesting.

According to the JHipster mini-book, JHipster allows Java devs to “be hip again”. The knowing irony of the framework’s name is irritating, but this idea of being ‘hip’ is a good one. Web development has changed a great deal, and JHipster provides a simple way to get started with up-to-date technologies, combining Bootstrap, Angular (or React) and Spring Boot. The project started back in October 2013, with the first release in December of that year.

JHipster is similar to Ruby on Rails, providing rapid development of a stack based upon entity definitions. The ‘Hipster’ part of the name refers to the need to keep up with modern tools. JHipster produces the plumbing and boilerplate for a whole stack, including a variety of best-of-breed tools, and adds in a basic user-management system for free. JHipster prepares multiple files for each entity (see the JDL sketch after this list):

  • Database tables
  • Liquibase changes
  • JPA entity definitions
  • Spring data repository
  • Spring REST controller
  • Angular route, controller and service
  • HTML page
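
Entities can be described in JHipster’s domain language (JDL) and generated in one pass. A minimal sketch – the entity and field names here are illustrative, not from a real project:

entity Station {
    name String required,
    streamUrl String
}
relationship ManyToOne {
    Station{owner(login)} to User
}

Running jhipster import-jdl over a file like this generates the layers listed above.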

The best way to get a flavour of how JHipster works is to look at Matt Raible’s 16-minute demo of building a blog application, Get Started with JHipster 5.

Full-stack development is hard work, even with JHipster – you need to be familiar with both Spring and Angular. But this does put you a lot closer than ever before to putting live a full-stack application using best-of-breed technology.

Categories
Uncategorized

Speaking at Brighton Lean Agile on 9th Jan 2019

Due to a last-minute drop-out, I’m speaking at Brighton Lean Agile on January 9th. The title of the talk is “We’re agile, but…” and looks at some of the ways I’ve seen agile going astray in implementations.

Despite the simplicity and popularity of Agile, it often goes wrong in practice. Drawing on 15 years’ experience of Agile as developer and PM, James Burt will lead a discussion on why agile sometimes fails, and what can be done about it.

The session is currently full but there is a waiting list, as people sometimes drop out as the event approaches. Refreshments are provided by the sponsor, Bright Interactive.

Categories
books programming

Book review: Beyond Java by Bruce A Tate

Tldr; things looked bleak for Java in the mid-noughties, but the language and eco-system have undergone massive changes, allowing Java to thrive and survive.

My new job has a daily commute, which means time to read. I’m focussing on tech books and recently picked up Bruce A Tate’s Beyond Java. I originally read this in July 2006, the year after it was published. The book described why Java was doomed, with Tate saying in the introduction, “Java’s reign will end. It’s not a question of if but when.” (pviii). While the predictions have turned out to be wrong, it is interesting to re-read the book and see how Java escaped its fate.

Revisiting the mid-noughties was fascinating in itself: a time when AJAX was just emerging, Web 2.0 was the next big thing and folksonomies would change everything. It was an environment that caused obvious problems for Java. Its flaky text-processing was just no good for large XML files, and Java was forced to deal with a lot of XML. In addition, Java tended to be used for the middleware on top of databases, which it was not particularly suited for (p52).

Even with early versions of Spring and Hibernate, making Java applications was a chore, with overwhelming amounts of boilerplate and XML config. Even with ORMs, Java required you to work with both entities and database code, and to keep them consistent.

Tate talked about this difficulty in getting software running, and all the tools that were needed. Compared with PHP, Java felt slow to develop in, and its advantages weren’t sufficient compensation. It did seem that Java was on the ropes. I remember dropping out of Java to do simple text tasks. Ruby on Rails and similar frameworks promised the chance to get applications running quicker.

The book is a pessimistic one. Tate saw the problems as unsolvable and, given Java’s history and trajectory, this was a fair assumption to make. Tate looked at Java critically and suggested other platforms developers should investigate. Reading the book with hindsight, one can see Tate’s predictions have failed, but the discussion is still interesting.

In the 12-or-so years since Tate was writing, Java has managed a comeback. Decent dependency management from Maven and Gradle has eradicated the nightmare of setting classpaths. The language has been made more expressive with new features, such as the functional constructs in Java 8. And the growing fashion for opinionated configuration has also made things easier – getting Spring Boot running is incredibly easy. The culture of the language has changed for the better too. In the past, checked exceptions proliferated everywhere; now people tend towards throwing runtime exceptions and handling everything at a single sensible point.

One particularly strong improvement is the Spring framework. At the time Tate was writing, Spring was a mess of XML configuration and cryptic errors. With annotations, it has become easy to use and unobtrusive. New platforms such as Spring Boot allow developers to get production-ready features out quickly. At the same time, Java/Spring offers a mature platform, where code can be taken to production and supported at massive scale. Ruby on Rails’ promises have faltered, with Twitter famously moving to Java because of scaling issues. The XML problem has disappeared as JSON takes over as a less robust, easier-to-use standard. I don’t miss the days of choosing between SAX and DOM parsers for XML.

Around the time Tate was writing, a debate raged about variable typing: strong vs weak, static vs dynamic. Java’s static, strong typing felt too rigid for many people. This debate seems to have reached an interesting conclusion: Javascript has been reined in by TypeScript, while Java 10 has introduced the var keyword.

(I’ve always seen the inflexibility of Java as a positive. While it’s less expressive than languages such as python or, um, perl, the code it produces tends to look very similar. It’s a good language for non-rock stars – who massively outnumber the sort of people capable of producing cryptic code given the slightest chance).

At the time of writing, Tate saw Java’s fate as connected with that of Sun, and wondered who might take over Java if Sun disappeared. The resulting situation, with Oracle running Java (and MySQL!), was hard to predict but has worked out well – even with the occasional fake-news scare that Oracle would start charging for Java.

Tate discussed a number of contenders for Java’s throne. Perl has lost popularity over the last decade, with Python offering a more structured (and readable) alternative. Perl, like PHP, was a great tool for its time but ultimately a ladder to better things. The book makes a quick mention of functional languages: Clojure seems to have come and gone, but Scala and Kotlin (first released in 2011) seem to be gaining traction. Ruby itself is much less popular. Java seems to have been saved by the flexibility of the JVM, which offers more powerful options for those that need them.

For me, the most interesting discussion was around state in web-servers, and the problems of that. Tate talks about continuation servers as an upcoming opportunity. This is another debate that has been settled, with REST winning out over stateful alternatives. I worked with Seam for a time – it was an interesting idea, but so alien to the way most developers worked that its subtleties tended to be fought with rather than made use of.

One problem that Tate raised which is still present is the approachability of Java. Even in the innocent days of 2005, the simplest useful Java application required use of a large number of tools. This issue is present in a lot of fields, including front-end dev work, where increasing sophistication has made things much more complicated. There are solutions to this, with JHipster being a promising one, but these work to hide the complexity. Just because the tools are set up for you doesn’t mean they aren’t there and can’t go wrong in strange and unpredictable ways.

EDIT: Tom Hume pointed out that the discussion above doesn’t include Android, which was originally released in 2008 and has become hugely popular, with over 2 billion users. As someone who focuses on backend dev, I didn’t engage with this, but Android has contributed significantly to Java’s survival. Having said that, Kotlin is becoming more popular in Android, threatening Java’s dominance on that platform. Interestingly, Kotlin is failing to see significant uptake in backend dev, despite recently becoming compatible with Spring.

If I had more time, I’d also look at the JVM platform as distinct from Java. While Groovy and Clojure have faltered, Scala (launched 2004) and Kotlin (launched 2011) are both popular JVM languages.

Categories
openstreetmap

Openlayers and Angular

I recently set up a simple Openlayers page, which proved to be an interesting experience. As mentioned in that post, one problem I had was working with so many unfamiliar technologies. It’s a long time since my first tinkerings with Javascript, and I’ve paid little attention to it. Since then, front-end development has exploded into a variety of frameworks and libraries that interact in strange and spooky ways. Node! Typescript! Less! Angular! JQuery!

In the long term, I want to get back into full-stack development; but in the short term, I just want to put a mapping site live. I’ve settled on JHipster as a platform, since it makes managing the database and middle tier incredibly easy, as well as setting up a lot of the Angular infrastructure. The problem is that almost all the Openlayers examples I can find are written with JQuery.

I found a few clues on what to do, such as a Stackoverflow post on How do I dynamically add popups to markers using OpenLayers 5 in Angular 4?, but my meagre experience of Angular meant I couldn’t get the Popper library it used to work. But the answer gave me enough of a clue to get the syntax and code set up for a very basic implementation of Openlayers popups within Angular. The work is done within the controller, and builds and executes correctly.

It’s far from perfect – because the component is embedded within another page, the pop-up doesn’t appear next to the feature; rather, it is placed a short distance above. There is also a problem with the click detection being flaky – the window has to be resized for it to work reliably.

But the good thing is, I now have a proof of concept for Openlayers working within JHipster. As scrappy as it is, I do have the two working together. The result is going to be ugly, but I have enough to build a basic application with.

Categories
openstreetmap programming

First steps with Openlayers

tldr; first steps in setting up an Openlayers map with markers loaded from a GeoJSON file

I spent a chunk of this weekend wrestling with writing software. Which was a good way to relax after a late night dancing at Is that all there is to a Disco? Saturday was a slow day: writing code, watching Escape Plan 2 with Rosy Carrick (not a good film) and getting to grips with Openlayers.

Most of the time I learn about programming through toying with examples. While this is easy in a subject I know well, it can prove more difficult in new areas. Working with Javascript libraries can be particularly difficult since it’s hard to tell which versions particular examples are for – with Openlayers there are several mutually incompatible versions with very different APIs. They say you should only do one new thing at a time, and learning Openlayers was held back by my poor Javascript skills. I also needed to get to grips with the particular way the mapping geometries worked.

But I managed to beat it all into submission and produce a working demo. I wanted to load some data from GeoJSON and display location markers on an Openstreetmap map. It’s not very much, but it is something I can later add to a larger project.

Introduction

According to Wikipedia, Openlayers is “an open source JavaScript library for displaying map data in web browsers as slippy maps”. It supports multiple map vendors and different data formats. The main project site says that version 5.2 is the latest, and carries links to documentation and examples.

Basic Example

The quickstart page shows how to set up a simple Openlayers map. CSS and HTML are used to create a page-wide div that can hold the map. The Javascript to create and display the map is simple:

var map = new ol.Map({
    target: 'map',
    layers: [
        new ol.layer.Tile({
            source: new ol.source.OSM()
        })
    ],
    view: new ol.View({
        center: ol.proj.fromLonLat([37.41, 8.82]),
        zoom: 4
    })
});

This creates a map object and attaches it to the ‘map’ div. A single layer is set up with OSM (Openstreetmap) as the source. The view allows the map’s centre, resolution and rotation to be set. Note that zoom level 0 is fully zoomed out.

As the tutorial explains, “A layer is a visual representation of data from a source”. OpenLayers has four basic types of layers:

  • Tile – Renders sources that provide tiled images in grids that are organized by zoom levels for specific resolutions.
  • Image – Renders sources that provide map images at arbitrary extents and resolutions.
  • Vector – Renders vector data client-side.
  • VectorTile – Renders data that is provided as vector tiles.

The center of the view is set using a conversion from longitude and latitude. This is one of the confusing parts of working with these maps: the map itself uses a co-ordinate system called the Spherical Mercator projection (known as EPSG:3857), while the usual longitude/latitude co-ordinates are held in a projection called EPSG:4326.
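
For example, converting a longitude/latitude pair into map co-ordinates (the values here are illustrative):

// EPSG:4326 (longitude/latitude) to EPSG:3857 (Spherical Mercator)
var coords = ol.proj.fromLonLat([-0.14, 50.82]);
// equivalent to: ol.proj.transform([-0.14, 50.82], 'EPSG:4326', 'EPSG:3857')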

Adding Markers

The next stage was finding a way to add markers to the map. A lot of the examples for this were using different versions of Openlayers. The one that I ended up adapting was a recent tutorial on mediarealm.

This piece of code gave me enough of a clue to be able to set up a layer with a marker:

function add_map_point(lat, lng) {
    var vectorLayer = new ol.layer.Vector({
        source: new ol.source.Vector({
            features: [new ol.Feature({
                geometry: new ol.geom.Point(ol.proj.transform(
                    [parseFloat(lng), parseFloat(lat)], 'EPSG:4326', 'EPSG:3857'))
            })]
        }),
        style: new ol.style.Style({
            image: new ol.style.Icon({
                anchor: [0.5, 0.5],
                anchorXUnits: "fraction",
                anchorYUnits: "fraction",
                src: "https://upload.wikimedia.org/wikipedia/commons/e/ec/RedDot.svg"
            })
        })
    });
    map.addLayer(vectorLayer);
}

We create a new vector layer whose source holds all of the features. Each feature is then displayed as a marker with a small red dot.

GeoJSON

Ultimately, I wanted to use some GeoJSON to provide a set of features. This proved to be tricky, as I had features in longitude/latitude co-ordinates and couldn’t see how to make the import take that into account. Fortunately, it’s easy to iterate over objects in Javascript, and I simply looped over the GeoJSON, converting one object at a time before adding it to the layer.

var featureArray = [];
for (const feature of placedata) {
    featureArray.push(new ol.Feature({
        geometry: new ol.geom.Point(ol.proj.transform(
            [parseFloat(feature.longitude), parseFloat(feature.latitude)],
            'EPSG:4326', 'EPSG:3857')),
        name: feature.name,
        description: feature.description
    }));
}

Once loaded, this feature array can be dropped into a vector layer and added to the map. The source of the placedata array can be changed – I started out with a hardcoded version, and then added one from a REST endpoint. I created a quick REST server using the Spring Boot REST example as a basis, changing it to return place data.
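
That last step looks something like this (styling omitted – see add_map_point above):

// wrap the features in a vector source/layer and add it to the map
var vectorLayer = new ol.layer.Vector({
    source: new ol.source.Vector({features: featureArray})
});
map.addLayer(vectorLayer);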

Loading the data from GeoJSON turned out to be the fiddliest part of this because of the conversions between different projections.

Pop-ups

The last thing I wanted to do was add pop-ups to the markers. There was an example of this in the Openlayers workshop. The way this works is simple – a click event for the map is added, checking whether there is a feature at the point in question. There’s a lot of code in the example, but it was easy enough to amend and get working.
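
The core of the click handling looks something like this – here ‘popup’ is assumed to be an ol.Overlay that has already been added to the map:

map.on('singleclick', function (evt) {
    // returns the first feature found at the clicked pixel, if any
    var feature = map.forEachFeatureAtPixel(evt.pixel, function (f) {
        return f;
    });
    if (feature) {
        // place the overlay at the click, then fill it from the
        // feature's name/description properties
        popup.setPosition(evt.coordinate);
    }
});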

Summary

It took a little longer than planned, but I managed to set up a simple HTML page that displayed Openstreetmap tiles, with markers provided by a REST endpoint.

Categories
projectmanagement

Mobile Happiness Workshop

It’s been a while since my last post here! Most of 2017 was taken up project managing a massive mobile project across ten countries – which came in on time and on budget. More recently, I’ve started a new company called Impaladev with Brad Legge, providing managed mobile and Java development teams.

As part of this, we’ve set up a workshop on Mobile Happiness. This condenses all the things we’ve learned about mobile delivery into a single hour (although many of the points are relevant to web and desktop applications). And we reveal the innermost secret of Project Management.

We ran a test of the workshop last night for a variety of ex-colleagues and it went very well – not least because we finished in time for that evening’s football match. Everyone came out of the workshop activities with new ideas for shaking up their projects.

We’re now looking for new venues and audiences for the talk. If your group or company might be interested, get in touch! Below is the obligatory corporate pointing-at-a-screen photo.

Categories
alexa

The Perils of Alexa certification

I’ve got my first skill up on Amazon’s Alexa service. It’s called Travel Stories and it’s not very exciting. To be honest, it could have done with more work, but there’s an offer of free Echo Dots in return for submitting a new skill. So this post will be about two things: my experience with the certification process; and Amazon’s desperate drive to get developers producing new skills.

One of the biggest boasts about Alexa compared with other voice interfaces is that it has many more skills available – 15,000 compared with hundreds on Apple’s platform. The problem is that a lot of these skills are not very good. Amazon has templates for basic voice apps, with the idea that developers will customise them. It means there are a lot of skills which simply recite pieces of trivia. Travel Stories is pretty much one of these in its first incarnation.

In a BBC News article, Amazon’s race to make Alexa smarter, Leo Kelion pointed out that “For all the promise of compelling new ways to control home appliances or on-demand news updates from major media brands, there seem to be a mountain of apps dedicated to delivering ‘fun facts’, gags, wacky noises and a vast range of ambient sounds.”

When Amazon’s representative was challenged on this, he said, “I guess I would not agree with the thesis that some of the skills are not sticky – many of them are… You never know which one of those Cat Facts is going to turn into the next big thing. There are many examples of that out there.”

The lack of an example here, the absence of a well-known case where a cat skill became famous, underlines the issue. There are a lot of not-very-interesting skills for Alexa.

Alongside prizes, Amazon’s push for new skills also includes huge discounts on the AWS services needed. Amazon have also begun directly paying the makers of successful skills. There’s a lot of encouragement to work with the platform.

On a more positive note, Amazon have a fast, efficient certification service that aims to get skills moved through as quickly as possible. I originally submitted my new skill at the start of July. At the Amazon workshop we’d been warned that 70% of skills failed because example phrases did not occur in the sample utterances. According to Amazon, I failed for two reasons:

  • “The example phrases that you choose to present to users in the companion app must be selected from your sample utterances. These sample utterances should not include the wake word or any relevant launch phrasing.”
  • “If the session closes after launching the skill, a core functionality must be completed without prompting users to speak.”

The first of these was clumsiness on my part – somehow I fell into a trap I’d been warned about. I’m still not sure why my sample utterances failed while the ones Amazon suggested worked; I’m sure this check could be automated in some way before certification. The response was short and to the point, so a few minutes’ work editing the code and my sample utterances was all I needed to get through the second time.

You can now add my skill to your Alexa device, but I’m not sure why you would. Certification is more about quantity than quality. But I do have interesting plans for my skill, and I’ll be working on them soon.