Categories
alexa

Getting Alexa working on the Raspberry Pi


I wasn’t that interested in the Amazon Alexa until I saw its product-placement appearance in Mr Robot. Grace Gummer’s character, Dominique DiPierro, lies in bed, unable to sleep, talking to the device.

It’s an odd scene, and the character’s isolation is probably not the best advert for a new piece of technology. The thing that fascinated me was the potential for empathy in the voice interface. DiPierro took some comfort from the machine. A voice interface is, ultimately, a command line, but it might be a kinder one. This is possibly a strange place to start looking at a new piece of technology, but there you go.

The main reason I’d not looked at Alexa sooner was disappointment with the command-line bots I’ve seen over the years. Most of them seemed stilted and artificial. Rather than being a natural interaction, I’d be fighting a parser, as if I were playing an 80s-era text adventure. I assumed Alexa would be more of the same. Mr Robot suggested other possibilities. Voice is a natural interface – it’s one we use in daily life. We speak to people when we’re with them rather than passing written notes, however efficient that might be.

The Echo Dot is about £44.99, but I had a Raspberry Pi at home to experiment with. This also offers the possibility of changing the wake word, rather than choosing between ‘Alexa’, ‘Echo’, ‘Amazon’ or ‘Computer’. The Alexa on Raspberry Pi tutorial contains most of the information needed. I’m not going to write a tutorial for a tutorial, but I wanted to note a few things:

  1. The tutorial lists monitor, mouse and keyboard as prerequisites, but these aren’t needed if you SSH into the Pi. This worked perfectly as long as I remembered to plug the Pi into the router before turning it on. It even set itself up on the network with a name of raspberrypi.lan – which was very helpful, as I’m not that great at configuring networks.
  2. As Jez Nicholson pointed out to me, this is a “slightly hostile ‘frontier land’ coding environment”. The Java client spits out stack traces to the command line in a way that’s a little surprising from a major company. And my first attempts to set up the Pi were frustrated by an unhelpful error when authenticating the device on the Amazon site – no indication of what it was, just the assurance that “we’re already working on the problem”.
  3. I eventually found out what the issue was after googling it. It turned out I’d not copied my DeviceID exactly. If you’re writing an API, you have to give as much feedback as possible to the user. An API really should return detailed errors, or trigger automated ticket creation rather than just claiming to be working on the problem – it could even email me when the issue is fixed.
  4. The default setup for KITT.AI meant that the wake-word detection was far less sensitive than Alexa herself. It was a little difficult to get the device to acknowledge my repeated calls of its name, as if I had a poorly-trained dog in the flat. I’m not sure what the neighbours make of my increasingly loud calls of “Alexa”.
  5. Alexa was happy to give me the weather – in Washington State. When asked “where am I?”, it insisted I was in Seattle. I guess there are some settings that need altering somewhere.
  6. One thing that feels like it’s missing on voice interfaces (or, at least, this implementation) is a return key. There was a short pause after I finished speaking. When speaking to humans we can usually figure out when the other person is done talking. The need to fetch information from remote servers also prevents a flowing interaction.

Now I’ve got this working, I too can ask Alexa when the world will end. The next step is to set up a simple Alexa skill, which looks like it could be as much fun as setting up Alexa was in the first place.

Categories
brightonjava

Speaking at Brighton Java on January 25th

I’m speaking at Brighton Java next week on How to Deploy Java Webapps the Hard Way. It’s currently fully booked, but it’s worth trying the waitlist. The talk looks at some of the issues I’ve written about on this blog, such as the complexities of even simple Java projects. I’ll also consider technical debt and how to cope when problems arise.

Java Web Development the Hard Way

Tools like Spring Boot now mean that you can produce a working service from a tweet’s worth of code. But too often rushing headlong into production can lead to problems later.

James Burt looks at the work needed to get a supportable Java service into production. Too often, in the hurry to release features, companies pile on technical debt until their code becomes unmanageable. How do you avoid getting into these situations? And what do you do if it’s already happened?

This talk will start with a simple Spring Boot service and look at the tools and techniques that should be added before going live, making a case for a slower, more sustainable pace of development. 

Categories
springboot

The complexity of a simple Java project

I recently built a small prototype using Spring Boot. It wasn’t much of a site, just two forms and two reports. As it was only being demonstrated locally, there was no need to worry about deployment and hosting. What’s fascinating is how many different technologies were used in such a simple piece of development:

  • Java 8
  • Git
  • Gradle
  • Spring Boot
  • Spring Security
  • Spring Data
  • Flyway
  • JPA
  • MySQL
  • HTML
  • Thymeleaf
  • Angular
  • CSS
  • Bootstrap
  • REST
  • Webjars
  • JUnit
  • Mockito
  • Jacoco

Obviously, it’s great how easy it is to use so many different technologies together: building software and handling dependencies through Gradle is much easier than doing it by hand; using Spring Data to handle persistence saves a huge amount of time and configuration.

Having said that, a full-stack demo in Java has involved a huge number of technologies – 19 in total. Not all of these require deep competency (and in some, like Angular, I’d like to know a lot more). Some, like Flyway or Jacoco, could be ignored – although I’d rather have them in place early than add them later.

When I first learned Java it was relatively straightforward. My first job required Java, HTML, CVS, make, JSP, JDBC and MySQL. I wouldn’t want to return to that era – the applications I produce now are far more sophisticated than anything we could have done back in 2000. But I do wonder how easy it is for new developers to gain confidence and competence in so many tools. Consider how much harder it is to get a simple site up and running in Java compared with PHP.

Which poses an interesting question – how does a new developer get to the point where he or she can produce something interesting? And how can we know that we are using each of these 19 technologies competently and securely?

Categories
brightonjava

Brighton Java Coding Dojo

Coding Dojos are a way for developers to practise together and improve their skills. I first became aware of them through the sessions run by Tom and Joh at Future Platforms. Since then, I’ve run lunchtime sessions at Crunch and now we’re running them as part of Brighton Java.

What happens at a Coding Dojo

The coding dojo begins with a description of a problem. Developers then take turns pair-programming for five minutes each, with one as the driver (using the keyboard) and the other as navigator (making suggestions). After five minutes, the navigator takes over as the driver and a new navigator steps in. As the pairs work, they narrate what they are doing. The IDE screen is also projected so that everyone in the room can see it clearly.

Seeing other people code is good for picking up the different ways people work. As developers we see a lot of people’s code, but few companies offer a chance to see it being written. By changing the people working on the dojo code, we see how developers respond to existing code.

Coding Dojos typically feature the use of TDD. Test-driven development is one of those things that a lot of companies endorse, but few use properly. Trying to maintain a tight test-code-refactor cycle is much easier with an audience to act as a conscience!

At the end of the session, everyone takes a turn saying what went well, what went badly, and anything they’ve learned.

Why would I want to attend a Coding Dojo?

Whatever your exact job within programming, it’s vital to be comfortable with basic development. The structure of a coding dojo, with its focus on deliberate practice and review, makes a good environment for learning. This is important for new Java developers, but there are also valuable insights for advanced developers – not least, practice at collaborating with people at different levels.

I’ve definitely gained from the two recent sessions, which have provoked interesting discussions on when to refactor, the place of design in TDD/agile, and error checking in functions. None of these are advanced issues, but I reckon they are all vital for day-to-day work.

Another reason for attending is that the sessions are fun. You get to meet other developers and work with them. Collaborating with people from outside our own companies is a great way to learn new things.

Categories
continuousdelivery

Spring Bootstrap (and AngularJS!)

Over the past few weeks I’ve been working on my simple Java application, which I’m planning to put into a GitHub repository as the basis for Java prototyping. The last thing needed before doing this is to tidy up the front end.

I’m not a particularly experienced front-end developer – it’s a skill I’ve neglected over the years. But it’s hard to excuse not making some effort. A lot of people find it easier to appreciate a prototype when it looks a little polished.

In this post I’m going to summarise basic steps to add Bootstrap and AngularJS to a project (it’s not intended as a step-by-step tutorial!). Adding Angular is particularly useful as it allows the back-end to be sensibly separated from the front end. I’ve worked at too many places where a templated front-end became tightly coupled to the back-end. A REST server accessed via a Javascript framework like Angular is much more sensible. It forces the back-end to remain independent, with a clear, well-communicated API.

REST endpoints

Converting a Spring endpoint to use REST is ridiculously easy. I had a simple Spring Data repository that returned a list of RSS entries. The controller class was annotated with the @RestController annotation. Then there was a method returning a list of Entry objects:

@RequestMapping("/rest")
@CrossOrigin
public List<Entry> rest() {
  List<Entry> entries = new ArrayList<Entry>();
  if (entryRepository != null) {
    entries = entryRepository.findAll();
  }
  return entries;
}

The CrossOrigin annotation was there to allow me to access the endpoint from a local copy of the HTML page. This meant I could test changes to the static content without redeploying the Spring application.

Bootstrap

Bootstrap is a front-end framework, designed to make it easier to produce decent-looking UIs. Even at its most basic level, it avoids an HTML page looking like the dreary browser defaults. All that is needed is to adapt one of the basic templates.

AngularJS

I’m less confident about my AngularJS use than any other part of this project. It works, but I can’t guarantee that I’m using Angular in the most sensible or efficient way.

There are two files involved, a Javascript page and some changes to the HTML page. The relevant section of the HTML page is straightforward:

<div ng-controller="Rss">
  <table>
    <tr>
      <th>Title</th>
      <th>Link</th>
    </tr>
    <tr ng-repeat="entry in entries">
      <td>Title: {{entry.title}}</td>
      <td>{{entry.link}}</td>
    </tr>
  </table>
</div>

The Javascript file contains the following method:

function Rss($scope, $http) {
  $http.get('http://localhost:8080/rest').
    success(function(data) {
      $scope.entries = data;
    });
}

Conclusion

While the front-end described here is not sophisticated, it improves significantly upon the default UI provided by a browser. It also separates the UI clearly from a REST API for the back-end, allowing the two to be worked on independently.

Categories
continuousdelivery

A quick note on Spring Integration

Another thing I’ve played with recently is Spring Integration, adding an RSS reader to the example I’m working with. There are a lot of examples of Spring Integration about the place, but there weren’t any doing exactly what I wanted – the clearest examples tended to rely on XML rather than annotations.

(An interesting discussion on a recent Java Council podcast discussed how common it was to find obsolete answers voted up on Stack Overflow. It’s easy to find answers to technical questions these days, but sometimes it’s harder to find the most up-to-date answer).

Anyway, I put together a working, if clunky, RSS reader, and thought I would make a quick summary of the changes it required.

build.gradle

Some new dependencies were needed in build.gradle:

compile("org.springframework.boot:spring-boot-starter-integration")
compile("org.springframework.integration:spring-integration-feed")
compile("com.rometools:rome:1.6.0")

resources/feed-bean.xml

For the sake of expediency, I succumbed to a little XML. With a little more time, I could translate this into the newer annotation format (a rough sketch of what that might look like follows the XML):

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:int="http://www.springframework.org/schema/integration"
       xmlns:feed="http://www.springframework.org/schema/integration/feed"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/integration
           http://www.springframework.org/schema/integration/spring-integration.xsd
           http://www.springframework.org/schema/integration/feed
           http://www.springframework.org/schema/integration/feed/spring-integration-feed.xsd">

    <int:channel id="feedChannel">
        <int:queue />
    </int:channel>

    <feed:inbound-channel-adapter id="feedAdapter"
            channel="feedChannel"
            url="http://feeds.bbci.co.uk/news/rss.xml?edition=uk">
        <int:poller fixed-rate="30000" max-messages-per-poll="100" />
    </feed:inbound-channel-adapter>

</beans>
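
That annotation-based version might look something like the sketch below. This isn’t the code from this project – it’s just an illustration, assuming Spring Integration 4.x Java configuration and the FeedEntryMessageSource from spring-integration-feed:

import java.net.MalformedURLException;
import java.net.URL;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.channel.QueueChannel;
import org.springframework.integration.config.EnableIntegration;
import org.springframework.integration.feed.inbound.FeedEntryMessageSource;
import org.springframework.messaging.PollableChannel;

@Configuration
@EnableIntegration
public class FeedConfig {

  // a queue-backed channel, equivalent to the <int:channel> with <int:queue/> above
  @Bean
  public PollableChannel feedChannel() {
    return new QueueChannel();
  }

  // polls the BBC feed every 30 seconds, up to 100 messages per poll
  @Bean
  @InboundChannelAdapter(value = "feedChannel",
      poller = @Poller(fixedRate = "30000", maxMessagesPerPoll = "100"))
  public FeedEntryMessageSource feedAdapter() throws MalformedURLException {
    return new FeedEntryMessageSource(
        new URL("http://feeds.bbci.co.uk/news/rss.xml?edition=uk"), "feedAdapter");
  }
}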

Controller Java class

I added a new method to my existing controller (which is turning into a bit of a catch-all class – I’ll be a little tidier when I come to do this as part of my Java infrastructure write-up):

@PostConstruct
public void setUpRssReader() {
  ApplicationContext context = new ClassPathXmlApplicationContext("/feed-bean.xml");

  // create a pollable channel
  PollableChannel feedChannel = context.getBean("feedChannel", PollableChannel.class);

  for (int i = 0; i < 10; i++) {
    // receive the message feed
    Message<SyndEntry> message = (Message<SyndEntry>) feedChannel.receive(1000);
    if (message != null) {
      SyndEntry entry = message.getPayload();
      log.info(entry.getPublishedDate() + " - " + entry.getTitle());
      Entry myEntry = new Entry(entry.getTitle(), entry.getLink());
      entryRepository.save(myEntry);
    } else {
      try {
        Thread.sleep(10000);
      } catch (Throwable t) {
        // ignore
      }
    }
  }
}

It’s not a particularly sophisticated example, but it illustrates how easy it is to set up simple integrations in Spring. I’m now going to proceed to set up a decent front end – once that is done, I can put it all together to make a very simple Java prototype.

Categories
continuousdelivery

Databases and continuous deployment

Flyway

After my recent experiment I’m going to add in databases. One of the pain points in most deployments I’ve worked with is the need for database updates. These were always manually applied and, in some cases, required a time-consuming reversion if things went wrong. And, of course, the production database would never be quite in the state expected.

There are options for automatic deployment, including Flyway and Liquibase. These maintain a database version number and apply any new scripts at application start-up. This also makes it easier to test deployments, as it becomes more obvious what state a particular database should be in. I’ve never understood why Flyway isn’t more commonly used. I think the main objection is that it requires a little more discipline in writing database scripts, making sure they are backwards-compatible.
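
As an aside, that core behaviour is easy to see in Flyway’s programmatic API. This is just an illustration using the Flyway 4.x calls – it isn’t needed here, since Spring Boot triggers the migration itself:

import org.flywaydb.core.Flyway;

public class MigrationRunner {

  public static void main(String[] args) {
    Flyway flyway = new Flyway();
    // connection details would normally come from configuration, not literals
    flyway.setDataSource("jdbc:mysql://localhost:3306/example", "user", "password");

    // compares the scripts in db/migration against the schema_version table
    // and applies any that have not yet been run
    flyway.migrate();
  }
}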

The advantage of a small test project is being able to test adding things without worrying about complicated regression problems. Adding Flyway on its own also means that I can then add Spring Data on top of it, and the database structure will always be under version control. Flyway has been chosen here since it uses standard SQL scripts rather than the custom language used by Liquibase.

With Spring Boot, only two files needed to be edited to provide Flyway support. The first was adding some dependencies to the build.gradle file:

compile "org.flywaydb:flyway-core:4.0"
compile("org.springframework.boot:spring-boot-starter-data-jpa")
compile 'mysql:mysql-connector-java:5.1.38'

I also added a new src/main/resources/application.properties file which contained the database parameters. These are taken from environment variables, meaning that the database credentials are not stored anywhere in the git repository. It also means the same war file can be used on any server.

spring.datasource.url=${DB_URL}
spring.datasource.username=${DB_USERNAME}
spring.datasource.password=${DB_PASSWORD}

These changes on their own are enough to get Flyway working. The Spring Boot logs show some relevant output, and at the end a new table is added to the database:

[Screenshot: MySQL show tables output, including the new schema_version table]

Adding database output to the application

The good thing about adding Flyway to the application before we have any database entries is that the database can be completely maintained through versioning. The next step is to add some more interesting database objects than a schema_version table.

I’m going to make a basic page displaying some page titles. On another occasion I can then connect this to Spring Integration to make a simple RSS reader. I’m going to summarise the changes that were made rather than go into them in detail as there are good Spring Guides on accessing data through JPA.

Gradle

A new dependency was needed in the Gradle script to set up Thymeleaf:

compile("org.springframework.boot:spring-boot-starter-thymeleaf")

There was also a missing dependency needed to set up Tomcat. While there was a war file being produced beforehand, it was not doing some of the more advanced setup:

providedRuntime 'org.springframework.boot:spring-boot-starter-tomcat'

Database Script

There is a simple database script in src/main/resources/db/migration which creates an entries table and adds a couple of entries.

Thymeleaf templates

There is a new index template in src/main/resources/templates, where the body iterates through the rows of the entries table:

<body>
 <p th:text="'Hello, ' + ${name} + '!'" />
 <p th:each="entry : ${entries}"
 th:text="${entry.title}">"${entry.link}"</p>
</body>
</html>

Entry.java

We add a new database entry object, which links to the entries table and contains two properties, title and link.
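
The class looks something like the sketch below – the id field and the JPA-required no-args constructor are my assumptions rather than copied from the project:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.GenerationType;
import javax.persistence.Id;
import javax.persistence.Table;

@Entity
@Table(name = "entries")
public class Entry {

  @Id
  @GeneratedValue(strategy = GenerationType.AUTO)
  private Long id;

  private String title;
  private String link;

  protected Entry() {
    // required by JPA
  }

  public Entry(String title, String link) {
    this.title = title;
    this.link = link;
  }

  public String getTitle() {
    return title;
  }

  public String getLink() {
    return link;
  }
}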

EntryRepository.java

A standard Spring Data repository containing a findAll() method.
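
Assuming JpaRepository as the base interface, the whole thing is roughly:

import org.springframework.data.jpa.repository.JpaRepository;

public interface EntryRepository extends JpaRepository<Entry, Long> {
  // findAll() is inherited from JpaRepository and returns List<Entry>
}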

Greeter.java

This class required a number of changes to use the repository class as well as to enable it to work with Tomcat. Spring MVC was used to provide values for the templates:

model.addAttribute("entries", entryRepository.findAll());
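
Pieced together, the controller method looks something like the following – the mapping and template name are assumptions for the sake of the example:

@RequestMapping("/")
public String index(Model model) {
  // pass the stored entries through to the Thymeleaf template
  model.addAttribute("entries", entryRepository.findAll());
  return "index";
}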

The main catch was changing the class to provide a new definition:

@Controller
@SpringBootApplication
public class Greeter extends SpringBootServletInitializer implements WebApplicationInitializer {

There was also a new method needed to supply the basic configuration.

@Override
protected SpringApplicationBuilder configure(SpringApplicationBuilder application) {
  return application.sources(Greeter.class);
}

Deploying to RDS

Setting up the RDS instance in AWS was relatively easy. The free tier I’m currently using provides a small database, and there’s even a checkbox during set-up to restrict the available choices to ones under the free option. The contextual help here is excellent. With little work it was possible to set up a user on the instance and access it remotely.

There was an option in the AWS set-up to add the environment variables for the Elastic Beanstalk instance. This supplied the database credentials to the war file.

Deploying the war to AWS via Jenkins required no changes, as it was already set up. The only problem I had centred on making the database instance accessible to the security groups – which would have been solved if I’d RTFM’d.

Conclusion

The main takeaway from this is that continuous deployment is not, in itself, a complicated thing. It hasn’t taken long to set up a proof of concept, with a database-backed site updating through Flyway. This basic model would be easy to replicate.

I’m going to make a few more additions to this example, adding Spring Integration to read from an RSS feed, and some tidying up on the front end, most likely with Angular and Bootstrap. This will then provide the basis for building quick Java application prototypes.

And, of course, the information from these posts will be tidied up and incorporated back into my Java infrastructure posts.

Categories
continuousdelivery

Simple continuous deployment from Jenkins to AWS

I’ve been working away at my Java infrastructure series, but I had a spare afternoon, so I thought I’d play with continuous deployment to AWS. I used the current version of the infrastructure project as a basis, and followed Adao Feliz’s tutorial. I needed to make a few amendments to get it working, so I’ve noted these below.

Setting up continuous deployment for such a simple project may not be rocket science, but it provides a good basis for any further development – and it’s a lot easier than converting an existing project!

One significant change I made was converting my Spring Boot project to produce a war file rather than a jar file. This required updating the Gradle file and making some changes to the Java file.

The instructions from the blog post are straightforward and in six steps:

  1. I already had jenkins running, and a github repository.
  2. I installed the new Jenkins plugins…
  3. …the S3 set-up was simple…
  4. …and the Elastic Beanstalk set-up was also very simple
  5. I used a copy of an existing Jenkins build, with no need to make amendments
  6. The only problem I had was with the deployment configuration, which seems to have changed a little since the tutorial I was following. The AWS credentials had to be added separately, at Manage Jenkins -> Configure System. Also, the version label format didn’t take some of the Jenkins variables I used, but build-${PROMOTED_NUMBER} seemed to work fine.

The result is simple but exciting. I commit a new change to the git repo, Jenkins picks it up then tests, builds and deploys the project. All it does at the moment is print a simple string, but it’s not going to be too hard to add in a database. But that can wait until next weekend!

Categories
infrastructure

Java Infrastructure Part 7 – Adding coverage checking

Coverage testing is considered to be an essential part of development nowadays, but I don’t think many people reflect deeply enough about what it involves, and why they are doing it.

Coverage tools measure how much of the code is exercised by the tests that are run and, broadly, a higher number is better. But coverage doesn’t tell you how good the tests are. You can have problems such as high coverage from redundant tests, code being exercised without being tested properly, and unmaintainable test code. Sometimes coverage is used as a substitute for understanding unit testing.
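
A trivial illustration of the second problem – a test like this (my own example, not from the project) runs the code, and so raises the coverage number, without actually checking anything:

@Test
public void greetIsExercisedButNotTested() {
  // no assertion: this passes whatever greet() returns
  new Greeter().greet();
}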

The other important question is how high the coverage should be. 100% coverage is extremely hard to achieve, so much so that many people suggest it is a waste of time. Having said that, all other levels of coverage are somewhat arbitrary. Some places I’ve worked have aimed for an ‘appropriate level of coverage’, but never have time to review and enforce that.

Good coverage is difficult to add retrospectively; easily testable code needs to be written in a certain way. Michael Feathers’ book Working Effectively with Legacy Code is still an excellent guide to salvaging untested code, despite being almost 12 years old. Actually implementing Feathers’ recommendations takes care and diligence that few people bother with. It’s better to aim for excellent coverage from the start.

The good thing about having very little code in our new project is that we can get our coverage up to 100%. Adding Jacoco is a simple matter of putting a single line into the build.gradle file:

apply plugin: 'jacoco'

The Jacoco reports can then be produced locally with the command ./gradlew clean test jacocoTestReport

The test case covered the Greeter class’s greet method, but not the main method. Perhaps controversially, I’ve chosen to remove this method rather than add a test for it – this was originally used as a test harness and that behaviour is not needed when we have the unit tests. And less code means less to keep track of.

Something interesting happens when we add the Jacoco reports to Jenkins. Initially I’ve added the Jacoco reports to the main job for the project. This doesn’t matter while the project is small, but will become a problem later on. We want to maintain a fast response to errors. The current run is taking about 30 seconds, which is on the outside edge of acceptability.

A new post-build action for Jacoco

A problem comes with the results for the build. No coverage results are produced, and an error occurs on the Jenkins command line (although the build does not fail): java.io.IOException: Incompatible version 1006

Jacoco results are not working

The problem here is that the Jenkins Gradle plugin doesn’t work with all versions of Jacoco. This can be solved by forcing Jacoco to version 0.7.5 in the build file. This is now working:

[Screenshot: the Jacoco coverage report in Jenkins]

(The branch coverage is at zero because the code currently has no branches)

The problem with adding this line to the build file is that it is a significant piece of technical debt. We have one tool’s version restricted due to compatibility with another. This may cause problems if we introduce another tool requiring a specific version, and we have to keep track of when the restriction can be removed. It doesn’t take long for the purity of a greenfield development to disappear.

The commit for the latest version is 2f7307d. Next it’s time to look at adding Spring Boot to the project.

Categories
infrastructure

Java Infrastructure Part 6 – Unit testing

I’ve been using JUnit for about fifteen years. In that time it has become a central part of Java development. While that’s great, unit testing is still problematic. Most people agree that automated testing is necessary, but its exact form is more controversial.

There are two main candidates for a unit testing framework, JUnit and TestNG. These packages have very different aims – in Senior Developer interviews I would sometimes ask what the difference between them was. I like the question as there are a lot of right answers, and the chosen response says a lot about the developer.

The simplest answer is that they are broadly similar. They do have a different argument order in their assertions, which has led to TestNG having a separate class with the JUnit assertion order. Some companies I’ve worked for have solved the problem of choosing between JUnit and TestNG by going for both – and then used both argument orders with TestNG.
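
To make the difference concrete (the greeter object here is hypothetical):

// JUnit: the expected value comes first
org.junit.Assert.assertEquals("Hello, world!", greeter.greet());

// TestNG: the actual value comes first...
org.testng.Assert.assertEquals(greeter.greet(), "Hello, world!");

// ...and org.testng.AssertJUnit is the separate class offering the JUnit-style order
org.testng.AssertJUnit.assertEquals("Hello, world!", greeter.greet());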

There’s a good JUnit/TestNG comparison on Mkyong.com which points out that the main difference between the two is that JUnit doesn’t have some of the features included in TestNG. But the best answer (which I never received) is that the philosophies of the two are very different. JUnit is a unit testing framework, and as such does not encourage certain practices – for example, having tests that need to run in a particular order. JUnit 5 Alpha was recently announced and the new features don’t appear to violate this goal.

There are often problems with what unit testing actually is. Unit tests should be small, independent, deterministic and low-level. They should not have any direct dependencies on files, databases or the underlying OS (random number generation, current time, and so on). Tests at higher levels are important too, but these should be kept clearly separate. By not following these rules, developers risk producing slow, brittle unit tests that require complicated set-up. Good unit tests actually force the code to be written in a more testable way, reducing dependencies and using smaller elements.
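
As a quick illustration of keeping time out of unit tests (my own example, not code from this project), the class under test can take a java.time.Clock rather than asking the system for the current time:

import static org.junit.Assert.assertTrue;

import java.time.Clock;
import java.time.Instant;
import java.time.ZoneOffset;

import org.junit.Test;

public class GreetingTimeTest {

  // the class under test asks the injected clock for the time,
  // rather than calling Instant.now() directly
  static class GreetingTime {
    private final Clock clock;

    GreetingTime(Clock clock) {
      this.clock = clock;
    }

    boolean isMorning() {
      return clock.instant().atZone(ZoneOffset.UTC).getHour() < 12;
    }
  }

  @Test
  public void morningIsDetectedDeterministically() {
    // a fixed clock makes the test repeatable – no dependency on the real time
    Clock nineAm = Clock.fixed(Instant.parse("2016-05-01T09:00:00Z"), ZoneOffset.UTC);
    assertTrue(new GreetingTime(nineAm).isMorning());
  }
}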

Adding tests to the project is simple enough. There is a new class, GreeterTest, and a few lines in the build.gradle file. The changes to the build include a JUnit dependency, which in turn means adding a link to the central Maven repository. Gradle is written so that dependencies can be added with little trouble, but this does bear thinking about a little. How does one know that the files being downloaded have not been tampered with? Are we accessing these repositories efficiently? This will be discussed in detail later, but meanwhile we need to add a note to the TODO page.
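
The test class itself is tiny – something along these lines, though the exact greeting in the real project may differ:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class GreeterTest {

  @Test
  public void greetReturnsTheGreeting() {
    Greeter greeter = new Greeter();
    assertEquals("Hello, world!", greeter.greet());
  }
}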

Some years back, I gave a talk on unit testing. It was about an hour long, and still only scratched the surface. While unit testing is easy, it introduces a lot of issues around how a project will work. I’ve seen company after company get tangled up in unit tests. Simply adding JUnit to a project is not enough; consistency and real rigour are needed around how tests are used.

An example of this is the use of set-up methods in tests. These become complicated, and end up producing test subclasses and base classes, making fixing tests a chore. Better to have the set-up in each individual test method, and if these become unwieldy, to examine our object model. A broken test can then be read from start to finish and understood on a single IDE screen. Yes, it produces duplication, but the aims of test code are very different to those of production code.
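
For example (the Order and Item classes here are invented for illustration), the set-up sits inside each test rather than in an @Before method:

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class OrderTest {

  @Test
  public void totalIncludesDelivery() {
    // all the set-up is here, so the test reads top to bottom
    Order order = new Order();
    order.addItem(new Item("book", 10.00));
    order.setDeliveryCharge(2.50);

    assertEquals(12.50, order.total(), 0.001);
  }

  @Test
  public void emptyOrderHasZeroTotal() {
    // a little duplication, but each test stands alone
    Order order = new Order();

    assertEquals(0.00, order.total(), 0.001);
  }
}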

Introducing unit testing also requires the processes and infrastructure to support it. We’ve added Jenkins to the project, and broken unit tests will cause the build to ‘break’, to turn red. Continuous integration relies on tests running quickly, to allow a tight feedback loop for developers. Over time, slow builds become slower, and never quite get fixed. Also, the rules about not committing to a broken build need to be taken seriously. Too often, companies have unit test suites that break in specific ways, and developers are forced to understand when a broken build matters and when it is acceptable. This is far too confusing. Using unit testing means following certain rules and methods of development.

Adding JUnit to Jenkins is relatively simple. Make sure that the JUnit plugin is installed, then edit the build for the project to publish the JUnit results. The build will now fail if the tests fail, and full details can be seen within Jenkins.

[Screenshot: configuring the JUnit plugin in Jenkins]

The latest commit is a3b7fa1. In the next part, we’ll be looking at adding coverage checking to the project.