A Library of Interview Question Algorithms

I recently spent some time revising for programming interview questions, and having done all of that work I wondered what I could do with it.

The result is a git repo of interview question algorithms that I’ve structured as a somewhat tongue-in-cheek utility; the kind of utility that I’d love to be able to use in a programming interview, but the kind that would probably negate the point of the interview question in the first place.

The library solves programming questions that keep popping up, whether in books or online on sites such as HackerRank. This includes solutions in the following categories:

  • Strings
  • Arrays
  • Graphs
  • Lists
  • Math
  • Sorting / Search

Project Euler

As an extra bit of practice, I also created a repo of solutions to Project Euler problems. These were mostly based on the HackerRank versions of the problems, which typically demand the most efficient solution rather than one that merely works for a single given input.

Writing a Huffman Encoder with Java 8 and Dagger 2

Last year I wrote a Huffman Encoder to try out Dagger 2, and to experiment with Java 8, as we’d just started using it at work.

As a dependency injection tool, the advantage of Dagger is that it generates the injection code at compile time, meaning it doesn’t require reflection and errors should be easier to spot up-front.

In practice, I had a hard time getting it to work until I found this great post on using annotation processors in Eclipse. It ensures Eclipse performs all of the annotation processing that Dagger requires at compile time, though I found I had to build the project manually for this to work (rather than relying on save-and-build).

There aren’t many component parts to the program, so Dagger doesn’t have to do much. The project is possibly most useful as a basic skeleton showing Dagger 2 working with maven and JUnit tests.
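To give a feel for the shape of the wiring, here is a minimal Dagger 2 sketch. The class names are hypothetical stand-ins, not the encoder’s actual components:

```java
import javax.inject.Inject;

import dagger.Component;
import dagger.Module;
import dagger.Provides;

class FrequencyCounter {
}

class HuffmanEncoder {
    final FrequencyCounter counter;

    @Inject
    HuffmanEncoder(FrequencyCounter counter) {
        this.counter = counter;
    }
}

// A module provides the dependencies Dagger can't construct on its own.
@Module
class EncoderModule {
    @Provides
    FrequencyCounter provideCounter() {
        return new FrequencyCounter();
    }
}

// Dagger generates DaggerEncoderComponent at compile time; a missing binding
// is a compilation error rather than a runtime reflection failure.
@Component(modules = EncoderModule.class)
interface EncoderComponent {
    HuffmanEncoder encoder();
}
```

The generated entry point is then used as DaggerEncoderComponent.create().encoder().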

Fixing Common PowerMock Problems

PowerMock is a great tool for static / final mocking, but exhibits odd behavior in some cases. This post covers three of the most common PowerMock problems I’ve encountered.

A test class annotated for PowerMock.

Intro to PowerMock

PowerMock is useful in cases a tool like Mockito can’t handle, namely mocking final classes and static methods.

There are many reasons why you shouldn’t aspire to mocking static methods, but sometimes it can’t be avoided.

To mock with PowerMock, you have to use the @PrepareForTest annotation, which sets up the class being mocked (it dynamically loads another class to wrap it). You can also use the @PowerMockIgnore annotation to specify classes that should be ignored when the mock occurs.
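As a rough sketch of what those annotations look like in practice (the class being mocked here is hypothetical):

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PowerMockIgnore;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

// A hypothetical class with a static method we want to mock.
class StaticUtility {
    static String currentUser() {
        return System.getProperty("user.name");
    }
}

@RunWith(PowerMockRunner.class)
@PrepareForTest(StaticUtility.class)   // the class whose statics will be mocked
@PowerMockIgnore("javax.management.*") // classes PowerMock should leave alone
public class StaticUtilityTest {

    @Test
    public void staticMethodCanBeMocked() {
        PowerMockito.mockStatic(StaticUtility.class);
        PowerMockito.when(StaticUtility.currentUser()).thenReturn("test-user");

        assertEquals("test-user", StaticUtility.currentUser());
    }
}
```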

This post includes some examples of PowerMock in action, which you can find here.

Common PowerMock Problems

Mocking Core Java Classes

By default, a lot of Java system classes are ignored by PowerMock, meaning you can’t mock them the way you would any other class.

To show what happens when you do this type of mock, consider this example using the java.net.NetworkInterface class.

I have one other class in the test (creatively called OtherClass), whose call method takes a NetworkInterface object, calls its isUp() method, and returns the result:
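A sketch of that class (the method name is my guess at the original):

```java
import java.net.NetworkInterface;
import java.net.SocketException;

public class OtherClass {
    // Takes the interface, calls isUp(), and passes the result straight back.
    public boolean call(NetworkInterface networkInterface) throws SocketException {
        return networkInterface.isUp();
    }
}
```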

My first test shows that the mock is created successfully, by calling the isUp() method from within the test method (code):
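Reconstructed (the original inlines a snippet from the repo), the test looks roughly like this:

```java
import static org.junit.Assert.assertTrue;

import java.net.NetworkInterface;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
@PrepareForTest(NetworkInterface.class) // preparing the mocked class itself
public class NetworkInterfaceTest {

    @Test
    public void mockWorksWithinTheTestMethod() throws Exception {
        NetworkInterface networkInterface = PowerMockito.mock(NetworkInterface.class);
        PowerMockito.when(networkInterface.isUp()).thenReturn(true);

        assertTrue(networkInterface.isUp()); // the mocked call behaves as expected here
    }
}
```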

This test passes, but if you try to use this same mock in a different class, it won’t work (code):

The NetworkInterface class, as a standard Java class, is ignored by PowerMock, so the call to isUp() doesn’t result in the expected mocked call.

To fix this, you need to @PrepareForTest the class calling NetworkInterface#isUp() (OtherClass, in this case) rather than the class being mocked (code):
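A sketch of the fixed test:

```java
import static org.junit.Assert.assertTrue;

import java.net.NetworkInterface;

import org.junit.Test;
import org.junit.runner.RunWith;
import org.powermock.api.mockito.PowerMockito;
import org.powermock.core.classloader.annotations.PrepareForTest;
import org.powermock.modules.junit4.PowerMockRunner;

@RunWith(PowerMockRunner.class)
@PrepareForTest(OtherClass.class) // prepare the caller, not NetworkInterface
public class OtherClassTest {

    @Test
    public void mockWorksThroughOtherClass() throws Exception {
        NetworkInterface networkInterface = PowerMockito.mock(NetworkInterface.class);
        PowerMockito.when(networkInterface.isUp()).thenReturn(true);

        assertTrue(new OtherClass().call(networkInterface));
    }
}
```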

The test now passes, and the mock is successfully executed.

@PrepareForTest disables code coverage tools for the prepared class, which is hard to work around. The only real solution is to avoid mocking Java system classes altogether, which you could do by wrapping the system class and routing all calls through the wrapper.

OutOfMemory PermGen Errors

PowerMock seems prone to the 'java.lang.OutOfMemoryError: PermGen space' error, because it does so much dynamic class loading. There are some useful blog posts on this (1), but they tend to be a little too optimistic about this problem being completely fixed.

There are a number of ways to fix a PermGen error:

  1. Increase your PermGen size. Since it’s only an issue with tests, this may not be such a big deal (2); one way to do this is shown after this list
  2. Pool mocks to limit the volume of classloading (3)
  3. Avoid using mocks entirely, instead using anonymous test classes (3)
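For option 1, the PermGen limit for the forked test JVM can be raised in the surefire configuration in the pom (the size here is illustrative):

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Raise the PermGen ceiling for the forked test JVM only. -->
    <argLine>-XX:MaxPermSize=256m</argLine>
  </configuration>
</plugin>
```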

Test Class Doesn’t Run: java.lang.VerifyError

I had this issue using PowerMock in unit test classes that had a superclass.

It happens because Oracle made Java’s bytecode verification stricter in Java 7, so it can be fixed by disabling this checking with -noverify as a JVM argument.

If you’re not the kind of person to take that as an acceptable answer (good call), I fixed this issue by updating my version of the JDK (in my case from 7_72 to 7_79). It’s not clear that one version of the JDK definitively fixed this problem (people seem to report it as being fixed before 7_72, for example), but it’s worth updating the JDK and your version of PowerMock before resorting to -noverify.


Continuous Integration for GitHub Java Projects

This post discusses how to set up a Java GitHub project with code coverage and continuous integration, run automatically on each commit.

Working on the WordBrain Solver I wrote about last month, I thought it’d be interesting to set the project up this way, along with a few extra reporting tasks. The project manages:

  • Build. Managed by maven.
  • Code coverage. Instrumented by Jacoco.
  • Reporting. FindBugs and PMD reports.
  • Continuous Integration. Updates automatically pushed to Travis CI.
  • Code coverage reports. Published on Coveralls.

The result is a set of tags displaying the state of the project on GitHub:

Result Badges

Maven

The majority of the setup for all of these tools is in the project’s pom.xml file.

This is broken into three key sections:

  • Dependencies, which includes each of the libraries used in the project. Libraries that are only used in testing are given the test scope, meaning they are not included in the release of the project.
  • Build, which describes where the code is located and how to build the code. This includes the plugins used to run unit tests and instrument code coverage.
  • Reporting, which includes plugins that are used to generate analysis or reports of the code. This includes the generation of JavaDoc, and plugins such as PMD and FindBugs.

Instrumenting Code Coverage

Code coverage is included in the build section of the pom. The code coverage tool I’m using, Jacoco, is executed in two phases:

The pre-unit-test phase attaches the jacoco agent to the JVM used to run the unit tests. In this execution, surefireArgLine is used to add the agent to the surefire test runner as an extra JVM parameter here.

The post-unit-test phase runs after the unit tests have completed, analyzing the jacoco output to produce a nice HTML report.
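A trimmed sketch of those two executions in the pom (the repo’s configuration may include more, such as output paths):

```xml
<plugin>
  <groupId>org.jacoco</groupId>
  <artifactId>jacoco-maven-plugin</artifactId>
  <executions>
    <execution>
      <id>pre-unit-test</id>
      <goals>
        <goal>prepare-agent</goal>
      </goals>
      <configuration>
        <!-- Exposes the agent's JVM argument as ${surefireArgLine}. -->
        <propertyName>surefireArgLine</propertyName>
      </configuration>
    </execution>
    <execution>
      <id>post-unit-test</id>
      <phase>test</phase>
      <goals>
        <goal>report</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Surefire then picks the agent up via an argLine entry of ${surefireArgLine} in its own configuration.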

To generate code coverage results (created by default in target/jacoco-ut), run the test command:
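With the jacoco executions bound to the test phase, that’s just:

```
mvn test
```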

Reporting: FindBugs and PMD

Published Results

While the code coverage tools form part of the maven build process, the maven reporting process includes plugins such as FindBugs and PMD.

In this pom, I added reporting for: JDepend, FindBugs, PMD, JavaDoc, and Surefire.

Adding them is as simple as adding the maven dependency, and running them is as simple as:
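For plugins declared in the reporting section, that’s the standard site goal:

```
mvn site
```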

Continuous Integration: Travis CI

Travis CI Page

Travis CI is free for open source projects, so it’s a great tool to use if you have a project like this one. All it requires is a yml file specifying what the server should run (and signing up to the website):
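For a Maven project, the minimal file looks something like this (the JDK entry is illustrative):

```yaml
language: java
jdk:
  - oraclejdk7
```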

If this file is set up correctly, you shouldn’t need anything else in your project. Travis automatically builds every time you push to your main branch.

Coverage Reporting: Coveralls

Coveralls Site

Coveralls automatically accepts coverage reports from Travis CI and displays them in a nice interface.

As with Travis, it’s free for open source projects, and as with Travis, it doesn’t require much configuration.

Once you’re signed up, the only addition to the project is a maven build plugin:
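Something like the following (the version shown is illustrative):

```xml
<plugin>
  <groupId>org.eluder.coveralls</groupId>
  <artifactId>coveralls-maven-plugin</artifactId>
  <version>3.1.0</version>
</plugin>
```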

Pushing updates from your local machine or another build server requires you to enter an authentication token here, but if you’re using Travis you don’t need any additional setup. The following line in the travis.yml file creates and sends the coverage report with no additional configuration:
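It’s the after_success step; the goals shown here are the usual jacoco-plus-coveralls pair, though the repo’s exact line may differ:

```yaml
after_success:
  - mvn clean test jacoco:report coveralls:report
```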

Results

Result Badges

The result of this configuration is a pair of badges showing the current state of the build and the code coverage.

More importantly, it sets the project up for future work with a great set of tools for monitoring the state of development.

Creating a WordBrain Solver

I became quietly obsessed with an iOS game called WordBrain over the holidays, and thought it’d be interesting to write some code to solve its puzzles (you can find the result here).

A basic single-word game.

The game itself is similar to a typical word search puzzle where, given a grid of letters, you have to find the words in the grid.

It differs in that words don’t have to be found in straight lines, so individual letters can be joined across diagonals, and as each word is found, it is removed from the grid and the remaining letters fall down. This means that for harder puzzles it isn’t possible to connect the letters in the last word unless you have found the preceding word(s).

Before starting, you know how long each word is and the order in which the words must be found (as the five-word puzzle below illustrates).

A more elaborate multiple-word game.

Solution

My solution started out as a simple word finder — finding all words of a certain length — but evolved into a more complete puzzle solver.

First, given a grid and a set of word lengths, the solver comprehensively scans the grid for words matching the first word’s length. It then recursively scans for the subsequent words, based on the updated grid (created by removing the preceding word(s)). This creates a set of potential solutions to the puzzle.

To speed up the search, I created a trie dictionary, allowing the solver to terminate a branch as soon as it reaches a prefix that starts no English word.
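A minimal sketch of the idea (the names here are mine, not necessarily the repo’s):

```java
import java.util.HashMap;
import java.util.Map;

class Trie {
    private final Map<Character, Trie> children = new HashMap<>();
    private boolean isWord;

    void add(String word) {
        Trie node = this;
        for (char c : word.toCharArray()) {
            node = node.children.computeIfAbsent(c, k -> new Trie());
        }
        node.isWord = true;
    }

    /** True if any dictionary word starts with this prefix; false lets the search stop early. */
    boolean hasPrefix(String prefix) {
        Trie node = this;
        for (char c : prefix.toCharArray()) {
            node = node.children.get(c);
            if (node == null) {
                return false;
            }
        }
        return true;
    }
}
```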

Then, to narrow down the set of possible solutions, I used a corpus of English word frequencies to rank them by the most commonly used set of words. This tends to match the expected solution, because WordBrain uses fairly common words.

Finally, I created a Swing UI to make it easier to enter problems and view solutions.

Analysis

The trie solution is significantly faster than the alternative, an in-memory set of words that didn’t allow early termination based on prefixes.

The solver itself could be slightly more space efficient, because it creates a new string on each recursive call rather than modifying a character array. However, as the search continues onto successive words, some of this copying is necessary (the updated grid, the words found in each solution, etc.).

The Swing UI isn’t particularly well written, but it does display everything required for a solution. First, potential solutions are shown next to the grid:

Potential Solutions

Next, for a particular solution, the highlighted word is shown in a decreasing shade to show the order in which letters must be selected:

Solution Illustration

Overall, this was a relatively interesting side-project, with a few elements, such as the trie and the solution ranking, that added a little depth to the implementation.

That said, I’d recommend completely ignoring this code and playing WordBrain unassisted. It’s a lot of fun!

Creating a Java REST API With Apache Jersey

In our latest development cycle we’ve been working on creating an official API to manage and control AetherStore. As part of this process I’ve been experimenting with Jersey 2, the Java reference implementation for JAX-RS, the Java API for RESTful Web Services.

This post discusses an example API using Jersey. The example is itself fairly well commented, explaining why certain pieces of code are needed, and how they relate to the project. This post covers the structure of the project and discusses some of its more interesting features.

The example is of an API representing a set, which allows:

  • Strings to be added as part of a GET request parameter.
  • Strings to be added as part of a PUT request body.
  • The set of all strings stored to be returned.

If you’re using Jersey, it’s important to note that a lot of examples online use Jersey v1, which causes problems because that version uses an entirely different namespace (v1 uses com.sun.jersey, whereas v2 uses org.glassfish.jersey), and a number of classes are either named differently or sit in different sub-packages.

The code on GitHub should work straight out of the box if you’re using Eclipse for Java EE.

Areas Covered

The code is useful if you’re interested in one of the following features in relation to Jersey:

  • Running in Eclipse.
  • Setting up appropriate Jersey Maven dependencies.
  • Setting up your web.xml to work with Jersey.
  • Creating a basic API.
  • Using JSON to wrap requests and responses.
  • Using an exception mapper for more readable error handling.
  • Injecting dependencies into resource classes (the API classes).
  • Unit testing Jersey.
  • Mocking calls used by our Jersey API.
  • Unmarshalling responses from a Jersey API.

Reading the Code

This section describes how the code is structured at a high level. There are some comments on specific lines of code, but I’d recommend looking at the comments in the code for a closer look at individual features.

The Apache Jersey Core

At the core of a Jersey application is the pom.xml file, which specifies all of the dependencies in the code — including Jersey itself — and the versions being used. If you’re using another example, it’s important to note the version of Jersey you’re using. Here we are using Jersey 2.6:
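For Jersey 2.6, the core servlet container dependency is:

```xml
<dependency>
  <groupId>org.glassfish.jersey.containers</groupId>
  <artifactId>jersey-container-servlet</artifactId>
  <version>2.6</version>
</dependency>
```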

The web.xml file specifies how your servlet is named, and where the main Jersey Application class is. This Application class is used to start your Jersey servlet.
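A sketch of that registration (the fully qualified package for SetApplication is my placeholder):

```xml
<servlet>
  <servlet-name>SetServlet</servlet-name>
  <servlet-class>org.glassfish.jersey.servlet.ServletContainer</servlet-class>
  <init-param>
    <param-name>javax.ws.rs.Application</param-name>
    <param-value>com.example.SetApplication</param-value>
  </init-param>
</servlet>
<servlet-mapping>
  <servlet-name>SetServlet</servlet-name>
  <url-pattern>/*</url-pattern>
</servlet-mapping>
```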

In this example our application class is called SetApplication. It registers bindings for a few classes, which we’ll discuss later. At this point the most relevant call to our application is:
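That is ResourceConfig’s packages() call (the package name below is a placeholder):

```java
// In the SetApplication constructor (SetApplication extends ResourceConfig):
packages("com.example.set.resources");
```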

This tells the servlet container where to look for the resource classes that form the API. In this case our resource class is called SetResource.

SetResource (API) Class

To recap, this project implements an API which supports two calls to add a string to a set (/set/add/<value> and /set/add), and a single call to get all entries in the set (/set/get). The class definition is annotated (shown below) to set the base path of the API call to /set:
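A sketch of the annotation (the class body is elided):

```java
import javax.ws.rs.Path;

// The base path for every method in this resource class.
@Path("set")
public class SetResource {
    // API methods follow...
}
```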

Then the methods in this class are further annotated to describe the API calls under this path. For example, the following code is executed when a /set/add/{value} request is made.
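A reconstruction of that method; callHandler is the injected dependency discussed below, and the handler’s method name is my guess:

```java
// Inside SetResource (annotations from javax.ws.rs): handles GET /set/add/{value}.
@GET
@Path("add/{value}")
@Produces(MediaType.APPLICATION_JSON)
public Response addSingle(@PathParam("value") String value) {
    callHandler.add(value); // hypothetical handler method
    return Response.ok().build();
}
```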

The full call to this method will be /set/add/{value}, where value is a variable that is mapped to the value parameter of the method as a result of the @PathParam annotation. We want the call to consume and produce a JSON response, so we specify this in the @Produces annotation. The marshalling to JSON is handled automatically, but we have to specify the marshaller dependency in the pom.xml file, as follows:
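I don’t recall exactly which JSON module the repo pulls in; jersey-media-json-jackson is one that works with Jersey 2.6:

```xml
<dependency>
  <groupId>org.glassfish.jersey.media</groupId>
  <artifactId>jersey-media-json-jackson</artifactId>
  <version>2.6</version>
</dependency>
```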

There are two methods which include the /add path, but they accept different numbers of inputs so there is no conflict. The class also contains an @Inject annotation, which tells the container to inject a dependency into the callHandler field. If we look back at the SetApplication class, we can see that this value is injected by registering it with the application through the register() call:
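With Jersey 2’s HK2 container, that registration looks something like this (a sketch; the repo’s binding may differ):

```java
import org.glassfish.hk2.utilities.binding.AbstractBinder;

// Inside SetApplication (which extends ResourceConfig):
register(new AbstractBinder() {
    @Override
    protected void configure() {
        // Make a SetCallHandler instance available for @Inject.
        bind(new SetCallHandler()).to(SetCallHandler.class);
    }
});
```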

Unit Tests

I’ve written two unit test classes. One, SetApiTests, provides what are essentially end-to-end integration tests, which call the API and check that its operations perform as expected. The second, MockedSetApiTests, provides an example of using mocking to test just the API calls themselves. Both test classes extend JerseyTest, which handles the heavy lifting of setting up a servlet container and providing the API. To run correctly, JerseyTest requires a test framework provider, which is the servlet container used to run the test. In this example I’ve used the Jetty container, where the dependency is specified in the pom.xml file with the following code:
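The coordinates, matching the Jersey 2.6 used here, are:

```xml
<dependency>
  <groupId>org.glassfish.jersey.test-framework.providers</groupId>
  <artifactId>jersey-test-framework-provider-jetty</artifactId>
  <version>2.6</version>
</dependency>
```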

In the MockedSetApiTests class, the SetCallHandler, which manages the logic behind the API call (and is injected), is mocked out:
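In essence (the field name is my guess):

```java
// A Mockito mock stands in for the real handler; configure() hands it to the application.
private final SetCallHandler setCallHandler = Mockito.mock(SetCallHandler.class);
```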

The configure call is required by JerseyTest to properly configure the application, which in this case requires us to pass the mocked dependency so that it can be injected into the SetResource. In the tests themselves, the call to the API is relatively simple:
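Something like this (the path and value are illustrative):

```java
// target() comes from JerseyTest and points at the in-memory test container.
Response responseWrapper = target("set/add/foo").request().get();
```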

This makes the API call and returns the result (whether success or failure) to the responseWrapper. This can then be queried to establish whether the call was successful (by getting the HTTP response code):
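For example:

```java
// 200 OK indicates the add succeeded.
assertEquals(Response.Status.OK.getStatusCode(), responseWrapper.getStatus());
```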

If successful, we can then obtain the returned value:
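Roughly:

```java
// GenericType preserves the Set<String> type information for unmarshalling.
Set<String> values = responseWrapper.readEntity(new GenericType<Set<String>>() {});
```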

The test addMultipleSingleCall shows an example of an API request with a message body (in this case a set of Strings), also showing how to package up this request parameter in the test:
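A sketch of the shape of that request (the values are illustrative):

```java
// Wrap the strings as a JSON entity and PUT them to /set/add.
Set<String> toAdd = new HashSet<>(Arrays.asList("one", "two", "three"));
Response responseWrapper = target("set/add").request().put(Entity.json(toAdd));
```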

The examples in this post don’t include any parameters that are non-standard Java types, but using them is relatively simple. By default only the public fields in the class are serialized, and a default constructor is required, but the class doesn’t have to implement Serializable.

How To Run

To run this example in Eclipse for Java EE:

  1. Download / clone the code from GitHub.
  2. In Eclipse, go to File -> New -> Java Project.
  3. Untick 'Use default location' and navigate to the path of the jersey2-example repository.
  4. Re-tick the 'Use default location' option, which sets up the project name as jersey2-example.
  5. Click finish to create the project.
  6. To run, either run the unit tests in JUnit, or right-click on the project and select Run as -> Run on Server.

If you want to create your own Eclipse project, you can follow this example, but note that it is for Jersey v1, so you need to adjust the options used in steps 3 and 5 (I’d compare them to the example in my GitHub repo).

To run standalone (the previous steps are required):

  1. Right-click on the project, Export -> WAR File.
  2. Set the location to store the WAR file, and change the specified server runtime if necessary.
  3. Run the WAR in your favorite application server, or standalone with Jetty Runner.

Additional Resources

The following links are resources I found useful in writing this example. Where the examples refer to v1 of Jersey I’ve said so; they’re included because some part of them is useful, but be careful to note the places where v1-specific code is used. This includes anywhere a com.sun.jersey namespace is used.

This is a repost of a blog I wrote over on the AetherWorks Blog earlier this year.