Solve blog

Taipan – Tool for Annotating Images in Preparation for ANalysis

Tom Carmichael

The following is the second in a series of posts on recent applications we’ve worked on since useR! 2018, so you’ll see a number of links to packages that we learnt about at that conference.

This example uses the Taipan (Tool for Annotating Images in Preparation for ANalysis) package, which was presented by Stephanie Kobakian from Monash University in a talk called Taipan: Woman faces machine.

We’ve recently started doing a number of jobs with a deep learning flavour that require a training data set to be created from numerous images over a large number of samples. As a result, we’ve spent a (large) number of hours annotating images and identifying certain features in them. Unfortunately, most off-the-shelf options for this are limited and don’t provide enough customisation to make sure the job is done accurately and, equally importantly, in a timely fashion. For these reasons, Taipan was a perfect base on which to build an image annotation tool. Taipan allows users to easily define a set of questions, then have an annotator mark up an image and answer those questions about it.

In its base form it’s perfect for use on static images with polygons (ideal if you’re annotating where a face is in an image), but for our purpose (identifying the presence or absence of rock in an image) we needed to make some slight modifications.

Image of core from the Estonian Core repository

Take this photo for example: if we wanted to build a training set of where core sits in wooden core boxes, we can’t just look for the edges of the core boxes, due to the variable thickness of the rows.

We’ve modified Taipan so that it has four-click polygon creation, which can be used to quickly annotate the image like so:
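The four-click idea itself is simple: each click adds a vertex, and the fourth click closes the quadrilateral. Here’s a minimal sketch of that logic in Python (Taipan itself is an R/Shiny package, so the class and method names below are purely illustrative, not Taipan’s actual API):

```python
# Sketch of four-click polygon creation: each click records a vertex,
# and the fourth click automatically closes the quadrilateral.
# All names here are illustrative, not Taipan's internal code.

class FourClickPolygon:
    def __init__(self):
        self.points = []  # (x, y) vertices in click order

    def click(self, x, y):
        """Record a click; return the closed polygon once 4 points exist."""
        if len(self.points) < 4:
            self.points.append((x, y))
        if len(self.points) == 4:
            # Close the ring by repeating the first vertex
            return self.points + [self.points[0]]
        return None

tool = FourClickPolygon()
for pt in [(10, 12), (220, 15), (215, 80), (8, 78)]:
    poly = tool.click(*pt)
# poly is now a closed 5-point ring outlining one row of core
```

Because the polygon is defined by its own four corners rather than the box edges, the variable row thickness mentioned above stops being a problem.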

In addition to visually annotating the image, it’s often critically important to record where the image is located downhole, so that any parameter calculated from the image can be tied back to other downhole measurements (assay intervals, hardness measurements, etc.). A second stage is therefore required to input the Hole ID and the depth from and depth to. This is done on the Image Information tab.

When you’re adding information to an image in Taipan, you enter the Hole ID, Depth From and Depth To into the required fields. You can also flag whether the image is appropriate for analysis (blurry, doesn’t contain the feature of interest, etc.). Taipan also autofills the next image for you, carrying over the previous image’s Hole ID and assigning its Depth To as the next image’s Depth From. A minimum threshold can be set for the size of the box: in this case we set each box to be a minimum of 4 metres, so if the depths you enter span less than that you get a warning in red; greater than that, and the depth is shown in green.
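The autofill and validation bookkeeping described above can be sketched in a few lines. This is illustrative Python, not Taipan’s R/Shiny code; the field names and the 4 m threshold are taken from the example above:

```python
# Sketch of the depth bookkeeping described above (illustrative only):
# autofill the next image's fields from the previous image, and
# validate the interval length against a minimum threshold.

MIN_INTERVAL_M = 4.0  # minimum box length used in this example

def autofill_next(prev):
    """Pre-populate the next image's fields from the previous image."""
    return {"hole_id": prev["hole_id"],      # carry the Hole ID forward
            "depth_from": prev["depth_to"],  # previous Depth To becomes Depth From
            "depth_to": None}

def interval_status(depth_from, depth_to):
    """Return 'green' if the interval meets the minimum, else 'red'."""
    return "green" if (depth_to - depth_from) >= MIN_INTERVAL_M else "red"

prev = {"hole_id": "DDH-001", "depth_from": 100.0, "depth_to": 104.5}
nxt = autofill_next(prev)

nxt["depth_to"] = 106.0  # only 1.5 m: too short, shown in red
short = interval_status(nxt["depth_from"], nxt["depth_to"])

nxt["depth_to"] = 109.0  # 4.5 m: meets the minimum, shown in green
ok = interval_status(nxt["depth_from"], nxt["depth_to"])
```

Chaining Depth To into the next Depth From like this is what keeps consecutive boxes contiguous downhole without re-typing.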

Subset image – named with Hole ID and depth.

Using our digitised polygon we can now crop the image to include only core. These photos can be further analysed or subset for a feature of interest, used as a cropping tool for easier storage of core photos, or joined together as a strip log.
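The crop step reduces to taking the annotated polygon’s bounding box and slicing the image to it. A pure-Python sketch of that idea (illustrative, not Taipan’s implementation; the Hole ID and depths in the file name are hypothetical, matching the caption above):

```python
# Sketch of cropping to an annotated polygon (illustrative only):
# slice the pixel grid to the polygon's bounding box, then name the
# subset image with Hole ID and depth interval.

def bounding_box(polygon):
    """Axis-aligned bounding box (xmin, ymin, xmax, ymax) of a polygon."""
    xs = [p[0] for p in polygon]
    ys = [p[1] for p in polygon]
    return min(xs), min(ys), max(xs), max(ys)

def crop(image, polygon):
    """Crop a row-major pixel grid to the polygon's bounding box."""
    xmin, ymin, xmax, ymax = bounding_box(polygon)
    return [row[xmin:xmax + 1] for row in image[ymin:ymax + 1]]

# A 100 x 300 stand-in "photo" and one four-click row polygon
image = [[0] * 300 for _ in range(100)]
row_poly = [(10, 12), (220, 15), (215, 80), (8, 78)]
core = crop(image, row_poly)

# Hypothetical naming scheme: Hole ID plus depth interval
fname = "DDH-001_100.0m_104.5m.png"
```

In practice you would run this with an image library rather than nested lists, but the bounding-box slice and the ID-plus-depth naming are the whole trick: the file name alone is enough to tie any later analysis back downhole.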

Taipan is flexible enough to achieve a number of different tasks (here we’ve just shown the most basic functionality), depending on the training set you’re trying to create and the deep learning problem you’re trying to solve. Importantly, we’ve seen a significant speed-up in how long it takes us to annotate images compared with an off-the-shelf option.

If you’re interested in a custom solution for image annotation, or in how you can extend your core photography to get more information from it, please contact us.