
Revisiting the kelp map

Out with the Old…

Several months back I wrote a blog detailing a preliminary method for classifying kelp from satellite imagery, based on some methods from researchers in California. We drew a couple of conclusions from this, chief among them that the resolution of the Landsat imagery was not high enough for the relatively small kelp forests along the Cape coast. I would like to revisit the original workflow using some new data and a different approach: the ever-flexible and powerful Generalised Additive Model (GAM), as demonstrated by mgcv package developer Simon Wood in this presentation. We'll skip the tutorial blog style this time and drill straight into some results.

In with the New!

Aerial photograph of Miller's Point

COCT aerial photograph of Miller’s Point, highlighting the areas we sampled with floating quadrats.

First up is the new data. In the original method we cheated and cherry-picked some kelp reference pixels directly from the imagery. This time around we got our feet wet and measured kelp density in the field with some floating quadrats. This method comes with several benefits. Firstly, not only does it provide information on kelp presence, but we also get density, and from previous research we can convert that density to biomass. Secondly, it's fast and fun! We can cover a large area very rapidly without having to harvest any kelp, and who doesn't love a day freediving in the kelp forests?
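Just to make that count-to-biomass idea concrete, here is a toy sketch in R. The quadrat area and the kg-per-frond figure are placeholder values for illustration only, not the calibration from that previous research.

# Toy example only: turn quadrat counts into density and (hypothetical) biomass.
# The quadrat area and kg-per-frond factor below are made-up placeholders.
counts <- c(3, 0, 7, 5, 2)          # fronds counted in five floating quadrats
quadrat_area_m2 <- 1                # assumed quadrat size
kg_per_frond <- 5                   # hypothetical conversion factor

density_fronds_m2 <- counts / quadrat_area_m2
biomass_kg_m2 <- density_fronds_m2 * kg_per_frond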

The second new source of information is aerial photography of the coastline from the City of Cape Town. Here's a nice clear image of Miller's Point from 2017 in which we can see the kelp through the water. The resolution of these images is fantastic, but the trade-off is that, because they are visible-spectrum, we only have three bands to work with: red, green, and blue (RGB).
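As a rough sketch of how the two data sources meet, something like the following (using the terra package) pulls the red, green, and blue values out from under each quadrat. The file names, column names, and the assumption that the quadrat coordinates share the image's projection are all mine, not necessarily what we actually used.

# Sketch: read the aerial photo and extract the RGB values at each quadrat.
library(terra)

photo <- rast("millers_point_2017.tif")            # hypothetical file name
names(photo) <- c("red", "green", "blue")

quadrats <- read.csv("quadrat_observations.csv")   # hypothetical field data
pts <- vect(quadrats, geom = c("lon", "lat"), crs = crs(photo))

# Attach the band values under each sampling point to the field data
quadrats <- cbind(quadrats, extract(photo, pts)[, c("red", "green", "blue")])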

Some modelling

The process by which we estimate kelp cover has two levels of observation: first, the observations on the ground (or, in this case, in the water), and second, the observations from the aerial photographs. The first step here, perhaps a bit pedantically, is to link the observation-level process on the ground with the true kelp count. We do this by modelling the true count of kelp at a single point as the response to several predictor variables, such as tide height and the person making the observation. People inherently count things differently, so we include observer as a random term, and we made observations at varying tide heights, which inevitably influences the amount of kelp actually floating at the surface.
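In mgcv terms, the structure of that first model looks roughly like the sketch below, continuing with the quadrats data frame from the earlier sketch and assuming it also has columns kelp_count, tide_height, and observer (the presence/count split described in the next paragraph is left out here for simplicity).

# Sketch of the observation-level model: kelp count as a smooth function of
# tide height, with observer as a random effect. Column names are assumed.
library(mgcv)

quadrats$observer <- factor(quadrats$observer)

obs_mod <- gam(kelp_count ~ s(tide_height) + s(observer, bs = "re"),
               family = poisson(link = "log"),
               data   = quadrats,
               method = "REML")
summary(obs_mod)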

An aside: along the way we took almost 60 years of hourly tide-height observations from the Simon's Town GLOSS buoy and fitted a harmonic model with 60 tidal constituents, which allows us to predict the tide height for any time. Here's a summary of the tides at Simon's Town:

Features (semi-diurnal), in metres:
     MLWS      MLWN       MSL      MHWN      MHWS 
0.2902751 0.7494520 1.0339368 1.3184215 1.7775984
(MLWS/MLWN = mean low water springs/neaps, MSL = mean sea level, MHWN/MHWS = mean high water neaps/springs)
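For the curious, the bones of a harmonic tide model are simple: regress the hourly heights on sine/cosine pairs at the known constituent periods. The sketch below uses only four major constituents rather than the full 60, and assumes a data frame tides with columns time (POSIXct) and height (m); in practice a dedicated tide package does this more carefully.

# Simplified harmonic tide fit: one sine/cosine pair per constituent period.
t_hours <- as.numeric(difftime(tides$time, min(tides$time), units = "hours"))

periods <- c(M2 = 12.4206, S2 = 12.0000, K1 = 23.9345, O1 = 25.8193)  # hours

# Design matrix of sin/cos terms at each constituent frequency
X <- do.call(cbind, lapply(periods, function(p)
  cbind(sin(2 * pi * t_hours / p), cos(2 * pi * t_hours / p))))

tide_mod <- lm(tides$height ~ X)

# Predicting the tide for any new time is just a matter of rebuilding the
# same sin/cos terms at that time and applying the fitted coefficients.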
Bias corrected kelp observations

Coarse output of the first-level observation model, predicting the kelp count at each sampling location.

We do this as a two-step process known as a hurdle model: first modelling the presence of kelp, and then modelling the amount of kelp conditional on it being present in the first place. In the end we get a model that allows us to remove the biases of tide and observer and predict the true amount of kelp at a given location. The plot at left shows the output of these models. We're going to use this data as the input for our next model, which links these counts with the visible-spectrum data from the aerial photography. In order to retain the error still present in this model, we will simulate 100 datasets from it, fit the new model to each of those, and then pool the results, a technique known as Multiple Imputation. A sample of some of the simulated datasets is shown below.
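A sketch of what that might look like in code, again with the assumed column names from above: the single count model is split into a presence part and a conditional count part, tide is fixed at mean sea level (about 1.03 m from the summary above), the observer effect is dropped at prediction time, and 100 datasets are then simulated from the predictions. This simplified version carries only the sampling noise, not the full coefficient uncertainty.

# Hurdle observation model: presence first, then count given presence.
library(mgcv)

quadrats$present <- as.numeric(quadrats$kelp_count > 0)

pres_mod <- gam(present ~ s(tide_height) + s(observer, bs = "re"),
                family = binomial, data = quadrats, method = "REML")

count_mod <- gam(kelp_count ~ s(tide_height) + s(observer, bs = "re"),
                 family = poisson, data = subset(quadrats, present == 1),
                 method = "REML")

# Bias-corrected predictions: tide at MSL, observer effect excluded
newdat <- transform(quadrats, tide_height = 1.03)
p_pres <- predict(pres_mod,  newdat, type = "response", exclude = "s(observer)")
mu     <- predict(count_mod, newdat, type = "response", exclude = "s(observer)")

# 100 simulated "true count" datasets for the multiple-imputation step
sims <- replicate(100,
                  rbinom(nrow(quadrats), 1, p_pres) * rpois(nrow(quadrats), mu),
                  simplify = FALSE)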

25 simulated datasets

25 of the 100 simulated datasets, based on the actual observed data and the fitted observation model, are shown here.

Some more modelling…

Final kelp prediction

Final model output of kelp density (in fronds/m²) for the entire image.

The last model then links the simulated true kelp presence and kelp counts to the spectral data with another hurdled set of GAMs. This is fitted to each of the 100 simulations and the results are pooled, giving us a final model that carries through the uncertainty of the original observation model. This model is then used to predict kelp density (in fronds/m²) for each pixel of the image; the results are shown at right. The plot below overlays the model output on the original photo as a contour around the areas where kelp is detected.
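Roughly, and again with the assumed object names carried over from the earlier sketches (quadrats with its extracted band values, sims from the observation model, photo for the image), the second-level fit and the pooled per-pixel prediction might look like this; averaging the predictions across the 100 fits is the simple pooling used here.

# Second-level hurdle GAMs: simulated kelp counts as smooth functions of the
# red, green and blue values, fitted once per simulated dataset.
library(mgcv)

fit_one <- function(kelp) {
  dat <- cbind(quadrats, kelp = kelp, kelp_present = as.numeric(kelp > 0))
  pres <- gam(kelp_present ~ s(red) + s(green) + s(blue),
              family = binomial, data = dat, method = "REML")
  cnt  <- gam(kelp ~ s(red) + s(green) + s(blue),
              family = poisson, data = subset(dat, kelp > 0),
              method = "REML")
  list(pres = pres, cnt = cnt)
}

fits <- lapply(sims, fit_one)

# Predict expected density for every pixel and pool by averaging over fits.
# Units follow the quadrat counts; scaling to fronds/m2 uses the quadrat area.
pixels <- as.data.frame(photo, na.rm = FALSE)   # columns red, green, blue
pred_one <- function(f)
  predict(f$pres, pixels, type = "response") *
  predict(f$cnt,  pixels, type = "response")

kelp_density <- Reduce(`+`, lapply(fits, pred_one)) / length(fits)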

Final model output contour overlaid on photo

The final model output is overlaid on the original photo as a contour around the area where kelp presence is predicted.
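If you want to reproduce an overlay like this, one way (sticking with terra, and with a threshold of 0.5 fronds that I've picked purely for illustration) is:

# Put the pooled predictions back on the image grid and draw a contour
dens <- setValues(photo[[1]], kelp_density)
names(dens) <- "kelp_density"

plotRGB(photo)
contour(dens, levels = 0.5, add = TRUE, col = "yellow", drawlabels = FALSE)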

What’s left?

Close scrutiny of the map above shows some places where the model isn't perfect, namely the shallow area near the slipway about two-thirds of the way up the image, and the small sandy beach at the bottom of the image. But damn if it isn't near-perfect! I think a few more floating quadrats would really iron this out.

We can apply the same principle to multi-spectral data from the Sentinel-2 missions and get a kelp map for the entire coast!

kelp detection from Sentinel-2

Surface kelp detected by modelling Sentinel-2B data. Ignore that patch on land!

spectral signature

Multi-spectral reflectance signature modelled for kelp

Hyperspeed!

Perhaps a funding application for a flight with the Southern Mapping ProSpecTIR Hyperspectral Camera is in order!

Check out our Field Course if you are interested in learning more about some of the concepts involved in this demonstration!
