Machine Learning with Multispectral Imagery

Landsat imagery is some of the most spectacular data available covering the Earth, with multispectral images spanning decades. Talk about great temporal and spatial resolution, and it's all open data.

In this exercise I attempted to classify land cover in a region of Richmond using Landsat imagery and machine learning techniques. While this was a fun class experiment (while at JHU) to learn how classification techniques can be used to extract knowledge from data, it was only a taste of what’s possible with remote sensing images and applications.

Disclaimer: there are multiple types of classifiers used for image classification, both supervised and unsupervised. The choice of input data bands, choice of training sites, and choice of classification scheme are all incredibly important to the final output. Results may vary!

Data:

I used a provided 6-band ETM+ multispectral image of a region in Richmond, Virginia from 1999. It does not contain a thermal band.

Data input here is three-dimensional, indexed by x, y, and band number. Landsat ETM+ imagery contains different bands that capture different wavelengths (in micrometers) at 30-meter spatial resolution.
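In NumPy terms, that data cube can be sketched with a synthetic array. The band order and the approximate wavelength centers below are my own assumptions for illustration, not values from the lab data:

```python
import numpy as np

# Hypothetical 6-band image: (rows, cols, bands) of 8-bit digital numbers
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(512, 512, 6), dtype=np.uint8)

# Approximate ETM+ reflective band centers in micrometers
# (bands 1-5 and 7; no thermal band, matching the image used here)
band_wavelengths = [0.48, 0.56, 0.66, 0.84, 1.65, 2.22]

pixel_spectrum = image[100, 200, :]  # all 6 band values for one pixel
near_infrared = image[:, :, 3]       # single-band slice (band 4, NIR)
print(pixel_spectrum.shape, near_infrared.shape)
```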

The data is stored in the .LAN format, used by specific image processing software packages. After a color composite is selected, the multispectral image can be viewed. I used a 3-band color composite of 5-4-3 for display.
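As a rough sketch of what a 5-4-3 composite does, here's how the band stacking might look in NumPy. MultiSpec handles this internally; this toy version assumes the six reflective bands are stored in order 1, 2, 3, 4, 5, 7:

```python
import numpy as np

def composite_543(image):
    """Stack bands 5, 4, 3 into an RGB array for display.

    `image` is assumed to be (rows, cols, 6) with the six ETM+
    reflective bands stored in order 1, 2, 3, 4, 5, 7 (indices 0-5).
    """
    # Band 5 -> red, band 4 -> green, band 3 -> blue
    rgb = image[:, :, [4, 3, 2]].astype(np.float32)
    # Simple min-max stretch per channel so the display uses the full range
    lo = rgb.min(axis=(0, 1), keepdims=True)
    hi = rgb.max(axis=(0, 1), keepdims=True)
    return (rgb - lo) / np.maximum(hi - lo, 1e-9)

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(64, 64, 6), dtype=np.uint8)
rgb = composite_543(img)
print(rgb.shape)
```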

Tools:

I really enjoyed using MultiSpec, a free package developed at Purdue (funded by NASA) to analyze multispectral and hyperspectral image data.

Action:

Supervised Classification – Maximum Likelihood
Maximum likelihood calculates a Gaussian (normal) probability distribution from the training data for each class. Each pixel is assigned to the class under which it has the highest probability, and class distributions may overlap. For supervised classification, after classes are defined, training sites are picked from the image, and the classifier's output is compared against ground truth.
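As a sketch of the idea (not MultiSpec's actual implementation), a minimal Gaussian maximum likelihood classifier in NumPy might look like this, using made-up two-band training data:

```python
import numpy as np

def train_max_likelihood(samples_by_class):
    """Fit a mean vector and covariance matrix per class from training pixels."""
    params = {}
    for label, pixels in samples_by_class.items():  # pixels: (n, bands)
        mean = pixels.mean(axis=0)
        cov = np.cov(pixels, rowvar=False)
        params[label] = (mean, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return params

def classify(pixels, params):
    """Assign each pixel to the class with the highest Gaussian log-likelihood."""
    labels = list(params)
    scores = []
    for label in labels:
        mean, inv_cov, logdet = params[label]
        d = pixels - mean
        # log N(x | mean, cov), dropping the constant shared by all classes
        mahal = np.einsum('ni,ij,nj->n', d, inv_cov, d)
        scores.append(-0.5 * (mahal + logdet))
    return np.array(labels)[np.argmax(scores, axis=0)]

# Toy example with two "bands": water is dark, vegetation is bright in band 2
rng = np.random.default_rng(2)
train = {
    "water": rng.normal([20, 10], 3, size=(50, 2)),
    "vegetation": rng.normal([60, 120], 5, size=(50, 2)),
}
params = train_max_likelihood(train)
labels_out = classify(np.array([[22.0, 12.0], [58.0, 115.0]]), params)
print(labels_out)
```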

In a digital image each cell is represented by a digital number, which translates to a brightness value when displayed. Water reflects light differently from how forests, sidewalks, and open fields reflect it… and so different ground covers produce distinct responses across wavelengths. This is the simplified story, at least!

First I chose three classes: urban/bare, water, and vegetation. After choosing the maximum likelihood classifier in MultiSpec, I “trained” the image by clicking on areas I was certain fell into those bins, using my class notes and readings on how different land covers reflect light, along with satellite imagery from Google Earth, as guides. Then I ran the classifier and produced a thematic map.

[Figure: thematic map from the three-class maximum likelihood classification]

I ran the entire process again with 6 classes.

[Figure: thematic map from the six-class classification]

I was expecting the accuracy to improve, but it actually decreased with additional classes, though not for all classes. Grouping Urban/Bare produced a 100% reference accuracy, with a 92.5% reference accuracy for the Vegetation class. The Kappa statistic was 90.6%. With six classes, some new classes had a high accuracy, like forest and dense urban, with an improved accuracy for water. Classifying crops, soil, and residential had reduced accuracy, bringing the overall Kappa statistic down to 70.4%. I think this might be the result of the more complex spatial composition of those classes, and my novice experience in choosing the best training and test sites for classifying. Teasing out the soil versus crop regions was especially tricky.
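For reference, the Kappa statistic compares the observed agreement in a confusion matrix against the agreement expected by chance. A minimal NumPy sketch, with a hypothetical confusion matrix (not my actual MultiSpec counts):

```python
import numpy as np

def kappa_statistic(confusion):
    """Cohen's kappa from a confusion matrix (rows: reference, cols: predicted)."""
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    observed = np.trace(confusion) / total
    # Chance agreement from the row and column marginals
    expected = (confusion.sum(axis=1) @ confusion.sum(axis=0)) / total**2
    return (observed - expected) / (1 - expected)

# Hypothetical 3-class confusion matrix, for illustration only
cm = [[50, 0, 0],
      [2, 45, 3],
      [0, 5, 45]]
print(round(kappa_statistic(cm), 3))  # 0.9
```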

My final distributions for the land-use cover, though further training and experience would be needed for fine-tuning, had almost 35% forest cover, 44% residential cover, 10% dense urban, about 10% crop and soil combined, and 1% water.
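Once a thematic map exists as an array of class labels, the area percentages fall out of a simple pixel count. A sketch with synthetic labels (the class names and proportions here are placeholders, not my actual results):

```python
import numpy as np

# Hypothetical classified map: one integer class label per pixel
rng = np.random.default_rng(3)
classified = rng.integers(0, 4, size=(100, 100))

counts = np.bincount(classified.ravel(), minlength=4)
percentages = 100 * counts / counts.sum()
for label, pct in zip(["water", "forest", "residential", "crop/soil"], percentages):
    print(f"{label}: {pct:.1f}%")
```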

A possible extension of this would be to perform the same classification technique at different time intervals to see how the land changed over time…
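That kind of change detection could be sketched as a per-pixel comparison of two classified maps (entirely synthetic data here, just to illustrate the idea):

```python
import numpy as np

# Hypothetical classified maps from two dates, sharing the same class labels
rng = np.random.default_rng(4)
map_1999 = rng.integers(0, 4, size=(100, 100))
map_2009 = map_1999.copy()
flip = rng.random(map_1999.shape) < 0.1  # pretend ~10% of pixels changed class
map_2009[flip] = (map_1999[flip] + 1) % 4

changed = map_1999 != map_2009
print(f"{100 * changed.mean():.1f}% of pixels changed class")
```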


More Info

Classifying multispectral and hyperspectral imagery tells us about what makes up the land today, as well as how it’s changed. Change detection between processed images can show how landscapes have changed with urban growth, forest fires, farming, etc. The Multi-Resolution Land Characteristics (MRLC) consortium is a group of U.S. Federal agencies that produces 30-meter resolution land-cover maps for the U.S. using Landsat TM imagery. You may also be interested in learning more about Landsat Missions, Landsat Science, and the US Remote Sensing Program.

There is also a massive body of literature on machine learning techniques used in image processing, as maximum likelihood classification is just one of many (check out decision trees, neural networks, clustering techniques, and mixed methods). Some nice articles on maximum likelihood classification can be found here, here, and here. This lab by Harvard also looks like a nice overview of the process, using some free applications.
