
pkann(1) pkann(1)

NAME

pkann - classify raster image using Artificial Neural Network

SYNOPSIS


pkann
-t training [-i input] [-cv value] [options] [advanced options]

DESCRIPTION

pkann implements an artificial neural network (ANN) to solve a supervised classification problem. The implementation is based on the open source C++ library FANN ⟨http://leenissen.dk/fann/wp/⟩. Both raster and vector files are supported as input. The output contains the classification result, in raster or vector format matching the format of the input. A training sample must be provided as an OGR vector dataset that contains the class labels and the features for each training point. The point locations are not considered in the training step. You can use the same training sample for classifying different images, provided the number of bands of the images is identical. Use the utility pkextract(1) to create a suitable training sample from a sample of points or polygons. For raster output maps you can attach a colour table using the option -ct.

OPTIONS

input image
training vector file. A single vector file contains all training features for all classes (field names must be B0, B1, B2, ...; class numbers are identified by the label option). Use multiple training files for bootstrap aggregation (an alternative to the --bag and --bsize options, with which a random subset is taken from a single training file)
training layer name(s)
identifier for class label in training vector file. (default: label)
prior probabilities for each class (e.g., -prior 0.3 -prior 0.3 -prior 0.2 )
n-fold cross validation mode (default: 0)
number of neurons in the hidden layers of the neural network (define multiple hidden layers by repeating the option, e.g., -nn 15 -nn 1; default is one hidden layer with 5 neurons)
Only classify within specified mask (vector or raster). For raster mask, set nodata values with the option --msknodata.
mask value(s) not to consider for classification. These values are written unchanged to the classification image. Default is 0.
nodata value to put where image is masked as nodata (default: 0)
output classification image
Data type for output image ({Byte / Int16 / UInt16 / UInt32 / Int32 / Float32 / Float64 / CInt16 / CInt32 / CFloat32 / CFloat64}). Empty string: inherit type from input image
Output image format (see also gdal_translate(1)). Empty string: inherit from input image
Output OGR format for active training sample (default: SQLite)
colour table in ASCII format having 5 columns: id R G B ALFA (0: transparent, 255: solid)
Creation option for output file. Multiple options can be specified.
list of class names.
list of class values (use same order as in --class option).
set to: 0 (results only), 1 (confusion matrix), 2 (debug)
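
The colour table expected by the -ct option is a plain ASCII file with one row per class. A minimal sketch (the three class ids, colours, and all filenames are hypothetical; the pkann invocation is shown commented out as illustration only):

```shell
# Create a hypothetical 5-column colour table: id R G B ALFA
# (class ids must match the labels used in the training file)
cat > ct.txt <<'EOF'
1 0 128 0 255
2 255 255 0 255
3 0 0 255 255
EOF

# Attach it to the raster classification output (sketch only):
# pkann -i input.tif -t training.sqlite -o output.tif -ct ct.txt
```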

Advanced options

balance the input data to this number of samples for each class (default: 0)
if the number of training pixels is less than min, do not take this class into account (0: consider all classes)
band index (starting from 0, either use --band option or use --start to --end)
start band sequence number (default: 0)
end band sequence number
offset value for each spectral band input features: refl[band]=(DN[band]-offset[band])/scale[band]
scale value for each spectral band input features: refl=(DN[band]-offset[band])/scale[band]
how to combine aggregated classifiers, see also --rc option (1: sum rule, 2: max rule).
connection rate (default: 1.0 for a fully connected network)
weights for the neural network. Applies to a fully connected network only, listed from the first input neuron to the last output neuron, including the bias neurons (the last neuron in each layer but the last)
learning rate (default: 0.7)
number of maximum iterations (epoch) (default: 500)
how to combine bootstrap aggregation classifiers (0: sum rule, 1: product rule, 2: max rule). Also used to aggregate classes with --rc option. Default is sum rule (0)
Number of bootstrap aggregations (default is no bagging: 1)
Percentage of features used from the available training features for each bootstrap aggregation (one size for all classes, or a different size for each class, respectively. Default: 100)
output for each individual bootstrap aggregation (default is blank)
probability image. Default is no probability image
number of active training points (default: 1)
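
Bootstrap aggregation can be driven either by the --bag and --bsize options or by passing multiple training files, as noted under the training option above. A sketch of both forms (all filenames are placeholders; the pkann invocations are illustration only):

```shell
# a) random subsets drawn from a single training file (sketch only):
# pkann -i input.tif -t training.sqlite -o output.tif --bag 3 --bsize 50

# b) one training file per bootstrap run, built as repeated -t options:
args=""
for f in t1.sqlite t2.sqlite t3.sqlite; do
  args="$args -t $f"
done
echo "pkann -i input.tif$args -o output.tif"
```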

EXAMPLE

Classify input image input.tif with an artificial neural network using one hidden layer with 5 neurons. The training sample is provided as an OGR vector dataset containing all features (same dimensionality as input.tif) in its fields (see pkextract(1) for how to obtain such a file from a "clean" vector file containing locations only). A two-fold cross validation (-cv) is performed (output on screen).

pkann -i input.tif -t training.sqlite -o output.tif --nneuron 5 -cv 2

Same example as above, but use two hidden layers with 15 and 5 neurons respectively.

pkann -i input.tif -t training.sqlite -o output.tif --nneuron 15 --nneuron 5 -cv 2

SEE ALSO

pkextract(1)

02 March 2024