.TH mia\-2dmyopgt\-nonrigid 1 "v2.4.6" "USER COMMANDS"
.SH NAME
mia\-2dmyopgt\-nonrigid \- Run a registration of a series of 2D images.
.SH SYNOPSIS
.B mia\-2dmyopgt\-nonrigid
\-i <in\-file> \-o <out\-file> [options]
.SH DESCRIPTION
.B mia\-2dmyopgt\-nonrigid
This program implements the non-linear registration based on Pseudo Ground Truth for motion compensation of series of myocardial perfusion images given as a data set, as described in Chao Li and Ying Sun, 'Nonrigid Registration of Myocardial Perfusion MRI Using Pseudo Ground Truth', In Proc. Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2009, 165-172, 2009. Note that for this nonlinear motion correction a preceding linear registration step is usually required.
.SH OPTIONS
.SS File-IO
.RS
.IP "\-i \-\-in-file=(input, required); string"
input perfusion data set
.IP "\-o \-\-out-file=(output, required); string"
output perfusion data set
.IP "\-r \-\-registered=reg"
file name base for the registered files; the image file type is the same as given in the input data set
.RE
.SS Pseudo Ground Truth estimation
.RS
.IP "\-A \-\-alpha=1"
spatial neighborhood penalty weight
.IP "\-B \-\-beta=1"
temporal second derivative penalty weight
.IP "\-R \-\-rho-thresh=0.85"
correlation threshold for neighborhood analysis
.IP "\-k \-\-skip=0"
skip images at the beginning of the series, e.g. because they are of a different modality
.RE
.SS Registration
.RS
.IP "\-O \-\-optimizer=gsl:opt=gd,step=0.1"
Optimizer used for minimization (see the example at the end of this section). For supported plugins see PLUGINS:minimizer/singlecost
.IP "\-a \-\-start-c-rate=32"
start coefficient rate in splines; gets divided by \-\-c\-rate\-divider with every pass
.IP " \-\-c-rate-divider=4"
coefficient rate divider for each pass
.IP "\-d \-\-start-divcurl=20"
start divcurl weight; gets divided by \-\-divcurl\-divider with every pass
.IP " \-\-divcurl-divider=4"
divcurl weight scaling with each new pass
.IP "\-w \-\-imageweight=1"
image cost weight
.IP "\-l \-\-mg-levels=3"
multi\-resolution levels
.IP "\-P \-\-passes=4"
registration passes
.RE
.SS Help & Info
.RS
.IP "\-V \-\-verbose=warning"
verbosity of output; print messages of the given level and higher priorities. Supported priorities starting at lowest level are:
.RS 10
.I info
\(hy Low level messages
.RE
.RS 10
.I trace
\(hy Function call trace
.RE
.RS 10
.I fail
\(hy Report test failures
.RE
.RS 10
.I warning
\(hy Warnings
.RE
.RS 10
.I error
\(hy Report errors
.RE
.RS 10
.I debug
\(hy Debug output
.RE
.RS 10
.I message
\(hy Normal messages
.RE
.RS 10
.I fatal
\(hy Report only fatal errors
.RE
.IP " \-\-copyright"
print copyright information
.IP "\-h \-\-help"
print this help
.IP "\-? \-\-usage"
print a short help
.IP " \-\-version"
print the version number and exit
.RE
.SS Processing
.RS
.IP " \-\-threads=\-1"
Maximum number of threads to use for processing. This number should be lower than or equal to the number of logical processor cores in the machine (\-1: automatic estimation).
.RE
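.P
The optimizer given with \-O is specified in plugin syntax, i.e. the plugin name followed by a colon and a comma separated list of key=value parameters, as in the default value gsl:opt=gd,step=0.1. As an illustration only (the file names and parameter values are not tuned recommendations), a run with five passes, a finer start coefficient rate, and a smaller initial optimizer step could be invoked as:
.HP
mia\-2dmyopgt\-nonrigid \-i segment.set \-o registered.set \-P 5 \-a 16 \-O gsl:opt=gd,step=0.01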
.SH PLUGINS: minimizer/singlecost
.TP 10
.B gdas
Gradient descent with automatic step size correction, supported parameters are:
.P
.RS 14
.I ftolr = 0; double in [0, inf)
.RS 2
Stop if the relative change of the criterion is below this value.
.RE
.RE
.RS 14
.I max-step = 2; double in (0, inf)
.RS 2
Maximal absolute step size.
.RE
.RE
.RS 14
.I maxiter = 200; uint in [1, inf)
.RS 2
Stopping criterion: the maximum number of iterations.
.RE
.RE
.RS 14
.I min-step = 0.1; double in (0, inf)
.RS 2
Minimal absolute step size.
.RE
.RE
.RS 14
.I xtola = 0.01; double in [0, inf)
.RS 2
Stop if the inf\-norm of the change applied to x is below this value.
.RE
.RE
.TP 10
.B gdsq
Gradient descent with quadratic step estimation, supported parameters are:
.P
.RS 14
.I ftolr = 0; double in [0, inf)
.RS 2
Stop if the relative change of the criterion is below this value.
.RE
.RE
.RS 14
.I gtola = 0; double in [0, inf)
.RS 2
Stop if the inf\-norm of the gradient is below this value.
.RE
.RE
.RS 14
.I maxiter = 100; uint in [1, inf)
.RS 2
Stopping criterion: the maximum number of iterations.
.RE
.RE
.RS 14
.I scale = 2; double in (1, inf)
.RS 2
Fallback fixed step size scaling.
.RE
.RE
.RS 14
.I step = 0.1; double in (0, inf)
.RS 2
Initial step size.
.RE
.RE
.RS 14
.I xtola = 0; double in [0, inf)
.RS 2
Stop if the inf\-norm of the x\-update is below this value.
.RE
.RE
.TP 10
.B gsl
Optimizer plugin based on the multimin optimizers of the GNU Scientific Library (GSL) https://www.gnu.org/software/gsl/, supported parameters are:
.P
.RS 14
.I eps = 0.01; double in (0, inf)
.RS 2
Gradient based optimizers: stop when |grad| < eps; simplex: stop when the simplex size < eps.
.RE
.RE
.RS 14
.I iter = 100; uint in [1, inf)
.RS 2
Maximum number of iterations.
.RE
.RE
.RS 14
.I opt = gd; dict
.RS 2
Specific optimizer to be used. Supported values are:
.RS 4
.I bfgs
\(hy Broyden-Fletcher-Goldfarb-Shanno
.RE
.RS 4
.I bfgs2
\(hy Broyden-Fletcher-Goldfarb-Shanno (most efficient version)
.RE
.RS 4
.I cg\-fr
\(hy Fletcher-Reeves conjugate gradient algorithm
.RE
.RS 4
.I gd
\(hy Gradient descent
.RE
.RS 4
.I simplex
\(hy Simplex algorithm of Nelder and Mead
.RE
.RS 4
.I cg\-pr
\(hy Polak-Ribiere conjugate gradient algorithm
.RE
.RE
.RE
.RS 14
.I step = 0.001; double in (0, inf)
.RS 2
Initial step size.
.RE
.RE
.RS 14
.I tol = 0.1; double in (0, inf)
.RS 2
Some tolerance parameter.
.RE
.RE
.TP 10
.B nlopt
Minimizer algorithms using the NLOPT library; for a description of the optimizers please see 'http://ab-initio.mit.edu/wiki/index.php/NLopt_Algorithms', supported parameters are:
.P
.RS 14
.I ftola = 0; double in [0, inf)
.RS 2
Stopping criterion: the absolute change of the objective value is below this value.
.RE
.RE
.RS 14
.I ftolr = 0; double in [0, inf)
.RS 2
Stopping criterion: the relative change of the objective value is below this value.
.RE
.RE
.RS 14
.I higher = inf; double
.RS 2
Upper boundary (equal for all parameters).
.RE
.RE
.RS 14
.I local-opt = none; dict
.RS 2
Local minimization algorithm that may be required by the main minimization algorithm.
Supported values are:
.RS 4
.I gn\-orig\-direct\-l
\(hy Dividing Rectangles (original implementation, locally biased)
.RE
.RS 4
.I gn\-direct\-l\-noscal
\(hy Dividing Rectangles (unscaled, locally biased)
.RE
.RS 4
.I gn\-isres
\(hy Improved Stochastic Ranking Evolution Strategy
.RE
.RS 4
.I ld\-tnewton
\(hy Truncated Newton
.RE
.RS 4
.I gn\-direct\-l\-rand
\(hy Dividing Rectangles (locally biased, randomized)
.RE
.RS 4
.I ln\-newuoa
\(hy Derivative-free Unconstrained Optimization by Iteratively Constructed Quadratic Approximation
.RE
.RS 4
.I gn\-direct\-l\-rand\-noscale
\(hy Dividing Rectangles (unscaled, locally biased, randomized)
.RE
.RS 4
.I gn\-orig\-direct
\(hy Dividing Rectangles (original implementation)
.RE
.RS 4
.I ld\-tnewton\-precond
\(hy Preconditioned Truncated Newton
.RE
.RS 4
.I ld\-tnewton\-restart
\(hy Truncated Newton with steepest-descent restarting
.RE
.RS 4
.I gn\-direct
\(hy Dividing Rectangles
.RE
.RS 4
.I ln\-neldermead
\(hy Nelder-Mead simplex algorithm
.RE
.RS 4
.I ln\-cobyla
\(hy Constrained Optimization BY Linear Approximation
.RE
.RS 4
.I gn\-crs2\-lm
\(hy Controlled Random Search with Local Mutation
.RE
.RS 4
.I ld\-var2
\(hy Shifted Limited-Memory Variable-Metric, Rank 2
.RE
.RS 4
.I ld\-var1
\(hy Shifted Limited-Memory Variable-Metric, Rank 1
.RE
.RS 4
.I ld\-mma
\(hy Method of Moving Asymptotes
.RE
.RS 4
.I ld\-lbfgs\-nocedal
\(hy None
.RE
.RS 4
.I ld\-lbfgs
\(hy Low-storage BFGS
.RE
.RS 4
.I gn\-direct\-l
\(hy Dividing Rectangles (locally biased)
.RE
.RS 4
.I none
\(hy don't specify algorithm
.RE
.RS 4
.I ln\-bobyqa
\(hy Derivative-free Bound-constrained Optimization
.RE
.RS 4
.I ln\-sbplx
\(hy Subplex variant of Nelder-Mead
.RE
.RS 4
.I ln\-newuoa\-bound
\(hy Derivative-free Bound-constrained Optimization by Iteratively Constructed Quadratic Approximation
.RE
.RS 4
.I ln\-praxis
\(hy Gradient-free Local Optimization via the Principal-Axis Method
.RE
.RS 4
.I gn\-direct\-noscal
\(hy Dividing Rectangles (unscaled)
.RE
.RS 4
.I ld\-tnewton\-precond\-restart
\(hy Preconditioned Truncated Newton with steepest-descent restarting
.RE
.RE
.RE
.RS 14
.I lower = \-inf; double
.RS 2
Lower boundary (equal for all parameters).
.RE
.RE
.RS 14
.I maxiter = 100; int in [1, inf)
.RS 2
Stopping criterion: the maximum number of iterations.
.RE
.RE
.RS 14
.I opt = ld\-lbfgs; dict
.RS 2
Main minimization algorithm.
Supported values are:
.RS 4
.I gn\-orig\-direct\-l
\(hy Dividing Rectangles (original implementation, locally biased)
.RE
.RS 4
.I g\-mlsl\-lds
\(hy Multi-Level Single-Linkage (low-discrepancy-sequence, requires local gradient based optimization and bounds)
.RE
.RS 4
.I gn\-direct\-l\-noscal
\(hy Dividing Rectangles (unscaled, locally biased)
.RE
.RS 4
.I gn\-isres
\(hy Improved Stochastic Ranking Evolution Strategy
.RE
.RS 4
.I ld\-tnewton
\(hy Truncated Newton
.RE
.RS 4
.I gn\-direct\-l\-rand
\(hy Dividing Rectangles (locally biased, randomized)
.RE
.RS 4
.I ln\-newuoa
\(hy Derivative-free Unconstrained Optimization by Iteratively Constructed Quadratic Approximation
.RE
.RS 4
.I gn\-direct\-l\-rand\-noscale
\(hy Dividing Rectangles (unscaled, locally biased, randomized)
.RE
.RS 4
.I gn\-orig\-direct
\(hy Dividing Rectangles (original implementation)
.RE
.RS 4
.I ld\-tnewton\-precond
\(hy Preconditioned Truncated Newton
.RE
.RS 4
.I ld\-tnewton\-restart
\(hy Truncated Newton with steepest-descent restarting
.RE
.RS 4
.I gn\-direct
\(hy Dividing Rectangles
.RE
.RS 4
.I auglag\-eq
\(hy Augmented Lagrangian algorithm with equality constraints only
.RE
.RS 4
.I ln\-neldermead
\(hy Nelder-Mead simplex algorithm
.RE
.RS 4
.I ln\-cobyla
\(hy Constrained Optimization BY Linear Approximation
.RE
.RS 4
.I gn\-crs2\-lm
\(hy Controlled Random Search with Local Mutation
.RE
.RS 4
.I ld\-var2
\(hy Shifted Limited-Memory Variable-Metric, Rank 2
.RE
.RS 4
.I ld\-var1
\(hy Shifted Limited-Memory Variable-Metric, Rank 1
.RE
.RS 4
.I ld\-mma
\(hy Method of Moving Asymptotes
.RE
.RS 4
.I ld\-lbfgs\-nocedal
\(hy None
.RE
.RS 4
.I g\-mlsl
\(hy Multi-Level Single-Linkage (requires local optimization and bounds)
.RE
.RS 4
.I ld\-lbfgs
\(hy Low-storage BFGS
.RE
.RS 4
.I gn\-direct\-l
\(hy Dividing Rectangles (locally biased)
.RE
.RS 4
.I ln\-bobyqa
\(hy Derivative-free Bound-constrained Optimization
.RE
.RS 4
.I ln\-sbplx
\(hy Subplex variant of Nelder-Mead
.RE
.RS 4
.I ln\-newuoa\-bound
\(hy Derivative-free Bound-constrained Optimization by Iteratively Constructed Quadratic Approximation
.RE
.RS 4
.I auglag
\(hy Augmented Lagrangian algorithm
.RE
.RS 4
.I ln\-praxis
\(hy Gradient-free Local Optimization via the Principal-Axis Method
.RE
.RS 4
.I gn\-direct\-noscal
\(hy Dividing Rectangles (unscaled)
.RE
.RS 4
.I ld\-tnewton\-precond\-restart
\(hy Preconditioned Truncated Newton with steepest-descent restarting
.RE
.RS 4
.I ld\-slsqp
\(hy Sequential Least-Squares Quadratic Programming
.RE
.RE
.RE
.RS 14
.I step = 0; double in [0, inf)
.RS 2
Initial step size for gradient free methods.
.RE
.RE
.RS 14
.I stop = \-inf; double
.RS 2
Stopping criterion: the function value falls below this value.
.RE
.RE
.RS 14
.I xtola = 0; double in [0, inf)
.RS 2
Stopping criterion: the absolute change of all x\-values is below this value.
.RE
.RE
.RS 14
.I xtolr = 0; double in [0, inf)
.RS 2
Stopping criterion: the relative change of all x\-values is below this value.
.RE
.RE
.SH EXAMPLE
Register the perfusion series given in 'segment.set' by using Pseudo Ground Truth estimation. Skip two images at the beginning and otherwise use the default parameters. Store the result in 'registered.set'.
.HP
mia\-2dmyopgt\-nonrigid \-i segment.set \-o registered.set \-k 2
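.P
The optimizer can be exchanged via the plugin syntax described under PLUGINS:minimizer/singlecost. As a sketch only (the parameter values are illustrative, not recommendations), the same registration could be run with the NLOPT low-storage BFGS optimizer and a tighter relative stopping tolerance:
.HP
mia\-2dmyopgt\-nonrigid \-i segment.set \-o registered.set \-k 2 \-O nlopt:opt=ld\-lbfgs,xtolr=0.001,maxiter=300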
.SH AUTHOR(s)
Gert Wollny
.SH COPYRIGHT
This software is Copyright (c) 1999\(hy2015 Leipzig, Germany and Madrid, Spain. It comes with ABSOLUTELY NO WARRANTY and you may redistribute it under the terms of the GNU GENERAL PUBLIC LICENSE Version 3 (or later). For more information run the program with the option '\-\-copyright'.