.\" -*- mode: troff; coding: utf-8 -*- .\" Automatically generated by Pod::Man 5.01 (Pod::Simple 3.43) .\" .\" Standard preamble: .\" ======================================================================== .de Sp \" Vertical space (when we can't use .PP) .if t .sp .5v .if n .sp .. .de Vb \" Begin verbatim text .ft CW .nf .ne \\$1 .. .de Ve \" End verbatim text .ft R .fi .. .\" \*(C` and \*(C' are quotes in nroff, nothing in troff, for use with C<>. .ie n \{\ . ds C` "" . ds C' "" 'br\} .el\{\ . ds C` . ds C' 'br\} .\" .\" Escape single quotes in literal strings from groff's Unicode transform. .ie \n(.g .ds Aq \(aq .el .ds Aq ' .\" .\" If the F register is >0, we'll generate index entries on stderr for .\" titles (.TH), headers (.SH), subsections (.SS), items (.Ip), and index .\" entries marked with X<> in POD. Of course, you'll have to process the .\" output yourself in some meaningful fashion. .\" .\" Avoid warning from groff about undefined register 'F'. .de IX .. .nr rF 0 .if \n(.g .if rF .nr rF 1 .if (\n(rF:(\n(.g==0)) \{\ . if \nF \{\ . de IX . tm Index:\\$1\t\\n%\t"\\$2" .. . if !\nF==2 \{\ . nr % 0 . nr F 2 . \} . \} .\} .rr rF .\" ======================================================================== .\" .IX Title "PDL::LinearAlgebra 3pm" .TH PDL::LinearAlgebra 3pm 2024-03-07 "perl v5.38.2" "User Contributed Perl Documentation" .\" For nroff, turn off justification. Always turn off hyphenation; it makes .\" way too many mistakes in technical documents. .if n .ad l .nh .SH NAME PDL::LinearAlgebra \- Linear Algebra utils for PDL .SH SYNOPSIS .IX Header "SYNOPSIS" .Vb 1 \& use PDL::LinearAlgebra; \& \& $a = random (100,100); \& ($U, $s, $V) = mdsvd($a); .Ve .SH DESCRIPTION .IX Header "DESCRIPTION" This module provides a convenient interface to PDL::LinearAlgebra::Real and PDL::LinearAlgebra::Complex. Since Blas and Lapack use a column major ordering scheme some routines here need to transpose matrices before calling fortran routines and transpose back (see the documentation of each routine). If you need optimized code use directly PDL::LinearAlgebra::Real and PDL::LinearAlgebra::Complex. .SH FUNCTIONS .IX Header "FUNCTIONS" .SS setlaerror .IX Subsection "setlaerror" Sets action type when an error is encountered, returns previous type. Available values are NO, WARN and BARF (predefined constants). If, for example, in computation of the inverse, singularity is detected, the routine can silently return values from computation (see manuals), warn about singularity or barf. BARF is the default value. .PP .Vb 1 \& # h : x \-> g(f(x)) \& \& $a = sequence(5,5); \& $err = setlaerror(NO); \& ($b, $info)= f($a); \& setlaerror($err); \& $info ? barf "can\*(Aqt compute h" : return g($b); .Ve .SS getlaerror .IX Subsection "getlaerror" Gets action type when an error is encountered. .PP .Vb 3 \& 0 => NO, \& 1 => WARN, \& 2 => BARF .Ve .SS t .IX Subsection "t" .Vb 2 \& PDL = t(PDL, SCALAR(conj)) \& conj : Conjugate Transpose = 1 | Transpose = 0, default = 0; .Ve .PP Convenient function for transposing real or complex 2D array(s). For complex data, if conj is true returns conjugate transposed array(s) and doesn't support dataflow. Supports threading. .SS issym .IX Subsection "issym" .Vb 3 \& PDL = issym(PDL, SCALAR|PDL(tol),SCALAR(hermitian)) \& tol : tolerance value, default: 1e\-8 for double else 1e\-5 \& hermitian : Hermitian = 1 | Symmetric = 0, default = 0; .Ve .PP Checks symmetricity/Hermitianicity of matrix. Supports threading. 
.SS diag .IX Subsection "diag" Returns the i\-th diagonal if the entry is a matrix, or a matrix whose i\-th diagonal is the entry. A returned diagonal flows data back and forth. Can be used as an lvalue sub if your perl supports it. Supports threading. .PP .Vb 3 \& PDL = diag(PDL, SCALAR(i), SCALAR(vector)) \& i : i\-th diagonal, default = 0 \& vector : create diagonal matrices by threading over row vectors, default = 0 .Ve .PP .Vb 7 \& my $a = random(5,5); \& my $diag = diag($a,2); \& # If your perl supports lvalue subroutines. \& $a\->diag(\-2) .= pdl(1,2,3); \& # Construct a (5,5,5) PDL (5 matrices) with \& # diagonals from row vectors of $a \& $a\->diag(0,1) .Ve .SS tritosym .IX Subsection "tritosym" Returns symmetric or Hermitian matrix from lower or upper triangular matrix. Supports inplace and threading. Uses tricpy or ctricpy from Lapack. .PP .Vb 3 \& PDL = tritosym(PDL, SCALAR(uplo), SCALAR(conj)) \& uplo : UPPER = 0 | LOWER = 1, default = 0 \& conj : Hermitian = 1 | Symmetric = 0, default = 0; .Ve .PP .Vb 3 \& # Assume $a is upper triangular \& my $a = random(10,10); \& my $b = tritosym($a); .Ve .SS positivise .IX Subsection "positivise" Returns the entry pdl with the sign of each row changed so that the average sign of each row is positive. In other words it threads over dimension 1 and sets row = \-row if sum(sign(row)) < 0. Only makes sense for real ndarrays. Works inplace. .PP .Vb 3 \& my $a = random(10,10); \& $a \-= 0.5; \& $a\->xchg(0,1)\->inplace\->positivise; .Ve .SS mcrossprod .IX Subsection "mcrossprod" Computes the cross-product of two matrices: A' x B. If only one matrix is given, takes B to be the same as A. Supports threading. Uses crossprod or ccrossprod. .PP .Vb 1 \& PDL = mcrossprod(PDL(A), (PDL(B))) .Ve .PP .Vb 2 \& my $a = random(10,10); \& my $crossproduct = mcrossprod($a); .Ve .SS mrank .IX Subsection "mrank" Computes the rank of a matrix, using a singular value decomposition from Lapack, and returns a Perl scalar. .PP .Vb 2 \& SCALAR = mrank(PDL, SCALAR(TOL)) \& TOL: tolerance value, default : mnorm(dims(PDL),\*(Aqinf\*(Aq) * mnorm(PDL) * EPS .Ve .PP .Vb 2 \& my $a = random(10,10); \& my $b = mrank($a, 1e\-5); .Ve .SS mnorm .IX Subsection "mnorm" Computes the norm of a real or complex matrix. Supports threading. .PP .Vb 6 \& PDL(norm) = mnorm(PDL, SCALAR(ord)); \& ord : \& 0|\*(Aqinf\*(Aq : Infinity norm \& 1|\*(Aqone\*(Aq : One norm \& 2|\*(Aqtwo\*(Aq : norm 2 (default) \& 3|\*(Aqfro\*(Aq : Frobenius norm .Ve .PP .Vb 2 \& my $a = random(10,10); \& my $norm = mnorm($a); .Ve .SS mdet .IX Subsection "mdet" Computes the determinant of a general square matrix using LU factorization. Supports threading. Uses getrf or cgetrf from Lapack. .PP .Vb 1 \& PDL(determinant) = mdet(PDL); .Ve .PP .Vb 2 \& my $a = random(10,10); \& my $det = mdet($a); .Ve .SS mposdet .IX Subsection "mposdet" Computes the determinant of a symmetric or Hermitian positive definite square matrix using Cholesky factorization. Supports threading. Uses potrf or cpotrf from Lapack. .PP .Vb 2 \& (PDL, PDL) = mposdet(PDL, SCALAR) \& SCALAR : UPPER = 0 | LOWER = 1, default = 0 .Ve .PP .Vb 2 \& my $a = random(10,10); \& my $det = mposdet($a); .Ve .SS mcond .IX Subsection "mcond" Computes the condition number (two-norm) of a general matrix. .PP The condition number in the two-norm is defined as: .PP .Vb 1 \& norm (a) * norm (inv (a)). .Ve .PP Uses a singular value decomposition. Supports threading.
.PP .Vb 1 \& PDL = mcond(PDL) .Ve .PP .Vb 2 \& my $a = random(10,10); \& my $cond = mcond($a); .Ve .SS mrcond .IX Subsection "mrcond" Estimates the reciprocal condition number of a general square matrix using LU factorization in either the 1\-norm or the infinity-norm. .PP The reciprocal condition number is defined: .PP .Vb 1 \& 1/(norm (a) * norm (inv (a))) .Ve .PP Supports threading. Works on transposed array(s) .PP .Vb 4 \& PDL = mrcond(PDL, SCALAR(ord)) \& ord : \& 0 : Infinity norm (default) \& 1 : One norm .Ve .PP .Vb 2 \& my $a = random(10,10); \& my $rcond = mrcond($a,1); .Ve .SS morth .IX Subsection "morth" Returns an orthonormal basis of the range space of matrix A. .PP .Vb 2 \& PDL = morth(PDL(A), SCALAR(tol)) \& tol : tolerance for determining rank, default: 1e\-8 for double else 1e\-5 .Ve .PP .Vb 2 \& my $a = sequence(10,10); \& my $ortho = morth($a, 1e\-8); .Ve .SS mnull .IX Subsection "mnull" Returns an orthonormal basis of the null space of matrix A. Works on transposed array. .PP .Vb 2 \& PDL = mnull(PDL(A), SCALAR(tol)) \& tol : tolerance for determining rank, default: 1e\-8 for double else 1e\-5 .Ve .PP .Vb 2 \& my $a = sequence(10,10); \& my $null = mnull($a, 1e\-8); .Ve .SS minv .IX Subsection "minv" Computes inverse of a general square matrix using LU factorization. Supports inplace and threading. Uses getrf and getri or cgetrf and cgetri from Lapack and returns \f(CW\*(C`inverse, info\*(C'\fR in array context. .PP .Vb 1 \& PDL(inv) = minv(PDL) .Ve .PP .Vb 2 \& my $a = random(10,10); \& my $inv = minv($a); .Ve .SS mtriinv .IX Subsection "mtriinv" Computes inverse of a triangular matrix. Supports inplace and threading. Uses trtri or ctrtri from Lapack. Returns \f(CW\*(C`inverse, info\*(C'\fR in array context. .PP .Vb 3 \& (PDL, PDL(info))) = mtriinv(PDL, SCALAR(uplo), SCALAR|PDL(diag)) \& uplo : UPPER = 0 | LOWER = 1, default = 0 \& diag : UNITARY DIAGONAL = 1, default = 0 .Ve .PP .Vb 3 \& # Assume $a is upper triangular \& my $a = random(10,10); \& my $inv = mtriinv($a); .Ve .SS msyminv .IX Subsection "msyminv" Computes inverse of a symmetric square matrix using the Bunch-Kaufman diagonal pivoting method. Supports inplace and threading. Uses sytrf and sytri or csytrf and csytri from Lapack and returns \f(CW\*(C`inverse, info\*(C'\fR in array context. .PP .Vb 2 \& (PDL, (PDL(info))) = msyminv(PDL, SCALAR|PDL(uplo)) \& uplo : UPPER = 0 | LOWER = 1, default = 0 .Ve .PP .Vb 3 \& # Assume $a is symmetric \& my $a = random(10,10); \& my $inv = msyminv($a); .Ve .SS mposinv .IX Subsection "mposinv" Computes inverse of a symmetric positive definite square matrix using Cholesky factorization. Supports inplace and threading. Uses potrf and potri or cpotrf and cpotri from Lapack and returns \f(CW\*(C`inverse, info\*(C'\fR in array context. .PP .Vb 2 \& (PDL, (PDL(info))) = mposinv(PDL, SCALAR|PDL(uplo)) \& uplo : UPPER = 0 | LOWER = 1, default = 0 .Ve .PP .Vb 4 \& # Assume $a is symmetric positive definite \& my $a = random(10,10); \& $a = $a\->crossprod($a); \& my $inv = mposinv($a); .Ve .SS mpinv .IX Subsection "mpinv" Computes pseudo-inverse (Moore-Penrose) of a general matrix. Works on transposed array. .PP .Vb 2 \& PDL(pseudo\-inv) = mpinv(PDL, SCALAR(tol)) \& TOL: tolerance value, default : mnorm(dims(PDL),\*(Aqinf\*(Aq) * mnorm(PDL) * EPS .Ve .PP .Vb 2 \& my $a = random(5,10); \& my $inv = mpinv($a); .Ve .SS mlu .IX Subsection "mlu" Computes LU factorization. Uses getrf or cgetrf from Lapack and returns L, U, pivot and info. Works on transposed array. 
.PP .Vb 1 \& (PDL(l), PDL(u), PDL(pivot), PDL(info)) = mlu(PDL) .Ve .PP .Vb 2 \& my $a = random(10,10); \& ($l, $u, $pivot, $info) = mlu($a); .Ve .SS mchol .IX Subsection "mchol" Computes Cholesky decomposition of a symmetric matrix also known as symmetric square root. If inplace flag is set, overwrite the leading upper or lower triangular part of A else returns triangular matrix. Returns \f(CW\*(C`cholesky, info\*(C'\fR in array context. Supports threading. Uses potrf or cpotrf from Lapack. .PP .Vb 2 \& PDL(Cholesky) = mchol(PDL, SCALAR) \& SCALAR : UPPER = 0 | LOWER = 1, default = 0 .Ve .PP .Vb 3 \& my $a = random(10,10); \& $a = crossprod($a, $a); \& my $u = mchol($a); .Ve .SS mhessen .IX Subsection "mhessen" Reduces a square matrix to Hessenberg form H and orthogonal matrix Q. .PP It reduces a general matrix A to upper Hessenberg form H by an orthogonal similarity transformation: .PP .Vb 1 \& Q\*(Aq x A x Q = H .Ve .PP or .PP .Vb 1 \& A = Q x H x Q\*(Aq .Ve .PP Uses gehrd and orghr or cgehrd and cunghr from Lapack and returns \f(CW\*(C`H\*(C'\fR in scalar context else \f(CW\*(C`H\*(C'\fR and \f(CW\*(C`Q\*(C'\fR. Works on transposed array. .PP .Vb 1 \& (PDL(h), (PDL(q))) = mhessen(PDL) .Ve .PP .Vb 2 \& my $a = random(10,10); \& ($h, $q) = mhessen($a); .Ve .SS mschur .IX Subsection "mschur" Computes Schur form, works inplace. .PP .Vb 1 \& A = Z x T x Z\*(Aq .Ve .PP Supports threading for unordered eigenvalues. Uses gees or cgees from Lapack and returns schur(T) in scalar context. Works on transposed array(s). .PP .Vb 10 \& ( PDL(schur), (PDL(eigenvalues), (PDL(left schur vectors), PDL(right schur vectors), $sdim), $info) ) = mschur(PDL(A), SCALAR(schur vector),SCALAR(left eigenvector), SCALAR(right eigenvector),SCALAR(select_func), SCALAR(backtransform), SCALAR(norm)) \& schur vector : Schur vectors returned, none = 0 | all = 1 | selected = 2, default = 0 \& left eigenvector : Left eigenvectors returned, none = 0 | all = 1 | selected = 2, default = 0 \& right eigenvector : Right eigenvectors returned, none = 0 | all = 1 | selected = 2, default = 0 \& select_func : Select_func is used to select eigenvalues to sort \& to the top left of the Schur form. \& An eigenvalue is selected if PerlInt select_func(w) is true; \& (the inputs are converted to complex ndarrays for you) \& Note that a selected complex eigenvalue may no longer \& satisfy select_func(w) = 1 after ordering, since \& ordering may change the value of complex eigenvalues \& (especially if the eigenvalue is ill\-conditioned). \& All eigenvalues/vectors are selected if select_func is undefined. \& backtransform : Whether or not backtransforms eigenvectors to those of A. \& Only supported if schur vectors are computed, default = 1. \& norm : Whether or not computed eigenvectors are normalized to have Euclidean norm equal to \& 1 and largest component real, default = 1 \& \& Returned values : \& Schur form T (SCALAR CONTEXT), \& eigenvalues, \& Schur vectors (Z) if requested, \& left eigenvectors if requested \& right eigenvectors if requested \& sdim: Number of eigenvalues selected if select_func is defined. \& info: Info output from gees/cgees. .Ve .PP .Vb 8 \& my $a = random(10,10); \& my $schur = mschur($a); \& sub select{ \& my $w = shift; \& # select "discrete time" eigenspace \& return $w\->Cabs < 1 ? 1 : 0; \& } \& my ($schur,$eigen,$svectors,$evectors) = mschur($a,1,1,0,\e&select); .Ve .SS mschurx .IX Subsection "mschurx" Computes Schur form, works inplace. 
Uses geesx or cgeesx from Lapack and returns schur(T) in scalar context. Works on transposed array. .PP .Vb 10 \& ( PDL(schur) (,PDL(eigenvalues)) (, PDL(schur vectors), HASH(result)) ) = mschurx(PDL, SCALAR(schur vector), SCALAR(left eigenvector), SCALAR(right eigenvector),SCALAR(select_func), SCALAR(sense), SCALAR(backtransform), SCALAR(norm)) \& schur vector : Schur vectors returned, none = 0 | all = 1 | selected = 2, default = 0 \& left eigenvector : Left eigenvectors returned, none = 0 | all = 1 | selected = 2, default = 0 \& right eigenvector : Right eigenvectors returned, none = 0 | all = 1 | selected = 2, default = 0 \& select_func : Select_func is used to select eigenvalues to sort \& to the top left of the Schur form. \& An eigenvalue is selected if PerlInt select_func(w) is true; \& (the inputs are converted to complex ndarrays for you) \& Note that a selected complex eigenvalue may no longer \& satisfy select_func(w) = 1 after ordering, since \& ordering may change the value of complex eigenvalues \& (especially if the eigenvalue is ill\-conditioned). \& All eigenvalues/vectors are selected if select_func is undefined. \& sense : Determines which reciprocal condition numbers will be computed. \& 0: None are computed \& 1: Computed for average of selected eigenvalues only \& 2: Computed for selected right invariant subspace only \& 3: Computed for both \& If select_func is undefined, sense is not used. \& backtransform : Whether or not backtransforms eigenvectors to those of A. \& Only supported if schur vector are computed, default = 1 \& norm : Whether or not computed eigenvectors are normalized to have Euclidean norm equal to \& 1 and largest component real, default = 1 \& \& Returned values : \& Schur form T (SCALAR CONTEXT), \& eigenvalues, \& Schur vectors if requested, \& HASH{VL}: left eigenvectors if requested \& HASH{VR}: right eigenvectors if requested \& HASH{info}: info output from gees/cgees. \& if select_func is defined: \& HASH{n}: number of eigenvalues selected, \& HASH{rconde}: reciprocal condition numbers for the average of \& the selected eigenvalues if requested, \& HASH{rcondv}: reciprocal condition numbers for the selected \& right invariant subspace if requested. .Ve .PP .Vb 8 \& my $a = random(10,10); \& my $schur = mschurx($a); \& sub select{ \& my $m = shift; \& # select "discrete time" eigenspace \& return $m\->Cabs < 1 ? 1 : 0; \& } \& my ($schur,$eigen, $vectors,%ret) = mschurx($a,1,0,0,\e&select); .Ve .SS mgschur .IX Subsection "mgschur" Computes generalized Schur decomposition of the pair (A,B). .PP .Vb 2 \& A = Q x S x Z\*(Aq \& B = Q x T x Z\*(Aq .Ve .PP Uses gges or cgges from Lapack. Works on transposed array. .PP .Vb 10 \& ( PDL(schur S), PDL(schur T), PDL(alpha), PDL(beta), HASH{result}) = mgschur(PDL(A), PDL(B), SCALAR(left schur vector),SCALAR(right schur vector),SCALAR(left eigenvector), SCALAR(right eigenvector), SCALAR(select_func), SCALAR(backtransform), SCALAR(scale)) \& left schur vector : Left Schur vectors returned, none = 0 | all = 1 | selected = 2, default = 0 \& right schur vector : Right Schur vectors returned, none = 0 | all = 1 | selected = 2, default = 0 \& left eigenvector : Left eigenvectors returned, none = 0 | all = 1 | selected = 2, default = 0 \& right eigenvector : Right eigenvectors returned, none = 0 | all = 1 | selected = 2, default = 0 \& select_func : Select_func is used to select eigenvalues to sort. \& to the top left of the Schur form. 
\& An eigenvalue w = wr(j)+sqrt(\-1)*wi(j) is selected if \& PerlInt select_func(alpha,beta) is true; \& (the inputs are converted to complex ndarrays for you) \& Note that a selected complex eigenvalue may no longer \& satisfy select_func = 1 after ordering, since \& ordering may change the value of complex eigenvalues \& (especially if the eigenvalue is ill\-conditioned). \& All eigenvalues/vectors are selected if select_func is undefined. \& backtransform : Whether or not backtransforms eigenvectors to those of (A,B). \& Only supported if right and/or left schur vector are computed, \& scale : Whether or not computed eigenvectors are scaled so the largest component \& will have abs(real part) + abs(imag. part) = 1, default = 1 \& \& Returned values : \& Schur form S, \& Schur form T, \& alpha, \& beta (eigenvalues = alpha/beta), \& HASH{info}: info output from gges/cgges. \& HASH{SL}: left Schur vectors if requested \& HASH{SR}: right Schur vectors if requested \& HASH{VL}: left eigenvectors if requested \& HASH{VR}: right eigenvectors if requested \& HASH{n} : Number of eigenvalues selected if select_func is defined. .Ve .PP .Vb 8 \& my $a = random(10,10); \& my $b = random(10,10); \& my ($S,$T) = mgschur($a,$b); \& sub select{ \& my ($alpha,$beta) = @_; \& return $alpha\->Cabs < abs($beta) ? 1 : 0; \& } \& my ($S, $T, $alpha, $beta, %res) = mgschur( $a, $b, 1, 1, 1, 1,\e&select); .Ve .SS mgschurx .IX Subsection "mgschurx" Computes generalized Schur decomposition of the pair (A,B). .PP .Vb 2 \& A = Q x S x Z\*(Aq \& B = Q x T x Z\*(Aq .Ve .PP Uses ggesx or cggesx from Lapack. Works on transposed array. .PP .Vb 10 \& ( PDL(schur S), PDL(schur T), PDL(alpha), PDL(beta), HASH{result}) = mgschurx(PDL(A), PDL(B), SCALAR(left schur vector),SCALAR(right schur vector),SCALAR(left eigenvector), SCALAR(right eigenvector), SCALAR(select_func), SCALAR(sense), SCALAR(backtransform), SCALAR(scale)) \& left schur vector : Left Schur vectors returned, none = 0 | all = 1 | selected = 2, default = 0 \& right schur vector : Right Schur vectors returned, none = 0 | all = 1 | selected = 2, default = 0 \& left eigenvector : Left eigenvectors returned, none = 0 | all = 1 | selected = 2, default = 0 \& right eigenvector : Right eigenvectors returned, none = 0 | all = 1 | selected = 2, default = 0 \& select_func : Select_func is used to select eigenvalues to sort. \& to the top left of the Schur form. \& An eigenvalue w = wr(j)+sqrt(\-1)*wi(j) is selected if \& PerlInt select_func(alpha,beta) is true; \& (the inputs are converted to complex ndarrays for you) \& Note that a selected complex eigenvalue may no longer \& satisfy select_func = 1 after ordering, since \& ordering may change the value of complex eigenvalues \& (especially if the eigenvalue is ill\-conditioned). \& All eigenvalues/vectors are selected if select_func is undefined. \& sense : Determines which reciprocal condition numbers will be computed. \& 0: None are computed \& 1: Computed for average of selected eigenvalues only \& 2: Computed for selected deflating subspaces only \& 3: Computed for both \& If select_func is undefined, sense is not used. \& \& backtransform : Whether or not backtransforms eigenvectors to those of (A,B). \& Only supported if right and/or left schur vector are computed, default = 1 \& scale : Whether or not computed eigenvectors are scaled so the largest component \& will have abs(real part) + abs(imag. 
part) = 1, default = 1 \& \& Returned values : \& Schur form S, \& Schur form T, \& alpha, \& beta (eigenvalues = alpha/beta), \& HASH{info}: info output from gges/cgges. \& HASH{SL}: left Schur vectors if requested \& HASH{SR}: right Schur vectors if requested \& HASH{VL}: left eigenvectors if requested \& HASH{VR}: right eigenvectors if requested \& HASH{rconde}: reciprocal condition numbers for average of selected eigenvalues if requested \& HASH{rcondv}: reciprocal condition numbers for selected deflating subspaces if requested \& HASH{n} : Number of eigenvalues selected if select_func is defined. .Ve .PP .Vb 8 \& my $a = random(10,10); \& my $b = random(10,10); \& my ($S,$T) = mgschurx($a,$b); \& sub select{ \& my ($alpha,$beta) = @_; \& return $alpha\->Cabs < abs($beta) ? 1 : 0; \& } \& my ($S, $T, $alpha, $beta, %res) = mgschurx( $a, $b, 1, 1, 1, 1,\e&select,3); .Ve .SS mqr .IX Subsection "mqr" Computes QR decomposition. Handles complex data. Uses geqrf and orgqr or cgeqrf and cungqr from Lapack and returns \f(CW\*(C`Q\*(C'\fR in scalar context. Works on transposed array. .PP .Vb 2 \& (PDL(Q), PDL(R), PDL(info)) = mqr(PDL, SCALAR) \& SCALAR : ECONOMIC = 0 | FULL = 1, default = 0 .Ve .PP .Vb 5 \& my $a = random(10,10); \& my ( $q, $r ) = mqr($a); \& # Can compute full decomposition if nrow > ncol \& $a = random(5,7); \& ( $q, $r ) = $a\->mqr(1); .Ve .SS mrq .IX Subsection "mrq" Computes RQ decomposition. Handles complex data. Uses gerqf and orgrq or cgerqf and cungrq from Lapack and returns \f(CW\*(C`Q\*(C'\fR in scalar context. Works on transposed array. .PP .Vb 2 \& (PDL(R), PDL(Q), PDL(info)) = mrq(PDL, SCALAR) \& SCALAR : ECONOMIC = 0 | FULL = 1, default = 0 .Ve .PP .Vb 5 \& my $a = random(10,10); \& my ( $r, $q ) = mrq($a); \& # Can compute full decomposition if nrow < ncol \& $a = random(5,7); \& ( $r, $q ) = $a\->mrq(1); .Ve .SS mql .IX Subsection "mql" Computes QL decomposition. Handles complex data. Uses geqlf and orgql or cgeqlf and cungql from Lapack and returns \f(CW\*(C`Q\*(C'\fR in scalar context. Works on transposed array. .PP .Vb 2 \& (PDL(Q), PDL(L), PDL(info)) = mql(PDL, SCALAR) \& SCALAR : ECONOMIC = 0 | FULL = 1, default = 0 .Ve .PP .Vb 5 \& my $a = random(10,10); \& my ( $q, $l ) = mql($a); \& # Can compute full decomposition if nrow > ncol \& $a = random(5,7); \& ( $q, $l ) = $a\->mql(1); .Ve .SS mlq .IX Subsection "mlq" Computes LQ decomposition. Handles complex data. Uses gelqf and orglq or cgelqf and cunglq from Lapack and returns \f(CW\*(C`Q\*(C'\fR in scalar context. Works on transposed array. .PP .Vb 2 \& ( PDL(L), PDL(Q), PDL(info) ) = mlq(PDL, SCALAR) \& SCALAR : ECONOMIC = 0 | FULL = 1, default = 0 .Ve .PP .Vb 5 \& my $a = random(10,10); \& my ( $l, $q ) = mlq($a); \& # Can compute full decomposition if nrow < ncol \& $a = random(5,7); \& ( $l, $q ) = $a\->mlq(1); .Ve .SS msolve .IX Subsection "msolve" Solves linear system of equations using LU decomposition. .PP .Vb 1 \& A * X = B .Ve .PP Returns X in scalar context else X, LU, pivot vector and info. B is overwritten by X if its inplace flag is set. Supports threading. Uses gesv or cgesv from Lapack. Works on transposed arrays. .PP .Vb 1 \& (PDL(X), (PDL(LU), PDL(pivot), PDL(info))) = msolve(PDL(A), PDL(B) ) .Ve .PP .Vb 3 \& my $a = random(5,5); \& my $b = random(10,5); \& my $X = msolve($a, $b); .Ve .SS msolvex .IX Subsection "msolvex" Solves linear system of equations using LU decomposition. .PP .Vb 1 \& A * X = B .Ve .PP Can optionally equilibrate the matrix. Uses gesvx or cgesvx from Lapack. 
Works on transposed arrays. .PP .Vb 10 \& (PDL, (HASH(result))) = msolvex(PDL(A), PDL(B), HASH(options)) \& where options are: \& transpose: solves A\*(Aq * X = B \& 0: false \& 1: true \& equilibrate: equilibrates A if necessary. \& form of equilibration is returned in HASH{\*(Aqequilibration\*(Aq}: \& 0: no equilibration \& 1: row equilibration \& 2: column equilibration \& row scale factors are returned in HASH{\*(Aqrow\*(Aq} \& column scale factors are returned in HASH{\*(Aqcolumn\*(Aq} \& 0: false \& 1: true \& LU: returns LU decomposition in HASH{LU} \& 0: false \& 1: true \& A: returns scaled A if equilibration was done in HASH{A} \& 0: false \& 1: true \& B: returns scaled B if equilibration was done in HASH{B} \& 0: false \& 1: true \& Returned values: \& X (SCALAR CONTEXT), \& HASH{\*(Aqpivot\*(Aq}: \& Pivot indices from LU factorization \& HASH{\*(Aqrcondition\*(Aq}: \& Reciprocal condition of the matrix \& HASH{\*(Aqferror\*(Aq}: \& Forward error bound \& HASH{\*(Aqberror\*(Aq}: \& Componentwise relative backward error \& HASH{\*(Aqrpvgrw\*(Aq}: \& Reciprocal pivot growth factor \& HASH{\*(Aqinfo\*(Aq}: \& Info: output from gesvx .Ve .PP .Vb 7 \& my $a = random(10,10); \& my $b = random(5,10); \& my %options = ( \& LU=>1, \& equilibrate => 1, \& ); \& my( $X, %result) = msolvex($a,$b,%options); .Ve .SS mtrisolve .IX Subsection "mtrisolve" Solves linear system of equations with triangular matrix A. .PP .Vb 1 \& A * X = B or A\*(Aq * X = B .Ve .PP B is overwritten by X if its inplace flag is set. Supports threading. Uses trtrs or ctrtrs from Lapack. Works on transposed array(s). .PP .Vb 4 \& (PDL(X), (PDL(info))) = mtrisolve(PDL(A), SCALAR(uplo), PDL(B), SCALAR(trans), SCALAR(diag)) \& uplo : UPPER = 0 | LOWER = 1 \& trans : NOTRANSPOSE = 0 | TRANSPOSE = 1, default = 0 \& diag : UNITARY DIAGONAL = 1, default = 0 .Ve .PP .Vb 4 \& # Assume $a is upper triangular \& my $a = random(5,5); \& my $b = random(5,10); \& my $X = mtrisolve($a, 0, $b); .Ve .SS msymsolve .IX Subsection "msymsolve" Solves linear system of equations using diagonal pivoting method with symmetric matrix A. .PP .Vb 1 \& A * X = B .Ve .PP Returns X in scalar context else X, block diagonal matrix D (and the multipliers), pivot vector and info. B is overwritten by X if its inplace flag is set. Supports threading. Uses sysv or csysv from Lapack. Works on transposed array(s). .PP .Vb 2 \& (PDL(X), ( PDL(D), PDL(pivot), PDL(info) ) ) = msymsolve(PDL(A), SCALAR(uplo), PDL(B) ) \& uplo : UPPER = 0 | LOWER = 1, default = 0 .Ve .PP .Vb 4 \& # Assume $a is symmetric \& my $a = random(5,5); \& my $b = random(5,10); \& my $X = msymsolve($a, 0, $b); .Ve .SS msymsolvex .IX Subsection "msymsolvex" Solves linear system of equations using diagonal pivoting method with symmetric matrix A. .PP .Vb 1 \& A * X = B .Ve .PP Uses sysvx or csysvx from Lapack. Works on transposed array.
.PP .Vb 10 \& (PDL, (HASH(result))) = msymsolvex(PDL(A), SCALAR(uplo), PDL(B), SCALAR(d)) \& uplo : UPPER = 0 | LOWER = 1, default = 0 \& d : whether to return diagonal matrix D and pivot vector \& FALSE = 0 | TRUE = 1, default = 0 \& Returned values: \& X (SCALAR CONTEXT), \& HASH{\*(AqD\*(Aq}: \& Block diagonal matrix D (and the multipliers) (if requested) \& HASH{\*(Aqpivot\*(Aq}: \& Pivot indices from LU factorization (if requested) \& HASH{\*(Aqrcondition\*(Aq}: \& Reciprocal condition of the matrix \& HASH{\*(Aqferror\*(Aq}: \& Forward error bound \& HASH{\*(Aqberror\*(Aq}: \& Componentwise relative backward error \& HASH{\*(Aqinfo\*(Aq}: \& Info: output from sysvx .Ve .PP .Vb 4 \& # Assume $a is symmetric \& my $a = random(10,10); \& my $b = random(5,10); \& my ($X, %result) = msymsolvex($a, 0, $b); .Ve .SS mpossolve .IX Subsection "mpossolve" Solves linear system of equations using Cholesky decomposition with symmetric positive definite matrix A. .PP .Vb 1 \& A * X = B .Ve .PP Returns X in scalar context else X, U or L and info. B is overwritten by X if its inplace flag is set. Supports threading. Uses posv or cposv from Lapack. Works on transposed array(s). .PP .Vb 2 \& (PDL, (PDL, PDL, PDL)) = mpossolve(PDL(A), SCALAR(uplo), PDL(B) ) \& uplo : UPPER = 0 | LOWER = 1, default = 0 .Ve .PP .Vb 4 \& # Assume $a is symmetric positive definite \& my $a = random(5,5); \& my $b = random(5,10); \& my $X = mpossolve($a, 0, $b); .Ve .SS mpossolvex .IX Subsection "mpossolvex" Solves linear system of equations using Cholesky decomposition with symmetric positive definite matrix A. .PP .Vb 1 \& A * X = B .Ve .PP Can optionally equilibrate the matrix. Uses posvx or cposvx from Lapack. Works on transposed array(s). .PP .Vb 10 \& (PDL, (HASH(result))) = mpossolvex(PDL(A), SCALAR(uplo), PDL(B), HASH(options)) \& uplo : UPPER = 0 | LOWER = 1, default = 0 \& where options are: \& equilibrate: equilibrates A if necessary. \& form of equilibration is returned in HASH{\*(Aqequilibration\*(Aq}: \& 0: no equilibration \& 1: equilibration \& scale factors are returned in HASH{\*(Aqscale\*(Aq} \& 0: false \& 1: true \& U|L: returns Cholesky factorization in HASH{U} or HASH{L} \& 0: false \& 1: true \& A: returns scaled A if equilibration was done in HASH{A} \& 0: false \& 1: true \& B: returns scaled B if equilibration was done in HASH{B} \& 0: false \& 1: true \& Returned values: \& X (SCALAR CONTEXT), \& HASH{\*(Aqrcondition\*(Aq}: \& Reciprocal condition of the matrix \& HASH{\*(Aqferror\*(Aq}: \& Forward error bound \& HASH{\*(Aqberror\*(Aq}: \& Componentwise relative backward error \& HASH{\*(Aqinfo\*(Aq}: \& Info: output from posvx .Ve .PP .Vb 7 \& # Assume $a is symmetric positive definite \& my $a = random(10,10); \& my $b = random(5,10); \& my %options = (U=>1, \& equilibrate => 1, \& ); \& my ($X, %result) = mpossolvex($a, 0, $b, %options); .Ve .SS mlls .IX Subsection "mlls" Solves overdetermined or underdetermined real linear systems using QR or LQ factorization. .PP If M > N in the M\-by-N matrix A, returns the residual sum of squares too. Uses gels or cgels from Lapack. Works on transposed arrays. .PP .Vb 2 \& PDL(X) = mlls(PDL(A), PDL(B), SCALAR(trans)) \& trans : NOTRANSPOSE = 0 | TRANSPOSE/CONJUGATE = 1, default = 0 .Ve .PP .Vb 3 \& $a = random(4,5); \& $b = random(3,5); \& ($x, $res) = mlls($a, $b); .Ve .SS mllsy .IX Subsection "mllsy" Computes the minimum-norm solution to a real linear least squares problem using a complete orthogonal factorization. .PP Uses gelsy or cgelsy from Lapack.
Works on transposed arrays. .PP .Vb 9 \& ( PDL(X), ( HASH(result) ) ) = mllsy(PDL(A), PDL(B)) \& Returned values: \& X (SCALAR CONTEXT), \& HASH{\*(AqA\*(Aq}: \& complete orthogonal factorization of A \& HASH{\*(Aqjpvt\*(Aq}: \& details of columns interchanges \& HASH{\*(Aqrank\*(Aq}: \& effective rank of A .Ve .PP .Vb 3 \& my $a = random(10,10); \& my $b = random(10,10); \& $X = mllsy($a, $b); .Ve .SS mllss .IX Subsection "mllss" Computes the minimum-norm solution to a real linear least squares problem using a singular value decomposition. .PP Uses gelss or gelsd from Lapack. Works on transposed arrays. .PP .Vb 10 \& ( PDL(X), ( HASH(result) ) )= mllss(PDL(A), PDL(B), SCALAR(method)) \& method: specifies which method to use (see Lapack for further details) \& \*(Aq(c)gelss\*(Aq or \*(Aq(c)gelsd\*(Aq, default = \*(Aq(c)gelsd\*(Aq \& Returned values: \& X (SCALAR CONTEXT), \& HASH{\*(AqV\*(Aq}: \& if method = (c)gelss, the right singular vectors, stored columnwise \& HASH{\*(Aqs\*(Aq}: \& singular values from SVD \& HASH{\*(Aqres\*(Aq}: \& if A has full rank the residual sum\-of\-squares for the solution \& HASH{\*(Aqrank\*(Aq}: \& effective rank of A \& HASH{\*(Aqinfo\*(Aq}: \& info output from method .Ve .PP .Vb 3 \& my $a = random(10,10); \& my $b = random(10,10); \& $X = mllss($a, $b); .Ve .SS mglm .IX Subsection "mglm" Solves a general Gauss-Markov Linear Model (GLM) problem. Supports threading. Uses ggglm or cggglm from Lapack. Works on transposed arrays. .PP .Vb 2 \& (PDL(x), PDL(y)) = mglm(PDL(a), PDL(b), PDL(d)) \& where d is the left hand side of the GLM equation .Ve .PP .Vb 4 \& my $a = random(8,10); \& my $b = random(7,10); \& my $d = random(10); \& my ($x, $y) = mglm($a, $b, $d); .Ve .SS mlse .IX Subsection "mlse" Solves a linear equality-constrained least squares (LSE) problem. Uses gglse or cgglse from Lapack. Works on transposed arrays. .PP .Vb 9 \& (PDL(x), PDL(res2)) = mlse(PDL(a), PDL(b), PDL(c), PDL(d)) \& where \& c : The right hand side vector for the \& least squares part of the LSE problem. \& d : The right hand side vector for the \& constrained equation. \& x : The solution of the LSE problem. \& res2 : The residual sum of squares for the solution \& (returned only in array context) .Ve .PP .Vb 5 \& my $a = random(5,4); \& my $b = random(5,3); \& my $c = random(4); \& my $d = random(3); \& my ($x, $res2) = mlse($a, $b, $c, $d); .Ve .SS meigen .IX Subsection "meigen" Computes eigenvalues and, optionally, the left and/or right eigenvectors of a general square matrix (spectral decomposition). Eigenvectors are normalized (Euclidean norm = 1) and largest component real. The eigenvalues and eigenvectors returned are complex ndarrays. If only eigenvalues are requested, info is returned in array context. Supports threading. Uses geev or cgeev from Lapack. Works on transposed arrays. .PP .Vb 3 \& (PDL(values), (PDL(LV), (PDL(RV)), (PDL(info)))) = meigen(PDL, SCALAR(left vector), SCALAR(right vector)) \& left vector : FALSE = 0 | TRUE = 1, default = 0 \& right vector : FALSE = 0 | TRUE = 1, default = 0 .Ve .PP .Vb 2 \& my $a = random(10,10); \& my ( $eigenvalues, $left_eigenvectors, $right_eigenvectors ) = meigen($a,1,1); .Ve .SS meigenx .IX Subsection "meigenx" Computes eigenvalues, one-norm and, optionally, the left and/or right eigenvectors of a general square matrix (spectral decomposition). Eigenvectors are normalized (Euclidean norm = 1) and largest component real. The eigenvalues and eigenvectors returned are complex ndarrays. Uses geevx or cgeevx from Lapack. 
Works on transposed arrays. .PP .Vb 10 \& (PDL(values), (PDL(lv)), (PDL(rv)), HASH(result)) = meigenx(PDL, HASH(options)) \& where options are: \& vector: eigenvectors to compute \& \*(Aqleft\*(Aq: computes left eigenvectors \& \*(Aqright\*(Aq: computes right eigenvectors \& \*(Aqall\*(Aq: computes left and right eigenvectors \& 0: doesn\*(Aqt compute (default) \& rcondition: reciprocal condition numbers to compute (returned in HASH{\*(Aqrconde\*(Aq} for eigenvalues and HASH{\*(Aqrcondv\*(Aq} for eigenvectors) \& \*(Aqvalue\*(Aq: computes reciprocal condition numbers for eigenvalues \& \*(Aqvector\*(Aq: computes reciprocal condition numbers for eigenvectors \& \*(Aqall\*(Aq: computes reciprocal condition numbers for eigenvalues and eigenvectors \& 0: doesn\*(Aqt compute (default) \& error: specifies whether or not it computes the error bounds (returned in HASH{\*(Aqeerror\*(Aq} and HASH{\*(Aqverror\*(Aq}) \& error bound = EPS * One\-norm / rcond(e|v) \& (reciprocal condition numbers for eigenvalues or eigenvectors must be computed). \& 1: returns error bounds \& 0: not computed \& scale: specifies whether or not it diagonally scales the entry matrix \& (scale details returned in HASH : \*(Aqscale\*(Aq) \& 1: scales \& 0: doesn\*(Aqt scale (default) \& permute: specifies whether or not it permutes rows and columns \& (permute details returned in HASH{\*(Aqbalance\*(Aq}) \& 1: permutes \& 0: doesn\*(Aqt permute (default) \& schur: specifies whether or not it returns the Schur form (returned in HASH{\*(Aqschur\*(Aq}) \& 1: returns Schur form \& 0: not returned \& Returned values: \& eigenvalues (SCALAR CONTEXT), \& left eigenvectors if requested, \& right eigenvectors if requested, \& HASH{\*(Aqnorm\*(Aq}: \& One\-norm of the matrix \& HASH{\*(Aqinfo\*(Aq}: \& Info: if > 0, the QR algorithm failed to compute all the eigenvalues \& (see syevx for further details) .Ve .PP .Vb 10 \& my $a = random(10,10); \& my %options = ( rcondition => \*(Aqall\*(Aq, \& vector => \*(Aqall\*(Aq, \& error => 1, \& scale => 1, \& permute=>1, \& schur => 1 \& ); \& my ( $eigenvalues, $left_eigenvectors, $right_eigenvectors, %result) = meigenx($a,%options); \& print "Error bounds for eigenvalues:\en $eigenvalues\en are:\en". transpose($result{\*(Aqeerror\*(Aq}) unless $result{\*(Aqinfo\*(Aq}; .Ve .SS mgeigen .IX Subsection "mgeigen" Computes generalized eigenvalues and, optionally, the left and/or right generalized eigenvectors for a pair of N\-by-N real nonsymmetric matrices (A,B). The alpha from the ratio alpha/beta is a complex ndarray. Supports threading. Uses ggev or cggev from Lapack. Works on transposed arrays. .PP .Vb 3 \& ( PDL(alpha), PDL(beta), (PDL(LV)), (PDL(RV)), PDL(info) ) = mgeigen(PDL(A), PDL(B), SCALAR(left vector), SCALAR(right vector)) \& left vector : FALSE = 0 | TRUE = 1, default = 0 \& right vector : FALSE = 0 | TRUE = 1, default = 0 .Ve .PP .Vb 3 \& my $a = random(10,10); \& my $b = random(10,10); \& my ( $alpha, $beta, $left_eigenvectors, $right_eigenvectors ) = mgeigen($a, $b, 1, 1); .Ve .SS mgeigenx .IX Subsection "mgeigenx" Computes generalized eigenvalues, one-norms and, optionally, the left and/or right generalized eigenvectors for a pair of N\-by-N real nonsymmetric matrices (A,B). The alpha from the ratio alpha/beta is a complex ndarray. Uses ggevx or cggevx from Lapack. Works on transposed arrays.
.PP .Vb 10 \& (PDL(alpha), PDL(beta), PDL(lv), PDL(rv), HASH(result) ) = mgeigenx(PDL(a), PDL(b), HASH(options)) \& where options are: \& vector: eigenvectors to compute \& \*(Aqleft\*(Aq: computes left eigenvectors \& \*(Aqright\*(Aq: computes right eigenvectors \& \*(Aqall\*(Aq: computes left and right eigenvectors \& 0: doesn\*(Aqt compute (default) \& rcondition: reciprocal condition numbers to compute (returned in HASH{\*(Aqrconde\*(Aq} for eigenvalues and HASH{\*(Aqrcondv\*(Aq} for eigenvectors) \& \*(Aqvalue\*(Aq: computes reciprocal condition numbers for eigenvalues \& \*(Aqvector\*(Aq: computes reciprocal condition numbers for eigenvectors \& \*(Aqall\*(Aq: computes reciprocal condition numbers for eigenvalues and eigenvectors \& 0: doesn\*(Aqt compute (default) \& error: specifies whether or not it computes the error bounds (returned in HASH{\*(Aqeerror\*(Aq} and HASH{\*(Aqverror\*(Aq}) \& error bound = EPS * sqrt(one\-norm(a)**2 + one\-norm(b)**2) / rcond(e|v) \& (reciprocal condition numbers for eigenvalues or eigenvectors must be computed). \& 1: returns error bounds \& 0: not computed \& scale: specifies whether or not it diagonally scales the entry matrix \& (scale details returned in HASH : \*(Aqlscale\*(Aq and \*(Aqrscale\*(Aq) \& 1: scales \& 0: doesn\*(Aqt scale (default) \& permute: specifies whether or not it permutes rows and columns \& (permute details returned in HASH{\*(Aqbalance\*(Aq}) \& 1: permutes \& 0: doesn\*(Aqt permute (default) \& schur: specifies whether or not it returns the Schur forms (returned in HASH{\*(Aqaschur\*(Aq} and HASH{\*(Aqbschur\*(Aq}) \& (right or left eigenvectors must be computed). \& 1: returns Schur forms \& 0: not returned \& Returned values: \& alpha, \& beta, \& left eigenvectors if requested, \& right eigenvectors if requested, \& HASH{\*(Aqanorm\*(Aq}, HASH{\*(Aqbnorm\*(Aq}: \& One\-norms of the matrices A and B \& HASH{\*(Aqinfo\*(Aq}: \& Info: if > 0, the QR algorithm failed to compute all the eigenvalues \& (see syevx for further details) .Ve .PP .Vb 11 \& $a = random(10,10); \& $b = random(10,10); \& %options = (rcondition => \*(Aqall\*(Aq, \& vector => \*(Aqall\*(Aq, \& error => 1, \& scale => 1, \& permute=>1, \& schur => 1 \& ); \& ($alpha, $beta, $left_eigenvectors, $right_eigenvectors, %result) = mgeigenx($a, $b,%options); \& print "Error bounds for eigenvalues:\en $alpha\en are:\en". transpose($result{\*(Aqeerror\*(Aq}) unless $result{\*(Aqinfo\*(Aq}; .Ve .SS msymeigen .IX Subsection "msymeigen" Computes eigenvalues and, optionally, eigenvectors of a real symmetric square or complex Hermitian matrix (spectral decomposition). The eigenvalues are computed from the lower or upper triangular matrix. If only eigenvalues are requested, info is returned in array context. Supports threading and works inplace if eigenvectors are requested. From Lapack, uses syev or syevd for real and cheev or cheevd for complex. Works on transposed array(s).
.PP .Vb 4 \& (PDL(values), (PDL(VECTORS)), PDL(info)) = msymeigen(PDL, SCALAR(uplo), SCALAR(vector), SCALAR(method)) \& uplo : UPPER = 0 | LOWER = 1, default = 0 \& vector : FALSE = 0 | TRUE = 1, default = 0 \& method : \*(Aqsyev\*(Aq | \*(Aqsyevd\*(Aq | \*(Aqcheev\*(Aq | \*(Aqcheevd\*(Aq, default = \*(Aqsyevd\*(Aq|\*(Aqcheevd\*(Aq .Ve .PP .Vb 3 \& # Assume $a is symmetric \& my $a = random(10,10); \& my ( $eigenvalues, $eigenvectors ) = msymeigen($a,0,1, \*(Aqsyev\*(Aq); .Ve .SS msymeigenx .IX Subsection "msymeigenx" Computes eigenvalues and, optionally, eigenvectors of a symmetric square matrix (spectral decomposition). The eigenvalues are computed from the lower or upper triangular matrix and can be selected by specifying a range. From Lapack, uses syevx or syevr for real and cheevx or cheevr for complex. Works on transposed arrays. .PP .Vb 10 \& (PDL(value), (PDL(vector)), PDL(n), PDL(info), (PDL(support)) ) = msymeigenx(PDL, SCALAR(uplo), SCALAR(vector), HASH(options)) \& uplo : UPPER = 0 | LOWER = 1, default = 0 \& vector : FALSE = 0 | TRUE = 1, default = 0 \& where options are: \& range_type: method for selecting eigenvalues \& indice: range of indices \& interval: range of values \& 0: find all eigenvalues and optionally all vectors \& range: PDL(2), lower and upper bounds interval or smallest and largest indices \& 1<=range<=N for indice \& abstol: specifies error tolerance for eigenvalues \& method: specifies which method to use (see Lapack for further details) \& \*(Aqsyevx\*(Aq (default) \& \*(Aqsyevr\*(Aq \& \*(Aqcheevx\*(Aq (default) \& \*(Aqcheevr\*(Aq \& Returned values: \& eigenvalues (SCALAR CONTEXT), \& eigenvectors if requested, \& total number of eigenvalues found (n), \& info \& isuppz or ifail (support) according to method used and returned info, \& for (sy|che)evx returns support only if info != 0 .Ve .PP .Vb 10 \& # Assume $a is symmetric \& my $a = random(10,10); \& my $overflow = lamch(9); \& my $range = cat pdl(0),$overflow; \& my $abstol = pdl(1.e\-5); \& my %options = (range_type=>\*(Aqinterval\*(Aq, \& range => $range, \& abstol => $abstol, \& method=>\*(Aqsyevr\*(Aq); \& my ( $eigenvalues, $eigenvectors, $n, $isuppz ) = msymeigenx($a,0,1, %options); .Ve .SS msymgeigen .IX Subsection "msymgeigen" Computes eigenvalues and, optionally, eigenvectors of a real generalized symmetric-definite or Hermitian-definite eigenproblem. The eigenvalues are computed from the lower or upper triangular matrix. If only eigenvalues are requested, info is returned in array context. Supports threading. From Lapack, uses sygv or sygvd for real or chegv or chegvd for complex. Works on transposed array(s). .PP .Vb 9 \& (PDL(values), (PDL(vectors)), PDL(info)) = msymgeigen(PDL(a), PDL(b), SCALAR(uplo), SCALAR(vector), SCALAR(type), SCALAR(method)) \& uplo : UPPER = 0 | LOWER = 1, default = 0 \& vector : FALSE = 0 | TRUE = 1, default = 0 \& type : \& 1: A * x = (lambda) * B * x \& 2: A * B * x = (lambda) * x \& 3: B * A * x = (lambda) * x \& default = 1 \& method : \*(Aqsygv\*(Aq | \*(Aqsygvd\*(Aq for real or \*(Aqchegv\*(Aq | \*(Aqchegvd\*(Aq for complex, default = \*(Aqsygvd\*(Aq | \*(Aqchegvd\*(Aq .Ve .PP .Vb 5 \& # Assume $a is symmetric \& my $a = random(10,10); \& my $b = random(10,10); \& $b = $b\->crossprod($b); \& my ( $eigenvalues, $eigenvectors ) = msymgeigen($a, $b, 0, 1, 1, \*(Aqsygv\*(Aq); .Ve .SS msymgeigenx .IX Subsection "msymgeigenx" Computes eigenvalues and, optionally, eigenvectors of a real generalized symmetric-definite or Hermitian eigenproblem.
The eigenvalues are computed from the lower or upper triangular matrix and can be selected by specifying a range. Uses sygvx or cheevx from Lapack. Works on transposed arrays. .PP .Vb 10 \& (PDL(value), (PDL(vector)), PDL(info), PDL(n), (PDL(support)) ) = msymgeigenx(PDL(a), PDL(b), SCALAR(uplo), SCALAR(vector), HASH(options)) \& uplo : UPPER = 0 | LOWER = 1, default = 0 \& vector : FALSE = 0 | TRUE = 1, default = 0 \& where options are: \& type : Specifies the problem type to be solved \& 1: A * x = (lambda) * B * x \& 2: A * B * x = (lambda) * x \& 3: B * A * x = (lambda) * x \& default = 1 \& range_type: method for selecting eigenvalues \& indice: range of indices \& interval: range of values \& 0: find all eigenvalues and optionally all vectors \& range: PDL(2), lower and upper bounds interval or smallest and largest indices \& 1<=range<=N for indice \& abstol: specifies error tolerance for eigenvalues \& Returned values: \& eigenvalues (SCALAR CONTEXT), \& eigenvectors if requested, \& total number of eigenvalues found (n), \& info \& ifail according to returned info (support). .Ve .PP .Vb 12 \& # Assume $a is symmetric \& my $a = random(10,10); \& my $b = random(10,10); \& $b = $b\->crossprod($b); \& my $overflow = lamch(9); \& my $range = cat pdl(0),$overflow; \& my $abstol = pdl(1.e\-5); \& my %options = (range_type=>\*(Aqinterval\*(Aq, \& range => $range, \& abstol => $abstol, \& type => 1); \& my ( $eigenvalues, $eigenvectors, $n, $isuppz ) = msymgeigenx($a, $b, 0,1, %options); .Ve .SS mdsvd .IX Subsection "mdsvd" Computes SVD using Coppen's divide and conquer algorithm. Returns singular values in scalar context, else left (U), singular values, right (V' (hermitian for complex)) singular vectors and info. Supports threading. If only singular values are requested, info is only returned in array context. Uses gesdd or cgesdd from Lapack. .PP .Vb 5 \& (PDL(U), (PDL(s), PDL(V)), PDL(info)) = mdsvd(PDL, SCALAR(job)) \& job : 0 = computes only singular values \& 1 = computes full SVD (square U and V) \& 2 = computes SVD (singular values, right and left singular vectors) \& default = 1 .Ve .PP .Vb 2 \& my $a = random(5,10); \& my ($u, $s, $v) = mdsvd($a); .Ve .SS msvd .IX Subsection "msvd" Computes SVD. Can compute singular values, either U or V or neither. Returns singular values in scalar context, else left (U), singular values, right (V' (hermitian for complex)) singular vectors and info. Supports threading. If only singular values are requested, info is returned in array context. Uses gesvd or cgesvd from Lapack. .PP .Vb 9 \& ( (PDL(U)), PDL(s), (PDL(V)), PDL(info)) = msvd(PDL, SCALAR(jobu), SCALAR(jobv)) \& jobu : 0 = Doesn\*(Aqt compute U \& 1 = computes full SVD (square U) \& 2 = computes right singular vectors \& default = 1 \& jobv : 0 = Doesn\*(Aqt compute V \& 1 = computes full SVD (square V) \& 2 = computes left singular vectors \& default = 1 .Ve .PP .Vb 2 \& my $a = random(10,10); \& my ($u, $s, $v) = msvd($a); .Ve .SS mgsvd .IX Subsection "mgsvd" Computes generalized (or quotient) singular value decomposition. If the effective rank of (A',B')' is 0, returns only unitary V, U, Q. Handles complex data. Uses ggsvd or cggsvd from Lapack. Works on transposed arrays.
.PP .Vb 10 \& (PDL(sa), PDL(sb), %ret) = mgsvd(PDL(a), PDL(b), %HASH(options)) \& where options are: \& V: whether or not computes V (boolean, returned in HASH{\*(AqV\*(Aq}) \& U: whether or not computes U (boolean, returned in HASH{\*(AqU\*(Aq}) \& Q: whether or not computes Q (boolean, returned in HASH{\*(AqQ\*(Aq}) \& D1: whether or not computes D1 (boolean, returned in HASH{\*(AqD1\*(Aq}) \& D2: whether or not computes D2 (boolean, returned in HASH{\*(AqD2\*(Aq}) \& 0R: whether or not computes 0R (boolean, returned in HASH{\*(Aq0R\*(Aq}) \& R: whether or not computes R (boolean, returned in HASH{\*(AqR\*(Aq}) \& X: whether or not computes X (boolean, returned in HASH{\*(AqX\*(Aq}) \& all: whether or not computes all the above. \& Returned value: \& sa,sb : singular value pairs of A and B (generalized singular values = sa/sb) \& $ret{\*(Aqrank\*(Aq} : effective numerical rank of (A\*(Aq,B\*(Aq)\*(Aq \& $ret{\*(Aqinfo\*(Aq} : info from (c)ggsvd .Ve .PP .Vb 3 \& my $a = random(5,5); \& my $b = random(5,7); \& my ($c, $s, %ret) = mgsvd($a, $b, X => 1); .Ve .SH AUTHOR .IX Header "AUTHOR" Copyright (C) Grégory Vanuxem 2005\-2018. .PP This library is free software; you can redistribute it and/or modify it under the terms of the Perl Artistic License as in the file Artistic_2 in this distribution.