.\" Automatically generated by Pod::Man 4.14 (Pod::Simple 3.43) .\" .\" Standard preamble: .\" ======================================================================== .de Sp \" Vertical space (when we can't use .PP) .if t .sp .5v .if n .sp .. .de Vb \" Begin verbatim text .ft CW .nf .ne \\$1 .. .de Ve \" End verbatim text .ft R .fi .. .\" Set up some character translations and predefined strings. \*(-- will .\" give an unbreakable dash, \*(PI will give pi, \*(L" will give a left .\" double quote, and \*(R" will give a right double quote. \*(C+ will .\" give a nicer C++. Capital omega is used to do unbreakable dashes and .\" therefore won't be available. \*(C` and \*(C' expand to `' in nroff, .\" nothing in troff, for use with C<>. .tr \(*W- .ds C+ C\v'-.1v'\h'-1p'\s-2+\h'-1p'+\s0\v'.1v'\h'-1p' .ie n \{\ . ds -- \(*W- . ds PI pi . if (\n(.H=4u)&(1m=24u) .ds -- \(*W\h'-12u'\(*W\h'-12u'-\" diablo 10 pitch . if (\n(.H=4u)&(1m=20u) .ds -- \(*W\h'-12u'\(*W\h'-8u'-\" diablo 12 pitch . ds L" "" . ds R" "" . ds C` "" . ds C' "" 'br\} .el\{\ . ds -- \|\(em\| . ds PI \(*p . ds L" `` . ds R" '' . ds C` . ds C' 'br\} .\" .\" Escape single quotes in literal strings from groff's Unicode transform. .ie \n(.g .ds Aq \(aq .el .ds Aq ' .\" .\" If the F register is >0, we'll generate index entries on stderr for .\" titles (.TH), headers (.SH), subsections (.SS), items (.Ip), and index .\" entries marked with X<> in POD. Of course, you'll have to process the .\" output yourself in some meaningful fashion. .\" .\" Avoid warning from groff about undefined register 'F'. .de IX .. .nr rF 0 .if \n(.g .if rF .nr rF 1 .if (\n(rF:(\n(.g==0)) \{\ . if \nF \{\ . de IX . tm Index:\\$1\t\\n%\t"\\$2" .. . if !\nF==2 \{\ . nr % 0 . nr F 2 . \} . \} .\} .rr rF .\" ======================================================================== .\" .IX Title "Limits 3pm" .TH Limits 3pm "2023-06-17" "perl v5.36.0" "User Contributed Perl Documentation" .\" For nroff, turn off justification. Always turn off hyphenation; it makes .\" way too many mistakes in technical documents. .if n .ad l .nh .SH "NAME" PDL::Graphics::Limits \- derive limits for display purposes .SH "DESCRIPTION" .IX Header "DESCRIPTION" Functions to derive limits for data for display purposes .SH "SYNOPSIS" .IX Header "SYNOPSIS" .Vb 1 \& use PDL::Graphics::Limits; .Ve .SH "FUNCTIONS" .IX Header "FUNCTIONS" .SS "limits" .IX Subsection "limits" \&\fBlimits\fR derives global limits for one or more multi-dimensional sets of data for display purposes. It obtains minimum and maximum limits for each dimension based upon one of several algorithms. .PP .Vb 4 \& @limits = limits( @datasets ); \& @limits = limits( @datasets, \e%attr ); \& $limits = limits( @datasets ); \& $limits = limits( @datasets, \e%attr ); .Ve .PP \fIData Sets\fR .IX Subsection "Data Sets" .PP A data set is represented as a set of one dimensional vectors, one per dimension. All data sets must have the same dimensions. Multi-dimensional data sets are packaged as arrays or hashs; one dimensional data sets need not be. The different representations may be mixed, as long as the dimensions are presented in the same order. Vectors may be either scalars or ndarrays. 
.IP "One dimensional data sets" 8 .IX Item "One dimensional data sets" One dimensional data sets may be passed directly, with no additional packaging: .Sp .Vb 1 \& limits( $scalar, $ndarray ); .Ve .IP "Data sets as arrays" 8 .IX Item "Data sets as arrays" If the data sets are represented by arrays, each vectors in each array must have the same order: .Sp .Vb 2 \& @ds1 = ( $x1_pdl, $y1_pdl ); \& @ds2 = ( $x2_pdl, $y2_pdl ); .Ve .Sp They are passed by reference: .Sp .Vb 1 \& limits( \e@ds1, \e@ds2 ); .Ve .IP "Data sets as hashes" 8 .IX Item "Data sets as hashes" Hashes are passed by reference as well, but \fImust\fR be further embedded in arrays (also passed by reference), in order that the last one is not confused with the optional trailing attribute hash. For example: .Sp .Vb 1 \& limits( [ \e%ds4, \e%ds5 ], \e%attr ); .Ve .Sp If each hash uses the same keys to identify the data, the keys should be passed as an ordered array via the \f(CW\*(C`VecKeys\*(C'\fR attribute: .Sp .Vb 1 \& limits( [ \e%h1, \e%h2 ], { VecKeys => [ \*(Aqx\*(Aq, \*(Aqy\*(Aq ] } ); .Ve .Sp If the hashes use different keys, each hash must be accompanied by an ordered listing of the keys, embedded in their own anonymous array: .Sp .Vb 1 \& [ \e%h1 => ( \*(Aqx\*(Aq, \*(Aqy\*(Aq ) ], [ \e%h2 => ( \*(Aqu\*(Aq, \*(Aqv\*(Aq ) ] .Ve .Sp Keys which are not explicitly identified are ignored. .PP \fIErrors\fR .IX Subsection "Errors" .PP Error bars must be taken into account when determining limits; care is especially needed if the data are to be transformed before plotting (for logarithmic plots, for example). Errors may be symmetric (a single value indicates the negative and positive going errors for a data point) or asymmetric (two values are required to specify the errors). .PP If the data set is specified as an array of vectors, vectors with errors should be embedded in an array. For symmetric errors, the error is given as a single vector (ndarray or scalar); for asymmetric errors, there should be two values (one of which may be \f(CW\*(C`undef\*(C'\fR to indicate a one-sided error bar): .PP .Vb 6 \& @ds1 = ( $x, # no errors \& [ $y, $yerr ], # symmetric errors \& [ $z, $zn, $zp ], # asymmetric errors \& [ $u, undef, $up ], # one\-sided error bar \& [ $v, $vn, undef ], # one\-sided error bar \& ); .Ve .PP If the data set is specified as a hash of vectors, the names of the error bar keys are appended to the names of the data keys in the \&\f(CW\*(C`VecKeys\*(C'\fR designations. The error bar key names are always prefixed with a character indicating what kind of error they represent: .PP .Vb 3 \& < negative going errors \& > positive going errors \& = symmetric errors .Ve .PP (Column names may be separated by commas or white space.) 
.PP
For example,
.PP
.Vb 2
\& %ds1 = ( x => $x, xerr => $xerr, y => $y, yerr => $yerr );
\& limits( [ \e%ds1 ], { VecKeys => [ \*(Aqx =xerr\*(Aq, \*(Aqy =yerr\*(Aq ] } );
.Ve
.PP
To specify asymmetric errors, specify both the negative and positive
going errors:
.PP
.Vb 3
\& %ds1 = ( x => $x, xnerr => $xn, xperr => $xp,
\&          y => $y );
\& limits( [ \e%ds1 ], { VecKeys => [ \*(Aqx <xnerr >xperr\*(Aq, \*(Aqy\*(Aq ] } );
.Ve
.PP
For one-sided error bars, specify a column just for the side to be
plotted:
.PP
.Vb 3
\& %ds1 = ( x => $x, xnerr => $xn,
\&          y => $y, yperr => $yp );
\& limits( [ \e%ds1 ], { VecKeys => [ \*(Aqx <xnerr\*(Aq, \*(Aqy >yperr\*(Aq ] } );
.Ve
.PP
Data in hashes with different keys follow the same paradigm:
.PP
.Vb 1
\& [ \e%h1 => ( \*(Aqx =xerr\*(Aq, \*(Aqy =yerr\*(Aq ) ], [ \e%h2 => ( \*(Aqu =uerr\*(Aq, \*(Aqv =verr\*(Aq ) ]
.Ve
.PP
In this case, the column names specific to a single data set override
those specified via the \f(CW\*(C`VecKeys\*(C'\fR option.
.PP
.Vb 1
\& limits( [ \e%h1 => \*(Aqx =xerr\*(Aq ], { VecKeys => [ \*(Aqx >xp\*(Aq ] } )
.Ve
.PP
In the case of a multi-dimensional data set, one must specify all of the
keys:
.PP
.Vb 2
\& limits( [ \e%h1 => ( \*(Aqx =xerr\*(Aq, \*(Aqy =yerr\*(Aq ) ],
\&         { VecKeys => [ \*(Aqx >xp\*(Aq, \*(Aqy >yp\*(Aq ] } )
.Ve
.PP
One can override only parts of the specifications:
.PP
.Vb 2
\& limits( [ \e%h1 => ( \*(Aq=xerr\*(Aq, \*(Aq=yerr\*(Aq ) ],
\&         { VecKeys => [ \*(Aqx >xp\*(Aq, \*(Aqy >yp\*(Aq ] } )
.Ve
.PP
Use \f(CW\*(C`undef\*(C'\fR as a placeholder for those keys for which nothing need be
overridden:
.PP
.Vb 2
\& limits( [ \e%h1 => undef, \*(Aqy =yerr\*(Aq ],
\&         { VecKeys => [ \*(Aqx >xp\*(Aq, \*(Aqy >yp\*(Aq ] } )
.Ve
.PP
\fIData Transformation\fR
.IX Subsection "Data Transformation"
.PP
Normally the data passed to \fBlimits\fR should be in their final,
transformed form. For example, if the data will be displayed on a
logarithmic scale, the logarithm of the data should be passed to
\&\fBlimits\fR. However, if error bars are also to be displayed, the
\&\fIuntransformed\fR data must be passed, as
.PP
.Vb 1
\& log(data) + log(error) != log(data + error)
.Ve
.PP
Since the ranges must be calculated for the transformed values,
\&\fBlimits\fR must be given the transformation function.
.PP
If all of the data sets will undergo the same transformation, this may
be done with the \fBTrans\fR attribute, which is given a list of
subroutine references, one for each element of a data set. An
\&\f(CW\*(C`undef\*(C'\fR value may be used to indicate that no transformation is to be
performed. For example,
.PP
.Vb 1
\& @ds1 = ( $x, $y );
\&
\& # take log of $x
\& limits( \e@ds1, { Trans => [ \e&log10 ] } );
\&
\& # take log of $y
\& limits( \e@ds1, { Trans => [ undef, \e&log10 ] } );
.Ve
.PP
If each data set has a different transformation, things are a bit more
complicated. If the data sets are specified as arrays of vectors,
vectors with transformations should be embedded in an array, with the
\&\fIlast\fR element being the subroutine reference:
.PP
.Vb 1
\& @ds1 = ( [ $x, \e&log10 ], $y );
.Ve
.PP
With error bars, this looks like:
.PP
.Vb 2
\& @ds1 = ( [ $x, $xerr, \e&log10 ], $y );
\& @ds1 = ( [ $x, $xn, $xp, \e&log10 ], $y );
.Ve
.PP
If the \f(CW\*(C`Trans\*(C'\fR attribute is used in conjunction with individual data
set transformations, the latter will override it.
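.PP
For example, the following sketch (the anonymous square-root wrapper is
purely illustrative) applies the \f(CW\*(C`Trans\*(C'\fR default to \f(CW$y\fR while
overriding it for \f(CW$x\fR:
.PP
.Vb 4
\& # Trans supplies log10 as the default for both vectors; the embedded
\& # (illustrative) square\-root subroutine overrides it for $x only
\& @ds1 = ( [ $x, sub { sqrt $_[0] } ], $y );
\& limits( \e@ds1, { Trans => [ \e&log10, \e&log10 ] } );
.Ve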
To explicitly indicate that a specific data set element has no
transformation (normally only needed if \f(CW\*(C`Trans\*(C'\fR is used to specify a
default), set the transformation subroutine reference to \f(CW\*(C`undef\*(C'\fR. In
this case, the entire quad of data element, negative error, positive
error, and transformation subroutine must be specified to avoid
confusion:
.PP
.Vb 1
\& [ $x, $xn, $xp, undef ]
.Ve
.PP
Note that \f(CW$xn\fR and \f(CW$xp\fR may be undef. For symmetric errors, simply
set both \f(CW$xn\fR and \f(CW$xp\fR to the same value.
.PP
For data sets passed as hashes, the subroutine reference is an element
in the hashes; the name of the corresponding key is added to the list of
keys, preceded by the \f(CW\*(C`&\*(C'\fR character:
.PP
.Vb 2
\& %ds1 = ( x => $x, xerr => $xerr, xtrans => \e&log10,
\&          y => $y, yerr => $yerr );
\&
\& limits( [ \e%ds1, \e%ds2 ],
\&         { VecKeys => [ \*(Aqx =xerr &xtrans\*(Aq, \*(Aqy =yerr\*(Aq ] });
\& limits( [ \e%ds1 => \*(Aqx =xerr &xtrans\*(Aq, \*(Aqy =yerr\*(Aq ] );
.Ve
.PP
If the \f(CW\*(C`Trans\*(C'\fR attribute is specified, and a key name is also
specified via the \f(CW\*(C`VecKeys\*(C'\fR attribute or individually for a data set
element, the latter will take precedence. For example,
.PP
.Vb 2
\& $ds1{trans1} = \e&log10;
\& $ds1{trans2} = sub { sqrt $_[0] };
\&
\& # resolves to exp
\& limits( [ \e%ds1 ], { Trans => [ sub { exp $_[0] } ] });
\&
\& # resolves to sqrt
\& limits( [ \e%ds1 ], { Trans => [ sub { exp $_[0] } ],
\&                      VecKeys => [ \*(Aqx =xerr &trans2\*(Aq ] });
\&
\& # resolves to log10
\& limits( [ \e%ds1 => \*(Aq&trans1\*(Aq ], { Trans => [ sub { exp $_[0] } ],
\&                                  VecKeys => [ \*(Aqx =xerr &trans2\*(Aq ] });
.Ve
.PP
To indicate that a particular vector should have no transformation, use
a blank key:
.PP
.Vb 2
\& limits( [ \e%ds1 => ( \*(Aqx =xerr &\*(Aq, \*(Aqy =yerr\*(Aq ) ], [ \e%ds2 ],
\&         { Trans => [ \e&log10 ] } );
.Ve
.PP
or set the hash element to \f(CW\*(C`undef\*(C'\fR:
.PP
.Vb 1
\& $ds1{xtrans} = undef;
.Ve
.PP
\fIRange Algorithms\fR
.IX Subsection "Range Algorithms"
.PP
Sometimes all you want is to find the minimum and maximum values.
However, for display purposes, it's often nice to have \*(L"clean\*(R" range
bounds. To that end, \fBlimits\fR produces a range in two steps. First it
determines the bounds, then it cleans them up.
.PP
To specify the bounding algorithm, set the value of the \f(CW\*(C`Bounds\*(C'\fR key
in the \f(CW%attr\fR hash to one of the following values:
.IP "MinMax" 8
.IX Item "MinMax"
This indicates the raw minima and maxima should be used. This is the
default.
.IP "Zscale" 8
.IX Item "Zscale"
This is valid for two dimensional data only. The \f(CW\*(C`Y\*(C'\fR values are
sorted, then fit to a line. The minimum and maximum values of the
evaluated line are used for the \f(CW\*(C`Y\*(C'\fR bounds; the raw minimum and
maximum values of the \f(CW\*(C`X\*(C'\fR data are used for the \f(CW\*(C`X\*(C'\fR bounds. This
method is good in situations where there are \*(L"spurious\*(R" spikes in the
\&\f(CW\*(C`Y\*(C'\fR data which would generate too large a dynamic range in the bounds.
(Note that the \f(CW\*(C`Zscale\*(C'\fR algorithm is found in \s-1IRAF\s0 and \s-1DS9\s0; its true
origin is unknown to the author.)
.PP
To specify the cleaning algorithm, set the value of the \f(CW\*(C`Clean\*(C'\fR key in
the \f(CW%attr\fR hash to one of the following values:
.IP "None" 8
.IX Item "None"
Perform no cleaning of the bounds.
.IP "RangeFrac" 8
.IX Item "RangeFrac"
This is based upon the \f(CW\*(C`PGPLOT\*(C'\fR \fBpgrnge\fR function.
It symmetrically expands the bounds (determined above) by a fractional
amount:
.Sp
.Vb 3
\& $expand = $frac * ( $axis\->{max} \- $axis\->{min} );
\& $min = $axis\->{min} \- $expand;
\& $max = $axis\->{max} + $expand;
.Ve
.Sp
The fraction may be specified in the \f(CW%attr\fR hash with the
\&\f(CW\*(C`RangeFrac\*(C'\fR key. It defaults to \f(CW0.05\fR.
.Sp
Because this is a symmetric expansion, a limit of \f(CW0.0\fR may be
transformed into a negative number, which may be inappropriate. If the
\&\f(CW\*(C`ZeroFix\*(C'\fR key is set to a non-zero value in the \f(CW%attr\fR hash, the
cleaned boundary is set to \f(CW0.0\fR if it is on the other side of \f(CW0.0\fR
from the bounds determined above. For example, if the minimum boundary
value is \f(CW0.1\fR, and the cleaned boundary value is \f(CW\*(C`\-0.1\*(C'\fR, the
cleaned value will be set to \f(CW0.0\fR. Similarly, if the maximum value is
\&\f(CW\*(C`\-0.1\*(C'\fR and the cleaned value is \f(CW0.1\fR, it will be set to \f(CW0.0\fR.
.Sp
This is the default clean algorithm.
.IP "RoundPow" 8
.IX Item "RoundPow"
This is based upon the \f(CW\*(C`PGPLOT\*(C'\fR \fBpgrnd\fR routine. It determines a
\&\*(L"nice\*(R" value, where \*(L"nice\*(R" means the closest round number to the
boundary value, a round number being 1, 2, or 5 times a power of 10.
.PP
\fIUser Specified Limits\fR
.IX Subsection "User Specified Limits"
.PP
To fully or partially override the automatically determined limits, use
the \fBLimits\fR attribute. These values are used as input to the range
algorithms.
.PP
The \fBLimits\fR attribute value may be either an array of arrayrefs or a
hash.
.IP "Arrays" 4
.IX Item "Arrays"
The \fBLimits\fR value may be a reference to an array of arrayrefs, one
per dimension, which contain the requested limits.
.Sp
The dimensions should be ordered in the same way as the data sets. Each
arrayref should contain two ordered values, the minimum and maximum
limits for that dimension. Either limit may be the undefined value if it
is to be automatically determined. The limits should be transformed (or
not) in the same fashion as the data.
.Sp
For example, to specify that the second dimension's maximum limit should
be fixed at a specified value:
.Sp
.Vb 1
\& Limits => [ [ undef, undef ], [ undef, $max ] ]
.Ve
.Sp
Note that placeholder values are required for leading dimensions which
are to be handled automatically. For convenience, if limits for a
dimension are to be fully automatically determined, the placeholder
arrayref may be empty. Also, trailing undefined limits may be omitted.
The above example may be rewritten as:
.Sp
.Vb 1
\& Limits => [ [], [ undef, $max ] ]
.Ve
.Sp
If the minimum value were specified instead of the maximum, the
following would be acceptable:
.Sp
.Vb 1
\& Limits => [ [], [ $min ] ]
.Ve
.Sp
If the data has but a single dimension, nested arrayrefs are not
required:
.Sp
.Vb 1
\& Limits => [ $min, $max ]
.Ve
.IP "Hashes" 4
.IX Item "Hashes"
The \fBLimits\fR attribute value may be a hash; this can only be used in
conjunction with the \fBVecKeys\fR attribute. If the data sets are
represented by hashes which do not have common keys, then the
user-defined limits should be specified with arrays. The keys in the
\&\fBLimits\fR hash should be the names of the data vectors in the
\&\fBVecKeys\fR. Their values should be hashes with keys \f(CW\*(C`min\*(C'\fR and
\&\f(CW\*(C`max\*(C'\fR, representing the minimum and maximum limits.
Limits which have the value \&\f(CW\*(C`undef\*(C'\fR or which are not specified will be determined from the data. For example, .Sp .Vb 1 \& Limits => { x => { min => 30 }, y => { max => 22 } } .Ve .PP \fIReturn Values\fR .IX Subsection "Return Values" .PP When called in a list context, it returns the minimum and maximum bounds for each axis: .PP .Vb 1 \& @limits = ( $min_1, $max_1, $min_2, $max_2, ... ); .Ve .PP which makes life easier when using the \fBenv\fR method: .PP .Vb 1 \& $window\->env( @limits ); .Ve .PP When called in a scalar context, it returns a hashref with the keys .PP .Vb 1 \& axis1, ... axisN .Ve .PP where \f(CW\*(C`axisN\*(C'\fR is the name of the Nth axis. If axis names have not been specified via the \f(CW\*(C`VecKeys\*(C'\fR element of \f(CW%attr\fR, names are concocted as \f(CW\*(C`q1\*(C'\fR, \f(CW\*(C`q2\*(C'\fR, etc. The values are hashes with keys \&\f(CW\*(C`min\*(C'\fR and \f(CW\*(C`max\*(C'\fR. For example: .PP .Vb 2 \& { q1 => { min => 1, max => 2}, \& q2 => { min => \-33, max => 33 } } .Ve .PP \fIMiscellaneous\fR .IX Subsection "Miscellaneous" .PP Normally \fBlimits\fR complains if hash data sets don't contain specific keys for error bars or transformation functions. If, however, you'd like to specify default values using the \f(CW%attr\fR argument, but there are data sets which don't have the data and you'd rather not have to explicitly indicate that, set the \f(CW\*(C`KeyCroak\*(C'\fR attribute to zero. For example, .PP .Vb 2 \& limits( [ { x => $x }, { x => $x1, xerr => $xerr } ], \& { VecKeys => [ \*(Aqx =xerr\*(Aq ] } ); .Ve .PP will generate an error because the first data set does not have an \f(CW\*(C`xerr\*(C'\fR key. Resetting \f(CW\*(C`KeyCroak\*(C'\fR will fix this: .PP .Vb 2 \& limits( [ { x => $x }, { x => $x1, xerr => $xerr } ], \& { VecKeys => [ \*(Aqx =xerr\*(Aq ], KeyCroak => 0 } ); .Ve .SH "AUTHOR" .IX Header "AUTHOR" Diab Jerius, .SH "COPYRIGHT AND LICENSE" .IX Header "COPYRIGHT AND LICENSE" Copyright (C) 2004 by the Smithsonian Astrophysical Observatory .PP This software is released under the \s-1GNU\s0 General Public License. You may find a copy at .