.\" Automatically generated by Pod::Man 2.28 (Pod::Simple 3.29) .\" .\" Standard preamble: .\" ======================================================================== .de Sp \" Vertical space (when we can't use .PP) .if t .sp .5v .if n .sp .. .de Vb \" Begin verbatim text .ft CW .nf .ne \\$1 .. .de Ve \" End verbatim text .ft R .fi .. .\" Set up some character translations and predefined strings. \*(-- will .\" give an unbreakable dash, \*(PI will give pi, \*(L" will give a left .\" double quote, and \*(R" will give a right double quote. \*(C+ will .\" give a nicer C++. Capital omega is used to do unbreakable dashes and .\" therefore won't be available. \*(C` and \*(C' expand to `' in nroff, .\" nothing in troff, for use with C<>. .tr \(*W- .ds C+ C\v'-.1v'\h'-1p'\s-2+\h'-1p'+\s0\v'.1v'\h'-1p' .ie n \{\ . ds -- \(*W- . ds PI pi . if (\n(.H=4u)&(1m=24u) .ds -- \(*W\h'-12u'\(*W\h'-12u'-\" diablo 10 pitch . if (\n(.H=4u)&(1m=20u) .ds -- \(*W\h'-12u'\(*W\h'-8u'-\" diablo 12 pitch . ds L" "" . ds R" "" . ds C` "" . ds C' "" 'br\} .el\{\ . ds -- \|\(em\| . ds PI \(*p . ds L" `` . ds R" '' . ds C` . ds C' 'br\} .\" .\" Escape single quotes in literal strings from groff's Unicode transform. .ie \n(.g .ds Aq \(aq .el .ds Aq ' .\" .\" If the F register is turned on, we'll generate index entries on stderr for .\" titles (.TH), headers (.SH), subsections (.SS), items (.Ip), and index .\" entries marked with X<> in POD. Of course, you'll have to process the .\" output yourself in some meaningful fashion. .\" .\" Avoid warning from groff about undefined register 'F'. .de IX .. .nr rF 0 .if \n(.g .if rF .nr rF 1 .if (\n(rF:(\n(.g==0)) \{ . if \nF \{ . de IX . tm Index:\\$1\t\\n%\t"\\$2" .. . if !\nF==2 \{ . nr % 0 . nr F 2 . \} . \} .\} .rr rF .\" ======================================================================== .\" .IX Title "Test::Assertions::Manual 3pm" .TH Test::Assertions::Manual 3pm "2015-12-22" "perl v5.22.1" "User Contributed Perl Documentation" .\" For nroff, turn off justification. Always turn off hyphenation; it makes .\" way too many mistakes in technical documents. .if n .ad l .nh .SH "NAME" Test::Assertions::Manual \- A guide to using Test::Assertions .SH "DESCRIPTION" .IX Header "DESCRIPTION" This is a brief guide to how you can use the Test::Assertions module in your code and test scripts. The \f(CW\*(C`Test::Assertions\*(C'\fR documentation has a comprehensive list of options. .SH "Unit testing" .IX Header "Unit testing" To use Test::Assertions for unit testing, import it with the argument \*(L"test\*(R": .PP .Vb 1 \& use Test::Assertions qw(test); .Ve .PP The output of Test::Assertions in test mode is suitable for collation with Test::Harness. Only the \s-1\fIASSERT\s0()\fR and \fIplan()\fR routines can create any output \- all the other routines simply return values. .SS "Planning tests" .IX Subsection "Planning tests" Test::Assertions offers a \*(L"plan tests\*(R" syntax similar to Test::More: .PP .Vb 3 \& plan tests => 42; \& # Which creates the output: \& 1..42 .Ve .PP If you find having to increment the number at the top of your test script every time you add a test irritating, you can use the automatic, Do What I Mean, form: .PP .Vb 1 \& plan tests; .Ve .PP In this case, Test::Assertions will read your code and count the number of \s-1ASSERT\s0 statements and use this for the expected number of tests. 
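.PP
For example, a minimal test script using the automatic form might look like the
following sketch (the \f(CW\*(C`My::Widget\*(C'\fR module and its methods are
hypothetical stand-ins for whatever you are actually testing):
.PP
.Vb 2
\& use Test::Assertions qw(test);
\& use My::Widget;   # hypothetical module under test
\&
\& plan tests;       # counts the two ASSERTs below automatically
\& ASSERT( defined(My::Widget\->new()),      "constructor returns an object" );
\& ASSERT( My::Widget\->new()\->count() == 0, "a new widget starts empty" );
.Ve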
A caveat is that it expects all your \s-1ASSERT\s0 statements to be executed
exactly once, so ASSERTs inside \f(CW\*(C`if\*(C'\fR and \f(CW\*(C`foreach\*(C'\fR blocks
will fool Test::Assertions and you'll have to maintain the count manually in
those cases.
Furthermore, it uses \fIcaller()\fR to get the filename of the code, so it may
not work if you invoke your program with a relative filename and then change
working directory before calling this automatic \*(L"plan tests;\*(R" form.
.PP
Test::Assertions offers two additional functions, \fIonly()\fR and \fIignore()\fR,
to control which tests will be reported.
Usage is as follows:
.PP
.Vb 2
\& ignore(2, 5) if($^O eq \*(AqMSWin32\*(Aq);
\& only(1..10) unless($^O eq \*(AqMSWin32\*(Aq);
.Ve
.PP
Note that these won't stop the actual test code from being attempted; only the
reporting of the results is suppressed.
.SS "Testing things"
.IX Subsection "Testing things"
The routines for constructing tests are deliberately \s-1ALL CAPS\s0 so that you
can distinguish at a glance between the test itself and what is being tested.
To check that something does what is expected, use \s-1ASSERT:\s0
.PP
.Vb 1
\& ASSERT(1 == 1);
.Ve
.PP
This gives the output:
.PP
.Vb 1
\& ok 1
.Ve
.PP
An optional second argument may be supplied as a comment to label the test:
.PP
.Vb 1
\& ASSERT(1 == 1, "an example test");
.Ve
.PP
This gives the output:
.PP
.Vb 1
\& ok 1 (an example test)
.Ve
.PP
In the interest of brevity, I'll omit the second argument from the examples
below.
For your real-world tests, labelling the output is strongly recommended so that
when something fails you know what it is.
.PP
If you are hopelessly addicted to invoking your tests with an \fIok()\fR routine,
Test::Assertions has a concession for Test::Simple/More junkies:
.PP
.Vb 3
\& use Test::Assertions qw(test/ok);
\& plan tests => 1;
\& ok(1, "ok() works just like ASSERT()");
.Ve
.SS "More complex tests with helper routines"
.IX Subsection "More complex tests with helper routines"
Most real-world unit tests will need to check data structures returned from an
\&\s-1API.\s0 The \s-1\fIEQUAL\s0()\fR function compares two data structures deeply
(a bit like Test::More's eq_array or eq_hash):
.PP
.Vb 2
\& ASSERT( EQUAL(\e@arr, [1,2,3]) );
\& ASSERT( EQUAL(\e%observed, \e%expected) );
.Ve
.PP
For routines that return large strings or write to files (e.g. templating), you
might want to hold your expected output externally in a file.
Test::Assertions provides a few routines to make this easy.
\&\s-1EQUALS_FILE\s0 compares a string to the contents of a file:
.PP
.Vb 1
\& ASSERT( EQUALS_FILE($returned, "expected.txt") );
.Ve
.PP
whereas \s-1FILES_EQUAL\s0 compares the contents of two files:
.PP
.Vb 3
\& $object_to_test\->write_file("observed.txt");
\& ASSERT( FILES_EQUAL("observed.txt", "expected.txt") );
\& unlink("observed.txt"); #always clean up so state on 2nd run is same as 1st run
.Ve
.PP
If your files contain serialized data structures, e.g. the output of
Data::Dumper, you may wish to \fIdo()\fR or \fIeval()\fR their contents and use
the \s-1\fIEQUAL\s0()\fR routine to compare the structures, rather than comparing
the serialized forms directly.
.PP
.Vb 3
\& my $var1 = do(\*(Aqfile1.datadump\*(Aq);
\& my $var2 = do(\*(Aqfile2.datadump\*(Aq);
\& ASSERT( EQUAL($var1, $var2), \*(Aqserialized versions matched\*(Aq );
.Ve
.PP
The \s-1MATCHES_FILE\s0 routine compares a string against a regex that is read
from a file, which is most useful if your string contains dates, timestamps,
filepaths, or other items that might change from one run of the test to the
next, or across different machines:
.PP
.Vb 1
\& ASSERT( MATCHES_FILE($string_to_examine, "expected.regex.txt") );
.Ve
.PP
Another thing you are likely to want to test is whether code raises exceptions
with \fIdie()\fR when it should.
The \s-1\fIDIED\s0()\fR function checks whether a coderef raises an exception:
.PP
.Vb 5
\& ASSERT( DIED(
\&         sub {
\&                 $object_to_test\->method(@bad_inputs);
\&         }
\& ));
.Ve
.PP
The \s-1DIED\s0 routine doesn't clobber $@, so you can use it in your test
description:
.PP
.Vb 5
\& ASSERT( DIED(
\&         sub {
\&                 $object_to_test\->method(@bad_inputs);
\&         }
\& ), "raises an exception \- " . (chomp $@, $@));
.Ve
.PP
Occasionally you'll want to check whether a Perl script simply compiles.
Whilst this is no substitute for writing a proper unit test for the script,
sometimes it's useful:
.PP
.Vb 1
\& ASSERT( COMPILES("somescript.pl") );
.Ve
.PP
An optional second argument forces the code to be compiled under 'strict':
.PP
.Vb 1
\& ASSERT( COMPILES("somescript.pl", 1) );
.Ve
.PP
(Normally you'll have this in your script anyway.)
.SS "Aggregating other tests together"
.IX Subsection "Aggregating other tests together"
For complex systems you may have a whole tree of unit tests, corresponding to
different areas of functionality of the system.
For example, there may be a set of tests corresponding to the expression
evaluation sublanguage within a templating system.
Rather than simply aggregating everything with Test::Harness in one flat list,
you may want to aggregate each subtree of related functionality so that
Test::Harness summarises across these higher-level units.
.PP
Test::Assertions provides two functions to aggregate the output of other tests.
These work on result strings (starting with \*(L"ok\*(R" or \*(L"not ok\*(R").
\&\s-1ASSESS\s0 is the lower-level routine, working directly on result strings;
\&\s-1ASSESS_FILE\s0 runs a unit test script and parses its output.
In a scalar context they return a summary result string:
.PP
.Vb 2
\& @results = (\*(Aqok 1\*(Aq, \*(Aqnot ok 2\*(Aq, \*(AqA comment\*(Aq, \*(Aqok 3\*(Aq);
\& print scalar ASSESS(\e@results);
.Ve
.PP
would result in something like:
.PP
.Vb 1
\& not ok (1 errors in 3 tests)
.Ve
.PP
This output is of course a suitable input to \s-1ASSESS\s0, so complex
hierarchies may be created.
In an array context, they return a boolean value and a description, which is
suitable for feeding into \s-1ASSERT\s0 (although \s-1ASSERT\s0's $;$ prototype
means it will ignore the description):
.PP
.Vb 3
\& ASSERT ASSESS_FILE("expr/set_1.t");
\& ASSERT ASSESS_FILE("expr/set_2.t");
\& ASSERT ASSESS_FILE("expr/set_3.t");
.Ve
.PP
would generate output such as:
.PP
.Vb 3
\& ok 1
\& ok 2
\& ok 3
.Ve
.PP
Finally, Test::Assertions provides a helper routine, \s-1INTERPRET\s0, to
interpret result strings:
.PP
.Vb 1
\& ($bool, $description) = INTERPRET("not ok 4 (test four)");
.Ve
.PP
would result in:
.PP
.Vb 2
\& $bool = 0;
\& $description = "test four";
.Ve
.PP
which might be useful for writing your own custom collation code.
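.PP
Putting this together, a top-level aggregation script might look something like
the sketch below.
The \f(CW\*(C`expr/\*(C'\fR directory layout is an assumption, following the
filenames used in the examples above; adapt it to wherever your test scripts
live.
Because the \s-1ASSERT\s0 sits inside a loop, the number of tests is planned
explicitly rather than with the automatic \*(L"plan tests;\*(R" form:
.PP
.Vb 1
\& use Test::Assertions qw(test);
\&
\& # Assumed location of the lower\-level unit test scripts
\& my @scripts = glob("expr/*.t");
\& plan tests => scalar @scripts;
\&
\& foreach my $script (@scripts) {
\&         # ASSESS_FILE runs one script and summarises its results as a single test
\&         ASSERT ASSESS_FILE($script);
\& }
.Ve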
.SH "Using Test::Assertions for run-time checking"
.IX Header "Using Test::Assertions for run-time checking"
C programmers often use \s-1ASSERT\s0 macros to trap runtime \*(L"should never
happen\*(R" errors in their code.
You can use Test::Assertions to do this:
.PP
.Vb 3
\& use Test::Assertions qq(die);
\& $rv = some_function();
\& ASSERT($rv == 0, "some_function returned a non\-zero value");
.Ve
.PP
You can also import Test::Assertions with \*(L"warn\*(R" rather than \*(L"die\*(R"
so that the code continues executing:
.PP
.Vb 2
\& use constant ASSERTIONS_MODE => $ENV{ENVIRONMENT} eq \*(Aqproduction\*(Aq ? \*(Aqwarn\*(Aq : \*(Aqdie\*(Aq;
\& use Test::Assertions(ASSERTIONS_MODE);
.Ve
.PP
Environment variables provide a nice way of switching compile-time behaviour
from outside the process.
.SS "Minimising overhead"
.IX Subsection "Minimising overhead"
Importing Test::Assertions with no arguments results in \s-1ASSERT\s0 statements
doing nothing.
However, unlike \s-1ASSERT\s0 macros in C, where the preprocessor filters them
out before compilation, there are two types of residual overhead:
.IP "Runtime overhead" 4
.IX Item "Runtime overhead"
When Test::Assertions is imported with no arguments, the \s-1ASSERT\s0 statement
is aliased to an empty sub.
There is a small overhead in executing this.
In practice, unless you do an \s-1ASSERT\s0 on every other line, or in a
performance-critical loop, you're unlikely to notice the overhead compared to
the other work that your code is doing.
.IP "Compilation overhead" 4
.IX Item "Compilation overhead"
The Test::Assertions module must be compiled even when it is imported with no
arguments.
Test::Assertions loads its helper modules on demand and avoids using pragmas to
minimise its compilation overhead.
In the interests of maintainability and ease of installation, it does not
currently go to more extreme measures to cut its compilation overhead.
.PP
Both overheads can be minimised by using a constant:
.PP
.Vb 1
\& use constant ENABLE_ASSERTIONS => $ENV{ENABLE_ASSERTIONS};
\&
\& #Minimise compile\-time overhead
\& if(ENABLE_ASSERTIONS) {
\&         require Test::Assertions;
\&         import Test::Assertions qq(die);
\& }
\&
\& $rv = some_function();
\&
\& #Eliminate runtime overhead
\& ASSERT($rv == 0, "some_function returned a non\-zero value") if(ENABLE_ASSERTIONS);
.Ve
.PP
Unlike Carp::Assert, Test::Assertions does not come with a \*(L"built-in\*(R"
constant (\s-1DEBUG\s0 in the case of Carp::Assert).
Define your own constant, attach it to your own compile-time logic
(e.g. environment variables) and call it whatever you like.
.SS "How expensive is a null \s-1ASSERT\s0?"
.IX Subsection "How expensive is a null ASSERT?"
Here's an indication of the overhead of calling \s-1ASSERT\s0 when
Test::Assertions is imported with no arguments.
A comparison is included with Carp::Assert just to show that it's in the same
ballpark \- we are not advocating one module over the other.
As outlined above, using a constant to disable assertions is recommended in
performance-critical code.
.PP .Vb 1 \& #!/usr/local/bin/perl \& \& use Benchmark; \& use Test::Assertions; \& use Carp::Assert; \& use constant ENABLE_ASSERTIONS => 0; \& \& #Compare null ASSERT to simple linear algebra statement \& timethis(1e6, sub{ \& ASSERT(1); #Test::Assertions \& }); \& timethis(1e6, sub{ \& assert(1); #Carp::Assert \& }); \& timethis(1e6, sub{ \& ASSERT(1) if ENABLE_ASSERTIONS; \& }); \& timethis(1e6, sub{ \& $x=$x*2 + 3; \& }); .Ve .PP Results on Sun E250 (with 2x400Mhz CPUs) running perl 5.6.1 on solaris 9: .PP .Vb 4 \& Test::Assertions: timethis 1000000: 3 wallclock secs ( 3.88 usr + 0.00 sys = 3.88 CPU) @ 257731.96/s (n=1000000) \& Carp::Assert: timethis 1000000: 6 wallclock secs ( 6.08 usr + 0.00 sys = 6.08 CPU) @ 164473.68/s (n=1000000) \& Test::Assertions + const: timethis 1000000: \-1 wallclock secs ( 0.07 usr + 0.00 sys = 0.07 CPU) @ 14285714.29/s (n=1000000) (warning: too few iterations for a reliable count) \& some algebra: timethis 1000000: 1 wallclock secs ( 2.50 usr + 0.00 sys = 2.50 CPU) @ 400000.00/s (n=1000000) .Ve .PP Results for 1.7Ghz pentium M running activestate perl 5.6.1 on win \s-1XP:\s0 .PP .Vb 4 \& Test::Assertions: timethis 1000000: 0 wallclock secs ( 0.42 usr + 0.00 sys = 0.42 CPU) @ 2380952.38/s (n=1000000) \& Carp::Assert: timethis 1000000: 0 wallclock secs ( 0.57 usr + 0.00 sys = 0.57 CPU) @ 1751313.49/s (n=1000000) \& Test::Assertions + const: timethis 1000000: \-1 wallclock secs (\-0.02 usr + 0.00 sys = \-0.02 CPU) @ \-50000000.00/s (n=1000000) (warning: too few iterations for a reliable count) \& some algebra: timethis 1000000: 0 wallclock secs ( 0.50 usr + 0.00 sys = 0.50 CPU) @ 1996007.98/s (n=1000000) .Ve .SS "How significant is the compile-time overhead?" .IX Subsection "How significant is the compile-time overhead?" Here's an indication of the compile-time overhead for Test::Assertions v1.050 and Carp::Assert v0.18. The cost of running \fIimport()\fR is also included. 
.PP .Vb 1 \& #!/usr/local/bin/perl \& \& use Benchmark; \& use lib qw(../lib); \& \& timethis(3e2, sub { \& require Test::Assertions; \& delete $INC{"Test/Assertions.pm"}; \& }); \& \& timethis(3e2, sub { \& require Test::Assertions; \& import Test::Assertions; \& delete $INC{"Test/Assertions.pm"}; \& }); \& \& timethis(3e2, sub { \& require Carp::Assert; \& delete $INC{"Carp/Assert.pm"}; \& }); \& \& timethis(3e2, sub { \& require Carp::Assert; \& import Carp::Assert; \& delete $INC{"Carp/Assert.pm"}; \& }); .Ve .PP Results on Sun E250 (with 2x400Mhz CPUs) running perl 5.6.1 on solaris 9: .PP .Vb 4 \& Test::Assertions: timethis 300: 6 wallclock secs ( 6.19 usr + 0.10 sys = 6.29 CPU) @ 47.69/s (n=300) \& Test::Assertions + import: timethis 300: 7 wallclock secs ( 6.56 usr + 0.03 sys = 6.59 CPU) @ 45.52/s (n=300) \& Carp::Assert: timethis 300: 3 wallclock secs ( 2.47 usr + 0.32 sys = 2.79 CPU) @ 107.53/s (n=300) \& Carp::Assert + import: timethis 300: 41 wallclock secs (40.58 usr + 0.32 sys = 40.90 CPU) @ 7.33/s (n=300) .Ve .PP Results for 1.7Ghz pentium M running activestate perl 5.6.1 on win \s-1XP:\s0 .PP .Vb 4 \& Test::Assertions: timethis 300: 2 wallclock secs ( 1.45 usr + 0.21 sys = 1.66 CPU) @ 180.51/s (n=300) \& Test::Assertions + import: timethis 300: 2 wallclock secs ( 1.58 usr + 0.29 sys = 1.87 CPU) @ 160.26/s (n=300) \& Carp::Assert: timethis 300: 1 wallclock secs ( 0.99 usr + 0.26 sys = 1.25 CPU) @ 239.62/s (n=300) \& Carp::Assert + import: timethis 300: 6 wallclock secs ( 5.42 usr + 0.38 sys = 5.80 CPU) @ 51.74/s (n=300) .Ve .PP If using a constant to control compilation is not to your liking, you may want to experiment with SelfLoader or AutoLoader to cut down the compilation overhead further by delaying compilation of some of the subroutines in Test::Assertions (see SelfLoader and AutoLoader for more information) until the first time they are used. .SH "VERSION" .IX Header "VERSION" \&\f(CW$Revision:\fR 1.10 $ on \f(CW$Date:\fR 2005/05/04 15:56:39 $ .SH "AUTHOR" .IX Header "AUTHOR" John Alden