.\" -*- mode: troff; coding: utf-8 -*-
.\" Automatically generated by Pod::Man 5.01 (Pod::Simple 3.43)
.\"
.\" Standard preamble:
.\" ========================================================================
.de Sp \" Vertical space (when we can't use .PP)
.if t .sp .5v
.if n .sp
..
.de Vb \" Begin verbatim text
.ft CW
.nf
.ne \\$1
..
.de Ve \" End verbatim text
.ft R
.fi
..
.\" \*(C` and \*(C' are quotes in nroff, nothing in troff, for use with C<>.
.ie n \{\
. ds C` ""
. ds C' ""
'br\}
.el\{\
. ds C`
. ds C'
'br\}
.\"
.\" Escape single quotes in literal strings from groff's Unicode transform.
.ie \n(.g .ds Aq \(aq
.el .ds Aq '
.\"
.\" If the F register is >0, we'll generate index entries on stderr for
.\" titles (.TH), headers (.SH), subsections (.SS), items (.Ip), and index
.\" entries marked with X<> in POD. Of course, you'll have to process the
.\" output yourself in some meaningful fashion.
.\"
.\" Avoid warning from groff about undefined register 'F'.
.de IX
..
.nr rF 0
.if \n(.g .if rF .nr rF 1
.if (\n(rF:(\n(.g==0)) \{\
. if \nF \{\
. de IX
. tm Index:\\$1\t\\n%\t"\\$2"
..
. if !\nF==2 \{\
. nr % 0
. nr F 2
. \}
. \}
.\}
.rr rF
.\" ========================================================================
.\"
.IX Title "Test2::Manual::Testing::Introduction 3pm"
.TH Test2::Manual::Testing::Introduction 3pm 2024-05-08 "perl v5.38.2" "User Contributed Perl Documentation"
.\" For nroff, turn off justification. Always turn off hyphenation; it makes
.\" way too many mistakes in technical documents.
.if n .ad l
.nh
.SH NAME
Test2::Manual::Testing::Introduction \- Introduction to testing with Test2.
.SH DESCRIPTION
.IX Header "DESCRIPTION"
This tutorial is a beginner's introduction to testing. It will take you
through writing a test file, making assertions, and running your test.
.SH BOILERPLATE
.IX Header "BOILERPLATE"
.SS "THE TEST FILE"
.IX Subsection "THE TEST FILE"
Test files are typically placed inside the \f(CW\*(C`t/\*(C'\fR directory, and
end with the \&\f(CW\*(C`.t\*(C'\fR file extension.
.PP
\&\f(CW\*(C`t/example.t\*(C'\fR:
.PP
.Vb 1
\& use Test2::V0;
\&
\& # Assertions will go here
\&
\& done_testing;
.Ve
.PP
This is all the boilerplate you need.
.IP "use Test2::V0;" 4
.IX Item "use Test2::V0;"
This loads a collection of testing tools that will be described later in the
tutorial. It also turns on \f(CW\*(C`strict\*(C'\fR and \f(CW\*(C`warnings\*(C'\fR
for you.
.IP done_testing; 4
.IX Item "done_testing;"
This should always be at the end of your test files. It tells Test2 that you
are done making assertions. This is important: without it, or some other form
of test "plan", \f(CW\*(C`Test2\*(C'\fR will assume the test did not complete
successfully.
.SS "DIST CONFIG"
.IX Subsection "DIST CONFIG"
You should always list bundles and tools directly. You should not simply list
Test2::Suite and call it done; bundles and tools may be moved out of
Test2::Suite to their own dists at any time.
.PP
\fIDist::Zilla\fR
.IX Subsection "Dist::Zilla"
.PP
.Vb 2
\& [Prereqs / TestRequires]
\& Test2::V0 = 0.000060
.Ve
.PP
\fIExtUtils::MakeMaker\fR
.IX Subsection "ExtUtils::MakeMaker"
.PP
.Vb 7
\& my %WriteMakefileArgs = (
\&     ...,
\&     "TEST_REQUIRES" => {
\&         "Test2::V0" => "0.000060"
\&     },
\&     ...
\& );
.Ve
.PP
\fIModule::Install\fR
.IX Subsection "Module::Install"
.PP
.Vb 1
\& test_requires \*(AqTest2::V0\*(Aq => \*(Aq0.000060\*(Aq;
.Ve
.PP
\fIModule::Build\fR
.IX Subsection "Module::Build"
.PP
.Vb 7
\& my $build = Module::Build\->new(
\&     ...,
\&     test_requires => {
\&         "Test2::V0" => "0.000060",
\&     },
\&     ...
\& );
.Ve
.SH "MAKING ASSERTIONS"
.IX Header "MAKING ASSERTIONS"
The simplest tool for making assertions is \f(CWok()\fR. \f(CWok()\fR lets you
assert that a condition is true.
.PP
.Vb 1
\& ok($CONDITION, "Description of the condition");
.Ve
.PP
Here is a complete \f(CW\*(C`t/example.t\*(C'\fR:
.PP
.Vb 1
\& use Test2::V0;
\&
\& ok(1, "1 is true, so this will pass");
\&
\& done_testing;
.Ve
.SH "RUNNING THE TEST"
.IX Header "RUNNING THE TEST"
Test files are simply scripts, so you can run a test directly with perl just
like any other script. Another option is to use a test "harness", which runs
the test for you, provides extra information, and checks the script's exit
value for you.
.SS "RUN DIRECTLY"
.IX Subsection "RUN DIRECTLY"
.Vb 1
\& $ perl \-Ilib t/example.t
.Ve
.PP
Which should produce output like this:
.PP
.Vb 3
\& # Seeded srand with seed \*(Aq20161028\*(Aq from local date.
\& ok 1 \- 1 is true, so this will pass
\& 1..1
.Ve
.PP
If the test had failed (\f(CW\*(C`ok(0, ...)\*(C'\fR) it would look like this:
.PP
.Vb 3
\& # Seeded srand with seed \*(Aq20161028\*(Aq from local date.
\& not ok 1 \- 0 is false, so this will fail
\& 1..1
.Ve
.PP
Test2 will also set the exit value of the script: a successful run will have
an exit value of 0, and a failed run will have a non-zero exit value.
.SS "USING YATH"
.IX Subsection "USING YATH"
The \f(CW\*(C`yath\*(C'\fR command line tool is provided by Test2::Harness,
which you may need to install yourself from CPAN. \f(CW\*(C`yath\*(C'\fR is the
harness written specifically for Test2.
.PP
.Vb 1
\& $ yath \-Ilib t/example.t
.Ve
.PP
This will produce output similar to this:
.PP
.Vb 1
\& ( PASSED ) job 1 t/example.t
\&
\& ================================================================================
\&
\& Run ID: 1508027909
\&
\& All tests were successful!
.Ve
.PP
You can also request verbose output with the \f(CW\*(C`\-v\*(C'\fR flag:
.PP
.Vb 1
\& $ yath \-Ilib \-v t/example.t
.Ve
.PP
Which produces:
.PP
.Vb 5
\& ( LAUNCH ) job 1 example.t
\& ( NOTE ) job 1 Seeded srand with seed \*(Aq20171014\*(Aq from local date.
\& [ PASS ] job 1 + 1 is true, so this will pass
\& [ PLAN ] job 1 Expected assertions: 1
\& ( PASSED ) job 1 example.t
\&
\& ================================================================================
\&
\& Run ID: 1508028002
\&
\& All tests were successful!
.Ve
.SS "USING PROVE"
.IX Subsection "USING PROVE"
The \f(CW\*(C`prove\*(C'\fR command line tool is provided by the Test::Harness
module, which comes with most versions of perl. Test::Harness is dual-life,
which means you can also install the latest version from CPAN.
.PP
.Vb 1
\& $ prove \-Ilib t/example.t
.Ve
.PP
This will produce output like this:
.PP
.Vb 4
\& example.t .. ok
\& All tests successful.
\& Files=1, Tests=1, 0 wallclock secs ( 0.01 usr 0.00 sys + 0.05 cusr 0.00 csys = 0.06 CPU)
\& Result: PASS
.Ve
.PP
You can also request verbose output with the \f(CW\*(C`\-v\*(C'\fR flag:
.PP
.Vb 1
\& $ prove \-Ilib \-v t/example.t
.Ve
.PP
The verbose output looks like this:
.PP
.Vb 8
\& example.t ..
\& # Seeded srand with seed \*(Aq20161028\*(Aq from local date.
\& ok 1 \- 1 is true, so this will pass
\& 1..1
\& ok
\& All tests successful.
\& Files=1, Tests=1, 0 wallclock secs ( 0.02 usr 0.00 sys + 0.06 cusr 0.00 csys = 0.08 CPU)
\& Result: PASS
.Ve
.SH "THE ""PLAN"""
.IX Header "THE ""PLAN"""
All tests need a "plan". The job of a plan is to make sure you ran all the
tests you expected. The plan prevents a test that exits before all of its
assertions have run from being reported as a pass.
.PP
There are two primary ways to set the plan:
.IP \fBdone_testing()\fR 4
.IX Item "done_testing()"
The most common, and recommended, way to set a plan is to add
\&\f(CW\*(C`done_testing\*(C'\fR at the end of your test file. This will
automatically calculate the plan for you at the end of the test. If the test
were to exit early then \f(CW\*(C`done_testing\*(C'\fR would not run and no plan
would be found, forcing a failure.
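.Sp
For example, this sketch (using a hypothetical \f(CW\*(C`risky_code()\*(C'\fR
helper that is not part of Test2) would fail if \f(CWrisky_code()\fR died or
called \f(CWexit()\fR, because \f(CW\*(C`done_testing\*(C'\fR would never run and
no plan would be set:
.Sp
.Vb 6
\& use Test2::V0;
\&
\& ok(1, "first assertion");
\& risky_code();    # if this exits early, no plan is ever set
\&
\& done_testing;    # sets the plan only if we reach this line
.Ve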
.IP plan($COUNT) 4
.IX Item "plan($COUNT)"
The \f(CWplan()\fR function allows you to specify an exact number of assertions
you want to run. If you run too many or too few assertions the plan will not
match, and the test will be counted as a failure. The primary problem with
this way of planning is that you need to add up the number of assertions, and
adjust the count whenever you update the test file.
.Sp
\&\f(CWplan()\fR must be used before all assertions, or after all assertions;
it cannot be called in the middle of making assertions.
.SH "ADDITIONAL ASSERTION TOOLS"
.IX Header "ADDITIONAL ASSERTION TOOLS"
The Test2::V0 bundle provides a lot more than \f(CWok()\fR,
\&\f(CWplan()\fR, and \f(CWdone_testing()\fR. The biggest tools to note are:
.ie n .IP "is($a, $b, $description)" 4
.el .IP "is($a, \f(CW$b\fR, \f(CW$description\fR)" 4
.IX Item "is($a, $b, $description)"
\&\f(CWis()\fR allows you to compare two structures and ensure they are
identical. You can use it for simple string comparisons, or even deep data
structure comparisons.
.Sp
.Vb 1
\& is("foo", "foo", "Both strings are identical");
\&
\& is(["foo", 1], ["foo", 1], "Both arrays contain the same elements");
.Ve
.ie n .IP "like($a, $b, $description)" 4
.el .IP "like($a, \f(CW$b\fR, \f(CW$description\fR)" 4
.IX Item "like($a, $b, $description)"
\&\f(CWlike()\fR is similar to \f(CWis()\fR except that it only checks items
listed on the right; it ignores any extra values found on the left.
.Sp
.Vb 1
\& like([1, 2, 3, 4], [1, 2, 3], "Passes, the extra element on the left is ignored");
.Ve
.Sp
You can also use regular expressions on the right hand side:
.Sp
.Vb 1
\& like("foo bar baz", qr/bar/, "The string matches the regex, this passes");
.Ve
.Sp
You can also nest the regexes:
.Sp
.Vb 1
\& like([1, 2, \*(Aqfoo bar baz\*(Aq, 3], [1, 2, qr/bar/], "This passes");
.Ve
.SH "SEE ALSO"
.IX Header "SEE ALSO"
Test2::Manual \- Primary index of the manual.
.SH SOURCE .IX Header "SOURCE" The source code repository for Test2\-Manual can be found at \&\fIhttps://github.com/Test\-More/Test2\-Suite/\fR. .SH MAINTAINERS .IX Header "MAINTAINERS" .IP "Chad Granum " 4 .IX Item "Chad Granum " .SH AUTHORS .IX Header "AUTHORS" .PD 0 .IP "Chad Granum " 4 .IX Item "Chad Granum " .PD .SH COPYRIGHT .IX Header "COPYRIGHT" Copyright 2018 Chad Granum . .PP This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself. .PP See \fIhttp://dev.perl.org/licenses/\fR