test_server(3erl) | Erlang Module Definition | test_server(3erl) |
NAME
test_server - This module provides support for test suite authors.
DESCRIPTION
The test_server module aids the test suite author by providing various support functions. The supported functionality includes:
- Logging and timestamping
- Capturing output to stdout
- Retrieving and flushing the message queue of a process
- Watchdog timers, process sleep, time measurement and unit conversion
- Private scratch directory for all test suites
- Start and stop of slave or peer nodes
TEST SUITE SUPPORT FUNCTIONS
The following functions are supposed to be used inside a test suite.
EXPORTS
os_type() -> OSType
Types:
OSType = term()
This function is equivalent to os:type/0 and is kept for backwards
compatibility.
fail()
fail(Reason)
Types:
Reason = term()
The reason why the test case failed.
This will make the test suite fail with the given reason, or with
suite_failed if no reason was given. Use this function to terminate a
test case, as it makes the log and HTML files easier to read. Reason
will appear in the comment field in the HTML log.
timetrap(Timeout) -> Handle
Types:
Timeout = integer() | {hours,H} | {minutes,M} | {seconds,S}
H = M = S = integer()
Pid = pid()
The process that is to be timetrapped (self() by default)
Sets up a time trap for the current process. An expired timetrap kills the
process with reason timetrap_timeout. The returned handle is to be
given as argument to timetrap_cancel before the timetrap expires. If
Timeout is an integer, it is expected to be milliseconds.
timetrap_cancel(Handle) -> ok
Types:
Handle = term()
Handle returned from timetrap
This function cancels a timetrap. It must be called before the timetrap
expires.
Note:
If the current process is trapping exits, it will not be killed by the
exit signal with reason timetrap_timeout. In that case, the process is
sent an exit signal with reason kill 10 seconds later, which kills it.
Information about the timetrap timeout will then not be found in the test
logs, but the error_logger will be sent a warning.
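For example, a minimal sketch of guarding a slow operation (the two-minute
limit and the function do_heavy_work/0 are illustrative only):
  Handle = test_server:timetrap({minutes,2}),
  do_heavy_work(),   % hypothetical slow operation under test
  test_server:timetrap_cancel(Handle)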
timetrap_scale_factor() -> ScaleFactor
Types:
ScaleFactor = integer()
This function returns the scale factor by which all timetraps are scaled.
It is normally 1, but it is greater than 1 if the test_server is running
cover, is using more scheduler threads than the number of logical
processors on the system, or is running under purify, valgrind or a
debug-compiled emulator. Use the scale factor if you need to scale your
own timeouts in test cases by the same factor as the test_server.
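For example, a sketch of scaling a local receive timeout by the same factor
(the 5000 ms base value is an arbitrary choice):
  Timeout = 5000 * test_server:timetrap_scale_factor(),
  receive
      done -> ok
  after Timeout ->
      test_server:fail(no_answer)
  end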
sleep(MSecs) -> ok
Types:
MSecs = integer() | float() | infinity
The number of milliseconds to sleep
This function suspends the calling process for at least the supplied number
of milliseconds. There are two major reasons to use this function instead
of timer:sleep/1: the timer module may be unavailable when the test
suite runs, and this function also accepts floating point numbers.
adjusted_sleep(MSecs) -> ok
Types:
MSecs = integer() | float() | infinity
The default number of milliseconds to sleep
This function suspends the calling process for at least the supplied number
of milliseconds. It behaves the same way as test_server:sleep/1, except
that MSecs will be multiplied by the 'multiply_timetraps' value, if set,
and also automatically scaled up if 'scale_timetraps' is set to true
(which it is by default).
hours(N) -> MSecs
minutes(N) -> MSecs
seconds(N) -> MSecs
Types:
N = integer()
Value to convert to milliseconds.
These functions convert N hours, minutes or seconds into milliseconds.
Use them when you want to test_server:sleep/1 for a number of seconds,
minutes or hours.
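For example, sleeping for five minutes:
  test_server:sleep(test_server:minutes(5))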
format(Format) -> ok
format(Format, Args) -> ok
format(Pri, Format) -> ok
format(Pri, Format, Args) -> ok
Types:
Format = string()
Format as described for io:format.
Args = list()
List of arguments to format.
Formats output just like io:format but sends the formatted string to a
logfile. If the urgency value, Pri, is lower than some threshold value, it
will also be written to the console of the person running the test. The
default urgency is 50, and the default threshold for display on the console
is 1.
Typically, the tester does not want to see everything a test suite outputs,
but is merely interested in whether the test cases succeeded or not, which
the test server reports. To see more, the threshold values can be changed
manually with the test_server_ctrl:set_levels/3 function.
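A small usage sketch:
  test_server:format("Test is running on node ~p~n", [node()])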
capture_start() -> ok
capture_stop() -> ok
capture_get() -> list()
These functions make it possible to capture all output to stdout from a
process started by the test suite. The list of characters captured can be
purged by using capture_get.
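A sketch of a typical capture sequence (assuming the process under test
writes to stdout; here io:format stands in for it):
  test_server:capture_start(),
  io:format("hello~n"),                       % output to capture
  test_server:capture_stop(),
  Captured = test_server:capture_get(),
  test_server:format("captured: ~p~n", [Captured])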
messages_get() -> list()
This function will empty and return all the messages currently in the calling
process' message queue.
timecall(M, F, A) -> {Time, Value}
Types:
M = atom()
The name of the module where the function resides.
F = atom()
The name of the function to call in the module.
A = list()
The arguments to supply the called function.
Time = integer()
The number of seconds it took to call the function.
Value = term()
Value returned from the called function.
This function measures the time (in seconds) it takes to call a certain
function. The function call is not caught within a catch.
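For example, timing a library call (the choice of lists:seq/2 is
arbitrary):
  {Time, _Value} = test_server:timecall(lists, seq, [1, 1000000]),
  test_server:format("lists:seq/2 took ~p seconds~n", [Time])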
do_times(N, M, F, A) -> ok
do_times(N, Fun) -> ok
Types:
N = integer()
Number of times to call MFA.
M = atom()
Module name where the function resides.
F = atom()
Function name to call.
A = list()
Arguments to M:F.
Calls MFA or Fun N times. Useful for extensive testing of a sensitive
function.
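For example, hammering a function 100 times (erlang:phash2/1 is an
arbitrary stand-in for the function under test):
  test_server:do_times(100, erlang, phash2, [[some, term]])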
m_out_of_n(M, N, Fun) -> ok | exit({m_out_of_n_failed, {R,left_to_do}})
Types:
N = integer()
Number of times to call the Fun.
M = integer()
Number of times to require a successful return.
Repeatedly evaluates the given function until it succeeds (does not crash)
M times. If M successful attempts have not been accomplished after N tries,
the process crashes with reason {m_out_of_n_failed, {R,left_to_do}}, where
R indicates how many successful attempts were still left to be done.
For example:
m_out_of_n(1,4,fun() -> tricky_test_case() end)
Tries to run tricky_test_case() up to 4 times, and is happy if it succeeds once.
m_out_of_n(7,8,fun() -> clock_sanity_check() end)
Tries running clock_sanity_check() up to 8 times, and allows the function
to fail once. This might be useful if clock_sanity_check/0 is known to fail
if the clock crosses an hour boundary during the test (and the up to 8 test
runs could never cross 2 boundaries).
call_crash(M, F, A) -> Result
call_crash(Time, M, F, A) -> Result
call_crash(Time, Crash, M, F, A) -> Result
Types:
Result = ok | exit(call_crash_timeout) |
exit({wrong_crash_reason, Reason})
Crash = term()
Crash return from the function.
Time = integer()
Timeout in milliseconds.
M = atom()
Module name where the function resides.
F = atom()
Function name to call.
A = list()
Arguments to M:F.
Spawns a new process that calls MFA. The call is considered successful if
it crashes with the given reason (Crash), or with any reason if no reason
is specified. The call must terminate within the given time (default
infinity), or it is considered a failure.
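A sketch of checking that a call crashes with an expected reason within 5
seconds (the exit reason goodbye is arbitrary):
  ok = test_server:call_crash(5000, goodbye, erlang, exit, [goodbye])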
temp_name(Stem) -> Name
Types:
Stem = string()
Returns a unique filename: Stem with enough extra characters appended to
make the name unique. The filename returned is guaranteed not to exist in
the filesystem at the time of the call.
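For example, inside a test case, using the priv_dir configuration variable
as the scratch directory:
  PrivDir = ?config(priv_dir, Config),
  Name = test_server:temp_name(filename:join(PrivDir, "scratch_")),
  ok = file:write_file(Name, <<"temporary data">>)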
break(Comment) -> ok
Types:
Comment = string()
Comment is a string which will be written in the shell, e.g. explaining
what to do.
This function will cancel all timetraps and pause the execution of the test case
until the user executes the continue/0 function. It gives the user the
opportunity to interact with the erlang node running the tests, e.g. for
debugging purposes or for manually executing a part of the test case.
When the break/1 function is called, the shell will look something like
this:
   --- SEMIAUTOMATIC TESTING ---
   The test case executes on process <0.51.0>
   "Here is a comment, it could e.g. instruct to pull out a card"
   -----------------------------
   Continue with --> test_server:continue().
The user can now interact with the erlang node, and when ready call
test_server:continue(). Note that this function can not be used if the
test is executed with ts:run/0/1/2/3/4 in batch mode.
continue() -> ok
This function must be called in order to continue after a test case has
called break/1.
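A sketch of a semiautomatic test step using these two functions (the
instruction text and check_card_removed/0 are illustrative):
  test_server:break("Pull out the card, then run test_server:continue()."),
  check_card_removed()   % hypothetical manual-step verification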
run_on_shielded_node(Fun, CArgs) -> term()
Types:
Fun = function() (arity 0)
Function to execute on the shielded node.
CArgs = string()
Extra command line arguments to use when starting the shielded node.
Fun is executed in a process on a temporarily created hidden node with a
proxy for communication with the test server node. The node is called a
shielded node (should have been called a shield node). If Fun is
successfully executed, the result is returned. A peer node (see
start_node/3) started from the shielded node will be shielded from the test
server node, i.e. they will not be aware of each other. This is useful when
you want to start nodes from earlier OTP releases than the OTP release of the
test server node.
Nodes from an earlier OTP release can normally not be started unless the
test server itself has been started in compatibility mode with that release
(see the +R flag in the erl(1) documentation). If a shielded node is
started in compatibility mode of an earlier OTP release than the OTP
release of the test server node, the shielded node can start nodes of that
earlier OTP release.
Note:
You must make sure that nodes started by the shielded node never
communicate directly with the test server node.
Note:
Slave nodes always communicate with the test server node; therefore, never
start slave nodes from the shielded node, always start peer nodes.
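A minimal usage sketch, passing no extra command line arguments:
  Release = test_server:run_on_shielded_node(
                fun() -> erlang:system_info(otp_release) end, ""),
  test_server:format("shielded node runs OTP ~s~n", [Release])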
start_node(Name, Type, Options) -> {ok, Node} | {error, Reason}
Types:
Name = atom() | string()
Name of the slave node to start (as given to -sname or -name)
Type = slave | peer
The type of node to start.
Options = [{atom(), term()}]
List of node start options
This function starts a node, possibly on a remote machine, and guarantees
cross architecture transparency. Type is set to either slave or peer.
slave means that the new node will have a master, i.e. the slave node will
terminate if the master terminates, TTY output produced on the slave will
be sent back to the master node, and file I/O is done via the master. The
master is normally the target node unless the target is itself a slave.
peer means that the new node is an independent node with no master.
Options is a list of tuples which can contain one or more of the following
(a usage sketch follows the list):
- {remote, true}: Start the node on a remote host. If not specified, the node will be started on the local host. Test cases that require a remote host will fail with a reasonable comment if no remote hosts are available at the time they are run.
- {args, Arguments}: Arguments passed directly to the node. This is typically a string appended to the command line.
- {wait, false}: Don't wait until the node is up. By default, this function does not return until the node is up and running, but this option makes it return as soon as the node start command is given. Only valid for peer nodes.
- {fail_on_error, false}: Returns {error, Reason} rather than failing the test case. Only valid for peer nodes. Note that slave nodes always act as if they had fail_on_error set to false.
- {erl, ReleaseList}: Use an Erlang emulator determined by ReleaseList when starting nodes, instead of the same emulator as the test server is running. ReleaseList is a list of specifiers, where a specifier is either {release, Rel}, {prog, Prog}, or 'this'. Rel is either the name of a release, e.g. "r12b_patched", or 'latest'. 'this' means using the same emulator as the test server. Prog is the name of an emulator executable. If the list has more than one element, one of them is picked randomly. (Only works on Solaris and Linux, and the test server gives warnings when it notices that nodes are not of the same version as itself.)
  When specifying this option to run a previous release, use the is_release_available/1 function to test if the given release is available, and skip the test case if it is not.
  In order to avoid compatibility problems (which may not appear right away), use a shielded node (see run_on_shielded_node/2) when starting nodes from OTP releases other than that of the test server.
- {cleanup, false}: Tells the test server not to kill this node if it is still alive after the test case is completed. This is useful if the same node is to be used by a group of test cases.
- {env, Env}: Env should be a list of tuples {Name, Val}, where Name is the name of an environment variable, and Val is the value it is to have in the started node. Both Name and Val must be strings. The one exception is Val being the atom false (in analogy with os:getenv/1), which removes the environment variable. Only valid for peer nodes. Not available on VxWorks.
- {start_cover, false}: By default the test server will start cover on all nodes when the test is run with code coverage analysis. To make sure cover is not started on a new node, set this option to false. This can be necessary if the connection to the node will be broken at some point but the node is expected to stay alive. The reason is that a remote cover node can not continue to run without its main node. Another solution would be to explicitly stop cover on the node before breaking the connection, but in some situations (if old code resides in one or more processes) this is not possible.
stop_node(NodeName) -> bool()
Types:
NodeName = term()
Name of the node to stop
This function stops a node previously started with start_node/3. Use this
function to stop any node you start, or the test server will produce a
warning message in the test logs and kill the node automatically, unless
it was started with the {cleanup, false} option.
is_commercial() -> bool()
This function tests whether the emulator is a commercially supported
emulator. The tests for a commercially supported emulator could be more
stringent (for instance, a commercial release should always contain
documentation for all applications).
is_release_available(Release) -> bool()
Types:
Release = string() | atom()
Release to test for
This function tests whether the release given by Release (for instance,
"r12b_patched") is available on the computer where the test_server
controller is running. Typically, you should skip the test case if it is
not.
Caution: This function may not be called from the suite clause of a test
case, as the test_server will deadlock.
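A sketch combining this check with the {erl, ReleaseList} option of
start_node/3:
  case test_server:is_release_available("r12b_patched") of
      true ->
          {ok, Node} = test_server:start_node(old_node, peer,
                           [{erl, [{release, "r12b_patched"}]}]),
          true = test_server:stop_node(Node);
      false ->
          {skip, "r12b_patched not available"}
  end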
is_native(Mod) -> bool()
Types:
Mod = atom()
A module name
Checks whether the module is natively compiled or not.
app_test(App) -> ok | test_server:fail()
app_test(App, Mode) -> ok | test_server:fail()
Types:
App = term()
The name of the application to test
Mode = pedantic | tolerant
Default is pedantic
Checks an application's .app file for obvious errors. The following is
checked (a usage example follows the list):
- required fields
- that all modules specified actually exist
- that all required applications exist
- that no module included in the application has export_all
- that all modules in the ebin/ dir are included (if Mode == tolerant, this only produces a warning, since not all modules have to be included)
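For example, checking the .app file of a hypothetical application my_app
in tolerant mode:
  ok = test_server:app_test(my_app, tolerant)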
appup_test(App) -> ok | test_server:fail()
Types:
App = term()
The name of the application to test
Checks an application's .appup file for obvious errors. The following is
checked:
- syntax
- that the .app file version and the .appup file version match
- for non-library applications: validity of high-level upgrade instructions; specifying no instructions is explicitly allowed (in this case the application is not upgradeable)
- for library applications: that there is exactly one wildcard regexp clause restarting the application when upgrading or downgrading from any version
comment(Comment) -> ok
Types:
Comment = string()
The given string will occur in the comment field of the table on the HTML
result page. If called several times, only the last comment is printed.
comment/1 is also overwritten by the return value {comment,Comment} from a
test case or by fail/1 (which prints Reason as a comment).
TEST SUITE EXPORTS
The following functions must be exported from a test suite module.
EXPORTS
all(suite) -> TestSpec | {skip, Comment}
Types:
TestSpec = list()
Comment = string()
This comment will be printed on the HTML result
page
This function must return the test specification for the test suite module. The
syntax of a test specification is described in the Test Server User's
Guide.
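A minimal sketch (the test case names are hypothetical):
  all(suite) -> [connect_case, transfer_case, disconnect_case].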
init_per_suite(Config0) -> Config1 | {skip, Comment}
Types:
Config0 = Config1 = [tuple()]
Comment = string()
Describes why the suite is skipped
This function is called before all other test cases in the suite. Config
is the configuration which can be modified here. Whatever is returned from
this function is given as Config to the test cases.
If this function fails, all test cases in the suite will be skipped.
end_per_suite(Config) -> void()
Types:
Config = [tuple()]
This function is called after the last test case in the suite, and can be used
to clean up whatever the test cases have done. The return value is
ignored.
init_per_testcase(Case, Config0) -> Config1 | {skip, Comment}
Types:
Case = atom()
Config0 = Config1 = [tuple()]
Comment = string()
Describes why the test case is skipped
This function is called before each test case. The Case argument is the
name of the test case, and Config is the configuration which can be
modified here. Whatever is returned from this function is given as
Config to the test case.
end_per_testcase(Case, Config) -> void()
Types:
Case = atom()
Config = [tuple()]
This function is called after each test case, and can be used to clean up
whatever the test case has done. The return value is ignored.
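A sketch of the four functions working together (start_my_server/0 and
stop_my_server/1 are hypothetical helpers; the ?config macro is described
under TEST SUITE SUPPORT MACROS below):
  init_per_suite(Config) ->
      Server = start_my_server(),          % hypothetical setup
      [{server, Server} | Config].

  end_per_suite(Config) ->
      stop_my_server(?config(server, Config)),   % hypothetical teardown
      ok.

  init_per_testcase(_Case, Config) ->
      Config.

  end_per_testcase(_Case, _Config) ->
      ok.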
Case(doc) -> [Description]
Case(suite) -> [] | TestSpec | {skip, Comment}
Case(Config) -> {skip, Comment} | {comment, Comment} | Ok
Types:
Description = string()
Short description of the test case
TestSpec = list()
Comment = string()
This comment will be printed on the HTML result page
Config = [tuple()]
Elements from the Config parameter can be read with the ?config macro, see
the section about test suite support macros
Ok = term()
The documentation clause (argument doc) can be used for automatic
generation of test documentation or test descriptions.
The specification clause (argument suite) shall return an empty list, the
test specification for the test case, or {skip,Comment}. The syntax of a
test specification is described in the Test Server User's Guide.
The execution clause (argument Config) is only called if the
specification clause returns an empty list. The execution clause is the real
test case. Here you must call the functions you want to test, and do whatever
you need to check the result. If something fails, make sure the process
crashes or call test_server:fail/0/1 (which also will cause the process
to crash).
You can return {skip,Comment} if you decide not to run the test case
after all, e.g. if it is not applicable on this platform.
You can return {comment,Comment} if you wish to print some information in
the 'Comment' field on the HTML result page.
If the execution clause returns anything else, it is considered a success,
unless it is {'EXIT',Reason} or {'EXIT',Pid,Reason} which can't
be distinguished from a crash, and thus will be considered a failure.
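A sketch of a complete test case with all three clauses (my_func/1 is the
hypothetical function under test):
  my_case(doc) -> ["Checks that my_func/1 accepts an empty list"];
  my_case(suite) -> [];
  my_case(Config) when is_list(Config) ->
      ok = my_func([]),
      {comment, "empty input handled"}.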
A conf test case is a group of test cases with an init and a cleanup
function. The init and cleanup functions are also test cases, but they have
special rules:
- They do not need a specification clause.
- They must always have the execution clause.
- They must return the Config parameter, a modified version of it, or {skip,Comment} from the execution clause.
- The cleanup function may also return a tuple {return_group_result,Status}, which is used to return the status of the conf case to the Test Server and/or to a conf case on a higher level. (Status = ok | skipped | failed).
- init_per_testcase and end_per_testcase are not called before and after these functions.
TEST SUITE LINE NUMBERS
If a test case fails, the test server can report the exact line number at
which it failed. There are two ways of doing this: either by using the
line macro or by using the test_server_line parse transform.
The line macro is described under TEST SUITE SUPPORT MACROS below. The
line macro will only report the last line executed when a test case failed.
The test_server_line parse transform is activated by including the header
file test_server_line.hrl in the test suite. When doing this, it is
important that the test_server_line module is in the code path of the
erlang node compiling the test suite. The parse transform will report a
history of a maximum of 10 lines when a test case fails. Consecutive lines
in the same function are not shown.
The attribute -no_lines(FuncList). can be used in the test suite to exclude
specific functions from the parse transform. This is necessary e.g. for
functions that are executed on old (i.e. <R10B) OTP releases.
FuncList = [{Func,Arity}].
If both the line macro and the parse transform are used in the same module,
the parse transform will overrule the macro.
TEST SUITE SUPPORT MACROS
There are some macros defined in test_server.hrl that are quite useful for
test suite programmers.
The line macro is quite essential when writing test cases. It tells the
test server exactly what line of code is being executed, so that it can
report this line back if the test case fails. Use this macro at the
beginning of every test case line of code.
The config macro is used to retrieve information from the Config variable
sent to all test cases. It is used with two arguments: the first is the
name of the configuration variable you wish to retrieve, and the second is
the Config variable supplied to the test case from the test server.
Possible configuration variables include (a usage sketch follows the
list):
- data_dir - Data file directory.
- priv_dir - Scratch file directory.
- nodes - Nodes specified in the spec file.
- nodenames - Generated nodenames.
- Whatever is added by conf test cases or init_per_testcase/2.
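A sketch of how the two macros are typically used inside a test case (the
file name input.txt is a hypothetical file in the suite's data directory):
  my_case(Config) when is_list(Config) ->
      %% ?line records the line so a failure can be reported exactly
      ?line DataDir = ?config(data_dir, Config),
      %% read a hypothetical data file from the suite's data_dir
      ?line {ok, _Bin} = file:read_file(filename:join(DataDir, "input.txt")),
      ok.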
test_server 3.7.1 | Ericsson AB |