Package org.apache.axis2.transport.testkit
Introduction and high level overview
In general a good test suite for an Axis2 transport should contain test cases that:
- test the transport sender in isolation, i.e. with non Axis2 endpoints;
- test the transport listener in isolation, i.e. with non Axis2 clients;
- test the interoperability between the transport sender and the transport listener.

These tests should cover:
- different message exchange patterns (at least one-way and request-response);
- different content types (SOAP 1.1/1.2, POX, SOAP with attachments, MTOM, plain text, binary, etc.).
The test kit grew out of the idea that it should be possible to apply a common set of tests (with different MEPs and content types) to several transports with a minimum of code duplication. Given non Axis2 test clients and endpoints as well as the code that sets up the necessary environment, the framework should then be able to build a complete test suite for the transport.
It is clear that since each transport protocol has its own specificities, a high level of abstraction is required to achieve this goal. The following sections give a high level overview of the various abstractions that have been introduced in the test kit.
Integration with JUnit
One of the fundamental requirements for the test kit is to integrate well with JUnit. This requirement ensures that the tests can be executed easily as part of the Maven build and that other available tools such as test report generators and test coverage analysis tools can be used.
The usual approach to write JUnit tests is to extend `junit.framework.TestCase`
and to define a set of methods that implement the different test cases. Since the goal of the framework
is to build test suites in an automated way and the number of test cases can be fairly high, this
approach would not be feasible. Fortunately JUnit supports another way to create a test suite
dynamically: JUnit scans the test code for methods with the following signature:

```java
public static TestSuite suite()
```

A typical transport test will implement this method and use `TransportTestSuiteBuilder`
to let the framework create the test suite.
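The idea behind the dynamic suite can be illustrated with a simplified stand-in (plain Java, not the real JUnit or `TransportTestSuiteBuilder` API): a builder enumerates combinations of components and emits one named test case per combination.

```java
import java.util.ArrayList;
import java.util.List;

public class SuiteBuilderDemo {
    // Minimal stand-in for a dynamically built test suite: each generated
    // test case is represented by its name only; the real builder would
    // also attach the executable test logic for each combination.
    static List<String> buildSuite(List<String> clients, List<String> endpoints) {
        List<String> suite = new ArrayList<>();
        for (String client : clients) {
            for (String endpoint : endpoints) {
                suite.add("client=" + client + ",endpoint=" + endpoint);
            }
        }
        return suite;
    }

    public static void main(String[] args) {
        List<String> suite = buildSuite(
                List.of("axis", "java.net"), List.of("axis", "jetty"));
        System.out.println(suite.size() + " test cases: " + suite);
    }
}
```

This combinatorial generation is also why the number of test cases can grow quickly, which motivates the exclusion rules described later.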
Test case naming
One problem that immediately arises when building a test suite dynamically is that each test case must have a name (which should be unique) and that this name should be sufficiently meaningful so that when it appears in a report, a human is able to get a basic idea of what the test case does. The names generated by the test kit have two parts:

- A numeric ID which is the sequence number of the test case in the test suite.
- A set of key-value pairs describing the components that are used in the test case.

Example:

```
0076:test=REST,client=java.net,endpoint=axis
```

The algorithm used by the test kit to collect the key-value pairs is described in the documentation of the `org.apache.axis2.transport.testkit.name` package.
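The two-part naming scheme can be sketched as follows (the key-value pairs are assumed to be given here; the real kit collects them via the annotations in the `org.apache.axis2.transport.testkit.name` package):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class TestNameDemo {
    // Builds a name like "0076:test=REST,client=java.net,endpoint=axis"
    // from the sequence number and the component descriptions.
    static String testName(int id, Map<String, String> components) {
        String pairs = components.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining(","));
        return String.format("%04d:%s", id, pairs);
    }

    public static void main(String[] args) {
        Map<String, String> components = new LinkedHashMap<>();
        components.put("test", "REST");
        components.put("client", "java.net");
        components.put("endpoint", "axis");
        System.out.println(testName(76, components));
        // prints "0076:test=REST,client=java.net,endpoint=axis"
    }
}
```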
Resource management
In general, setting up the environment in which a given test case is executed may be quite expensive. For example, running a test case for the JMS transport requires starting a message broker. Also, every test case requires at least an Axis2 client and/or server environment to deploy the transport. Setting up and tearing down the entire environment for every single test case would be far too expensive. On the other hand, the environments required by different test cases in a single test suite are in general very different from each other, so that it would not be possible to set up a common environment used by all the test cases.

To overcome this difficulty, the test kit has a mechanism that allows a test case to reuse resources from the previous test case. This is managed in an entirely transparent way by a lightweight dependency injection container (see [TODO: need to regroup this code in a single package]), so that the test case doesn't need to care about it.
The mechanism is based on a set of simple concepts: [TODO: this is too detailed for a high level overview and should be moved to the Javadoc of the relevant package]

Every test case is linked to a set of resources, which are plain Java objects (not required to extend any particular class or implement any particular interface). These objects define the resource set of the test case (which is represented internally by a `TestResourceSet` object).

The lifecycle of a resource is managed through methods annotated by `@Setup` and `@TearDown`. These annotations identify the methods to be called when the framework sets up and tears down the resource. The arguments of the methods annotated using `@Setup` also define the dependencies of that resource. Example:

```java
public class MyTestClient {
    @Setup
    private void setUp(MyProtocolProvider provider) throws Exception {
        provider.connect();
    }
}
```
As shown in this example, dependencies are specified by class (which may be abstract). The actual instance that will be injected is selected during resource resolution.
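A minimal sketch of how such a container might resolve and inject dependencies by type, using a custom annotation and reflection (the real testkit container also handles tear-down ordering and transitive dependencies):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.List;

public class InjectionDemo {
    // Simplified stand-in for the testkit's Setup annotation.
    @Retention(RetentionPolicy.RUNTIME)
    @Target(ElementType.METHOD)
    @interface Setup {}

    static class MyProtocolProvider {
        boolean connected;
        void connect() { connected = true; }
    }

    static class MyTestClient {
        MyProtocolProvider provider;
        @Setup
        private void setUp(MyProtocolProvider provider) {
            this.provider = provider;
        }
    }

    // Calls every @Setup method on the resource, resolving each parameter
    // to an assignment compatible instance from the resource set.
    static void setUp(Object resource, List<Object> resourceSet) throws Exception {
        for (Method m : resource.getClass().getDeclaredMethods()) {
            if (!m.isAnnotationPresent(Setup.class)) continue;
            Class<?>[] paramTypes = m.getParameterTypes();
            Object[] args = new Object[paramTypes.length];
            for (int i = 0; i < paramTypes.length; i++) {
                for (Object candidate : resourceSet) {
                    if (paramTypes[i].isInstance(candidate)) {
                        args[i] = candidate;
                        break;
                    }
                }
                if (args[i] == null) {
                    throw new IllegalStateException("Unresolved dependency: " + paramTypes[i]);
                }
            }
            m.setAccessible(true);
            m.invoke(resource, args);
        }
    }

    public static void main(String[] args) throws Exception {
        MyProtocolProvider provider = new MyProtocolProvider();
        provider.connect();
        MyTestClient client = new MyTestClient();
        setUp(client, List.of(provider, client));
        System.out.println(client.provider == provider); // prints "true"
    }
}
```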
Resources are (in general) resolved from the resource set of the test case. For example, an instance of the `MyTestClient` class can only be used as a resource for a given test case if the resource set of this test case also contains an instance of `MyProtocolProvider` (more precisely, an object that is assignment compatible with `MyProtocolProvider`).

A resource will be reused across two test cases if it is part of the resource sets of both test cases and all its dependencies (including transitive dependencies) are part of both resource sets. The precise meaning of "reusing" in this context is using the same instance without calling the tear down and set up methods.
For example, consider the following test cases and resource sets:

| Test case | Resource set |
|---|---|
| T1 | c:`MyTestClient`, p1:`MyProtocolProvider` |
| T2 | c:`MyTestClient`, p1:`MyProtocolProvider`, r:`SomeOtherResourceType` |
| T3 | c:`MyTestClient`, p2:`MyProtocolProvider`, r:`SomeOtherResourceType` |
Assuming that `SomeOtherResourceType` is independent of `MyTestClient` and `MyProtocolProvider`, the lifecycle of the different resources will be as follows:

| Transition | Lifecycle actions |
|---|---|
| • → T1 | set up p1, set up c |
| T1 → T2 | set up r |
| T2 → T3 | tear down c, tear down p1, set up p2, set up c |
| T3 → • | tear down c, tear down p2, tear down r |
Even if T2 and T3 use the same instance c of `MyTestClient`, this resource is not reused (in the sense defined above) since the `MyProtocolProvider` dependency resolves to different instances.
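The reuse rule can be sketched as a small calculation over two consecutive resource sets (a simplified model that assumes dependencies are already resolved to concrete instances; not the real container code):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Map;

public class LifecycleDemo {
    /** A resource is reused between two test cases if it appears in both
     *  resource sets with the same resolved dependencies, and all of those
     *  dependencies are themselves reused (the transitive rule). */
    static boolean reused(String r, Map<String, List<String>> prev,
                          Map<String, List<String>> next) {
        if (!prev.containsKey(r) || !next.containsKey(r)) return false;
        if (!prev.get(r).equals(next.get(r))) return false;
        for (String dep : prev.get(r)) {
            if (!reused(dep, prev, next)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // T2: c depends on p1; p1 and r have no dependencies.
        Map<String, List<String>> t2 = Map.of(
                "c", List.of("p1"), "p1", List.of(), "r", List.of());
        // T3: same client instance c, but its dependency now resolves to p2.
        Map<String, List<String>> t3 = Map.of(
                "c", List.of("p2"), "p2", List.of(), "r", List.of());

        List<String> tearDown = new ArrayList<>(), setUp = new ArrayList<>();
        for (String r : t2.keySet()) if (!reused(r, t2, t3)) tearDown.add(r);
        for (String r : t3.keySet()) if (!reused(r, t3, t2)) setUp.add(r);
        Collections.sort(tearDown);
        Collections.sort(setUp);

        System.out.println("tear down: " + tearDown); // prints "tear down: [c, p1]"
        System.out.println("set up:    " + setUp);    // prints "set up:    [c, p2]"
    }
}
```

The output reproduces the T2 → T3 row of the table above: r is reused, while c must be torn down and set up again because its dependency changed.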
Resources required by a transport test case
Every transport test case (extending `MessageTestCase`) requires at least three resources:

- A test client (`AsyncTestClient` or `RequestResponseTestClient`) that allows the test case to send messages (and receive responses).
- A test endpoint (`AsyncEndpoint` or `InOutEndpoint`). In the one-way case, this resource is used to receive requests sent by the test client. In the request-response case its responsibility is to generate well defined responses (typically a simple echo).
- A channel (`AsyncChannel` or `RequestResponseChannel`). This resource manages everything that is necessary to transport a message from a client to an endpoint. Depending on the transport this task can be fairly complex. For example, in the JMS case, the channel creates the required JMS destinations and registers them in JNDI, so that they can be used by the client and by the endpoint. On the other hand, for HTTP the channel implementation is very simple and basically limited to the computation of the endpoint reference.
The test kit provides the following Axis2 based test client and endpoint implementations:
|  | One-way | Request-response |
|---|---|---|
| Client | `AxisAsyncTestClient` | `AxisRequestResponseTestClient` |
| Endpoint | `AxisAsyncEndpoint` | `AxisEchoEndpoint` |
Message encoders and decoders
Different clients, endpoints and test cases may have fairly different ways to "naturally" represent a message:

- To test the listener of an HTTP transport, an obvious choice is to build a test client that relies on standard Java classes such as `URLConnection`. For that purpose the most natural way to represent a message is as a byte sequence.
- All Axis2 based test clients and endpoints already have a canonical message representation, which is the SOAP infoset retrieved by `MessageContext.getEnvelope()`.
- A test case for plain text messages would naturally represent the test message as a string.
To bridge these representations, each test case uses a `MessageEncoder` to transform the message from its own representation to the representation used by the test client. In the same way, a `MessageDecoder` is used to transform the message intercepted by the endpoint (in the one-way case) or the response message received by the test client (in the request-response case).
[TODO: currently message encoders and decoders are chosen at compile time and the transformation is invoked indirectly by adapters; this will change in the future so that encoders and decoders are selected dynamically at runtime]
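The encoder/decoder idea can be sketched with simplified interfaces (hypothetical signatures, not the real testkit types, which also carry content-type information):

```java
import java.nio.charset.StandardCharsets;

public class CodecDemo {
    // Hypothetical simplified interfaces; the real testkit interfaces
    // support more representations and carry content-type metadata.
    interface MessageEncoder<M, T> { T encode(M message) throws Exception; }
    interface MessageDecoder<T, M> { M decode(T data) throws Exception; }

    // Encoder/decoder pair for a test case whose natural representation
    // is a String, paired with a client that exchanges raw bytes
    // (e.g. a plain HTTP client based on URLConnection).
    static final MessageEncoder<String, byte[]> TEXT_ENCODER =
            msg -> msg.getBytes(StandardCharsets.UTF_8);
    static final MessageDecoder<byte[], String> TEXT_DECODER =
            data -> new String(data, StandardCharsets.UTF_8);

    public static void main(String[] args) throws Exception {
        byte[] wire = TEXT_ENCODER.encode("Hello");
        System.out.println(TEXT_DECODER.decode(wire)); // prints "Hello"
    }
}
```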
Exclusion rules
Sometimes it is necessary to exclude particular test cases (or entire groups of test cases) from the test suite generated by the test kit. There are various reasons why one would do that:

- A test case fails because of some known issue in the transport. In that case it should be excluded until the issue is fixed. This is necessary to distinguish this type of failure from regressions. In general the tests checked in to source control should always succeed unless there is a regression.
- Sometimes a particular test case doesn't make sense for a given transport. For example a test case that checks that the transport is able to handle large payloads would not be applicable to the UDP transport, which has a message size limitation.
- The test suite builder generates test cases by computing all possible combinations of MEPs, content types, clients, endpoints and environment setups. For some transports this results in a very high number of test cases. Since these test cases generally have a high degree of overlap, one can use exclusion rules to reduce the number of test cases to a more reasonable value.
`TransportTestSuiteBuilder` defines the following default exclusion rule:

```
(&(client=*)(endpoint=*)(!(|(client=axis)(endpoint=axis))))
```

This rule excludes all test cases that would use both a non Axis2 client and a non Axis2 endpoint.
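The default rule above (written in LDAP filter syntax) is equivalent to the following predicate over the test case's key-value pairs (a hand-translated check, not the real filter evaluator):

```java
import java.util.Map;

public class ExclusionDemo {
    // Equivalent of (&(client=*)(endpoint=*)(!(|(client=axis)(endpoint=axis)))):
    // exclude any test case that has both a client and an endpoint,
    // neither of which is Axis2 based.
    static boolean excluded(Map<String, String> props) {
        String client = props.get("client");
        String endpoint = props.get("endpoint");
        return client != null && endpoint != null
                && !"axis".equals(client) && !"axis".equals(endpoint);
    }

    public static void main(String[] args) {
        System.out.println(excluded(Map.of("client", "java.net", "endpoint", "jetty"))); // prints "true"
        System.out.println(excluded(Map.of("client", "java.net", "endpoint", "axis"))); // prints "false"
    }
}
```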
Logging
Transport test cases generally involve several interacting components, and some of these components may use multithreading. Also, experience has shown that some test cases may fail randomly (often with a failure probability highly dependent on the execution platform) because of subtle problems in the transport under test or in the tests themselves. All this can make debugging extremely difficult. To simplify this task, the test kit collects (or provides the necessary infrastructure to collect) as much information as possible during the execution of each test case.
The collected information is written to a set of log files managed by `TestKitLogManager`. An instance is added automatically to the resource set of every test case, and other resources can acquire a reference through the dependency injection mechanism described above. This is the recommended approach. Alternatively, the log manager can be used as a singleton through `TestKitLogManager.INSTANCE`.
Log files are written to subdirectories of target/testkit-logs. The directory structure has a two level hierarchy identifying the test class (by its fully qualified name) and the test case (by its ID). It should be noted that the test results themselves (in particular the exception in case of failure) are still written to the standard JUnit/Surefire logs and that these logs should be consulted first. The test kit specific log files are only meant to provide additional information.
Each test case at least produces a 01-debug.log file with the messages that were logged (using JCL) at level DEBUG during the execution of the test case. In addition, depending on the components involved in the test, the test kit will produce the following logs (XX denotes a sequence number which is generated automatically):

- XX-formatter.log and XX-builder.log: These files are produced when Axis2 test clients and endpoints are used. XX-formatter.log will contain the payload of an outgoing message as seen by the `MessageFormatter`, while XX-builder.log will contain the payload of an incoming message as produced by the `Builder`. Note that the number of log files depends on several factors, such as the MEP, whether the client or endpoint is Axis2 based or not, and whether the transport chooses to use message builders and formatters or not. These files provide extremely valuable information since it is very difficult to get this data using other debugging techniques. Note that the files are created by `LogAspect`, which relies on AspectJ to intercept calls to message formatters and builders. This will only work if the tests are run with the AspectJ weaver.
- XX-service-parameters.log: If the test case uses an Axis2 based endpoint, this file will contain the parameters of the `AxisService` implementing this endpoint. This information is useful since the service configuration is in general determined by different components involved in the test.
Interface summary: `Adapter`, `MessageExchangeValidator`

Class summary: `AdapterUtils`, `ManagedTestSuite`, `MessageTestData`, `TransportTestSuiteBuilder`