Interfaces: Adapter, MessageExchangeValidator

Classes: AdapterUtils, ManagedTestSuite, MessageTestData, TransportTestSuiteBuilder
Transport test kit base package.
The test kit grew out of the idea that it should be possible to apply a common set of tests (with different MEPs and content types) to several transports with a minimum of code duplication. By providing non-Axis2 test clients and endpoints as well as the code that sets up the necessary environment as input, the framework should then be able to build a complete test suite for the transport.
It is clear that since each transport protocol has its own specificities, a high level of abstraction is required to achieve this goal. The following sections give a high level overview of the various abstractions that have been introduced in the test kit.
The usual approach to writing JUnit tests is to extend junit.framework.TestCase and to define a set of methods that implement the different test cases. Since the goal of the framework is to build test suites in an automated way and the number of test cases can be fairly high, this approach would not be feasible. Fortunately JUnit supports another way to create a test suite dynamically: it scans the test code for methods with the following signature:

```java
public static TestSuite suite()
```

A typical transport test will implement this method and use TransportTestSuiteBuilder to let the framework create the test suite.
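Leaving the actual TransportTestSuiteBuilder API aside, the dynamic-suite pattern can be illustrated with a self-contained sketch. All names below except the suite() signature are stand-ins invented for the example; a real transport test delegates the enumeration of combinations to the builder instead of coding it by hand:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal stand-ins for the junit.framework classes, so the sketch is self-contained.
class TestCaseStub {
    final String name;
    TestCaseStub(String name) { this.name = name; }
}

class TestSuite {
    final List<TestCaseStub> tests = new ArrayList<>();
    void addTest(TestCaseStub t) { tests.add(t); }
    int countTestCases() { return tests.size(); }
}

public class TransportTestSuiteSketch {
    // The pattern used by a transport test: a static suite() method that
    // builds the suite dynamically instead of declaring one method per case.
    public static TestSuite suite() {
        TestSuite suite = new TestSuite();
        // Illustrative dimensions; a real builder would also apply exclusion
        // rules and wire in the actual client/endpoint/channel resources.
        String[] clients = { "axis", "java.net" };
        String[] endpoints = { "axis", "jetty" };
        String[] contentTypes = { "SOAP11", "POX" };
        for (String client : clients)
            for (String endpoint : endpoints)
                for (String contentType : contentTypes)
                    suite.addTest(new TestCaseStub(
                            "test=" + contentType + ",client=" + client + ",endpoint=" + endpoint));
        return suite;
    }

    public static void main(String[] args) {
        System.out.println(suite().countTestCases()); // 2 * 2 * 2 combinations
    }
}
```

Because the suite is assembled in a loop over the available dimensions, adding a new client or content type automatically multiplies the number of generated test cases.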
Each test case in the suite is identified by an ID consisting of a sequence number and a set of key-value pairs describing the test, e.g. 0076:test=REST,client=java.net,endpoint=axis. The algorithm used by the test kit to collect the key-value pairs is described in the documentation of the org.apache.axis2.transport.testkit.name package.
Setting up and tearing down the complete environment for every single test case would make the execution of a large test suite very slow. To overcome this difficulty, the test kit has a mechanism that allows a test case to reuse resources from the previous test case. This is managed in an entirely transparent way by a lightweight dependency injection container (see [TODO: need to regroup this code in a single package]), so that the test case doesn't need to care about it.
The mechanism is based on a set of simple concepts: [TODO: this is too detailed for a high level overview and should be moved to the Javadoc of the relevant package]
Every test case is linked to a set of resources, which are plain Java objects (not required to extend any particular class or implement any particular interface). These objects define the resource set of the test case, which is represented internally by a TestResourceSet object.
The lifecycle of a resource is managed through methods annotated by Setup and TearDown. These annotations identify the methods to be called when the framework sets up and tears down the resource. The arguments of the methods annotated with Setup also define the dependencies of that resource.
Example:

```java
public class MyTestClient {
    @Setup
    private void setUp(MyProtocolProvider provider) throws Exception {
        provider.connect();
    }
}
```
As shown in this example, dependencies are specified by class (which may be abstract). The actual instance that will be injected is selected during resource resolution.
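The resolution of Setup parameters can be sketched with plain reflection. This is an illustrative reimplementation, not the test kit's actual container; all names besides Setup, MyTestClient and MyProtocolProvider are invented for the example:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.Arrays;
import java.util.Collection;

// Illustrative stand-in for the test kit's Setup annotation.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Setup {}

class MyProtocolProvider {
    boolean connected;
    void connect() { connected = true; }
}

class MyTestClient {
    @Setup
    private void setUp(MyProtocolProvider provider) throws Exception {
        provider.connect();
    }
}

public class ResourceSetSketch {
    // Resolve each @Setup parameter against the resource set by assignment
    // compatibility, then invoke the method: a minimal version of what the
    // dependency injection container does.
    static void setUp(Object resource, Collection<Object> resourceSet) throws Exception {
        for (Method m : resource.getClass().getDeclaredMethods()) {
            if (!m.isAnnotationPresent(Setup.class)) continue;
            Class<?>[] types = m.getParameterTypes();
            Object[] args = new Object[types.length];
            for (int i = 0; i < types.length; i++) {
                for (Object candidate : resourceSet) {
                    if (types[i].isInstance(candidate)) { args[i] = candidate; break; }
                }
                if (args[i] == null)
                    throw new IllegalStateException("Unresolved dependency: " + types[i]);
            }
            m.setAccessible(true);
            m.invoke(resource, args);
        }
    }

    public static void main(String[] args) throws Exception {
        MyTestClient client = new MyTestClient();
        MyProtocolProvider provider = new MyProtocolProvider();
        setUp(client, Arrays.asList(client, provider));
        System.out.println(provider.connected);
    }
}
```

Note how the container, not the test client, decides which MyProtocolProvider instance is injected: the client only declares the dependency through the parameter type.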
Resources are (in general) resolved from the resource set of the test case. For example, an instance of the MyTestClient class can only be used as a resource for a given test case if the resource set of that test case also contains an instance of MyProtocolProvider (more precisely, an object that is assignment compatible with MyProtocolProvider).
A resource will be reused across two test cases if it is part of the resource sets of both test cases and all its dependencies (including transitive dependencies) are part of both resource sets. The precise meaning of "reusing" in this context is using the same instance without calling the tear down and set up methods.
For example, consider the following test cases and resource sets:
| Test case | Resource set |
|---|---|
| T1 | c:MyTestClient, p1:MyProtocolProvider |
| T2 | c:MyTestClient, p1:MyProtocolProvider, r:SomeOtherResourceType |
| T3 | c:MyTestClient, p2:MyProtocolProvider, r:SomeOtherResourceType |
Assuming that SomeOtherResourceType is independent of MyTestClient and MyProtocolProvider, the lifecycle of the different resources will be as follows:
| Transition | Lifecycle actions |
|---|---|
| • → T1 | set up p1, set up c |
| T1 → T2 | set up r |
| T2 → T3 | tear down c, tear down p1, set up p2, set up c |
| T3 → • | tear down c, tear down p2, tear down r |
Even if T2 and T3 use the same instance c of MyTestClient, this resource is not reused (in the sense defined above) since the MyProtocolProvider dependency resolves to different instances.
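The reuse rule above can be sketched in a few lines. All types and helper names here are invented for the example (the real container works on TestResourceSet objects); the dependency table mirrors the scenario from the tables above:

```java
import java.util.List;
import java.util.Map;
import java.util.Set;

public class ReuseSketch {
    // Each resource type declares the types it depends on; the actual
    // instance is resolved from the resource set of each test case.
    static final Map<String, List<String>> DEPENDENCIES = Map.of(
            "MyTestClient", List.of("MyProtocolProvider"),
            "MyProtocolProvider", List.of(),
            "SomeOtherResourceType", List.of());

    record Instance(String id, String type) {}

    static Instance resolve(String type, Set<Instance> set) {
        return set.stream().filter(i -> i.type().equals(type)).findFirst().orElseThrow();
    }

    // A resource is reused across two test cases if it belongs to both
    // resource sets and every (transitive) dependency resolves to the same,
    // itself reused, instance in both sets.
    static boolean reused(Instance r, Set<Instance> oldSet, Set<Instance> newSet) {
        if (!oldSet.contains(r) || !newSet.contains(r)) return false;
        for (String depType : DEPENDENCIES.get(r.type())) {
            Instance oldDep = resolve(depType, oldSet);
            Instance newDep = resolve(depType, newSet);
            if (!oldDep.equals(newDep) || !reused(oldDep, oldSet, newSet)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        Instance c = new Instance("c", "MyTestClient");
        Instance p1 = new Instance("p1", "MyProtocolProvider");
        Instance p2 = new Instance("p2", "MyProtocolProvider");
        Instance r = new Instance("r", "SomeOtherResourceType");
        Set<Instance> t1 = Set.of(c, p1);
        Set<Instance> t2 = Set.of(c, p1, r);
        Set<Instance> t3 = Set.of(c, p2, r);
        System.out.println(reused(c, t1, t2)); // true: c and p1 are in both sets
        System.out.println(reused(c, t2, t3)); // false: dependency resolves to p1 vs p2
        System.out.println(reused(r, t2, t3)); // true: r has no dependencies
    }
}
```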
Every message test case (an instance of a subclass of MessageTestCase) requires at least three resources:

- A test client (AsyncTestClient or RequestResponseTestClient) that allows the test case to send messages (and receive responses).
- An endpoint (AsyncEndpoint or InOutEndpoint). In the one-way case, this resource is used to receive requests sent by the test client. In the request-response case its responsibility is to generate well-defined responses (typically a simple echo).
- A channel (AsyncChannel or RequestResponseChannel). This resource manages everything that is necessary to transport a message from a client to an endpoint. Depending on the transport this task can be fairly complex. For example, in the JMS case, the channel creates the required JMS destinations and registers them in JNDI, so that they can be used by the client and by the endpoint. On the other hand, for HTTP the channel implementation is very simple and basically limited to the computation of the endpoint reference.

The test kit provides the following Axis2 based test client and endpoint implementations:
| | One-way | Request-response |
|---|---|---|
| Client | AxisAsyncTestClient | AxisRequestResponseTestClient |
| Endpoint | AxisAsyncEndpoint | AxisEchoEndpoint |
A non-Axis2 test client generally interacts with the transport directly, e.g. through URLConnection. For that purpose the most natural way to represent a message is as a byte sequence. An Axis2 based client or endpoint, on the other hand, works with the SOAP infoset as returned by MessageContext.getEnvelope(). The framework therefore uses a MessageEncoder to transform the message from its own representation to the representation used by the test client. In the same way, a MessageDecoder is used to transform the message intercepted by the endpoint (in the one-way case) or the response message received by the test client (in the request-response case).
[TODO: currently message encoders and decoders are chosen at compile time and the transformation is invoked indirectly by adapters; this will change in the future so that encoders and decoders are selected dynamically at runtime]
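A minimal sketch of the encoder/decoder idea, assuming a byte-oriented test client and using generic interfaces invented for the example (the real MessageEncoder/MessageDecoder signatures may differ):

```java
import java.nio.charset.StandardCharsets;

public class MessageCodecSketch {
    // Illustrative interfaces: transform a message between the framework's
    // canonical representation (T) and the client's representation (U).
    interface MessageEncoder<T, U> { U encode(T message); }
    interface MessageDecoder<U, T> { T decode(U data); }

    public static void main(String[] args) {
        // A byte-oriented test client needs the canonical message (here
        // simply a String) encoded to a byte sequence, and responses
        // decoded back for comparison with the expected payload.
        MessageEncoder<String, byte[]> encoder = s -> s.getBytes(StandardCharsets.UTF_8);
        MessageDecoder<byte[], String> decoder = b -> new String(b, StandardCharsets.UTF_8);
        String message = "<test>Hello</test>";
        System.out.println(decoder.decode(encoder.encode(message)).equals(message));
    }
}
```")).equals("<test>Hello</test>");
</test>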
TransportTestSuiteBuilder defines the following default exclusion rule:

```
(&(client=*)(endpoint=*)(!(|(client=axis)(endpoint=axis))))
```

This rule excludes all test cases that would use a non-Axis2 client and a non-Axis2 endpoint.
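The rule uses the LDAP search filter syntax. Its effect can be sketched as a plain predicate over the key-value pairs of a test case (class and method names invented for the example):

```java
import java.util.Map;

public class ExclusionRuleSketch {
    // Equivalent of (&(client=*)(endpoint=*)(!(|(client=axis)(endpoint=axis)))):
    // exclude a test case when both a client and an endpoint are present
    // and neither of them is the Axis2 implementation.
    static boolean excluded(Map<String, String> testCase) {
        String client = testCase.get("client");
        String endpoint = testCase.get("endpoint");
        return client != null && endpoint != null
                && !"axis".equals(client) && !"axis".equals(endpoint);
    }

    public static void main(String[] args) {
        System.out.println(excluded(Map.of("client", "java.net", "endpoint", "jetty"))); // true
        System.out.println(excluded(Map.of("client", "java.net", "endpoint", "axis"))); // false
    }
}
```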
The collected information is written to a set of log files managed by LogManager. An instance is added automatically to the resource set of every test case, and other resources can acquire a reference through the dependency injection mechanism described above. This is the recommended approach. Alternatively, the log manager can be used as a singleton through LogManager.INSTANCE.
Log files are written to subdirectories of target/testkit-logs. The directory structure has a two level hierarchy identifying the test class (by its fully qualified name) and the test case (by its ID). It should be noted that the test results themselves (in particular the exception in case of failure) are still written to the standard JUnit/Surefire logs and that these logs should be consulted first. The test kit specific log files are only meant to provide additional information.
Each test case produces at least a 01-debug.log file with the messages that were logged (using JCL) at level DEBUG during the execution of the test case. In addition, depending on the components involved in the test, the test kit will produce the following logs (XX denotes a sequence number which is generated automatically):
These files are produced when Axis2 test clients and endpoints are used. XX-formatter.log will contain the payload of an outgoing message as seen by the MessageFormatter. XX-builder.log on the other hand will contain the payload of an incoming message as produced by the Builder. Note that the number of log files depends on several factors, such as the MEP, whether the client or endpoint is Axis2 based or not and whether the transport chooses to use message builders and formatters or not.
These files provide extremely valuable information since it is very difficult to get this data using other debugging techniques. Note that the files are created by LogAspect, which relies on AspectJ to intercept calls to message formatters and builders. This will only work if the tests are run with the AspectJ weaver.
If the test case uses an Axis2 based endpoint, this file will contain the parameters of the AxisService implementing this endpoint. This information is useful since the service configuration is in general determined by different components involved in the test.