
Testing a JavaEE database application

November 24th, 2010

Java EE is more than a decade old; old enough to assume that there are several applications out there that have acquired the infamous “legacy” status. One of the most challenging aspects of such an application is its brittle nature. Adding a feature or changing a database flag can cause a not-so-frequently-used part of the system to break. The only way to fix this situation is to add comprehensive, automated tests.

In this post, I will tell you how I solved this hairy issue for a 10-year-old Java EE application that I “inherited”. Like any good Java EE application, it was nicely layered into a web tier, presentation logic, application logic and a data access layer. Over the years, developers with all good intentions had done their part to introduce whatever technology was in vogue at the time (including, but not limited to, Spring ;) and pre-2.0 EJBs :( ).

Unfortunately, because of the age of the app, business logic was strewn across all layers: from the front-end in JavaScript (gasp!), to scriptlets in JSPs (double gasp!), to the presentation layer, the services layer (yay!) and, most certainly, the database, in stored procedures!

Since I was not familiar with all the existing business logic, I used what is called “black-box” testing, which simply compares the before and after states of an application. In the case of a database application, that means comparing data in selected database tables before and after each test is run. In addition, we will need a set of seed data that is loaded for the entire test suite.

On a positive note, most, if not all, services were nicely defined in a Spring application context. So as not to attempt too much at once, I decided to ensure that all functionality from the service layer on down is tested. That leaves untested code in the presentation (and possibly web) layer, which matters if there is a lot of business logic in there. But that’s for another day… today, I will talk about how I introduced functional testing in a Java EE application, downstream of the services layer.

Broadly, here are the steps:

  1. Create test data
  2. Create and configure annotations to be used in tests
  3. Implement the TestExecutionListener interface such that:
    1. The beforeTestClass method accesses the class-level annotation, creates a new db connection, stores it on the testContext, and inserts setup data.
    2. The beforeTestMethod method accesses the method-level annotation and uses the Spring db connection to insert test input data.
    3. The afterTestClass method accesses the connection from the testContext and rolls back the transaction, thereby rolling back the setup data.
  4. Check post-run database data with data in the expected dataset to determine pass/fail status of the test.

The technologies I used for this are:

  • Spring Testing Framework
  • DBUnit
  • Annotations
  • Maven

and inspiration from an earlier post where I had talked about applying cross-cutting concerns to tests.

Here is a picture showing the moving pieces:

Create test data

There are three sets of data that are needed:

  1. Seed data required for the entire test suite
  2. Data that forms the input for one test case
  3. Data that forms the expected output of the same test case

This may seem like a daunting task at first, but here is a process that will make it much, much simpler, though not completely painless.

Since we are talking about a Java EE database application, we will capture the before and after states of the tests in terms of data. We will ‘freeze’ that state in XML files and check them into source control as part of the code-base (typically in test/resources/dbunit). I can see people going… XML file creation… that’s a show-stopper. But with the maven-nddbunit-plugin described here, you will see that it’s not that difficult at all!

So, as a one-time activity, to test your service, you will need to create data in a database (any database; it could be your own development database). This database should have data to test your service. Typically this will exist from development anyway, so if you haven’t discarded or modified that data, you are halfway there already. Note that at this stage the database will contain not only data for the service use case but also set-up data for the entire application.

Remember that the XML files will need to run against a different database instance (typically a test database) than the one they were extracted from, and we cannot assume that any data is present in that test database. We will assume, though, that the objects in the test database are current with those in development, in that the tables/views and database procedures in both databases are identical.

Next, we will use the maven plugin mentioned earlier to create the XML files. Detailed documentation can be found on the plugin’s website, but here’s the gist:

You configure the plugin in your project’s POM, just like any other maven plugin, in the build section. The configuration specifies a JDBC URL, a username and a password. The plugin has two goals: export and autoExport.

The export goal extracts a set of data, as specified by a SQL query, into an XML file. With this goal you are responsible for the order of the extract: if tableA depends on tableB, then you have to maintain that order by listing the queries accordingly in the configuration section of the plugin, so that the query pulling data from tableB precedes the query pulling data from tableA.
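
The ordering you are responsible for with the export goal is essentially a topological sort over foreign-key dependencies: parent tables must be inserted before the tables that reference them. As a toy illustration (this is not part of the plugin; the table names and dependency map are made up), here is how a parents-first insert order falls out of a simple depth-first walk:

```java
import java.util.*;

// Toy illustration: compute an insert order where every table appears
// after the tables it references via foreign keys.
public class InsertOrder {
    // dependsOn maps a table to the tables it references via foreign keys
    static List<String> insertOrder(Map<String, List<String>> dependsOn) {
        List<String> order = new ArrayList<>();
        Set<String> visited = new HashSet<>();
        for (String table : dependsOn.keySet()) {
            visit(table, dependsOn, visited, order);
        }
        return order;
    }

    static void visit(String table, Map<String, List<String>> dependsOn,
                      Set<String> visited, List<String> order) {
        if (visited.contains(table)) return;
        visited.add(table);
        // emit referenced (parent) tables first...
        for (String parent : dependsOn.getOrDefault(table, List.of())) {
            visit(parent, dependsOn, visited, order);
        }
        order.add(table); // ...then the dependent table itself
    }

    public static void main(String[] args) {
        Map<String, List<String>> deps = new LinkedHashMap<>();
        deps.put("TABLEA", List.of("TABLEB"));
        deps.put("TABLEB", List.of("TABLEC"));
        deps.put("TABLEC", List.of());
        System.out.println(insertOrder(deps)); // [TABLEC, TABLEB, TABLEA]
    }
}
```

Note that with a cyclic dependency no valid insert order exists at all, which is exactly why hand-ordering extracts gets painful in a real schema.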

The more sophisticated autoExport goal lets you specify a base table and a where-clause. The plugin then chases the foreign keys in the database and extracts all the data in the right order, so that you do not get a constraint violation when (later on) upserting that data into your test database.

Let’s look at an example. Assume you are testing a Teller service’s balanceCheckBook() method. This method needs rows in the MONEY_TRANSACTION table with credit and debit entries spanning several months, because balanceCheckBook() has to exercise, say, the part of the code that reconciles over quarters. So having those transactions in the MONEY_TRANSACTION table is crucial to testing your service. You’ve taken the trouble to create transactions in your development database; in doing so, all the data in dependent tables had to already exist (or you had to create it at some point).

When you extract data from the MONEY_TRANSACTION table, you have to ensure that you also extract data from those dependent tables. And that could start you on the slippery slope where TABLEA depends on TABLEB, which depends on TABLEC, which loops back to depend on TABLEA! It would be a nightmare to extract all those rows by hand, and before you know it, you’ll find yourself making your way to the DBA to ask for the permissions (or syntax) to extract the entire database. That’s not a good idea.

That’s where the autoExport goal of the maven-nddbunit-plugin comes in useful. By specifying a base table (in our example, MONEY_TRANSACTION) and an optional WHERE clause, you can rely on the plugin to create an XML file of the minimal data (across several dependent tables) that needs to be inserted first in order to insert the required rows in the base table.

Here is an example of the interaction with the plugin when using the export goal:

C:\projects\Acme> mvn nddbunit:export
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building Acme-Core
[INFO]    task-segment: [nddbunit:export]
[INFO] ------------------------------------------------------------------------
[INFO] [nddbunit:export {execution: default-cli}]
[INFO] 1. sales-input-account
[INFO] 2. sales-input-create-account
[INFO] 3. sales-input-delete-account
[INFO] 4. sales-input-update-account
[INFO] 5. sales-result-one-account
[INFO] 6. sales-result-more-than-one-account
[INFO] 7. sales-result-domestic-account
[INFO] 8. sales-result-international-account
[INFO] 9. sales-seed-countries
[INFO] 10. sales-seed-states
[INFO] 11. sales-seed-currencies
[INFO] 12. sales-seed-users
[INFO] Enter the number next to the Id that you want to export, 0 to quit
3
[INFO] Exporting to DataSetPath: C:\projects\Acme/src/test/resources/dbunit/sales/sales-input-delete-account.xml using URL: jdbc:oracle:thin:@somehost:1521:somesid...
Successfully wrote file 'C:\projects\Acme/src/test/resources/dbunit/sales/sales-input-delete-account.xml'

Similarly, if using the autoExport goal, here is a sample output:

C:\projects\acme-core> mvn nddbunit:autoExport
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building Acme Core
[INFO]    task-segment: [nddbunit:autoExport]
[INFO] ------------------------------------------------------------------------
[INFO] [nddbunit:autoExport {execution: default-cli}]
[INFO] 1. sales-region-input
[INFO] 2. sales-market-share-input
[INFO] 3. accounts-teller-input
[INFO] Enter the number next to the Id that you want to export, 0 to quit
3
[INFO] Accessing URL: jdbc:oracle:thin:@somehost:1521:somesid as user superman...
[INFO] Ready to export:
[INFO]  LEDGER (9 rows).
[INFO]          ACCOUNT_TYPE (1 row).
[INFO]          BANK_BRANCH (1 row).
[INFO]          TEAMMATE (2 rows).
[INFO]          ACCOUNT_CODE (1 row).
[INFO]          COUNTRY_CODE (1 row).
[INFO]          US_STATES (1 row).
[INFO]                  ACCOUNT (3 rows).
[INFO]                  TELLER (1 row).
[INFO]                  CLEARING_HOUSE (5 rows).
[INFO]                  BANK_BRANCH (1 row).
[INFO]                  SHARE (1 row).
[INFO]                  TRANSACTION_CODE (1 row).
[INFO]                          MONEY_TRANSACTION (2 rows).
[INFO] Exporting to DataSet path: C:\projects\acme-core/src/test/resources/dbunit/accounts/teller/teller-input.xml...
[INFO] Do you want to continue to export 30 rows in 13 tables to this file (Y|N)?
y
[INFO] File written...

At this stage, you have a bunch of XML files that represent your test case input (including setup) and another file (or two) that represents your expected output. From this point on, we will not need the database data anymore, and it can be modified or deleted. Note that the datasets have data in the correct order (for insert) to prevent foreign key violations.

Here is a sample of the XML that is generated. Note that the table element’s name is what DBUnit will use as the target table for inserts. So for input datasets, ensure that the table name is correct. For the corresponding output dataset, the table name is not important, so long as the same name is used for the compare.

<?xml version='1.0' encoding='UTF-8'?>
<dataset>
  <table name="test_table1">
    <column>COL1</column>
    <column>COL2</column>
    <row>
      <value>slo</value>
      <value>jo</value>
    </row>
    <row>
      <value>fast</value>
      <value>lane</value>
    </row>
  </table>
</dataset>
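
To make the layout above concrete: the column elements name the columns, and each row’s value elements map to them positionally. DBUnit parses this for you, but here is a minimal plain-JDK sketch (class and method names are mine, purely for illustration) that reads the same structure:

```java
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

// Illustration only: summarize a flat XmlDataSet-style document using the
// JDK's built-in DOM parser (DBUnit normally does this parsing itself).
public class DataSetPeek {
    static String describe(String xml) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)));
        Element table = (Element) doc.getElementsByTagName("table").item(0);
        int cols = table.getElementsByTagName("column").getLength();
        int rows = table.getElementsByTagName("row").getLength();
        return table.getAttribute("name") + ": cols=" + cols + ", rows=" + rows;
    }

    public static void main(String[] args) throws Exception {
        String xml = "<?xml version='1.0' encoding='UTF-8'?>"
                + "<dataset><table name=\"test_table1\">"
                + "<column>COL1</column><column>COL2</column>"
                + "<row><value>slo</value><value>jo</value></row>"
                + "<row><value>fast</value><value>lane</value></row>"
                + "</table></dataset>";
        System.out.println(describe(xml)); // test_table1: cols=2, rows=2
    }
}
```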

Create and configure annotations to be used in tests

Next we need to write the annotations that will be used on the test classes. I have already talked about how to write annotations here. In this case we will need two: one at class level, to specify the seed data that is loaded for the entire test suite, and another at method level, for the data that is needed by each test (method).

The class level annotation can be specified like so:

@AcmeDBUnitSetUp(action=AcmeDBUnitSetUp.ACTION_REFRESH,
        setUpDataSetList={"classpath:dbunit/setup/setup.xml",
                                        "classpath:dbunit/setup/setup2.xml"})
public class SalesServiceTests
          extends AbstractTransactionalJUnit4SpringContextTests {
...

And at the method level:

@Test
@AcmeDBUnitRefresh(action=AcmeDBUnitRefresh.ACTION_REFRESH,
        dataSetList={"classpath:dbunit/sales/areas.xml",
                              "classpath:dbunit/sales/territories.xml"})
public void testSomething(){
...
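
The post does not show the source of these annotations, so here is a minimal sketch of what their declarations might look like; every name, default and the demo class below are assumptions based only on how the annotations are used above:

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

// Hypothetical class-level annotation (declaration assumed, not from the post).
@Retention(RetentionPolicy.RUNTIME) // must survive to run time so the listener can read it
@Target(ElementType.TYPE)
@interface AcmeDBUnitSetUp {
    String ACTION_REFRESH = "REFRESH";
    String action() default ACTION_REFRESH;
    String[] setUpDataSetList() default {}; // classpath: locations of seed datasets
}

// Hypothetical method-level annotation (declaration assumed, not from the post).
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface AcmeDBUnitRefresh {
    String ACTION_REFRESH = "REFRESH";
    String action() default ACTION_REFRESH;
    String[] dataSetList() default {}; // per-test input datasets
}

// Quick self-check that the annotation is readable via reflection,
// the way a TestExecutionListener would read it.
@AcmeDBUnitSetUp(setUpDataSetList = "classpath:dbunit/setup/setup.xml")
class AnnotationDemo {
    public static void main(String[] args) {
        AcmeDBUnitSetUp setUp = AnnotationDemo.class.getAnnotation(AcmeDBUnitSetUp.class);
        System.out.println(setUp.setUpDataSetList()[0]); // classpath:dbunit/setup/setup.xml
    }
}
```

Note the RUNTIME retention: without it, getAnnotation() would return null inside the listener.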

Since our test inherits from AbstractTransactionalJUnit4SpringContextTests, transaction semantics are already configured. Therefore I do not have to annotate each test method with @Transactional or @Rollback explicitly; tests will automatically roll back after each execution, leaving the db as it was before the test was run.

Implement the TestExecutionListener interface

Now we need to implement the TestExecutionListener interface. This interface has before and after callback methods at both method and class level (the class-level callbacks are new in Spring 3.0).

We will need to implement the beforeTestClass method and the beforeTestMethod method. (Note that since we will use the @TestExecutionListeners annotation at class level, we should not use JUnit 4’s or TestNG’s @BeforeClass annotation, as that will conflict with the similarly named methods on the TestExecutionListener interface.)

In the beforeTestClass method implementation, we will access the annotation to get the path to the dataSet and then use the following code to insert data:

DataSource ds = (DataSource)testContext.getApplicationContext().getBean("dataSource");
IDatabaseConnection connection =
                      new DatabaseDataSourceConnection(ds, schemaName);
testContext.setAttribute("connection", connection);
...
Resource dataSetResource = testContext.getApplicationContext()
                                 .getResource(pathFromAnnotation);
IDataSet dataSet = new XmlDataSet(dataSetResource.getInputStream());
...

DatabaseOperation.REFRESH.execute(connection, dataSet);

There are a couple of things that are of note:

  1. The connection obtained is a brand-new connection, not the one used by Spring and therefore by the AbstractTransactionalJUnit4SpringContextTests class.
  2. That connection is placed in the testContext so that it can be rolled back later, in the afterTestClass method implementation.
  3. The use of DatabaseOperation.REFRESH. This causes DBUnit to leave existing data in the db untouched, insert rows that do not exist, and update rows whose values have changed.

In the afterTestClass method implementation, we access the connection object from the testContext and roll back the transaction such that setup data is no longer in the database.
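
The post shows code for beforeTestClass and beforeTestMethod but not for afterTestClass. A minimal sketch of what it might look like, assuming the IDatabaseConnection was stored under the key "connection" as in the beforeTestClass snippet above (the listener class name is mine, and this assumes auto-commit was disabled on that connection when it was created):

```java
import org.dbunit.database.IDatabaseConnection;
import org.springframework.test.context.TestContext;
import org.springframework.test.context.support.AbstractTestExecutionListener;

// Hypothetical listener fragment: fetch the private connection stored by
// beforeTestClass and roll its transaction back, removing the seed data.
public class AcmeDbUnitListener extends AbstractTestExecutionListener {

    @Override
    public void afterTestClass(TestContext testContext) throws Exception {
        IDatabaseConnection connection =
                (IDatabaseConnection) testContext.getAttribute("connection");
        if (connection != null) {
            connection.getConnection().rollback(); // undo the class-level seed data
            connection.close();
        }
    }
}
```

Extending AbstractTestExecutionListener (rather than implementing TestExecutionListener directly) means only the callbacks you care about need to be overridden.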

In the beforeTestMethod implementation we do exactly the same thing as in the beforeTestClass implementation, except that we use Spring’s existing connection so that rollback semantics are in place.

...
Connection sqlConnection = DataSourceUtils.getConnection(ds);
IDatabaseConnection connection = new DatabaseConnection(sqlConnection, schemaName);
...

Here, since we have used DataSourceUtils.getConnection, we are guaranteed to get the same connection object that is passed to the JdbcTemplate used by AbstractTransactionalJUnit4SpringContextTests to perform the rollback. And since DBUnit is also using that connection, the rollback will occur as intended by Spring.

This picture will explain what’s going on:

Check post-run database data with data in the expected dataset

Now that data is set up before the test class is loaded, and data is also injected before each test as appropriate, all we have to do is call our service method, have it do what it does, and then check the results.
The call to compare looks like this:

...
public void testSales(){
  //Calculate raise for empId=1
  salesService.calculateRaise("1");
  compareDbResults("SELECT emp_salary FROM emp, dept WHERE emp_dep_id = dep_id AND emp_id = 1 AND dep_id=2",
                 "classpath:dbunit/employeeServices/calculate-raise/results/emp-salary.xml");
...

Here we see that a query is passed in to compare the current state of the database to a dataset that was pre-determined using the exact same query in the configuration of the maven-nddbunit-plugin.
The implementation of the compareDbResults method is something like this:

public void compareDbResults(String sqlQuery, String dataSetPath) {
...
DataSource ds = (DataSource)applicationContext.getBean("dataSource");
Connection sqlConnection = DataSourceUtils.getConnection(ds);
IDatabaseConnection connection = new DatabaseConnection(sqlConnection, getSchemaName());

org.springframework.core.io.Resource dataSetResource = this.applicationContext.getResource(dataSetPath);
IDataSet expectedDataSet = new XmlDataSet(dataSetResource.getInputStream());

QueryDataSet actualDataSet = new QueryDataSet(connection);
actualDataSet.addTable("temp", sqlQuery);

Assertion.assertEquals(expectedDataSet, actualDataSet);
...
}

Conclusion

We have seen a powerful way to do black-box testing of a Java EE database application, comparing before and after states in the database to assert success. DBUnit offers a good mechanism to capture the state of a database in XML and lets us apply that data in broad strokes via the REFRESH operation. The maven-nddbunit-plugin adds the ability to create and manage the large amount of XML data that needs to be produced to carry out the tests.
