Archive

Posts Tagged ‘maven’

Angular JS From a Different Angle!

August 18th, 2013

I recently made the switch to a full-stack JavaScript front end framework for an enterprise application that we are building.

In this post, I'll talk about the integration (or, rather, the lack thereof) of the development methodologies used for developing a server-side RESTful API vs. a client-side Angular JS application. Along the way, I'll add some opinion to the already opinionated Angular framework.

First, let's look at why JavaScript is relevant today.

The graphic below nicely shows the evolution of Javascript from ad-hoc snippets to libraries to frameworks. Of these frameworks, Angular seemed to be getting a lot of traction (being pushed by Google helps :) ).

As we can see, we went from ad-hoc JavaScript snippets to libraries to full-fledged frameworks. It stands to reason, then, that these frameworks should be subject to the same development rigor that is accorded to server-side development. In working toward that objective, though, we find that the integration of the tools that enforce that rigor is not all that seamless.

Angular JS development is what I would call web-centric. That makes sense, given that it runs in a browser! However, if we focus all our energies on building services (exposed via an API, RESTful or not), and a web front-end is just one of many ways those services are consumed, then the web-centric nature of Angular development can get a bit non-intuitive.

For a server-side developer starting with the Angular stack, issues like the following can become a hindrance:

Where’s the data coming from?

If you want to run most of Angular's samples, you need to fire up a Node JS server. Not that that is insurmountable, but I didn't sign up for NodeJS, just Angular. Now I have to read through the Node docs to get the samples up and running.

Next, testing:

Testing, or rather the ability to write tests, has a big role to play in the JavaScript renaissance. Well... ok... let's write some tests! But wait! I need to install PhantomJS or NodeJS or some such JS server to fire up the test harness! Oh, crap! Now I've got to read up on Karma (aka Testacular) to run the tests.

What about the build:

How do I build my Angular app? Well... the docs and samples say: use npm. What's that? So now I have to google and start using the Node Package Manager to download all the dependencies. Or get Grunt! (Grunt!)

All I want to do is bolt an Angular front-end onto an existing REST endpoint. Why do I need all this extra stuff? Well... that is because Angular takes a web-centric approach and is largely influenced by the Rails folks (see my related post here), whereas enterprise services treat the front-end as an afterthought :)

So, before I get all the Angular (and Ruby on Rails) fans all worked up, here’s the good news!

I wrote up an application that bolts an Angular JS front-end onto a Java EE CRUD application (Spring, Hibernate... the usual suspects) on the back-end. It's a sample, so it obviously lacks certain niceties like security, but it does make adoption of Angular easier for someone more familiar with Java EE than Ruby on Rails.

Source and Demo

You can download and check out Angular EE (aka PointyPatient) here. In the rest of this post, I’ll refer to this sample app, so it may help to load it up in your IDE.

You can also see the app in action here.

Opinions, Opinions

One of Angular’s strengths is that it is an opinionated framework. In the cowboy-ruled landscape of the Javascript of yore, opinion is a good thing! In Angular EE, you will see that I’ve added some more opinion on top of that, to make it palatable to the Java EE folks!

So here is a list of my opinions that you will see in Angular EE:

Angular Structure

The structure of a webapp is largely predicated on whether or not it is a servlet. Beyond the servlet specification's mandate that a web.xml exist, all other webapp structure is a matter of convention. The Angular sample app, Angular-Seed, is not a servlet. Notwithstanding the fact that Angular (and all modern front-end frameworks) push for a Single Page App (SPA), I still find servlets a very alluring paradigm. So here's my first opinion: rather than go for a pure SPA, I've made Angular EE's web application a servlet that is also an SPA.

If you compare the directory structure on the left (Angular-Seed) with the one on the right (PointyPatient webapp), you will see that the one on the right is a servlet that has a WEB-INF/web.xml resource. It also has an index.html at the root. This index.html does nothing but a redirect like so:

<meta http-equiv="refresh" content="0; url=app/index.html">

It is the index.html inside the app directory that bootstraps the angular app. And so the context root of the webapp is still the webapp directory, not webapp/app.

So what’s the advantage of making this a servlet? For one: You can use the powerful servlet filtering mechanism for any pre-processing that you may want to do on the server,  before the SPA is fired up. The web.xml is the touchpoint where you would configure all servlet filters.

Second, instead of having one SPA, wouldn't it be nice if one webapp could serve up several SPAs?

For example, let’s say you have an application that manages patients, doctors and their medication in several hospitals. I can easily see the following SPAs:

  • Bed Management
  • Patient-Drug Interaction
  • Patient-Doc Configuration
  • Patient Records

Usually, a user will use only one SPA, but on occasion will need to cross over to a different one. All the above SPAs share a common HTTP session, authentication and authorization. The user can switch between them without having to log on repeatedly. Why load up all the functionality in a browser when only a subsystem may be needed? Use server-side (servlet) features to decide which SPAs to fire up depending on who's logged in (using authorization, roles, permissions etc.), and delay the loading of rarely used SPAs as much as possible.

For the above reasons, I think it is a good idea to serve up your SPA (or SPAs) within the context of a servlet.

Now let’s look at the structure of just the Angular part:

Again, on the left is AngularSeed and on the right, PointyPatient.

There is no major change here, except that I prefer views to partials (in keeping with the MVVM model).

Secondly, I preferred to break out controllers, services, directives and filters into their own files. This will definitely lead to fewer source-control problems with merges.

app.js still remains the gateway into the application with routes and config defined there. (More on that later).
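
For readers who have not seen an Angular route configuration before, here is a minimal sketch of what such an app.js might contain. The module, template and controller names are illustrative, not the actual ones from PointyPatient:

// app.js (sketch): declare the module and map routes to views and controllers.
// On Angular 1.2+ the 'ngRoute' module would also need to be listed as a dependency.
var app = angular.module('pointyApp', ['restangular']);

app.config(['$routeProvider', function ($routeProvider) {
  $routeProvider
    .when('/patients', {
      templateUrl: 'views/PatientList.html',
      controller: 'PatientController'
    })
    .when('/patients/:patientId', {
      templateUrl: 'views/Patient.html',
      controller: 'PatientController'
    })
    .otherwise({ redirectTo: '/patients' });
}]);

Note how both routes can point at the same controller; that choice is discussed under Controllers below.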


Project Structure

Now that we have looked at differences in the Angular app, let’s step back a little and look at the larger context: The server-side components. This becomes important, only because we want to treat the Angular app as just another module in our overall project.

I am using a multi-module Maven project structure and so I define my Angular app as just another module.

  • Pointy-api is the REST endpoint to my services
  • Pointy-build is a pom project that aggregates the Maven reactor build.
  • Pointy-domain is where my domain model (hopefully rich) is stored
  • Pointy-parent is a pom project for inheriting child projects
  • Pointy-services is where the business logic resides and is the center of my app.
  • Pointy-web is our Angular app and the focus of our discussion

Anatomy of the Angular App

A Java EE application has layers that represent separation of concerns. There is no reason we cannot adopt the same approach on the Angular stack.

As we see in the picture below, each layer is unilaterally coupled with its neighbor. But the key here is dependency injection. IMO, Angular's killer feature is how it declares dependencies in each of its classes and tests (more on that later). PointyPatient takes advantage of that, as can be seen here.

Let us discuss each layer in turn:

Views: HTML snippets (aka partials). There is no “logic” or conditionals here; all the logic is buried either in Angular-provided directives or in your own directives. An example would be the use of the ng-show directive on the alert directive in the Patient.html view. The conditional logic to show/hide the alert is governed by two-way binding on the view-model that is passed to the directive. No logic means no testing of the view, which is highly desirable because the view is mainly the DOM, and the DOM is the most difficult and brittle thing to test.
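
As an illustration of such a directive (a sketch only, not the actual PointyPatient code), an alert directive driven entirely by two-way binding might look like this:

// Sketch of a simple alert directive: all show/hide logic lives in the directive
// template via ng-show, bound to a flag on the view-model.
app.directive('alert', function () {
  return {
    restrict: 'E',
    scope: {
      message: '=',  // two-way bound message text
      visible: '='   // two-way bound flag that drives ng-show
    },
    template: '<div class="alert" ng-show="visible">{{message}}</div>'
  };
});

The view then simply declares <alert message="alertMessage" visible="alertVisible"></alert> and carries no conditional logic of its own.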

Controllers: Although it may seem, looking at some of the samples, that we should end up with a controller per view, in my opinion a controller should be aligned to a view-model, not a view. So in PointyPatient we have one controller (PatientController) that serves both views (Patient.html and PatientList.html), because the view-models for these two views do not interfere with each other.

Services: This is where common logic that is not dependent on a view-model is processed.

  • Server-side access is achieved via Restangular. Restangular returns a promise which is passed to the controller layer
  • An example of business logic that would make sense in this layer would be massaging Restangular returned data by inspecting user permissions for UI behavior. Or date conversions for UI display. Or enhancing JSON that is returned by the db with javascript methods that can be called in the view (color change etc).
    There is a subtle difference between the client- and server-side service layers: the client-side service layer holds business logic that is UI related, yet not view-model related. The server-side service layer holds business logic that should have nothing to do with the UI at all.
  • Keep the view-model (usually the $scope object) completely decoupled from this layer.
  • Since services return a promise, it is advisable not to process the error part of the promise here, but to pass it on to the controller. That way the controller is well placed to change the route or show the user appropriate messages based on the error block of the promise. This can be seen in PatientController and PatientService, and is sketched below.
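
As a rough sketch of this division of responsibility (assuming Restangular's promise-based API and reusing the illustrative names from the earlier sketches):

// Sketch: the service passes the Restangular promise through untouched...
app.factory('PatientService', ['Restangular', function (Restangular) {
  return {
    getPatients: function () {
      // No error handling here; the promise is returned as-is.
      return Restangular.all('patients').getList();
    }
  };
}]);

// ...and the controller decides what to do with both the success and error blocks.
app.controller('PatientController', ['$scope', '$location', 'PatientService',
  function ($scope, $location, PatientService) {
    PatientService.getPatients().then(
      function (patients) {
        $scope.patients = patients;          // success: populate the view-model
      },
      function (error) {
        $scope.alertMessage = 'Could not load patients';  // or $location.path(...)
        $scope.alertVisible = true;
      }
    );
  }]);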

Layers

Since we have defined layers on both the Angular stack and the server side, an example may help solidify the purpose of each layer. So, using a rather contrived example from the medical domain, here are some sample APIs in each layer:

Server side services:
This is where ‘business logic’ that is UI independent lives.

  • Invoice calculatePatientInvoice(Long patientId);
  • boolean checkDrugInteractions(Long patientId, Long prescriptionId);
  • boolean checkBedAvailability(Long patientId, LodgingPreference preference);
  • List<Patient> getCriticalPatients();
  • int computeDaysToLive(Long patientId);

The number of days a patient lives will not depend on whether we use Angular for the UI or not :) . It will depend, though, on several other APIs available only on the server side (getVitals(), getAncestralHistory(), etc.).
Server-side controllers:
If we use Spring MVC to expose services via REST, then the controllers are a very thin layer.
There is no business logic at all. Just methods that expose the REST verbs which call the services in turn.

  • getPatientList();
  • getPatient(Long patientId);
  • createPatient(Patient patient);
  • deletePatient(Long patientId);

Angular services:
This is where ‘business logic’ that is UI dependent lives.
These methods are used by several Angular controllers. They could involve massaging cached JSON or even server-side calls; however, the processing is always UI related.

  • highlightPatientsForCurrentDoctorAndBed();
    Assuming that doctorId and bedId are JSON data in the browser, this method changes the color of all patients assigned to the current doc and bed.
  • showDaysAgoBasedOnLocale(date);
    Returns "3 days ago", "5 hours ago", etc. instead of a raw date on the UI (a sketch of this one follows the list).
  • computeTableHeadersBasedOnUserPermission(userId);
    Depending on who’s logged in, grid/table headers may need to show more/less columns.
    Note that it is the server based service that is responsible for hiding sensitive data based on userId.
  • assignInvoiceOverageColor(invoice);
    Make invoices that are over 90 days overdue, red, for instance.
  • showModifiedMenuBasedOnPermissions(userId);
    Hide/disable menus based on user permissions (most likely cached in the browser).
  • computeColumnWidthBasedOnDevice(deviceId);
    If a tablet is being used, this nugget of info, will most likely be cached in the browser.
    This method will react to this info.
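
To give one of these a concrete shape, here is a minimal sketch of what such a UI-oriented service method might look like. The implementation is illustrative (and ignores real locale handling); it is not taken from PointyPatient:

// Sketch of a UI-related service: turn a date into "n hours/days ago" text.
app.factory('DisplayService', function () {
  return {
    showDaysAgoBasedOnLocale: function (date) {
      var msPerHour = 60 * 60 * 1000;
      var msPerDay = 24 * msPerHour;
      var elapsed = Date.now() - new Date(date).getTime();
      return elapsed < msPerDay
        ? Math.round(elapsed / msPerHour) + ' hours ago'
        : Math.round(elapsed / msPerDay) + ' days ago';
    }
  };
});

Note that nothing here touches the $scope; a controller that needs the text injects DisplayService and assigns the result to its view-model.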

Angular controllers:
These methods are view-model ($scope) dependent. These are numerous and shallow.
Their purpose in life is to handle (route) error conditions and assign results to the scope.

  • getDoctorForPatient(Long patientId);
    This massages local (in-browser) patient-doctor data, or accesses the server via Angular services -> REST -> server services, and assigns the result to a scope variable.
  • getEmptyBedsInZone(zoneId);
    Assigns the returned beds to the scope.

The main difference is that here the result is assigned to the $scope, unlike the Angular Service, which is $scope independent.

Testing

While most JS frameworks emphasize the importance of testing JavaScript (and there are enough JS testing frameworks out there), IMO it is only Angular that focuses on a key enabler for meaningful unit testing: dependency injection.

In PointyPatient, we can see how Jasmine tests are written for controllers and services. Correspondingly, JUnit tests are written using the Spring Testing Framework and Mockito.

Let's look at each type of test. It may help to check out the code alongside:

  1. Angular Controller Unit Tests: Here the system under test is the PatientController. I have preferred to place all tests that pertain to PatientController in one file, PatientControllerSpec.js, as against squishing all classes' tests into one giant ControllerSpec.js. (This works better from the source-control perspective, too.) The PatientService class has been stubbed out using a Jasmine spy. The other noteworthy point is the use of the $digest() function on $rootScope. This is necessary because the call to patientService returns a promise that is typically assigned to a $scope variable. Since $scope is evaluated when the DOM is processed ($apply()'ed), and since there is no DOM in a test, $digest() needs to be called on the root scope (I am not sure why localScope.$digest() doesn't work, though). A minimal sketch of such a spec appears after this list.
  2. Angular Service Unit Tests: Here the system under test is the PatientService. Similar to PatientControllerSpec.js, PatientServiceSpec.js only caters to code in PatientService.js. Restangular, the service that gets data from the server via RESTful services, is stubbed out using Jasmine spies.
    Both the PatientControllerSpec.js and PatientServiceSpec.js can be tested using the SpecRunner.html test harness using the file:/// protocol.
    However, when the same tests are to be run with the build, see the config of the jasmine-maven-plugin in the pom.xml of the pointy-web project. The only word of caution here is the order of the external dependencies that the tests depend on and which are configured in the plugin. If that order is not correct, errors can be very cryptic and difficult to debug.
    These tests (Unit tests) can therefore be executed using file:///…/SpecRunner.html during development and via the maven plugin during CI.
    In this sense, we have run these unit tests without using the Karma test runner because, in the end, all Karma does in the context of an Angular unit test is watch for changes in your JS files and re-run the tests. If you are ok with forgoing that convenience, then Karma is not really necessary.
  3. End-To-End Tests: These tests are run using a combination of things:
    First, note the class called: E2ETests.java. This is actually a JUnit test that is configured to run using the maven-failsafe-plugin in the integration-test phase of the maven life-cycle.
    But before the failsafe plugin is fired up, in the pre-integration-test phase, the maven-jetty-plugin is configured to fire up a servlet container that actually serves up the Angular servlet webapp (pointy-web) and the RESTful api webapp (pointy-api), and then stops both in the post-integration-test phase.
    E2ETests.java loads up a Selenium driver, fires up Firefox, points the browser url to the one where Jetty is running the servlet container (See pointy-web’s pom.xml and the config of the maven-jetty-plugin).
    Next, in scenario.js, we use the Angular-supplied scenario DSL to navigate the DOM: we navigate to a landing page and then interact with input elements and traverse the DOM using jQuery (a rough sketch of this DSL also appears after the list).
    If we were to run these tests using the Karma E2E test harness, we see that Karma runs the E2E test as soon as the scenario.js file changes. A similar (but not same) behavior can be simulated by running
    mvn install -Dtest=E2ETests
    on the command line.
    It is true, though, that running the tests with Maven takes much longer than its Karma counterpart, because Maven has to go through its lifecycle phases up until the integration-test phase.
    But if we adopt the approach that we write a few E2E tests to test the happy-path and several unit tests to maximize test coverage, we can mitigate this disadvantage.
    However, as far as brittleness is concerned, this mechanism (using Selenium) is as brittle (or as tolerant) as the corresponding Karma approach, because ultimately they both fire up a browser and traverse the DOM using jQuery.
  4. Restangular is not tested within our application because it is an external dependency, although it manifests as a separate layer in our application.
  5. On the server side we use the Spring MVC Test integration that is part of the Spring 3.2+ framework. This loads up the web application context, including security (and all other filters, including the CORS filter) configured in web.xml. This is obviously slow (because of the context being loaded), so we should use it only for testing the happy path. Since these tests do not roll back db actions, we must take care to set up and tear down our test fixtures.
  6. Next we have the Spring transactional tests. These integration tests use the Spring test runner, which ensures that db transactions are rolled back after each test, so tear-downs are not really needed. However, since the application context is loaded up on each run, these tend to be slow and should be used only to test the happy path.
  7. Next we have Service Unit tests: PatientServiceTests.java: These are unit tests that use the Mockito test library to stub out the Repository layer. These are written to maximize test coverage and therefore will need to be numerous.
  8. Repository Unit Tests (PatientRepositoryTests.java) are unit tests that stub out calls to the database by replacing the Spring datasource context with a test-datasource-context.xml.
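
To give a feel for the shape of these tests, here is a minimal sketch of a controller spec; the module, controller and service names are illustrative, and the real specs live in PatientControllerSpec.js. Note the explicit $digest() on the root scope to resolve the stubbed promise (this assumes the Jasmine 1.x spy API that was current at the time):

// Sketch of a Jasmine unit test for a controller, with the service stubbed out.
describe('PatientController', function () {
  var scope, rootScope;

  beforeEach(module('pointyApp'));

  beforeEach(inject(function ($rootScope, $controller, $q) {
    rootScope = $rootScope;
    scope = $rootScope.$new();

    // Stub the service with a Jasmine spy that returns an already-resolved promise.
    var deferred = $q.defer();
    deferred.resolve([{ name: 'Jane Doe' }]);
    var patientService = {
      getPatients: jasmine.createSpy('getPatients').andReturn(deferred.promise)
    };

    $controller('PatientController', { $scope: scope, PatientService: patientService });
  }));

  it('puts the patients on the scope', function () {
    // Promises only resolve when a digest runs; there is no DOM here,
    // so trigger the digest explicitly on the root scope.
    rootScope.$digest();
    expect(scope.patients.length).toBe(1);
  });
});

And for the end-to-end side, the (pre-Protractor) Angular scenario runner DSL used in scenario.js looks roughly like this; the selectors and URL are made up for illustration:

// Sketch of an E2E scenario using the Angular scenario runner DSL.
describe('Patient list', function () {
  it('shows patients after navigating to the landing page', function () {
    browser().navigateTo('/app/index.html');
    input('searchText').enter('Doe');
    element('.search-button').click();
    expect(repeater('.patient-row').count()).toBe(1);
  });
});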

Environmentally Aware!

Applications need to run in various 'environments' like development, test, QA and production. Each environment has certain 'endpoints' (db URLs, user credentials, JMS queues etc.) that are typically stored in property files. On the server side, the Maven build system can be used to inject these properties into the 'built' (aka compiled or packaged) code. When using the Spring Framework, a very nice interface, PropertyPlaceholderConfigurer, can be used to inject the correct property file using Maven profiles. (I've blogged previously about this here.) An example of this is the pointy-services/src/main/resources/prod.properties property file that is used when the -Pprod profile is invoked during the build.
The advantage of this approach is that properties can be read from a hierarchy of property files at runtime.

In Angular EE, I have extended this ability to make the code 'environmentally aware' to the JS stack as well. However, property injection happens only at build time. This is effected using Maven profiles, as can be seen in the pom.xml of the pointy-parent module. In addition, see the environment-specific property files in pointy-web/src/main/resources. Lastly, check out the customization of the maven-war-plugin, where the correct ${env}.properties file is filtered into the war at build time. You will see how the REST endpoint is injected into app.js in the built war module for different environments.
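
As an illustration of this build-time injection (the property name and the use of an Angular constant here are assumptions for the sketch, not necessarily the exact mechanism used in pointy-web), the filtered app.js could expose the endpoint like so:

// app.js (sketch) before filtering: the ${pointy.api.url} token is replaced by the
// maven-war-plugin at build time, using the property file of the active Maven profile.
app.constant('API_BASE_URL', '${pointy.api.url}');

// Wherever the endpoint is needed, inject the constant instead of hard-coding a URL.
app.config(['RestangularProvider', 'API_BASE_URL',
  function (RestangularProvider, API_BASE_URL) {
    RestangularProvider.setBaseUrl(API_BASE_URL);
  }]);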

In summary

We have seen how we can use a homogeneous development environment for building an Angular application. Since user stories slice vertically across the entire stack, there is value in striving for that homogeneity, so that if a story is approved (or rolled back), it affects the entire stack, from Angular all the way down to the repository.

We have also seen a more crystallized separation of concerns in the layers defined in the Angular stack.

We have seen how the same test harness can be used for server and client side tests.

Lastly we have seen how to make the Angular app aware of the environment it runs in, much like property injection on the server side.

I hope this helps more Java EE folks in adopting Angular JS, which I feel is a terrific way to build a web front end for your apps!

Testing a JavaEE database application

November 24th, 2010

Java EE is more than a decade old, old enough to assume that there are several applications out there that have acquired the infamous “legacy” status. One of the most challenging aspects of such an application is its brittle nature: adding a feature or changing a database flag can cause a not-so-frequently-used part of the system to break. The only way to fix this situation is to add comprehensive and automated tests.

In this post, I will tell you how I solved this hairy issue for a 10-year-old Java EE application that I “inherited”. Like any good Java EE application, this application was nicely layered into the web tier, presentation logic, application logic and the data access layer. Developers, with all good intentions over the years, had done their part to introduce whatever technology was in vogue at the time (including, but not limited to, Spring ;) and EJBs (<2.0) :( ).

Unfortunately, because of the age of the app, business logic was strewn across all layers: from the front-end in JavaScript – (gasp!) to scriptlets in JSPs – (double gasp!), to business logic in the presentation layer; in the services layer (yay!) and most certainly… in the database, in stored procedures!

Since I was not familiar with all the existing business logic, I used what is called “black-box” testing. This simply compares the before and after states of an application. In the case of a database application, we will need to compare data in selected database tables before and after a test is run. In addition, we will need a set of seed data that is loaded for the entire test suite.

On a positive note, most, if not all, services were nicely defined using a Spring application context. So as not to attempt too much at once, I decided to ensure that all functionality from the service layer on down is tested. That does leave untested code in the presentation (and possibly web) layer, which matters if there is a lot of business logic in there. But that's for another day... today I will talk about how I introduced functional testing in a Java EE application, downstream of the services layer.

Broadly, here are the steps:

  1. Create test data
  2. Create and configure annotations to be used in tests
  3. Implement the TestExecutionListener interface such that:
    1. The beforeTestClass method accesses the class level annotation, creates a new db connection, stores it on the testContext, and inserts setup data.
    2. The beforeTestMethod method accesses the method level annotation and uses the Spring db connection to insert test input data.
    3. The afterTestClass method accesses the connection from the testContext and rolls back the transaction, thereby rolling back the setup data.
  4. Check post-run database data with data in the expected dataset to determine pass/fail status of the test.

The technologies I used for this are:

  • Spring Testing Framework
  • DBUnit
  • Annotations
  • Maven

and inspiration from an earlier post where I had talked about applying cross-cutting concerns to tests.

Here is a picture showing the moving pieces:

Create test data

There are three sets of data that are needed:

  1. Seed data required for the entire test suite
  2. Data that forms the input for one test case
  3. Data that forms the expected output of the same test case

This may seem like a daunting task at first, but here is a process that will make it much, much simpler, though not completely painless.

Since we are talking about a JEE database application, we will attempt to capture the before and after states of the tests in terms of data. We will 'freeze' that state of data in XML files and check them into source control as part of the code-base (typically in test/resources/dbunit). I can see people going... XML file creation... that's a show stopper. But with the maven-nddbunit-plugin described here, you will see that it's not that difficult at all!

So, as a one-time activity, to test your service you will need to create data in a database (any database; it could be your own development database). This database should have data to test your service. Typically this will exist from development anyway, so if you haven't discarded or modified that data, you are halfway there already. Note that at this stage the database will contain data not only for the service use case but also set-up data for the entire application.

Remember that the XML files will need to run against a different database instance (typically a test database) than the one they were extracted from, and we cannot assume that any data is present in that test database. We will assume, though, that the objects in the test database are current with those in development, in that the tables/views and database procedures in both databases are identical.

Next we will use the maven plugin mentioned earlier to create the XML files. Detailed documentation can be found on the plugin's website, but here's the gist:

You configure the plugin in your project’s pom, just like any other maven plugin, in the build section. The configuration specifies a jdbc URL, a username and a password. It has two goals, export and autoExport.

The export goal can be used to extract a set of data, as specified by a SQL query, into an XML file. With this goal you are responsible for the order of the extract. So if tableA depends on tableB, then you have to ensure that the order is maintained by specifying the order of the queries in the configuration section of the plugin: the query pulling data for tableB should precede the query pulling data from tableA.

The more sophisticated autoExport goal can be used to specify a base table and a where-clause. The plugin then chases the foreign keys in the database and extracts all the data in the right order, so that you do not get a constraint violation when (later on) upserting that data into your test database.

Let's look at an example: assume you are testing a Teller service's balanceCheckBook() method. This method needs rows in the MONEY_TRANSACTION table with credit and debit entries spanning several months, because balanceCheckBook() has to exercise the part of the code that reconciles over quarters, say. So having those transactions in the MONEY_TRANSACTION table is crucial for you to be able to test your service. You've taken the trouble to create transactions in your development database; in doing so, all the data in the dependent tables already had to exist (or you had to create it at some point). When you extract data from the MONEY_TRANSACTION table, you have to ensure that you also extract data from those dependent tables. And that could start you down the slippery slope where TABLEA depends on TABLEB, which depends on TABLEC, which loops back to depend on TABLEA! It would be a nightmare to extract all those rows of data, and before you know it, you'll find yourself making your way to the DBA to ask for the permissions (or syntax) to extract the entire database! That's not a good idea.

That's where the autoExport goal of the maven-nddbunit-plugin comes in useful. By specifying a base table, in our example MONEY_TRANSACTION, and an optional WHERE clause, you can rely on the plugin to create an XML file of the minimal data (across several dependent tables) that needs to be inserted first in order to insert the required rows in the base table.

Here is an example of interacting with the plugin using the export goal:

C:\projects\Acme> mvn nddbunit:export
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building Acme-Core
[INFO]    task-segment: [nddbunit:export]
[INFO] ------------------------------------------------------------------------
[INFO] [nddbunit:export {execution: default-cli}]
[INFO] 1. sales-input-account
[INFO] 2. sales-input-create-account
[INFO] 3. sales-input-delete-account
[INFO] 4. sales-input-update-account
[INFO] 5. sales-result-one-account
[INFO] 6. sales-result-more-than-one-account
[INFO] 7. sales-result-domestic-account
[INFO] 8. sales-result-international-account
[INFO] 9. sales-seed-countries
[INFO] 10. sales-seed-states
[INFO] 11. sales-seed-currencies
[INFO] 12. sales-seed-users
[INFO] Enter the number next to the Id that you want to export, 0 to quit
3
[INFO] Exporting to DataSetPath: C:\projects\Acme/src/test/resources/dbunit/sales/sales-input-delete-account.xml using URL: jdbc:oracle:thin:@somehost:1521:somesid...
Successfully wrote file 'C:\projects\Acme/src/test/resources/dbunit/sales/sales-input-delete-account.xml'

Similarly, if using the autoExport goal, here is a sample output:

C:\projects\acme-core> mvn nddbunit:autoExport
[INFO] Scanning for projects...
[INFO] ------------------------------------------------------------------------
[INFO] Building Acme Core
[INFO]    task-segment: [nddbunit:autoExport]
[INFO] ------------------------------------------------------------------------
[INFO] [nddbunit:autoExport {execution: default-cli}]
[INFO] 1. sales-region-input
[INFO] 2. sales-market-share-input
[INFO] 3. accounts-teller-input
[INFO] Enter the number next to the Id that you want to export, 0 to quit
3
[INFO] Accessing URL: jdbc:oracle:thin:@somehost:1521:somesid as user superman...
[INFO] Ready to export:
[INFO]  LEDGER (9 rows).
[INFO]          ACCOUNT_TYPE (1 row).
[INFO]          BANK_BRANCH (1 row).
[INFO]          TEAMMATE (2 rows).
[INFO]          ACCOUNT_CODE (1 row).
[INFO]          COUNTRY_CODE (1 row).
[INFO]          US_STATES (1 row).
[INFO]                  ACCOUNT (3 rows).
[INFO]                  TELLER (1 row).
[INFO]                  CLEARING_HOUSE (5 rows).
[INFO]                  BANK_BRANCH (1 row).
[INFO]                  SHARE (1 row).
[INFO]                  TRANSACTION_CODE (1 row).
[INFO]                          MONEY_TRANSACTION (2 rows).
[INFO] Exporting to DataSet path: C:\projects\acme-core/src/test/resources/dbunit/accounts/teller/teller-input.xml...
[INFO] Do you want to continue to export 30 rows in 13 tables to this file (Y|N)?
y
[INFO] File written...

At this stage, you have a bunch of XML files that represent your test case input (including setup) and another file (or two) that represents your expected output. From this point on, we no longer need the database data, and it can be modified or deleted. Note that the datasets have data in the correct order (for insert) to prevent foreign key violations.

Here is a sample XML file that is generated. Note that the element called table points to the name that DBUnit will use to insert into. So for input datasets, ensure that the table name is correct. For the corresponding output dataset, the table name is not important, so long as the same name is used for the compare.

<?xml version='1.0' encoding='UTF-8'?>
<dataset>
  <table name="test_table1">
    <column>COL1</column>
    <column>COL2</column>
    <row>
      <value>slo</value>
      <value>jo</value>
    </row>
    <row>
      <value>fast</value>
      <value>lane</value>
    </row>
  </table>
</dataset>

Create and configure annotations to be used in tests

Next we need to write the annotations that will be used on the test classes. I have already talked about how to write annotations here. In this case we will need two annotations: one, used at class level, to specify the seed data that is loaded for the entire test suite, and the other, specified at method level, for the data that is needed by each test (method).

The class level annotation can be specified like so:

@AcmeDBUnitSetUp(action=AcmeDBUnitSetUp.ACTION_REFRESH,
        setUpDataSetList={"classpath:dbunit/setup/setup.xml",
                                        "classpath:dbunit/setup/setup2.xml"})
public class SalesServiceTests
          extends AbstractTransactionalJUnit4SpringContextTests {
...

And at the method level:

@Test
@AcmeDBUnitRefresh(action=AcmeDBUnitRefresh.ACTION_REFRESH,
        dataSetList={"classpath:dbunit/sales/areas.xml",
                              "classpath:dbunit/sales/territories.xml"})
public void testSomething(){
...

Since our test inherits from AbstractTransactionalJUnit4SpringContextTests, transaction semantics are already configured. Therefore I do not have to annotate each test method with @Transactional or @Rollback explicitly. Tests will automatically roll back after each execution, leaving the db as it was before the test was run.

Implement the TestExecutionListener interface

Now we need to implement the TestExecutionListener interface. This interface has before and after callback methods at both method and class level (new in Spring 3.0).

We will need to implement the beforeTestClass method and the beforeTestMethod method. (Note that since we will use the @TestExecutionListeners annotation at class level, we should not use JUnit4‘s or TestNG‘s @BeforeClass annotation as that will conflict with the similarly named methods on the TestExecutionListener interface).

In the beforeTestClass method implementation, we will access the annotation to get the path to the dataSet and then use the following code to insert data:

DataSource ds = (DataSource)testContext.getApplicationContext().getBean("dataSource");
IDatabaseConnection connection =
                      new DatabaseDataSourceConnection(ds, schemaName);
testContext.setAttribute("connection", connection);
...
Resource dataSetResource = testContext.getApplicationContext()
                                 .getResource(pathFromAnnotation);
IDataSet dataSet = new XmlDataSet(dataSetResource.getInputStream());
...

DatabaseOperation.REFRESH.execute(connection, dataSet);

There are a couple of things that are of note:

  1. The connection obtained here is a new connection, not the one used by Spring and therefore by the AbstractTransactionalJUnit4SpringContextTests class.
  2. That connection is placed in the testContext, so that it can be rolled back later in the afterTestClass method implementation.
  3. The use of DatabaseOperation.REFRESH. This causes DBUnit to leave data that already exists in the db untouched, and to insert or update data that is missing or has changed.

In the afterTestClass method implementation, we access the connection object from the testContext and roll back the transaction such that setup data is no longer in the database.

In the beforeTestMethod implementation we do the exact same thing as in the beforeTestClass method implementation except we use Spring’s existing connection so that rollback semantics are in place.

...
Connection sqlConnection = DataSourceUtils.getConnection(ds);
IDatabaseConnection connection = new DatabaseConnection(sqlConnection, schemaName);
...

Here, since we have used DataSourceUtils.getConnection, we are guaranteed to get the connection object that is passed to the JdbcTemplate that is used by AbstractTransactionalJUnit4SpringContextTests to cause a rollback. And since DBUnit is also using the same connection, the rollback will occur as intended by Spring.

This picture will explain what’s going on:

Check post-run database data with data in the expected dataset

Now that data is set up before the test class is loaded, and data is also injected appropriately before each test, all we have to do is call our service method, have it do what it does, and then check the results.
The call to compare looks like:

...
public void testSales(){
  //Calculate raise for empId=1
  salesService.calculateRaise("1");
  compareDbResults("SELECT emp_salary FROM emp, dept WHERE emp_dep_id = dep_id AND emp_id = 1 AND dep_id=2",
                 "classpath:dbunit/employeeServices/calculate-raise/results/emp-salary.xml");
...

Here we see that a query is passed in to compare the current state of the database with a dataset that was pre-determined using the exact same query in the configuration of the maven-nddbunit-plugin.
The implementation of the compareDbResults method is something like this:

public void compareDbResults(String sqlQuery, String dataSetPath) {
...
DataSource ds = (DataSource)applicationContext.getBean("dataSource");
Connection sqlConnection = DataSourceUtils.getConnection(ds);
connection = new DatabaseConnection(sqlConnection, getSchemaName()); 

org.springframework.core.io.Resource dataSetResource = this.applicationContext.getResource(dataSetPath);
IDataSet expectedDataSet = new XmlDataSet(dataSetResource.getInputStream());

QueryDataSet actualDataSet = new QueryDataSet(connection);
actualDataSet.addTable("temp", sqlQuery);

Assertion.assertEquals(expectedDataSet, actualDataSet);
...
}

Conclusion

We have seen a powerful way to do black box testing of a JEE database application where we are comparing before and after states in the database to assert success. DBUnit offers a good mechanism to capture the state of a database in xml and allow us to apply that data using broad strokes via the REFRESH operation. The maven-nddbunit-plugin gives the added advantage of creating and managing the huge amount of xml data that will need to be produced to carry out the tests.

Compliance to Corporate Coding Standards

November 12th, 2010

In software shops, code compliance is a big deal. At most of the places I have worked, I have often found managers (and indeed, myself) saying: I don't care what coding standards we use, let's just be consistent, use one set and be done with it!

But there's more to that statement than meets the eye. Timing is an issue. How to propagate the 'chosen' configuration amongst the developer groups is another. And lastly, how do we enforce compliance?

Here’s how I overcame this last problem, recently.

The tool I used is Jacobe. It is a pretty comprehensive code 'beautifier', judging from the default configuration file that it runs off of. Very broadly, Jacobe is passed a configuration file and, based on the parameters specified in it, recursively steps through all the java files specified in the input parameter and formats each according to the configuration passed. It either overwrites the existing java file (if the -overwrite flag is set), or produces a file with a '.jacobe' extension in the same directory as the original file.

Jacobe can be invoked in several ways:

From the command line.

By specifying as input the root directory of your java code, Jacobe picks up all the *.java files under it and formats them.

C:\jacobe\jacobe.exe -cfg=C:\jacobe\sun.cfg
-overwrite C:\projects\AcmeProject/src/main/java/com/nayidisha/plugins/jacobe

Note that there is no way we can enforce that developers will invoke this command during development.

Using the Eclipse-Jacobe Plugin

Another way to format files is to use the Jacobe Eclipse plugin from within the Eclipse IDE. As of the current release, the plugin adds an icon to the Eclipse toolbar which the developer has to explicitly press to format files.


There is a good chance that developers will forget to press the format button and press the (regular) save button instead. And then there's always the issue of those folks who do not use Eclipse or (gasp) any IDE at all!

Using the Maven-jacobe Plugin

Yet another way to format files is to use the maven-jacobe-plugin. This is invoked at build time and can be applied to the entire code base in one go. The plugin is configured in the pom and then the following command can be issued from the project root.

mvn jacobe:jacobe

Again, there is no systematic way to ensure that developers will invoke this plugin during development time. It can be systematically invoked at build time, but formatting at build time doesn’t have any discernible advantage.

Using a Pre-commit trigger to Source Control

Jacobe can be invoked by configuring a pre-commit trigger on CVS, SVN, P4 (or a similar source control system). However, checking code compliance on a pre-commit trigger is not a good idea for many reasons, the main one being that what gets checked in is different from what the developer has on her machine. Also, I doubt checking in modified code without testing is going to pass many audits ;)

So, what is the solution?

All the above techniques (save the last) rely on developer discipline and are highly error prone. And the last is unsavory for many reasons.

So here's a process that I used recently that does the trick. It involves the use of yet another Maven plugin that I wrote, the maven-ndjacobe-plugin. This plugin has, in addition to the format goal, a check goal for verifying compliance too. You can download the plugin from here, and its documentation is here.

First, as a one-time process, find a time when your project is reasonably stable, that is, not too much churn is going on in the code base. Then run the process in the picture below:


Here, in the green box, we have run the format goal of the plugin. By doing that, jacobe will format/beautify your entire code base (as configured in the plugin configuration). Build it and test it, to make sure that nothing unexpected has happened to the code during formatting. After checking in the ‘beautified’ code, you are ready for the development process shown below:

Here, we see that developers may (or may not) use the Jacobe plugin in their IDE to format code before/while saving it. But during the build process, binding the ndjacobe plugin's check goal to an early phase of the build lifecycle can enforce compliance of the formatting of the code.

The ndjacobe:check goal can be configured to fail the build (failOnError=true, the default) if the check does not pass. This configuration can be specified in the corporate pom or in the project's pom, and can be bound to the validate or initialize phases (pre-compile).

Another parameter that is useful if the check fails is the keepJacobeFileForFailedMatches flag (default=false). Setting this to true will also keep the Jacobe-generated .jacobe file, which can then be used for comparison to see what part of the formatting failed and address it accordingly.

Using the ndjacobe:clean goal will remove any spurious .jacobe files from the codebase.

Since there is no way of getting around a build-time check, we can enforce compliance to the corporate coding standards of all java code going into source control. This proves to be a valuable on-going development process that ensures that your code is 100% compliant with the corporate standard as specified in the jacobe configuration file.

A process to ensure adequate test code coverage

September 21st, 2010

Writing tests is a pain, especially if you have to write them for existing code. Recently I was tasked with making an existing code base regression-proof. The best way I could think of to approach this (rather utopian) task was to establish a long-running process that would be incremental and measurable.

The first decision was to determine at what level tests should be written. Given that this was a web application written in Java, I could use HttpUnit and Selenium to test the web layer, all the way down to true unit tests at the DAO or domain-object level. However, for an existing code base, I found that testing from the service layer (in this case, Spring-managed services) on down was the most effective way to get the most ROI. I chose the Spring Testing Framework and JUnit4 as my testing framework.

The next thing was to determine a way to measure how much of my code was being covered by the tests. There are several tools available to do this, like Clover and Cobertura. I chose the latter only because it is well supported via a maven plugin and it's free!

Lastly, I needed a process that could be followed over the next several months by several developers and with metrics to measure progress. To enable this I used the Eclipse IDE, Maven and the plugins that work in the maven eco-system.

There is a lot of documentation (and a reference implementation) of the Spring Testing Framework that will explain how to write tests, so I will not cover that here. Instead, I will focus on how to measure code coverage and how to establish a process to add tests and measure overall progress.

Code Coverage

Cobertura  can be used in several ways. I chose to use it via the cobertura-maven-plugin. The steps are :

  1. Instrument the code
  2. Write the tests
  3. Run the tests
  4. Measure Coverage
  5. Loop back to 2 till satisfied with the coverage.

In more detail:
  1. Instrument the code – This instruments your compiled java classes (using the ASM bytecode library), placing extra smarts in the code so that it can record when a line or branch of code is visited. If used via the cobertura-maven-plugin, the goal that needs to be run is, not surprisingly, the instrument goal. This places the newly instrumented classes in the ${basedir}/target/generated-classes/cobertura/ directory. In addition, a file called cobertura.ser, which holds serialized coverage data, is placed in the ${basedir}/target/cobertura directory. This file is written to after tests are run and is subsequently used by the reporting component of Cobertura to determine what is covered. To actually instrument the code, you can configure the cobertura-maven-plugin like so:
    <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>cobertura-maven-plugin</artifactId>
        <version>2.4</version>
        <configuration>
            <instrumentation>
                <includes>
                    <include>com/acme/myapp/sales/**/*.class</include>
                    <include>com/acme/myapp/accounting/**/*.class</include>
                </includes>
                <excludes>
                    <exclude>**/*Test.class</exclude>
                </excludes>
            </instrumentation>
        </configuration>
        <executions>
            <execution>
                <phase>package</phase>
                <goals>
                    <goal>clean</goal>
                    <goal>instrument</goal>
                </goals>
            </execution>
        </executions>
    </plugin>

    The clean goal of the plugin merely deletes the cobertura.ser file. Since this is tied to the package phase of the default build lifecycle, running mvn package will cause the code to be instrumented and placed in ${basedir}/target/generated-classes/cobertura.
    Note that tests have been excluded from instrumentation. This is important because, as you modify your tests and re-run them to increase code coverage, if the tests are included in the instrumented code they will be placed first on your classpath (in Eclipse or via the maven-surefire-plugin, as explained later) and your modified test class will never be executed.

    To run the plugin, simply issue:

    > mvn clean package -Dmaven.test.skip=true

    Also note that it is not necessary to run any tests during instrumentation. Therefore they are skipped by using the -D parameter.

  2. Write the tests – The Spring Testing Framework gives us good pointers for writing tests, so let's just skip to the next step.
  3. Run your tests against the instrumented code – Tests can be run either from within your IDE (like eclipse) or via the maven-surefire-plugin which is configured (by default) to pick up all tests in the ${basedir}/src/test/java directory.
    In both cases it is important that the tests are run using the instrumented classes. Since we are defining a process wherein developers can rapidly go through the above 5-step cycle, running the surefire tests will not be optimal (they take too long). Instead we will run selected tests using Eclipse. To set up your run configuration in Eclipse, do the following:
    Go to Run Configurations | New JUnit Configuration | Classpath tab | User Entries node | Advanced button | Add Folders radio button, then navigate down to ${basedir}/target/generated-classes/cobertura and save this configuration. Make sure that this directory is the first of the user entries.
    Now run your test by clicking the run button. When you do so, Cobertura will keep track of what lines of code and branches were exercised by the test.
    Just to complete this discussion, let us see what happens when code coverage is to be determined as part of running all the tests. For this, the cobertura-maven-plugin uses the check goal to run the tests and then update the cobertura.ser file. The cobertura-maven-plugin forks a custom lifecycle (called cobertura) that, in its test phase, replaces the classesDirectory parameter with the value ${project.build.directory}/generated-classes/cobertura. Since the maven-surefire-plugin is configured to run in the test phase, it is subsequently invoked and runs with the new value of classesDirectory, thereby using the instrumented code. To see the configuration of the custom lifecycle, check out lifecycle.xml in the META-INF/maven directory of the plugin.
  4. Measure code coverage – Assuming that the tests ran successfully, the cobertura.ser file will have been suitably updated by the instrumented code. At this point you can run mvn site with the cobertura plugin configured in the reporting section of your pom like so:
     <reporting>
        <plugins>
            ...
            <plugin>
                    <groupId>org.codehaus.mojo</groupId>
                    <artifactId>cobertura-maven-plugin</artifactId>
                    <configuration>
                        <formats>
                            <format>html</format>
                        </formats>
                    </configuration>
             </plugin>
             ...
         </plugins>
     </reporting>

    …you should get, eventually, after your entire site has generated, a set of cobertura reports.

    However, this may be a slow process. That is where the maven-ndcobertura-plugin comes in handy. This plugin can give faster results because it can be invoked outside of the site lifecycle. The plugin can be downloaded from here and its documentation is here.

    When the showCoverage goal of this plugin is run, a LineCoverageRate for the passed in class is shown along with a TotalCoverageRate for the entire codebase.

    >C:\projects\AcmeWebapp\acme-core> mvn ndcobertura:showCoverage -DclassToTest=AccountService
    [INFO] Scanning for projects...
    [INFO] Searching repository for plugin with prefix: 'ndcobertura'.
    [INFO] ------------------------------------------------------------------------
    [INFO] Building Acme Core
    [INFO]    task-segment: [ndcobertura:showCoverage]
    [INFO] ------------------------------------------------------------------------
    [INFO] [ndcobertura:showCoverage {execution: default-cli}]
    Cobertura: Loaded information on 1695 classes.
    [INFO]
    [INFO] Class: com/acme/accounting/AccountService.java: LineCoverageRate: 0.47 (35 out of 75 lines) and BranchCoverageRate: 0.17 (8 out of 48 branches).
    [INFO]
    [INFO]  Please see line-by-line coverage for these classes by running mvn ndcobertura:generateReports.
    [INFO]  Covered Lines: 1616, Total Lines: 117182, Total Coverage Rate: 0.013790513901452441
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESSFUL
    [INFO] ------------------------------------------------------------------------

    Similarly, once the developer has an idea of the code coverage (in terms of number of lines), s/he can run the other goal of this plugin to view cobertura reports showing line-by-line coverage statistics. The command to generate the reports outside of the site lifecycle is:

    >C:\projects\AcmeWebapp\acme-core> mvn ndcobertura:generateReports
    [INFO] Scanning for projects...
    [INFO] Searching repository for plugin with prefix: 'ndcobertura'.
    [INFO] ------------------------------------------------------------------------
    [INFO] Building Acme Core
    [INFO]    task-segment: [ndcobertura:generateReports]
    [INFO] ------------------------------------------------------------------------
    [INFO] [ndcobertura:generateReports {execution: default-cli}]
    [INFO]
    [INFO] Starting report generation in C:\projects\AcmeWebapp\acme-core\target\acme-core-cobertura-reports...
    [INFO] Cobertura 1.9.4.1 - GNU GPL License (NO WARRANTY) - See COPYRIGHT file
    Cobertura: Loaded information on 1680 classes.
    Report time: 22313ms
    
    [INFO] ...Done. Please see reports by clicking on index.html in C:\projects\AcmeWebapp\acme-core\target\acme-core-cobertura-reports
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESSFUL
    [INFO] ------------------------------------------------------------------------

    Here is a sample report:

    Now that developers can rapidly see the code coverage of their tests, they can easily go back to step 2 until they are satisfied with the coverage. Since the Cobertura report and statistics are generated from the numbers recorded in the serialized cobertura.ser file, note that once a line of code is visited, it will forever show up as covered on the report. In other words, the recording in the cobertura.ser file is cumulative until the clean and instrument goals of the cobertura-maven-plugin are run again.

    What about measuring progress?

    The above steps are good for a development environment where developers can check their own work. But it is also important to be able to measure progress for a given code base over time. For that we have to fall back on the QA/production deployment process and continuous integration. The cobertura-maven-plugin can be configured to run the check goal with every nightly run and thereby produce code coverage statistics. From there, either the reporting that comes with continuous integration servers (like Continuum or Hudson) will help to track coverage statistics over time, or you may use the recordCoverage goal of this plugin to record line coverage with every nightly run.
    The recordCoverage goal of the plugin stores the statistics of each run in an XML file. The nightly run environment can be configured to run the recordCoverage goal after tests are run against instrumented code (and the cobertura.ser file is produced).
    Subsequently, the showProgress goal of the same plugin can be used to plot the lines covered over time. This goal uses the XML file produced by the recordCoverage goal, resulting in something like this:

    and for branch coverage:

    The idea is to maximize the blue and minimize the pink.

Now that we have a complete process, from writing the tests to being able to measure progress, it’s just a matter of time before you can claim to have a regression-proof codebase! At least.. that’s the theory!

Using GWT in an enterprise application

May 19th, 2010

I started playing around with the Google Web Toolkit (GWT) to build something beyond the obligatory “Hello World”, and I found many challenges along the way. The GWT docs talk a lot about front-end design and layout, but when it comes to integrating with an enterprise Java application, I did not find much by way of pointers or guides. So I took a look under the covers, and here's the gist of that experiment.

When considering enterprise Java development, the frameworks/technologies that spring to mind are Spring (:) and Maven, so I decided to use both in my “Hello Enterprise” application. In the end, we'll have an application with the following features:

  • A front-end that is written in Java but runs in Javascript.
  • Makes asynchronous calls to the server and updates the front end.
  • Modular, in that there is a clear separation between the presentation and service layers.
  • A multi-module project built with Maven.
  • Can access existing (and new) Spring services.

Before we dive deep, let’s talk a bit about GWT at a high level:

  1. GWT is a Java library that produces Javascript when compiled. The Javascript is bundled in the webapp and invoked via an HTML page.
  2. The Javascript does the heavy lifting, and is typically used for:
    1. Layout of the page, although it can also be used to replace DOM objects (using id tags) in a “shell” HTML page (see the sketch after this list).
    2. Making asynchronous calls to the server.
  3. GWT allows users to interact with back-end services in two modes, both asynchronous (they don’t have to be async, but in today’s world of RIA and AJAX, who cares about synchronous UIs!):
    1. Using RPC (Remote Procedure Calls) to access server-side Servlets, necessarily served from the same server that is serving the HTML (where the GWT Javascript is running).
    2. Using GWT classes in the HTTP package that access the server.
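
Before we get into the stages, here is a minimal sketch of what a GWT entry point looks like; the class name, package and element id below are illustrative and not taken from the downloadable samples. The Java is compiled to Javascript, which then attaches widgets to the shell HTML page.

package com.acme.web.gwt.client;

import com.google.gwt.core.client.EntryPoint;
import com.google.gwt.event.dom.client.ClickEvent;
import com.google.gwt.event.dom.client.ClickHandler;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.ui.Button;
import com.google.gwt.user.client.ui.RootPanel;

// Minimal entry point: the compiled Javascript looks up an element by id in the
// shell HTML page and attaches a widget to it.
public class HelloEntryPoint implements EntryPoint {

    public void onModuleLoad() {
        Button button = new Button("Say hello");
        button.addClickHandler(new ClickHandler() {
            public void onClick(ClickEvent event) {
                Window.alert("Hello, Enterprise!");
            }
        });
        // "anchorPoint" is assumed to be the id of a div in the shell HTML page.
        RootPanel.get("anchorPoint").add(button);
    }
}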

So to really get to the bottom of what is GWT and what is not, it will help to build the application in the following stages:

  1. Set up the Eclipse environment
  2. GWT app that only draws out some HTML objects but uses Maven
  3. GWT app that makes a call to a GWT Servlet using RPC
  4. GWT app that makes a call to a plain Servlet using the HTTP package
  5. GWT app that makes a call to a Spring service using RPC
  6. GWT app that makes a call to a Spring service using the HTTP package

With each of these steps (steps 2 thru 6, actually), there’s a downloadable source and binary file that you can follow along with.

Set up the Eclipse environment

Eclipse is the IDE I chose to use for this project just as a convenience. If you prefer to use UltraEdit or Textpad or NotePad, that’s fine, just skip this step.

If you have decided to use Eclipse (and you are not the purist/bare-bones emacs kind of guy ;) ), you may as well get 2 Eclipse plugins to make your life easier:

  • The m2Eclipse plugin -> Among other things, this lets you record your dependencies in only one place (the pom) instead of in both your project and Eclipse’s classpath.
  • The google-plugin-for-eclipse -> Although this has many uses including running a light-weight server, the only thing I will use it for is running my GWT code from within Eclipse.

Once you have these plugins, your .project file should look like the following:

<natures>
<nature>org.maven.ide.eclipse.maven2Nature</nature>
<nature>org.eclipse.jdt.core.javanature</nature>
<nature>com.google.gwt.eclipse.core.gwtNature</nature>
</natures>

With the m2eclipse plugin you will see Maven dependencies pulled from the pom.xml instead of the Eclipse classpath like so:

The Google Plugin allows you to specify a Run Configuration for running the GWT application in Development Mode (fka Hosted Mode). The advantage of doing this is that you can debug your presentation layer in Java, and while Firebug is all great and dandy, I find it quite cumbersome compared to debugging in Java.

To configure this Run configuration, you must keep the following in mind:

  • Make sure that the main class is com.google.gwt.dev.DevMode
  • Make sure that the following arguments are specified in the arguments tab:
  • -war C:\<<pathToYourProject>>\target\<<yourProjectArtifactId>>-1.0.0 -remoteUI "${gwt_remote_ui_server_port}:${unique_id}" -logLevel INFO -port 8888 -startupUrl <<theHtmlFileThatMakesTheCallToNocache.js>>.html com.acme.web.gwt.Hello

GWT app that only draws out some HTML objects but uses Maven

You can download the application source here and war file here.

As you can see, there’s not much going on here. But it serves as a good starting point to flesh out the pom and project structure. For some reason, the GWT folks place the “war” directory directly under the project root. The structure proposed by GWT is specified here. This flies in the face of the structure that Maven proposes, especially when building several modules in a multi-module project.

Let’s first check out the project structure of this project in Figure A.


Figure A

The GWT module is defined in Hello.gwt.xml. The GWT classes are all in Hello.java. Besides that, there’s just hello.html, the file where anchorPoints are defined and are referenced in Hello.java. web.xml has no significance besides defining a welcome file.

Note that the name of the javascript file (that is invoked in hello.html) below:

<script type="text/javascript" language="javascript" src="GWTMvnNoRPC/GWTMvnNoRPC.nocache.js"></script>

is based on the name specified in the GWT module definition for rename-to:

<?xml version="1.0" encoding="UTF-8"?>
<module rename-to='GWTMvnNoRPC'>
<!-- Inherit the core Web Toolkit stuff. -->
<inherits name='com.google.gwt.user.User'/>
<inherits name='com.google.gwt.user.theme.standard.Standard'/>
<entry-point class='com.acme.firm.client.Hello'/>
<source path='client'/>

</module>

And it corresponds to the directory structure that is produced by the build (actually by the gwt-maven-plugin: compile goal):


Figure B

The pom has packaging of type war, with just the dependencies defined. Here is where the gwt-maven-plugin is declared. Since there are no service interfaces to produce an Async version of, the plugin is configured as below:

<plugin>
<groupId>org.codehaus.mojo</groupId>
<artifactId>gwt-maven-plugin</artifactId>
<version>1.2</version>
<executions>
<execution>
<goals>
<goal>compile</goal>
<!-- not needed because there is nothing to generate as there is no Service for RPC
<goal>generateAsync</goal>
-->
</goals>
</execution>
</executions>
</plugin>

And that’s about it. Run it by deploying the war file to your favorite app server  and you will see the buttons do nothing and the text goes nowhere!
Let’s make it do something more meaningful by adding some RPC calls.

GWT app that makes a call to a GWT Servlet using RPC

Download the source from here and the war from here.

The project structure is shown below. No surprises here, except that we have added a service interface (HelloService.java) and an implementation (HelloServiceImpl.java).


Figure C

Here’s where this graphic comes in handy (look at the section that says “RPC Plumbing Diagram”). Accordingly, HelloService extends RemoteService and HelloServiceImpl extends RemoteServiceServlet.

Also, the GWT module includes in its <source /> element the directory that contains both Hello.java and HelloService.java. That is what tells the generateAsync goal of the gwt-maven-plugin that it needs to produce a HelloServiceAsync.java interface. An instance of that interface is invoked by the Hello GWT class.


Figure D

If you look at the HelloServiceAsync.java, you will see that it uses the annotation that you have set in the corresponding interface (HelloService.java) to specify the url that is being sent to the server.

HelloService.java

@RemoteServiceRelativePath("xyz")
public interface HelloService extends RemoteService {
public double calculateTax(double income, int year) throws IllegalArgumentException;
}

produces

HelloServiceAsync.java
public static final HelloServiceAsync getInstance()
{
if ( instance == null )
{
instance = (HelloServiceAsync) GWT.create( HelloService.class );
ServiceDefTarget target = (ServiceDefTarget) instance;
target.setServiceEntryPoint( GWT.getModuleBaseURL() + "xyz" );
}
return instance;
}
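
For completeness, here is roughly how the generated async interface is shaped and how the Hello class (or any client-side class) would call it. This is a sketch with illustrative names and values, not code from the downloadable sample; GWT.create() is what the generated getInstance() above wraps.

import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.Window;
import com.google.gwt.user.client.rpc.AsyncCallback;

public class TaxCaller {

    // The generated HelloServiceAsync mirrors HelloService, but each method
    // returns void and takes an AsyncCallback as its last parameter, i.e.:
    //   void calculateTax(double income, int year, AsyncCallback<Double> callback);
    private final HelloServiceAsync service = (HelloServiceAsync) GWT.create(HelloService.class);

    public void showTax(double income, int year) {
        service.calculateTax(income, year, new AsyncCallback<Double>() {
            public void onSuccess(Double tax) {
                Window.alert("Tax owed: " + tax);
            }
            public void onFailure(Throwable caught) {
                Window.alert("RPC call failed: " + caught.getMessage());
            }
        });
    }
}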

In this manner you can have many services that can be called by your GWT module.

Note also that the gwt-maven-plugin is configured to drop the async class in the generated-sources directory (the generateAsync goal is bound to the generate-sources phase by default).

Although HelloServiceImpl extends RemoteServiceServlet, it is deployed as a regular servlet in web.xml; there is nothing GWT-specific in the way it is deployed. However, the fact that it extends RemoteServiceServlet is (IMO) the downside of this approach: either I have to write wrapper servlets that extend RemoteServiceServlet and call my business service layer, or I make all my business service objects extend RemoteServiceServlet. Neither option is great, which is why you should continue reading to see the other ways we can skin this cat!
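
For reference, the server side of this approach looks roughly like the following. This is a sketch; the tax calculation is a stand-in, and in a real app it would delegate to the business service layer.

import com.google.gwt.user.server.rpc.RemoteServiceServlet;

// The RPC endpoint: a servlet that extends GWT's RemoteServiceServlet and
// implements the business interface. It is mapped in web.xml like any other
// servlet, at the relative path ("xyz") declared via @RemoteServiceRelativePath.
public class HelloServiceImpl extends RemoteServiceServlet implements HelloService {

    public double calculateTax(double income, int year) throws IllegalArgumentException {
        if (income < 0) {
            throw new IllegalArgumentException("income must be non-negative");
        }
        // Stand-in logic for illustration only.
        return income * 0.25;
    }
}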

GWT app that makes a call to a plain Servlet using the HTTP package

Download the source from here and the war from here.

Let’s begin by looking at the project structure:


Figure E

GWT provides another way to access server-side components, using classes in its HTTP package. The class RequestBuilder is where it all starts: it has methods for making asynchronous calls. The URL passed to RequestBuilder can be any arbitrary string that is mapped (in web.xml) to the servlet (in this case SimpleServlet). The HTTP request carries (via a GET or POST) name/value pairs from the client UI to the servlet, where they are peeled off the request and passed on to a business service. The results from the business service are placed in the response object, which is accessed via an asynchronous callback defined in the HTTP API (RequestCallback).
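
Here is a rough sketch of that round trip from the client side; the URL and parameter names are illustrative, not taken from the sample.

import com.google.gwt.core.client.GWT;
import com.google.gwt.http.client.Request;
import com.google.gwt.http.client.RequestBuilder;
import com.google.gwt.http.client.RequestCallback;
import com.google.gwt.http.client.RequestException;
import com.google.gwt.http.client.Response;
import com.google.gwt.http.client.URL;
import com.google.gwt.user.client.Window;

public class TaxHttpClient {

    public void fetchTax(double income, int year) {
        // Any arbitrary path works, as long as web.xml maps it to SimpleServlet.
        String url = GWT.getModuleBaseURL() + "simpleServlet?income=" + income + "&year=" + year;
        RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, URL.encode(url));
        try {
            builder.sendRequest(null, new RequestCallback() {
                public void onResponseReceived(Request request, Response response) {
                    // The servlet writes the result as plain text into the response body.
                    Window.alert("Tax owed: " + response.getText());
                }
                public void onError(Request request, Throwable exception) {
                    Window.alert("HTTP call failed: " + exception.getMessage());
                }
            });
        } catch (RequestException e) {
            Window.alert("Could not send the request: " + e.getMessage());
        }
    }
}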

The difference between this and the RPC call is that here, SimpleServlet does not extend RemoteServiceServlet. Also, note that there is no business interface. That, IMO, is a bad thing: not having a clear contract for what the client can call slows down development and complicates testing. OTOH, our servlet is just any servlet, which is a good thing! Just goes to show, you can’t have your cake and eat it too!

Also, since there is no business interface, there is nothing to produce an Async interface for. So there is one less step in the build process. There is still javascript produced that is accessed by the shell html file as usual.

Moving on to the next level in our integration to back-end service, we’ll look at Spring-GWT integration.

GWT app that makes a call to a Spring service using RPC

Download the source from here and the war from here.

Let’s first look at the project structure:

Figure F

I have tried to separate the presentation and service layers so that the service layer does not have any dependency on the presentation layer (only the presentation layer depends upon the service layer). This is a Maven multi-module project. The presentation module (called GWTMvnRPCWithSpring-Presentation) has all the GWT dependencies in its pom and is built using Spring MVC. So the dispatcher-servlet.xml file has two URL mappings: a SimpleUrlHandlerMapping and a GWTHandler. Just for reference, I threw in a RegularController that is pointed to by the SimpleUrlHandlerMapping. This controller is nothing special, but the GWTHandler is where all the magic happens. This class is part of a GWT Widget Library.

Right from GWTHandler’s  javadoc:

The GWTHandler implements a Spring  HandlerMapping which maps RPC from
URLs to RemoteService implementations. It does so by wrapping service
beans with a GWTRPCServiceExporter dynamically proxying all
RemoteService interfaces implemented by the service and delegating
RPC to these interfaces to the service.

Therefore, when I define a GwtBusinessService1, you will see that the interface extends RemoteService and defines one business method: calculateTax(…). The call to this business interface is dynamically proxied onto the Spring service (businessService1) that is injected into the GWTHandler:

<bean id="urlMapping1" class="org.gwtwidgets.server.spring.GWTHandler" >
<property name="mappings">
<map>
<entry key="/hello/service1.htm" value-ref="businessService1" />
<entry key="/hello/service2.htm" value-ref="politicalService" />
</map>
</property>
<property name="order" value="2" />
</bean>
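
To make that wiring concrete, here is roughly what the business interface and the Spring bean behind businessService1 look like. This is a sketch; the implementation class name and method body are illustrative.

import com.google.gwt.user.client.rpc.RemoteService;

// The RPC contract: a plain interface that extends RemoteService. GWTHandler
// proxies calls arriving at /hello/service1.htm onto the Spring bean that
// implements it.
public interface GwtBusinessService1 extends RemoteService {
    double calculateTax(double income, int year);
}

// The Spring service (registered as "businessService1" in the application
// context). It knows nothing about servlets or GWT beyond the marker interface.
class BusinessService1Impl implements GwtBusinessService1 {
    public double calculateTax(double income, int year) {
        // Stand-in logic for illustration only.
        return income * 0.25;
    }
}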

The dispatcher-servlet.xml has only the Spring beans needed for Spring MVC, while spring-services-core.xml is the application context defined in the services module (GWTMvnRPCWithSpring-services) and contains only services that are presentation-agnostic. Both contexts are made available in web.xml.

Other than that there is nothing else that is noteworthy. Let’s move on to accessing Spring services using the HTTP package.

GWT app that makes a call to a Spring service using the HTTP package

Download the source from here and the war from here.

Observe the project structure in Figure G:

Figure G

Comparing it to Figure F, the first thing to notice is the absence of the business interfaces. In their place is a SimpleServlet defined in the presentation package. The SimpleServlet is a vanilla implementation (in that it does not extend any specific interface or class except HttpServlet). Here is what this servlet does (see the sketch after this list):

  • Accept requests (typically async) from the GWT module.
  • Peel off name/value pairs from the Request object.
  • Look up Spring services using WebApplicationContextUtils.
  • Make the service call, passing in the values.
  • Process the return value(s) from the service.
  • Place the value(s) in the response.
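
A minimal sketch of such a servlet follows, assuming a presentation-agnostic TaxService bean named "taxService" in the Spring context; the interface, bean name and parameter handling are illustrative, not from the sample source.

import java.io.IOException;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.springframework.web.context.WebApplicationContext;
import org.springframework.web.context.support.WebApplicationContextUtils;

// A plain servlet bridging GWT HTTP calls to Spring services. It is mapped in
// web.xml to whatever URL the client-side RequestBuilder points at.
public class SimpleServlet extends HttpServlet {

    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        // Peel the name/value pairs off the request.
        double income = Double.parseDouble(request.getParameter("income"));
        int year = Integer.parseInt(request.getParameter("year"));

        // Look up the Spring context loaded (via web.xml) from spring-services-core.xml.
        WebApplicationContext context =
                WebApplicationContextUtils.getRequiredWebApplicationContext(getServletContext());
        TaxService taxService = (TaxService) context.getBean("taxService");

        // Invoke the presentation-agnostic service and place the result in the response.
        response.setContentType("text/plain");
        response.getWriter().print(taxService.calculateTax(income, year));
    }
}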

Also, there is no Spring MVC here (no dispatcher-servlet.xml), just plain Spring (only spring-services-core.xml). The GWTHandler cannot be used here, because for the GWTHandler to work we would need interfaces that extend RemoteService, which we do not have. And we do not have those interfaces because HTTP API calls can be used to make any call on the server via the HttpRequest/Response, unlike RPC, where calls are made on an async interface produced by a GWT-supplied utility.

That’s it folks! I found this exercise rewarding in learning GWT technologies to build a real-world enterprise app that can scale. My next step is to use smartGWT and then the holy grail of AJAX frameworks: GWT-GoogleMaps! Maybe I’ll blog about it sometime… stay tuned!


Getting Eclipse to use dependencies from your POM

April 15th, 2010 1 comment

I’ve been using the m2eclipse plugin for some time now and I can never (consistently) get it to make Eclipse use the dependencies in my pom.

I tried the enable/disable dependency management switches and also enable/disable Nested modules.

In the end, what I ended up doing is adding the following entry (the exported MAVEN2_CLASSPATH_CONTAINER line) to my .classpath:

<?xml version="1.0" encoding="UTF-8"?>
<classpath>
<classpathentry kind="src" path="xxx-core/src/test/java"/>
<classpathentry kind="src" path="xxx-web/src/main/java"/>
<classpathentry kind="src" path="xxx-core/src/main/java"/>
<classpathentry kind="con" path="org.eclipse.jdt.launching.JRE_CONTAINER"/>
<classpathentry exported="true" kind="con" path="org.maven.ide.eclipse.MAVEN2_CLASSPATH_CONTAINER"/>
<classpathentry kind="output" path="bin"/>
</classpath>

That did the trick!

Of course, you have to have the m2eclipse plugin to get this to work ;)

The Pink Arrow – Setting up a Corporate Dev Environment With Archiva – Part 2

February 10th, 2010 2 comments

In a previous post I mentioned a process for easily transferring artifacts from a development/un-monitored Maven repository to a production/monitored Maven repository. In that post, I used a graphic that depicts this process with a pink arrow.
The unmonitored repository can have any combination of artifacts because it is connected to the internet. What gets into the monitored repository, however, can be systematically controlled. There are many ways to do this, but I opted to do it via a Maven plugin that can be downloaded from here. Once you have downloaded it, make sure it is uploaded to the Production Repository using Archiva (include the pom, to keep it clean).

Information on the plugin and its use is here.

In a later post I will talk about how this plugin can be incorporated into a corporate-wide pom along with other goodies like static analysis reports and testing plugins.

Using Archiva For Setting Up A Corporate Development Environment – Part 1

January 17th, 2010 No comments

In many corporate development environments, there is a race to embrace the open-source tools that are out there. While the legal beagles agonize over licensing details (GPL, LGPL or whatever), the techie types are straining at the leash to try out the latest and greatest “stuff” beckoning from Maven repositories all over the world. Here is one approach that can be used to control what finally makes it into production, yet not stymie the creative development process.

Looking at the picture below, we see 3 repositories that are managed by Archiva: the Development, Production and Third Party repositories, represented by the cylinder-like structures (typically reserved for databases, but note that Maven repositories are file-system based).

The key to this setup is that only the Development repository is connected to the internet via proxy connectors. Therefore developers are free to use this repository as their sandbox for testing new projects/software as dependencies. The Third Party repository is meant for software that your company may have purchased, or even home-grown software projects (Oracle drivers, weblogic.jar, etc.). Therefore, artifacts are pushed to it via a manual deploy:deploy-file. The Production repository is not accessible to the internet. The only way that artifacts can get into the Production Repository is via a process that transfers artifacts from the Development Repository to the Production Repository. This process (represented by the pink arrow) ensures that what gets into the Production Repository is vetted by your organization. I will discuss this process (the pink arrow) in a later post.

Also note that I have configured a parallel set of snapshot repositories and snapshot repository groups. This is important, at least with Archiva, because I found that Archiva does not play well when the metadata for the same artifact (maven-metadata.xml) from the released and snapshot repositories is handled together in one group.


Figure A

The two build environments at the lower end of the figure represent the touch-point between the source code and the Archiva environment. Maven expects a file called settings.xml, which (typically) lives in the $USER_HOME/.m2 directory, to tell it where to look for Maven artifacts. The settings.xml file points to repository groups. The repository groups funnel one or more managed repositories together. Note that the Development Repository Group pulls from the Production, Development and Third Party repositories, in that order. The Production Repository Group only pulls from the Production and Third Party repositories. In this manner, what gets built for production is assured to be vetted by your organization.

The firewall separates the corporate environment from the internet, with all the goodies that the open-source community has to offer hosted on several Maven repositories out there. Archiva acts as the gateway where we can throttle what comes into our corporate environment.

With that background, let’s dive into the details to see how this can be achieved.

1. Install archiva from here. For doing so, I would recommend the webapp version because it will automatically deploy to a JEE server like tomcat and be web-enabled out-of-the-box. (The standalone version runs in a Plexus App server, which may be yet another app server to learn to configure (at least for me)). Instructions on how to install as a webapp are here.

2. Install a database: This is used for storing repository info as well as for authenticating users (for Archiva admin). To begin with, I just went with the pre-packaged Derby db. Note, however, that altho’ the documentation here suggests that we can use any version of Derby above 10.1.3.1, I found that 10.5.x did not work for me (it resulted in ArrayIndexOutOfBoundsExceptions), so I went ahead and used the 10.1.3.1 version instead. Also, Derby is a file-based db, so in the archiva.xml file (in the TOMCAT_HOME/conf/Catalina/localhost directory), I configured the db like so:

<Context path="/archiva"
docBase="${catalina.home}/archiva/apache-archiva-1.3.war">

<Resource name="jdbc/users" auth="Container" type="javax.sql.DataSource"
username="sa"
password=""
driverClassName="org.apache.derby.jdbc.EmbeddedDriver"
url="jdbc:derby:C:\archivaDatabase\users;create=true" />

<Resource name="jdbc/archiva" auth="Container" type="javax.sql.DataSource"
username="sa"
password=""
driverClassName="org.apache.derby.jdbc.EmbeddedDriver"
url="jdbc:derby:C:\archivaDatabase\archiva;create=true" />

<Resource name="mail/Session" auth="Container"
type="javax.mail.Session"
mail.smtp.host="localhost"/>
</Context>

3. Make sure that Archiva runs: Fire up Tomcat and make sure that the Archiva webapp is up (check the logs, and view the Tomcat manager app to list all the deployed apps; Archiva should be one of them and successfully deployed).

4. Make an admin archiva user: Navigate to http://localhost:yourPort/archiva. You should be asked to make an admin user. Do so.

5. Edit repositories: Go to the repositories node. You should see two Archiva-managed repos and two repos out on the internet. Change the location of the Archiva-managed repos by editing them and pointing them to some meaningful directories. (Note: do pre-create the directories, or Archiva barfs with an NPE. If that happens, the second try works, because the directory gets created on the first try.) I created two more managed repositories and ended up with:

C:\archivaManagedRepositories\prodReleases -> Production Repository
C:\archivaManagedRepositories\devReleases -> Development Repository
C:\archivaManagedRepositories\thirdParty -> Third Party Repository
C:\archivaManagedRepositories\snapshots  -> Snapshot Repository

as my directories, instead of burying them inside the Tomcat installation dir. Also note that the WebDAV URL that is mentioned is kind of misleading, in that you would expect a physical directory in TOMCAT_HOME. Instead, click here and you will see that the URL is in this format:

http://[URL TO ARCHIVA]/repository/[REPOSITORY ID]

That helped me demystify why the WebDAV URL and the directory did not synch up.
Note that it may be a good idea to also create a snapshot repository for work-in-progress uploads that need to be shared but are not ‘released’ yet.

6. Set up a network proxy for your office or home. This will allow Archiva to access the internet to pull info into the repositories it manages. Do this using Archiva’s web interface; it is pretty straightforward.
7. Set up a proxy connector. This ties a remote repository, via a network proxy, to an Archiva-managed repository. Ensure that proxy connectors are set up only for the Development and Snapshot repositories and not for the Production and Third Party repositories. By doing so, we are allowing only the Development and Snapshot repositories to access the internet and pull in artifacts that can be experimented with. The Production and Third Party repositories are sheltered, and therefore what gets in there is predictable and vetted by the corporation. How that’s done (getting stuff into the Production Repository) I’ll discuss in the next post. After configuration, here’s what my proxy connectors looked like:

Note that you can set black and white lists on proxy connectors. For the development repository, you may consider configuring black lists for certain artifacts that you know violate some corporate policy. Still, by and large, the development repository can be left open (un-restricted) for developers to experiment and research. It is the production repository that is more of a concern. Note that for the production repository there is no proxy connector, and therefore there is little danger of unwanted stuff getting in there.

8. Set permissions on repositories: Ensure that the guest user (that is automatically set up when Archiva is installed) has permission to read from all repos and write to the DEVELOPMENT repository. To do so, log into the Archiva console as sys admin, then go to User Management | guest | edit roles and make sure that the page looks like this:

9. Setting up Repository Groups: Repository groups are used to bunch repositories together. Since we will have two environments (Prod and Dev) we will make two such Repository Groups. Mine looked as follows:

10. Setting up settings.xml: Here we will make two settings.xml files, one for production use and the other for non-production. settings.xml is what Maven uses to determine where to go to get artifacts. Out of the box, Maven uses a settings.xml that points to central (repo1). We change that behavior by using two such files, one for production and one for non-production, each pointing to our Archiva repository groups instead. Everything else is configured within the Archiva groups.

Development Settings.xml

<settings>
<!-- use a more sane sounding dir instead of "Documents and Settings" with embedded spaces. -->
<localRepository>c:/temp/m2repo/</localRepository>
<interactiveMode>true</interactiveMode>

<mirrors>
<mirror>
<id>archiva.default</id>
<url>http://localhost:5080/archiva/repository/devGroupRepos/</url>
<mirrorOf>*</mirrorOf>
</mirror>
</mirrors>
</settings>


Production Settings.xml
<settings>
<!-- use a more sane sounding dir instead of "Documents and Settings" with embedded spaces. -->
<localRepository>c:/temp/m2repo/</localRepository>
<interactiveMode>true</interactiveMode>

<mirrors>
<mirror>
<id>archiva.default</id>
<url>http://localhost:5080/archiva/repository/prodGroupRepos/</url>
<mirrorOf>*</mirrorOf>
</mirror>
</mirrors>
</settings>

11. Create a Maven project with dependencies: Download Maven from here. As an optional step, install the m2eclipse plug-in from here. This will just help you start a new Maven project in Eclipse; if you prefer to do it manually, that’s fine. Ensure that the new project’s pom.xml has some dependencies in it.

12. Run the Maven project: First ensure that the production settings.xml file (from above) is placed in your $USER_HOME/.m2 directory. Then navigate down to the directory that holds your project’s pom.xml and fire off mvn package. This should fail, complaining that it cannot find plug-ins or dependencies. That is good: there is no access to the internet for the Production Repository or the Production Repository Group that the production settings.xml is pointing to.
Next, change the settings.xml file in $USER_HOME/.m2 to the contents of the development settings.xml and run mvn package again. This time you should see the artifacts being downloaded. If you look at the directory that you set up for devReleases in Archiva, you should see it filling up. Also filling up will be the localRepository that you specified in settings.xml, which in this case is c:/temp/m2repo. And your build should be successful (at least the dependencies part). If you have got this far, that’s progress!

13. Set up the build environments: Distribute the development settings.xml file to your development group; I would recommend placing it on the corporate wiki or some other internally accessible place. Let the developers use it to build their projects. However, on the production build box (or wherever QA or production builds are made), set up the production settings.xml (in the $USER_HOME/.m2 directory of the build user). This settings.xml points to the Production Repository Group which, in turn, points to the Production Repository, which points nowhere!

14. Populate the PRODUCTION repository – the pink arrow: As discussed above, the PRODUCTION repository, prodReleases, does not proxy any internet Maven repository. As a result, it will not have any artifacts downloaded into it, and when the first production build is done, it will fail miserably, not finding even the basic Maven plug-ins. That is where a separate process comes in, one that validates what goes into that repository. I will discuss that process, denoted by the pink arrow (in Figure A), in my next post.

Maven Plugin to Execute SQL remotely

November 5th, 2009 No comments

Here is a Maven plugin that will allow you to execute SQL commands (DDL or DML) from the box that the plugin runs on against any box that has a TNS entry for it.

This plugin currently ONLY works with Oracle Clients. So, make sure that you have SQL*Plus installed on the box that runs this plugin.

Details of the plugin can be found here.

Maven plugin to distribute files to remote machines

October 24th, 2009 1 comment

Here is a plugin that I wrote which distributes file(s) from the machine where it runs to one or more remote machines.

It does so using the Codehaus-supplied Wagon classes which, in turn, use ssh/scp. So it is necessary that all machines that need files distributed to them have ssh set up (without a passphrase).

The plugin does the following:

  1. Moves tars, zips, jars or any file to a configured location.
  2. Optionally explodes those files.
  3. Optionally establishes a symbolic link to the directory where the files were exploded or copied.
  4. Runs a pre-configured set of commands on the remote box.

Details of the plugin are here.

Downloads of the plugin are here.

Enjoy!
