The DDD Epiphany

February 22nd, 2014

There are certain books that are so pithy that you've got to read them a couple of times to really 'get it'!

From my college days, I recollect that Kernighan and Ritchie's The C Programming Language was one such book. More recently, while trying to get my head around the nuggets of wisdom in the Bhagavad Gita, I felt the same way.

Now add to that hallowed bookshelf Eric Evans' Domain-Driven Design! While (re)reading that seminal book recently, I got two insights that I thought I could share:

  1. Why Object Oriented Programming is well suited for Domain Driven Design (DDD).
  2. What kinds of objects you can expect to deal with while implementing Domain Driven Design.

The DDD Object Oriented advantage

I’m an application developer. What that means is I spend most of my waking hours (and sometimes, sleeping hours :) ) developing software that runs businesses. Not the business of chipsets for a microprocessor, not the business of building database software, nor Operating Systems or compilers, games or mobile apps. But the business of automating enterprise processes like supply chain, health-care, financial transactions, insurance or energy.  The stuff that is typically called Enterprise Software.

It is my contention that Domain Driven Design provides the most value, the largest ROI if you will, when practiced in the enterprise domain, because the enterprise domain is ever changing compared to more stable areas like compiler, scientific or database design. In other words, it's only in the enterprise space that there is a real need for Domain Driven Design. Let's see how:

First let’s look at the major programming paradigms that are available today. Not counting the work that is being done on the bleeding edge of research labs, the common paradigms being used today are:

  • Procedural Programming
  • Object Oriented Programming
  • Functional Programming

There’s a lot of discussion surrounding the strengths of these approaches on the net. Here’s my take on this:

  • Procedural programming dissociates behavior from data. There's data that exists elsewhere, and then there are functions in your program that act on that data. That, however, is not a good representation of real life, where behavior and data go together, and it explains the progressive demise of this approach in the enterprise domain.
  • Object Oriented Programming is the next logical step in modelling reality, where data and behavior live together. This approach is suited to a model where there are several "things" that need to be modeled, and since each has distinct behavior and instance data, the mechanics of an OO language are well suited to organizing and capturing that reality.
  • Functional Programming is a (relatively) new approach; its strength lies in dynamically changing the behavior (functions) of a relatively small number of "things".

Looking at the chart below we can see how different programming paradigms are suited to a differing number of  “things” to model.

In the red, reddish and green areas, the entities used are well defined and limited. Here one would typically be manipulating sets, stacks, hash maps, linked lists or tree structures. But in the blue enterprise-software area, the problem space is vast and the number of entities is therefore huge. We have to deal with entities like Account, Warehouse, Product, Car and Patient. In other words, the enterprise domain is modeled with a practically unbounded set of entities that are highly customizable. OO techniques are great for modeling these entities because of the four tenets of OO programming.

Functional Programming (FP), on the other hand, allows ad-hoc behavior to be applied to a limited set of structures. This works great if the number of entities is small or manageable. But when the number of entities becomes large, the same ad-hocness becomes a bad thing unless used judiciously. There are some programming languages that allow the use of functional and Object Oriented paradigms all in one; the inclusion of lambda expressions in C# (and, shortly, in Java) is one such example. Lambda expressions break encapsulation: if lambda expressions are used liberally in the enterprise domain, the code tends to become unmanageable and unmaintainable.

The enterprise domain evolves. Business practices change, new entities emerge and old ones lose significance. Using OO techniques, a designer can give structure to this evolution, such that there is high cohesion and low coupling. For example, the Car entity the business was selling could travel(), playMusic() and consumeGas(). Now let's say the business went solar and wants to add consumeSolar() behavior to all SolarCars, yet retain the original behavior on its older line of cars.

Using the FP approach, you would slap a consumeSolar() function onto the original Car object, and it would be up to the client of the Car API to ensure that it was indeed a SolarCar instance it was being applied to. Using the OO approach, only the SolarCar object has that behavior, not the Car object. So behavior does not bleed out of the Car object as it did with FP, where the calling code had to know what kind of object it was calling consumeSolar() on.
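
Here is a minimal Java sketch of the OO version described above; the classes and methods are just the illustrative ones from this paragraph, not code from any real system.

class Car {
    public void travel()     { /* drive the car */ }
    public void playMusic()  { /* entertain the passengers */ }
    public void consumeGas() { /* burn fuel */ }
}

// Only the new product line gains the new behavior; the older Car line is untouched.
class SolarCar extends Car {
    public void consumeSolar() { /* draw energy from the panels */ }
}

class Showroom {
    public static void main(String[] args) {
        SolarCar solarCar = new SolarCar();
        solarCar.consumeSolar();   // fine: the behavior lives on the subtype
        Car oldCar = new Car();
        // oldCar.consumeSolar();  // does not compile: the compiler, not the calling code,
                                   // keeps consumeSolar() from bleeding onto plain Cars
    }
}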

In the space of data retrieval or compiler design, on the other hand, there is little danger of the stack or map evolving into something else. There is a chance, however, that just this once you may want to implement a diff 'function' on a stack data structure and have it applied to all such data structures in the application. For such cases, the Functional Programming model serves well.
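
A tiny sketch of that FP style, using Java 8 lambdas and a made-up diff operation over stacks of numbers (the data and the operation are invented for illustration):

import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Deque;
import java.util.List;
import java.util.function.Function;

class StackDiffDemo {
    public static void main(String[] args) {
        // One ad-hoc 'diff' function: differences between consecutive elements of a stack.
        Function<Deque<Integer>, List<Integer>> diff = stack -> {
            List<Integer> values = new ArrayList<>(stack);
            List<Integer> result = new ArrayList<>();
            for (int i = 1; i < values.size(); i++) {
                result.add(values.get(i) - values.get(i - 1));
            }
            return result;
        };

        // The same function is applied to every stack in the application;
        // the data structure itself never needs to evolve.
        Deque<Integer> prices = new ArrayDeque<>(Arrays.asList(10, 13, 19));
        Deque<Integer> counts = new ArrayDeque<>(Arrays.asList(4, 4, 9));
        System.out.println(diff.apply(prices)); // [3, 6]
        System.out.println(diff.apply(counts)); // [0, 5]
    }
}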

Domain Object Classification

Eric Evans talks about 3 kinds of domain objects, fundamentally:

  1. Entities
  2. Value Objects
  3. Services

Then he talks about the identity and state of these objects.

If I were to look at how the Spring Framework maps onto these concepts, the matrix below sums things up nicely:

  • Entities: prototype beans have both identity and state, and would therefore qualify as entities.
  • Value Objects: associated with an entity and stateful, yet with no identity of their own.
  • Services: singleton services have an identity but are stateless. When programmers implement services with static methods, however, they deprive those services of identity.
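
As a rough illustration of that matrix in Spring terms, here is a hedged Java-config sketch; the Patient, Address and BillingService types are invented for the example.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Scope;

@Configuration
class DomainConfig {

    // Entity: prototype-scoped, so every lookup yields a distinct instance
    // with its own identity and its own mutable state.
    @Bean
    @Scope("prototype")
    Patient patient() {
        return new Patient();
    }

    // Service: a singleton with an identity in the container but no conversational state.
    @Bean
    BillingService billingService() {
        return new BillingService();
    }
}

// Value object: stateful, but with no identity of its own; it simply hangs off an entity.
class Address {
    private final String street;
    private final String city;
    Address(String street, String city) { this.street = street; this.city = city; }
}

class Patient {
    private Address address;                     // value object held by the entity
    void setAddress(Address address) { this.address = address; }
}

class BillingService {
    double invoiceTotal(Patient patient) { return 0.0; /* stateless computation */ }
}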

Summary

In summary, if you have an evolving set of entities, DDD and object orientation are your friends! On the other hand, if you have a fairly stable set of structures to which varying behavior needs to be applied, then the Functional paradigm seems like the right choice.

Most importantly, don't use Functional Programming techniques just because the language you happen to be coding in supports them. Enterprise software can tolerate Functional Programming, but only in small doses.

Lastly we saw a way to classify domain objects in a neat little matrix.


Angular JS From a Different Angle!

August 18th, 2013

I recently made the switch to a full-stack JavaScript front end framework for an enterprise application that we are building.

In this post, I'll talk about the integration (or rather, lack thereof) of the development methodologies used for developing a server-side RESTful API vs. a client-side Angular JS application. Along the way, I'll add some opinion to the already opinionated Angular framework.

First, let's look at why Javascript is relevant today.

The graphic below nicely shows the evolution of Javascript from ad-hoc snippets to libraries to frameworks. Of these frameworks, Angular seemed to be getting a lot of traction (being pushed by Google helps :) ).

As we can see, we went from ad-hoc javascript code snippets to libraries to full-fledged frameworks. It stands to reason, then, that these frameworks be subjected to the same development rigor that is accorded to server-side development. In pursuing that objective, tho', we see that the integration of the tools that enforce that rigor is not all that seamless.

Angular JS development is what I would call web-centric. That makes sense, given that it runs in a browser! However, if we focus all our energies on building services (exposed via an API, RESTful or not), and a web front-end is just one of many ways those services are consumed, then the web-centric nature of Angular development can get a bit non-intuitive.

For a server-side developer starting with the Angular stack, issues like the following can become a hindrance:

Where’s the data coming from?

If you want to run most of Angular's samples, you need to fire up a Node JS server. Not that that is insurmountable, but I didn't sign up for NodeJS, just Angular. Now I have to read thru the Node docs to get the samples up and running.

Next, testing:

Testing or rather, the ability to write tests, has a big role to play in the Javascript renaissance. Well..ok.. let’s write some tests! But wait! I need to install PhantomJS or NodeJS or some such JS server to fire up the test harness! Oh, crap! Now I’ve got to read up on Karma (aka Testacular) to run the tests.

What about the build:

How do I build my Angular app? Well… the docs and samples say: use npm. What's that? So now I have to google and start using the Node Package Manager to download all dependencies. Or get Grunt! (Grunt!)

All I want to do is bolt an Angular front-end onto an existing REST endpoint. Why do I need all this extra stuff? Well… that is because Angular takes a web-centric approach and is largely influenced by the Rails folks (see my related post here), whereas enterprise services treat the front-end as an afterthought :)

So, before I get all the Angular (and Ruby on Rails) fans all worked up, here’s the good news!

I wrote an application that bolts an Angular JS front-end onto a Java EE CRUD application (Spring, Hibernate… the usual suspects) on the back-end. It's a sample, so it obviously lacks certain niceties like security, but it does make adoption of Angular easier for someone more familiar with Java EE than Ruby on Rails.

Source and Demo

You can download and check out Angular EE (aka PointyPatient) here. In the rest of this post, I’ll refer to this sample app, so it may help to load it up in your IDE.

You can also see the app in action here.

Opinions, Opinions

One of Angular’s strengths is that it is an opinionated framework. In the cowboy-ruled landscape of the Javascript of yore, opinion is a good thing! In Angular EE, you will see that I’ve added some more opinion on top of that, to make it palatable to the Java EE folks!

So here is a list of my opinions that you will see in Angular EE:

Angular Structure

The structure of a webapp is largely predicated on whether or not it is a servlet. Beyond the servlet specification, which mandates the existence of a web.xml, all other webapp structure is a matter of convention. The Angular sample app, Angular-Seed, is not a servlet. Notwithstanding the fact that Angular (and all modern front-end frameworks) are pushing for a Single Page App (SPA), I still find servlets a very alluring paradigm. So here's my first opinion: rather than go for a standalone SPA, I've made Angular EE's web application a servlet that also serves an SPA.

If you compare the directory structure on the left (Angular-Seed) with the one on the right (PointyPatient webapp), you will see that the one on the right is a servlet that has a WEB-INF/web.xml resource. It also has an index.html at the root. This index.html does nothing but a redirect like so:

<meta HTTP-EQUIV="REFRESH" content="0; url=app/index.html">

It is the index.html inside the app directory that bootstraps the angular app. And so the context root of the webapp is still the webapp directory, not webapp/app.

So what's the advantage of making this a servlet? For one, you can use the powerful servlet filtering mechanism for any pre-processing that you may want to do on the server before the SPA is fired up. The web.xml is the touchpoint where you would configure all servlet filters.
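
For illustration, such a pre-processing filter might look like the following sketch; the filter name and the header it sets are invented for the example, and the filter would be mapped in web.xml as usual.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Runs on the server for every request, before the SPA is ever delivered to the browser.
public class SpaPreProcessingFilter implements Filter {

    @Override
    public void init(FilterConfig filterConfig) throws ServletException { }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        // Any pre-processing you like: auditing, legacy-browser redirects, security headers,
        // or deciding which SPA to serve for the logged-in user.
        ((HttpServletResponse) response).setHeader("X-Frame-Options", "SAMEORIGIN");
        chain.doFilter(request, response);
    }

    @Override
    public void destroy() { }
}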

Second, instead of having just one SPA, would it not be nice if one webapp could serve up several SPAs?

For example, let’s say you have an application that manages patients, doctors and their medication in several hospitals. I can easily see the following SPAs:

  • Bed Management
  • Patient- Drug interaction
  • Patient-Doc Configuration
  • Patient Records

Usually, a user will use only one SPA, but on occasion will need to cross over to a different one. All the above SPAs share a common HTTP session, authentication and authorization. The user can switch between them without having to log on repeatedly. Why load up all the functionality in a browser when only a subsystem may be needed? Use server-side (servlet) features to decide which SPAs to fire up depending on who's logged in (using authorization, roles, permissions etc.). Delay the loading of rarely used SPAs as much as possible.

For the above reasons, I think it is a good idea to serve up your SPA (or SPAs) within the context of a servlet.

Now let’s look at the structure of just the Angular part:

Again, on the left is AngularSeed and on the right, PointyPatient.

There is no major change here, except that I prefer views to partials (in keeping with the MVVM model).

Secondly, I preferred to break out controllers, services, directives and filters into their own files. This will definitely lead to fewer source-control problems with merges.

app.js still remains the gateway into the application with routes and config defined there. (More on that later).


Project Structure

Now that we have looked at differences in the Angular app, let’s step back a little and look at the larger context: The server-side components. This becomes important, only because we want to treat the Angular app as just another module in our overall project.

I am using a multi-module Maven project structure and so I define my Angular app as just another module.

  • Pointy-api is the Rest Endpoint to my services
  • Pointy-build is a pom project that aggregates the Maven reactor build.
  • Pointy-domain is where my domain model (hopefully rich) is stored
  • Pointy-parent is a pom project for inheriting child projects
  • Pointy-services is where the business logic resides and is the center of my app.
  • Pointy-web is our Angular app and the focus of our discussion

Anatomy of the Angular App

A Java EE application has layers that represent separation of concerns. There is no reason we cannot adopt the same approach on the Angular stack.

As we see in the picture below, each layer is unilaterally coupled with its neighbor. But the key here is dependency injection. IMO, Angular's killer feature is how it declares dependencies in each of its classes and tests (more on that later). PointyPatient takes advantage of that, as can be seen here.

Let us discuss each layer in turn:

Views: HTML snippets (aka partials). There is no "logic" or conditionals here. All the logic is buried either in Angular-provided directives or your own directives. An example would be the use of the ng-show directive on the alert directive in the Patient.html view. Conditional logic to show/hide the alert is governed by two-way binding on the view-model that is passed to the directive. No logic means no testing of the view. This is highly desirable because the view is mainly the DOM, and the DOM is the most difficult and brittle thing to test.

Controllers: Although it may seem from some of the samples that we should end up with a controller per view, in my opinion a controller should be aligned to a view-model, not a view. So in PointyPatient we have one controller (PatientController) that serves up both views (Patient.html and PatientList.html), because the view-models for these two views do not interfere with each other.

Services: This is where common logic that is not dependent on a view-model is processed.

  • Server-side access is achieved via Restangular. Restangular returns a promise which is passed to the controller layer
  • An example of business logic that would make sense in this layer is massaging Restangular-returned data by inspecting user permissions for UI behavior. Or date conversions for UI display. Or enhancing JSON returned by the db with javascript methods that can be called in the view (color changes etc.).
    There is a subtle difference between the client- and server-side service layers: the client-side service layer holds business logic that is UI related, yet not view-model related. The server-side service layer holds business logic that should have nothing to do with the UI at all.
  • Keep the view-model (usually the $scope object) completely decoupled from this layer.
  • Since services return a promise, it is advisable not to process the error part of the promise here, but to pass it on to the controller. By doing so, the controller is well placed to change the route or show the user appropriate messages based on the error block of the promise. This can be seen in PatientController and PatientService.

Layers

Since we have defined layers on both the Angular stack and the server side, an example may help solidify the purpose of each layer. So, using a rather contrived example in the medical area, here are some sample APIs in each layer:

Server side services:
This is where ‘business logic’ that is UI independent lives.

  • Invoice calculatePatientInvoice(Long patientId);
  • boolean checkDrugInteractions(Long patientId, Long prescriptionId);
  • boolean checkBedAvailability(Long patientId, LodgingPreference preference);
  • List<Patient> getCriticalPatients();
  • int computeDaysToLive(Long patientId);

The number of days a patient has to live will not depend on whether we use Angular for the UI or not :) . It will depend, tho', on several other APIs available only on the server side (getVitals(), getAncestralHistory() etc.).
Server Side controllers:
If we use Spring MVC to expose services via REST, then the controllers are a very thin layer.
There is no business logic at all, just methods that expose the REST verbs and call the services in turn (see the sketch after this list).

  • getPatientList();
  • getPatient(Long patientId);
  • createPatient(Patient patient);
  • deletePatient(Long patientId);
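
A rough sketch of how thin that layer can be; the /patients mapping, the PatientService methods and the placeholder types below are assumptions for illustration, not code from the sample project.

import java.util.List;
import javax.inject.Inject;
import org.springframework.http.HttpStatus;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.ResponseStatus;

@Controller
@RequestMapping("/patients")
public class PatientRestController {

    @Inject
    private PatientService patientService;   // all business logic stays behind this interface

    @RequestMapping(method = RequestMethod.GET)
    @ResponseBody
    public List<Patient> getPatientList() {
        return patientService.getPatients();
    }

    @RequestMapping(value = "/{patientId}", method = RequestMethod.GET)
    @ResponseBody
    public Patient getPatient(@PathVariable Long patientId) {
        return patientService.getPatient(patientId);
    }

    @RequestMapping(method = RequestMethod.POST)
    @ResponseStatus(HttpStatus.CREATED)
    public void createPatient(@RequestBody Patient patient) {
        patientService.createPatient(patient);
    }
}

// Placeholder types so the sketch stands on its own
interface PatientService {
    List<Patient> getPatients();
    Patient getPatient(Long patientId);
    void createPatient(Patient patient);
}
class Patient { public Long id; public String name; }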

Angular services:
This is where 'business logic' that is UI dependent lives.
These are used by several Angular controllers. This could involve massaging cached JSON or even server-side calls. However, the processing is always UI related.

  • highlightPatientsForCurrentDoctorAndBed();
    Assuming that doctorId and bedId are JSON data in the browser, this method changes the color of all patients assigned to the current doc and bed.
  • showDaysAgoBasedOnLocale(date);
    Returns 3 days ago, 5 hours ago etc instead of a date on the UI.
  • computeTableHeadersBasedOnUserPermission(userId);
    Depending on who’s logged in, grid/table headers may need to show more/less columns.
    Note that it is the server based service that is responsible for hiding sensitive data based on userId.
  • assignInvoiceOverageColor(invoice);
    Make invoices that are over 90 days overdue, red, for instance.
  • showModifiedMenuBasedOnPermissions(userId);
    Hide/disable menus based on user permissions (most likely cached in the browser).
  • computeColumnWidthBasedOnDevice(deviceId);
    If a tablet is being used, this nugget of info, will most likely be cached in the browser.
    This method will react to this info.

Angular controllers:
These methods are view-model ($scope) dependent. They are numerous and shallow.
Their purpose in life is to handle (route) error conditions and assign results to the scope.

  • getDoctorForPatient(Long patientId);
    This massages local (in-browser) patient-doctor data, or accesses the server via Angular services -> REST -> server services, and assigns the result to a scope variable.
  • getEmptyBedsInZone(zoneId);
    Assigns the empty beds in the zone to the scope.

The main difference is that here the result is assigned to the $scope, unlike the Angular Service, which is $scope independent.

Testing

While most JS frameworks emphasize the importance of testing Javascript (and there are enough JS testing frameworks out there), IMO it's only Angular that focuses on a key enabler of meaningful unit testing: Dependency Injection.

In PointyPatient, we can see how Jasmine tests are written for Controllers and Services. Correspondingly, JUnit tests are written using Spring Testing Framework and Mockito.

Let’s look at each type of test. It may help to checkout the code alongside:

  1. Angular Controller Unit Tests: Here the system under test is the PatientController. I have preferred to place all tests that pertain to PatientController in one file, PatientControllerSpec.js, as against squishing all classes' tests into one giant file, ControllerSpec.js (this works better from the source-control perspective too). The PatientService class has been stubbed out using a Jasmine spy. The other noteworthy point is the use of the $digest() function on $rootScope. This is necessary because the call to patientService returns a promise that is typically assigned to a $scope variable. Since $scope is evaluated when the DOM is processed ($apply()'ed), and since there is no DOM in the case of a test, the $digest() function needs to be called on the rootScope (I am not sure why localScope.$digest() doesn't work, tho').
  2. Angular Service Unit Tests: Here the system under test is the PatientService. Similar to PatientControllerSpec.js, PatientServiceSpec.js only caters to code in PatientService.js. Restangular, the service that gets data from the server via RESTful services, is stubbed out using Jasmine spies.
    Both the PatientControllerSpec.js and PatientServiceSpec.js can be tested using the SpecRunner.html test harness using the file:/// protocol.
    However, when the same tests are to be run with the build, see the config of the jasmine-maven-plugin in the pom.xml of the pointy-web project. The only word of caution here is the order of the external dependencies that the tests depend on and which are configured in the plugin. If that order is not correct, errors can be very cryptic and difficult to debug.
    These tests (Unit tests) can therefore be executed using file:///…/SpecRunner.html during development and via the maven plugin during CI.
    In this sense, we have run these unit tests without using the Karma test runner because, in the end, all the Karma test runner does in the context of an Angular unit test is watch for changes in your JS files. If you are ok with not having that convenience, then Karma is not really necessary.
  3. End-To-End Tests: These tests are run using a combination of things:
    First, note the class called: E2ETests.java. This is actually a JUnit test that is configured to run using the maven-failsafe-plugin in the integration-test phase of the maven life-cycle.
    But before the failsafe plugin is fired up, in the pre-integration-test phase, the maven-jetty-plugin is configured to fire up a servlet container that actually serves up the Angular servlet webapp (pointy-web) and the RESTful api webapp (pointy-api), and then stops both in the post-integration-test phase.
    E2ETests.java loads up a Selenium driver, fires up Firefox, points the browser url to the one where Jetty is running the servlet container (See pointy-web’s pom.xml and the config of the maven-jetty-plugin).
    Next we can see in scenario.js, we have used the Angular supplied API that navigates the DOM. Here we navigate to a landing page and then interact with input elements and traverse the DOM by using jQuery.
    If we were to run these tests using the Karma E2E test harness, we see that Karma runs the E2E test as soon as the scenario.js file changes. A similar (but not same) behavior can be simulated by running
    mvn install -Dtest=E2ETests
    on the command line.
    It is true, tho', that running tests using Maven will take much longer than the Karma counterpart, because Maven has to go thru its lifecycle phases, up until the integration-test phase.
    But if we adopt the approach that we write a few E2E tests to test the happy-path and several unit tests to maximize test coverage, we can mitigate this disadvantage.
    However, as far as brittleness is concerned, this mechanism (using Selenium) is as brittle (or as tolerant) as the corresponding Karma approach, because ultimately they both fire up a browser and traverse the DOM using jQuery.
  4. Restangular is not tested within our application because it’s an external dependency, altho it manifests as a separate layer in our application.
  5. On the server side we see the Spring MVC Test integration that is part of the Spring 3.2+ framework. This loads up the web application context, including security (and all other filters, including the CORS filter) configured in web.xml. This is obviously slow (because of the context being loaded), and therefore we should use it only for testing the happy path. Since these tests do not roll back db actions, we must take care to set up and tear down our test fixtures. A sketch of such a test follows this list.
  6. Next we have the Spring Transactional Tests. These integration tests use the Spring Test Runner which ensures that db transactions are rolled back after each test. Therefore tear-downs are not really needed. However, since the application context is loaded up on each run, these tend to be slow runners and must be used to test the happy path.
  7. Next we have Service Unit tests: PatientServiceTests.java: These are unit tests that use the Mockito test library to stub out the Repository layer. These are written to maximize test coverage and therefore will need to be numerous.
  8. Repository Unit tests (PatientRepositoryTests.java) are unit tests that stub out calls to the database by replacing the Spring datasource context with a test-datasource-context.xml.
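
To make the Spring MVC Test item above concrete, here is a hedged sketch of a happy-path test using the Spring 3.2 MockMvc support; the context location, URL and JSON field are assumptions, not taken from PointyPatient.

import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

import org.junit.Before;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.test.context.ContextConfiguration;
import org.springframework.test.context.junit4.SpringJUnit4ClassRunner;
import org.springframework.test.context.web.WebAppConfiguration;
import org.springframework.test.web.servlet.MockMvc;
import org.springframework.test.web.servlet.setup.MockMvcBuilders;
import org.springframework.web.context.WebApplicationContext;

@RunWith(SpringJUnit4ClassRunner.class)
@WebAppConfiguration
@ContextConfiguration("classpath:applicationContext.xml")   // assumed context location
public class PatientApiHappyPathTests {

    @Autowired
    private WebApplicationContext context;

    private MockMvc mockMvc;

    @Before
    public void setup() {
        // The whole web application context is loaded, so keep these tests to the happy path.
        this.mockMvc = MockMvcBuilders.webAppContextSetup(context).build();
    }

    @Test
    public void getPatientReturnsJson() throws Exception {
        mockMvc.perform(get("/patients/1"))
               .andExpect(status().isOk())
               .andExpect(jsonPath("$.id").value(1));   // the field name is an assumption
    }
}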

Environmentally Aware!

Applications need to run in various 'environments': development, test, QA and production. Each environment has certain 'endpoints' (db urls, user credentials, JMS queues etc.) that are typically stored in property files. On the server side, the Maven build system can be used to inject these properties into the 'built' (aka compiled or packaged) code. When using the Spring framework, a very nice interface, PropertyPlaceholderConfigurer, can be used to inject the correct property file using Maven profiles (I've blogged previously about this here). An example of this is the pointy-services/src/main/resources/prod.properties property file that is used when the -Pprod profile is invoked during the build.
The advantage of this approach is that properties can be read from a hierarchy of property files at runtime.
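
A minimal Java-config sketch of one runtime-selection variant of that idea; the file names and the -Denv system property are assumptions, whereas the sample project wires this up with XML and Maven profiles.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.support.PropertySourcesPlaceholderConfigurer;
import org.springframework.core.io.ClassPathResource;

@Configuration
public class EnvironmentConfig {

    // Picks dev.properties, qa.properties or prod.properties at runtime,
    // based on a -Denv=... system property (defaulting to dev).
    @Bean
    public static PropertySourcesPlaceholderConfigurer properties() {
        String env = System.getProperty("env", "dev");
        PropertySourcesPlaceholderConfigurer configurer = new PropertySourcesPlaceholderConfigurer();
        configurer.setLocation(new ClassPathResource(env + ".properties"));
        configurer.setIgnoreResourceNotFound(true);   // fall back silently if the file is absent
        return configurer;
    }
}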

In Angular EE, I have extended this ability of making the code 'environmentally aware' to the JS stack also. However, property injection happens only at build time. This is effected using Maven profiles, as can be seen in the pom.xml of the pointy-parent module. In addition, see the environment-specific property files in pointy-web/src/main/resources. Lastly, check out the customization of the maven-war-plugin, where the correct ${env}.properties file is filtered into the war at build time. You will see how the REST endpoint is injected into app.js in the built war module for different environments.

In summary

We have seen how we can use a homogeneous development environment for building an Angular application. Since user stories slice vertically across the entire stack, there is value in striving for that homogeneity, so that if a story is approved (or rolled back), it affects the entire stack, from Angular all the way to the Repository.

We have also seen a more crystallized separation of concerns in the layers defined in the Angular stack.

We have seen how the same test harness can be used for server and client side tests.

Lastly we have seen how to make the Angular app aware of the environment it runs in, much like property injection on the server side.

I hope this helps more Java EE folks in adopting Angular JS, which I feel is a terrific way to build a web front end for your apps!



Does Test Driven Development Really Work?

March 23rd, 2013

In an assembly of geeks, whenever someone suggests the possibility of adopting Test Driven Development (TDD), I can almost see the collective eye-roll! Although widely acknowledged as a great technique for developing modular, testable code, it is still regarded as a mostly utopian concept. In other words… you don't usually see much enthusiasm in the room to jump on the TDD bandwagon.

In this post, I will attempt to demonstrate, using a concrete but contrived example, how TDD encourages developing modular code. To be absolutely clear, we are going to be demonstrating the benefits of Test First Development.

I will use the popular JUnit framework and some mocking frameworks to demonstrate this.

Unit testing is an overloaded term. Depending on how you look at it, what is defined as a unit can vary from a single method call, to an entire class, to an entire user story, to, sometimes, an entire epic!

JUnit provides the developer with a framework to write tests but does not have an opinion on what is defined as a unit.

The definition of a unit, or rather the assumption of what a unit should be, is baked into the correct use of several mocking frameworks (all except PowerMock, which breaks the rules pretty blatantly).

Most such frameworks define a System Under Test (SUT) and collaborators of the SUT. The idea is that once the SUT is identified, all its collaborators may be stubbed out. What is left in the SUT, then, is the unit.

By that yardstick, a "badly" written class would be one that has few collaborators. All functionality is jammed into one class, even though it may be spread across several method calls. So, in other words, you can have several small methods and your code may still not be modular. To make your code modular, what you need is collaborators that are candidates for stubbing.

So why is it that methods in a SUT cannot be stubbed out, but collaborators can? The reason is simple and consistent across all mocking frameworks: mocks are created by extending classes or interfaces, and the SUT is never mocked. (If it were, you'd be testing a mock, not the original class.) Therefore a method in your SUT can never be mocked in an elegant manner (one can always use reflection, but that's a slippery slope).

I have worked with teams that state that the very reason for breaking up a large class is clashes while checking into source control. That may very well be, but breaking up methods to allow re-use (where the same code is used in several places) will only get you so far. What will really make the code modular, and enforce Separation of Concerns, is the adoption of two fundamental principles: break functionality out into collaborator classes, and inject those collaborators as dependencies.

The Greedy App without TDD

We will start with a User Story:

User Story 1: As a club manager I would like to invite users to my club who have a net worth of more than a million dollars. The invited user should be persisted and a welcome email be sent to him.

Let us first build this application the conventional way, with no tests first.

Here is one possible way the UserService may be coded:

public class UserServiceImplWithoutTDD implements UserService {
	@Inject
	private UserRepository userRepository;
	@Inject
	private EmailService emailService;

	@Override
	public void inviteUser(User user) {
		if (user != null){
			if (user.getId() != null){
				//determine net worth
				if (("CA".equals(user.getStateOfResidence()) &&
						(user.getIncome() - user.getExpenses()) > 1000000.0) ||
						("TX".equals(user.getStateOfResidence()) &&
						(user.getIncome() - user.getExpenses() * 1.19) > 1000000.0)) {
					//millionaire!
					userRepository.addUser(user);
					String welcomeEmail = "Hi " + user.getName()
							+ ", welcome to the club!";
					emailService.sendEmail(welcomeEmail);
				}
			}
		}
	}
}

As we can see, the business logic is checked to see if the user qualifies per the story, and then the user object is persisted and welcomed with an email message.

Since we are not using TDD, we have not written a test to begin with. Now using Mockito, here is how we can write a test for this class after the code is written:

@RunWith(MockitoJUnitRunner.class)
public class UserServiceUnitTests {
	@Mock
	private UserRepository userRepository;
	@Mock
	private EmailService emailService;

	@InjectMocks
	private UserServiceImplWithoutTDD userService;

	private User user ;
	@Before
	public void setup(){
		//given
		user = new User();
		user.setIncome(1200000.0);
		user.setExpenses(150000.0);
		user.setStateOfResidence("CA");
	}

	@Test
	public void testInviteUserWhenUserIsAMillionaire(){
		//when
		//Following stubs are not necessary because voids on mocks are noops anyway...but just for clarity
		doNothing().when(userRepository).addUser(any(User.class));
		doNothing().when(emailService).sendEmail(anyString());

		//call SUT
		userService.inviteUser(user);

		//then
		verify(userRepository, times(1)).addUser(any(User.class));
		verify(emailService, times(1)).sendEmail(anyString());

	}
}

So far, all looks good. You have managed to satisfy the user story and written a test for it. However, do make note of that painfully written "given" section in setup(). The developer has to scrutinize the complex business logic and engineer data so that the conditional is true and the "then" section is satisfied.
Also… the code doesn't look clean. So you decide to refactor a bit and extract the business logic into a method of its own:

@Override
public void inviteUser(User user) {
	if (user != null){
		if (user.getId() != null){
			if (this.determineNetWorth(user)){
				//millionaire!
				userRepository.addUser(user);
				String welcomeEmail = "Hi " + user.getName() + ", welcome to the club!";
				emailService.sendEmail(welcomeEmail);
			}
		}
	}
}

private boolean determineNetWorth(User user){
	return ("CA".equals(user.getStateOfResidence()) &&
			(user.getIncome() - user.getExpenses()) > 1000000.0) ||
			("TX".equals(user.getStateOfResidence()) &&
			(user.getIncome() - user.getExpenses() * 1.19) > 1000000.0);
}

Looks better. The test still passes.
Until the customer decides to add another story:

User Story 2: As a club manager I would like to invite users to my club who have paid taxes in the past 10 years at least and have stayed in one state for more than 3 years.

Back to the drawing board. This time you come up with nicely refactored code:

@Override
public void inviteUser(User user) {
	if (user != null){
		if (user.getId() != null){
			//determine net worth
			if (this.determineNetWorth(user)){
				//millionaire!
				if (this.determineTaxQualification(user)){
					//User has paid taxes in the past 10 years while living in one state for more than 3
					userRepository.addUser(user);

					String welcomeEmail = "Hi " + user.getName() + ", welcome to the club!";
					emailService.sendEmail(welcomeEmail);
				}
			}
		}
	}
}

private boolean determineTaxQualification(User user){
	return user.getTaxesPaidInPastYearsMap() != null &&
			user.getTaxesPaidInPastYearsMap().size() >= 10 &&
			user.getNumberofYearsInCurrentState() > 3.0;
}
private boolean determineNetWorth(User user){
	return ("CA".equals(user.getStateOfResidence()) &&
			(user.getIncome() - user.getExpenses()) > 1000000.0) ||
			("TX".equals(user.getStateOfResidence()) &&
			(user.getIncome() - user.getExpenses() * 1.19) > 1000000.0);
}

Testing this, however, is a bit of a challenge, because now you have to test the four combinations of conditions arising from determineNetWorth and determineTaxQualification each returning true or false.
To do that, you will need to prepare the User object in setup() with values that pass both conditionals.
Maybe for this contrived example we can get away with engineering some data. But as you can see, this approach will soon get out of hand and will not scale to more realistic and complex cases.

Ideally, what should happen is that when we write the test for a user who has passed the tax qualification, we should stub out the netWorth method so that we do not need to engineer data for the condition *it* tests. Likewise, when testing the netWorth conditional, we should be able to stub out the tax-qualification method without the pain of engineering tax-qualification data. In that sense, two tests need to be written, and the unit has changed for each.

However, since the two "business operations" are implemented as methods of the SUT, it is impossible for the mocking frameworks to stub them out (because the SUT is not mocked, and only methods of a mocked class can be stubbed out).
So the next best thing is to place this functionality into classes of their own and make them collaborators of the SUT.

One way is to do that manually when we reach such a point in our development. Another way is to use TDD.
I'll show you now how adopting TDD will naturally lead you down the path where you end up with classes amenable to testing using mocks and stubs.

The Greedy App with TDD

In the spirit of TDD, write a test that fails:

The test:
@RunWith(MockitoJUnitRunner.class)
public class UserServiceWithTDDTests {
	@Mock
	private UserRepository userRepository;
	@Mock
	private EmailService emailService;
	@InjectMocks
	private UserServiceImplWithTDD userService;
	private User user;
	@Before
	public void setup(){
		user = new User("foo");
	}
	@Test
	public void testInviteUserWhenAMillionaire() {
		userService.inviteUser(user);

		verify(userRepository, times(1)).addUser(any(User.class));
		verify(emailService, times(1)).sendEmail(anyString());
	}
}

And the corresponding service class:

public class UserServiceImplWithTDD implements UserService {
	@Inject
	private UserRepository userRepository;
	@Inject
	private EmailService emailService;

	@Override
	public void inviteUser(User user) {
		if (user != null){
			if (user.getId() != null){
				//determine net worth
				if (this.isNetWorthAcceptable(user)){
					userRepository.addUser(user);

					String welcomeEmail = "Hi " + user.getName() + ", welcome to the club!";
					emailService.sendEmail(welcomeEmail);
				}
			}
		}
	}
	@Override
	public boolean isNetWorthAcceptable(User user) {
		return false;
	}
}

In the above scenario, the test fails because the calls to addUser and sendEmail were never made.

So, how can we fix this? A simple possibility is to have the isNetWorthAcceptable(User user) method return true instead of false. However, this does not test the condition mentioned in the user story.

So let us implement the logic asked for by the user story:

public boolean isNetWorthAcceptable(User user) {
	return ("CA".equals(user.getStateOfResidence()) &&
			(user.getIncome() - user.getExpenses()) > 1000000.0) ||
			("TX".equals(user.getStateOfResidence()) &&
			(user.getIncome() - user.getExpenses() * 1.19) > 1000000.0);
}

This will fail too, as the user object is not populated with the correct data.
Now I have two options: either I fill in the User object in setup() to make this condition pass, or, to test whether the userRepository and emailService calls are actually made, I change the conditions coded inside the isNetWorthAcceptable(…) method. The first option is unpalatable. To do the second, I would have to change code that I have already written.
So I consider mocking out the isNetWorthAcceptable(…) call. But how do I mock out a method call in the System Under Test (SUT)?
The only way that can be done is if I put that method into a class of its own. To really make it generic, I should code an interface called NetWorthVerifier with a corresponding implementation.

So I write an interface and an implementation like so:

Interface:
public interface NetWorthVerifier {
	public boolean isNetWorthAcceptable(User user);
}

Implementation:

public class NetWorthVerifierImpl implements NetWorthVerifier {

	@Override
	public boolean isNetWorthAcceptable(User user) {
		return ("CA".equals(user.getStateOfResidence()) &&
				(user.getIncome() - user.getExpenses()) > 1000000.0) ||
				("TX".equals(user.getStateOfResidence()) &&
				(user.getIncome() - user.getExpenses() * 1.19) > 1000000.0);
	}

}

Better… now I can mock out this class and test just the userRepository and emailService calls.

@RunWith(MockitoJUnitRunner.class)
public class UserServiceWithTDDTests {
	@Mock
	private NetWorthVerifier netWorthVerifier;
	@Mock
	private UserRepository userRepository;
	@Mock
	private EmailService emailService;
	@InjectMocks
	private UserServiceImplWithTDD userService;
	private User user;
	@Before
	public void setup(){
		//Given
		user = new User("foo");
	}
	@Test
	public void testInviteUserWhenAMillionaire() {
		//When
		when(netWorthVerifier.isNetWorthAcceptable(any(User.class))).thenReturn(true);

		userService.inviteUser(user);

		//Then
		verify(userRepository, times(1)).addUser(any(User.class));
		verify(emailService, times(1)).sendEmail(anyString());
	}
}

And voila! With inviteUser() now delegating to the injected NetWorthVerifier, we have a successful test without having to engineer data.
But what is more important is that I have encapsulated the checking of net worth in a class of its own, thereby ensuring a clean separation of concerns.

Wait… what about testing the actual logic to see whether the user's net worth requirement is met or not? We have not tested that.
Correct… and the reason is that that's another test! In that case the NetWorthVerifier is the SUT and the User object is the collaborator; a sketch of that test follows below.
So not only do we have modular code, we have modular tests too. That's the beauty of it!
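
That second test might look something like this sketch; the test names and the income/expense figures are made up for illustration.

import static org.junit.Assert.assertFalse;
import static org.junit.Assert.assertTrue;

import org.junit.Test;

public class NetWorthVerifierImplTests {

	//Now NetWorthVerifierImpl is the SUT and User is just test data; no mocks are needed
	private final NetWorthVerifier netWorthVerifier = new NetWorthVerifierImpl();

	@Test
	public void acceptsACaliforniaMillionaire() {
		User user = new User("foo");
		user.setStateOfResidence("CA");
		user.setIncome(1200000.0);
		user.setExpenses(150000.0);
		assertTrue(netWorthVerifier.isNetWorthAcceptable(user));
	}

	@Test
	public void rejectsAUserBelowTheThreshold() {
		User user = new User("bar");
		user.setStateOfResidence("TX");
		user.setIncome(900000.0);
		user.setExpenses(100000.0);
		assertFalse(netWorthVerifier.isNetWorthAcceptable(user));
	}
}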

Hopefully I have been able to show you a pretty neat way to build software, where we are always considering SUTs and collaborators that don't necessarily line up with the layers in our architecture.

So the next time someone proposes TDD… please give it a chance.. it may not be such a bad idea after all!


Discovering REST

February 1st, 2013

In this post I’ll cover two aspects about designing RESTful applications:

  1. How to design the URLs that can be used to interact with your application, so that the state of the resource can represent the action that needs to be taken.
  2. How to make RESTful services discoverable in the least brittle manner.

We will use a sample application with the following domain to illustrate this:

A company has Employees that belong to Departments. Each Department has a Manager. Employees can be fired, given raises and re-assigned to different departments.

The user interaction model would be:

  1. User can create an Employee
  2. User can create a department
  3. User can assign an employee to a department.
  4. User can give an Employee a raise
  5. User can fire employee
  6. User can hire employee
  7. User can make an Employee the manager of a department.
  8. Delete a department
  9. Delete an Employee
  10. Search for a nonexistent employee or department (Error handling)

Behavior can be modeled as:

  • Employees can be hired(), fired(), givenRaises(), reassigned()
  • Departments can be assignedManager(),  dropped()

We will attempt to use the 4 verbs provided by HTTP (GET, PUT, DELETE, POST) to model all the behavior needed by this application. The attempt is to use the state of the resources to represent the behavior after the action is carried out.

The accompanying sample project can be found here. It will be useful to check it out and reference it as I talk about different parts of this sample project.

REST based services are implemented using Spring MVC extensions/annotations.

First let’s look at the Employee class:

...
import lombok.Data;

import org.springframework.hateoas.Identifiable;

public @Data class Employee implements Identifiable{
	public Employee(Long id, String fName, String lName) {
		this.id = id;
		this.fName = fName;
		this.lName = lName;

		this.status = EmployeeStatus.WORKING;
	}
	private final Long id;
	private final String fName;
	private final String lName;
 	private double salary;
 	private Long depId;
 	private EmployeeStatus status;
}

There is nothing noteworthy in the above POJO except that it implements Identifiable from the Spring HATEOAS project. This ensures that all resources that need to be exposed in a RESTful manner implement a getId() method.

Also, to implement getters and setters, the @Data annotation from the Lombok project comes in handy.

Next, check out the very standard repository and service  interfaces and implementations. There is nothing REST or HATEOAS specific there.

The controller and the classes in the com.nayidisha.empdep.rest package are where the bulk of the work happens.

Starting with the EmployeeController:

    @RequestMapping(method = RequestMethod.POST, value = "")
    ResponseEntity createEmployee(@RequestBody Map body) {
        Employee emp = managementService.createEmployee(body.get("fName"), body.get("lName")) ; 

        ManagementResource resource = managementResourceAssembler.toResource(emp, null);
        resource.add(linkTo(EmployeeController.class).withSelfRel());
        return new ResponseEntity(resource, HttpStatus.CREATED);
    }

The POST method is being used to create an employee using standard Spring MVC. So far there is nothing especially noteworthy.
However, the managementResourceAssembler is where we use the HATEOAS API to tell our users which links should be exposed when an employee is created. In short, depending on the state of the resource, appropriate links are exposed to the clients, as shown below:

public ManagementResource toResource(Employee employee, Department department) {
       ManagementResource resource = new ManagementResource();
       resource = this.getEmployeeResource(employee, resource);
       resource = this.getDepartmentResource(department, resource);
       resource.departmentList = new ArrayList();
       resource.departmentList.add(department);
       resource.employeeList  = new ArrayList();
       resource.employeeList.add(employee);
       return resource;
}

and

private ManagementResource getEmployeeResource(Employee employee, ManagementResource resource) {
	if (employee == null || employee.getStatus() == null) {
		//Employee not yet created. So client can create an employee
		resource.add(linkTo(EmployeeController.class).withRel("createEmployee"));

	} else if (EmployeeStatus.WORKING.equals(employee.getStatus())) {
		//working employees can be fired, assigned to departments, made managers of departments, and given raises
		resource.add(linkTo(EmployeeController.class).slash(employee.getId()).withRel("fire"));

	} else if (EmployeeStatus.FIRED.equals(employee.getStatus())) {
		//Fired employees can be transitioned to WORKING status
		resource.add(linkTo(EmployeeController.class).slash(employee.getId()).withRel("hire"));
	}
	return resource;
}

We can see that links are being created based on the state the Employee resource is in.
The "rels" here are link relations against which actions can be carried out later on; all documentation is anchored to them. The idea is that we can change the internal representation of our resources as much as we want over the life of the application. The rels should not change. Client applications then traverse the links node and access the desired "rel" to carry out a business operation as specified in the documentation.

We will use the rest-shell project to see some output from our sample project.

First, create an employee:

http://localhost:9080/emp:> post /employees --data "{fName:"Tim", lName:"Rice"}"
> POST http://localhost:9080/emp//employees
> Accept: application/json

< 201 CREATED
< Server: Apache-Coyote/1.1
< Content-Type: application/json;charset=UTF-8
< Transfer-Encoding: chunked
< Date: Thu, 31 Jan 2013 23:35:50 GMT
<
{
  "links" : [ {
    "rel" : "fire",
    "href" : "http://localhost:9080/emp/employees/0"
  }, {
    "rel" : "createDepartment",
    "href" : "http://localhost:9080/emp/departments"
  }, {
    "rel" : "self",
    "href" : "http://localhost:9080/emp/employees"
  } ],
  "employeeList" : [ {
    "id" : 0,
    "salary" : 0.0,
    "depId" : null,
    "status" : "WORKING",
    "fname" : "Tim",
    "lname" : "Rice"
  } ],
  "departmentList" : [ null ]
}

Above we see that we can only list all employees using the "self" rel, and fire the employee using the "fire" rel.

But now, how does the client know what parameters to pass to fire the employee? For that, the client has to look up the documentation for the "fire" rel.
The documentation states that to fire an employee we must issue the PUT verb with the following data map:
{ status:"FIRED" }

http://localhost:9080/emp:> put /employees/0 --data "{status:"FIRED"}"
> PUT http://localhost:9080/emp//employees/0
> Accept: application/json

< 200 OK
< Server: Apache-Coyote/1.1
< Content-Type: application/json;charset=UTF-8
< Transfer-Encoding: chunked
< Date: Thu, 31 Jan 2013 23:48:38 GMT
<
{
  "links" : [ {
    "rel" : "hire",
    "href" : "http://localhost:9080/emp/employees/0"
  }, {
    "rel" : "createDepartment",
    "href" : "http://localhost:9080/emp/departments"
  }, {
    "rel" : "self",
    "href" : "http://localhost:9080/emp/employees"
  } ],
  "employeeList" : [ {
    "id" : 0,
    "salary" : 0.0,
    "depId" : null,
    "status" : "FIRED",
    "fname" : "Tim",
    "lname" : "Rice"
  } ],
  "departmentList" : [ null ]
}

We see that the employee is fired (if that were ever possible, as Tim Rice is my favorite composer :) ) and that the links are updated so he can be hired again.
Similar semantics surround the interactions for the other "actions", each represented by the final state of the resource and one of the four HTTP verbs; the PUT above, for instance, could be handled as sketched below.
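
For illustration, the PUT could be handled by a method like the following inside the EmployeeController shown earlier; the fireEmployee() and hireEmployee() service methods are assumptions for the example and do not appear in the sample project.

    @RequestMapping(method = RequestMethod.PUT, value = "/{id}")
    ResponseEntity updateEmployee(@PathVariable Long id, @RequestBody Map body) {
        //The desired final state in the body, combined with the PUT verb, stands in for the action
        String status = (String) body.get("status");
        Employee emp = "FIRED".equals(status)
                ? managementService.fireEmployee(id)   //assumed service method
                : managementService.hireEmployee(id);  //assumed service method

        ManagementResource resource = managementResourceAssembler.toResource(emp, null);
        resource.add(linkTo(EmployeeController.class).withSelfRel());
        return new ResponseEntity(resource, HttpStatus.OK);
    }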

The one area that I've not had a chance to try out is the JsonPath support (also part of the Spring MVC improvements in Spring 3.2) that allows REST clients to traverse the returned JSON, get a fix on a certain "rel" in it, and from there the URI to access. (It looks really promising! XPath for JSON is a sure winner!)


Moving to GitHub!

January 9th, 2013

I’m finally finding the time to move a lot of work I’ve done over the years to Github.

The first in the series:

  • Maven Plugins
    • maven-nddbunit-plugin – Helps in creation of test data from a working RDBMS. Related blog post here.
    • maven-sql-plugin
    • maven-distribute-plugin
    • maven-authorize-plugin
    • maven-ndcobertura-plugin – Used to ensure adequate test coverage. Related blog post here describes a process to ensure coverage in existing code bases.
    • maven-ndjacobe-plugin – This plugin enforces code compliance at build-time. Related blog post here. Also referenced on the Jacobe website.
  • Games
    • Mathwiz
  • Projects
    • TwitterAround – This app, which I wrote in the 2008/09 time frame, uses the twitter4j API and does the following:
      • Allows an admin to define Campaigns.
      • A Campaign defines keywords to look for in tweeters' profiles
      • Stores the tweeters in a local db using a Quartz scheduled job
      • Allows admins to store tweets per campaign, or across all campaigns.
      • Allows the campaign to be configured to send out tweets at a certain frequency to the stored tweeters in the campaign, using another Quartz scheduled job
      • The tweets appear in the sending tweeter's timeline.

      For doing so, this application uses the following features:

      • Quartz for scheduling
      • Hibernate for persistence into MySQL
      • IceFaces for presentation
      • Several other ancillary projects like SimpleCaptcha and Spring Testing Framework.

… more to follow


Rails vs Grails

January 8th, 2013

I had a week between projects, so I decided to take a deep dive into the Ruby on Rails (RoR) ecosystem and compare it against my earlier experience with Grails (from the Groovy, Java and Spring camp).

First, the 10,000 feet view:

  1. Both enable Rapid Application Development. You can get a slick, non-trivial application developed in less than a day using either approach.
  2. Both rely on a Domain Specific Language (DSL) to do the heavy lifting. The domain, in this case, is web-enabling your application. The DSLs are what allow scaffolding, domain creation and persistence (among other things). They are written in Ruby for the Rails environment and bundled as 'gems'; similar functionality is written in Groovy for the Grails environment and bundled as plugins.
  3. Both encourage a test driven approach (test first), although, it seems that RoR may have been slightly ahead of the curve on that front.
  4. Both rely on several community projects to make your application enterprise ready. Rails relies heavily on 'helper' projects written in Perl, JS and (*nix) shell scripting, whereas Grails relies on 'helper' projects from the Java/Spring ecosystem.

Now let’s take a closer look:

  1. Agility: In RoR apps, incremental development (an agile technique) is encouraged by adding properties incrementally to the model; this is handled via a 'migration' on the db gem. Correspondingly, in Grails, to handle incremental db changes you would have to resort to external projects like Liquibase or Flyway (which, btw, are totally effective in a non-web environment also).
  2. Persistence: Rails persists the model using a gem called activerecord. In Grails, persistence is carried out by the GORM (Grails Object Relational Mapping) plugin.
  3. Performance: I'm not totally sure of this one because I do not have benchmark numbers. But based on the fact that Ruby is interpreted (and remains so), whereas Groovy is compiled to JVM bytecode, I'd say that a Grails app outperforms an RoR app, all else being equal.
  4. Testing: Testing is ingrained in the fabric of RoR. It is available via a DSL that encourages Behavior Driven Development (BDD). Tests can be written for the client-side as well as the server-side code. Popular frameworks are RSpec and Cucumber. Grails relies on the JUnit framework and mocking libraries like Mockito, PowerMock and EasyMock.
  5. Tooling: IMO, tooling in RoR is severely lacking. In fact, in some sense, the use of tools is looked down upon (maybe because of its rich lineage with unix-based systems). So we are left with a mainly command-line interface for RoR dev. In the Grails camp, SpringSource Tool Suite has terrific support for all Grails development, including code completion, testing, plugin discovery and installation, and deployment.
  6. Services Access: RoR apps typically access all backend services via RESTful calls. Grails does that too, but has the added advantage of being able to inject Spring-managed services (with all their proxies) into Grails controller classes. So not all access has to be RESTful.
  7. Build: RoR: Rake, bundler, Grails: Gradle, Ivy, Maven
  8. Version Control: RoR is firmly committed to Git (no pun intended :) ) whereas Grails is rapidly cutting over to Git, with some projects still in SVN.
  9. Authentication: RoR: Clearance, AuthLogic, CanCan, Devise. Grails: Spring Security Framework
  10. MVC: Both offer a clean implementation of MVC2 via controllers, views and domain objects, although I like the fact that domain objects in Groovy do not have to inherit from a superclass (as is the case in RoR, where domain objects inherit from ActiveRecord to implement persistence). This keeps the domain model clean, altho' it can be argued that for a Grails app, GORM annotations can still mar the pristine domain model :)
  11. ServerSide UI generation preprocessors: RoR: Embedded Ruby (erb), Grails: Embedded Groovy expressions, Tiles, SiteMesh
  12. UI Templating: RoR: There are several available amongst Haml, CoffeeScript and HandleBars, Grails: css framework plugin called Blueprint (amongst others), SiteMesh
  13. Language: Ruby is a dynamically typed language; so is Groovy. Therefore the niceties afforded by dynamic typing are available to both. However, whichever side you take in the debate over dynamic vs. static typing, I just love the fact that Groovy is an extension of Java, so I can fall back to the safety of Java if I want to. You have dynamic typing if you need dev-time flexibility, and static typing if you want run-time certainty. That, IMO, is a killer feature right there!
  14. The center of ‘gravity’: This, IMO is the main difference between the two technologies. Grails comes from the Spring camp, where there is already tremendous support for non-web based initiatives, like Enterprise Application Integration, Batch processes support, Social APIs, Data (No-SQL persistence), Persistence (JPA/Hibernate support), Tooling (Roo, STS) and Queuing (AMQP, MQ). So while Groovy has become known (to me, at least) via Grails, there is life beyond Grails too.
    While Ruby is a great dynamically typed language, there were not many projects built around it (no useful DSLs) till Rails came along, and so the ecosystem that is available today was built mainly to support Rails. I have to clarify that I am not stating that before Rails came along there was no support for non-web-based applications. There was. However, it was not as focused and organized as what we see in the enterprise Java world.
    Put another way: Grails is domain centric. Rails is web centric.

In Conclusion

Rails has a very strong, vibrant and vocal user community. But I think the euphoria is based on the earlier success of Rails, when the developer community moved from the tedious and chaotic world of web app development to the (almost) magical world that Rails offered. Since that time a lot of water has flowed under the bridge, so to speak. Several convention-over-configuration, rapid application development “frameworks” have sprung up, and leading the pack in the Java space is Grails, with the support of the Spring behemoth behind it. Here is a figure that supports that point of view. But, here‘s a Google Trends chart that (at the time of writing) says the complete opposite!

So my take on this is:
If you need to build a web-based application starting from scratch, pick one based on the expertise you have on your team. Both Rails and Grails will deliver quickly.

However, if you want to build an application that, at its core, may not be web-based (or is only minimally web-based), adopting Grails may be the better option in the long run. Or, if you have an app that is already out there and all you want to do is web-enable it, and you have Java expertise in-house, Grails may be the way to go.


The Guava EventBus and Spring Proxies

December 11th, 2012 No comments

The Guava EventBus model is a neat way to use events (both synchronous and async) in your application without having to use the explicit Listener interfaces. More can be read about the simplicity of that approach  here.

However, I recently discovered that Spring Proxies and the EventBus registration, as it’s implemented currently,  don’t play well together.

First a bit about the key differences between the implementations of the Observer/Observable pattern using the Listener Interface and the EventBus model:

  • Instead of having each of your Observer classes implement a Listener interface, you annotate one or more methods of the classes that are registered with the event bus (a minimal sketch follows this list).
  • The EventBus.register(…) method then looks for those annotations and invokes the annotated methods when events are posted to the bus.
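
To make that concrete, here is a minimal, self-contained sketch of the annotation-based model. The event type, class names and order id are made up purely for illustration; only Guava's EventBus, register/post and @Subscribe are real API.

import com.google.common.eventbus.EventBus;
import com.google.common.eventbus.Subscribe;

public class EventBusDemo {

    // A hypothetical event type; any plain object can be posted to the bus.
    static class OrderPlacedEvent {
        final String orderId;
        OrderPlacedEvent(String orderId) { this.orderId = orderId; }
    }

    // No Listener interface needed: a public, single-argument method annotated
    // with @Subscribe becomes a handler once its instance is registered.
    static class OrderNotifier {
        @Subscribe
        public void onOrderPlaced(OrderPlacedEvent event) {
            System.out.println("Handling order " + event.orderId);
        }
    }

    public static void main(String[] args) {
        EventBus bus = new EventBus();
        bus.register(new OrderNotifier());         // the bus scans for @Subscribe methods
        bus.post(new OrderPlacedEvent("ORD-42"));  // dispatched to onOrderPlaced(...)
    }
}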

The registration of the class(es) that have the @Subscribe annotation in them is usually done at initialization time. In the case of a Spring application, we can register these classes when  loading up the application context.

In the course of developing an application, typically we are not sure which Spring-managed bean will end up having the @Subscribe annotation in it. So a good idea is to register all beans with the event bus at start-up. A BeanPostProcessor can do this as follows:

First define the EventBus in an explicit xml configuration, because we cannot @Autowire a BeanPostProcessor:

...
<task:executor id="threadedEventBusExecutor" pool-size="10"/>
<bean id="eventBus" class="com.google.common.eventbus.AsyncEventBus"
      c:executor-ref="threadedEventBusExecutor"/>
<bean id="eventBusRegistration" class="com.acme.eventbus.EventBusRegistrationBeanPostProcessor"
      c:eventBus-ref="eventBus"/>
...

And next define the BeanPostProcessor like so:

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;

import com.google.common.eventbus.EventBus;

public class EventBusRegistrationBeanPostProcessor implements BeanPostProcessor {

    private final EventBus eventBus;

    public EventBusRegistrationBeanPostProcessor(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        // Register every bean; the bus simply ignores beans with no @Subscribe methods.
        eventBus.register(bean);
        return bean;
    }
}

However, here is where we discover that this approach will not work for Spring-managed beans for which Spring has created a JDK dynamic proxy. This is because the annotations on the original class are not available on the proxy. So while the proxy will be registered on the event bus, it will be ineffective because the bus does not know of the @Subscribe annotation on its methods.
Looking at how the eventBus has been implemented we see that there is only one implementation of the HandlerFindingStrategy interface: the AnnotatedHandlerFinder.

So, how do we get around this?

One approach would be to simply use the postProcessBeforeInitialization callback instead of the postProcessAfterInitialization and register the bean on the event bus before Spring has had a chance to create a proxy for it.

This may well work, except in the cases where another post-processor (like the PropertyPlaceholderConfigurer) has been used to specify the event class on the method signature of the @Subscribe method. In that case, the wrong event class will be subscribed. (There may even be an error issued at registration time, if a property value is specified.)

So, a more robust way to make this work could be to give the EventBusRegistrationBeanPostProcessor the lowest precedence (so that it runs after the other post-processors) and do something like this:

import java.lang.reflect.Proxy;

import org.springframework.beans.BeansException;
import org.springframework.beans.factory.config.BeanPostProcessor;

import com.google.common.eventbus.EventBus;

public class EventBusRegistrationBeanPostProcessor implements BeanPostProcessor {

    private final EventBus eventBus;
    private Object preInitializedBean;

    public EventBusRegistrationBeanPostProcessor(EventBus eventBus) {
        this.eventBus = eventBus;
    }

    @Override
    public Object postProcessBeforeInitialization(Object bean, String beanName) throws BeansException {
        // Remember the un-proxied instance; the matching after-init callback follows immediately.
        this.preInitializedBean = bean;
        return bean;
    }

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof Proxy) {
            // A JDK dynamic proxy hides the @Subscribe annotations; register the original bean instead.
            eventBus.register(preInitializedBean);
        } else {
            eventBus.register(bean);
        }
        return bean;
    }
}

By doing so, we see that :

  1. All BeanPostProcessors would have had a chance to post process the bean instance.
  2. Proxied beans still work as they still proxy the original bean which is now registered properly with the eventBus.

Hope this helps someone until other implementations of the HandlerFindingStrategy become available. A useful implementation would be one that looks for a marker interface (call it Subscribed, maybe), which would automatically be available on the proxy. Then there would be no need for that instanceof check in the post-processor.


Database Stored Procedure vs Middle Tier Services

May 25th, 2012 No comments

When designing a typical enterprise application on a JEE stack backed by a relational database, the question often arises: should we place business logic in the middle tier or in database stored procedures?

Like anything else, there are trade-offs in both approaches. And like anything else, the answer lies somewhere in the middle.

However, it’s important to understand the reasons and trade-offs. Here’s my attempt to dig a bit deeper into this (relatively ancient) debate.

A brief history

Stored procedures were tremendously popular in the client-server days. They provided a means to consolidate business logic and were a logical place for all “clients” to access that logic. In that scenario it was a no-brainer that they out-performed similar logic coded on the client, which had to make a network trip for each iteration over the data. Then came the concept of an application server, pushed by companies like BEA and IBM and later by open-source projects like Apache Jakarta. Moving business logic to an object-oriented middle tier made sense. However, around that same time JEE suffered the effects of an experiment gone wrong, namely EJB 2.1. The middle tier started being called the muddle tier because of the associated FUD factor. ORM frameworks like EJB CMP, TopLink and Hibernate emerged, but as first-generation ORM solutions they could not compete with the well-established stored procedure.

But then Spring and Hibernate 3.3 came on the scene and breathed a fresh lease of life into the aging JEE stack, which was rapidly acquiring the infamous “legacy” status.

Not to be left behind, Oracle and other db heavy hitters, enhanced their stored procedure offerings by introducing dynamic data structures, cached SQL and overall improved performance.

A Common Myth

The advantage of using a stored procedure is that it allows SQL to become procedural. Combine that with the fact that it executes close to the data store, and we can see why it performs well. However, there is a common myth that the SQL statement itself will perform better (faster) if executed within the context of a stored procedure; that is not true at all. We must separate the procedural processing that surrounds the SQL from the execution of the SQL statement itself, as the sketch below tries to illustrate.
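
As a rough illustration of that separation, here is a hedged sketch in Java/JDBC: the same query issued once directly from the middle tier and once via a hypothetical stored procedure GET_OPEN_ORDERS (made up for this example). The database engine does the same work on the SQL in both cases; only the procedural wrapper around it differs.

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Types;

public class OpenOrdersDao {

    // The SQL is sent directly from the middle tier.
    public int countOpenOrdersDirect(Connection con, long customerId) throws Exception {
        String sql = "SELECT COUNT(*) FROM orders WHERE customer_id = ? AND status = 'OPEN'";
        try (PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, customerId);
            try (ResultSet rs = ps.executeQuery()) {
                rs.next();
                return rs.getInt(1);
            }
        }
    }

    // The same SQL, assumed to be wrapped inside the hypothetical GET_OPEN_ORDERS procedure.
    public int countOpenOrdersViaProc(Connection con, long customerId) throws Exception {
        try (CallableStatement cs = con.prepareCall("{? = call GET_OPEN_ORDERS(?)}")) {
            cs.registerOutParameter(1, Types.INTEGER);
            cs.setLong(2, customerId);
            cs.execute();
            return cs.getInt(1);
        }
    }
}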

With that background, I will try to make a case for why it makes sense to code behavior in the middle tier rather than in the database.

Performance:

In a multi-tier application, performance needs a bit of redefinition. It can no longer be defined as the time it takes for a set of rows to be returned by a SQL query. Instead, it may make sense to define it as the time it takes from when the user makes a request to when results show up on, say, her web page. This metric has to consider network hops, caching in both the database and application tiers, and query performance.

In the picture above we see one key point about an ORM solution: there is caching in the middle tier too, which is absent if only stored procedures are used. That means better performance because of one less network hop to the database tier. In the context of the point made above, that of separating the execution of the SQL statement from the procedural logic that surrounds it, it is conceivable that the procedural logic executes against data that is cached in the middle tier. This scenario can play out in a three-tier configuration and avoid a round trip to the database.

Modularity and reuse

There is no doubt that code written in PL/SQL, T-SQL and the like can be made modular and certainly reusable. However, there are certain “niceties” that an object-oriented environment gives us which seem a bit clunky in the world of stored procedures. Modularity and reuse enablers such as separation of concerns, coding to interfaces and inversion of control are almost always used in the context of an OO ecosystem. OTOH, discussions that surround stored procedures almost always center around performance. Therefore, it would seem easier to code our business logic in an environment more conducive to modularity and reuse. Not that it cannot be done in the database; but if we were to focus on which environment provides greater support for modularity and reuse, I think the middle tier would win hands down.

Security

There is no question that data needs to be secure. But so does business logic. In fact, depending on the kind of business you run, business logic sometimes needs to be more secure than data! So now we have a situation where two areas need protection: Business logic in the middle tier and data in the database.

From the user’s point of view, security should be a seamless experience of behavior and data. Joe would like to make withdrawals (behavior)  from his  Swiss bank account (data) whereas Jill would like to transfer funds (behavior) from her New York account (data). A common pattern that meets this need is to push the database behind the corporate firewall (thus preventing all access except via the middle tier) and place all security checks in the middle tier.  This pattern plays well with business logic coded in the middle tier because often there is a grey area between security issues and business logic and one cannot tell which is which. If you place business logic in the middle tier, you don’t have to!
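
To illustrate that blending of security and business logic in the middle tier, here is a hedged sketch using Spring Security method security. The accountSecurity bean and its owns(...) check are hypothetical, and method security is assumed to be enabled (e.g. via @EnableGlobalMethodSecurity); the point is only that the authorization rule sits right next to the business operation it guards.

import org.springframework.security.access.prepost.PreAuthorize;
import org.springframework.stereotype.Service;

@Service
public class WithdrawalService {

    // The guard references a hypothetical accountSecurity bean that knows which
    // accounts the authenticated user owns; security check and business rule live together.
    @PreAuthorize("@accountSecurity.owns(authentication, #accountId)")
    public void withdraw(String accountId, long amountCents) {
        // ... business logic: validate limits, post the withdrawal, raise events ...
    }
}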

Transactional consistency

This only applies to stored procedures that start transactions.

Transactions are central to any business. In the classic example from the financial world, you wouldn’t want to allow a withdrawal without the corresponding deposit: either both succeed or both fail. In other words, what is treated as an atomic unit of work is itself behavior, or business logic. (Making only deposits and not caring about withdrawals could be a legitimate business case.) So it is logical to set the transaction boundary in the middle tier, where the business logic lives, so that it can be maintained along with the rest of the business logic (a minimal sketch follows).
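
Here is a minimal sketch of that idea using Spring's declarative transactions. FundsTransferService and AccountRepository are hypothetical names; what matters is that @Transactional draws the unit-of-work boundary around the business rule in the middle tier.

import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class FundsTransferService {

    private final AccountRepository accounts;

    public FundsTransferService(AccountRepository accounts) {
        this.accounts = accounts;
    }

    // Withdrawal and deposit commit or roll back together; the atomic unit of work
    // is declared where the business logic lives, not inside a stored procedure.
    @Transactional
    public void transfer(String fromAccountId, String toAccountId, long amountCents) {
        accounts.withdraw(fromAccountId, amountCents);
        accounts.deposit(toAccountId, amountCents);
    }
}

// Hypothetical data-access interface, shown only so the sketch is self-contained.
interface AccountRepository {
    void withdraw(String accountId, long amountCents);
    void deposit(String accountId, long amountCents);
}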

Caveat Emptor

Having made a case for placing business logic in the middle tier, I have to state: the devil is in the details! For each of the desirable dimensions above, your mileage may vary. But I hope the above illustrates the rationale for moving business logic to the middle tier. There will always be corner cases for which we will have to resort to that stalwart of yore… the mighty stored procedure!


Distributed Development with Git

May 16th, 2012 1 comment

I recently started working in an environment where we have to develop software using teams that are geographically dispersed over several timezones. There are many challenges (and benefits) to such a setup. The one I will address today has to do with team collaboration and source control.
I have been a user of Subversion and the Atlassian suite of tools for some time now, but discovered that when it comes to distributed teams, JIRA and Crucible fall short. One obvious shortcoming is that team members cannot do code reviews without checking un-reviewed code into source control. I will use this as the basis for making a case to switch to a Distributed Version Control System, and in doing so, will describe how to use Git.

Git is complex. But distributed version control is a complex problem, so it is understandable that the internals of Git make you feel like you should have a PhD in graph theory. However, in true Agile fashion, I approached the problem of understanding Git with a User Story in mind. And my User Story is the following:

“How can I use Git to allow and enforce peer code reviews, maintain a couple of branches of the product, and not have to remember hundreds of Git commands at my fingertips?”

Of the several ways to achieve the above, here is one that worked for me.

First I will describe the overall approach and then discuss each step in detail.

  1. Use EGit to create a local repo and to switch and create branches.
  2. Use GitHub to host the remote repository (it’s free)
  3. Use EGit to push a feature branch to the remote repo
  4. Use GitHub to send out Pull Requests to potential reviewers based on the push.
  5. Use Github to view Pull Requests that already exist.
  6. Use EGit to confirm that the changes in the pull request are good and to  merge into master locally
  7. Use EGit to push the master to remote.

As you can see, I’ve used EGit, an Eclipse plugin for the Git client and GitHub.com for hosting a remote repository. This choice has two benefits:

  1. Keep away from the command-line interface of Git with its formidable array of commands and options.
  2. And the bigger reason: get introduced to Git best practices that these tools default to.

Steps:

  1. Get EGit. This is an Eclipse client for Git and can be downloaded as a plugin by using this update site. Note, that there is no need to download and install Git itself.
  2. Create a project in Eclipse and add a few files to it.
  3. Click on that project and go to Team -> Share and select Git as the SCM provider and then create a new repository. Note that you can also clone an existing Git repository, which typically means that you can “copy” a remote repository to your local environment. But we will start upstream of that event just for completeness.
    It is stated in EGit documentation that it is not a good idea to create a Git repository in the eclipse workspace. Since I am not totally convinced of the reasons given, I will create a directory in my eclipse workspace called gitprojects and create the repository within that (YMMV).
  4. Press Create to create a new repository. Git allows us to have as many code repositories as we like. If you have used the git clone command for downloading an open source project, you have essentially copied a Git repository to your local machine. One nice feature of Git is that it doesn’t clutter up each directory of your code base with dot directories (hidden directories) like Subversion does. But it does create one .git directory at the top level of your directory structure. This .git directory represents a repository, not a project. You can have as many projects as you like within that repository.
  5. Make sure that you have added gitignores by going to Team|Ignore. Then ensure that you also check in .gitignore for others to use. If you goof up and ignore .gitignore, you can simply edit the .gitignore file and delete the /.gitignore entry. A far sight better than SVN’s handling of svn-ignores!
  6. Now let us do the initial commit of our project like so:
  7. By doing all above, you will have a local project and it’s shared to your local Git repo. Now go ahead and make a change to a file or files. By doing so, you are working on your local master trunk. No branches yet.  Now, I’d like to find out what I’ve changed. For that, I can simply do a Team|Synchronize View which will show me the diffs between my workspace and my local Git. Go ahead and commit the changes, just like you would, in any source control system. Just to make it look real, make another change (like adding a file) and commit again.
  8. Now let’s say that you need to start working on a User Story called Add Validation.  In the SVN world, you would have continued to work in your synched up workspace to make changes by say, modifying a file and adding another. But not so in Git. Here is where we start to use Git’s cool feature of branching.  First confirm that you are on the master branch on your local like so:
    Next, select Team|Switch To| New Branch to get the dialog shown below

    Notice that we are branching off of the master. We see that the current ‘head’ of our project has changed:
    So now we can start to develop in the for-validation branch.
  9. This time, modify the United.java class and add another class called Validation.java, and check those in. So now you have a commit on the new branch. Now the interesting part: select Team|Switch To|[master | for-validation] and watch the code in your workspace switch. That is pretty neat, because by merely changing a branch, an entire new code base is swapped in, including added and dropped files.
  10. At this point, I would like to visually see the changes I have made. So I go to Team|Show In History. The view shows me two commits on the master branch and then one on the for-validation branch.
  11. However, to see true branching behavior, let’s  start working on yet another story for say, changing-theme. But you don’t want to make that change on the validation code you just added. So we will create another branch off of master by going to Team|Switch To|master and then Team|Switch To | New Branch. This results in a dialog as below:

    Note that the source-ref represents the place you branch out from. Since we want to branch from before we started working on for-validation, select master as your source-ref. Specify a new branch called for-world-peace. By doing so, Eclipse (EGit actually) will implicitly check out the new branch and make it current (aka HEAD).
  12. Make changes on that branch (to implement world-peace), commit a couple of times, so that we can see some nice nodes. Now let’s see how it looks in a graphical representation by checking out the history:
    Here we see two commits off of the for-world-peace branch.
  13. Next I want to have someone review the changes I have made on the for-validation branch. To do that I will need to push that branch over to GitHub and issue a pull request to all contributors of the repository.
    But I do not yet have a repository on GitHub to which I can push. So I go over to GitHub, log into my account and create a “New Repository”. In the case of a corporate repository, this step will not be necessary, as hopefully that repository will already have been created by your company admin and you will have been given access to it.
    Once you have created that repository, you can cut-n-paste its URL and then, in your local Eclipse, go to Team|Remote|Push and enter it like so:

    Upon hitting “Next” I see the following dialog that basically asks me to map my local branch to a remote branch (defaulting to the same name):

    Finally getting:

    Hop over to github.com to see if your files actually made it… they should have.

  14. On github.com, you can now switch over to the just-pushed branch (from a drop-down) and issue a pull request to the committers of the repository.
  15. Let’s switch roles now, and assume that you are the one reviewing the pull request. GitHub gives you nice tools to diff the modified files, and you are able to add comments to each file. After this interaction with the developer, let us suppose that you are OK with the changes and would like to merge for-validation into the master branch on the remote.
    The point to note is that the merge will be done on a local developer’s machine and then pushed back to remote.
  16. Go to Team|Remote|Fetch From.. and fill in the following dialog
    Here we are pulling the remote master into the local master branch.
  17. Check Team|History to ensure that the picture of the repository looks good. Ensure that the current branch is master and merge for-validation into it.
  18. Select for-validation and hit OK. There are no conflicts here, so we have a clean merge and GitX displays as below:
    As we can see, master and for-validation now sit at the same head.
  19. Now let’s merge for-world-peace and master. Do the same thing, switch to master (if not already there) and merge for-world-peace into it.
    Here we see a slight problem. There is a conflict because the same file was modified in the two branches.
    Correspondingly, Git modifies the file(s) in conflict with the following diff markers (the stuff above the ======= is in HEAD; the rest is in the branch being merged):

    public class SouthWest {
    <<<<<<< HEAD
        //A line to support validation
    =======
        //To support worldPeace
        //Some more world peace
    >>>>>>> refs/heads/for-world-peace
    }

    So fix the file as you see fit and then select “Add to Index” from the menu below. Follow that with a commit so that the changes are actually merged. At this point we have merged for-validation into master locally. Ensure that Team|Show In History shows the right branches.

  20. Now it’s time to push master to the remote repository so that it can subsequently be built and deployed (by a Continuous Integration tool, say). Use Team|Remote|Push to push the master branch to the remote, much like we pushed the for-validation branch earlier on.

Gotchas:

  1. When accessing a remote repository via EGit you may see:
    “Exception caught during execution of ls-remote command”

    I found that this happens if you have NOT used EGit’s project import wizard to pull in the project initially from the remote repo (instead, you may have used the command-line git clone <url>).
    To fix this, if possible, use File|Import|From Git … and get the project that way.
    Now, if you go to Team|Remote|Fetch From… you will be able to complete the dialog that would otherwise issue the error above.

  2. EGit, as of May 2012, doesn’t support the git stash command. That’s a bummer, because stash is a very useful thing to have around, given the maddening “feature” that does not allow one to switch branches without committing the changes you may have currently made. To get around that, since there is no EGit equivalent, I had to resort to the CLI :(
    • git stash
    • git checkout the-other-branch
    • git stash pop

    Now you will have all your changes in the newly checked out branch.

What’s JGit

JGit is the pure Java implementation of Git (which itself is written in C, Perl, shell and Ruby(?)). EGit is an OSGi plugin for Eclipse based on JGit. A short sketch follows.
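
For the curious, here is a hedged JGit sketch of the same kind of flow EGit drives from the IDE (init, stage, commit, branch, switch). It assumes a reasonably recent JGit on the classpath; the paths, messages and branch name are illustrative only.

import java.io.File;

import org.eclipse.jgit.api.Git;

public class JGitSketch {
    public static void main(String[] args) throws Exception {
        // Create (or reuse) a repository under the workspace, stage and commit,
        // then create and switch to a feature branch -- the CLI-free equivalent
        // of the EGit steps above.
        try (Git git = Git.init().setDirectory(new File("gitprojects/demo")).call()) {
            git.add().addFilepattern(".").call();
            git.commit().setMessage("Initial commit").call();
            git.branchCreate().setName("for-validation").call();
            git.checkout().setName("for-validation").call();
            // ...make changes, commit on the branch, then merge or push once a remote is set up.
        }
    }
}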

Resources:

  • EGit: This is a comprehensive EGit guide: http://wiki.eclipse.org/EGit/User_Guide#Getting_Started
  • The Git Book (I found Chapter 5 especially useful, where usage scenarios are discussed): http://git-scm.com/book/en/Distributed-Git-Contributing-to-a-Project

Java Property Management Across Build Environments

April 1st, 2012 2 comments

Using Java properties to specify “boundary conditions” or user preferences of any non-trivial application is standard practice in many development shops. However, even though using properties is a good abstraction, if the property files are packaged with the application they are not exactly 100% configurable at runtime.

On the other hand, if they are not packaged with the application but instead placed on the application’s classpath, they can be configured at runtime. But that is not a really palatable option from a source code management point of view. In this post, I will talk about the pros and cons of these two approaches, and then describe a hybrid approach that works well and keeps everyone happy!

There are two technologies that I will rely on for the proposed solution:

  • Maven resource filtering and profiles
  • Spring’s PropertyPlaceholderConfigurer

Let us assume that the following build environments exist:

  • DEV – local developer’s environment.
  • NIGHTLY – build server’s continuous build
  • TEST – internal testing
  • UAT – User Acceptance Testing
  • STAGE – pre-production
  • PROD – production

Approach 1: Build-time Injection of Properties

This approach uses Maven’s Resource filtering and Profiles.

First create the following property files and place them in src/main/filters of your mavenized project:

  • acme-common.properties – Holds properties that are not environment specific, like customer.telephone.number, company.tag.line etc
  • acme-${env}.properties – There should be as many files as there are build environments. For example, acme-dev.properties, acme-nightly.properties etc. These files contain properties that are environment specific. For example, datasource urls including userId and passwords, JMS endpoints, email flags etc.

Optionally create one more file as below and place it in ${user.home}

  • my-acme.properties – This holds all properties that are specific to the developer. Examples could be email address of the developer, local db url etc.

Since this property file is developer specific, notice that it is not in the source tree (and therefore not in source control).

Now configure the filter section of the pom of your project like so:

    <build>
        ...
	    <filters>
	      <filter> src/main/filters/acme-common.properties </filter>
	      <filter> src/main/filters/acme-${env}.properties </filter>
	    </filters>
		<resources>
			<resource>
				<directory>src/main/resources</directory>
				<filtering>true</filtering>
			</resource>
		</resources>
          ...
	</build>

Note that files specified later in the filters section override the earlier ones for the same property name.

In addition, set up a profile section like so:

    <profiles>
        <profile>
            <id>DEV</id>
             <properties><env>dev</env></properties>
        </profile>
        <profile>
            <id>NIGHTLY</id>
            <properties><env>nightly</env></properties>
        </profile>
		...
        <profile>
            <id>PROD</id>
            <properties><env>prod</env></properties>
        </profile>
    </profiles>

By doing the above and issuing mvn package -PUAT, we ensure that values specified in acme-common.properties, acme-uat.properties and my-acme.properties are injected appropriately into the XML files that reside in the resources directory. Therefore the war or jar or ear (whatever the final artifact is) will have the correct values in it.

Note that in this approach there should not be any property file in the final classpath of the artifact, because properties are already injected into the appropriate files at build time.

Pros:

  • Since property files are part of the code base, and the code base is built for a specific environment, there is a lower chance of the wrong file being used in the wrong environment.
  • There is no deploy-time  modification of the build environment. There is only one process that modifies the run-time environment, and that is the deploy process.

Cons:

  • Properties cannot be changed without re-building and re-deploying the application. Changing property file values in a packaged war can lead to all sorts of problems, from cached-property issues to the webapp automatically reloading when the timestamp on the property file changes (depending on the servlet container).
  • Secure passwords have to be checked into source control.
  • The application has to be re-built for every environment.

Approach 2: Run-time substitution of Properties

In this approach, we will use Spring’s PropertyPlaceHolderConfigurer and a property file that is placed on the application’s classpath.

First create one property file (or several, for that matter) that hold all the properties used by the application and place it in a location outside source control.

Then add the following bean definition in Spring’s application context file:

<bean id="propertyPlaceholderConfigurer" class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
  <property name="locations">
    <list>
      <value>classpath:acme-runtime.properties</value>
    </list>
  </property>
  <property name="ignoreResourceNotFound" value="true"/>
</bean>

Or, if you prefer, the more recent:

<context:property-placeholder system-properties-mode="OVERRIDE" location="acme-runtime.properties"/>

By doing the above, we will have ensured that at run time all properties are substituted with the correct values. A short example of consuming such a property follows.
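
For completeness, here is a minimal sketch of consuming a substituted property in code. The keys (acme.support.email, acme.datasource.url) are hypothetical; the placeholders are resolved by the configurer above from acme-runtime.properties (or from a system property, if you used the OVERRIDE variant).

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class SupportSettings {

    // Resolved at startup from acme-runtime.properties by the placeholder configurer.
    @Value("${acme.support.email}")
    private String supportEmail;

    @Value("${acme.datasource.url}")
    private String datasourceUrl;

    public String getSupportEmail() { return supportEmail; }
    public String getDatasourceUrl() { return datasourceUrl; }
}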

Pros:

  • Properties can be changed on the fly, without having to build and re-deploy the application.
  • The property file(s) are not under source control, so secure passwords are not in source control
  • Password management can happen for all environments by a trusted source instead of the development group.

Cons:

  • Since the files are not under source control, there is a chance of the wrong file getting into the wrong environment. This is especially cumbersome in development environments where a property file will have to be passed around to each developer separately to get her up and running.
  • A separate process will have to be implemented (apart from the deploy process) to manage these property files.

Approach 3: Combine the two approaches

Here we will use Maven filtering and Spring’s PropertyPlaceHolderConfigurer together.
See figure below:

However, the key is the balance between how much is achieved via Build time Injection vs Run time substitution. Depending on what is done using each of these technologies, the above diagonal line can be made to move to the right (for more build-time behavior) or to the left (for more run-time behavior).

Here are the steps for one possible configuration:

  1. Set Maven filtering on for the resources directory like so. Note that this time there are no filter files.
  2. 	<build>
    	    ...
    		<resources>
    			<resource>
    				<directory>src/main/resources</directory>
    				<filtering>true</filtering>
    			</resource>
    		</resources>
    		...
    	</build>
  3. Create the following property files in your resources directory. All the property files will be in the packaged artifact.
  4.       src/main/resources/
                             |
                             +-- properties/
                                         |
                                         +-- acme-common.properties
                                         +-- acme-dev.properties
                                         +-- acme-qas.properties
                                         +-- acme-prd.properties
  5. Specify the following profiles and property values in the pom.xml
  6.     <profiles>
            <profile>
                <id>DEV</id>
                 <properties>
                    <env>dev</env>
                    <log-file-location>${user.home}/logs/acme-dev.log</log-file-location>
                    <acme-runtime-properties-location>${user.home}/acme-runtime.properties</acme-runtime-properties-location>
                </properties>
            </profile>
            <profile>
                <id>QAS</id>
                 <properties>
                    <env>qas</env>
                    <log-file-location>/var/tmp/logs/acme-qas.log</log-file-location>
                    <acme-runtime-properties-location>/var/tmp/acme-runtime.properties</acme-runtime-properties-location>
                </properties>
            </profile>
            <profile>
                <id>PRD</id>
                 <properties>
                    <env>prd</env>
                    <log-file-location>/opt/app/logs/acme-prd.log</log-file-location>
                    <acme-runtime-properties-location>/opt/app/acme-runtime.properties</acme-runtime-properties-location>
                </properties>
            </profile>
        </profiles>
  7. Lastly, place the Spring contexts in src/main/resources and introduce the PropertyPlaceholderConfigurer in the bootstrap context like so:
  8.     	<context:property-placeholder system-properties-mode="OVERRIDE"
    			location="classpath:properties/acme-common.properties,
    			classpath:properties/acme-${env}.properties,
    			file://${acme-runtime-properties-location}"
    			ignore-resource-not-found="true"
    			/>
Tying it all together

By specifying the configuration like above, we will have achieved:

  1. The ability to place environment-specific properties in the environment-specific property file(s) and have the correct file passed on to the PropertyPlaceholderConfigurer at build time. Note that even though all the property files will exist in the final artifact, only the correct one will be specified to the PropertyPlaceholderConfigurer to enable run-time substitution.
  2. The PropertyPlaceholderConfigurer will substitute the correct values in the rest of the Spring contexts at run time from the passed-in common and environment-specific files.
  3. Those properties that are not specified in the common or environment specific property files are picked up from the runtime-properties file. (Even if the same property is specified in the common or environment specific files AND the runtime property file, the runtime property will trump the similarly named property in the other files). The runtime property file, therefore becomes a good place to specify passwords and other secure information that will not be checked into source control.
  4. Note that the run-time property file has been specified using the file protocol. This implies that the file can be placed anywhere in the file system, outside the application or system classpath, and certainly out of source control.
  5. Since the log4j.xml file (which has the log file location specified via a property) is NOT managed by Spring in this configuration, its log-file-location property is specified outside the property files and directly in the pom profiles.
  6. Note the use of ${user.home} indirection for log file location. This is to prevent unnecessary changes in the pom which would occur if each developer were to specify her own log-file-location differently.

Now that we know the order in which the files above are accessed at run time, we can decide for which environments we would like to place secure information in source control (by placing it in environment-specific property files), and for which not (by placing it in the runtime property file).
