Archive

Posts Tagged ‘spring’

Angular JS From a Different Angle!

August 18th, 2013 18 comments

I recently made the switch to a full-stack JavaScript front end framework for an enterprise application that we are building.

In this post, I'll talk about the integration (or rather, the lack thereof) of the development methodologies used for developing a server-side RESTful API vs. a client-side AngularJS application. Along the way, I'll add some opinion to the already opinionated Angular framework.

First, let's look at why JavaScript is relevant today.

The graphic below nicely shows the evolution of Javascript from ad-hoc snippets to libraries to frameworks. Of these frameworks, Angular seemed to be getting a lot of traction (being pushed by Google helps :) ).

As we can see, we went from ad-hoc JavaScript snippets to libraries to full-fledged frameworks. It stands to reason, then, that these frameworks be subject to the same development rigor that is accorded to server-side development. In pursuing that objective, tho', we find that integrating the tools that enforce that rigor is not all that seamless.

Angular JS development is what I would call web-centric. That makes sense, given that it runs in a browser! However, if we focus all our energies on building services (exposed via an API, RESTful or not), and a web front-end is just one of many ways that those services are consumed, then the web-centric nature of Angular development can get a bit non-intuitive.

For a server-side developer, when starting with the Angular stack, issues like the following can become a hindrance:

Where’s the data coming from?

If you want to run most of Angular's samples, you need to fire up a Node JS server. Not that that is insurmountable. But I didn't sign up for NodeJS, just Angular. Now I have to read thru the Node docs to get the samples up and running.

Next, testing:

Testing, or rather, the ability to write tests, has a big role to play in the JavaScript renaissance. Well... ok... let's write some tests! But wait! I need to install PhantomJS or NodeJS or some such JS server to fire up the test harness! Oh, crap! Now I've got to read up on Karma (aka Testacular) to run the tests.

What about the build:

How do I build my Angular app? Well... the docs and samples say: use npm. What's that? So now I have to google and start using the Node Package Manager to download all dependencies. Or get Grunt! (Grunt!)

All I want to do is bolt an Angular front-end onto an existing REST endpoint. Why do I need all this extra stuff? Well... that is because Angular takes a web-centric approach and is largely influenced by the Rails folks (see my related post here), whereas enterprise services treat the front-end as an afterthought :)

So, before I get all the Angular (and Ruby on Rails) fans all worked up, here’s the good news!

I wrote up an application that bolts an Angular JS front-end onto a Java EE CRUD application (Spring, Hibernate... the usual suspects) on the back-end. It's a sample, therefore it obviously lacks certain niceties like security, but it does make adoption of Angular easier for someone more familiar with Java EE than Ruby on Rails.

Source and Demo

You can download and check out Angular EE (aka PointyPatient) here. In the rest of this post, I’ll refer to this sample app, so it may help to load it up in your IDE.

You can also see the app in action here.

Opinions, Opinions

One of Angular’s strengths is that it is an opinionated framework. In the cowboy-ruled landscape of the Javascript of yore, opinion is a good thing! In Angular EE, you will see that I’ve added some more opinion on top of that, to make it palatable to the Java EE folks!

So here is a list of my opinions that you will see in Angular EE:

Angular Structure

The structure of a webapp is largely predicated on whether it is a servlet or not. Besides the servlet specification, which mandates the existence of a web.xml, all other webapp structure is a matter of convention. The Angular sample app, Angular-Seed, is not a servlet. Notwithstanding the fact that Angular (and all modern front-end frameworks) are pushing for a Single Page App (SPA), I still find servlets a very alluring paradigm. So here's my first opinion: rather than go for a pure SPA, I've made Angular EE's web application a servlet that is also an SPA.

If you compare the directory structure on the left (Angular-Seed) with the one on the right (the PointyPatient webapp), you will see that the one on the right is a servlet that has a WEB-INF/web.xml resource. It also has an index.html at the root. This index.html does nothing but a redirect, like so:

<meta http-equiv="refresh" content="0; url=app/index.html">

It is the index.html inside the app directory that bootstraps the angular app. And so the context root of the webapp is still the webapp directory, not webapp/app.

So what's the advantage of making this a servlet? For one, you can use the powerful servlet filtering mechanism for any pre-processing that you may want to do on the server before the SPA is fired up. The web.xml is the touchpoint where you would configure all servlet filters.

For another, instead of having one SPA, would it not be nice if one webapp could serve up several SPAs?

For example, let’s say you have an application that manages patients, doctors and their medication in several hospitals. I can easily see the following SPAs:

  • Bed Management
  • Patient-Drug Interaction
  • Patient-Doc Configuration
  • Patient Records

Usually, a user will use only one SPA, but on occasion will need to cross over to a different one. All the above SPAs share a common http session, authentication and authorization. The user can switch between them without having to log on repeatedly. Why load up all the functionality in a browser when only a subsystem may be needed? Use server-side (servlet) features to decide which SPAs to fire up, depending on who's logged in (using authorization, roles, permissions etc.). Delay the loading of rarely used SPAs as much as possible.

For the above reasons, I think it is a good idea to serve up your SPA (or SPAs) within the context of a servlet.

Now let’s look at the structure of just the Angular part:

Again, on the left is AngularSeed and on the right, PointyPatient.

There is no major change here, except that I prefer views to partials (in keeping with the MVVM model).

And secondly, I preferred to break out controllers, services, directives and filters into their own files. This will definitely lead to fewer source-control problems with merges.

app.js still remains the gateway into the application with routes and config defined there. (More on that later).
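A sketch of what that route wiring typically looks like may help. The module, view and controller names below are illustrative, not lifted from PointyPatient, and the `$routeProvider` stub stands in for Angular's real provider so the shape of the config can be seen outside a browser; in the real app.js this sits inside `angular.module(...).config(function ($routeProvider) { ... })`.

```javascript
// Stand-in for Angular's $routeProvider, just to show the shape of app.js
var routes = {};
var $routeProvider = {
  when: function (path, config) { routes[path] = config; return this; },
  otherwise: function (config) { routes['*'] = config; return this; }
};

// Each view gets a route; both views can share one controller (see below)
$routeProvider
  .when('/patients',     { templateUrl: 'views/PatientList.html', controller: 'PatientController' })
  .when('/patients/:id', { templateUrl: 'views/Patient.html',     controller: 'PatientController' })
  .otherwise({ redirectTo: '/patients' });
```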


Project Structure

Now that we have looked at differences in the Angular app, let's step back a little and look at the larger context: the server-side components. This becomes important only because we want to treat the Angular app as just another module in our overall project.

I am using a multi-module Maven project structure and so I define my Angular app as just another module.

  • Pointy-api is the REST endpoint for my services
  • Pointy-build is a pom project that aggregates the Maven reactor build.
  • Pointy-domain is where my domain model (hopefully rich) is stored
  • Pointy-parent is a pom project for inheriting child projects
  • Pointy-services is where the business logic resides and is the center of my app.
  • Pointy-web is our Angular app and the focus of our discussion

Anatomy of the Angular App

A Java EE application has layers that represent separation of concerns. There is no reason we cannot adopt the same approach on the Angular stack.

As we see in the picture below, each layer is unilaterally coupled with its neighbor. But the key here is dependency injection. IMO, Angular's killer feature is how it declares dependencies in each of its classes and tests (more on that later). PointyPatient takes advantage of that, as can be seen here.

Let us discuss each layer in turn:

Views: HTML snippets (aka partials). There is no "logic" or conditionals here. All the logic is buried either in Angular-provided directives or in your own directives. An example would be the use of the ng-show directive on the alert directive in the Patient.html view. Conditional logic to show/hide the alert is governed by two-way binding on the view-model that is passed to the directive. No logic means no testing of the view. This is highly desirable, because the view is mainly the DOM, and the DOM is the most difficult/brittle thing to test.

Controllers: Although it may seem, looking at some of the samples, that we should end up with a controller per view, in my opinion a controller should be aligned to a view-model, not a view. So in PointyPatient we see that we have one controller (PatientController) that serves up both views (Patient.html and PatientList.html), because the view-models for these two views do not interfere with each other.

Services: This is where common logic that does not depend on a view-model is processed.

  • Server-side access is achieved via Restangular. Restangular returns a promise, which is passed on to the controller layer
  • An example of business logic that would make sense in this layer would be massaging Restangular-returned data by inspecting user permissions for UI behavior. Or date conversions for UI display. Or enhancing JSON that is returned by the db with JavaScript methods that can be called in the view (color change etc.).
    There is a subtle difference between the client- and server-side service layers: the client-side service layer holds business logic that is UI related, yet not view-model related. The server-side service layer holds business logic that should have nothing to do with the UI at all.
  • Keep the view-model (usually the $scope object) completely decoupled from this layer.
  • Since services return a promise, it is advisable not to process the error part of a promise here, but to pass it on to the controller. By doing so, the controller is well suited to change the route or show the user appropriate messages based on the processing of the error block of the promise. This can be seen in PatientController and PatientService
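A plain-JavaScript sketch of that division of labor (all names hypothetical; the real code uses Restangular and Angular's $scope/$location). The service returns the promise untouched; only the controller decides what success or failure means for the view-model.

```javascript
// Service layer: no error handling here -- the promise's error block flows to the caller
function makePatientService(restClient) {
  return {
    getPatient: function (id) { return restClient.get('/patients/' + id); }
  };
}

// Controller layer: owns the reaction -- a user-visible message, or a route change
function patientController($scope, patientService, $location) {
  patientService.getPatient(42).then(
    function (patient) { $scope.patient = patient; },
    function (error) {
      $scope.alertMessage = 'Could not load patient (status ' + error.status + ')';
      if (error.status === 401) { $location.path('/login'); }
    }
  );
}
```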

Layers

Since we have defined layers on both, the Angular stack and the server-side, an example may help solidify the purpose of the layers. So, using this rather contrived example in the medical area, here are some sample APIs in each layer:

Server side services:
This is where ‘business logic’ that is UI independent lives.

  • Invoice calculatePatientInvoice(Long patientId);
  • boolean checkDrugInteractions(Long patientId, Long prescriptionId);
  • boolean checkBedAvailability(Long patientId, LodgingPreference preference);
  • List<Patient> getCriticalPatients();
  • int computeDaysToLive(Long patientId);

The number of days a patient has to live will not depend on whether we use Angular for the UI or not :) . It will depend, tho', on several other APIs available only on the server-side (getVitals(), getAncestoralHistory() etc.).
Server Side controllers:
If we use Spring MVC to expose services via REST, then the controllers are a very thin layer.
There is no business logic at all. Just methods that expose the REST verbs which call the services in turn.

  • getPatientList();
  • getPatient(Long patientId);
  • createPatient(Patient patient);
  • deletePatient(Long patientId);

Angular services:
This is where 'business logic' that is UI dependent lives.
These services are used by several Angular controllers. This could involve cached-JSON massaging or even server-side calls. However, the processing is always UI related.

  • highlightPatientsForCurrentDoctorAndBed();
    Assuming that doctorId and bedId are JSON data in the browser, this method changes the color of all patients assigned to the current doc and bed.
  • showDaysAgoBasedOnLocale(date);
    Returns 3 days ago, 5 hours ago etc instead of a date on the UI.
  • computeTableHeadersBasedOnUserPermission(userId);
    Depending on who’s logged in, grid/table headers may need to show more/less columns.
    Note that it is the server based service that is responsible for hiding sensitive data based on userId.
  • assignInvoiceOverageColor(invoice);
    Make invoices that are over 90 days overdue, red, for instance.
  • showModifiedMenuBasedOnPermissions(userId);
    Hide/disable menus based on user permissions (most likely cached in the browser).
  • computeColumnWidthBasedOnDevice(deviceId);
    If a tablet is being used, this nugget of info, will most likely be cached in the browser.
    This method will react to this info.
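A minimal sketch of one method from the list above (the thresholds are illustrative and the locale handling is elided). Note what makes it an Angular-service candidate: no $scope and no server call, just presentation logic that several controllers can share.

```javascript
// UI-only service logic: turn a date into "N hours ago" / "N days ago" text.
// The optional 'now' parameter is for testability; it defaults to the clock.
function showDaysAgoBasedOnLocale(date, now) {
  now = now || new Date();
  var hours = Math.floor((now - date) / (1000 * 60 * 60));
  if (hours < 1)  { return 'just now'; }
  if (hours < 24) { return hours + ' hours ago'; }
  return Math.floor(hours / 24) + ' days ago';
}
```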

Angular controllers:
These methods are view-model ($scope) dependent. They are numerous and shallow.
Their purpose in life is to handle (route) error conditions and assign results to the scope.

  • getDoctorForPatient(Long patientId);
    This massages local (in-browser) patient-doctor data, or accesses the server via Angular services -> REST -> server services, and assigns the result to a scope variable.
  • getEmptyBedsInZone(zoneId);
    scope assignment of the beds

The main difference is that here the result is assigned to the $scope, unlike the Angular Service, which is $scope independent.

Testing

While most JS frameworks emphasize the importance of testing JavaScript (and there are enough JS testing frameworks out there), IMO it's only Angular that focuses on a key enabler for meaningful unit testing: dependency injection.

In PointyPatient, we can see how Jasmine tests are written for Controllers and Services. Correspondingly, JUnit tests are written using Spring Testing Framework and Mockito.

Let’s look at each type of test. It may help to checkout the code alongside:

  1. Angular Controller Unit tests: Here the system under test is the PatientController. I have preferred to place all tests that pertain to PatientController in one file, PatientControllerSpec.js, as against squishing all classes' tests into one giant file, ControllerSpec.js. (This works better from the source-control perspective also.) The PatientService class has been stubbed out by using a Jasmine spy. The other noteworthy point is the use of the $digest() function on the $rootScope. This is necessary because the call to patientService returns a promise that is typically assigned to a $scope variable. Since $scope is evaluated when the DOM is processed ($apply()'ed), and since there is no DOM in the case of a test, the $digest() function needs to be called on the rootScope (I am not sure why localScope.$digest() doesn't work tho').
  2. Angular Service Unit Tests: Here the system under test is the PatientService. Similar to PatientControllerSpec.js, PatientServiceSpec.js only caters to code in PatientService.js. Restangular, the service that gets data from the server via RESTful services, is stubbed out using Jasmine spies.
    Both the PatientControllerSpec.js and PatientServiceSpec.js can be tested using the SpecRunner.html test harness using the file:/// protocol.
    However, when the same tests are to be run with the build, see the config of the jasmine-maven-plugin in the pom.xml of the pointy-web project. The only word of caution here is the order of the external dependencies that the tests depend on and which are configured in the plugin. If that order is not correct, errors can be very cryptic and difficult to debug.
    These tests (Unit tests) can therefore be executed using file:///…/SpecRunner.html during development and via the maven plugin during CI.
    In this sense, we have run these unit tests without using the Karma test runner, because, in the end, all the Karma test runner does in the context of an Angular unit test is watch for changes in your JS files. If you are ok with not needing that convenience, then Karma is not really necessary.
  3. End-To-End Tests: These tests are run using a combination of things:
    First, note the class called: E2ETests.java. This is actually a JUnit test that is configured to run using the maven-failsafe-plugin in the integration-test phase of the maven life-cycle.
    But before the failsafe plugin is fired up, in the pre-integration-test phase, the maven-jetty-plugin is configured to fire up a servlet container that actually serves up the Angular servlet webapp (pointy-web) and the RESTful api webapp (pointy-api), and then stops both in the post-integration-test phase.
    E2ETests.java loads up a Selenium driver, fires up Firefox, points the browser url to the one where Jetty is running the servlet container (See pointy-web’s pom.xml and the config of the maven-jetty-plugin).
    Next we can see in scenario.js, we have used the Angular supplied API that navigates the DOM. Here we navigate to a landing page and then interact with input elements and traverse the DOM by using jQuery.
    If we were to run these tests using the Karma E2E test harness, we see that Karma runs the E2E test as soon as the scenario.js file changes. A similar (but not same) behavior can be simulated by running
    mvn install -Dtest=E2ETests
    on the command line.
    It is true, tho', that running tests using Maven will take much longer than its Karma counterpart, because Maven has to go thru its lifecycle phases up until the integration-test phase.
    But if we adopt the approach that we write a few E2E tests to test the happy-path and several unit tests to maximize test coverage, we can mitigate this disadvantage.
    However, as far as brittleness is concerned, this mechanism (using Selenium) is as brittle (or as tolerant) as the corresponding Karma approach, because ultimately they both fire up a browser and traverse the DOM using jQuery.
  4. Restangular is not tested within our application because it’s an external dependency, altho it manifests as a separate layer in our application.
  5. On the server-side we see the Spring MVC Test Integration that is part of the Spring 3.2+ framework. This loads up the web-application context, including security (and all other filters, including the CORS filter) that are configured in the web.xml. This is obviously slow (because of the context being loaded) and therefore we should use it only for testing the happy path. Since these tests do not roll back db actions, we must take care to set up and tear down our test fixtures.
  6. Next we have the Spring Transactional Tests. These integration tests use the Spring Test Runner which ensures that db transactions are rolled back after each test. Therefore tear-downs are not really needed. However, since the application context is loaded up on each run, these tend to be slow runners and must be used to test the happy path.
  7. Next we have Service Unit tests: PatientServiceTests.java: These are unit tests that use the Mockito test library to stub out the Repository layer. These are written to maximize test coverage and therefore will need to be numerous.
  8. Repository Unit tests (PatientRepositoryTests.java) are unit tests that stub out calls to the database by replacing the spring datasource context by a test-datasource-context.xml

Environmentally Aware!

Applications need to run in various 'environments' like development, test, qa and production. Each environment has certain 'endpoints' (db urls, user credentials, JMS queues etc.) that are typically stored in property files. On the server side, the Maven build system can be used to inject these properties into the 'built' (aka compiled or packaged) code. When using the Spring framework, a very nice interface, PropertyPlaceholderConfigurer, can be used to inject the correct property file using Maven profiles. (I've blogged previously about this here.) An example of this is the pointy-services/src/main/resources/prod.properties property file that is used when the -Pprod profile is invoked during the build.
The advantage of this approach is that properties can be read from a hierarchy of property files at runtime.

In Angular EE, I have extended this ability of making the code 'environmentally aware' to the JS stack also. However, property injection happens only at build time. This is effected using Maven profiles, as can be seen in the pom.xml of the pointy-parent module. In addition, please see the environment-specific property files in pointy-web/src/main/resources. Lastly, check out the customization of the maven-war-plugin, where the correct ${env}.properties file is filtered into the war at build time. You will see how the REST endpoint is injected into the built war module in app.js for different environments.

In summary

We have seen how we can use a homogeneous development environment for building an Angular application. Since user stories slice vertically across the entire stack, there is value in striving for that homogeneity, so that if a story is approved (or rolled back), it affects the entire stack, from Angular all the way to the Repository.

We have also seen a more crystallized separation of concerns in the layers that are defined in the Angular stack.

We have seen how the same test harness can be used for server and client side tests.

Lastly we have seen how to make the Angular app aware of the environment it runs in, much like property injection on the server side.

I hope this helps more Java EE folks in adopting Angular JS, which I feel is a terrific way to build a web front end for your apps!

Using Spring Security to Secure an Existing Web Application

September 15th, 2010 1 comment

Recently I was involved in securing a web application using Spring Security. The web application already had a "home-grown" security module in place. Therefore, adding Spring Security to the existing application required going beyond the "intelligent defaults" and peeling back the layers to understand the touch points between Spring Security and the web application.

Although Spring Security is highly configurable and extendable, and follows the coding-to-interfaces paradigm to a T, it is still geared towards modern applications, aka web applications. To really get a handle on what's going on under the covers, we'll also look at securing a plain Java application, with no servlet container to run in.

Those who are new to Spring Security, or those who have only seen it in its Acegi days, will notice a difference in the way Spring Security is configured. Instead of specifying all the relevant beans in the Spring context, Spring Security makes use of Spring namespaces.

Spring namespaces allow the user to specify some elements in a Spring application context, and by merely including or excluding some elements, the Spring core classes will register the appropriate classes as Spring-managed beans. (The only way to see what namespace elements and attributes are available is to look at the security namespace schema. Although the documentation talks about the namespace, it is not comprehensive; what I found more practical was to get the Spring plugin for Eclipse and allow code-completion to tell you what's available.)

Since web applications are where Spring Security provides the most out-of-the-box features, let's start with the web.xml. The following lines tell the servlet container to add a virtual filter chain to all URLs, ensuring that the class DelegatingFilterProxy is called before the HttpRequest is passed on to any servlet that may be configured additionally.

	<filter>
	  <filter-name>springSecurityFilterChain</filter-name>
	  <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
	</filter>

	<filter-mapping>
	  <filter-name>springSecurityFilterChain</filter-name>
	  <url-pattern>/*</url-pattern>
	</filter-mapping>

springSecurityFilterChain is a Spring-managed bean that is configured by the Spring namespace when the container starts up. The DelegatingFilterProxy is where the other (approximately a dozen) filters are configured.
With that background let’s start by looking at Authentication.

Authentication

The purpose of authentication is twofold: check the user's credentials and, if successful, place an Authentication object in a ThreadLocal<SecurityContext> variable.

Let's look at a web application scenario. As stated earlier, the URL is passed through a series of filters. Here is the chronology of events:

1. One of the filters, the UsernamePasswordAuthenticationFilter, creates a UsernamePasswordAuthenticationToken (which is a subclass of Authentication) and provides it to the ProviderManager. At this point the UsernamePasswordAuthenticationToken contains just the entered username and password.

2. The ProviderManager (which is an instance of AuthenticationManager) loops through all the AuthenticationProviders that are registered with it and attempts to pass the token to any provider that accepts it.

3. Each provider calls its authenticate(…) method. One such provider is the DaoAuthenticationProvider, which has been configured with a UserDetailsService. The UserDetailsService is where SQL can be specified to access your custom schema and return a UserDetails object. This UserDetails object is used to fill out the missing password and authorities in the UsernamePasswordAuthenticationToken. An example of a customized UserDetailsService is here.

4. The ProviderManager, if configured with a PasswordEncoder, encodes the entered password and compares it with the password from the UserDetails object; if they match, it passes the Authentication object (UsernamePasswordAuthenticationToken) on to the next filter.

5. The SecurityContextPersistenceFilter persists the Authentication object in a SecurityContextHolder that is bound to a ThreadLocal<SecurityContext> variable.

All the above happens if you configure the namespace like so:

	  <authentication-manager>
	    <authentication-provider user-service-ref='acmeUserDetailsService' >
	      <password-encoder ref="passwordEncoder" />
	    </authentication-provider>
	  </authentication-manager>

	  <beans:bean id="acmeUserDetailsService"
	      class="com.acme.security.AcmeUserDetailService">
	    <beans:property name="dataSource" ref="pooledDataSource"/>
	  </beans:bean>

	 <beans:bean id="passwordEncoder" class="com.acme.security.AcmePasswordEncoder" />

Now let us look at how we can achieve (almost) the same effect if we were to secure a plain Java application. There are no servlet filters to rely on, so we have to resort to the APIs. Assuming you have a Swing application, you will have to build the UI that accepts a username and password and then call the following code:

public boolean login(String username, String password) {

	boolean authenticated = false;

	// Load the UserDetails (with authorities) for the entered username
	JdbcDaoImpl userDetailService = new JdbcDaoImpl();
	UserDetails ud = userDetailService.loadUserByUsername(username);

	UsernamePasswordAuthenticationToken token =
		new UsernamePasswordAuthenticationToken(username, password, ud.getAuthorities());

	try {
		Authentication auth = authenticationManager.authenticate(token);

		// Places the Authentication in a ThreadLocal for future retrieval
		SecurityContext securityContext = SecurityContextHolder.createEmptyContext();
		securityContext.setAuthentication(auth);
		SecurityContextHolder.setContext(securityContext);
		authenticated = true;

	} catch (AuthenticationException e) {
		// log it and rethrow
	}

	return authenticated;
}

Note that the userDetailService above is the out-of-the-box JdbcDaoImpl, but it could very well be a customized UserDetailsService, as downloadable from here. Also, the authenticationManager is the Spring-managed bean that is registered by Spring Security as shown above.

Once the Authentication object is available as a ThreadLocal variable, it can be accessed from anywhere in the application (even the Service or DAO layers) using this.

Authorization

Now that we have identified who is accessing the web application via the Authentication object, let’s look at details of what is accessible.

Let's define a few key entities, as shown in the picture below.

There is some confusion about what constitutes an Authority, as it is interchangeably called Role or Permission throughout the documentation, but essentially they are the same thing.
A Role/Authority/Permission is a granular definition of a business use case. For example, ACCESS_ACCOUNT, DELETE_INVOICE, TRANSFER_BALANCE etc. could all be treated as business use cases.
A Group can have multiple Authorities and typically represents what is commonly called, outside the Spring Security world, a 'role'. For example, a business department or function like ACCOUNTS, SALES or ADMIN could be a Group.
A Resource is anything that needs to be protected. In the context of Spring Security it could be a URL, a method on a Spring-managed bean (a service bean) or a domain object.

To add to the confusion, there is the concept of a RoleHierarchy, which allows you to nest Authorities. That makes it tempting to use a RoleHierarchy to model aggregations of Authorities by using only nested Authorities. But that is not recommended, because nesting Authority aggregations is typically an admin function. This nesting relationship of roles is typically not stored in a database (at least, there is no schema that Spring supports for persisting role hierarchies).

And finally, Spring Security allows you to map Users to Groups OR Users to Authorities, bypassing Groups completely! That opens up a host of ways to configure authorization. I'll leave it at this: given that there is no out-of-the-box way to persist role hierarchies, and given that mapping macro business functions to granular business use cases is usually done via an admin UI, it's best to leave role hierarchies for when they are absolutely necessary and stick to the scenario in the picture above.

In the picture above, we see two sections: the resources-authorities mapping and the users-groups-authorities mapping. The first is stored in XML configuration and the second in database tables. That split is desirable:

We decide what resource can be mapped to what authorities at development time, and therefore we set that up in code (security-context.xml, typically). So, for instance, we can secure URLs, saying that sales/*.jsp can be accessed only by those users that have the SALES_VIEW and SALES_CREATE authorities. Similarly, we can secure service methods by assigning, for instance, the DefaultServiceImpl.createNewSale() method to the SALES_CREATE authority.

On the other hand, the users-groups-authorities relationship is more fluid and can be assigned at run time, via admin screens/pages that manipulate this relationship in a database. For instance, via an admin screen we would like to assign the SALES_VIEW and SALES_CREATE authorities to the SALES group. Then, since user Bob recently moved into the Sales department, we can assign him to the Sales group (and re-assign Jill from the SALES group to the ACCOUNTS group).

In that manner, Bob ends up with access to sales/*.jsp and the createNewSale() service, and Jill, who has been removed from the SALES group, finds she cannot access them.

That has hopefully cleared up some of the confusion about these terms. Now let's see how to use them.
Spring Security provides classes and database schemas/table definitions to manage/store the group-user relationship and the group-authority relationship.

The authority-resource relationship, however, can be broken up as follows:

  • Securing URLs: achieved via the http and intercept-url elements in the security namespace.
  • Securing service methods: achieved by adding the intercept-methods element to the bean definition in a Spring context file.
  • Securing business domain objects: achieved by configuring Spring ACLs. This is a combination of database tables and Spring configuration. There is also a good article on this topic here.
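For securing service methods, a sketch of what that bean-level configuration might look like (the bean id, class and authority names are illustrative; the intercept-methods and protect elements come from the security namespace used throughout this post):

```xml
<beans:bean id="defaultService" class="com.acme.service.DefaultServiceImpl">
  <intercept-methods>
    <protect method="com.acme.service.DefaultServiceImpl.createNewSale"
             access="SALES_CREATE" />
  </intercept-methods>
</beans:bean>
```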

With that background let’s look at securing URLs and securing methods. We will not cover domain level security (Spring ACLs) in this post.

Securing URLs

The filter specified in the web.xml, springSecurityFilterChain, automatically loads a FilterChainProxy, where the rest of the filters needed for web application security are configured.

One of those filters is a FilterSecurityInterceptor which is responsible for authorization. Here is the chronology of what happens when a url has to be authorized:
1. The FilterSecurityInterceptor is a subclass of AbstractSecurityInterceptor.
2. The AbstractSecurityInterceptor defines the workflow (described here) to actually carry out the authorization, using an instance of the AccessDecisionManager interface (AffirmativeBased by default).
3. The AffirmativeBased AccessDecisionManager is configured by default with a series of DecisionVoters. An AffirmativeBased AccessDecisionManager means that if any one of its configured decision voters votes a 'yes', the rest of the voting process is aborted and the final vote is a 'yes'.
4. One of the DecisionVoters is RoleVoter, which is responsible for voting on ConfigAttributes that are the actual GrantedAuthority name strings. By default, the GrantedAuthorities are prefixed with 'ROLE_'.
5. Another DecisionVoter is AuthenticatedVoter, which is responsible for voting on strings like IS_AUTHENTICATED_FULLY, IS_AUTHENTICATED_REMEMBERED or IS_AUTHENTICATED_ANONYMOUSLY.
6. New in release 3.0 is the WebExpressionVoter, which is used when expressions are enabled.
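The voting mechanics above can be modeled in a few lines of plain Java. This is a simplified illustration of the affirmative-based voting idea only; the names and signatures below are not the actual Spring Security classes.

```java
import java.util.List;

public class AffirmativeVotingDemo {
    // Vote constants, mirroring the AccessDecisionVoter contract
    public static final int ACCESS_GRANTED = 1;
    public static final int ACCESS_ABSTAIN = 0;
    public static final int ACCESS_DENIED = -1;

    public interface Voter {
        int vote(List<String> userAuthorities, String requiredAttribute);
    }

    // A RoleVoter-like voter: only votes on ROLE_-prefixed attributes,
    // granting access if the user holds that authority
    public static class SimpleRoleVoter implements Voter {
        public int vote(List<String> userAuthorities, String requiredAttribute) {
            if (!requiredAttribute.startsWith("ROLE_")) {
                return ACCESS_ABSTAIN;
            }
            return userAuthorities.contains(requiredAttribute) ? ACCESS_GRANTED : ACCESS_DENIED;
        }
    }

    // Affirmative-based decision: the first 'yes' wins and voting stops;
    // if nobody grants, access is denied
    public static boolean decide(List<Voter> voters, List<String> userAuthorities, String attribute) {
        for (Voter v : voters) {
            if (v.vote(userAuthorities, attribute) == ACCESS_GRANTED) {
                return true;
            }
        }
        return false;
    }
}
```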

To do all the above, the following needs to be added to the Spring security context:

<http auto-config='true' use-expressions="true">
	<intercept-url pattern="/login.jsp" access="permitAll" />
	<intercept-url pattern="/secure/**" access="hasRole('ACCOUNT_DELETE')" />
	<intercept-url pattern="/**" access="isAuthenticated()" />

	<form-login login-page="/login.jsp"  authentication-failure-url="/login.jsp?login_error=1" login-processing-url="/loginURL"/>
	<logout invalidate-session="true" />
</http>

The above allows any URL to be accessed only by authenticated users. Any URL that begins with 'secure' additionally requires the ACCOUNT_DELETE authority to be granted to that user. login.jsp is accessible by all. Setting use-expressions tells the Spring Security classes to register the WebExpressionVoter (instead of the RoleVoter) as a decision voter on the AffirmativeBased AccessDecisionManager.

Securing Service Level Methods

For those familiar with Spring's AoP features, this functionality should come as no surprise. There are three ways you can secure methods: use annotations as described here; use pointcuts on certain methods of Spring managed beans, much like transaction semantics; or, lastly, configure method-level protection using the intercept-methods element in the bean definition itself. Here is an example of the last approach:

<beans:bean id="acmeService" class="com.acme.service.AcmeService">
<intercept-methods access-decision-manager-ref="customAccessDecisionManager">
  <protect access="SALES_CREATE" method="businessMethodToCreateASale"/>
</intercept-methods>
</beans:bean>
...
<beans:bean id="customAccessDecisionManager" class="org.springframework.security.access.vote.AffirmativeBased" >
<beans:property name="decisionVoters" >
	<beans:list>
		  <beans:bean class="org.springframework.security.access.vote.RoleVoter">
			<beans:property name="rolePrefix" value=""/>
		  </beans:bean>
	</beans:list>
</beans:property>
</beans:bean>

Besides the point that we have protected a certain business method, note that because we want to configure access without a prefix (ROLE_ by default), we have had to define a customAccessDecisionManager that in turn uses a RoleVoter configured for no prefix. (There is an attribute in the http element of the namespace, use-expressions="true", for taking care of expressions; but on the method side of the fence, the namespace configuration does not offer any such convenience. Hence we have to resort to a customAccessDecisionManager with a prefix-less RoleVoter.)
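For completeness, here is roughly what the annotation-based approach (the first of the three options mentioned above) looks like. This is a sketch: it assumes you enable annotation processing in the security context as below, and reuses createNewSale() from this post's running example.

```xml
<!-- In the Spring security context: enable processing of @Secured annotations -->
<global-method-security secured-annotations="enabled"/>
```

With that in place, annotating a method on a Spring-managed bean with @Secured("ROLE_SALES_CREATE") protects it much like the intercept-methods example does. Note that here the default ROLE_ prefix applies, unless you again wire in a prefix-less RoleVoter as shown above.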

Conclusion

At a very high level we’ve seen what Spring Security namespaces do for us. We have visited the main classes involved with Spring Security, talked a little bit about the confusion (that at least I faced) regarding GROUPS, AUTHORITIES and ROLES. We’ve gone through the steps that Authentication and Authorization entail. Lastly we looked at how to secure URLs and Service methods.

As with anything Spring, Spring Security is a highly configurable framework and the more we know the internals of the various classes, the more we can customize for our circumstances.

Using GWT in an enterprise application

May 19th, 2010 10 comments

I started playing around with Google Web Toolkit (GWT) for building something beyond the obligatory "Hello World" and I found many challenges along the way. GWT docs talk a lot about front-end design and layout, but when it comes to integrating with an enterprise Java application, I did not find much by way of pointers or guides. So I took a look under the covers, and here's the gist of that experiment.

When considering enterprise Java development, the frameworks/technologies that spring to mind are Spring ( :) ) and Maven. So I decided to use both in my "Hello Enterprise" application. In the end, we'll have an application with the following features:

  • A front-end that is written in Java but runs in Javascript.
  • Makes asynchronous calls to the server and updates the front end.
  • Modular, with a clear separation between the presentation and service layers
  • A multi-module project built with Maven.
  • Can access existing (and new) Spring services.

Before we dive deep, let’s talk a bit about GWT at a high level:

  1. GWT is a Java library that produces Javascript when compiled. The Javascript is bundled into the webapp and invoked via an HTML page.
  2. The Javascript does the heavy lifting, and is typically used for:
    1. Laying out the page, although it can also be used to replace DOM objects (using id tags) in a "shell" HTML page.
    2. Making async calls to the server.
  3. GWT allows users to interact with back-end services in two modes, both asynchronous (it doesn't have to be async, but in today's world of RIA and AJAX, who cares about synchronous UIs!):
    1. Using RPC (Remote Procedure Calls) to access server-side servlets, necessarily served from the same server that is serving the HTML (where the GWT Javascript is running).
    2. Using GWT classes in the HTTP package that access the server.

So to really get to the bottom of what is GWT and what is not, it will help to build the application in the following stages:

  1. Set up the Eclipse environment
  2. GWT app that only draws out some HTML objects but uses Maven
  3. GWT app that makes a call to a GWT Servlet using RPC
  4. GWT app that makes a call to a plain Servlet using the HTTP package
  5. GWT app that makes a call to a Spring service using RPC
  6. GWT app that makes a call to a Spring service using the HTTP package

With each of these steps (steps 2 through 6, actually), there's a downloadable source and binary file that you can follow along with.

Set up the Eclipse environment

Eclipse is the IDE I chose to use for this project just as a convenience. If you prefer to use UltraEdit or Textpad or NotePad, that’s fine, just skip this step.

If you have decided to use Eclipse (and you are not the purist/bare-bones emacs kind of guy ;) ), you may as well get two Eclipse plugins to make your life easier:

  • The m2Eclipse plugin -> Among other things, this will let you record your dependencies in only one place: the pom, instead of in both your project and Eclipse's classpath.
  • The google-plugin-for-eclipse -> Although this has many uses, including running a light-weight server, the only thing I will use it for is running my GWT code from within Eclipse.

Once you have these plugins, your .project file should look like the following:

<natures>
<nature>org.maven.ide.eclipse.maven2Nature</nature>
<nature>org.eclipse.jdt.core.javanature</nature>
<nature>com.google.gwt.eclipse.core.gwtNature</nature>
</natures>

With the m2eclipse plugin you will see Maven dependencies pulled from the pom.xml instead of the Eclipse classpath like so:

The Google Plugin allows you to specify a Run Configuration for running the GWT application in Development Mode (fka Hosted Mode). The advantage of doing this is that you can debug your presentation layer in Java, and while Firebug is all great and dandy, I find it quite cumbersome compared to debugging in Java.

To configure this Run configuration, you must keep the following in mind:

  • Make sure that the main class is com.google.gwt.dev.DevMode
  • Make sure that the following arguments are specified in the arguments tab:
  • -war C:\<<pathToYourProject>>\target\<<yourProjectArtifactId>>-1.0.0 -remoteUI "${gwt_remote_ui_server_port}:${unique_id}" -logLevel INFO -port 8888 -startupUrl <<theHtmlFileThatMakesTheCallToNocache.js>>.html com.acme.web.gwt.Hello

GWT app that only draws out some HTML objects but uses Maven

You can download the application source here and war file here.

As you can see, there's not much going on here. But it serves as a good starting point to flesh out the pom and project structure. For some reason, the GWT folks place the "war" directory directly under root. The structure proposed by GWT is specified here. This flies in the face of the structure that Maven proposes, especially when building several modules in a multi-module project.

Let’s first check out the project structure of this project in Figure A.


Figure A

The GWT module is defined in Hello.gwt.xml. The GWT classes are all in Hello.java. Besides that, there’s just hello.html, the file where anchorPoints are defined and are referenced in Hello.java. web.xml has no significance besides defining a welcome file.

Note that the name of the javascript file (that is invoked in hello.html) below:

<script type="text/javascript" language="javascript" src="GWTMvnNoRPC/GWTMvnNoRPC.nocache.js"></script>

is based on the name specified in the GWT module definition for rename-to:

<?xml version="1.0" encoding="UTF-8"?>
<module rename-to='GWTMvnNoRPC'>
	<!-- Inherit the core Web Toolkit stuff. -->
	<inherits name='com.google.gwt.user.User'/>
	<inherits name='com.google.gwt.user.theme.standard.Standard'/>
	<entry-point class='com.acme.firm.client.Hello'/>
	<source path='client'/>
</module>

And it corresponds to the directory structure that is produced by the build (actually by the gwt-maven-plugin: compile goal):


Figure B

The pom has packaging of type war with just the dependencies defined. Here is where the gwt-maven-plugin is declared. Since there are no service interfaces to produce an Async version of, the plugin is configured as below:

<plugin>
	<groupId>org.codehaus.mojo</groupId>
	<artifactId>gwt-maven-plugin</artifactId>
	<version>1.2</version>
	<executions>
		<execution>
			<goals>
				<goal>compile</goal>
				<!-- not needed because there is nothing to generate as there is no Service for RPC
				<goal>generateAsync</goal>
				-->
			</goals>
		</execution>
	</executions>
</plugin>

And that’s about it. Run it by deploying the war file to your favorite app server  and you will see the buttons do nothing and the text goes nowhere!
Let’s make it do something more meaningful by adding some RPC calls.

GWT app that makes a call to a GWT Servlet using RPC

Download the source from here and the war from here.

The project structure is shown below. No surprises here, except that we have added a service interface (HelloService.java) and an implementation (HelloServiceImpl.java).


Figure C

Here's where this graphic comes in handy. (Look at the section that says "RPC Plumbing Diagram".) Accordingly, HelloService extends RemoteService and HelloServiceImpl extends RemoteServiceServlet.

Also, the GWT module includes in its <source /> element the directory that contains both Hello.java and HelloService.java. That is what tells the generateAsync goal of the gwt-maven-plugin that it needs to produce a HelloServiceAsync.java interface. An instance of that interface is invoked by the Hello GWT class.


Figure D

If you look at HelloServiceAsync.java, you will see that it uses the annotation set on the corresponding interface (HelloService.java) to specify the URL that is sent to the server.

HelloService.java

@RemoteServiceRelativePath("xyz")
public interface HelloService extends RemoteService {
	public double calculateTax(double income, int year) throws IllegalArgumentException;
}

produces

HelloServiceAsync.java
public static final HelloServiceAsync getInstance()
{
	if ( instance == null )
	{
		instance = (HelloServiceAsync) GWT.create( HelloService.class );
		ServiceDefTarget target = (ServiceDefTarget) instance;
		target.setServiceEntryPoint( GWT.getModuleBaseURL() + "xyz" );
	}
	return instance;
}

In this manner you can have many services that can be called by your GWT module.
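Calling such a service from the entry point then follows the standard GWT AsyncCallback pattern. Here is a sketch (the income/year values and the taxLabel widget are hypothetical, made up for illustration):

```java
// Somewhere in Hello.java (the GWT entry point)
HelloServiceAsync service = HelloServiceAsync.getInstance();
service.calculateTax(50000.0, 2009, new AsyncCallback<Double>() {
	public void onSuccess(Double tax) {
		// Update the UI with the server's result;
		// taxLabel is a hypothetical GWT Label widget
		taxLabel.setText("Tax: " + tax);
	}
	public void onFailure(Throwable caught) {
		Window.alert("RPC call failed: " + caught.getMessage());
	}
});
```

Note the shape of the async interface: the return value of calculateTax() moves into the callback, and the async method itself returns void.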

Note also that the gwt-maven-plugin is configured to drop the async class in the generated-sources directory (the generateAsync goal is bound to the generate-sources phase by default).

Although HelloServiceImpl extends RemoteServiceServlet, it is deployed as a regular servlet in the web.xml. There is nothing GWT-specific in the way it is deployed. However, the fact that it extends RemoteServiceServlet is (IMO) the downside of this approach. Either I have to write wrapper servlets that extend RemoteServiceServlet and call my business service layer, or I make all my business service objects extend RemoteServiceServlet. Neither option is great, which is why you should continue reading to see the other ways we can skin this cat!

GWT app that makes a call to a plain Servlet using the HTTP package

Download the source from here and the war from here.

Let’s begin by looking at the project structure:


Figure E

GWT provides another way to access server-side components, using classes in its HTTP package. The class RequestBuilder is where it all starts; it has methods for making asynchronous calls. The URL handed to this class can be any arbitrary string that is mapped (in web.xml) to the servlet (in this case SimpleServlet). An HttpRequest carries (via a GET or POST) name/value pairs from the client UI to the servlet, where they are peeled off the request and passed on to a business service. The results from the business service are placed in the response object, which is accessed via an asynchronous callback defined in the HTTP API (RequestCallback).
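The flow just described looks roughly like this in code. This is a sketch: the "simpleServlet" mapping, the parameter names, and the resultLabel widget are hypothetical.

```java
// Client side: build and send an async request to the mapped servlet
RequestBuilder builder = new RequestBuilder(RequestBuilder.POST,
		URL.encode(GWT.getModuleBaseURL() + "simpleServlet"));
builder.setHeader("Content-Type", "application/x-www-form-urlencoded");
try {
	builder.sendRequest("income=50000&year=2009", new RequestCallback() {
		public void onResponseReceived(Request request, Response response) {
			// The servlet wrote the result into the response body
			resultLabel.setText(response.getText());
		}
		public void onError(Request request, Throwable exception) {
			Window.alert("Call failed: " + exception.getMessage());
		}
	});
} catch (RequestException e) {
	Window.alert("Could not send request: " + e.getMessage());
}
```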

The difference between this and the RPC call is that here, SimpleServlet does not extend RemoteServiceServlet. Also, note that there is no business interface. That, IMO, is a bad thing: not having a clear contract for what the client can call slows down development and complicates testing. OTOH, our servlet is just any servlet, which is a good thing! Just goes to show, you can't have your cake and eat it too!

Also, since there is no business interface, there is nothing to produce an Async interface for, so there is one less step in the build process. There is still Javascript produced, accessed by the shell HTML file as usual.

Moving on to the next level in our integration to back-end service, we’ll look at Spring-GWT integration.

GWT app that makes a call to a Spring service using RPC

Download the source from here and the war from here.

Let's first look at the project structure:

Figure F

I have tried to separate the presentation and service layers so that the service layer has no dependency on the presentation layer (only the presentation layer depends upon the service layer). This is a Maven multi-module project. The presentation module (called GWTMvnRPCWithSpring-Presentation) has all the GWT dependencies in its pom and is built using Spring MVC. So the dispatcher-servlet.xml file has two URL mappings: a SimpleUrlHandlerMapping and a GWTHandler. Just for reference, I threw in a RegularController that is pointed to by the SimpleUrlHandlerMapping. This (RegularController) controller is nothing special, but the GWTHandler is where all the magic happens. This class is part of a GWT Widget Library.

Right from GWTHandler's javadoc:

The GWTHandler implements a Spring  HandlerMapping which maps RPC from
URLs to RemoteService implementations. It does so by wrapping service
beans with a GWTRPCServiceExporter dynamically proxying all
RemoteService interfaces implemented by the service and delegating
RPC to these interfaces to the service.

Therefore, when I define a GwtBusinessService1, you will see that the interface extends RemoteService and defines one business method: calculateTax(…). The call to this business interface is dynamically proxied onto the Spring service (businessService1) that is injected into the GWTHandler:

<bean id="urlMapping1" class="org.gwtwidgets.server.spring.GWTHandler">
	<property name="mappings">
		<map>
			<entry key="/hello/service1.htm" value-ref="businessService1" />
			<entry key="/hello/service2.htm" value-ref="politicalService" />
		</map>
	</property>
	<property name="order" value="2" />
</bean>

The dispatcher-servlet.xml contains only the Spring beans needed for Spring MVC, while spring-services-core.xml is the application context defined in the services module (GWTMvnRPCWithSpring-services) and contains only services that are presentation-agnostic. Both contexts are made available in web.xml.

Other than that there is nothing else that is noteworthy. Let’s move on to accessing Spring services using the HTTP package.

GWT app that makes a call to a Spring service using the HTTP package

Download the source from here and the war from here.

Observe the project structure in Figure G:

Figure G

Comparing it to Figure F, the first thing to notice is the absence of the business interfaces. In their place is a SimpleServlet defined in the presentation package. The SimpleServlet is a vanilla implementation (in that it does not extend any specific interface or class except HttpServlet). Here is what this servlet does:

  • Accept requests (typically async) from the GWT module.
  • Peel off name/value pairs from the request object.
  • Look up Spring services using WebApplicationContextUtils.
  • Make the service call, passing in the values.
  • Process the return value(s) from the service.
  • Place the value(s) in the response.
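The steps above can be sketched as follows. This is an illustrative sketch only: the BusinessService type and parameter names are hypothetical stand-ins, reusing the businessService1 bean and calculateTax() example from earlier in this post.

```java
public class SimpleServlet extends HttpServlet {
	protected void doPost(HttpServletRequest req, HttpServletResponse resp)
			throws IOException {
		// 1. Peel the name/value pairs off the request
		double income = Double.parseDouble(req.getParameter("income"));
		int year = Integer.parseInt(req.getParameter("year"));

		// 2. Look up the Spring service from the root web application context
		//    (BusinessService and "businessService1" are hypothetical names)
		WebApplicationContext ctx = WebApplicationContextUtils
				.getRequiredWebApplicationContext(getServletContext());
		BusinessService service = (BusinessService) ctx.getBean("businessService1");

		// 3. Invoke the service and place the result in the response
		double tax = service.calculateTax(income, year);
		resp.setContentType("text/plain");
		resp.getWriter().print(tax);
	}
}
```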

Also, there is no Spring MVC here (no dispatcher-servlet.xml), just plain Spring (only spring-services-core.xml). The GWTHandler cannot be used here, because for the GWTHandler to work we would need interfaces that extend RemoteService, which we do not have. And we do not have those interfaces because HTTP API calls can invoke anything on the server via HttpRequest/Response, unlike RPC, where calls are made on an Async interface produced by a GWT-supplied utility.

That’s it folks! I found this exercise rewarding in learning GWT technologies to build a real-world enterprise app that can scale. My next step is to use smartGWT and then the holy grail of AJAX frameworks: GWT-GoogleMaps! Maybe I’ll blog about it sometime… stay tuned!

J2EE Lite with Spring Framework – Performance

February 10th, 2006 No comments

When I submitted this post to JDJ for publication, the editor was a bit chary about publishing the performance analysis that logically followed the article. So I thought this may be a good place to continue that discussion.

First… here is the article that is available on the JDJ website:
And here is a downloadable webapp that is a standalone Spring application. (Click on the link titled "Additional Code II" for the complete downloadable artifact.)

Just as a tickler tho’… the webapp (made2order) shows the following services built using Spring IoC and AOP:

The webapp is (almost) completely self-contained, in that it will work when deployed to a webapp server (like Tomcat); the only external system properties that need to be supplied are the database driver, URL, user and password.
The source code for this application is also available in the zip.

Now that we have separated infrastructure code from business code, we can look at the price paid, if any, for interjecting the same services using EJBs.
So I created a Stateless Session EJB (SLSB) that provided transactional semantics via declarative transaction management and CMT.
Similarly, I created a transactional Spring proxy. I facaded the same business method (which UPDATEd a database table) with both the Spring proxy and the SLSB.
Then I accessed that SLSB via a JNDI lookup and looped over the remote instance 300 times.
I compared that with the Spring proxy, similarly iterated 300 times.
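The shape of that micro-benchmark can be sketched in plain Java, using a java.lang.reflect.Proxy as a stand-in for the Spring transactional proxy. This is an illustration only, not the original benchmark: the OrderService interface is hypothetical, the update() body is a stand-in for the database UPDATE, and it says nothing about EJB performance.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

public class ProxyOverheadDemo {
    public interface OrderService {
        int update(int id);
    }

    public static class OrderServiceImpl implements OrderService {
        public int update(int id) {
            return id + 1; // stand-in for the database UPDATE
        }
    }

    // Wrap the target behind a dynamic proxy, mimicking an interceptor;
    // a real transactional interceptor would begin/commit around the call
    public static OrderService proxied(final OrderService target) {
        InvocationHandler handler = new InvocationHandler() {
            public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
                return method.invoke(target, args);
            }
        };
        return (OrderService) Proxy.newProxyInstance(
                OrderService.class.getClassLoader(),
                new Class<?>[] { OrderService.class }, handler);
    }

    // Time N calls through a given service reference
    public static long timeNanos(OrderService service, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            service.update(i);
        }
        return System.nanoTime() - start;
    }
}
```

Timing timeNanos() over 300 iterations for both the direct target and the proxied reference gives the same kind of direct-vs-proxy comparison described here.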


For 300 iterations of the UPDATE, here are the results:

The results of the test show that the proxy out-performs the EJB every time, although not by much. Note that the proxy has more interceptors than just transactions (5 others), whereas the EJB is only providing us transaction services (in this use case). Also note that when the SLSB was accessed via a Spring bean factory and the same test was run, there was no significant difference in the results.

Performance of hard-coded logging statements vs. logging via an advised proxy:
The following test was executed to see the slowdown of a proxied logging advice versus hard-coded log statements.

It was seen that for 300 iterations, the advised methods took 6 to 9 milliseconds longer to produce the same output. While the proxy is certainly slower, we have to take into consideration the benefits of a declarative approach compared to relying on developer discipline to code the log statements.

Those are just the facts (ma'am :) ). The rest is interpretation… I'll leave that to more experienced folks to comment on.
