
Archive for September, 2010

Using JMX for Managing Application Properties in Production

September 29th, 2010

JEE web applications that use the Spring container typically use property files to configure the application for different customers, resources or boundary conditions. The property file is either read by a utility class or used by Spring’s popular PropertyPlaceholderConfigurer class to substitute properties in the Spring application context at runtime.
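For reference, the PropertyPlaceholderConfigurer wiring mentioned above typically looks something like this (the properties file name, bean ids and property names are illustrative):

```xml
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="classpath:acme.properties"/>
</bean>

<!-- ${...} tokens are substituted from acme.properties when the context starts -->
<bean id="propertyManager" class="com.acme.AcmePropertyManager">
    <property name="environment" value="${acme.environment}"/>
</bean>
```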

When an application is deployed, the properties are (typically) loaded into the JVM in some sort of cache at startup and then used throughout the life of the application. Most applications provide a UI to manipulate the properties in production, but more often than not, UI development cannot keep up with the rate at which properties are added to the application. The application is deployed and before long it becomes necessary to change a property value in production… but… oops… no UI! The only option left is to change the property, rebuild and redeploy, which, of course, means an outage for the users.

That’s when this relatively easy method of manipulating application properties via JMX at runtime may be useful. Using Spring JMX, we will expose properties to a controlled set of users so that they can be read and written to.

Broadly, the steps are:

  1. Implement the DynamicMBean interface to expose all the properties in an application as attributes.
  2. Start the MBean server on Tomcat.
  3. Expose that implementation using Spring JMX and deploy the application.
  4. Use JConsole to access the MBean Server implementation and manipulate the values of the properties.

Let’s look at each in turn:

Implement DynamicMBean

Spring JMX has made it extremely easy to expose a bean via JMX. Through Spring’s proxy mechanism, we can expose any Spring managed bean as a Managed Bean. The bean does not even have to implement the MBean interface. However, if we are using Spring’s interface proxies (as opposed to class-based proxies), it may be a good idea to have the managed bean implement some interface.

Additionally, if it is necessary to transport complex data back to the JMX client, then using an MXBean may be a better approach. (See here for a comparison of an MBean to an MXBean.)
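As a quick illustration of that difference, a minimal MXBean is just an interface whose name ends in MXBean; the platform MBean server converts complex return types such as Map into open types (TabularData) that any JMX client can display without needing our classes on its classpath. All names below are illustrative:

```java
import java.lang.management.ManagementFactory;
import java.util.HashMap;
import java.util.Map;
import javax.management.MBeanServer;
import javax.management.ObjectName;

// An interface whose name ends in "MXBean" marks the implementor as an MXBean.
interface PropertySnapshotMXBean {
    Map<String, String> getProperties();
}

class PropertySnapshot implements PropertySnapshotMXBean {
    public Map<String, String> getProperties() {
        Map<String, String> m = new HashMap<String, String>();
        m.put("acme.db.url", "jdbc:h2:mem:acme");
        return m;
    }
}

public class MXBeanDemo {
    public static void main(String[] args) throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("bean:name=propertySnapshot");
        // The server maps Map<String,String> to TabularData automatically.
        server.registerMBean(new PropertySnapshot(), name);
        Object attr = server.getAttribute(name, "Properties");
        System.out.println(attr instanceof javax.management.openmbean.TabularData);
    }
}
```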

For what we are trying to do here, we need to expose an arbitrary number of properties in the JMX client. The number (and names) of the properties are not known at compile time; for this, the JMX specification provides the DynamicMBean interface, where the interface (to be exposed) is defined at runtime.

Let’s begin by defining a business interface like so:

public interface ManageableProperties {
	public Properties getProperties();
	public Collection<String> getHiddenPropertiesList();
	public Collection<String> getReadOnlyPropertiesList();
}

The ‘business’ aspect of this interface tells the JMX client what properties need to be exposed, which are hidden and which are read-only.

Then define the class that is going to be managed via JMX and make that class implement the ManageableProperties interface.

public class JMXPropertyManager implements ManageableProperties {
	@Override
	public Properties getProperties(){
		return (Properties)AcmePropertyManager.getProperties();
	}
	@Override
	public Collection<String> getHiddenPropertiesList() {
		Collection<String> c = new ArrayList<String>();
		c.add("acme.db.password");
		return c;
	}
	@Override
	public Collection<String> getReadOnlyPropertiesList() {
		Collection<String> c = new ArrayList<String>();
		c.add("acme.allowConcurrentUsers");
		return c;
	}
}

Here the implementation of the interface is specifying what properties should be hidden (after all, you may not want to expose all properties in your application for administration), and what properties should be read-only.

Finally, let’s introduce the DynamicMBean interface to the same class (JMXPropertyManager). Doing that adds the following code:

private Properties properties;

public JMXPropertyManager(){
	properties = this.getProperties();
}
private boolean isHidden(String key) {
	for (String string : this.getHiddenPropertiesList()) {
		if (key.equalsIgnoreCase(string)) {
			return true;
		}
	}
	return false;
}
private boolean isReadOnly(String key) {
	for (String string : this.getReadOnlyPropertiesList()) {
		if (key.equalsIgnoreCase(string)) {
			return true;
		}
	}
	return false;
}
@Override
public MBeanInfo getMBeanInfo() {
    MBeanAttributeInfo[] attributes = new MBeanAttributeInfo[properties.size()];
    MBeanInfo mBeanInfo = null;
    MBeanOperationInfo[] operations = new MBeanOperationInfo[1];
    int i = 0;
    for (Object keyObject : properties.keySet()) {
	String key = (String) keyObject;
        if (this.isReadOnly(key) || isHidden(key)){
		attributes[i] = new MBeanAttributeInfo
                     (key, "java.lang.String", key, true, false, false);
        } else {
		attributes[i] = new MBeanAttributeInfo
                     (key, "java.lang.String", key, true, true, false);
	}
        i++;
     }
     operations[0] = new MBeanOperationInfo("refreshCache", "Refresh Caches", null , null, MBeanOperationInfo.ACTION);
     mBeanInfo = new MBeanInfo(this.getClass().getName(),
            "Manage Properties", attributes, null,  operations, null);
     return mBeanInfo;
}
@Override
public Object getAttribute(String attribute) throws AttributeNotFoundException,
               MBeanException, ReflectionException {
        Object o = null;
        if (isHidden(attribute)) {
            o = "XXX-HIDDEN-VALUE-XXX";
        } else {
            o =  properties.get(attribute);
        }
	return o;
}
@Override
public void setAttribute(Attribute attribute) throws AttributeNotFoundException,
           InvalidAttributeValueException, MBeanException, ReflectionException {
    String key = attribute.getName();
    if (isReadOnly(key) || isHidden(key)) {
        // Defensive check: the MBeanInfo already marks these attributes as
        // non-writable, but a direct setAttribute call would otherwise bypass that.
        throw new AttributeNotFoundException(key + " is not writable");
    }
    String value = (String) attribute.getValue();
    properties.put(key, value);
}
@Override
public Object invoke(String actionName, Object[] params, String[] signature) throws MBeanException, ReflectionException {
     Object ret = null;
     if (actionName == null){
       throw new RuntimeOperationsException( new IllegalArgumentException( "Operation name cannot be null"), "Cannot invoke a null operation");
     }
     if (actionName.equals("refreshCache")){
         AcmePropertyManager.refreshCache();
     }
    //Returning null because we would like to avoid passing back a complex object
    //to the JMX client, because we have not implemented an MXBean
    return ret;
}

//---other unimplemented methods of the DynamicMBean interface------

Here is where most of the action happens: the getMBeanInfo method creates an MBeanInfo object that describes all the attributes (hidden, non-hidden, read-only and writable). In addition, an operation called refreshCache is also defined.
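For completeness, the bulk methods that round out the DynamicMBean interface can simply delegate to the single-attribute logic. Here is a self-contained sketch (the class name is illustrative; in the article's JMXPropertyManager these loops would call the getAttribute/setAttribute methods shown above):

```java
import java.util.Properties;
import javax.management.Attribute;
import javax.management.AttributeList;

// Standalone sketch of the bulk DynamicMBean methods, with the
// single-attribute methods inlined over a plain Properties object for brevity.
public class BulkAttributeSketch {
    private final Properties properties = new Properties();

    public Object getAttribute(String name) {
        return properties.get(name);
    }

    public void setAttribute(Attribute attribute) {
        properties.put(attribute.getName(), (String) attribute.getValue());
    }

    // DynamicMBean.getAttributes: collect each resolvable attribute into an AttributeList.
    public AttributeList getAttributes(String[] names) {
        AttributeList list = new AttributeList();
        for (String name : names) {
            Object value = getAttribute(name);
            if (value != null) {
                list.add(new Attribute(name, value));
            }
        }
        return list;
    }

    // DynamicMBean.setAttributes: apply each attribute, returning those actually set.
    public AttributeList setAttributes(AttributeList attributes) {
        AttributeList applied = new AttributeList();
        for (Object o : attributes) {
            Attribute attribute = (Attribute) o;
            setAttribute(attribute);
            applied.add(attribute);
        }
        return applied;
    }
}
```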

Start the MBean server on Tomcat

Starting Tomcat with JMX enabled is explained here.

Deploying to a MBean Server

Thank God for Spring JMX! Deploying to an existing MBean server was never easier. Just add the following to your Spring context:

    <!-- this bean must not be lazily initialized if the exporting is to happen -->
    <bean id="exporter" class="org.springframework.jmx.export.MBeanExporter" lazy-init="false">
      <property name="beans">
        <map>
          <entry key="bean:name=acmeProperties" value-ref="jmxPropertyManager"/>
        </map>
      </property>
    </bean>

    <bean id="jmxPropertyManager" class="com.acme.JMXPropertyManager"/>

Assuming that you are deploying your application to an application server that has an inbuilt MBeanServer (exactly one), such as Tomcat, that’s all there is to it! The JMXPropertyManager is now available as an MBean, as seen in the picture below.

Use JConsole to access the MBean Server

Accessing a Tomcat JMX Server is explained here.

Clicking on the bean | acmeProperties node shows us:

We see that the properties that are masked and read-only in our ‘business’ interface, ManageableProperties, correctly behave as specified.

That’s it. Have fun managing application properties in production!

A process to ensure adequate test code coverage

September 21st, 2010

Writing tests is a pain, especially if you have to write them for existing code. Recently I was tasked with making an existing code base regression-proof. The best way I could think of to approach this (rather Utopian) task was to establish a long-running process that would be incremental and measurable.

The first decision was to determine at what level tests should be written. Given that this was a web application written in Java, I could use HttpUnit and Selenium to test the web layer, or write true unit tests down at the DAO or domain-object level. However, for an existing code base, I found that testing from the service layer (in this case Spring managed services) on down gave the best ROI. I chose the Spring Testing Framework and JUnit4 as my testing framework.

The next thing was to determine a way to measure how much of my code was being covered by the tests. There are several tools available to do this, like Clover and Cobertura. I chose the latter only because it is well supported via a Maven plugin and it’s free!

Lastly, I needed a process that could be followed over the next several months by several developers and with metrics to measure progress. To enable this I used the Eclipse IDE, Maven and the plugins that work in the maven eco-system.

There is a lot of documentation (and a reference implementation) of the Spring Testing Framework that will explain how to write tests, so I will not cover that here. Instead, I will focus on how to measure code coverage and how to establish a process to add tests and measure overall progress.

Code Coverage

Cobertura can be used in several ways. I chose to use it via the cobertura-maven-plugin. The steps are:

  1. Instrument the code
  2. Write the tests
  3. Run the tests
  4. Measure Coverage
  5. Loop back to 2 till satisfied with the coverage.
  1. Instrument the code – This instruments your compiled Java classes using the ASM bytecode library, adding hooks that record when a line or branch of code is visited. When used via the cobertura-maven-plugin, the goal to run is, not surprisingly, the instrument goal. This places the newly instrumented classes in the ${basedir}/target/generated-classes/cobertura/ directory. In addition, a file called cobertura.ser, a serialized data file, is placed in the ${basedir}/target/cobertura directory. This file is written to after tests are run, and it is subsequently used by the reporting component of Cobertura to determine what is covered. To actually instrument the code, you can configure the cobertura-maven-plugin like so:
    <plugin>
    <groupId>org.codehaus.mojo</groupId>
        <artifactId>cobertura-maven-plugin</artifactId>
        <version>2.4</version>
        <configuration>
             <instrumentation>
                <includes>
                    <include>com/acme/myapp/sales/**/*.class</include>
                    <include>com/acme/myapp/accounting/**/*.class</include>
                </includes>
                <excludes>
                    <exclude>**/*Test.class</exclude>
                </excludes>
            </instrumentation>
        </configuration>
        <executions>
            <execution>
                <phase>package</phase>
                <goals>
                    <goal>clean</goal>
                    <goal>instrument</goal>
                </goals>
            </execution>
        </executions>
    </plugin>

    The clean goal of the plugin merely deletes the cobertura.ser file. Since this is tied to the package phase of the default build lifecycle, running mvn package will cause the code to be instrumented and placed in ${basedir}/target/generated-classes/cobertura.
    Note that tests have been excluded from instrumentation. This is important: as you modify your tests and re-run them to increase code coverage, if the tests were included in the instrumented code, the instrumented copies would be placed first in your classpath (in Eclipse or the maven-surefire-plugin, as explained later) and your modified test class would never be executed.

    To run the plugin, simply issue:

    > mvn clean package -Dmaven.test.skip=true

    Also note that it is not necessary to run any tests during instrumentation. Therefore they are skipped by using the -D parameter.

  2. Write the tests – The Spring Testing Framework gives us good pointers to writing tests. So let’s just skip to the next step.
  3. Run your tests against the instrumented code – Tests can be run either from within your IDE (like Eclipse) or via the maven-surefire-plugin, which is configured (by default) to pick up all tests in the ${basedir}/src/test/java directory.
    In both cases it is important that the tests are run using the instrumented classes. Since we are defining a process wherein developers can rapidly go through the above 5-step cycle, running the full surefire suite is not optimal (it takes too long). Instead we will run selected tests using Eclipse. To set up your run configuration in Eclipse, do the following:
    Go to Run Configurations | New JUnit Configuration | Classpath tab | User Entries node | Advanced button | Add Folders radio button, then navigate down to ${basedir}/target/generated-classes/cobertura and save this configuration. Make sure that this directory is the first of the user entries.
    Now run your test by clicking the run button. When you do so, Cobertura will keep track of what lines of code and branches were exercised by the test.
    Just to complete this discussion, let us see what happens when code coverage is to be determined as part of running all the tests. For this, the cobertura-maven-plugin uses the check goal to run tests and then update the cobertura.ser file. The cobertura-maven-plugin forks a custom lifecycle (called cobertura) that, in its test phase, replaces the classesDirectory parameter with the value ${project.build.directory}/generated-classes/cobertura. Since the maven-surefire-plugin is configured to run in the test phase, it is subsequently invoked with the new value of classesDirectory, thereby using the instrumented code. To see the configuration of the custom lifecycle, check out lifecycle.xml in the META-INF/maven directory of the plugin.
  4. Measure code coverage – Assuming that the tests ran successfully, the cobertura.ser file will be suitably modified by the instrumented code. At this point you can run mvn site, and if the cobertura plugin is configured in the reporting section of your pom like so:
     <reporting>
        <plugins>
            ...
            <plugin>
                    <groupId>org.codehaus.mojo</groupId>
                    <artifactId>cobertura-maven-plugin</artifactId>
                    <configuration>
                        <formats>
                            <format>html</format>
                        </formats>
                    </configuration>
             </plugin>
             ...
         </plugins>
     </reporting>

    …you should get, eventually, after your entire site has been generated, a set of Cobertura reports.

    However, this may be a slow process. That is where the maven-ndcobertura-plugin comes in handy. This plugin can be used to give faster results because it can be invoked outside of the site lifecycle. The plugin can be downloaded from here and its documentation is here.

    When the showCoverage goal of this plugin is run, a LineCoverageRate for the passed in class is shown along with a TotalCoverageRate for the entire codebase.

    >C:\projects\AcmeWebapp\acme-core> mvn ndcobertura:showCoverage -DclassToTest=AccountService
    [INFO] Scanning for projects...
    [INFO] Searching repository for plugin with prefix: 'ndcobertura'.
    [INFO] ------------------------------------------------------------------------
    [INFO] Building Acme Core
    [INFO]    task-segment: [ndcobertura:showCoverage]
    [INFO] ------------------------------------------------------------------------
    [INFO] [ndcobertura:showCoverage {execution: default-cli}]
    Cobertura: Loaded information on 1695 classes.
    [INFO]
    [INFO] Class: com/acme/accounting/AccountService.java: LineCoverageRate: 0.47 (35 out of 75 lines) and BranchCoverageRate: 0.17 (8 out of 48 branches).
    [INFO]
    [INFO]  Please see line-by-line coverage for these classes by running mvn ndcobertura:generateReports.
    [INFO]  Covered Lines: 1616, Total Lines: 117182, Total Coverage Rate: 0.013790513901452441
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESSFUL
    [INFO] ------------------------------------------------------------------------

    Similarly, once the developer has an idea of the code coverage (in terms of number of lines), s/he can run the other goal of this plugin to view cobertura reports showing line-by-line coverage statistics. The command to generate the reports outside of the site lifecycle is:

    >C:\projects\AcmeWebapp\acme-core> mvn ndcobertura:generateReports
    [INFO] Scanning for projects...
    [INFO] Searching repository for plugin with prefix: 'ndcobertura'.
    [INFO] ------------------------------------------------------------------------
    [INFO] Building Acme Core
    [INFO]    task-segment: [ndcobertura:generateReports]
    [INFO] ------------------------------------------------------------------------
    [INFO] [ndcobertura:generateReports {execution: default-cli}]
    [INFO]
    [INFO] Starting report generation in C:\projects\AcmeWebapp\acme-core\target\acme-core-cobertura-reports...
    [INFO] Cobertura 1.9.4.1 - GNU GPL License (NO WARRANTY) - See COPYRIGHT file
    Cobertura: Loaded information on 1680 classes.
    Report time: 22313ms
    
    [INFO] ...Done. Please see reports by clicking on index.html in C:\projects\AcmeWebapp\acme-core\target\acme-core-cobertura-reports
    [INFO] ------------------------------------------------------------------------
    [INFO] BUILD SUCCESSFUL
    [INFO] ------------------------------------------------------------------------

    Here is a sample report:

    Now that developers can rapidly see the code coverage of their tests, they can easily loop back to step 2 until they are satisfied with the code coverage. Since the Cobertura report and its statistics are generated from the numbers recorded in the serialized cobertura.ser file, note that once a line of code is visited, it will show up on the report from then on. In other words, the recording in the cobertura.ser file is cumulative until the clean and instrument goals of the cobertura-maven-plugin are run again.

    What about measuring progress?

    The above steps are good for a development environment where developers can check their own work. But it is important to be able to measure progress for a given code base. For that we have to fall back on the process of QAS and Production deployment and the process of continuous integration. The cobertura-maven-plugin can be configured to run the check goal with every nightly run and thereby produce statistics of code coverage. From there, either the tools that come along with continuous integration servers (like Continuum or Hudson) will help to track coverage statistics over time, or you may use the recordCoverage goal of this plugin to record line coverage with every nightly run.
    The recordCoverage goal of the plugin stores the statistics of each run in an XML file. The nightly-run environment can be configured to run the recordCoverage goal after tests are run against instrumented code (and the cobertura.ser file is produced).
    Subsequently the showProgress goal of the same plugin can be used to plot the lines covered over time. This goal uses the XML file produced by the recordCoverage goal, resulting in something like this:

    and for branch coverage:

    The idea is to maximize the blue and minimize the pink.

Now that we have a complete process, from writing the tests to being able to measure progress, it’s just a matter of time before you can claim to have a regression-proof codebase! At least.. that’s the theory!

Using Spring Security to Secure an Existing Web Application

September 15th, 2010

Recently I was involved in securing a web application using Spring Security. The web application already had a “home-grown” security module in place. Therefore, adding Spring Security to the existing application required going beyond the “intelligent defaults” and peeling back the layers to understand the touch points between Spring Security and the web application.

Although Spring Security is highly configurable and extensible, and follows the coding-to-interfaces paradigm to a T, it is still geared towards modern applications, aka web applications. To really get a handle on what’s going on under the covers, we’ll also look at securing a plain Java application, with no servlet container to run in.

Those who are new to Spring Security, or those who have only seen it in its Acegi days, will notice a difference in the way Spring Security is configured. Instead of specifying all the relevant beans in the Spring context, Spring Security makes use of Spring namespaces.

Spring namespaces allow the user to specify some elements in a Spring application context, and by merely including or excluding some elements, the Spring core classes will register the appropriate classes as Spring managed beans. (The only way to see what namespace elements and attributes are available is to look at the security namespace schema. Although the documentation talks about the namespace, it is not comprehensive; what I found more practical was to get the Spring plugin for Eclipse and let code-completion tell you what’s available.)

Since web applications are where Spring Security provides the most out-of-the-box features, let’s start with the web.xml. The following lines tell the servlet container to add a virtual filter chain to all URLs, ensuring that the class DelegatingFilterProxy is called before the HttpRequest is passed on to any servlet that may additionally be configured.

	<filter>
	  <filter-name>springSecurityFilterChain</filter-name>
	  <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class>
	</filter>

	<filter-mapping>
	  <filter-name>springSecurityFilterChain</filter-name>
	  <url-pattern>/*</url-pattern>
	</filter-mapping>

springSecurityFilterChain is a Spring managed bean that is registered by the Spring namespace when the container starts up. The DelegatingFilterProxy delegates to this bean, which is where the other dozen or so filters are configured.
With that background let’s start by looking at Authentication.

Authentication

The purpose of authentication is twofold: check the user’s credentials and, if successful, place an Authentication object in a ThreadLocal<SecurityContext> variable.

Let’s look at a web application scenario. As stated earlier, the URL is passed through a series of filters. Here is the chronology of events:

1. One of the filters, the UsernamePasswordAuthenticationFilter, creates a UsernamePasswordAuthenticationToken (which is a subclass of Authentication) and provides it to the ProviderManager. At this point the UsernamePasswordAuthenticationToken contains just the entered username and password.

2. The ProviderManager (which is an instance of AuthenticationManager) loops through all the AuthenticationProviders that are registered with it and attempts to pass the token to any provider that accepts it.

3. Each provider’s authenticate(…) method is called. One such provider is the DaoAuthenticationProvider, which is configured with a UserDetailsService. The UserDetailsService is where SQL can be specified to access your custom schema and return a UserDetails object. This UserDetails object is used to fill out the missing password and authorities in the UsernamePasswordAuthenticationToken. An example of a customized UserDetailsService is here.

4. The DaoAuthenticationProvider, if configured with a PasswordEncoder, compares the password entered to the (possibly encoded) password from the UserDetails object and, if it matches, the Authentication object (the UsernamePasswordAuthenticationToken) is passed on to the next filter.

5. The SecurityContextPersistenceFilter persists the Authentication object in a SecurityContextHolder that is bound to a ThreadLocal<SecurityContext> variable.
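The customized UserDetailsService from step 3 might look something like the following sketch (the class name, authorities and hard-coded user are purely illustrative; a real implementation would query your own schema via JDBC):

```java
import java.util.Arrays;
import java.util.Collection;
import org.springframework.security.core.GrantedAuthority;
import org.springframework.security.core.authority.GrantedAuthorityImpl;
import org.springframework.security.core.userdetails.User;
import org.springframework.security.core.userdetails.UserDetails;
import org.springframework.security.core.userdetails.UserDetailsService;
import org.springframework.security.core.userdetails.UsernameNotFoundException;

// Hypothetical sketch: the user row and authorities below are hard-coded;
// in a real application they would come from your own schema,
// e.g. SELECT password, enabled FROM acme_users WHERE username = ?
public class AcmeUserDetailsService implements UserDetailsService {

    public UserDetails loadUserByUsername(String username) {
        if (!"bob".equals(username)) {
            throw new UsernameNotFoundException(username);
        }
        Collection<GrantedAuthority> authorities = Arrays.<GrantedAuthority>asList(
                new GrantedAuthorityImpl("SALES_VIEW"),
                new GrantedAuthorityImpl("SALES_CREATE"));
        // User is Spring Security's stock UserDetails implementation.
        return new User(username, "secret", true, true, true, true, authorities);
    }
}
```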

All the above happens if you configure the namespace like so:

	  <authentication-manager>
	    <authentication-provider user-service-ref='acmeUserDetailsService' >
	      <password-encoder ref="passwordEncoder" />
	    </authentication-provider>
	  </authentication-manager>

	  <beans:bean id="acmeUserDetailsService"
	      class="com.acme.security.AcmeUserDetailService">
	    <beans:property name="dataSource" ref="pooledDataSource"/>
	  </beans:bean>

	 <beans:bean id="passwordEncoder" class="com.acme.security.AcmePasswordEncoder" />

Now let us look at how we can achieve (almost) the same effect if we were to secure a plain Java application. There are no servlet filters to rely on, so we have to resort to the APIs. Assuming you have a Swing application, you will have to build the UI that accepts a username and password and then call the following code:

public boolean login(String username, String password) {

	boolean boo = false;

	// Load the user first so that the token can carry the granted authorities
	JdbcDaoImpl userDetailService = new JdbcDaoImpl();
	UserDetails ud = userDetailService.loadUserByUsername(username);
	UsernamePasswordAuthenticationToken token =
		new UsernamePasswordAuthenticationToken(username, password, ud.getAuthorities());

	try {
		Authentication auth = authenticationManager.authenticate(token);
		SecurityContext securityContext = SecurityContextHolder.createEmptyContext();

		//Places in ThreadLocal for future retrieval
		SecurityContextHolder.setContext(securityContext);
		SecurityContextHolder.getContext().setAuthentication(auth);
		boo = true;

	} catch (AuthenticationException e) {
		//log it and rethrow
	}

	return boo;
}

Note that the userDetailService above is the out-of-the-box JdbcDaoImpl, but it could very well be a customized UserDetailsService, downloadable from here. Also, the authenticationManager is the Spring managed bean that is registered by Spring Security as shown above.

Once the Authentication object is available as a ThreadLocal variable, it can be accessed from anywhere in the application (even the Service or DAO layers) using this.
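For example, a service-layer class (the class and method names are hypothetical) can pull the current user out of the ThreadLocal without any reference to the web tier:

```java
import org.springframework.security.core.Authentication;
import org.springframework.security.core.context.SecurityContextHolder;

public class AuditService {
    // Works in any layer, on the same thread that was authenticated.
    public String currentUsername() {
        Authentication auth = SecurityContextHolder.getContext().getAuthentication();
        return auth == null ? "anonymous" : auth.getName();
    }
}
```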

Authorization

Now that we have identified who is accessing the web application via the Authentication object, let’s look at details of what is accessible.

Let’s define a few key entities as shown in the picture below.

There is some confusion on what constitutes an Authority as it is interchangeably called Role or Permission throughout the documentation, but essentially they are the same thing.
A Role/Authority/Permission is a granular definition of a business use case. For example: ACCESS_ACCOUNT, DELETE_INVOICE, TRANSFER_BALANCE etc could be all treated as business use cases.
A Group can have multiple Authorities and typically represents what is commonly called, outside the Spring Security world, a ‘role’. For example, a business department or function like ACCOUNTS, SALES or ADMIN could be a Group.
A Resource is anything that needs to be protected. In the context of Spring Security it could be a URL, a method on a Spring managed bean (service bean) or a domain object.

To add to the confusion, there is the concept of a RoleHierarchy that allows you to nest Authorities. That makes it tempting to use a RoleHierarchy to model aggregations of Authorities using only nested Authorities. But that is not recommended, because aggregating Authorities is typically an admin function, and this nesting relationship of roles is typically not stored in a database (at least there is no schema that Spring supports for persisting role hierarchies).

And finally, Spring Security allows you to map Users to Groups OR Users to Authorities, bypassing Groups completely! That opens a host of ways to configure authorization. I’ll leave it at this: Given that there is no out-of-the-box way to persist Role hierarchies, and given that mapping macro business functions to granular business use cases is usually done via an admin UI, it’s best to leave Role hierarchies to when it’s absolutely necessary and stick to the scenario on the picture above.

In the picture above, we see two sections: resources-authorities mapping and users-groups-authorities mapping. The first is stored in xml configuration files and the second in database tables. That is a desirable goal:

We decide what resource can be mapped to what authorities at development time, and therefore we set that up in code (security-context.xml, typically). So, for instance, we can secure URLs, saying that sales/*.jsp can be accessed only by those users that have the SALES_VIEW and SALES_CREATE authorities. Similarly, we can secure service methods by assigning, for instance, the DefaultServiceImpl.createNewSale() method to the SALES_CREATE authority.

On the other hand, the users-groups-authorities relationship is more fluid and can be assigned at run time, via admin screens/pages that manipulate this relationship in a database. For instance, via an admin screen we would like to assign the SALES_VIEW and SALES_CREATE authorities to the SALES group. Then, since user Bob recently moved into the Sales department, we can assign him to the Sales group (and re-assign Jill from the SALES group to the ACCOUNTS group).

In that manner Bob ends up accessing sales/*.jsp and the createNewSale() service, and Jill, who has been removed from the SALES group, finds she cannot.

That has hopefully helped clear some confusion about these terms. Now let’s see how to use them.
Spring Security provides classes and database schemas/table definitions to manage/store the group-user relationship and the group-authority relationship.

The authority-resource relationship however can be broken up as follows:

Securing URLs: This is achieved by the http and intercept-url elements in the security namespace.
Securing service methods: This is achieved by adding the intercept-methods element to the bean definition in a Spring context file.
Securing business domain objects: This is achieved by configuring Spring ACLs. This is a combination of database tables and Spring configuration. There is also a good article on this topic here.

With that background let’s look at securing URLs and securing methods. We will not cover domain level security (Spring ACLs) in this post.

Securing URLs

The filter specified in the web.xml called springSecurityFilterChain automatically loads a FilterChainProxy where the rest of the filters that are needed for web application security are configured.

One of those filters is a FilterSecurityInterceptor which is responsible for authorization. Here is the chronology of what happens when a url has to be authorized:
1. The FilterSecurityInterceptor is an instance of AbstractSecurityInterceptor.
2. The AbstractSecurityInterceptor defines the workflow (described here) to actually carry out the authorization by using an instance of the AccessDecisionManager interface, AffirmativeBased by default.
3. The AffirmativeBased AccessDecisionManager is configured by default with a series of DecisionVoters. An AffirmativeBased AccessDecisionManager means that if any one of its configured DecisionVoters votes ‘yes’, the rest of the voting process is aborted and the final vote is a ‘yes’.
4. One of the DecisionVoters is the RoleVoter, which is responsible for voting on ConfigAttributes that are the actual GrantedAuthority name strings. By default the GrantedAuthorities are prefixed with ‘ROLE_’.
5. Another DecisionVoter is the AuthenticatedVoter, which is responsible for voting on strings like IS_AUTHENTICATED_FULLY, IS_AUTHENTICATED_REMEMBERED or IS_AUTHENTICATED_ANONYMOUSLY.
6. New in release 3.0 is the WebExpressionVoter, which will be used if expressions are used.
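To make the affirmative strategy concrete, here is a deliberately simplified, framework-free sketch of the voting loop. The interfaces below are stand-ins, not Spring's actual AccessDecisionManager and AccessDecisionVoter, which carry far more context:

```java
import java.util.Arrays;
import java.util.List;

// Simplified stand-in for an AccessDecisionVoter: returns 1 (grant),
// -1 (deny) or 0 (abstain) for a required config attribute.
interface Voter {
    int vote(List<String> userAuthorities, String requiredAttribute);
}

// RoleVoter-like behavior: only votes on attributes with the ROLE_ prefix.
class SimpleRoleVoter implements Voter {
    public int vote(List<String> userAuthorities, String requiredAttribute) {
        if (!requiredAttribute.startsWith("ROLE_")) {
            return 0; // abstain: not an attribute this voter understands
        }
        return userAuthorities.contains(requiredAttribute) ? 1 : -1;
    }
}

public class AffirmativeSketch {
    // Affirmative strategy: the first 'yes' aborts the rest of the voting.
    static boolean decide(List<Voter> voters, List<String> authorities, String attribute) {
        for (Voter v : voters) {
            if (v.vote(authorities, attribute) > 0) {
                return true;
            }
        }
        return false; // no voter granted: access denied
    }
}
```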

To do all the above, the following needs to be added to the Spring security context:

<http auto-config='true' use-expressions="true">
	<intercept-url pattern="/login.jsp" access="permitAll" />
	<intercept-url pattern="/secure/**" access="hasRole('ACCOUNT_DELETE')" />
	<intercept-url pattern="/**" access="isAuthenticated()" />

	<form-login login-page="/login.jsp"  authentication-failure-url="/login.jsp?login_error=1" login-processing-url="/loginURL"/>
	<logout invalidate-session="true" />
</http>

The above causes any URL to be accessible only by authenticated users. Any URL that begins with ‘secure’ will need the ACCOUNT_DELETE authority granted to the user. login.jsp is accessible by all. use-expressions tells the Spring Security classes to register the WebExpressionVoter (instead of the RoleVoter) as a DecisionVoter on the AffirmativeBased AccessDecisionManager.

Securing Service Level Methods

For those familiar with Spring’s AOP features, this functionality should come as no surprise. There are three ways you can secure methods: use annotations as described here; use pointcuts on certain methods of Spring managed beans, much like transaction semantics; or, lastly, configure method-level protection by using intercept-methods in the bean definition itself. Here is an example of that:

<beans:bean id="acmeService" class="com.acme.service.AcmeService">
  <intercept-methods access-decision-manager-ref="customAccessDecisionManager">
    <protect access="SALES_CREATE" method="businessMethodToCreateASale"/>
  </intercept-methods>
</beans:bean>
...
<beans:bean id="customAccessDecisionManager" class="org.springframework.security.access.vote.AffirmativeBased">
  <beans:property name="decisionVoters">
    <beans:list>
      <beans:bean class="org.springframework.security.access.vote.RoleVoter">
        <beans:property name="rolePrefix" value=""/>
      </beans:bean>
    </beans:list>
  </beans:property>
</beans:bean>

Beyond the point that we have protected a certain business method, note that because we want to configure access without a prefix (ROLE_ by default), we have had to define a customAccessDecisionManager that in turn uses a RoleVoter configured with no prefix. (There is an attribute on the http element of the namespace, use-expressions="true", for taking care of expressions; but on the method side of the fence, the namespace configuration does not offer any such convenience. Hence we have to resort to a customAccessDecisionManager with a prefix-less RoleVoter.)

Conclusion

At a very high level we’ve seen what Spring Security namespaces do for us. We have visited the main classes involved with Spring Security and talked a little bit about the confusion (that at least I faced) regarding GROUPS, AUTHORITIES and ROLES. We’ve gone through the steps that Authentication and Authorization entail. Lastly, we looked at how to secure URLs and service methods.

As with anything Spring, Spring Security is a highly configurable framework and the more we know the internals of the various classes, the more we can customize for our circumstances.
