TurboManage

David Chandler's Journal of Java Web and Mobile Development


Caching, batching dispatcher for gwt-dispatch

Posted by David Chandler on July 12, 2010

In a previous post, I mentioned that I’d beefed up CachingDispatchAsync to do batching and queuing of commands (gwt-dispatch Action/Result pairs). I’m finally ready to publish that code. Today we’ll look at caching and batching, and tomorrow at queuing (or callback chaining).

First, let’s review caching. The basic idea is to save trips to the server by returning results from client-side cache when possible. We can use a simple marker interface to indicate whether an Action should be cacheable:

package com.turbomanage.gwt.client.dispatch;

/**
 * Marker interface for Action classes whose corresponding Results can be cached
 * 
 * @author David Chandler
 */
public interface Cacheable {

}

To enable caching, simply implement Cacheable in your Action class and override equals() and hashCode() as discussed in the earlier post. In the example below, FindUserListSubsAction implements Cacheable, so the caching dispatcher will always attempt to return its result from cache when one is available.

package com.roa.app.shared.rpc;

import com.turbomanage.gwt.client.dispatch.Cacheable;

import net.customware.gwt.dispatch.shared.Action;

public class FindUserListSubsAction implements Action<FindUserListSubsResult>, Cacheable
{

	public FindUserListSubsAction()
	{
		// Empty constructor for GWT-RPC
	}

	@Override
	public boolean equals(Object obj)
	{
		// All instances of this class should return the same cached Result
		return this.getClass().equals(obj.getClass());
	}

	@Override
	public int hashCode()
	{
		return this.getClass().hashCode();
	}

}

I debated marking Result classes as Cacheable rather than Action classes. It doesn’t matter as long as Action and Result classes are paired 1:1, but I ultimately decided to make Actions cacheable to avoid ambiguity in the unlikely event that multiple Action classes return the same Result class.

The cache is implemented as a HashMap, where the key is an Action instance and the value is the corresponding Result. The dispatcher’s execute() method checks whether an Action is Cacheable. If so, it first tries to return the result from cache. If no result is available, it calls the underlying dispatch RPC service and caches the result. Because the cache uses an instance of an Action as the map key, you may have multiple results in the cache for a given Action class. For example, a FindUserByIdAction that takes a user ID argument will cache a result for each unique user ID (provided, of course, that you correctly override equals() and hashCode() on FindUserByIdAction).

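Here’s a sketch of what such a FindUserByIdAction might look like (this class and its FindUserByIdResult are hypothetical, not part of the dispatcher code; the package follows the earlier example). Its equals() and hashCode() incorporate the user ID so each ID gets its own cache entry:

package com.roa.app.shared.rpc;

import com.turbomanage.gwt.client.dispatch.Cacheable;

import net.customware.gwt.dispatch.shared.Action;

public class FindUserByIdAction implements Action<FindUserByIdResult>, Cacheable
{
	private Long userId;

	public FindUserByIdAction()
	{
		// Empty constructor for GWT-RPC
	}

	public FindUserByIdAction(Long userId)
	{
		this.userId = userId;
	}

	public Long getUserId()
	{
		return userId;
	}

	@Override
	public boolean equals(Object obj)
	{
		// Two instances are equal only if they request the same user ID,
		// so each ID gets its own entry in the dispatcher's cache
		if (!(obj instanceof FindUserByIdAction))
			return false;
		FindUserByIdAction other = (FindUserByIdAction) obj;
		return userId == null ? other.userId == null : userId.equals(other.userId);
	}

	@Override
	public int hashCode()
	{
		return 31 * this.getClass().hashCode() + (userId == null ? 0 : userId.hashCode());
	}

}

With equals() and hashCode() defined this way, requests for different user IDs occupy separate cache entries, while repeated requests for the same ID hit the cache.
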
Here’s my caching implementation of DispatchAsync:

package com.turbomanage.gwt.client.dispatch;

import java.util.ArrayList;
import java.util.HashMap;

import net.customware.gwt.dispatch.client.DispatchAsync;
import net.customware.gwt.dispatch.shared.Action;
import net.customware.gwt.dispatch.shared.Result;

import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.inject.Inject;
import com.google.inject.Singleton;

/**
 * Dispatcher which supports caching of data in memory
 * 
 * In order for caching to work, Action classes must override
 * equals() and hashCode() appropriately! Alternatively, you can pass the
 * same instance of an Action with subsequent requests (i.e., use new
 * only once).
 */
@Singleton
public class CachingDispatchAsync implements DispatchAsync
{
	private DispatchAsync dispatcher;
	private static HashMap<Action<Result>, Result> cache = new HashMap<Action<Result>, Result>();
	private static HashMap<Action<Result>, ArrayList<AsyncCallback<Result>>> pendingCallbacks = new HashMap<Action<Result>, ArrayList<AsyncCallback<Result>>>();

	@Inject
	public CachingDispatchAsync(final DispatchAsync dispatcher)
	{
		this.dispatcher = dispatcher;
	}

	/**
	 * Executes the given Action, routing it through the cache when the Action
	 * implements {@link Cacheable}; all other Actions go straight to the
	 * underlying dispatcher.
	 * 
	 * @see net.customware.gwt.dispatch.client.DispatchAsync#execute(Action,
	 *      AsyncCallback)
	 */
	public <A extends Action<R>, R extends Result> void execute(final A action,
			final AsyncCallback<R> callback)
	{
		if (action instanceof Cacheable)
		{
			executeWithCache(action, callback);
		}
		else
		{
			dispatcher.execute(action, callback);
		}
	}

	/**
	 * Executes the given Action. If the same Action has been executed before,
	 * its Result is returned from the cache; if an identical Action is already
	 * in flight, the callback is queued until that call returns.
	 * 
	 * @param <A>
	 *            the Action type
	 * @param <R>
	 *            the Result type
	 * @param action
	 *            the action to execute
	 * @param callback
	 *            the callback to invoke with the Result
	 */
	@SuppressWarnings("unchecked")
	private <A extends Action<R>, R extends Result> void executeWithCache(
			final A action, final AsyncCallback<R> callback)
	{
		GWT.log("Executing with cache " + action.toString());
		final ArrayList<AsyncCallback<Result>> pending = pendingCallbacks.get(action);
		// TODO need a timeout here?
		if (pending != null)
		{
			GWT.log("Command pending for " + action, null);
			// Add to pending commands for this action
			pending.add((AsyncCallback<Result>) callback);
			return;
		}
		Result r = cache.get(action);

		if (r != null)
		{
			GWT.log("Cache hit for " + action, null);
			callback.onSuccess((R) r);
		}
		else
		{
			GWT.log("Calling real service for " + action, null);
			pendingCallbacks.put((Action<Result>) action, new ArrayList<AsyncCallback<Result>>());
			dispatcher.execute(action, new AsyncCallback<R>()
			{
				public void onFailure(Throwable caught)
				{
					// Process all pending callbacks for this action
					ArrayList<AsyncCallback<Result>> callbacks = pendingCallbacks.remove((Action<Result>) action);
					for (AsyncCallback<Result> pendingCallback : callbacks)
					{
						pendingCallback.onFailure(caught);
					}
					callback.onFailure(caught);
				}

				public void onSuccess(R result)
				{
					GWT.log("Real service returned successfully " + action, null);
					// Process all pending callbacks for this action
					ArrayList<AsyncCallback<Result>> callbacks = pendingCallbacks.remove((Action<Result>) action);
					for (AsyncCallback<Result> pendingCallback : callbacks)
					{
						pendingCallback.onSuccess(result);
					}
					cache.put((Action<Result>) action, result);
					callback.onSuccess(result);
				}
			});
		}
	}
	
	/**
	 * Clear the cache
	 */
	public void clear()
	{
		cache.clear();
	}

	/**
	 * Clear the cached Result, if any, for a specific Action
	 * 
	 * @param action
	 *            the Action whose cached Result should be evicted
	 */
	public <A extends Action<R>, R extends Result> void clear(A action)
	{
		// Evict the entry so the next execute() for this Action calls the real service
		cache.remove(action);
	}

}

To wire it up, simply bind it in your GIN module and inject it into your presenters or services.

In your GIN module:

		// for gwt-dispatch
		bind(CachingDispatchAsync.class);
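
For context, that bind() call goes in the configure() method of a GIN module. A minimal sketch, assuming the rest of your gwt-dispatch wiring (the plain DispatchAsync binding) is installed elsewhere; the package and class name here are hypothetical:

package com.roa.app.client.gin;

import com.google.gwt.inject.client.AbstractGinModule;
import com.turbomanage.gwt.client.dispatch.CachingDispatchAsync;

public class MyClientModule extends AbstractGinModule
{
	@Override
	protected void configure()
	{
		// for gwt-dispatch
		bind(CachingDispatchAsync.class);
	}
}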

In a presenter or service class:

	...
	private final EventBus eventBus;
	private final CachingDispatchAsync cachingDispatch;

	@Inject
	public MyServiceImpl(final EventBus eventBus, final CachingDispatchAsync dispatch)
	{
		this.cachingDispatch = dispatch;
		this.eventBus = eventBus;
	}
	...

If you wish, you can bind and inject CachingDispatchAsync as an implementation of DispatchAsync; however, if you do this, you won’t be able to call the clear() methods because they are not present on DispatchAsync.
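
The clear() methods exist for cache invalidation. A minimal sketch of how you might use them (the surrounding mutation logic is omitted):

	// After a mutation that changes the user's list subscriptions, evict the
	// stale Result so the next execute() fetches fresh data from the server.
	// A new instance works as the key because FindUserListSubsAction bases
	// equals()/hashCode() on the class alone.
	cachingDispatch.clear(new FindUserListSubsAction());

	// Or wipe the whole cache, e.g. when the user signs out
	cachingDispatch.clear();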

One happy consequence of using a caching dispatcher is that it helps prevent what some have called “exploding event classes.” In the absence of a caching dispatcher, the most efficient way to supply the same data to multiple presenters is to fire a custom event containing a Result from an RPC call. Each presenter that needs the data can then listen for the event. The unhappy side effect of this is that you may end up with a custom event for each service call. With a caching dispatcher, this is no longer necessary. Each presenter can call the dispatcher “just in time,” and the dispatcher will return it from cache if available. Custom events are then only needed when multiple presenters must be notified immediately of changes to data.

Closely related to this, a caching dispatcher makes startup with multiple presenters much easier. Let’s say you have three presenters that all need the same data on initial load. Previously, you would have to use a custom event (or roll your own caching) to distribute the data to each presenter. Now, however, each presenter can call the dispatcher as if it were the only presenter, and the dispatcher will return the data from cache or a service call as needed.
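
For instance, each presenter might load its data like this (a sketch only; the method name and view-population code are hypothetical):

	private void loadSubs()
	{
		// Safe to call from several presenters at startup: the dispatcher
		// returns the data from cache or makes the RPC call as needed
		cachingDispatch.execute(new FindUserListSubsAction(), new AsyncCallback<FindUserListSubsResult>()
		{
			public void onFailure(Throwable caught)
			{
				// TODO handle error
			}

			public void onSuccess(FindUserListSubsResult result)
			{
				// populate the view from result
			}
		});
	}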

Which brings us to batching. What happens if, during startup, three presenters all call dispatch.execute() for the same Action at the same time? The dispatcher can’t return a result from cache until at least one of the service calls has completed, so you will likely end up with all three dispatch requests resulting in RPC calls, thereby defeating the whole purpose of caching. Fortunately, the caching dispatcher above is smart enough to deal with this. Before it sends an Action over RPC, it checks to see if the same Action is already in progress. If so, it waits for the response and invokes all waiting callback methods for that Action with the Result.

The combination of caching and batching makes it possible to write network-efficient code quite simply. Tomorrow, we’ll look at queuing service calls to run in a particular order.

Posted in Google Web Toolkit