TurboManage

David Chandler's Journal of Java Web and Mobile Development


Caching, batching dispatcher for gwt-dispatch

Posted by David Chandler on July 12, 2010

In a previous post, I mentioned that I’d beefed up CachingDispatchAsync to do batching and queuing of commands (gwt-dispatch Action/Result pairs). I’m finally ready to publish this code. Today we’ll look at caching and batching, and tomorrow at queuing (or callback chaining).

First, let’s review caching. The basic idea is to save trips to the server by returning results from client-side cache when possible. We can use a simple marker interface to indicate whether an Action should be cacheable:

package com.turbomanage.gwt.client.dispatch;

/**
 * Marker interface for Action classes whose corresponding Results can be cached
 * 
 * @author David Chandler
 */
public interface Cacheable {

}

To enable caching, simply implement Cacheable in your Action class and override equals() and hashCode() as discussed in the earlier post. In this example, FindUserListSubsAction implements Cacheable, so the caching dispatcher will always attempt to return its Result from cache when available.

package com.roa.app.shared.rpc;

import com.turbomanage.gwt.client.dispatch.Cacheable;

import net.customware.gwt.dispatch.shared.Action;

public class FindUserListSubsAction implements Action<FindUserListSubsResult>, Cacheable
{

	public FindUserListSubsAction()
	{
		// Empty constructor for GWT-RPC
	}

	@Override
	public boolean equals(Object obj)
	{
		// All instances of this class should return the same cached Result
		return obj != null && this.getClass().equals(obj.getClass());
	}

	@Override
	public int hashCode()
	{
		return this.getClass().hashCode();
	}

}

I debated marking Result classes as Cacheable rather than Action classes. It doesn’t matter as long as Action and Result classes are paired 1:1, but I ultimately chose to mark Actions cacheable to remove any ambiguity in the unlikely event that multiple Action classes return the same Result class.

The cache is implemented as a HashMap, where the key is an Action instance and the value is the corresponding Result. The dispatcher’s execute() method checks to see if an Action is Cacheable. If so, it will first try to return the result from cache. If no result is available, it will go ahead and call the dispatcher RPC service and cache the result. Because the cache uses an instance of an Action as the map key, you may have multiple results in the cache for a given Action class. For example, a FindUserByIdAction that takes a user ID argument will cache the result for each unique user ID (provided, of course, that you correctly override equals() and hashCode() on FindUserByIdAction).
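
For reference, here’s a sketch of what such a parameterized Action might look like. FindUserByIdAction and FindUserByIdResult are purely illustrative (they’re not part of the library); the important part is that equals() and hashCode() incorporate the user ID.

package com.roa.app.shared.rpc;

import com.turbomanage.gwt.client.dispatch.Cacheable;

import net.customware.gwt.dispatch.shared.Action;

public class FindUserByIdAction implements Action<FindUserByIdResult>, Cacheable
{
	private Long userId;

	FindUserByIdAction()
	{
		// Empty constructor for GWT-RPC
	}

	public FindUserByIdAction(Long userId)
	{
		this.userId = userId;
	}

	public Long getUserId()
	{
		return userId;
	}

	@Override
	public boolean equals(Object obj)
	{
		// Two instances are equal only when they request the same user,
		// so each unique ID gets its own entry in the cache
		if (this == obj)
			return true;
		if (obj == null || getClass() != obj.getClass())
			return false;
		FindUserByIdAction other = (FindUserByIdAction) obj;
		return userId == null ? other.userId == null : userId.equals(other.userId);
	}

	@Override
	public int hashCode()
	{
		return 31 * getClass().hashCode() + (userId == null ? 0 : userId.hashCode());
	}

}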

Here’s my caching implementation of DispatchAsync:

package com.turbomanage.gwt.client.dispatch;

import java.util.ArrayList;
import java.util.HashMap;

import net.customware.gwt.dispatch.client.DispatchAsync;
import net.customware.gwt.dispatch.shared.Action;
import net.customware.gwt.dispatch.shared.Result;

import com.google.gwt.core.client.GWT;
import com.google.gwt.user.client.rpc.AsyncCallback;
import com.google.inject.Inject;
import com.google.inject.Singleton;

/**
 * Dispatcher which supports caching of data in memory and batches
 * concurrent requests for the same Action into a single RPC call
 * 
 * In order for caching to work, Action classes must override
 * equals() and hashCode() appropriately! Alternatively, you can pass the
 * same instance of an Action with subsequent requests (i.e., use new
 * only once).
 */
@Singleton
public class CachingDispatchAsync implements DispatchAsync
{
	private DispatchAsync dispatcher;
	private static HashMap<Action<Result>, Result> cache = new HashMap<Action<Result>, Result>();
	private static HashMap<Action<Result>, ArrayList<AsyncCallback<Result>>> pendingCallbacks = new HashMap<Action<Result>, ArrayList<AsyncCallback<Result>>>();

	@Inject
	public CachingDispatchAsync(final DispatchAsync dispatcher)
	{
		this.dispatcher = dispatcher;
	}

	/**
	 * Executes the given Action. Cacheable Actions are routed through the
	 * cache; all others are passed straight to the underlying dispatcher.
	 * 
	 * @see net.customware.gwt.dispatch.client.DispatchAsync#execute(net.customware.gwt.dispatch.shared.Action,
	 *      com.google.gwt.user.client.rpc.AsyncCallback)
	 */
	public <A extends Action<R>, R extends Result> void execute(final A action,
			final AsyncCallback<R> callback)
	{
		if (action instanceof Cacheable)
		{
			executeWithCache(action, callback);
		}
		else
		{
			dispatcher.execute(action, callback);
		}
	}

	/**
	 * Executes the given Action. If the same Action has been executed before,
	 * its Result is returned from the cache; if it is currently in flight,
	 * the callback is queued until the pending call returns.
	 * 
	 * @param <A>
	 *            the Action type
	 * @param <R>
	 *            the Result type
	 * @param action
	 *            the action
	 * @param callback
	 *            the callback
	 */
	@SuppressWarnings("unchecked")
	private <A extends Action<R>, R extends Result> void executeWithCache(
			final A action, final AsyncCallback<R> callback)
	{
		GWT.log("Executing with cache " + action.toString());
		final ArrayList<AsyncCallback<Result>> pending = pendingCallbacks.get(action);
		// TODO need a timeout here?
		if (pending != null)
		{
			GWT.log("Command pending for " + action, null);
			// Add to pending commands for this action
			pending.add((AsyncCallback<Result>) callback);
			return;
		}
		Result r = cache.get(action);

		if (r != null)
		{
			GWT.log("Cache hit for " + action, null);
			callback.onSuccess((R) r);
		}
		else
		{
			GWT.log("Calling real service for " + action, null);
			pendingCallbacks.put((Action<Result>) action, new ArrayList<AsyncCallback<Result>>());
			dispatcher.execute(action, new AsyncCallback<R>()
			{
				public void onFailure(Throwable caught)
				{
					// Process all pending callbacks for this action
					ArrayList<AsyncCallback<Result>> callbacks = pendingCallbacks.remove((Action<Result>) action);
					for (AsyncCallback<Result> pendingCallback : callbacks)
					{
						pendingCallback.onFailure(caught);
					}
					callback.onFailure(caught);
				}

				public void onSuccess(R result)
				{
					GWT.log("Real service returned successfully " + action, null);
					// Cache the result first so that any re-entrant requests
					// made from the callbacks below are served from the cache
					cache.put((Action<Result>) action, result);
					// Process all pending callbacks for this action
					ArrayList<AsyncCallback<Result>> callbacks = pendingCallbacks.remove((Action<Result>) action);
					for (AsyncCallback<Result> pendingCallback : callbacks)
					{
						pendingCallback.onSuccess(result);
					}
					callback.onSuccess(result);
				}
			});
		}
	}
	
	/**
	 * Clear the cache
	 */
	public void clear()
	{
		cache.clear();
	}

	/**
	 * Clear the cached Result, if any, for a specific Action
	 * 
	 * @param action
	 *            the Action whose cached Result should be evicted
	 */
	public <A extends Action<R>, R extends Result> void clear(A action)
	{
		cache.remove(action);
	}

}

To wire it up, simply bind it in your GIN module and inject it into your presenters or services.

In your GIN module:

		// for gwt-dispatch
		bind(CachingDispatchAsync.class);

In a presenter or service class:

	...
	private final EventBus eventBus;
	private final CachingDispatchAsync cachingDispatch;

	@Inject
	public MyServiceImpl(final EventBus eventBus, final CachingDispatchAsync dispatch)
	{
		this.cachingDispatch = dispatch;
		this.eventBus = eventBus;
	}
	...

If you wish, you can bind and inject CachingDispatchAsync as an implementation of DispatchAsync; however, if you do this, you won’t be able to call the clear() methods because they are not present on DispatchAsync.

One happy consequence of using a caching dispatcher is that it helps prevent what some have called “exploding event classes.” In the absence of a caching dispatcher, the most efficient way to supply the same data to multiple presenters is to fire a custom event containing a Result from an RPC call. Each presenter that needs the data can then listen for the event. The unhappy side effect of this is that you may end up with a custom event for each service call. With a caching dispatcher, this is no longer necessary. Each presenter can call the dispatcher “just in time,” and the dispatcher will return it from cache if available. Custom events are then only needed when multiple presenters must be notified immediately of changes to data.
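
For example, a presenter can simply ask for the data whenever it needs it. The following is only a sketch (the method name and callback bodies are up to you), using the injected cachingDispatch field from the wiring example above:

	private void loadSubscriptions()
	{
		// Served from cache after the first successful call
		cachingDispatch.execute(new FindUserListSubsAction(),
				new AsyncCallback<FindUserListSubsResult>()
				{
					public void onFailure(Throwable caught)
					{
						// Handle the error as usual
					}

					public void onSuccess(FindUserListSubsResult result)
					{
						// Populate the view with the result
					}
				});
	}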

Closely related to this, a caching dispatcher makes startup with multiple presenters much easier. Let’s say you have three presenters that all need the same data on initial load. Previously, you would have to use a custom event (or roll your own caching) to distribute the data to each presenter. Now, however, each presenter can call the dispatcher as if it were the only presenter, and the dispatcher will return the data from cache or a service call as needed.

Which brings us to batching. What happens if, during startup, three presenters all call dispatch.execute() for the same Action at the same time? The dispatcher can’t return a result from cache until at least one of the service calls has completed, so you will likely end up with all three dispatch requests resulting in RPC calls, thereby defeating the whole purpose of caching. Fortunately, the caching dispatcher above is smart enough to deal with this. Before it sends an Action over RPC, it checks to see if the same Action is already in progress. If so, it waits for the response and invokes all waiting callback methods for that Action with the Result.
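
Here’s a rough sketch of that startup scenario (the three presenter callbacks are placeholders):

	// All three presenters share the injected singleton CachingDispatchAsync.
	// Presenter A: nothing cached or pending yet, so this triggers the one RPC call
	cachingDispatch.execute(new FindUserListSubsAction(), presenterACallback);

	// Presenters B and C: per equals()/hashCode(), the same Action is already in
	// flight, so these callbacks are parked in pendingCallbacks rather than
	// causing two more RPC calls
	cachingDispatch.execute(new FindUserListSubsAction(), presenterBCallback);
	cachingDispatch.execute(new FindUserListSubsAction(), presenterCCallback);

	// When the single RPC returns, all three callbacks receive the same Result,
	// which is also cached for any later requests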

The combination of caching and batching makes it possible to write network-efficient code quite simply. Tomorrow, we’ll look at queuing service calls to run in a particular order.

13 Responses to “Caching, batching dispatcher for gwt-dispatch”

  1. Eric Landry said

    Great stuff. Thanks, David!

  2. Matt said

    Great article, thanks.

  3. Stephen said

    A bit of self-promotion, but if you want hashCode and equals auto-implemented so that your Actions can be cacheable, you might try gwt-mpv-apt:

    http://github.com/stephenh/gwt-mpv-apt

    It doesn’t know about the new Cacheable interface, so you’d need to use a base class, e.g.

    @GenDispatch(baseAction = SomeCacheableBaseClass.NAME)
    class FooSpec { … }

    Though adding a gwt-dispatch-specific cacheable=true annotation parameter to the GenDispatch annotation would be cool too.

    • Thanks, Stephen. Is the project also hosted on code.google.com? Seems like I saw an announcement on a list…

      • Stephen said

        The project is only on github, but I’ve mentioned it on the gwt-dispatch and gwt-platform mailing lists, so that is probably where you’ve seen it.

  4. First David, congrats for your job at Google! If you see a position opening up, send them my way. 😉

    Next, I wanted to pick your brain about what you wrote:
    “In the unlikely event that multiple Action classes return the same Result class”

    I actually use that quite a bit, with classes like “NullResult” or “StringResult”. Do you see anything wrong in this practice?

  5. […] Caching, batching dispatcher for gwt-dispatch […]

  6. Thanks, Philippe. I see nothing wrong with reusing Result types. Reuse reduces the number of types overall, thereby reducing compile time, size, etc., and simple types like StringResult don’t benefit from the expressiveness of more model-centric types like FindUserResult, anyway.

  7. ChrisV said

    Thanks for the post and the code, David! I love the idea of this client-side caching. It will be so much easier to just make the call and not worry about whether you’ve already pulled the data or not. I’m curious about your cache-clearing policy. I see that you can clear the entire cache or clear it only for a specific action.

    I’d love to see how you are using these. When are you calling them? The app I’m working on is multi-user (one user might be making changes that another user should see). While it doesn’t have to update with new data in real time, I’d like it to update more often than once a session. At the same time, I’d like something a little smarter than calling clear() every 5 minutes.

    • I clear the cache whenever I make an RPC call that would change the results in the cache (for example, when modifying or deleting a record). In the case of objects which may have been updated by other users, how often you clear will depend on how much stale data you can live with.
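
      For example (the UpdateUserAction/UpdateUserResult classes and the getId() accessor here are purely hypothetical):

      	// After a mutating call succeeds, evict whatever it invalidates
      	cachingDispatch.execute(new UpdateUserAction(user), new AsyncCallback<UpdateUserResult>()
      	{
      		public void onFailure(Throwable caught)
      		{
      			// Handle the error
      		}

      		public void onSuccess(UpdateUserResult result)
      		{
      			// Evict just this user's cached Result...
      			cachingDispatch.clear(new FindUserByIdAction(user.getId()));
      			// ...or call clear() with no args if many Results are affected
      		}
      	});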

      /dmc

  8. Hi, David. Thanks for this. Can I use this code in my app under a GPLv2+ license? (I’d include a credit in the file header, of course.)
