TurboManage

David Chandler's Journal of Java Web and Mobile Development

  • David M. Chandler


    Web app developer since 1994 and former Developer Advocate with Google now residing in Peru. I am currently offering public and private developer training courses in the US and Latin America as well as working on Android, GWT, and App Engine projects.


stORM implementation notes

Posted by David Chandler on December 18, 2012

In my previous post, I omitted discussion of how stORM works in the interest of getting the word out about my new DAO tool for Android. In the current post, we’ll take a deeper look at the code generated by the stORM annotation processor.

DbFactory class

Let’s start with the root of all runtime activity, the DbFactory class. This class is generated from your DatabaseHelper class annotated with @Database. As you can see in the TestDbFactory class below, the database name and version from the annotation are hard-coded as static fields. Also, there is an array of TableHelpers associated with this DatabaseFactory, one for each class annotated with @Entity. It is possible for your app to include multiple databases, in which case each @Entity must include the dbName attribute to indicate in which database to store the entity.

public class TestDbFactory implements DatabaseFactory {

	private static final String DB_NAME = "testDb";
	private static final int DB_VERSION = 2;
	private static final TableHelper[] TABLE_HELPERS = new TableHelper[] {
		new com.turbomanage.storm.entity.dao.SimpleEntityTable()
	};
	private static DatabaseHelper mInstance;

	/**
	 * Provides a singleton instance of the DatabaseHelper per application
	 * to prevent threading issues. See
	 * https://github.com/commonsguy/cwac-loaderex#notes-on-threading
	 *
	 * @param ctx Application context
	 * @return {@link SQLiteOpenHelper} instance
	 */
	public static DatabaseHelper getDatabaseHelper(Context ctx) {
		if (mInstance == null) {
			// in case this is called from an AsyncTask or other thread
			synchronized (TestDbFactory.class) {
				if (mInstance == null) {
					mInstance = new com.turbomanage.storm.TestDatabaseHelper(
							ctx.getApplicationContext(),
							new TestDbFactory());
				}
			}
		}
		return mInstance;
	}
	...
}

The purpose of the generated DbFactory class is to provide a singleton instance of your DatabaseHelper class. The DatabaseHelper base class in stORM extends SQLiteOpenHelper, which is used to connect to a SQLite database. A singleton instance of SQLiteOpenHelper is important because it allows SQLiteDatabase to synchronize all operations. In the DbFactory class, getDatabaseHelper() is implemented using the double-checked locking pattern to prevent a race condition without paying the cost of synchronization on every call. For more information on the use of singletons vs. Application, see this post.
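Stripped of the Android specifics, the double-checked locking idiom in getDatabaseHelper() reduces to the following generic sketch. This is illustrative only, not stORM code; one caveat worth noting is that the classic idiom requires the field to be declared volatile to be fully safe under the Java memory model.

```java
// Generic sketch of double-checked locking (not stORM code). The volatile
// keyword is required for the classic idiom to be safe under the Java
// memory model.
class LazySingleton {

	private static volatile LazySingleton instance;

	private LazySingleton() {
	}

	static LazySingleton getInstance() {
		if (instance == null) { // fast path: no lock once initialized
			synchronized (LazySingleton.class) {
				if (instance == null) { // re-check while holding the lock
					instance = new LazySingleton();
				}
			}
		}
		return instance;
	}
}
```

The first check avoids taking the lock on every call; the second check, made while holding the lock, ensures only one instance is ever constructed.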

The DbFactory class allows you to connect to the database from anywhere in your app like this:

TestDbFactory.getDatabaseHelper(ctx).getWritableDatabase();

The Context supplied as an argument to getDatabaseHelper() ultimately gets passed through to SQLiteOpenHelper.
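For reference, the factory shown above is generated from annotated input classes along these lines. This is an illustrative configuration fragment only: the @Database attribute names here (name, version) are assumptions and the class bodies are elided; see the previous post for the actual usage.

```java
// Illustrative fragment (attribute names assumed; bodies elided)
@Database(name = "testDb", version = 2)
public class TestDatabaseHelper extends DatabaseHelper {
    // onCreate/onUpgrade customizations, if any
}

@Entity(dbName = "testDb") // dbName needed only when the app has multiple databases
public class SimpleEntity {
    // fields mapped to columns
}
```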

Entity DAO

The annotation processor generates a DAO class for each @Entity. Here’s an example:

public class SimpleEntityDao extends SQLiteDao<SimpleEntity> {

	public DatabaseHelper getDbHelper(Context ctx) {
		return com.turbomanage.storm.TestDbFactory.getDatabaseHelper(ctx);
	}

	@SuppressWarnings("rawtypes")
	public TableHelper getTableHelper() {
		return new com.turbomanage.storm.entity.dao.SimpleEntityTable();
	}

	public SimpleEntityDao(Context ctx) {
		super(ctx);
	}

}

The entity DAO is a thin subclass of the SQLiteDao base class and mainly exists to supply the generic type parameter for the entity type. The generated DAO also uses the template method design pattern to associate the entity with its DbFactory and TableHelper classes via the getDbHelper() and getTableHelper() methods. The super constructor stores the provided Context in the superclass, where it is used to obtain the singleton SQLiteOpenHelper instance whenever it is needed.

SQLiteDao<T>

The DAO base class contains the meat of the runtime. This is where you’ll find the insert, update, delete, and query methods. The value of using a generified DAO is that all the moderately complex code which actually touches the database is centralized in one place vs. generated for each entity. Thanks to syntax highlighting and other IDE support, it’s much easier to write and debug Java code than a generator template containing Java code.

Note that the base DAO obtains the readable or writable database from the singleton DatabaseHelper in each DAO method. It is not necessary to cache the database obtained from SQLiteOpenHelper.getWritableDatabase() because we’re using a singleton instance of SQLiteOpenHelper (the DatabaseHelper superclass), which itself caches the database connection. This approach keeps the DAO stateless, which reduces memory usage and prevents stateful bugs.

TableHelper<T>

The TableHelper class generated for each entity contains only code specific to the entity, including column definitions, Cursor bindings, and all SQL. This keeps the generated code as small as possible. There are a couple of features worth pointing out.

First, column definitions are stored as an enum. Here’s an example from the SimpleEntityTable class generated for SimpleEntity in the unit tests:

public enum Columns implements TableHelper.Column {
	BLOBFIELD,
	BOOLEANFIELD,
	BYTEFIELD,
	CHARFIELD,
	DOUBLEFIELD,
	ENUMFIELD,
	FLOATFIELD,
	_ID,
	...
}

The Columns enum permits typesafe column references throughout your app and also maintains column order for methods that need to iterate over all the columns. Because the enum implements the marker interface TableHelper.Column, you can refer to the actual enum values in the various methods of FilterBuilder or create your own methods that take a column name as an argument. By convention, the column names are just the name of the underlying entity field in all caps. At some point, I may add an @Column annotation for entity fields to allow column renaming.
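The enum-implements-marker-interface trick is plain Java. Here is a standalone sketch of the idea; the Column interface and method names below are simplified stand-ins, not stORM's actual TableHelper.Column API.

```java
// Simplified stand-in for stORM's TableHelper.Column marker interface
interface Column {
}

// Column definitions as an enum: typesafe references plus a stable declaration order
enum SketchColumns implements Column {
	_ID,
	INTFIELD,
	WSTRINGFIELD
}

class ColumnDemo {

	// A method that accepts any column, as FilterBuilder's methods do
	static String toSelection(Column col) {
		// the enum constant's name doubles as the column name by convention
		return col + " = ?";
	}

	public static void main(String[] args) {
		// values() preserves declaration order, useful for iterating all columns
		for (SketchColumns c : SketchColumns.values()) {
			System.out.println(toSelection(c));
		}
	}
}
```

Because any code holding a Column reference can only pass values that actually exist in the enum, misspelled column names become compile errors rather than runtime SQL failures.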

Now let’s look at the binding methods of the TableHelper class. The first, newInstance(Cursor c), creates a new entity instance from a Cursor. It is used to convert one row to its object representation. Here’s an example, again from the generated SimpleEntityTable in the unit tests:

@Override
public SimpleEntity newInstance(Cursor c) {
	SimpleEntity obj = new SimpleEntity();
	obj.setBlobField(c.getBlob(0));
	obj.setBooleanField(c.getInt(1) == 1 ? true : false);
	obj.setByteField((byte) c.getShort(2));
	obj.setCharField((char) c.getInt(3));
	obj.setDoubleField(c.getDouble(4));
	obj.setEnumField(c.isNull(5) ? null : com.turbomanage.storm.entity.SimpleEntity.EnumType.valueOf(c.getString(5)));
	obj.setFloatField(c.getFloat(6));
	obj.setId(c.getLong(7));
	obj.setIntField(c.getInt(8));
	obj.setLongField(c.getLong(9));
	obj.setPrivateField(c.getInt(10));
	obj.setShortField(c.getShort(11));
	obj.setwBooleanField(BooleanConverter.GET.fromSql(getIntOrNull(c, 12)));
	obj.setwByteField(ByteConverter.GET.fromSql(getShortOrNull(c, 13)));
	obj.setwCharacterField(CharConverter.GET.fromSql(getIntOrNull(c, 14)));
	obj.setwDateField(DateConverter.GET.fromSql(getLongOrNull(c, 15)));
	obj.setwDoubleField(DoubleConverter.GET.fromSql(getDoubleOrNull(c, 16)));
	obj.setwFloatField(FloatConverter.GET.fromSql(getFloatOrNull(c, 17)));
	obj.setwIntegerField(IntegerConverter.GET.fromSql(getIntOrNull(c, 18)));
	obj.setwLatitudeField(LatitudeConverter.GET.fromSql(getDoubleOrNull(c, 19)));
	obj.setwLongField(LongConverter.GET.fromSql(getLongOrNull(c, 20)));
	obj.setwShortField(ShortConverter.GET.fromSql(getShortOrNull(c, 21)));
	obj.setwStringField(c.getString(22));
	return obj;
}

Fields of primitive type are mapped directly via the corresponding primitive getter methods on the Cursor, which preserves the efficiency of using primitive types in your entities. Wrapper types like Double call the primitive getter on the Cursor and are then converted to a Double instance via the getDoubleOrNull() method in the TableHelper base class. This introduces object creation overhead; however, that overhead is implied by the entity's use of the wrapper type in the first place: either it happens explicitly in the getDoubleOrNull() method, or it would happen implicitly via auto-boxing if you were to assign the result of Cursor.getDouble() to an entity field of type Double.

Of course, the newInstance() method must create an instance of the entity itself; however, this is inherent in the idea of object-relational mapping. If you are pursuing absolute maximum efficiency using SQLite without any object representations, stORM is not for you.

Admittedly, there are some cases, especially queries, where you are interested in only one or two fields of an object, and populating all fields as above would be wasteful. In these cases, you can simply bypass stORM. It's perfectly safe to obtain the readable or writable database using the generated factory class as shown in the first section of this article.
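To make the boxing point concrete, here is an approximation of what a helper like getDoubleOrNull() does, with a minimal stub standing in for android.database.Cursor. The stub and the exact signature are invented for illustration; consult the TableHelper source for the real thing.

```java
// Minimal stub for the two Cursor calls the helper needs
interface RowStub {
	boolean isNull(int col);
	double getDouble(int col); // primitive getter, like Cursor.getDouble()
}

class NullHelperDemo {

	// Approximation of the TableHelper base class helper: a NULL column maps
	// to a null reference; otherwise the primitive value is boxed exactly once.
	static Double getDoubleOrNull(RowStub row, int col) {
		return row.isNull(col) ? null : (Double) row.getDouble(col);
	}

	public static void main(String[] args) {
		RowStub row = new RowStub() {
			public boolean isNull(int col) { return col == 0; }
			public double getDouble(int col) { return 42.5; }
		};
		System.out.println(getDoubleOrNull(row, 0)); // null
		System.out.println(getDoubleOrNull(row, 1)); // 42.5
	}
}
```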

The corollary method to newInstance() is getEditableValues(T obj), which returns a ContentValues map populated with the object’s fields. The ContentValues object can then be passed to the insert() or update() methods on a writable SQLiteDatabase. Here’s the generated getEditableValues() method in SimpleEntityTable:

@Override
public ContentValues getEditableValues(SimpleEntity obj) {
	ContentValues cv = new ContentValues();
	cv.put("BLOBFIELD", obj.getBlobField());
	cv.put("BOOLEANFIELD", obj.isBooleanField() ? 1 : 0);
	cv.put("BYTEFIELD", (short) obj.getByteField());
	cv.put("CHARFIELD", (int) obj.getCharField());
	cv.put("DOUBLEFIELD", obj.getDoubleField());
	cv.put("ENUMFIELD", obj.getEnumField() == null ? null : obj.getEnumField().name());
	cv.put("FLOATFIELD", obj.getFloatField());
	cv.put("_ID", obj.getId());
	cv.put("INTFIELD", obj.getIntField());
	cv.put("LONGFIELD", obj.getLongField());
	cv.put("PRIVATEFIELD", obj.getPrivateField());
	cv.put("SHORTFIELD", obj.getShortField());
	cv.put("WBOOLEANFIELD", BooleanConverter.GET.toSql(obj.getwBooleanField()));
	cv.put("WBYTEFIELD", ByteConverter.GET.toSql(obj.getwByteField()));
	cv.put("WCHARACTERFIELD", CharConverter.GET.toSql(obj.getwCharacterField()));
	cv.put("WDATEFIELD", DateConverter.GET.toSql(obj.getwDateField()));
	cv.put("WDOUBLEFIELD", DoubleConverter.GET.toSql(obj.getwDoubleField()));
	cv.put("WFLOATFIELD", FloatConverter.GET.toSql(obj.getwFloatField()));
	cv.put("WINTEGERFIELD", IntegerConverter.GET.toSql(obj.getwIntegerField()));
	cv.put("WLATITUDEFIELD", LatitudeConverter.GET.toSql(obj.getwLatitudeField()));
	cv.put("WLONGFIELD", LongConverter.GET.toSql(obj.getwLongField()));
	cv.put("WSHORTFIELD", ShortConverter.GET.toSql(obj.getwShortField()));
	cv.put("WSTRINGFIELD", obj.getwStringField());
	return cv;
}

As in the newInstance() method, primitive types are bound with no conversion overhead by calling the variant of the overloaded put() method which takes the primitive type. In the case of wrapper types like Double, the object returned by the TypeConverter’s toSql() method will be implicitly unwrapped. I suspect that using ContentValues in an insert or update may be slightly less efficient than using a prepared statement because that’s no doubt what’s happening under the covers. If this becomes an issue, it would be easy enough to modify stORM to generate the code and bindings for a prepared statement instead.

FilterBuilder

The FilterBuilder object returned by the DAO’s filter() method is currently an elementary query builder that can AND together a series of equality conditions. The query-by-example methods in the base DAO class use FilterBuilder. I’m not completely happy with this API yet, particularly the overloaded eq() methods for every wrapper type. However, it seems necessary to minimize lookup overhead for the appropriate TypeConverter, as well as to handle special cases like Double and byte[], which require special treatment for equality conditions. Consider this a work in progress.

TypeConverter<J,S>

The TypeConverter interface is at the heart of all the code to convert fields to/from SQL. SQLite has only four native data types. In stORM, these are represented by the TypeConverter.SqlType enum: INTEGER, REAL, BLOB, and TEXT. Closely related to these are the seven primitive types available in the Cursor.getX() and ContentValues.put() methods. These are represented by the TypeConverter.BindType enum: BLOB, DOUBLE, FLOAT, INT, LONG, SHORT, and STRING. The TypeConverter interface and its corresponding @Converter annotation offer an extensible mechanism by which you can convert any Java type, including your own classes, to a SQL representation. Simply extend TypeConverter and annotate it with @Converter. As an example, here’s the built-in BooleanConverter:

package com.turbomanage.storm.types;

import com.turbomanage.storm.api.Converter;
import com.turbomanage.storm.types.TypeConverter.BindType;
import com.turbomanage.storm.types.TypeConverter.SqlType;

@Converter(forTypes = { boolean.class, Boolean.class }, bindType = BindType.INT, sqlType = SqlType.INTEGER)
public class BooleanConverter extends TypeConverter<Boolean,Integer> {

	public static final BooleanConverter GET = new BooleanConverter();

	@Override
	public Integer toSql(Boolean javaValue) {
		if (javaValue == null)
			return null;
		return (javaValue==Boolean.TRUE) ? 1 : 0;
	}

	@Override
	public Boolean fromSql(Integer sqlValue) {
		if (sqlValue == null)
			return null;
		return sqlValue==0 ? Boolean.FALSE : Boolean.TRUE;
	}

	@Override
	public Integer fromString(String strValue) {
		if (strValue == null)
			return null;
		return Integer.valueOf(strValue);
	}

}

A TypeConverter must implement methods to convert to and from the type’s SQL representation, or more precisely, the Java type which is directly bindable via one of the Cursor.getX() methods. In addition, it must be able to convert to/from a String. The String methods are used only by the CSV utilities. All TypeConverter return types and argument types must be object types, not primitives. In the case of queries using FilterBuilder, this may result in one unnecessary object creation per field when primitives are implicitly wrapped; however, the use of object types throughout allows the TypeConverter interface to be parameterized, which, besides its elegance, makes it easier to create custom TypeConverters from scratch.

Note that the @Converter annotation must specify the Java types for which the converter may be used (forTypes) as well as the corresponding SQL representation type and binding type. The latter is used by the annotation processor to generate the name of the corresponding Cursor.getX() method. Fortunately, byte[] is an Object type, so byte arrays can be handled the same way as all the wrapper types. Also fortunately, primitive types do have an associated class even though they are not Objects, so boolean.class is valid in the annotation’s forTypes attribute. In addition to implementing the TypeConverter interface and providing the @Converter annotation, a custom TypeConverter must declare a static field named GET holding a singleton instance of the converter. This is used by the generated TableHelper methods to minimize the number of converter instances overall.

One final note about TypeConverter: the reason that SQL type and bind type are specified as annotation attributes vs. methods in TypeConverter is because the latter would require the annotation processor to obtain an instance of a TypeConverter in order to invoke a method like getBindType(). But the annotation processor can only inspect source code using the TypeMirror API, which doesn’t permit direct reference to Class objects such as would be needed to get a new instance via reflection. Thus, including these attributes in the @Converter annotation allows you to include custom TypeConverters in your application project directly vs. packaging them in a jar as would be required otherwise.
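Putting the custom-converter pieces together, the sketch below follows the same shape as BooleanConverter but stores a java.util.UUID as TEXT. So that it compiles standalone, SketchTypeConverter is a minimal stand-in for stORM's TypeConverter, and the @Converter annotation a real converter needs is shown only as a comment; consult the actual TypeConverter source for the exact signatures.

```java
import java.util.UUID;

// Minimal stand-in for com.turbomanage.storm.types.TypeConverter so this
// sketch compiles standalone; a real converter extends the stORM class.
abstract class SketchTypeConverter<J, S> {
	public abstract S toSql(J javaValue);
	public abstract J fromSql(S sqlValue);
	public abstract J fromString(String strValue);
}

// In a real project, the annotation would be:
// @Converter(forTypes = { UUID.class }, bindType = BindType.STRING, sqlType = SqlType.TEXT)
class UuidConverter extends SketchTypeConverter<UUID, String> {

	// singleton instance, as the generated TableHelper methods expect
	public static final UuidConverter GET = new UuidConverter();

	@Override
	public String toSql(UUID javaValue) {
		return javaValue == null ? null : javaValue.toString();
	}

	@Override
	public UUID fromSql(String sqlValue) {
		return sqlValue == null ? null : UUID.fromString(sqlValue);
	}

	@Override
	public UUID fromString(String strValue) {
		// TEXT columns use the same representation for SQL and CSV
		return fromSql(strValue);
	}
}
```

Note that both directions pass null through untouched, mirroring BooleanConverter above, so NULL columns round-trip to null references without special casing in the generated code.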

Supporting incremental compilation

One of the biggest surprises when writing the annotation processor (see stORM’s MainProcessor) was the realization that incremental compilation (in Eclipse, at least) exposes to the annotation processor only those source files which have been modified in the current save operation. Thus, if you add an @Entity annotation, the annotation processor can inspect only the code for that entity. This is problematic for stORM because it needs to know about all the annotated converters, databases, and entities in order to update the generated DbFactory and TableHelper classes on each save. To preserve state about previously inspected annotations, the processor therefore maintains a state file named stormEnv in the same directory as the generated code (in Eclipse, .apt_generated), reading it before and writing it after each run. If you experience problems with generated code or extraneous artifacts, just clean the project. This wipes the stormEnv file and triggers a complete rebuild that processes all the annotations at once.
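Conceptually, the state file works something like the sketch below, which merges newly discovered class names into a java.util.Properties store between runs. The key names and format here are invented for illustration; the real stormEnv format is an implementation detail of MainProcessor.

```java
import java.io.IOException;
import java.io.StringReader;
import java.io.StringWriter;
import java.util.Properties;

// Conceptual sketch: carry annotation-processing state across incremental
// builds by loading prior state, merging in what this run saw, and saving.
class EnvStoreSketch {

	static Properties load(String serialized) {
		Properties env = new Properties();
		try {
			env.load(new StringReader(serialized));
		} catch (IOException e) {
			throw new RuntimeException(e); // cannot happen with a StringReader
		}
		return env;
	}

	static String save(Properties env) {
		StringWriter out = new StringWriter();
		try {
			env.store(out, null);
		} catch (IOException e) {
			throw new RuntimeException(e);
		}
		return out.toString();
	}

	public static void main(String[] args) {
		// Run 1: the processor sees one @Entity and records it
		Properties run1 = new Properties();
		run1.setProperty("entity.SimpleEntity", "testDb");
		String onDisk = save(run1);

		// Run 2 (incremental): only a new entity was recompiled, but the
		// previously recorded one is still known via the saved state
		Properties run2 = load(onDisk);
		run2.setProperty("entity.NewEntity", "testDb");
		System.out.println(run2.stringPropertyNames().size()); // prints 2
	}
}
```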

Summary

Now that you’re aware of some of stORM’s nuances and implementation details, do give it a whirl and let me know what you think. Comments are welcome here as well as on G+, where I’ve also shared this post. Also note that you can click on the title of this post in WordPress to link directly to it, which enables the Like and Share buttons for G+, Twitter, and LinkedIn. Enjoy!

One Response to “stORM implementation notes”

  1. David Houser said

    I am running into a few problems. I would like to create my own custom types, in particular I need to persist a Location from an Android device. I can’t figure out how to make custom types even after looking at the source code. I also can’t @vagrant a variable so I am stuck.
