TurboManage

David Chandler's Journal of Java Web and Mobile Development

  • David M. Chandler


    Web app developer since 1994 and Google Cloud Platform Instructor now residing in Colorado. Besides tech, I enjoy landscape photography and share my work at ColoradoPhoto.gallery.

Archive for December, 2012

stORM implementation notes

Posted by David Chandler on December 18, 2012

In my previous post, I omitted discussion of how stORM works in the interest of getting the word out about my new DAO tool for Android. In the current post, we’ll take a deeper look at the code generated by the stORM annotation processor.

DbFactory class

Let’s start with the root of all runtime activity, the DbFactory class. This class is generated from your DatabaseHelper class annotated with @Database. As you can see in the TestDbFactory class below, the database name and version from the annotation are hard-coded as static fields. Also, there is an array of TableHelpers associated with this DatabaseFactory, one for each class annotated with @Entity. It is possible for your app to include multiple databases, in which case each @Entity must include the dbName attribute to indicate in which database to store the entity.

public class TestDbFactory implements DatabaseFactory {

	private static final String DB_NAME = "testDb";
	private static final int DB_VERSION = 2;
	private static final TableHelper[] TABLE_HELPERS = new TableHelper[] {
		new com.turbomanage.storm.entity.dao.SimpleEntityTable()
	};
	private static DatabaseHelper mInstance;

	/**
	 * Provides a singleton instance of the DatabaseHelper per application
	 * to prevent threading issues. See
	 * https://github.com/commonsguy/cwac-loaderex#notes-on-threading
	 *
	 * @param ctx Application context
	 * @return {@link SQLiteOpenHelper} instance
	 */
	public static DatabaseHelper getDatabaseHelper(Context ctx) {
		if (mInstance == null) {
			// in case this is called from an AsyncTask or other thread
			synchronized (TestDbFactory.class) {
				if (mInstance == null) {
					mInstance = new com.turbomanage.storm.TestDatabaseHelper(
							ctx.getApplicationContext(),
							new TestDbFactory());
				}
			}
		}
		return mInstance;
	}
	...
}

The purpose of the generated DbFactory class is to provide a singleton instance of your DatabaseHelper class. The DatabaseHelper base class in stORM extends SQLiteOpenHelper, which is used to connect to a SQLite database. A singleton instance of SQLiteOpenHelper is important because it allows SQLiteDatabase to synchronize all operations. In the DbFactory class, getDatabaseHelper() is implemented using the double-checked locking pattern to efficiently prevent a race condition. For more information on the use of singletons vs. Application, see this post.

The DbFactory class allows you to connect to the database from anywhere in your app like this:

TestDbFactory.getDatabaseHelper(ctx).getWritableDatabase();

The Context supplied as an argument to getDatabaseHelper() ultimately gets passed through to SQLiteOpenHelper.

Entity DAO

The annotation processor generates a DAO class for each @Entity. Here’s an example:

public class SimpleEntityDao extends SQLiteDao<SimpleEntity> {

	public DatabaseHelper getDbHelper(Context ctx) {
		return com.turbomanage.storm.TestDbFactory.getDatabaseHelper(ctx);
	}

	@SuppressWarnings("rawtypes")
	public TableHelper getTableHelper() {
		return new com.turbomanage.storm.entity.dao.SimpleEntityTable();
	}

	public SimpleEntityDao(Context ctx) {
		super(ctx);
	}

}

The entity DAO is a thin subclass of the SQLiteDao base class and mainly exists to supply the generic type parameter for the entity type. The generated DAO also uses the template method design pattern to associate the entity with its DbFactory and TableHelper classes via the getDbHelper() and getTableHelper() methods. The super constructor stores the provided Context in the superclass, where it is used to obtain the singleton SQLiteOpenHelper instance whenever needed.

SQLiteDao<T>

The DAO base class contains the meat of the runtime. This is where you’ll find the insert, update, delete, and query methods. The value of using a generified DAO is that all the moderately complex code which actually touches the database is centralized in one place vs. generated for each entity. Thanks to syntax highlighting and other IDE support, it’s much easier to write and debug Java code than a generator template containing Java code.

Note that the base DAO obtains the readable or writable database from the singleton DatabaseHelper in each DAO method. It is not necessary to cache the database obtained from SQLiteOpenHelper.getWritableDatabase() because we’re using a singleton instance of SQLiteOpenHelper (the DatabaseHelper superclass), which itself caches the database connection. This approach keeps the DAO stateless, which reduces memory usage and prevents stateful bugs.
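As a sketch of this approach (not stORM's actual source; the mContext field and getTableName() accessor are assumptions for illustration, while getDbHelper() and getEditableValues() appear in the generated code shown in this post), a stateless base-DAO insert method might look like:

```java
// Hypothetical sketch of a stateless base-DAO method; only getDbHelper()
// and getEditableValues() are confirmed names from the generated code.
public long insert(T obj) {
	// Obtain the writable database from the singleton helper on every call;
	// the helper caches the underlying connection, so this is cheap.
	SQLiteDatabase db = getDbHelper(mContext).getWritableDatabase();
	ContentValues cv = getTableHelper().getEditableValues(obj);
	return db.insert(getTableHelper().getTableName(), null, cv);
}
```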

TableHelper<T>

The TableHelper class generated for each entity contains only code specific to the entity, including column definitions, Cursor bindings, and all SQL. This keeps the generated code as small as possible. There are a couple of features worth pointing out.

First, column definitions are stored as an enum. Here’s an example from the SimpleEntityTable class generated for SimpleEntity in the unit tests:

public enum Columns implements TableHelper.Column {
	BLOBFIELD,
	BOOLEANFIELD,
	BYTEFIELD,
	CHARFIELD,
	DOUBLEFIELD,
	ENUMFIELD,
	FLOATFIELD,
	_ID,
	...
}

The Columns enum permits typesafe column references throughout your app and also maintains column order for methods that need to iterate over all the columns. Because the enum implements the marker interface TableHelper.Column, you can refer to the actual enum values in the various methods of FilterBuilder or create your own methods that take a column name as an argument. By convention, the column names are just the name of the underlying entity field in all caps. At some point, I may add an @Column annotation for entity fields to allow column renaming.
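For illustration, a typesafe column reference in a query might look like the following sketch (INTFIELD is taken from the unit-test entity in this post; the exact call chain is a plausible assumption based on the FilterBuilder API described below):

```java
// Hypothetical query using the generated Columns enum for a typesafe
// equality condition; names are drawn from the SimpleEntity test entity
List<SimpleEntity> matches = new SimpleEntityDao(ctx)
		.filter()
		.eq(SimpleEntityTable.Columns.INTFIELD, 42)
		.list();
```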

Now let’s look at the binding methods of the TableHelper class. The first, newInstance(Cursor c), creates a new entity instance from a Cursor. It is used to convert one row to its object representation. Here’s an example, again from the generated SimpleEntityTable in the unit tests:

@Override
public SimpleEntity newInstance(Cursor c) {
	SimpleEntity obj = new SimpleEntity();
	obj.setBlobField(c.getBlob(0));
	obj.setBooleanField(c.getInt(1) == 1 ? true : false);
	obj.setByteField((byte) c.getShort(2));
	obj.setCharField((char) c.getInt(3));
	obj.setDoubleField(c.getDouble(4));
	obj.setEnumField(c.isNull(5) ? null : com.turbomanage.storm.entity.SimpleEntity.EnumType.valueOf(c.getString(5)));
	obj.setFloatField(c.getFloat(6));
	obj.setId(c.getLong(7));
	obj.setIntField(c.getInt(8));
	obj.setLongField(c.getLong(9));
	obj.setPrivateField(c.getInt(10));
	obj.setShortField(c.getShort(11));
	obj.setwBooleanField(BooleanConverter.GET.fromSql(getIntOrNull(c, 12)));
	obj.setwByteField(ByteConverter.GET.fromSql(getShortOrNull(c, 13)));
	obj.setwCharacterField(CharConverter.GET.fromSql(getIntOrNull(c, 14)));
	obj.setwDateField(DateConverter.GET.fromSql(getLongOrNull(c, 15)));
	obj.setwDoubleField(DoubleConverter.GET.fromSql(getDoubleOrNull(c, 16)));
	obj.setwFloatField(FloatConverter.GET.fromSql(getFloatOrNull(c, 17)));
	obj.setwIntegerField(IntegerConverter.GET.fromSql(getIntOrNull(c, 18)));
	obj.setwLatitudeField(LatitudeConverter.GET.fromSql(getDoubleOrNull(c, 19)));
	obj.setwLongField(LongConverter.GET.fromSql(getLongOrNull(c, 20)));
	obj.setwShortField(ShortConverter.GET.fromSql(getShortOrNull(c, 21)));
	obj.setwStringField(c.getString(22));
	return obj;
}

Fields of primitive type are mapped directly via the corresponding primitive getter methods on the Cursor, which preserves the efficiency of using primitive types in your entities. Wrapper types like Double call the primitive getter on the Cursor but then get converted to a Double instance via the getDoubleOrNull() method in the TableHelper base class. This introduces object creation overhead; however, that overhead is implied by the entity’s use of the wrapper type in the first place. Either it happens explicitly in the getDoubleOrNull() method, or it would happen implicitly via auto-boxing if you were to assign the result of Cursor.getDouble() to an entity field of type Double. Of course, the newInstance() method must create an instance of the entity itself; however, this is inherent in the idea of object-relational mapping. If you are pursuing absolute maximum efficiency using SQLite without any object representations, stORM is not for you. Admittedly, there are some cases, especially queries, where you are only interested in one or two fields of an object and it would be wasteful to populate all fields as above. In these cases, you can simply bypass stORM. It’s perfectly safe to obtain the readable or writable database using the DbFactory class as shown in the first section of this article.
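As a sketch of such a bypass (the table and column names here are illustrative assumptions based on the test entity):

```java
// Reading a single column without instantiating the full entity;
// table and column names are assumptions for illustration
SQLiteDatabase db = TestDbFactory.getDatabaseHelper(ctx).getReadableDatabase();
Cursor c = db.rawQuery(
		"SELECT INTFIELD FROM SimpleEntity WHERE _ID = ?",
		new String[] { String.valueOf(id) });
try {
	if (c.moveToFirst()) {
		int intField = c.getInt(0);
		// use intField...
	}
} finally {
	c.close();
}
```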

The counterpart to newInstance() is getEditableValues(T obj), which returns a ContentValues map populated with the object’s fields. The ContentValues object can then be passed to the insert() or update() methods on a writable SQLiteDatabase. Here’s the generated getEditableValues() method in SimpleEntityTable:

@Override
public ContentValues getEditableValues(SimpleEntity obj) {
	ContentValues cv = new ContentValues();
	cv.put("BLOBFIELD", obj.getBlobField());
	cv.put("BOOLEANFIELD", obj.isBooleanField() ? 1 : 0);
	cv.put("BYTEFIELD", (short) obj.getByteField());
	cv.put("CHARFIELD", (int) obj.getCharField());
	cv.put("DOUBLEFIELD", obj.getDoubleField());
	cv.put("ENUMFIELD", obj.getEnumField() == null ? null : obj.getEnumField().name());
	cv.put("FLOATFIELD", obj.getFloatField());
	cv.put("_ID", obj.getId());
	cv.put("INTFIELD", obj.getIntField());
	cv.put("LONGFIELD", obj.getLongField());
	cv.put("PRIVATEFIELD", obj.getPrivateField());
	cv.put("SHORTFIELD", obj.getShortField());
	cv.put("WBOOLEANFIELD", BooleanConverter.GET.toSql(obj.getwBooleanField()));
	cv.put("WBYTEFIELD", ByteConverter.GET.toSql(obj.getwByteField()));
	cv.put("WCHARACTERFIELD", CharConverter.GET.toSql(obj.getwCharacterField()));
	cv.put("WDATEFIELD", DateConverter.GET.toSql(obj.getwDateField()));
	cv.put("WDOUBLEFIELD", DoubleConverter.GET.toSql(obj.getwDoubleField()));
	cv.put("WFLOATFIELD", FloatConverter.GET.toSql(obj.getwFloatField()));
	cv.put("WINTEGERFIELD", IntegerConverter.GET.toSql(obj.getwIntegerField()));
	cv.put("WLATITUDEFIELD", LatitudeConverter.GET.toSql(obj.getwLatitudeField()));
	cv.put("WLONGFIELD", LongConverter.GET.toSql(obj.getwLongField()));
	cv.put("WSHORTFIELD", ShortConverter.GET.toSql(obj.getwShortField()));
	cv.put("WSTRINGFIELD", obj.getwStringField());
	return cv;
}

As in the newInstance() method, primitive types are bound with no conversion overhead by calling the variant of the overloaded put() method which takes the primitive type. In the case of wrapper types like Double, the object returned by the TypeConverter’s toSql() method will be implicitly unwrapped. I suspect that using ContentValues in an insert or update may be slightly less efficient than using a prepared statement because that’s no doubt what’s happening under the covers. If this becomes an issue, it would be easy enough to modify stORM to generate the code and bindings for a prepared statement instead.
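If insert performance ever became an issue, the prepared-statement alternative mentioned above could plausibly look like this sketch (stORM does not currently generate this; the SQL and bindings are assumptions for illustration):

```java
// Hypothetical prepared-statement insert; not current stORM behavior.
// Column names are drawn from the test entity for illustration only.
SQLiteStatement stmt = db.compileStatement(
		"INSERT INTO SimpleEntity (INTFIELD, WSTRINGFIELD) VALUES (?, ?)");
stmt.bindLong(1, obj.getIntField());
String s = obj.getwStringField();
if (s == null) {
	stmt.bindNull(2);
} else {
	stmt.bindString(2, s);
}
long rowId = stmt.executeInsert();
```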

FilterBuilder

The FilterBuilder object returned by the DAO’s filter() method is currently an elementary query builder that can AND together a series of equality conditions. The query-by-example methods in the base DAO class use FilterBuilder. I’m not completely happy with this API yet, particularly the overloaded eq() methods for every wrapper type. However, it seems necessary to minimize lookup overhead for the appropriate TypeConverter, as well as to handle special cases like Double and byte[], which require special treatment for equality conditions. Consider this a work in progress.

TypeConverter<J,S>

The TypeConverter interface is at the heart of all the code to convert fields to/from SQL. SQLite has only four native data types. In stORM, these are represented by the TypeConverter.SqlType enum: INTEGER, REAL, BLOB, and TEXT. Closely related to these are the seven primitive types available in the Cursor.getX() and ContentValues.put() methods. These are represented by the TypeConverter.BindType enum: BLOB, DOUBLE, FLOAT, INT, LONG, SHORT, and STRING. The TypeConverter interface and its corresponding @Converter annotation offer an extensible mechanism by which you can convert any Java type, including your own classes, to a SQL representation. Simply extend TypeConverter and annotate it with @Converter. As an example, here’s the built-in BooleanConverter:

package com.turbomanage.storm.types;

import com.turbomanage.storm.api.Converter;
import com.turbomanage.storm.types.TypeConverter.BindType;
import com.turbomanage.storm.types.TypeConverter.SqlType;

@Converter(forTypes = { boolean.class, Boolean.class }, bindType = BindType.INT, sqlType = SqlType.INTEGER)
public class BooleanConverter extends TypeConverter<Boolean,Integer> {

	public static final BooleanConverter GET = new BooleanConverter();

	@Override
	public Integer toSql(Boolean javaValue) {
		if (javaValue == null)
			return null;
		return (javaValue==Boolean.TRUE) ? 1 : 0;
	}

	@Override
	public Boolean fromSql(Integer sqlValue) {
		if (sqlValue == null)
			return null;
		return sqlValue==0 ? Boolean.FALSE : Boolean.TRUE;
	}

	@Override
	public Integer fromString(String strValue) {
		if (strValue == null)
			return null;
		return Integer.valueOf(strValue);
	}

}

A TypeConverter must implement methods to convert to and from the type’s SQL representation, or more precisely, the Java type which is directly bindable via one of the Cursor.getX() methods. In addition, it must be able to convert to/from a String. The String methods are used only by the CSV utilities. All TypeConverter return types and argument types must be object types, not primitives. In the case of queries using FilterBuilder, this may result in one unnecessary object creation per field when primitives are implicitly wrapped; however, the use of object types throughout allows the TypeConverter interface to be parameterized, which, besides its elegance, makes it easier to create custom TypeConverters from scratch.

Note that the @Converter annotation must specify the Java types for which the converter may be used (forTypes) as well as the corresponding SQL representation type and binding type. The latter is used by the annotation processor to generate the name of the corresponding Cursor.getX() method. Fortunately, byte[] is an Object type so byte arrays can be handled the same way as all the wrapper types. Also fortunately, primitive types do have an associated class even though they are not Objects, so boolean.class is valid in the annotation’s forTypes attribute. In addition to implementing the TypeConverter interface and providing the @Converter annotation, a custom TypeConverter must declare a static field named GET which returns a singleton instance of the converter. This is used by the generated TableHelper methods to minimize the number of converter instances overall.
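Putting those requirements together, a custom converter might look like the following sketch, which persists java.util.UUID as TEXT. This is a hypothetical example, not a converter shipped with stORM; it follows the shape of the built-in BooleanConverter above.

```java
package com.example.converters; // hypothetical package

import java.util.UUID;

import com.turbomanage.storm.api.Converter;
import com.turbomanage.storm.types.TypeConverter;
import com.turbomanage.storm.types.TypeConverter.BindType;
import com.turbomanage.storm.types.TypeConverter.SqlType;

@Converter(forTypes = { UUID.class }, bindType = BindType.STRING, sqlType = SqlType.TEXT)
public class UuidConverter extends TypeConverter<UUID, String> {

	// Singleton instance required by the generated TableHelper methods
	public static final UuidConverter GET = new UuidConverter();

	@Override
	public String toSql(UUID javaValue) {
		return (javaValue == null) ? null : javaValue.toString();
	}

	@Override
	public UUID fromSql(String sqlValue) {
		return (sqlValue == null) ? null : UUID.fromString(sqlValue);
	}

	@Override
	public String fromString(String strValue) {
		// The String methods are used only by the CSV utilities
		return strValue;
	}
}
```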

One final note about TypeConverter: the SQL type and bind type are specified as annotation attributes rather than as methods on TypeConverter because the latter would require the annotation processor to obtain an instance of a TypeConverter in order to invoke a method like getBindType(). But the annotation processor can only inspect source code using the TypeMirror API, which doesn’t permit direct reference to Class objects such as would be needed to get a new instance via reflection. Including these attributes in the @Converter annotation thus allows you to include custom TypeConverters in your application project directly, rather than packaging them in a jar as would otherwise be required.

Supporting incremental compilation

One of the biggest surprises when writing the annotation processor (see stORM’s MainProcessor) was the realization that incremental compilation (in Eclipse, at least) exposes to the annotation processor only those source files which have been modified in the current save operation. Thus, if you add an @Entity annotation, the annotation processor can inspect only the code for that entity. This is problematic for stORM because it needs to know about all the annotated converters, databases, and entities in order to update the generated DbFactory and TableHelper classes with each save. To preserve state about previously-inspected annotations, the processor maintains a state file called stormEnv in the same directory as the generated code (in Eclipse, .apt_generated). The annotation processor reads it before and writes it after each run. If you experience problems with generated code or extraneous artifacts, just clean the project. This wipes the stormEnv file and triggers a complete rebuild which processes all the annotations at once.

Summary

Now that you’re aware of some of stORM’s nuances and implementation details, do give it a whirl and let me know what you think. Comments are welcome here as well as on G+, where I’ve also shared this post. Also note that you can click on the title of this post in WordPress to link directly to it, which enables the Like and Share buttons for G+, Twitter, and LinkedIn. Enjoy!

Posted in Android | 3 Comments »

stORM: a lightweight DAO generator for Android SQLite

Posted by David Chandler on December 11, 2012

In my previous blog post on writing a Java annotation processor, I mentioned that I’d been working on a lightweight ORM for Android SQLite. I’m finally ready to offer a preview of project stORM with this caveat: at this stage, it’s more of a DAO (data access object) code generator than a full-fledged ORM. The generated DAO classes offer a simple interface to save and retrieve objects to SQLite without having to write any SQL code; however, stORM doesn’t attempt to offer features of full-fledged ORMs such as lazy loading of an object graph.

Goals

stORM is intended as an easy way to create a basic object store with SQLite. It lets you insert, update, delete, and query objects without having to write any SQL. It can persist fields of any primitive or wrapper type, including enums. In addition, for more complex types, you can provide your own custom TypeConverter with @Converter. stORM does not yet support modeling of relations. Support for foreign keys and indexes is planned.

stORM is implemented as a Java annotation processor so it can be used with or without an IDE. To use it, you simply annotate POJO (plain old Java object) classes that you want to persist in a SQLite database and stORM automatically generates a SQLiteOpenHelper class and DAO classes for you.

Example

Here’s an example of stORM usage. The Person class is a POJO with a few String and primitive types (firstName, lastName, weight, and height). Note that it includes an id field of type long. An ID field of type long is required along with corresponding accessor methods. To name the ID field differently, place the @Id annotation on any field of type long.

package com.turbomanage.demo.entity;

import com.turbomanage.storm.api.Entity;

@Entity
public class Person {

	private long id;
	private String firstName, lastName;
	private int weight;
	private double height;

	// accessor methods omitted
}

The Person class is annotated with @Entity, which enables the annotation processor to generate a corresponding DAO class. To insert a new Person in the database, we simply use the generated DAO class. The DAO package name is derived by appending “.dao” to your entity package, and the DAO classname is the entity classname + “Dao”.

import com.turbomanage.demo.entity.dao.PersonDao;
...
PersonDao dao = new PersonDao(getContext());
Person newPerson = new Person();
newPerson.setFirstName("David");
newPerson.setLastName("Chandler");
// insert
long newPersonId = dao.insert(newPerson);
// query by example
Person examplePerson = new Person();
examplePerson.setFirstName("David");
List<Person> allDavids = dao.listByExample(examplePerson);

The insert() method returns the id of the newly inserted row. In addition, the id field of the Person object is populated with this value.

The DAO’s listByExample() method is the simplest way to do a query. It lets you retrieve all rows matching the fields of an example object with non-default values. There is also a filter() method which lets you specify conditions directly.

Setup

To get started with stORM:

  1. Download the storm-api and storm-apt jars from storm-gen.googlecode.com.
  2. Add storm-api.jar to your project’s libs/ folder. This contains the stORM annotations and runtime.
  3. Add storm-apt.jar to your project’s annotation factory class path. In Eclipse, you can find this under project properties | Java Compiler | Annotation Processing | Factory Path. The apt jar contains the annotation processor which inspects your project’s source code and generates the required database classes.
[Screenshot: Eclipse properties dialog. Add storm-apt.jar to your project’s annotation factory classpath in Eclipse.]

Usage

Once you’ve added the jars to your project, it’s easy to use stORM. You just need a DatabaseHelper class annotated with @Database and one or more POJO classes annotated with @Entity.

  1. Create a new class that extends DatabaseHelper.
  2. Add @Database and supply a database name and version.
  3. Override getUpgradeStrategy() to choose one of the available strategies (DROP_CREATE for new projects).
  4. Create a POJO class you want to persist to the database.
  5. Make sure it has a field of type long named “id” or annotated with @Id.
  6. Add the @Entity annotation to the class.

Following these steps, you’ll see 3 generated classes under .apt_generated (you might need to unfilter hidden resources in the Eclipse Project Explorer to see it):

  • a DbFactory class. This provides a singleton instance of your DatabaseHelper class (which in turn extends SQLiteOpenHelper). Having only one instance of SQLiteOpenHelper ensures that SQLite’s own threadsafe mechanisms work as intended.
  • a DAO class for each entity. This extends SQLiteDao and mainly supplies the necessary type parameter to the base class.
  • a Table class for each entity. This contains the generated code to convert an entity instance to and from a SQL row.

You can use the DAO simply by creating a new instance. You must pass it the application context as this is required by the underlying SQLiteOpenHelper.

PersonDao dao = new PersonDao(getContext());

The DAO constructor obtains the singleton instance of your DatabaseHelper class from the generated factory class. Because of this, it’s safe to create new DAO instances anywhere you need them. You can now invoke any of the public methods on the DAO, such as insert(), update(), and listAll(). There are also listByExample() and getByExample() methods that build a query from an example object.

You can also use the FilterBuilder API (still under construction) to run a query like this:

List<Person> allDavids = dao.filter().eq(Columns.FIRSTNAME, "David").list();

Version upgrades

One of the more challenging aspects of using SQLite can be upgrading your app’s database schema with new versions. stORM makes this easier by providing several upgrade strategies which you can choose in your DatabaseHelper class:

  • DROP_CREATE will drop all tables and create the database again using the new schema implied by your new entity definitions. This destroys all data in the database, so it should only be used in testing, not after your app is in production.
  • BACKUP_RESTORE will first back up the database to CSV files, then drop and recreate the database, and finally restore all the data from the backup CSV files. Any new fields (columns) in your entities will receive default values, and dropped columns will simply disappear. It is not yet possible to rename a field or change the field type.
  • UPGRADE lets you implement your own strategy (probably an ALTER TABLE statement) by overriding the onUpgrade() method in the DatabaseHelper class.
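With the UPGRADE strategy, the override might look like this sketch. The method signature is assumed to mirror SQLiteOpenHelper’s onUpgrade(), and the table and column names are purely illustrative:

```java
// Hypothetical custom upgrade in your DatabaseHelper subclass;
// signature and SQL are assumptions for illustration
@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
	if (oldVersion < 2) {
		// Add a column introduced in schema version 2
		db.execSQL("ALTER TABLE Person ADD COLUMN NICKNAME TEXT");
	}
}
```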

Dump to CSV

Finally, stORM can automatically back up your database to CSV files and restore it from them. This is used by the BACKUP_RESTORE UpgradeStrategy. You can also manually dump to CSV or restore from CSV by calling methods directly on your DatabaseHelper class. Example:

DemoDbFactory.getDatabaseHelper(getContext()).backupAllTablesToCsv();
DemoDbFactory.getDatabaseHelper(getContext()).restoreAllTablesFromCsv();

You can browse the CSV files using adb shell under /data/data/your_app_package/files. Be aware that any byte[] fields are Base64 encoded, and fields of type double are encoded using their exact hex representation. See the code in BlobConverter.fromString() and DoubleConverter.fromString() if you want to read and parse these values independently of stORM.

Coming soon: ContentProvider

Another pain point for many developers is creating a ContentProvider. Stay tuned for a way to generate ContentProvider contract stubs and implementations also.

Source code

storm-gen.googlecode.com

The issue tracker is open…

Enjoy!

Posted in Android | 37 Comments »

 