TurboManage

David Chandler's Journal of Java Web and Mobile Development

  • David M. Chandler


    Web app developer since 1994 and Google Cloud Platform Instructor now residing in Colorado. Besides tech, I enjoy landscape photography and share my work at ColoradoPhoto.gallery.


Archive for the ‘Android’ Category

And the best digital photo frame is…

Posted by David Chandler on September 20, 2021

…your Smart TV. I recently bought a digital photo frame only to discover that the app (Frameo) only allows you to send 10 photos at a time. Yuck, returned it.

Then I bought a Google Nest Hub Max, which is a really sweet device. It has great sound including bass (which you can tweak via Settings in the Google Home app), home control on screen as well as via Google Assistant (voice), a built-in camera which can be used as a Nest security cam or for video calling with Google or Zoom, the ability to play various music services, YouTube videos or Netflix, and even cast to a Roku or Chromecast (Android TV). But I mainly bought it as a digital photo frame. Whenever you’re not interacting with it, the Max cycles through your selected Google Photos album. You can create and choose a “live album” like Friends and Family, which automatically shows new photos containing the faces you’ve selected. And the photo show goes on while playing music in the background. My wife can now finally see all the photos of our kids we’ve been taking for the last 20 years… AND turn off the lights in Google Home without having me around.

I definitely plan to keep the Max. It’s a beautiful device with a lot of useful features. But after I set it up, I realized that with respect to photos, I could have done the same thing with the Sony Android TV already in my living room, which for the last year has been displaying beautiful, curated nature photos from around the world (the default setting). I forgot that once you add the TV (or Chromecast device) to the Google Home app, you can select an album to display in “ambient mode”, including a live album. Now that I think about it, I’m just guessing that Apple TV and Amazon Fire TV Stick devices have the same capability, which might explain why the dedicated photo frame apps are so limited in functionality. The market has moved on, and I’m just now getting the message 🙂


Posted in Android, PC Tech, Photography | Leave a Comment »

Debug an Android annotation processor with gradle and IntelliJ (or Eclipse)

Posted by David Chandler on June 9, 2014

Alex Gherschon and I have recently added maven support to storm-gen (note: I plan to push to Maven Central this week). You can now build and install storm-gen to your local maven repository with

git clone https://github.com/turbomanage/storm-gen.git
cd storm-gen/storm-apt
mvn clean install

Since gradle can pull in maven artifacts, you can then include storm-gen in your gradle build file like this:

apply plugin: 'android'
apply plugin: 'android-apt'


android {
    compileSdkVersion 19
    buildToolsVersion "19.0.3"

    defaultConfig {
        minSdkVersion 10
        targetSdkVersion 19
        versionCode 1
        versionName "1.0"
    }
    buildTypes {
        release {
            runProguard false
            proguardFiles getDefaultProguardFile('proguard-android.txt'), 'proguard-rules.txt'
        }
    }
}

dependencies {
    compile 'com.android.support:appcompat-v7:+'
    compile 'log4j:log4j:1.2.17'
    compile 'javax.persistence:persistence-api:1.0'
    compile 'com.turbomanage.storm:storm-api:0.98'
    apt 'com.turbomanage.storm:storm-impl:0.98'
}

Now here’s the fun part. Previously, in order to debug and set breakpoints in the annotation processor itself, I used Eclipse RCP and ran it as an Eclipse plugin. Fortunately, there’s now an easier way using Java remote debugging. There are several variations on this technique (configuring an annotation processor directly in IntelliJ or debugging a gradle script). Unfortunately, Android Studio rejected my adding the Xdebug JVM args directly to the gradle script. However, I found this workaround. It is not fully integrated with AS, but does allow me to debug the annotation processor on demand by running the gradle script.

Add these lines to your ~/.gradle/gradle.properties:

org.gradle.daemon=true
org.gradle.jvmargs=-agentlib:jdwp=transport=dt_socket,server=y,suspend=n,address=5005

Then do something that causes the gradle daemon to start.

gradle --daemon

Now you can go into IntelliJ or Android Studio and attach a remote debugger. I use IntelliJ because AS won’t import the storm-gen maven projects and the whole point of this exercise is to set breakpoints in the storm-gen annotation processor. The screenshot shows configuring a remote debugger in IntelliJ. Just accept the defaults and make sure the port number matches your gradle.properties. It should also work in Eclipse.


Configure a remote debugger in IntelliJ

The remote debugger should attach to the running gradle daemon. You can then tickle the annotation processor by running a build. For example, this runs the storm-gen tests on an already-running emulator or connected device:

cd storm-test
gradle clean connectedCheck

When the annotation processor runs, your breakpoint in IntelliJ (or Eclipse) should fire.

One caution with this technique is that Android Studio will also use the gradle daemon because it’s now configured in your gradle.properties. But if you launched the daemon from the command line, AS will try to launch it again and you’ll see an error message about the port already being in use. In that case, simply kill the running gradle process and AS should be happy.
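If you’d rather not hunt down the process manually, Gradle can also stop its own daemons from the command line:

gradle --stop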

Posted in Android, IntelliJ | 1 Comment »

Android + Cloud with Mobile Backend Starter

Posted by David Chandler on May 21, 2013

The Google Cloud Solutions team has made it easier than ever to create a cloud backend for your Android application. Brad Abrams and I presented it last week at Google I/O, and Brad has published a gigantic step-by-step blog post with copious screenshots on how we built our sample app, Geek Serendipity. Here’s all the related content in one handy list:

Posted in Android, AppEngine | 8 Comments »

Android demo tips: behind the scenes at Google I/O

Posted by David Chandler on May 20, 2013

Last week at Google I/O, Brad Abrams and I decided to go for the gold and give a talk on building a cloud backend for your Android app, complete with live coding on stage. For a more interesting example, we chose to build a sample app which relies on location services and Google Cloud Messaging. It all seemed like a good idea at the time. For the benefit of other Android presenters, I wanted to give a behind-the-scenes view of how we overcame some significant challenges to pull off the demos. Also, I will explain the anomalies we saw.

Device display. The easiest way to show an app is using an emulator, but Maps V2 API is part of Google Play Services, which doesn’t currently run on the Android emulator, so we needed physical devices. Droid@Screen works, but it updates slowly and requires the USB cable, which we needed for hardwired Ethernet. Some devices support HDMI out (Galaxy Nexus, N10) which can be used with a BlackMagic or similar HDMI capture box to show the device display on your Mac screen. Note that this requires a Mac with two Thunderbolt ports (hello Retina! BlackMagic in, Mac display adapter out). We could not use this because the phone’s HDMI connector is shared with the USB connector, which we needed for hardwired Ethernet. Fortunately, we had a Wolfvision projector camera available and a 4-port video switcher to show our Macs or the camera. For best results with a projector, set the phone display brightness to max.

The network. Gather 5000+ geeks in a massive concrete structure, give them all a new wifi device, and then try to get a working Internet connection… Despite Google’s stellar efforts, getting wireless Internet at Moscone Center remains very challenging. 3G / 4G / LTE is inhibited by the concrete and steel, and even 5GHz wifi suffered from RF interference or sheer demand. Thus, we knew in advance that we would need hardwired Internet. This was no problem for our laptops, but we also needed 2 phones for the demos to show continuous queries in action. Fortunately, Galaxy Nexus phones support wired connections using a USB OTG adapter paired with a USB to Ethernet adapter, and it just worked. So far so good.

Deferred launch. Because we were coding live in Eclipse, we deployed the code to the phones in real time via the USB cable. When you run an Android app in Eclipse, the default action is to install and run the app. But this created a problem for the simple version of our demo app, which needed an Internet connection on launch to send a location update immediately. We had to disconnect the USB cable, walk the phone over to the camera, plug in the hardwired Ethernet, and only then launch the app via the home screen. Fortunately, Eclipse has a way to support this. In the project’s run configuration on the Android tab, simply select “Do nothing” for the launch action. This installs the app, but doesn’t launch it.

Live coding. Speaking of live coding, we have all seen those demos with lots of tedious cut and paste operations from a text file to the IDE. As a presenter, I want something visible in the IDE to prompt me for the fully manual steps as well as an easy way to paste in blocks of code. Eclipse snippets are just the thing. You can show them off to the side of your main editor window, then double-click on a snippet to paste it in. I used snippet titles without code to prompt me for the manual steps. Another challenge of live coding is that you might make a mistake at an early stage of the demo which ruins a later stage. In order to facilitate recovery, we created snapshots of working demo code at six different stages during our presentation. It would have been nice to preinstall these app versions on the devices; however, they all use the same package name and the Android installer doesn’t allow more than one at the same time. Hence the need for adb install and the deferred launch technique. Also we had to uninstall the prior version of the app on the device before each coding stage. Furthermore, Eclipse doesn’t let you create multiple projects with the same name. There is a workaround, but it kind of spoils the magic of our talk and I’ve already spilled enough secrets… let’s just say Eclipse workspaces and “Copy project” are a beautiful thing.

Location services. This is the one that snagged us in the first demo. It turns out that location services are not in fact magic, but require GPS (in Moscone?!) or wifi, which we turned off just before speaking to eliminate the possibility of conflict with the hardwired Ethernet. Fortunately, we were able to get a good enough wifi signal in a later demo to show location. On desktop devices, geolocation works with a hard-wired connection, so I’m not yet sure whether this is a bug or a feature on mobile… it is admittedly an uncommon use case.

The unanticipated. After recovering from the first demo glitch by turning on wifi, I demonstrated that location services worked, but it never sent the location to the server. It turns out the code was working as intended–I had already enabled authentication for my next segment, but the app from the previous segment which I revisited did not have auth turned on. Finally, the last demo showed two markers when it should have shown only one. The second marker turned out to be a fellow Googler in the audience who had helped dogfood an early version of the demo app. So once again, the code worked as intended, just not as I intended at the moment. But… this is the stuff live demos are made of. I still prefer live demos with a few glitches to an hour full of slides 🙂

So now you know what Brad and I were busily doing while the other was talking and why talks of this complexity are pretty much impossible without two people and a lot of presentation gear. It was a teeny bit intimidating knowing that our session was part of the Google I/O live stream with thousands of viewers, but in the end, it came off (almost) without glitches and our gracious audience was wowed by the Mobile Backend Starter project. Mission accomplished!

See my next post for a complete list of resources on Mobile Backend Starter.

Posted in Android, Eclipse, Headsmack | 10 Comments »

ProgressDialog considered harmful

Posted by David Chandler on March 26, 2013

Roman Nurik recently wrote a G+ post favoring the use of inline ProgressBar over the pop-up ProgressDialog. Here are some additional reasons to think twice before using ProgressDialog.

To be sure, ProgressDialog is very convenient. It has a nice spinner and does its job with minimal code. You can create one as simply as

ProgressDialog pd = ProgressDialog.show(this, "Please wait", "working...");
...
pd.cancel();

The first problem is that by default, there is no way for the user to dismiss a ProgressDialog. A common scenario is to show the ProgressDialog before making a network request. But what happens if the user gives up before the request times out? The back button doesn’t work. The home button works as usual, but if you return to the app, the ProgressDialog is still there. To get rid of it, you have to force close the app.

This is easily remedied by setting an additional property:

ProgressDialog pd = ProgressDialog.show(this, "Please wait", "working...");
pd.setCancelable(true);
...
pd.cancel();

However, there is another problem which leads to a lot of confusion for Android or Java newbies. Consider this code which I have inadvertently written myself:

ProgressDialog pd = new ProgressDialog(this);
pd.setCancelable(true);
pd.show(this, "Please wait", "working...");

Like the first example, this will show a ProgressDialog which cannot be dismissed with the back button. Why? Because the show() method used here is a static method which returns a new ProgressDialog. The instance named pd is never shown. The compiler will show a warning that a static method is being invoked in a non-static way so you might get a clue that something is wrong; still, this is a really confusing API with no documentation. If you want to use the ProgressDialog constructor directly (as you must in order to subclass it), you must do this instead:

ProgressDialog pd = new ProgressDialog(this);
pd.setTitle("Please wait");
pd.setMessage("working...");
pd.setCancelable(true);
pd.show();

Note that the no-arg show() method used here is an instance method inherited from the Dialog class. The resulting ProgressDialog may indeed be canceled.

StackOverflow indicates that ProgressDialog has been a headache for more than a few developers.

So, like Roman said, use ProgressBar instead. Or if you don’t, make sure you’re using ProgressDialog in such a way that it can in fact be dismissed.
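For completeness, here’s a minimal sketch of the inline alternative, assuming a ProgressBar with id progress_bar is already declared in your layout XML:

// Show the spinner before kicking off the background work
ProgressBar progressBar = (ProgressBar) findViewById(R.id.progress_bar);
progressBar.setVisibility(View.VISIBLE);
...
// Hide it when the work completes (or fails)
progressBar.setVisibility(View.GONE);

Because the ProgressBar lives in your own view hierarchy, the back button and the rest of the UI keep working while it spins.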

Posted in Android, Headsmack | Leave a Comment »

Optimizing your mobile Web app

Posted by David Chandler on February 28, 2013

I frequently meet with mobile app developers using WebView who are looking for guidance on performance optimization. Besides going native, which typically provides the very best user experience, here is an assortment of tips and tools you can use to build better mobile Web pages. These are very rough notes from recent meetings with third party developers and folks on the Chrome and Android teams; nevertheless, I hope there are some helpful nuggets here.

Because most Web developer tooling is for mobile browsers, not WebView per se, the best approach is to optimize your Web site running in Android Browser or Chrome for Android. Most improvements you make should carry over to WebView, as it’s also WebKit-based.

Debugging

  • Chrome Remote Debugging with Chrome for Android
  • weinre (WEb INspector REmote) targeting the Android Browser. This 3rd-party tool works with WebKit-based browsers and has an elements panel and console similar to Chrome Developer Tools.

Profiling

Chrome Developer Tools has several features which can be used to measure performance.

  • Inspect Element -> Timelines -> Frames -> Record to see how long it takes to construct and render a frame. To avoid dropping frames, keep your render time under 30ms.
  • Hit h on any DOM element to hide it so it doesn’t contribute to layout
  • Uncheck any CSS styles to see which ones are expensive (like border-radius + box-shadow)
  • Use timeline to see where slowness is occurring; i.e., don’t jump straight into JS performance as it might be paint time, layout, etc.
  • Inspect Element -> Settings (bottom right corner)
    • Can show frame rates and memory usage in real time. Good for seeing how much certain effects cost in rendering time and how the costs of multiple effects multiply.
    • Show FPS (frames per second) on pages with animation / loops
    • Show paint rectangles to see if you are doing large repaints when not necessary
    • Continuous page refresh setting available to help profile (available in Chrome dev channel)
  • http://dev.chromium.org/developers/how-tos/trace-event-profiling-tool
  • Web Tracing Framework
  • PageSpeed Insights extension
  • http://jankfree.com

Performance tips

Android

  • Chrome for Android Tracing explains the mobile tracing tool and how to understand the tracing results. See how much time is spent in V8 (JS processing), layout, etc.
    • Avoid unnecessary image resizing. Serving multiple devices is a challenge.
      • Image decode, resize ~50 ms each on Galaxy Nexus
    • Chrome for Android & Android Browser with fixed viewport doesn’t need fastclick
    • Swiping between pages: ViewPager + multiple WebViews faster than CSS
  • Hardware accelerated transitions
    • Many devs report issues with views flickering when HW acceleration is turned on. The workaround is to have the ‘next’ view peek in 1px so that it renders earlier and doesn’t flash
    • Many devs report inconsistent glitches with acceleration turned on (especially when using JS scrolling vs. native)
  • ViewPager
    • Issues with bezel swipe… HW accelerated swipe doesn’t complete (bounces) because first WebView has to render before the next View will handle the swipe event (because the JS is busy doing something else). Only an issue when swiping from the edge.
  • WebView JS interface
    • Single thread used for all JS can be a bottleneck; e.g., have three WebViews in a ViewPager, each trying to do DOM manipulation. This can cause slowdowns.
  • WebView performance talk from I/O ‘12
  • Best Practices for Web Apps from developer.android.com (see especially links at end)
  • Hybrid approach popular: download content outside the WebView and inject it in, mainly for offline support (see the sketch after this list)

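As a rough Java sketch of the hybrid approach in the last bullet (loadCachedContent() is a hypothetical helper standing in for your own networking/cache code):

// Inject locally cached HTML into the WebView instead of loading it over the network
WebView webView = (WebView) findViewById(R.id.web_view);
String html = loadCachedContent(); // hypothetical helper returning cached HTML
webView.loadDataWithBaseURL("file:///android_asset/", html, "text/html", "UTF-8", null);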
If you know of any additional resources that have been helpful to you, please feel free to add them in the comments.

Posted in Android | 1 Comment »

Enable multi-user support in the Android emulator

Posted by David Chandler on February 20, 2013

Android 4.2 (API 17) offers multi-user support for tablet sharing. To add a user on a physical tablet running 4.2 or later, go to Settings | Users. You can also do this in an emulator, but you first have to set an emulator property as follows:

  1. Create a new AVD based on a tablet (say, the Nexus 7 device definition in the latest ADT) with API 17.
  2. Start the emulator.
  3. Run these adb commands:

> adb shell setprop fw.max_users 4
> adb shell stop
> adb shell start

You can now add a user in the emulator. To see the new user bubble on the lock screen, press F7. You may have to press it twice for the slide lock to become active.

Posted in Android | Tagged: | 1 Comment »

stORM implementation notes

Posted by David Chandler on December 18, 2012

In my previous post, I omitted discussion of how stORM works in the interest of getting the word out about my new DAO tool for Android. In the current post, we’ll take a deeper look at the code generated by the stORM annotation processor.

DbFactory class

Let’s start with the root of all runtime activity, the DbFactory class. This class is generated from your DatabaseHelper class annotated with @Database. As you can see in the TestDbFactory class below, the database name and version from the annotation are hard-coded as static fields. Also, there is an array of TableHelpers associated with this DatabaseFactory, one for each class annotated with @Entity. It is possible for your app to include multiple databases, in which case each @Entity must include the dbName attribute to indicate in which database to store the entity.

public class TestDbFactory implements DatabaseFactory {

	private static final String DB_NAME = "testDb";
	private static final int DB_VERSION = 2;
	private static final TableHelper[] TABLE_HELPERS = new TableHelper[] {
		new com.turbomanage.storm.entity.dao.SimpleEntityTable()
	};
	private static DatabaseHelper mInstance;

	/**
	 * Provides a singleton instance of the DatabaseHelper per application
	 * to prevent threading issues. See
	 * https://github.com/commonsguy/cwac-loaderex#notes-on-threading
	 *
	 * @param ctx Application context
	 * @return {@link SQLiteOpenHelper} instance
	 */
	public static DatabaseHelper getDatabaseHelper(Context ctx) {
		if (mInstance==null) {
			// in case this is called from an AsyncTask or other thread
		    synchronized(TestDbFactory.class) {
		    		if (mInstance == null)
					mInstance = new com.turbomanage.storm.TestDatabaseHelper(
									ctx.getApplicationContext(),
									new TestDbFactory());
			}
		}
		return mInstance;
	}
	...
}

The purpose of the generated DbFactory class is to provide a singleton instance of your DatabaseHelper class. The DatabaseHelper base class in stORM extends SQLiteOpenHelper, which is used to connect to a SQLite database. A singleton instance of SQLiteOpenHelper is important because it allows SQLiteDatabase to synchronize all operations. In the DbFactory class, getDatabaseHelper() is implemented using the double-checked locking pattern to efficiently prevent a race condition. For more information on the use of singletons vs. Application, see this post.

The DbFactory class allows you to connect to the database from anywhere in your app like this:

TestDbFactory.getDatabaseHelper(ctx).getWritableDatabase();

The Context supplied as an argument to getDatabaseHelper() ultimately gets passed through to SQLiteOpenHelper.

Entity DAO

The annotation processor generates a DAO class for each @Entity. Here’s an example:

public class SimpleEntityDao extends SQLiteDao<SimpleEntity>{

	public DatabaseHelper getDbHelper(Context ctx) {
		return com.turbomanage.storm.TestDbFactory.getDatabaseHelper(ctx);
	}

	@SuppressWarnings("rawtypes")
	public TableHelper getTableHelper() {
		return new com.turbomanage.storm.entity.dao.SimpleEntityTable();
	}

	public SimpleEntityDao(Context ctx) {
		super(ctx);
	}

}

The entity DAO is a thin subclass of the SqliteDao base class and mainly exists to supply the generic type parameter for the entity type. The generated DAO also uses the template method design pattern to associate the entity with its DbFactory and TableHelper classes via the getDbHelper() and getTableHelper() methods. The super constructor stores the provided Context in the superclass, where it is used to obtain the singleton SQLiteOpenHelper instance whenever needed.

SQLiteDao<T>

The DAO base class contains the meat of the runtime. This is where you’ll find the insert, update, delete, and query methods. The value of using a generified DAO is that all the moderately complex code which actually touches the database is centralized in one place vs. generated for each entity. Thanks to syntax highlighting and other IDE support, it’s much easier to write and debug Java code than a generator template containing Java code.

Note that the base DAO obtains the readable or writable database from the singleton DatabaseHelper in each DAO method. It is not necessary to cache the database obtained from SQLiteOpenHelper.getWritableDatabase() because we’re using a singleton instance of SQLiteOpenHelper (the DatabaseHelper superclass), which itself caches the database connection. This approach keeps the DAO stateless, which reduces memory usage and prevents stateful bugs.

TableHelper<T>

The TableHelper class generated for each entity contains only code specific to the entity, including column definitions, Cursor bindings, and all SQL. This keeps the generated code as small as possible. There are a couple of features worth pointing out.

First, column definitions are stored as an enum. Here’s an example from the SimpleEntityTable class generated for SimpleEntity in the unit tests:

public enum Columns implements TableHelper.Column {
	BLOBFIELD,
	BOOLEANFIELD,
	BYTEFIELD,
	CHARFIELD,
	DOUBLEFIELD,
	ENUMFIELD,
	FLOATFIELD,
	_ID,
	...
}

The Columns enum permits typesafe column references throughout your app and also maintains column order for methods that need to iterate over all the columns. Because the enum implements the marker interface TableHelper.Column, you can refer to the actual enum values in the various methods of FilterBuilder or create your own methods that take a column name as an argument. By convention, the column names are just the name of the underlying entity field in all caps. At some point, I may add an @Column annotation for entity fields to allow column renaming.

Now let’s look at the binding methods of the TableHelper class. The first, newInstance(Cursor c), creates a new entity instance from a Cursor. It is used to convert one row to its object representation. Here’s an example, again from the generated SimpleEntityTable in the unit tests:

@Override
public SimpleEntity newInstance(Cursor c) {
	SimpleEntity obj = new SimpleEntity();
	obj.setBlobField(c.getBlob(0));
	obj.setBooleanField(c.getInt(1) == 1 ? true : false);
	obj.setByteField((byte) c.getShort(2));
	obj.setCharField((char) c.getInt(3));
	obj.setDoubleField(c.getDouble(4));
	obj.setEnumField(c.isNull(5) ? null : com.turbomanage.storm.entity.SimpleEntity.EnumType.valueOf(c.getString(5)));
	obj.setFloatField(c.getFloat(6));
	obj.setId(c.getLong(7));
	obj.setIntField(c.getInt(8));
	obj.setLongField(c.getLong(9));
	obj.setPrivateField(c.getInt(10));
	obj.setShortField(c.getShort(11));
	obj.setwBooleanField(BooleanConverter.GET.fromSql(getIntOrNull(c, 12)));
	obj.setwByteField(ByteConverter.GET.fromSql(getShortOrNull(c, 13)));
	obj.setwCharacterField(CharConverter.GET.fromSql(getIntOrNull(c, 14)));
	obj.setwDateField(DateConverter.GET.fromSql(getLongOrNull(c, 15)));
	obj.setwDoubleField(DoubleConverter.GET.fromSql(getDoubleOrNull(c, 16)));
	obj.setwFloatField(FloatConverter.GET.fromSql(getFloatOrNull(c, 17)));
	obj.setwIntegerField(IntegerConverter.GET.fromSql(getIntOrNull(c, 18)));
	obj.setwLatitudeField(LatitudeConverter.GET.fromSql(getDoubleOrNull(c, 19)));
	obj.setwLongField(LongConverter.GET.fromSql(getLongOrNull(c, 20)));
	obj.setwShortField(ShortConverter.GET.fromSql(getShortOrNull(c, 21)));
	obj.setwStringField(c.getString(22));
	return obj;
}

Fields of primitive type are mapped directly via the corresponding primitive getter methods on the Cursor, which preserves the efficiency of using primitive types in your entities. Wrapper types like Double call the primitive getter on the Cursor but then get converted to a Double instance via the getDoubleOrNull() method in the TableHelper base class. This introduces object creation overhead; however, that overhead is implied by the entity’s use of the wrapper type in the first place. Either it happens explicitly in the getDoubleOrNull() method, or it would happen implicitly via auto-boxing if you were to assign the result of Cursor.getDouble() to an entity field of type Double. Of course, the newInstance() method must create an instance of the entity itself; however, this is inherent in the idea of object relational mapping. If you are pursuing absolute maximum efficiency using SQLite without any object representations, stORM is not for you. Admittedly, there are some cases, especially queries, where you are only interested in one or two fields of an object and it would be wasteful to populate all fields as above. In these cases, you can simply bypass stORM. It’s perfectly safe to obtain the readable or writable database using the DatabaseFactory class as shown in the first section of this article.

The corollary method to newInstance() is getEditableValues(T obj), which returns a ContentValues map populated with the object’s fields. The ContentValues object can then be passed to the insert() or update() methods on a writable SQLiteDatabase. Here’s the generated getEditableValues() method in SimpleEntityTable:

@Override
public ContentValues getEditableValues(SimpleEntity obj) {
	ContentValues cv = new ContentValues();
	cv.put("BLOBFIELD", obj.getBlobField());
	cv.put("BOOLEANFIELD", obj.isBooleanField() ? 1 : 0);
	cv.put("BYTEFIELD", (short) obj.getByteField());
	cv.put("CHARFIELD", (int) obj.getCharField());
	cv.put("DOUBLEFIELD", obj.getDoubleField());
	cv.put("ENUMFIELD", obj.getEnumField() == null ? null : obj.getEnumField().name());
	cv.put("FLOATFIELD", obj.getFloatField());
	cv.put("_ID", obj.getId());
	cv.put("INTFIELD", obj.getIntField());
	cv.put("LONGFIELD", obj.getLongField());
	cv.put("PRIVATEFIELD", obj.getPrivateField());
	cv.put("SHORTFIELD", obj.getShortField());
	cv.put("WBOOLEANFIELD", BooleanConverter.GET.toSql(obj.getwBooleanField()));
	cv.put("WBYTEFIELD", ByteConverter.GET.toSql(obj.getwByteField()));
	cv.put("WCHARACTERFIELD", CharConverter.GET.toSql(obj.getwCharacterField()));
	cv.put("WDATEFIELD", DateConverter.GET.toSql(obj.getwDateField()));
	cv.put("WDOUBLEFIELD", DoubleConverter.GET.toSql(obj.getwDoubleField()));
	cv.put("WFLOATFIELD", FloatConverter.GET.toSql(obj.getwFloatField()));
	cv.put("WINTEGERFIELD", IntegerConverter.GET.toSql(obj.getwIntegerField()));
	cv.put("WLATITUDEFIELD", LatitudeConverter.GET.toSql(obj.getwLatitudeField()));
	cv.put("WLONGFIELD", LongConverter.GET.toSql(obj.getwLongField()));
	cv.put("WSHORTFIELD", ShortConverter.GET.toSql(obj.getwShortField()));
	cv.put("WSTRINGFIELD", obj.getwStringField());
	return cv;
}

As in the newInstance() method, primitive types are bound with no conversion overhead by calling the variant of the overloaded put() method which takes the primitive type. In the case of wrapper types like Double, the object returned by the TypeConverter’s toSql() method will be implicitly unwrapped. I suspect that using ContentValues in an insert or update may be slightly less efficient than using a prepared statement because that’s no doubt what’s happening under the covers. If this becomes an issue, it would be easy enough to modify stORM to generate the code and bindings for a prepared statement instead.

FilterBuilder

The FilterBuilder object returned by the DAO’s filter() method is currently an elementary query builder that can AND together a series of equality conditions. The query-by-example methods in the base DAO class use FilterBuilder. I’m not completely happy with this API yet, particularly the overloaded eq() methods for every wrapper type. However, it seems necessary to minimize lookup overhead for the appropriate TypeConverter, as well as to handle special cases like Double and byte[], which require special treatment for equality conditions. Consider this a work in progress.
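As a sketch of the current API, assuming the generated SimpleEntityDao, its Columns enum (imported from SimpleEntityTable), and a Context ctx in scope (the exact method set may still change):

SimpleEntityDao dao = new SimpleEntityDao(ctx);
List<SimpleEntity> matches = dao.filter()
		.eq(Columns.INTFIELD, 42)
		.eq(Columns.BOOLEANFIELD, true)
		.list();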

TypeConverter<J,S>

The TypeConverter interface is at the heart of all the code to convert fields to/from SQL. SQLite has only four native data types. In stORM, these are represented by the TypeConverter.SqlType enum: INTEGER, REAL, BLOB, and TEXT. Closely related to these are the seven primitive types available in the Cursor.getX() and ContentValues.put() methods. These are represented by the TypeConverter.BindType enum: BLOB, DOUBLE, FLOAT, INT, LONG, SHORT, and STRING. The TypeConverter interface and its corresponding @Converter annotation offer an extensible mechanism by which you can convert any Java type, including your own classes, to a SQL representation. Simply extend TypeConverter and annotate it with @Converter. As an example, here’s the built-in BooleanConverter:

package com.turbomanage.storm.types;

import com.turbomanage.storm.api.Converter;
import com.turbomanage.storm.types.TypeConverter.BindType;
import com.turbomanage.storm.types.TypeConverter.SqlType;

@Converter(forTypes = { boolean.class, Boolean.class }, bindType = BindType.INT, sqlType = SqlType.INTEGER)
public class BooleanConverter extends TypeConverter<Boolean,Integer> {

	public static final BooleanConverter GET = new BooleanConverter();

	@Override
	public Integer toSql(Boolean javaValue) {
		if (javaValue == null)
			return null;
		return (javaValue==Boolean.TRUE) ? 1 : 0;
	}

	@Override
	public Boolean fromSql(Integer sqlValue) {
		if (sqlValue == null)
			return null;
		return sqlValue==0 ? Boolean.FALSE : Boolean.TRUE;
	}

	@Override
	public Integer fromString(String strValue) {
		if (strValue == null)
			return null;
		return Integer.valueOf(strValue);
	}

}

A TypeConverter must implement methods to convert to and from the type’s SQL representation, or more precisely, the Java type which is directly bindable via one of the Cursor.getX() methods. In addition, it must be able to convert to/from a String. The String methods are used only by the CSV utilities. All TypeConverter return types and argument types must be object types, not primitives. In the case of queries using FilterBuilder, this may result in one unnecessary object creation per field when primitives are implicitly wrapped; however, the use of object types throughout allows the TypeConverter interface to be parameterized, which, besides its elegance, makes it easier to create custom TypeConverters from scratch.

Note that the @Converter annotation must specify the Java types for which the converter may be used (forTypes) as well as the corresponding SQL representation type and binding type. The latter is used by the annotation processor to generate the name of the corresponding Cursor.getX() method. Fortunately, byte[] is an Object type so byte arrays can be handled the same way as all the wrapper types. Also fortunately, primitive types do have an associated class even though they are not Objects, so boolean.class is valid in the annotation’s forTypes attribute. In addition to implementing the TypeConverter interface and providing the @Converter annotation, a custom TypeConverter must declare a static field named GET which returns a singleton instance of the converter. This is used by the generated TableHelper methods to minimize the number of converter instances overall.
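To make the contract concrete, here’s a hedged sketch of a custom converter for java.math.BigDecimal stored as TEXT. The class is illustrative, not part of stORM; it just follows the BooleanConverter pattern above:

package com.example.converters;

import java.math.BigDecimal;

import com.turbomanage.storm.api.Converter;
import com.turbomanage.storm.types.TypeConverter;
import com.turbomanage.storm.types.TypeConverter.BindType;
import com.turbomanage.storm.types.TypeConverter.SqlType;

@Converter(forTypes = { BigDecimal.class }, bindType = BindType.STRING, sqlType = SqlType.TEXT)
public class BigDecimalConverter extends TypeConverter<BigDecimal,String> {

	public static final BigDecimalConverter GET = new BigDecimalConverter();

	@Override
	public String toSql(BigDecimal javaValue) {
		return (javaValue == null) ? null : javaValue.toPlainString();
	}

	@Override
	public BigDecimal fromSql(String sqlValue) {
		return (sqlValue == null) ? null : new BigDecimal(sqlValue);
	}

	@Override
	public String fromString(String strValue) {
		// the String form is already the SQL representation used by the CSV utilities
		return strValue;
	}

}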

One final note about TypeConverter: the reason that SQL type and bind type are specified as annotation attributes vs. methods in TypeConverter is because the latter would require the annotation processor to obtain an instance of a TypeConverter in order to invoke a method like getBindType(). But the annotation processor can only inspect source code using the TypeMirror API, which doesn’t permit direct reference to Class objects such as would be needed to get a new instance via reflection. Thus, including these attributes in the @Converter annotation allows you to include custom TypeConverters in your application project directly vs. packaging them in a jar as would be required otherwise.

Supporting incremental compilation

One of the biggest surprises when writing the annotation processor (see stORM’s MainProcessor) was the realization that incremental compilation support (in Eclipse, at least) exposes to the annotation processor only those source files which have been modified in the current save operation. Thus, if you add an @Entity annotation, the annotation processor can only inspect the code for that entity. This is problematic for stORM because it needs to know about all the annotated converters, databases, and entities in order to update the generated DbFactory and TableHelper classes with each save. To preserve state about previously-inspected annotations, the processor maintains a state file. This file, called stormEnv, lives in the same directory as the generated code (in Eclipse, .apt_generated), and the annotation processor reads/writes it before/after each run. If you experience problems with generated code or extraneous artifacts, just clean the project. This will wipe the stormEnv file and trigger a complete rebuild which processes all the annotations at once.

Summary

Now that you’re aware of some of stORM’s nuances and implementation details, do give it a whirl and let me know what you think. Comments are welcome here as well as on G+, where I’ve also shared this post. Also note that you can click on the title of this post in WordPress to link directly to it, which enables the Like and Share buttons for G+, Twitter, and LinkedIn. Enjoy!

Posted in Android | 3 Comments »

stORM: a lightweight DAO generator for Android SQLite

Posted by David Chandler on December 11, 2012

In my previous blog post on writing a Java annotation processor, I mentioned that I’d been working on a lightweight ORM for Android SQLite. I’m finally ready to offer a preview of project stORM with this caveat: at this stage, it’s more of a DAO (data access object) code generator than a full-fledged ORM. The generated DAO classes offer a simple interface to save and retrieve objects to SQLite without having to write any SQL code; however, stORM doesn’t attempt to offer features of full-fledged ORMs such as lazy loading of an object graph.

Goals

stORM is intended as an easy way to create a basic object store with SQLite. It lets you insert, update, delete, and query objects without having to write any SQL. It can persist fields of any primitive or wrapper type, including enums. In addition, for more complex types, you can provide your own custom TypeConverter with @Converter. stORM does not yet support modeling of relations. Support for foreign keys and indexes is planned.

stORM is implemented as a Java annotation processor so it can be used with or without an IDE. To use it, you simply annotate POJO (plain old Java object) classes that you want to persist in a SQLite database and stORM automatically generates a SQLiteOpenHelper class and DAO classes for you.

Example

Here’s an example of stORM usage. The Person class is a POJO with a few String and primitive types (firstName, lastName, weight, and height). Note that it includes an id field of type long. An ID field of type long is required along with corresponding accessor methods. To name the ID field differently, place the @Id annotation on any field of type long.

package com.turbomanage.demo.entity;

import com.turbomanage.storm.api.Entity;

@Entity
public class Person {

	private long id;
	private String firstName, lastName;
	private int weight;
	private double height;

	// accessor methods omitted
}

The Person class is annotated with @Entity, which enables the annotation processor to generate a corresponding DAO class. To insert a new Person in the database, we simply use the generated DAO class. The DAO package name is derived by appending “.dao” to your entity package, and the DAO classname is the entity classname + “Dao”.

import com.turbomanage.demo.entity.dao.PersonDao;
...
PersonDao dao = new PersonDao(getContext());
Person newPerson = new Person();
newPerson.setFirstName("David");
newPerson.setLastName("Chandler");
// insert
long newPersonId = dao.insert(newPerson);
// query by example
Person examplePerson = new Person();
examplePerson.setFirstName("David");
List<Person> allDavids = dao.listByExample(examplePerson);

The insert() method returns the id of the newly inserted row. In addition, the id field of the Person object is populated with this value.

The DAO’s listByExample() method is the simplest way to do a query. It lets you retrieve all rows matching the fields of an example object with non-default values. There is also a filter() method which lets you specify conditions directly.

Setup

To get started with stORM:

  1. Download the storm-api and storm-apt jars from storm-gen.googlecode.com.
  2. Add storm-api.jar to your project’s libs/ folder. This contains the stORM annotations and runtime.
  3. Add storm-apt.jar to your project’s annotation factory class path. In Eclipse, you can find this under project properties | Java Compiler | Annotation Processing | Factory Path. The apt jar contains the annotation processor which inspects your project’s source code and generates the required database classes.


Add storm-apt.jar to your project’s annotation factory classpath in Eclipse.

Usage

Once you’ve added the jars to your project, it’s easy to use stORM. You just need a DatabaseHelper class annotated with @Database and one or more POJO classes annotated with @Entity.

  1. Create a new class that extends DatabaseHelper.
  2. Add @Database and supply a database name and version.
  3. Override getUpgradeStrategy() to choose one of the available strategies (DROP_CREATE for new projects).
  4. Create a POJO class you want to persist to the database.
  5. Make sure it has a field of type long named “id” or annotated with @Id.
  6. Add the @Entity annotation to the class.
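For example, steps 1-3 might look like the following sketch. The constructor signature is inferred from the generated factory shown in my implementation notes post, and the UpgradeStrategy names here are illustrative:

// imports omitted; the annotation and base classes live in the storm-api jar
@Database(name = "demoDb", version = 1)
public class DemoDatabaseHelper extends DatabaseHelper {

	public DemoDatabaseHelper(Context ctx, DatabaseFactory dbFactory) {
		super(ctx, dbFactory);
	}

	@Override
	public UpgradeStrategy getUpgradeStrategy() {
		// DROP_CREATE destroys all data on upgrade, so use it only for new projects
		return UpgradeStrategy.DROP_CREATE;
	}

}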

Following these steps, you’ll see 3 generated classes under .apt_generated (you might need to unfilter hidden resources in the Eclipse Project Explorer to see it):

  • a DbFactory class. This provides a singleton instance of your DatabaseHelper class (which in turn extends SQLiteOpenHelper). Having only one instance of SQLiteOpenHelper ensures that SQLite’s own threadsafe mechanisms work as intended.
  • a DAO class for each entity. This extends SQLiteDao and mainly supplies the necessary type parameter to the base class.
  • a Table class for each entity. This contains the generated code to convert an entity instance to and from a SQL row.

You can use the DAO simply by creating a new instance. You must pass it the application context as this is required by the underlying SQLiteOpenHelper.

PersonDao dao = new PersonDao(getContext());

The DAO constructor obtains the singleton instance of your DatabaseHelper class from the generated factory class. Because of this, it’s safe to create new DAO instances anywhere you need them. You can now invoke any of the public methods on the DAO, such as insert(), update(), and listAll(). There are also listByExample() and getByExample() methods that build a query from an example object.

You can also use the FilterBuilder API (still under construction) to run a query like this:

List<Person> allDavids = dao.filter().eq(Columns.FIRSTNAME, "David").list();

Version upgrades

One of the more challenging aspects of using SQLite can be upgrading your app’s database schema with new versions. stORM makes this easier by providing several upgrade strategies which you can choose in your DatabaseHelper class:

  • DROP_CREATE will drop all tables and create the database again using the new schema implied by your new entity definitions. This will destroy all data in the database, so should only be used in testing, not after your app is in production.
  • BACKUP_RESTORE will first backup the database to CSV files, then drop and recreate the database, and finally restore all the data from the backup CSV files. Any new fields (columns) in your entities will receive default values, and dropped columns will simply disappear. It is not yet possible to rename a field or change the field type.
  • UPGRADE lets you implement your own strategy (probably an ALTER TABLE statement) by overriding the onUpgrade() method in the DatabaseHelper class.
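For instance, a custom UPGRADE implementation might look like this sketch (assuming DatabaseHelper passes through the standard SQLiteOpenHelper callback; the table and column are illustrative):

@Override
public void onUpgrade(SQLiteDatabase db, int oldVersion, int newVersion) {
	if (oldVersion < 2) {
		db.execSQL("ALTER TABLE Person ADD COLUMN NICKNAME TEXT");
	}
}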

Dump to CSV

Finally, stORM can automatically backup your database to CSV files and restore it from the same. This is used by the BACKUP_RESTORE UpgradeStrategy. You can also manually dump to CSV or restore from CSV by calling methods directly on your DatabaseHelper class. Example:

DemoDbFactory.getDatabaseHelper(getContext()).backupAllTablesToCsv();
DemoDbFactory.getDatabaseHelper(getContext()).restoreAllTablesFromCsv();

You can browse the CSV files using adb shell under /data/data/your_app_package/files. Be aware that any byte[] fields are Base64 encoded, and fields of type double are encoded using their exact hex representation. See the code in BlobConverter.fromString() and DoubleConverter.fromString() if you want to read and parse these values independently of stORM.

Coming soon: ContentProvider

Another pain point for many developers is creating a ContentProvider. Stay tuned for a way to generate ContentProvider contract stubs and implementations also.

Source code

storm-gen.googlecode.com

The issue tracker is open…

Enjoy!

Posted in Android | 37 Comments »

Getting Started with Java APT for Android in Eclipse

Posted by David Chandler on September 28, 2012

Fellow Googler Ian Ni-Lewis and I have recently been working on annotation-driven code generation tools for Android and I wanted to pass along a few tech notes on getting it all up and running. The tool I’m working on is a lightweight ORM for the SQLite database in the Android OS. In a nutshell, you add the @Entity annotation to any POJO that you want to make persistable, and the annotation processor generates a corresponding DAO class and the necessary SQLiteOpenHelper.

In this post, I won’t focus on the ORM tool as it’s not quite ready, but rather on how to work with Eclipse to build an annotation processor of your own (or contribute to the ORM project once I make it available). Despite the annoyances of the Mirror API, I found it reasonably straightforward and rewarding to write my own annotation processor.

What can APT do?

Java 5 introduced an Annotation Processing Tool (apt) which can be run standalone. Eclipse 3.3 added support for annotation processing as part of the Java build process. In a nutshell, an annotation processor can find annotations in your source code, inspect the code using something like reflection, and generate new source code as a result. In addition, an annotation processor can throw errors which will show up as red squigglies in Eclipse in the original annotated source code. This is extremely powerful as it allows your annotation processor to alert the developer regarding the correct use of the annotation. For example, when you tag a POJO with @Entity, my forthcoming ORM tool shows you immediately if any field types in the POJO are unsupported.

What’s in an annotation processor?

  1. The definition of your annotations using @interface (see the sketch after this list).
  2. A class that extends AbstractProcessor, part of the Java 6 annotation processing APIs. One processor class can process multiple annotations.
  3. Code that inspects the annotated source code using the Mirror API.
  4. META-INF/services/javax.annotation.processing.Processor, which contains merely the fully-qualified name of your AbstractProcessor class, like “com.example.storm.MainProcessor”.
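For item 1, the annotation definitions are plain @interface declarations. A minimal sketch (the retention and target choices here are typical for processor-driven annotations, not necessarily what the final project uses):

package com.example.storm;

import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

@Retention(RetentionPolicy.SOURCE)
@Target(ElementType.TYPE)
public @interface Entity {
}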

How to run your annotation processor

To use your annotation processor, package it in a jar. There are several ways to run it:

  1. On the command line, run Java apt. In Java 6 and later, javac will automatically search the classpath for annotation processors. You can bypass the default discovery mechanism using the -processor and -processorpath compiler options. You can also include your own AnnotationProcessorFactory to replace the default factory.
  2. In ant, add the -processorPath and -processor options or use the ant apt task (pre-Java 6).
  3. In Eclipse, add your annotation processor jar to the annotation factory classpath as described in the Eclipse guide to APT .

While the Eclipse JDT-APT integration is cool as it enables editor integration (hello, red squigglies), there is one limitation that makes it a pain to work on the annotation processor code itself. Whether by design or bug, when you make changes to your annotation processor, you must repackage as a jar and manually remove / add it to the annotation factory classpath of any projects using it. The packaging can be easily automated as I will show, but I haven’t yet found a way to get Eclipse to pick up changes to apt jars, even when they’re referenced as jars in an Eclipse project (vs. external jars).

Getting Started

During development, I found it useful to organize the annotation processor code into 3 separate projects: API, impl, and test.

The API project

The API project contains the @interface definitions for your annotations. Since this code will need to be on the classpath of any project that uses your annotations, it’s reasonable to also include other non-generated code that will be used at runtime. For example, my API project includes a DAO base class and other runtime code as well as the annotations. Since some of the runtime code has Android dependencies, I made this an Android library project by checking the Library box when running the New Android Application Project wizard. The project has no Activities, and the AndroidManifest.xml contains only the <uses-sdk> element.

As we noted earlier, the only way to run an annotation processor in Eclipse is from a jar or plugin. Since the API is required by the annotation processor, it must exist as a jar on the annotation factory classpath of any client projects such as the test project below. Therefore, it’s helpful to build the jar every time you make a change to the project. Eclipse does not do this for you automatically. However, you can set it up using ant as follows:

Once you’ve created the API project and configured the build path as required, export an ant build file using File | Export | Ant Buildfiles.

Edit the build.xml and add this inside the build-project target:

<jar destfile="your-api.jar" basedir="bin/classes"></jar>

Go to project properties | Builders and add a New Ant Builder. In the Main tab, click Browse Workspace… to choose the build.xml file, then finish. The default targets are OK. Now every time you save a change to the API, you’ll see it build your-api.jar in the project.

The APT implementation project

The impl project includes one or more AbstractProcessors, supporting model classes, and the META-INF file that specifies the name of your processor class. The APT impl project has no Android dependencies because the Android references exist only in the Freemarker templates. This project has on its build path the API project. You could depend on the API jar since it’s getting updated by your ant task, but depending on the project will facilitate refactoring. As with the API project, you’ll want to use an Ant Builder to update the impl jar every time the project is built.

My annotation processor uses the Freemarker template language to generate source code. The basic approach is to inspect the annotated code, populate a model class (like EntityModel) which simply has getters and setters for the fields that will be needed by the template, then invoke Freemarker, passing it the model and the template. This is my annotation processor:

package com.example.storm;

import java.io.IOException;
import java.io.Writer;
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

import javax.annotation.processing.AbstractProcessor;
import javax.annotation.processing.RoundEnvironment;
import javax.annotation.processing.SupportedAnnotationTypes;
import javax.annotation.processing.SupportedSourceVersion;
import javax.lang.model.SourceVersion;
import javax.lang.model.element.Element;
import javax.lang.model.element.TypeElement;
import javax.tools.JavaFileObject;

import freemarker.cache.ClassTemplateLoader;
import freemarker.template.Configuration;
import freemarker.template.Template;
import freemarker.template.TemplateException;

@SupportedAnnotationTypes({ "com.example.storm.Entity","com.example.storm.Database" })
@SupportedSourceVersion(SourceVersion.RELEASE_6)
public class MainProcessor extends AbstractProcessor {
	private ProcessorLogger logger;
	private Configuration cfg = new Configuration();
	private List<EntityModel> entities = new ArrayList<EntityModel>();

	@Override
	public boolean process(Set<? extends TypeElement> annotations,
			RoundEnvironment roundEnv) {
		this.logger = new ProcessorLogger(processingEnv.getMessager());
		logger.info("Running MainProcessor...");

		cfg.setTemplateLoader(new ClassTemplateLoader(this.getClass(), "/res"));

//		for (TypeElement annotationType : annotations) {}

		for (Element element : roundEnv.getElementsAnnotatedWith(Entity.class)) {
			Entity annotation = element.getAnnotation(Entity.class);
			EntityModel em = new EntityModel(element);
			processTemplateForModel(em);
			// retain for DatabaseHelper class
			entities.add(em);
		}

		for (Element element : roundEnv.getElementsAnnotatedWith(Database.class)) {
			Database dbAnno = element.getAnnotation(Database.class);
			DatabaseModel dm = new DatabaseModel(element, dbAnno.name(), dbAnno.version());
			// Add entity DAOs
			for (EntityModel em : entities) {
				dm.addDaoClass(em.getQualifiedClassName());
			}
			processTemplateForModel(dm);
		}

		return true;
	}

	private void processTemplateForModel(ClassModel model) {
		JavaFileObject file;
		try {
			file = processingEnv.getFiler().createSourceFile(model.getQualifiedClassName());
			logger.info("Creating file  " + file.getName());
			Writer out = file.openWriter();
			Template t = cfg.getTemplate(model.getTemplatePath());
			logger.info("Processing template " + t.getName());
			t.process(model, out);
			out.flush();
			out.close();
		} catch (IOException e) {
			logger.error("EntityProcessor error", e, model.getElement());
		} catch (TemplateException e) {
			logger.error("EntityProcessor error", e, model.getElement());
		}
	}

}

The hardest part about using Freemarker was locating the templates on the classpath. I found it is easiest to use Freemarker’s ClassTemplateLoader and put my .ftl templates in src/res. The name happens to coincide with the standard Android resources folder; however, this is not required, as the impl project is not an Android project. Furthermore, my res folder is under src, not a sibling directory like the standard Android folder. This is the easiest way for ClassTemplateLoader to find it.

Note the catch clauses in processTemplateForModel() that invoke logger.error(). Messages written to the ProcessorLogger show up in Eclipse’s Error Log view. In addition, by passing the Element which triggered the error, Eclipse can generate red squigglies in source code using the annotation.

In my current design, EntityModel and DatabaseModel correspond to the @Entity and @Database annotations and extend ClassModel, which does the heavy lifting using the Mirror API. Here are the key methods from ClassModel which show how to introspect on an annotated class and its fields using the Mirror API:

	public ClassModel(Element element) {
		TypeElement typeElement = (TypeElement) element;
		this.typeElement = typeElement;
		readFields(typeElement);
	}

	protected void readFields(TypeElement type) {
		// Read fields from superclass if any
		TypeMirror superClass = type.getSuperclass();
		if (TypeKind.DECLARED.equals(superClass.getKind())) {
			DeclaredType superType = (DeclaredType) superClass;
			readFields((TypeElement) superType.asElement());
		}
		for (Element child : type.getEnclosedElements()) {
			if (child.getKind() == ElementKind.FIELD) {
				VariableElement field = (VariableElement) child;
				Set<Modifier> modifiers = field.getModifiers();
				if (!modifiers.contains(Modifier.TRANSIENT) && !modifiers.contains(Modifier.PRIVATE)) {
					String javaType = getFieldType(field);
					addField(field.getSimpleName().toString(), javaType);
				}
			}
		}
	}

	protected String getFieldType(VariableElement field) {
		TypeMirror fieldType = field.asType();
		return fieldType.toString();
	}

I’m not completely happy with this design and plan to move all the mirror API code into processor classes, leaving the model classes pure. This should enable even more fine-grained error reporting. Here’s a bit of my Freemarker template src/res/EntityDao.ftl. All variable names inside ${} invoke the corresponding getters in the model class passed to the process() method above.

public class ${className} extends ${baseDaoClass}<${entityName}>{

	public static void onCreate(SQLiteDatabase db) {
		String sqlStmt =
			"CREATE TABLE ${tableName}(" +
				<#list fields as field>
				"${field.colName} ${field.sqlType}<#if field.primitive> NOT NULL</#if><#if field_has_next>,</#if>" +
				</#list>
			")";
		db.execSQL(sqlStmt);
	}
	...
}

The test project

The test project is an Android Application Project which has on its build path the API project. However, because the API project is an Android Library project, you must add it as a referenced library under project properties | Android, not the usual Java Build Path. In addition, you must add 3 jars to the annotation factory classpath in Eclipse: the Freemarker jar, the API jar, and the impl jar.

There are several ways to invoke Android unit tests. I found it easiest to include test instrumentation directly in an Android Application Project created using the Application Project wizard (not the Android Test Project wizard, as that requires yet another project under test). It has the default MainActivity, and you can simply add the test instrumentation to AndroidManifest.xml as follows:

    <instrumentation
        android:name="android.test.InstrumentationTestRunner"
        android:targetPackage="com.example.storm.test" />

    <application
        android:icon="@drawable/ic_launcher"
        android:label="@string/app_name"
        android:theme="@style/AppTheme" >
        <activity
            android:name=".MainActivity"
            android:label="@string/title_activity_main" >
            <intent-filter>
                <action android:name="android.intent.action.MAIN" />
                <category android:name="android.intent.category.LAUNCHER" />
            </intent-filter>
        </activity>

        <uses-library android:name="android.test.runner" />
    </application>

The activity should already be there; you just need to add the uses-library tag and the instrumentation tag pointing to the package which contains your JUnit 3 tests (note: Android doesn’t use JUnit 4). To create tests, simply run the New JUnit Test wizard. To run them, right-click on the project and select Run As | Android JUnit Test. This will run the tests on an emulator or device and show results in the standard JUnit console.
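A minimal sketch of such a test (AndroidTestCase provides getContext(); the DAO and entity names are illustrative placeholders for whatever your processor generates):

import android.test.AndroidTestCase;

public class PersonDaoTest extends AndroidTestCase {

	public void testInsert() {
		PersonDao dao = new PersonDao(getContext());
		Person person = new Person();
		person.setFirstName("David");
		long id = dao.insert(person);
		assertTrue(id > 0);
	}

}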

Wrapping up

Creating an annotation processor has been a pretty fun learning project, and I trust the final product will be useful to a lot of Android developers. The most painful part is having to manually remove and re-add the jars from the test project’s annotation factory classpath every time I make a change to one of the jars. I have to go to project properties | Java Compiler > Annotation Processing > Factory Path, uncheck the boxes next to the api and impl jars, click OK and rebuild, then go back in to recheck the boxes and rebuild. Doesn’t take long with keyboard shortcuts, but still… I’ll be glad when the ORM jars are stable, and (I hope), so will you.

Posted in Android, Eclipse | 8 Comments »

 