I'm pretty sure you can use a timestamp as a partitionColumn; it's a
TIMESTAMP type in MySQL, which is numeric at its base, and Spark requires
a numeric type to be passed in.
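For context, the read I'm attempting looks roughly like this (a sketch only; the URL, table, and column names are placeholders, not my real schema):

```python
# Hypothetical JDBC options for the partitioned read.
# "created_at" stands in for the MySQL TIMESTAMP column.
jdbc_options = {
    "url": "jdbc:mysql://localhost:3306/mydb",
    "dbtable": "events",
    "partitionColumn": "created_at",  # MySQL TIMESTAMP column
    "lowerBound": "1325605540",       # raw epoch seconds -- this is what fails
    "upperBound": "1425605540",
    "numPartitions": "4",
}
# These options would be passed to spark.read.format("jdbc").options(...)
```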

This doesn't work: the WHERE clause Spark generates contains raw
numerics, which won't compare against the MySQL TIMESTAMP column.
minTimeStamp = 1325605540 <-- This is wrong, but I'm not sure what to put instead
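For reference, here is what that epoch value works out to as a wall-clock timestamp (just a sketch; I've read that newer Spark versions can take a timestamp string as the bound when the partition column is a TIMESTAMP, but I haven't confirmed that):

```python
from datetime import datetime, timezone

# The epoch-seconds bound from above, converted to a MySQL-style
# "YYYY-MM-DD HH:MM:SS" string in UTC.
min_epoch = 1325605540
lower_bound = datetime.fromtimestamp(min_epoch, tz=timezone.utc).strftime(
    "%Y-%m-%d %H:%M:%S"
)
print(lower_bound)  # -> 2012-01-03 15:45:40
```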

MySQL DB schema:

I'm obviously doing it wrong, but couldn't find anything obvious while
digging around.
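One workaround I've been considering is to expose a numeric column via a subquery in dbtable, using MySQL's UNIX_TIMESTAMP(), so the raw numeric bounds compare correctly on the MySQL side. A sketch (table and column names are placeholders; I haven't verified this end to end):

```python
# Sketch: push UNIX_TIMESTAMP() into the dbtable subquery so the
# partition column Spark splits on is numeric in MySQL as well.
subquery = (
    "(SELECT e.*, UNIX_TIMESTAMP(e.created_at) AS ts_num "
    "FROM events e) AS t"
)
jdbc_options = {
    "url": "jdbc:mysql://localhost:3306/mydb",
    "dbtable": subquery,
    "partitionColumn": "ts_num",   # numeric, so epoch bounds line up
    "lowerBound": "1325605540",
    "upperBound": "1425605540",
    "numPartitions": "4",
}
```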

The query that gets generated looks like this (not exactly; it's optimized
to include some upstream query parameters):

Gary Lucas