My bad, I realized my question was unclear.

I did a partitionBy when using saveAsHadoopFile. My question was about
doing the same thing for Parquet files. We were using Spark 1.3.x, but now
that we've upgraded to 1.4.1 I'd totally forgotten that this makes it
possible :-)
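For the record, in 1.4+ this looks something like the sketch below (the
input path, output path, and the "date" partition column are just examples,
not our actual setup):

```scala
import org.apache.spark.sql.SQLContext

// Sketch only, assuming Spark 1.4+, where DataFrameWriter.partitionBy
// is available. Obtain the SQLContext from your SparkContext as usual.
val sqlContext: SQLContext = ??? // e.g. new SQLContext(sc)

val events = sqlContext.read.json("hdfs:///input/events")

// Writes one Parquet sub-directory per distinct value of "date",
// e.g. .../events_parquet/date=2015-09-08/part-*.parquet
events.write
  .partitionBy("date")
  .parquet("hdfs:///output/events_parquet")
```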

Thanks for the answer, then!

On 8 September 2015 at 12:58, Cheng Lian <[EMAIL PROTECTED]> wrote:

*Adrien Mogenet*
Head of Backend/Infrastructure
50, avenue Montaigne - 75008 Paris