Pig >> mail # user >> DataByteArray as Input in Load Function


Re: DataByteArray as Input in Load Function
Or is it only possible to execute the load function at the beginning of the
script? Otherwise it should theoretically be possible to hand over information
that is created while the program is running.
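For reference, Pig resolves load statements, including LoadFunc constructor arguments (which must be string literals), when it compiles the script, so a relation such as MY_BLOOMFILTER cannot be passed into a later load. A common workaround is to materialize the filter to HDFS first and pass its *path* as a string; the load function can then deserialize the filter during its own setup. A minimal sketch, assuming a hypothetical HBaseLoadUDF that accepts such a path (names and paths here are illustrative, not from the original thread):

```pig
-- build the bloom filter as before
MY_BLOOMFILTER = load 'hbase://bloomfilterTable' using ...;

-- materialize it to a known HDFS path so a later load can read it
store MY_BLOOMFILTER into '/tmp/bloomfilter' using BinStorage();

-- force the store to run before the next load is planned; Pig only
-- auto-detects store/load dependencies on the *same* path, and here the
-- path appears only as a constructor argument
exec;

-- pass the path (a string literal) instead of the relation itself;
-- the LoadFunc deserializes the filter in its front-end/back-end setup
MY_DATA = load 'hbase://PO_S'
          using package.udfs.HBaseLoadUDF('mycf', '', 'oneRowKey', '/tmp/bloomfilter')
          as (output:map[]);
```

Alternatively, the two halves can be run as two separate scripts, with the first writing the filter and the second reading it.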

2013/9/17 John <[EMAIL PROTECTED]>

> Hi,
>
> I'm using Pig + HBase. I'm trying to create a Pig program that looks like this:
>
> MY_BLOOMFILTER = load 'hbase://bloomfilterTable' using ...
>
> ... // do something to transform it to a DataByteArray
>
> Now I want to load data outside of HBase based on the bloom filter, so
> I've built my own load function, but how can I call its constructor in
> the Pig program? My load constructor looks like this:
>
> HBaseLoadUDF(String columnList, String optString, String rowKey,
> DataByteArray myBloomfilter);
>
> The program should look like this:
>
> MY_DATA = load 'hbase://PO_S' using package.udfs.HBaseLoadUDF('mycf',
> '', 'oneRowKey', MY_BLOOMFILTER.$0) as (output:map[]);
>
> But this doesn't work.
>
> Is it even possible?
>
>
>