Re: Usage of 'store' operation in Logical Queries
Hello Varadha,

I am actually working on the writer interface for Drill right now.
Unfortunately, we currently only have reading implementations for JSON and
Parquet, and neither of those storage engines can export data.

I am working on designing a unified reader/writer interface that will
enable us to add more formats faster, while simultaneously adding read and
write support with each implementation.
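To make the idea concrete, here is a rough, hypothetical sketch of what such a
unified interface could look like; the interface name, method signatures, and
record representation below are illustrative assumptions only, not the actual
patches:

// Hypothetical sketch only -- illustrative, not the actual Drill patch.
import java.io.IOException;
import java.util.Iterator;
import java.util.Map;

/**
 * Idea: a storage format implements one contract and thereby gets
 * both read and write support at the same time.
 */
public interface RecordReaderWriter {

  // Open the resource (e.g. a file) that backs this format instance.
  void open(String path) throws IOException;

  // Read records; each record is modeled here as a column-name -> value
  // map purely for illustration.
  Iterator<Map<String, Object>> read() throws IOException;

  // Write a single record back out in the same format.
  void write(Map<String, Object> record) throws IOException;

  // Flush any buffered output and release resources.
  void close() throws IOException;
}

A JSON implementation of a contract along these lines would then cover both
the existing scan path and the currently missing 'store' path.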

I will be posting my patches to the list for review soon; I invite you to
take a look if you have an interest in Drill from a development perspective.

Regards,
Jason Altekruse
On Sat, Oct 19, 2013 at 1:06 PM, Varadharajan M
<[EMAIL PROTECTED]> wrote:

> Hi All,
>
> I have a logical plan that's intended to read from a JSON file and write the
> output to another file. I was under the impression that the write operation
> could be done with the help of the 'store' operation. Here is my logical plan:
>
> {
>   "head" : {
>     "type" : "APACHE_DRILL_LOGICAL",
>     "version" : "1",
>     "generator" : {
>     "type" : "optiq",
>     "info" : "na"
>     }
>   },
>
>   "storage" : {
>     "jsonl" : {
>       "type" : "json",
>       "dfsName" : "file:///"
>     },
>     "filel" : {
>       "type" : "fs",
>       "root" : "file:///"
>     }
>   },
>
>   "query" : [ {
>     "op" : "scan",
>     "memo" : "initial_scan",
>     "ref" : "_MAP",
>     "storageengine" : "jsonl",
>     "selection" : [ {
>       "path" : "/tmp/students.json"
>     } ],
>     "@id" : 1
>   },
>
>   {
>     "op" : "store",
>     "input" : 1,
>     "memo" : "output sink",
>     "storageengine": "filel",
>     "@id" : 2,
>     "target" : {
>       "file" : "file:///tmp/do.json"
>     }
>   } ]
> }
>
>
> Now, if I try to run this logical plan with the submit_plan shell script, I
> get the output on the console rather than in the specified file (that is,
> no data is written to the file). Am I missing anything here?
>
> - Varadha
>