Why do you need to build an in-memory graph to read from and write to? You
could store the graph in HBase directly. As pointed out, HBase might not be
the best fit for SPARQL queries, but it's not impossible to do. From the
triples you can form a graph and represent it in HBase as an adjacency list
(a rough sketch of that layout is below). I've stored graphs of 16-17M
nodes, equivalent to about 600M triples, on a small cluster, and this
approach could certainly scale well beyond 16M nodes.
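To give a rough idea of what I mean, here is a minimal sketch using the
HBase Java client. The table name "rdf" and the single column family "p"
are just placeholders, not the schema I actually used: each subject becomes
a row key, and each (predicate, object) pair becomes a column in that row,
which gives you the adjacency list of the subject in one Get.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hbase.HBaseConfiguration;
    import org.apache.hadoop.hbase.client.Get;
    import org.apache.hadoop.hbase.client.HTable;
    import org.apache.hadoop.hbase.client.Put;
    import org.apache.hadoop.hbase.client.Result;
    import org.apache.hadoop.hbase.util.Bytes;

    import java.util.Map;
    import java.util.NavigableMap;

    public class TripleStoreSketch {

        // one column family holding all predicate columns (illustrative name)
        private static final byte[] CF = Bytes.toBytes("p");

        // Store one triple: row key = subject, qualifier = predicate, value = object.
        // (Multi-valued predicates would need composite qualifiers or cell versions.)
        public static void putTriple(HTable table, String s, String p, String o)
                throws Exception {
            Put put = new Put(Bytes.toBytes(s));
            put.add(CF, Bytes.toBytes(p), Bytes.toBytes(o));
            table.put(put);
        }

        // Read the adjacency list of a subject: all (predicate, object) pairs in its row.
        public static void printOutgoingEdges(HTable table, String s) throws Exception {
            Result row = table.get(new Get(Bytes.toBytes(s)));
            NavigableMap<byte[], byte[]> edges = row.getFamilyMap(CF);
            if (edges == null) return;
            for (Map.Entry<byte[], byte[]> e : edges.entrySet()) {
                System.out.println(s + " --" + Bytes.toString(e.getKey())
                        + "--> " + Bytes.toString(e.getValue()));
            }
        }

        public static void main(String[] args) throws Exception {
            Configuration conf = HBaseConfiguration.create();
            HTable table = new HTable(conf, "rdf"); // assumes a table 'rdf' with family 'p'
            putTriple(table, "http://example.org/alice", "foaf:knows", "http://example.org/bob");
            printOutgoingEdges(table, "http://example.org/alice");
            table.close();
        }
    }

With subjects as row keys, the graph is partitioned across regions by
subject, so walking the out-edges of a node is a single row lookup rather
than a scan.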

In case you are interested in working on SPARQL over HBase, we could
collaborate on it...

-ak
Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz
On Wed, Mar 31, 2010 at 11:56 AM, Andrew Purtell <[EMAIL PROTECTED]> wrote: