Accumulo >> mail # user >> Re: Out of Memory in Embedded Jetty


Out of Memory in Embedded Jetty
I am trying to write a web page that paginates through an Accumulo table.
The code works, but when Jetty restarts the application I run into the
following error. I'm hoping I am just forgetting to close a resource or
something similar. I'm using jetty-9.1.3.v20140225 and Accumulo 1.5.0.

The error:

java.lang.OutOfMemoryError: PermGen space
    at org.eclipse.jetty.server.handler.ErrorHandler.handle(ErrorHandler.java:109)
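For context: PermGen holds class metadata rather than ordinary heap objects, so this error usually appears after hot redeploys leave old webapp classloaders pinned in memory. Independent of finding the actual leak, a common stopgap on Java 7-era JVMs is to enlarge PermGen and let the collector unload classes. A sketch of the flags (sizes and the jar name are illustrative, not recommendations):

```shell
# Illustrative startup flags for a Java 7 JVM running embedded Jetty;
# "myapp.jar" is a placeholder for your application.
java -XX:MaxPermSize=256m \
     -XX:+UseConcMarkSweepGC -XX:+CMSClassUnloadingEnabled \
     -jar myapp.jar
```

This only buys headroom; if a classloader is leaking on redeploy, PermGen will eventually fill again.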

The code:

Connector connector = null;
Instance instance = new ZooKeeperInstance(accumuloInstanceName, accumuloZookeeperEnsemble);
try {
    connector = instance.getConnector(accumuloUser, accumuloPassword.getBytes());
} catch (AccumuloException | AccumuloSecurityException e) {
    throw new RuntimeException("Error getting connector from instance.", e);
}

tableName = "TedgeField";

Scanner scan = null;
try {
    scan = connector.createScanner(tableName, new Authorizations());
} catch (TableNotFoundException e) {
    throw new RuntimeException("Error getting scanner for table.", e);
}
scan.setBatchSize(10);
if (lastRow != null) {
    // Resume strictly after the last row of the previous page.
    scan.setRange(new Range(new Text(lastRow), false, null, true));
}

Map<String, Integer> columns = new TreeMap<>();

// Server-side filter: keep only entries whose column qualifier matches "field".
IteratorSetting iter = new IteratorSetting(15, "fieldNames", RegExFilter.class);
String rowRegex = null;
String colfRegex = null;
String colqRegex = "field";
String valueRegex = null;
boolean orFields = false;
RegExFilter.setRegexs(iter, rowRegex, colfRegex, colqRegex, valueRegex, orFields);
scan.addScanIterator(iter);

int fetchCount = 0;
Iterator<Map.Entry<Key, org.apache.accumulo.core.data.Value>> iterator = scan.iterator();
while (iterator.hasNext()) {
    Map.Entry<Key, org.apache.accumulo.core.data.Value> entry = iterator.next();
    String columnName = entry.getKey().getRow().toString();
    Integer entryCount = Integer.parseInt(entry.getValue().toString());
    columns.put(columnName, entryCount);
    fetchCount++;
    if (fetchCount > scan.getBatchSize()) {
        // Remember where the next page should resume.
        lastRow = entry.getKey().getRow();
        break;
    }
}

scan.close();
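The pagination logic itself looks sound. As an Accumulo-free sanity check, the same exclusive-start pattern used in the Range above can be exercised against a plain NavigableMap (all names here are stand-ins invented for the sketch, nothing from the real app):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.NavigableMap;
import java.util.TreeMap;

// Stand-in for the Accumulo scan: page through a sorted map, resuming
// strictly after the last-seen row, mirroring
// new Range(new Text(lastRow), false, null, true).
public class PageDemo {
    static Map<String, Integer> page(NavigableMap<String, Integer> rows,
                                     String lastRow, int pageSize) {
        // tailMap(lastRow, false) = everything strictly after lastRow,
        // just like the exclusive-start Range.
        NavigableMap<String, Integer> from =
                (lastRow == null) ? rows : rows.tailMap(lastRow, false);
        Map<String, Integer> page = new LinkedHashMap<>();
        for (Map.Entry<String, Integer> e : from.entrySet()) {
            if (page.size() == pageSize) break;
            page.put(e.getKey(), e.getValue());
        }
        return page;
    }

    public static void main(String[] args) {
        NavigableMap<String, Integer> rows = new TreeMap<>();
        for (char c = 'a'; c <= 'e'; c++) rows.put(String.valueOf(c), c - 'a');
        System.out.println(page(rows, null, 2).keySet()); // [a, b]
        System.out.println(page(rows, "b", 2).keySet());  // [c, d]
    }
}
```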

I'd be happy to update my D4M_Schema project with this code if anyone wants
to run it locally to validate the error. I didn't want to push broken code.
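One thing worth checking: the Accumulo client keeps background threads alive (ZooKeeper session, thrift connection pool) that can outlive a webapp redeploy and pin the old classloader, which is exactly the kind of leak that exhausts PermGen. If I remember right, Accumulo 1.5 added org.apache.accumulo.core.util.CleanUp.shutdownNow() for this case; a minimal sketch of wiring it to webapp shutdown follows (the listener class name is made up, and it still needs to be registered in web.xml or via @WebListener):

```java
import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import org.apache.accumulo.core.util.CleanUp;

// Illustrative listener; the class name is invented for this sketch.
public class AccumuloShutdownListener implements ServletContextListener {
    @Override
    public void contextInitialized(ServletContextEvent sce) {
        // nothing to do on startup
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Stop Accumulo client background threads so the old webapp
        // classloader can actually be collected on redeploy.
        CleanUp.shutdownNow();
    }
}
```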
