Flume >> mail # user >> Custom sink - "close() called when transaction is OPEN" error


Hi,

I have a custom sink which has been working fine, but recently I have
started seeing this error in the logs:

Unable to deliver event. Exception follows.
java.lang.IllegalStateException: close() called when transaction is OPEN -
you must either commit or rollback first
        at
com.google.common.base.Preconditions.checkState(Preconditions.java:176)
...
After googling and finding
https://issues.apache.org/jira/browse/FLUME-1089, I have double-checked that
I am using the same try/catch/finally idiom the other sinks use, and I
appear to be doing the same thing:

public Status process() throws EventDeliveryException {
    Status status = Status.READY;

    Channel channel = getChannel();
    Transaction transaction = channel.getTransaction();

    try {
        transaction.begin();

        // does a bit of processing and
        // writes out the event to MongoDB

        transaction.commit();
    } catch (Throwable t) {
        transaction.rollback();

        if (t instanceof Error) {
            throw (Error) t;
        } else if (t instanceof EventDeliveryException) {
            throw (EventDeliveryException) t;
        } else if (t instanceof ChannelException) {
            logger.error("Brodie Log Sink " + getName() + ": Unable to get event from" +
                " channel " + channel.getName() + ". Exception follows.", t);
            status = Status.BACKOFF;
        } else {
            throw new EventDeliveryException("Failed to send events", t);
        }
    } finally {
        transaction.close();
    }

    return status;
}

All of this code came from looking at other sinks (Avro and HDFS), so I am
pretty sure it's correct.

Can anyone see anything that might be a problem, or is there anything else
I can do to avoid this error?

Thanks,
Andrew
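One way this error can occur even with the standard idiom (a sketch, not a claim about this particular sink): the stack trace points at a Preconditions.checkState guard that fires when close() runs on a transaction that was neither committed nor rolled back. If rollback() itself throws inside the catch block, the transaction stays OPEN, and the IllegalStateException from close() in finally then supersedes and masks the original failure. The stand-in Txn class below is hypothetical (not the real Flume API) and exists only to demonstrate that sequence; the guarded variant wraps rollback() and close() in their own try/catch so the root cause is not hidden.

```java
// Hypothetical stand-in for a Flume-style transaction, used only to
// demonstrate how a failing rollback() leads to "close() called when
// transaction is OPEN". Not the real org.apache.flume API.
public class TxnDemo {

    enum State { NEW, OPEN, COMMITTED, ROLLED_BACK, CLOSED }

    static class Txn {
        State state = State.NEW;
        final boolean failRollback;

        Txn(boolean failRollback) { this.failRollback = failRollback; }

        void begin()  { state = State.OPEN; }

        void commit() { throw new RuntimeException("simulated commit failure"); }

        void rollback() {
            if (failRollback) {
                // e.g. the channel is full or broken; the transaction stays OPEN
                throw new RuntimeException("simulated rollback failure");
            }
            state = State.ROLLED_BACK;
        }

        void close() {
            if (state == State.OPEN) {
                throw new IllegalStateException(
                    "close() called when transaction is OPEN - "
                    + "you must either commit or rollback first");
            }
            state = State.CLOSED;
        }
    }

    // The idiom from the post: rollback() is not guarded, so if it throws,
    // close() in the finally block hits a still-OPEN transaction and its
    // IllegalStateException replaces the original rollback failure.
    static String unguarded(Txn txn) {
        try {
            txn.begin();
            txn.commit();
            return "ok";
        } catch (Throwable t) {
            txn.rollback();      // if this throws, state stays OPEN
            return "rolled back";
        } finally {
            txn.close();         // throws IllegalStateException when OPEN
        }
    }

    // Defensive variant: rollback() and close() each get their own guard,
    // so the original exception is what gets seen, not the close() failure.
    static String guarded(Txn txn) {
        try {
            txn.begin();
            txn.commit();
            return "ok";
        } catch (Throwable t) {
            try {
                txn.rollback();
            } catch (Throwable rt) {
                // log and continue; the original failure is what matters
            }
            return "rolled back";
        } finally {
            try {
                txn.close();
            } catch (IllegalStateException ise) {
                // transaction was left OPEN by a failed rollback; log it
            }
        }
    }
}
```

If the MongoDB write can leave the channel in a state where rollback() throws, the error in the log would be the masking close() exception rather than the real cause, which matches seeing this appear "recently" without the sink code changing.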