Presuming you're using the Lily indexer[1], yes, it relies on HBase's
built-in cross-cluster replication.

The replication system stores WALs until it can successfully send them
for replication. If you look in ZK you should be able to see which
region server(s) are waiting to send those WALs over. The easiest way
to do this is probably to look at the "zk dump" web page on the
Master's web UI[2].
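
If you'd rather check from a shell, here's a minimal sketch using the
ZooKeeper CLI that ships with HBase. It assumes the default znode
layout where replication queues live under /hbase/replication/rs
(adjust for your zookeeper.znode.parent if it differs):

    # open the ZK CLI bundled with HBase
    hbase zkcli

    # list the region servers that currently hold replication queues
    ls /hbase/replication/rs

    # drill into one region server, then one peer, to see its queued WALs
    ls /hbase/replication/rs/<host,port,startcode>
    ls /hbase/replication/rs/<host,port,startcode>/<peer-id>

A region server with a long list of WALs under your peer's znode is
the one that's falling behind.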

Once you have the particular region server(s), take a look at their
logs for messages about difficulty sending edits to the replication
peer you have set up for the destination Solr collection.
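
For example, something like this (the log path is a guess for your
install; ReplicationSource is the class that ships edits, so its
messages are a reasonable place to start):

    # scan a suspect region server's log for replication shipping trouble
    grep -i "ReplicationSource" /var/log/hbase/hbase-*-regionserver-*.log \
      | grep -iE "warn|error|exception"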

If you remove the WALs then the Solr collection will have a hole in
it. Depending on how far behind you are, it might be quicker to:

1) remove the replication peer,
2) wait for the old WALs to clear,
3) re-enable replication,
4) use a batch indexing tool to index the data already in the table
   (sketched below).
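
A rough sketch of those steps, assuming you manage the peer from the
HBase shell and use the hbase-indexer MapReduce job for the batch
pass. The peer id, indexer name, ZK quorum, and jar path are
placeholders, and the exact flags should be checked against the
hbase-indexer docs:

    # 1) remove the replication peer feeding the indexer
    echo "remove_peer 'myPeer'" | hbase shell

    # 2) wait for the queued WALs to drain (watch the zkcli listing above)

    # 3) re-add the peer to re-enable replication
    echo "add_peer 'myPeer', 'indexer-zk-host:2181:/hbase'" | hbase shell

    # 4) batch-index the data already in the table
    hadoop jar hbase-indexer-mr-*-job.jar \
      --hbase-indexer-zk indexer-zk-host:2181 \
      --hbase-indexer-name myIndexer \
      --reducers 0

Note that if the indexer created the peer itself, deleting and
re-adding the indexer definition may be the cleaner way to manage it.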

[1]: http://ngdata.github.io/hbase-indexer/

[2]: The specifics will vary depending on your installation, but the
page is essentially at a URL like
https://active-master-host.example.com:22002/zk.jsp

The link is on the Master UI landing page, near the bottom, in the
description of the "ZooKeeper Quorum" row; it's at the end of
"Addresses of all registered ZK servers. For more, see zk dump."
