Solr RefCounted: Don't forget to close SolrQueryRequest or decref solrCore.getSearcher

How Solr uses RefCounted
RefCounted is an important concept in Solr: it keeps track of a reference count on a resource and closes the resource when the count hits zero.
For example, a Solr core reuses SolrIndexSearcher instances; it wraps each searcher in a RefCounted holder and closes the searcher when its count drops to zero.
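The pattern can be sketched in a few lines of self-contained Java. This is a simplified model for illustration, not Solr's actual RefCounted class:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Simplified model of Solr's RefCounted pattern: a holder that closes
// its resource when the reference count drops to zero.
abstract class RefCountedSketch<T> {
  private final T resource;
  private final AtomicInteger refcount = new AtomicInteger(1); // creator holds one reference

  RefCountedSketch(T resource) { this.resource = resource; }

  T get() { return resource; }
  void incref() { refcount.incrementAndGet(); }
  void decref() {
    if (refcount.decrementAndGet() == 0) close(); // last reference gone: release the resource
  }
  abstract void close();
}

public class RefCountedDemo {
  public static void main(String[] args) {
    RefCountedSketch<String> holder = new RefCountedSketch<String>("searcher") {
      void close() { System.out.println("closing " + get()); }
    };
    holder.incref();  // a second user takes a reference
    holder.decref();  // first release: count is 1, still open
    holder.decref();  // second release: count hits 0, close() runs
  }
}
```

The key property: close() runs exactly once, on the last decref(), no matter how many users share the resource.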

When Solr initializes a Solr core, it creates one SolrIndexSearcher and keeps two references to it: one in the searcher list (_realtimeSearchers or _searchers), and one in the variable realtimeSearcher.
org.apache.solr.core.SolrCore.openNewSearcher(boolean, boolean):
      RefCounted<SolrIndexSearcher> newSearcher = newHolder(tmp, searcherList);    // refcount now at 1
      // Increment reference again for "realtimeSearcher" variable.  It should be at 2 after.
      // When it's decremented by both the caller of this method, and by realtimeSearcher being replaced,
      // it will be closed.
      newSearcher.incref();
      realtimeSearcher = newSearcher;
The Solr core keeps these two references to the SolrIndexSearcher until the core is closed/unloaded, which calls SolrCore.closeSearcher(). That decreases the count to 0, so the core releases the searcher's resources and removes its RefCounted holder from the searcher list; with no remaining references to the searcher, GC can reclaim it.
Code: SolrCore.newHolder(SolrIndexSearcher, List<RefCounted<SolrIndexSearcher>>)

When we send a request to a Solr request handler, SolrDispatchFilter creates a SolrQueryRequest, which holds a RefCounted<SolrIndexSearcher> searcherHolder. The request handler can call SolrQueryRequest.getSearcher() to get the SolrIndexSearcher; this increases the searcher's reference count. At the end of the request, SolrDispatchFilter calls SolrQueryRequest.close(), which decreases the reference count again.
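This request lifecycle can be modeled with minimal self-contained stand-ins (simplified classes for illustration, not the real Solr ones): the request takes a reference on first getSearcher() and releases it on close().

```java
import java.util.concurrent.atomic.AtomicInteger;

// Minimal stand-in for the searcher holder: just the counting behavior.
class SearcherHolder {
  final AtomicInteger refcount = new AtomicInteger(1); // 1 = the core's own reference
  boolean closed = false;
  void incref() { refcount.incrementAndGet(); }
  void decref() { if (refcount.decrementAndGet() == 0) closed = true; }
}

// Models SolrQueryRequest: getSearcher() increfs lazily, close() decrefs.
class QueryRequestSketch {
  private final SearcherHolder holder;
  private boolean acquired = false;
  QueryRequestSketch(SearcherHolder holder) { this.holder = holder; }
  SearcherHolder getSearcher() {
    if (!acquired) { holder.incref(); acquired = true; } // first call takes a reference
    return holder;
  }
  void close() {
    if (acquired) { holder.decref(); acquired = false; } // release the reference
  }
}

public class RequestLifecycleDemo {
  public static void main(String[] args) {
    SearcherHolder holder = new SearcherHolder();
    QueryRequestSketch req = new QueryRequestSketch(holder);
    req.getSearcher();
    System.out.println("during request: " + holder.refcount.get()); // core + request = 2
    req.close();
    System.out.println("after close: " + holder.refcount.get());    // back to 1
  }
}
```

If close() is never called, the count never returns to the core-only baseline, which is exactly the leak shown below.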
How we should write our own code
In our own code, if we create a SolrQueryRequest, we have to call close() on it once we no longer need it.
If we call req.getCore().getSearcher(), that increases the searcher's reference count; after we are done with the searcher, we must call decref() on the returned RefCounted holder.

Otherwise we cause a memory leak: the reference count of the searcher never reaches zero, so the searcher is never closed, and caches such as filterCache, queryResultCache, documentCache, and fieldValueCache are never cleaned; the searcher is kept in the core's searcher list forever.
Code example
To demonstrate, I wrote a simple request handler that creates a SolrQueryRequest to run a facet query, but forgets to close the SolrQueryRequest.

public class DemoUnClosedSolrRequest extends RequestHandlerBase {
  public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp)
      throws Exception {
    SolrCore core = req.getCore();
    SolrQuery query = new SolrQuery();
    SolrQueryRequest facetReq = new LocalSolrQueryRequest(core, query);
    try {
      SolrRequestHandler handler = core.getRequestHandler("/select");
      handler.handleRequest(facetReq, new SolrQueryResponse());
    } finally {
      // Don't forget to close SolrQueryRequest:
      // facetReq.close();   <-- deliberately omitted here to demonstrate the leak
    }
  }
}
Test Code

public void unclosedSearcher() throws Exception {
    long startTime = System.currentTimeMillis();
    int i = 0;
    // placeholder URL, same form as in the error message below
    HttpSolrServer server = new HttpSolrServer("http://host:port/solr");
    try {
      for (; i < 100000; i++) {
        // in the server, this request handler will increase the reference
        // count, but forget to decrease it.
        AbstractUpdateRequest request = new UpdateRequest("/demo1");
        server.request(request);
        // Normally commit will close the old searcher and open a new searcher,
        // but in this case the reference count of the old SolrIndexSearcher is not 0,
        // so the old searcher will not be cleaned.
        request = new UpdateRequest("/update");
        request.setParam("commit", "true");
        server.request(request);
      }
    } finally {
      System.out.println("run " + i + " times");
      System.out.println("Took " + (System.currentTimeMillis() - startTime)
          + " mills");
    }
}
If we run this DemoUnClosedSolrRequest and then run a commit: in the normal case, the commit closes the old searcher and opens a new one. But here, because the reference count of the old searcher is not zero, the old searcher is never closed; it is kept in memory forever, including its caches, as nothing removes it from the core's searcher list, so GC cannot reclaim it.
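This mechanism can be simulated with self-contained stand-ins (a simplified model for illustration, not real Solr classes): on "commit" the core drops its own reference to the old searcher, but a leaked reference keeps the count above zero, so the old searcher is never closed or removed from the list.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Simplified model (not a real Solr class) of a ref-counted searcher.
class SearcherModel {
  final AtomicInteger refcount = new AtomicInteger();
  boolean closed = false;
}

public class LeakOnCommitDemo {
  static final List<SearcherModel> searcherList = new ArrayList<>();

  static SearcherModel openSearcher() {
    SearcherModel s = new SearcherModel();
    s.refcount.incrementAndGet();   // the core's own reference
    searcherList.add(s);
    return s;
  }

  // Like Solr's holder: when the count hits zero, close and drop from the list.
  static void decref(SearcherModel s) {
    if (s.refcount.decrementAndGet() == 0) {
      s.closed = true;
      searcherList.remove(s);
    }
  }

  public static void main(String[] args) {
    SearcherModel old = openSearcher();
    old.refcount.incrementAndGet(); // leaky handler: incref without a matching decref

    openSearcher();                 // "commit" opens a new searcher...
    decref(old);                    // ...and drops the core's reference to the old one

    // The leaked reference keeps the old searcher open and pinned in the list.
    System.out.println("old searcher closed: " + old.closed);
    System.out.println("searchers in list: " + searcherList.size());
  }
}
```

Every commit adds one more pinned searcher, which is why heap usage grows without bound in the test below.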

Test Result

After running 6817 times (58 minutes), the client fails:

org.apache.solr.common.SolrException: Server at http://host:port/solr returned non ok status:500, message:Server Error
run 6817 times
Took 3513530 mills

In the server, it throws OutOfMemoryError:
SEVERE: null:java.lang.RuntimeException: java.lang.OutOfMemoryError: Java heap space
        at org.apache.solr.servlet.SolrDispatchFilter.sendError(
        at Source)
Caused by: java.lang.OutOfMemoryError: Java heap space
        at org.apache.lucene.util.OpenBitSet.<init>(
        at org.apache.solr.handler.component.QueryComponent.process(
        at org.apache.solr.handler.component.SearchHandler.handleRequestBody(
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(
        at com.commvault.solr.handler.DemoUnClosedSearcher.handleRequestBody(
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(
Memory usage increases constantly.
Solr Admin Console
In the Solr Admin console, we can see many SolrIndexSearcher instances.
CPU usage
After the JVM throws OutOfMemoryError, it tries hard to run GC, with no effect at all, because the searchers are kept in the core's searcher list forever.
Another example
This example demonstrates what happens if we forget to decref the holder returned by req.getCore().getSearcher(). The same thing happens: the old searcher is never cleaned up; instead it is kept in the core's searcher list forever.
public class DemoUnClosedSearcher extends RequestHandlerBase {
  public void handleRequestBody(SolrQueryRequest req, SolrQueryResponse rsp)
      throws Exception {
    RefCounted<SolrIndexSearcher> refCounted = req.getCore().getSearcher();
    try {
      SolrIndexSearcher searcher = refCounted.get();
      String qstr = "datatype:4";
      QParser qparser = QParser.getParser(qstr, "lucene", req);
      Query query = qparser.getQuery();
      int topn = 1;
      TopDocs topDocs =, topn);
      for (int i = 0; i < topDocs.totalHits; i++) {
        ScoreDoc match = topDocs.scoreDocs[i];
        Document doc = searcher.doc(match.doc);
        // use doc ...
      }
    } finally {
      // Don't forget to decref the RefCounted<SolrIndexSearcher>:
      // refCounted.decref();   <-- deliberately omitted here to demonstrate the leak
    }
  }
}

Lesson Learned
1. Read the documentation/javadoc of the APIs we use.
For example, the javadoc of SolrQueryRequest tells us it is not thread safe, so we shouldn't share it across threads, and that we must call its close() method explicitly when we no longer need it.

Likewise for SolrCore.getSearcher(): the returned reference must be decremented when no longer needed. The same goes for SolrCoreState.getIndexWriter().
2. Use tools like VisualVM to monitor threads and memory usage.
3. Solr exposes its resources via JMX; we can check the values in the Solr Admin UI or in VisualVM.
4. Generate and analyze a heap dump with VisualVM or Eclipse MAT; use OQL to find the instances and check their values.