
How can I return 1.7 GB of MongoDB documents when a query is made to the server in a web app?

  • In a Spring MVC + MongoDB application, I have 400,000 (4 lakh) documents. If a query needs to return 300,000 (3 lakh) of them, how can I do that? The request fails with the stack trace below. I guess this is because the result set returned is very large. How can I manage this? I raised the Tomcat memory setting to 4096M, but I still have the problem.

    HTTP Status 500 - Request processing failed; nested exception is
    java.lang.IllegalArgumentException: response too long: 1634887426

    exception
    org.springframework.web.util.NestedServletException: Request processing failed;
    nested exception is java.lang.IllegalArgumentException: response too long: 1634887426
        org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:973)
        org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:863)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:646)
        org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:727)

    root cause
    java.lang.IllegalArgumentException: response too long: 1634887426
        com.mongodb.Response.<init>(Response.java:49)
        com.mongodb.DBPort$1.execute(DBPort.java:141)
        com.mongodb.DBPort$1.execute(DBPort.java:135)
        com.mongodb.DBPort.doOperation(DBPort.java:164)
        com.mongodb.DBPort.call(DBPort.java:135)
        com.mongodb.DBTCPConnector.innerCall(DBTCPConnector.java:292)
        com.mongodb.DBTCPConnector.call(DBTCPConnector.java:271)
        com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:84)
        com.mongodb.DBCollectionImpl.find(DBCollectionImpl.java:66)
        com.mongodb.DBCursor._check(DBCursor.java:458)
        com.mongodb.DBCursor._hasNext(DBCursor.java:546)
        com.mongodb.DBCursor.hasNext(DBCursor.java:571)
        org.springframework.data.mongodb.core.MongoTemplate.executeFindMultiInternal(MongoTemplate.java:1803)
        org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:1628)
        org.springframework.data.mongodb.core.MongoTemplate.doFind(MongoTemplate.java:1611)
        org.springframework.data.mongodb.core.MongoTemplate.find(MongoTemplate.java:535)
        com.AnnaUnivResults.www.service.ResultService.getStudentList(ResultService.java:38)
        com.AnnaUnivResults.www.service.ResultService$$FastClassBySpringCGLIB$$1f19973d.invoke(<generated>)
        org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
        org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:711)
        org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:157)
        org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:136)
        org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:179)
        org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:644)
        com.AnnaUnivResults.www.service.ResultService$$EnhancerBySpringCGLIB$$f9296292.getStudentList(<generated>)
        com.AnnaUnivResults.www.controller.ResultController.searchStudentByCollOrDept(ResultController.java:87)
        sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        java.lang.reflect.Method.invoke(Method.java:597)
        org.springframework.web.method.support.InvocableHandlerMethod.invoke(InvocableHandlerMethod.java:215)
        org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:132)
        org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:104)
        org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandleMethod(RequestMappingHandlerAdapter.java:749)
        org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:690)
        org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:83)
        org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:945)
        org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:876)
        org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:961)
        org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:863)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:646)
        org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:837)
        javax.servlet.http.HttpServlet.service(HttpServlet.java:727)

  • Answer:

    You don't. The problem here is not a configuration setting; it's a technical misunderstanding. 2 GB of data is simply too much to ship over HTTP in response to a user query, for two major reasons:

    HTTP makes no guarantee of bandwidth. Are you really going to let a user on a 3G wireless connection initiate a 2 GB download? Are you prepared to explain to the user that their query will take minutes to complete and consume their entire mobile bandwidth allowance for the month? What happens if your web server is restarted 80% of the way through?

    You want to be very careful about letting users run the kind of query that can effectively lock up your web server or your database.

    So what to do? Use paging. You should really be showing results in small chunks: think 1 to 100 results at a time. If your users need a big data dump, put the data somewhere for non-HTTP download: run the query offline, zip up the results in a file, and dump it to a shared folder or network drive for them to download. Big data downloads are a common problem, and the answer is almost always to move the processing offline and provide a standard location to pick up the results.

Gaëtan Voyer-Perrault at Quora
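The paging this answer recommends reduces to a skip/limit window per request. A minimal sketch of just the paging arithmetic follows; the class and method names (`PageWindow`, `skipFor`, `totalPages`) are illustrative, and the actual driver calls (e.g. `DBCursor.skip()`/`limit()`, which would receive these values) are omitted because they need a live MongoDB:

```java
// Hypothetical helper: turn a 0-based page number into the offset you
// would pass to a MongoDB query's skip(), with limit() = pageSize.
public class PageWindow {
    public static int skipFor(int page, int pageSize) {
        if (page < 0 || pageSize <= 0) throw new IllegalArgumentException();
        return page * pageSize;
    }

    public static long totalPages(long totalDocs, int pageSize) {
        return (totalDocs + pageSize - 1) / pageSize;  // ceiling division
    }

    public static void main(String[] args) {
        // 300,000 documents in pages of 100 -> 3,000 modest responses
        // instead of one 1.6 GB reply.
        System.out.println(totalPages(300_000, 100));  // 3000
        System.out.println(skipFor(3, 100));           // 300
    }
}
```

Each request then returns at most `pageSize` documents, so no single reply can approach the driver's response-size ceiling.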


Other answers

With this data size, I believe this is a case where the web tier needs to be split from the core, with separate servers and system processes. The web layer handles the interaction with the client and simulates client state with sessions. The core talks to the DB, does the pagination described in the other answers, and reduces the MongoDB documents to a domain model the web layer can use.

Miguel Paraz

The answer is to build your results incrementally. For example, if you need to populate an HTML table on a web page, don't return all the rows; serve only a limited amount of data, and when the user wants to see more, use paging to display a small window of data on demand. Or, if you really do need to write the ENTIRE result to disk (or to the network, as the case might be if you were handling a web-service call), open a file, fetch N documents, append them to the file, fetch the next N, append those, and so on, closing the file at the end. You only ever use a small buffer to hold results while you write the data out. The point is to avoid filling any buffer too far, since otherwise you're going to blow up your heap.

Michael Chang
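The fetch-N/append-N loop described above can be sketched as follows. This is a simplified model, not a real API: the `Iterator` stands in for a MongoDB cursor and the `sink` list stands in for the open file, and all names here are made up for illustration:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

// Process an arbitrarily large result set while only ever buffering
// batchSize items at a time.
public class BatchWriter {
    public static int writeInBatches(Iterator<String> docs, int batchSize,
                                     List<String> sink) {
        List<String> buffer = new ArrayList<>(batchSize);
        int written = 0;
        while (docs.hasNext()) {
            buffer.add(docs.next());
            if (buffer.size() == batchSize) {
                sink.addAll(buffer);     // "append N docs to the file"
                written += batchSize;
                buffer.clear();          // heap use stays O(batchSize)
            }
        }
        sink.addAll(buffer);             // flush the final partial batch
        return written + buffer.size();
    }
}
```

However large the cursor's result set, the live buffer never exceeds `batchSize` entries, which is exactly the "small buffer" property the answer relies on.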

Most of the answers you are getting are wrong: you can get 2 GB over HTTP, for example when you download a huge installer for your laptop. If the connection is stable, it will work. My recommendation is to use a file handler to redirect the stream to disk (not on Windows) and then redirect the user's browser to an Apache-style static link, so they fetch what is effectively a static resource from disk. Setting this up requires some professional coding and configuration skills.

Raul Lapeira
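The dump-to-disk idea above can be sketched roughly as below. All names here are assumptions for illustration: `OfflineDump` and `dump` are made up, the `Iterator` stands in for a MongoDB cursor, and `dumpDir` stands in for a directory your web server (Apache, nginx, etc.) exposes as static content:

```java
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.io.UncheckedIOException;
import java.io.Writer;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Iterator;
import java.util.zip.GZIPOutputStream;

// Write query results to a gzipped file on disk, one document at a
// time, then let the user download that file via a static URL instead
// of a 2 GB dynamic HTTP response.
public class OfflineDump {
    public static Path dump(Iterator<String> docs, Path dumpDir, String name) {
        try {
            Files.createDirectories(dumpDir);
            Path out = dumpDir.resolve(name + ".json.gz");
            try (Writer w = new OutputStreamWriter(
                    new GZIPOutputStream(Files.newOutputStream(out)), "UTF-8")) {
                while (docs.hasNext()) {   // one document per iteration:
                    w.write(docs.next());  // constant memory, never 2 GB at once
                    w.write('\n');
                }
            }
            return out;  // hand the user a static link to this file
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

The web request then only needs to return the link (or kick off the dump job), and the heavy transfer happens as an ordinary static-file download with resumability handled by the web server.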

Sending a 2 GB document over HTTP is trivial; websites do this all the time. The key is to avoid reading the entire document into your JVM at one time. Instead, as you read the data from MongoDB, read the bytes into a byte buffer, and as the buffer fills, write the data out to the HTTP response stream. Don't try to hold the entire document in memory. This programming pattern is called streaming, and it is why most core Java I/O interfaces (including HTTP servlets) use InputStream and OutputStream. Additionally, you can use a gzip compression filter to transparently compress the HTTP response, further reducing the time it takes the end user to download the data. As others have mentioned, if you can use pagination or other techniques to minimize the size of the response (use-case dependent), that's always a good approach as well. But even with pagination, it's good to keep the JVM heap small by streaming data.

Ben Kibler
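The streaming pattern described above can be sketched as a fixed-buffer copy loop. The class and method names are mine, not a real servlet API; in a real application `out` would be the servlet's `response.getOutputStream()`:

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;

// Copy from an InputStream (the data coming out of MongoDB) to an
// OutputStream (the HTTP response) through a small fixed buffer, so
// heap use is constant regardless of how large the document is.
public class StreamCopy {
    public static long copy(InputStream in, OutputStream out) {
        try {
            byte[] buf = new byte[8192];   // only 8 KB resident at a time
            long total = 0;
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);      // push each chunk straight out
                total += n;
            }
            out.flush();
            return total;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

To get the transparent gzip the answer mentions, you could wrap the response stream in `java.util.zip.GZIPOutputStream` or let a servlet compression filter do it.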
