Is it normal that inserting to MongoDB takes random time in PHP?

  • I have a web application written in PHP and backed by MongoDB. For each web access, one access-log document is written to a MongoDB collection named access in a database named visits_log. That database holds only this one collection, to reduce write-lock contention.

    Today, while checking timing performance, I found that writes to the access collection occasionally spike: normally an insert takes about 1 ms, but roughly 1 out of 20 inserts takes 20-80 ms. The collection receives about 5 writes per second, and I can't find any other code that would interfere with the writes. A small MongoDB shell script in JavaScript that continuously writes to the same collection steadily takes 1 ms per write, so the problem seems to surface only where PHP meets MongoDB. The driver is http://pecl.php.net/package/mongo, version 1.3.4.

    Is it normal for PHP writes to MongoDB to vary in performance like this? What can I do to dig deeper into this issue?

  • Answer:

    > Is it normal that inserting to MongoDB takes random time...?

    It's not "normal", but it definitely happens, and it can happen regardless of the language you are using. Some key follow-up questions:

      • What is your WriteConcern here? "Safe", "Fsync", "Journal"?
      • How much RAM and how much data do you have? How big are the indexes?
      • Are you using custom IDs? Are you "appending to the end" of the index?
      • Do you have information about what your server is doing during the slowdowns? CPU? Page faults? IO? Network?

    The behaviour you have described may be completely attributable to what the server is doing. So before jumping into any PHP problems, you should start by ensuring that the server is not having problems.

Gaëtan Voyer-Perrault at Quora
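
To make both the timing and the write concern explicit, the insert call can be instrumented directly in PHP. The following is a minimal sketch, assuming the legacy pecl mongo 1.3.x driver (MongoClient) mentioned in the question; the connection URI, document fields, and 10 ms threshold are illustrative assumptions, and only the visits_log/access names come from the question.

```php
<?php
// Latency probe for inserts into visits_log.access.
// Assumes the legacy pecl mongo 1.3.x driver (MongoClient); the connection
// URI, document fields and slow-insert threshold are placeholder values.

$client     = new MongoClient('mongodb://localhost:27017');
$collection = $client->selectDB('visits_log')->selectCollection('access');

$slowThresholdMs = 10;  // arbitrary cutoff for "slow"

for ($i = 0; $i < 1000; $i++) {
    $doc = array(
        'path'       => '/some/page',   // placeholder payload
        'visited_at' => new MongoDate(),
    );

    $start = microtime(true);

    // Make the write concern explicit so you know what you are measuring:
    // w=1 waits for the primary's acknowledgement; adding 'j' => true or
    // 'fsync' => true also waits for the journal or a disk flush and costs
    // considerably more per write.
    $collection->insert($doc, array('w' => 1));

    $elapsedMs = (microtime(true) - $start) * 1000;
    if ($elapsedMs > $slowThresholdMs) {
        error_log(sprintf('slow insert #%d: %.1f ms', $i, $elapsedMs));
    }
}
```

Logging the timestamps of the slow inserts lets you line them up against server-side metrics (page faults, I/O, CPU) and see whether the spikes originate on the server or on the PHP side.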


Other answers

An index is normally fast to update when it is resident in RAM. But sometimes the value you insert touches a part of the index that isn't in RAM yet, or that has been paged out to disk, so MongoDB has to hit the disk to load that part back into memory. That pretty much kills performance, it is consistent with the degree of slowdown you report, and it happens at somewhat unpredictable times because it depends on which parts of the index happen to be in memory when you insert a new document. See also http://docs.mongodb.org/manual/faq/indexes/#what-happens-if-an-index-does-not-fit-into-ram

To avoid the slowdowns, upgrade the RAM on your database server until the working set of your data plus the indexes fits in memory. See http://www.10gen.com/presentations/diagnostics-performance-tuning for some tips on monitoring MongoDB memory usage. You can also upgrade your disks to SSD so that when MongoDB is forced to touch the disk, the penalty is not as large. Understand, however, that SSD access times are still orders of magnitude slower than RAM access times: SSD is not a substitute for a larger RAM cache, it only softens the blow when data isn't in the cache.

Bill Karwin
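
A quick way to check the conditions this answer describes, namely whether the data plus indexes still fit in RAM and whether the server is page-faulting during the spikes, is to run the standard collStats and serverStatus commands. Below is a small sketch using the same legacy pecl mongo driver; the connection URI is an assumption, and the exact fields returned by serverStatus (for example extra_info.page_faults) depend on the server platform and version.

```php
<?php
// Rough working-set check for visits_log.access, along the lines of the
// answer above. Assumes the legacy pecl mongo 1.3.x driver; the connection
// URI is a placeholder.

$client = new MongoClient('mongodb://localhost:27017');
$db     = $client->selectDB('visits_log');

// Collection data and index sizes, reported in bytes by collStats.
$stats = $db->command(array('collStats' => 'access'));
printf("data size:   %.1f MB\n", $stats['size'] / 1048576);
printf("index size:  %.1f MB\n", $stats['totalIndexSize'] / 1048576);

// Server-wide memory and fault counters. A page-fault counter that climbs
// during the slow inserts suggests the working set no longer fits in RAM.
$status = $db->command(array('serverStatus' => 1));
printf("resident memory: %d MB\n", $status['mem']['resident']);
if (isset($status['extra_info']['page_faults'])) {
    printf("page faults:     %d\n", $status['extra_info']['page_faults']);
}
```

If the total index size plus the hot portion of the data approaches or exceeds resident memory, the occasional 20-80 ms inserts are exactly the disk hits described above.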
