
Is "millions" of small data items a "large amount" for a database?

  • Or can any database handle this comfortably? For example, at what data size (in bytes or rows) does an indexed table become unmanageable in MySQL, SQLite, Postgres, Oracle, etc.? At what point do you go beyond the capacities of, say, an 8GB server with a TB of disk space? This is a follow-up to an earlier question.

  • Answer:

    As long as the index fits in RAM, things are usually fast enough.

Richard Jones at Quora
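A rough back-of-envelope sketch of that rule of thumb, in Python. Every figure here is an assumption for illustration (a 64-bit integer key, a modest B-tree overhead factor), not a measurement from any of the systems mentioned; the point is simply that an index over 100 million small keys can still fit in the 8GB of RAM the question mentions.

```python
# Back-of-envelope estimate: would an integer-keyed B-tree index fit in RAM?
# Every number below is an assumption for illustration, not a measurement.
row_count = 100_000_000        # hypothetical table size
key_bytes = 8                  # e.g. a 64-bit integer key
pointer_bytes = 8              # rowid / child pointer stored per entry
btree_overhead = 1.4           # assumed page fill factor and structural overhead

index_bytes = row_count * (key_bytes + pointer_bytes) * btree_overhead
ram_bytes = 8 * 2**30          # the 8GB server from the question

print(f"estimated index size: {index_bytes / 2**30:.1f} GiB")
print("fits in RAM" if index_bytes < ram_bytes else "exceeds RAM")
```

With these assumptions the index comes out around 2 GiB, comfortably inside 8GB; wider keys or composite indexes scale the estimate up proportionally.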


Other answers

Really depends on your data set. As an example, I have a database in MySQL with somewhere over a billion rows (I can't do a count() as it would take too long). That database performs fine for its specific use case, and each row only stores a date, a status id, a user id, and about 3-4 text fields. This is on an 8GB EC2 instance, and I only just recently grew beyond a 500GB EBS volume.

I also have a MongoDB database with about 600MM documents. The data long ago outgrew RAM, but the database (now 1.5TB) performs OK because I only need a portion of it in RAM at a given time. This is on a physical server with 32GB of RAM.

At today's scale, "millions" does not seem like a large amount, but it depends on your data access patterns.

Damon Cortesi
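If you want to see this for yourself, a minimal sketch using Python's built-in sqlite3 module is below: it loads two million small rows similar in shape to those described above, builds a secondary index, and times an indexed lookup. The table and column names are made up for the example, and the exact timings will depend on your hardware.

```python
import random
import sqlite3
import time

# Minimal local experiment: are a couple of million small, indexed rows
# a problem for SQLite? (":memory:" keeps the test self-cleaning; point it
# at a file path instead for a more realistic on-disk run.)
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (id INTEGER PRIMARY KEY, user_id INTEGER, "
    "status INTEGER, created TEXT)"
)
conn.execute("CREATE INDEX idx_events_user ON events(user_id)")

rows = (
    (i, random.randrange(1_000_000), random.randrange(5), "2024-01-01")
    for i in range(2_000_000)
)
conn.executemany("INSERT INTO events VALUES (?, ?, ?, ?)", rows)
conn.commit()

start = time.perf_counter()
matches = conn.execute(
    "SELECT COUNT(*) FROM events WHERE user_id = ?", (42,)
).fetchone()[0]
print(f"indexed lookup over 2M rows: {matches} matches "
      f"in {time.perf_counter() - start:.4f}s")
conn.close()
```

On typical hardware the indexed lookup returns in well under a millisecond, which is consistent with the answers above: "millions" of small rows is routine as long as the relevant indexes stay cheap to search.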
