How do I transfer an entire Oracle database into memory and make it work as if it were on the hard drive?
-
Background:
1. The database is around 15 GB to 20 GB in size.
2. The web pages must be generated dynamically; static pages are not an option.
3. We do not have reliable financial, technical, or management resources to deploy a brand-new in-memory database.
4. The system this database supports is already an online production system. There is no way we can change the type of database now, and the maintenance window we have (i.e., time during which we can shut the system down) is about one day on the weekend. What we are using now is an Oracle 11g database.
The problem: Because of the complexity of the company's business, numerous cross-table queries run against that database. This makes the rendering of website pages rather slow: each time a page is refreshed, a large number of queries must be executed across many tables.
The approach: Is there any way we can load a copy of the Oracle database into the server's memory? That way, the queries would execute much faster. But we must also keep the data synchronized: (1) whenever data is written into the in-memory database, it must also be written to the database on the hard drive; (2) whenever the server reboots, a copy of the on-disk database must be loaded into memory; (3) queries are executed only against the in-memory database. Aside from the idea of putting an entire database into memory, are there other ways we can solve the problem? Thanks.
-
Answer:
Firstly, thanks for your detailed explanation. There are several ways you can approach this problem. As I understand it, you need to ensure that the tables actually being queried are available in memory for faster access. There is really no point in putting the whole database in memory; it is unnecessary and expensive, and the Oracle database is already designed to use its cache optimally.
First, understand why the table structures are so complex. Is there any way you can reduce that complexity by normalizing the current table structure further?
Second, are you keeping tabs on the statistics of the frequently used tables? For example, a TABLE_A with 5 million rows may return results faster than a TABLE_B with just 1 million rows. When that happens, you need to compare the two tables and understand how and why Oracle is responding to each the way it does, which can be a little tricky.
Then look at the storage. Are there any indexes, and of what type? Are you able to pull an EXPLAIN PLAN for your queries and review their performance? Are they hitting the indexes or doing full table scans? (A minimal sketch of this appears at the end of this answer.) Also bear in mind that operating system settings play a crucial role here: ensure that your OS user (e.g. oracle) has all the necessary privileges and has been assigned the maximum memory usage permitted by the OS administrator. Answering these questions will point you toward simplifying the database, rather than jumping the gun and putting the whole database in memory.
That said, Oracle does provide features to ensure that certain table blocks never leave memory. I will list them here:
1. Keep Pool. Oracle has a feature called the KEEP buffer pool. Link - http://docs.oracle.com/cd/B19306_01/server.102/b14211/memory.htm#sthref410 The goal of the KEEP buffer pool is to retain objects in memory, thus avoiding I/O operations. The size of the KEEP buffer pool therefore depends on the objects you want to keep in the buffer cache. You can compute an approximate size for it by adding together the blocks used by all objects assigned to the pool. If you gather statistics on the segments, you can query DBA_TABLES.BLOCKS and DBA_TABLES.EMPTY_BLOCKS to determine the number of blocks used. In most of the projects I have worked on, "fastest access" is naturally the primary requirement, and the KEEP buffer pool is a good way to deal with it, but you need to ensure that the SGA is sized generously and that your server can support that requirement. (See the sizing sketch below.)
2. ALTER TABLE <table_name> CACHE; This is another technique to keep your hot tables in memory. It is not always recommended: the buffer cache works on an LRU (least recently used) algorithm, so if you pin unnecessary blocks in memory, you run the risk of evicting other tables' blocks and hurting overall performance. Link - https://asktom.oracle.com/pls/asktom/f?p=100%3A11%3A0%3A%3A%3A%3AP11_QUESTION_ID%3A253415112676 (A quick way to check what is actually cached is also sketched below.)
Now, coming to your data sync requirement: you should consider configuring Data Guard for your production database. That way, whatever transactions are recorded in your production database are copied to a standby database. This is not an expensive option, and many companies use it as a safeguard technique.
I hope this helps!
P.S. - This is not professional advice. Since I am an outsider, I can only give direction on how this problem might be resolved. Kindly consult your database admins for further assistance.
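Here is a minimal sketch of pulling an execution plan. The ORDERS and CUSTOMERS tables and the query itself are hypothetical placeholders; substitute one of your own slow queries:

  EXPLAIN PLAN FOR
    SELECT o.order_id, c.customer_name
    FROM   orders o
    JOIN   customers c ON c.customer_id = o.customer_id
    WHERE  o.order_date > SYSDATE - 7;

  -- Display the plan just generated; look for INDEX RANGE SCAN vs TABLE ACCESS FULL
  SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);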
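And a sketch of sizing and using the KEEP pool, again assuming hypothetical hot tables ORDERS and CUSTOMERS, and assuming your SGA has the headroom (the 2G figure is purely illustrative):

  -- Approximate the blocks the candidate tables occupy (gather statistics first)
  SELECT table_name, blocks, empty_blocks
  FROM   dba_tables
  WHERE  table_name IN ('ORDERS', 'CUSTOMERS');

  -- Size the KEEP pool accordingly
  ALTER SYSTEM SET db_keep_cache_size = 2G;

  -- Assign the hot tables to the KEEP pool
  ALTER TABLE orders    STORAGE (BUFFER_POOL KEEP);
  ALTER TABLE customers STORAGE (BUFFER_POOL KEEP);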
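Finally, one way to verify how many blocks of a table are actually sitting in the buffer cache (ORDERS again being a placeholder name):

  -- Count buffer-cache blocks currently held for a given table
  SELECT o.object_name, COUNT(*) AS cached_blocks
  FROM   v$bh b
  JOIN   dba_objects o ON o.data_object_id = b.objd
  WHERE  o.object_name = 'ORDERS'
  GROUP  BY o.object_name;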
You can always PM me with any other specific questions.
Ajinkya Soitkar at Quora
Other answers
The problem is that you have jumped to a solution (i.e., put the database in memory) without knowing or understanding the problem. I would start by looking at the execution plan (i.e., get a SQL Monitor report, as sketched below) and seeing where the time is being spent.
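For reference, something along these lines pulls a text SQL Monitor report on 11g. The sql_id here is a placeholder, and note that the feature requires the Diagnostics and Tuning Pack licenses:

  -- Text report for one monitored statement; replace the sql_id with your own
  SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
           sql_id => 'abcd1234efgh5',
           type   => 'TEXT') AS report
  FROM   dual;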
Bob Carlin
The simplest (simplistic?) approach is to use the available memory as buffer cache for your database. I say simplistic because, as Bob mentioned, you are jumping to a conclusion: you assume that I/Os are slowing down your application. Is that a fact? Or is the server maxing out on CPU? Or is it paging to death? Did you do the essential performance analysis? If the issue is indeed I/O, are you sure those queries are properly executed? Are the query plans correct? Are all the proper indexes in place? Did you identify the top consumers using Oracle's administration tools (one common query for this is sketched below)? Did you optimize those queries? Is the application correctly written? Does it use relational operations the right way, or does it perform table joins manually? Blindly pulling the whole database into memory as a brute-force approach is unlikely to bring long-term relief. A badly designed database and/or badly designed application will remain so, and will remain sluggish even when given more resources to run.
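As one concrete starting point for the "top consumers" question, here is a common sketch against V$SQL; ordering by cumulative elapsed time is just one reasonable choice (buffer_gets or disk_reads are alternatives):

  -- Ten most expensive statements by cumulative elapsed time
  SELECT *
  FROM  (SELECT sql_id, executions, ROUND(elapsed_time / 1e6) AS elapsed_sec,
                buffer_gets, disk_reads, sql_text
         FROM   v$sql
         ORDER  BY elapsed_time DESC)
  WHERE  rownum <= 10;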
Albert Godfrind