Sunday, November 17, 2013
Thanks to everyone who attended the November 12 webinar!
Presentation PDF
Webinar recording
Wednesday, October 30, 2013
Becoming an Everyday Oracle Pro - November 12 Webinar
Register for my next webinar on November 12 entitled "Becoming an Everyday Oracle Pro".
About the webinar
Whether you are new to Oracle or a seasoned veteran, you want to do your job to the best of your ability. Each one of us can become an everyday Oracle pro if we strive towards one basic truth: doing something well isn't only about what you know, but about how you apply what you know. Even if you have memorized a lot of information, it's not much good if you can't apply it, and more importantly, understand how and when to apply that knowledge effectively.
You can become an everyday Oracle pro and make even greater contributions to the success and effectiveness of your organization by focusing on a few basic principles.
In this session, you will learn:
- The 3 R's of being an everyday Oracle Pro (Research, Remember, Replicate)
- The difference between memorization and knowledge
- How to think clearly about problem solving
- How to collect and grow your personal collection of helpful tools
This will be the sixth, and final, Embarcadero sponsored webinar for 2013. See you then!
Friday, September 20, 2013
Webinar Follow-up: Execution Plans - Learn by Example
Thanks to everyone who attended the September 17 webinar!
Presentation PDF
Webinar recording
Q&A
Q: Do all these methods of showing execution plans work (dbms_sqltune, dbms_xplan) with Oracle Standard Edition? What about creating extended statistics in Oracle SE? Do all these tips work with SE? Is SQL Monitor a licensed product?
A: You cannot use SQL Monitor reports (dbms_sqltune) with Oracle Standard Edition as they are produced using elements included in the Tuning Pack license which is *not* available on SE. However, you can use dbms_xplan without restriction and all the tips for how to read and analyze plans are the same regardless of the method you use to display plan data.
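Even on SE, for instance, you can pull the plan for a cursor straight out of the shared pool with DBMS_XPLAN.DISPLAY_CURSOR. A minimal sketch (the sql_id shown is just a placeholder):

-- plan of the last statement executed in this session
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);

-- plan of a specific cursor, identified by sql_id and child number
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR('7b2twsn8vgfsq', 0, 'TYPICAL'));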
Q: We have a big table with 80 partitions and gathering statistics on it takes a long time. Is there an easy way to make the collection run more quickly?
A: You could consider using incremental partition statistics if many of the partitions have data that changes infrequently. Generally speaking, make sure to use the default collection parameters (like estimate_percent=>auto_sample_size) and only collect stats when you really need to (after data changes by > 10%). The following two links to the Optimizer Development Team's blog may also be of some help, and a short sketch of the incremental setup follows them:
https://blogs.oracle.com/optimizer/entry/maintaining_statistics_on_large_partitioned_tables
https://blogs.oracle.com/optimizer/entry/incremental_statistics_maintenance_what_statistics
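As a rough sketch of the incremental approach (the schema and table names are made up; requires 11g or later):

BEGIN
  -- keep per-partition synopses so global stats can be derived
  -- without re-scanning unchanged partitions
  DBMS_STATS.SET_TABLE_PREFS('SALES_OWNER', 'BIG_SALES', 'INCREMENTAL', 'TRUE');
  -- subsequent gathers only touch partitions whose data has changed
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname          => 'SALES_OWNER',
    tabname          => 'BIG_SALES',
    granularity      => 'AUTO',
    estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE);
END;
/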
Q: How can the order of filters, joins, etc. in the WHERE clause be controlled manually, either to keep the data set as small as possible throughout execution or simply to force a different plan for educational / what-if purposes?
A: You can control plan operations and the order in which they are executed using hints. Simply inject hints that specify access operations (FULL, INDEX), join methods (USE_HASH, USE_NL), and join order (LEADING). The more hints you provide, the more control you apply to the plan operations.
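Purely as an illustration (table, alias, and index names are made up), a fully hinted two-table join might look like this:

SELECT /*+ LEADING(d e) USE_NL(e) FULL(d) INDEX(e emp_dept_ix) */
       d.department_name, e.last_name
  FROM departments d, employees e
 WHERE e.department_id = d.department_id
   AND d.location_id = 1700;
-- LEADING fixes the join order, USE_NL the join method,
-- FULL and INDEX the access operation for each table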
Q: How do we know if the current execution plan is the best plan or if there is ANY execution plan better than the current execution plan?
A: Test! Remember that the optimizer has gone through numerous alternatives before settling in on the final plan. If the plan chosen isn't performing as well as you'd like, then you must try to determine alternatives (by using hints to force some choices or by rewriting the SQL or adjusting statistics...). There is also the Visual SQL Tuning method you could use to "map" the best order of operations for a SQL statement (see my July webinar for more on VST). The bottom-line is that you have to test to understand the performance of the chosen plan and then find the reasons why it under-performs and correct those root causes.
Q: The SQL execution is in the AWR history but no longer in the shared pool, and we would like to see the execution plan using dbms_xplan. How can we do that?
A: DBMS_XPLAN.DISPLAY_AWR will do the trick. You'll provide the SQL_ID and PLAN_HASH_VALUE (optionally) and the FORMAT parameters you desire.
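A minimal sketch (the sql_id and plan_hash_value are placeholders; querying AWR requires the Diagnostics Pack license):

SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_AWR(
         sql_id          => '7b2twsn8vgfsq',
         plan_hash_value => 3756742612,   -- optional; NULL shows all plans for the sql_id
         db_id           => NULL,
         format          => 'TYPICAL'));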
Q: Is there a way to force an INDEX FAST FULL SCAN instead of an INDEX FULL SCAN?
A: Hints. The INDEX_FFS (table index) hint would force an INDEX FAST FULL SCAN. Don't use the hint indiscriminately, though: if the optimizer "thought" the fast full scan would be better, it would have costed it as such and selected that operation. If you believe you should be getting a fast full scan and are not, try to verify why before you hint the SQL.
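For example (table and index names are made up):

SELECT /*+ INDEX_FFS(e emp_name_ix) */ last_name
  FROM employees e;
-- reads every block of the index with multiblock reads, in no particular order;
-- only sensible when the index alone can answer the query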
Q: When will optimizer choose INDEX FAST FULL SCAN over INDEX FULL SCAN?
A: A fast full scan is similar to a full table scan in that it will read all the blocks (using multiblock reads) in the index without maintaining order. You often see this operation when the query result set can be satisfied from the index contents alone without having to do additional data block accesses. The bottom line (and somewhat cheeky response) is that it will be chosen when the optimizer thinks it is the best choice. If you find otherwise, it's going to be up to you to research and test to discover why the optimizer thought it was best when it actually wasn't.
Q: Is there a way to display the lines of an execution plan in the order in which they are executed, instead of relying on the indentation, which can be VERY tedious for plans of 100+ lines?
A: My favorite script for doing this comes from Randolf Geist; you can find it on his blog at http://oracle-randolf.blogspot.com/2011/12/extended-displaycursor-with-rowsource.html. Of course, you could write your own, but there's no need to reinvent the wheel.
Q: In the SQL Monitor report, what do the columns Time Active(s) and Start Active(s) mean? I've never found good documentation explaining these two.
A: The Time Active(s) column shows how long the operation has been active (the delta in seconds between the first and the last active time). The Start Active(s) column shows, in seconds, when the operation in the execution plan started relative to the SQL statement execution start time.
Q: Can you please tell us in which cases a FULL TABLE SCAN is OK to see in the execution plan? Or is a full table scan always bad?
A: All operations have a good use case! It's critical to **NOT** assign a "good" or "bad" judgment to any of them. Each operation may be optimal given the context in which it is used. With full table scans, you typically hope to see them being used when a significant amount of data from an object is needed (as determined by the number of blocks that must be accessed in order to retrieve the needed rows).
Q: What are the trade-offs between SQL Tuning Advisor and Execution Plans?
A: I wouldn't say there are any "trade-offs". SQL Tuning Advisor can be used to identify possible changes that could be helpful to your query's performance. One of the options STA may offer is the option to create a SQL Profile. The Profile provides some additional statistical information to the optimizer so it can/should produce a more effective execution plan. I personally think of STA as a tool to point me towards things I need to investigate.
Q: When running an execution plan, I see "- dynamic sampling used for this statement (level=6)" even though all of the objects in the query (table and indexes) have good statistics and optimizer_dynamic_sampling=2. Is there a way (without setting a 10053 trace) to find out why dynamic sampling was used, and at what part of the plan it was used?
A: From Oracle Database 11g Release 2 onwards the optimizer will automatically decide if dynamic sampling will be useful and what dynamic sampling level will be used for SQL statements executed in parallel. This decision is based on size of the tables in the statement and the complexity of the predicates. However, if the optimizer_dynamic_sampling parameter is explicitly set to a non-default value, then that specified value will be honored. When it does kick in at level 6, it is simply doing a 256 block sample to help the optimizer produce more accurate cardinality estimates on the objects being accessed in parallel.
Q: How do you find out about your extended stats? What query or view do I need?
A: I'm going to point you to a couple of blog articles from the Optimizer Development Team which should answer this and any other questions you may have on extended stats; a quick sketch of creating and listing an extension follows the links.
https://blogs.oracle.com/optimizer/entry/extended_statistics
https://blogs.oracle.com/optimizer/entry/how_do_i_know_what_extended_statistics_are_needed_for_a_given_workload
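A quick sketch of the mechanics (table and column names are placeholders), using DBMS_STATS.CREATE_EXTENDED_STATS and the *_STAT_EXTENSIONS views:

-- create a column group so the optimizer can account for the (state, zip) correlation
SELECT DBMS_STATS.CREATE_EXTENDED_STATS(USER, 'CUSTOMERS', '(STATE, ZIP)') FROM dual;

-- gather stats so the new extension gets statistics of its own
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'CUSTOMERS');

-- list the extended statistics defined on the table
SELECT extension_name, extension, creator, droppable
  FROM user_stat_extensions
 WHERE table_name = 'CUSTOMERS';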
Q: Do extended stats get maintained automatically, or do they need to be manually collected again and again?
A: Please see the two links I provided for the previous question for more details. But, generally speaking, once you create an extended statistic, it will continue to be collected (if you use default collection parameters) until you specifically drop the extended stat.
Q: How to effectively trace an execution plan for a given SQL to understand why the Optimizer chose the specific execution plan?
A: You can capture a 10053 optimizer trace of the given SQL and review the trace file. However, that is not something I'd recommend as a primary method! You can "always" know that the optimizer chose a particular set of plan operations because those operations were the lowest costed options considered by the optimizer. So, if you really want to know why, you need to inspect the inputs the optimizer used to cost the various plans. The place to start is with object statistics and the execution plan rowsource statistics. You need to compare estimated cardinalities with the actual rows returned and find where discrepancies exist. When found, the discrepancies will lead you to the statistics you need to review or they can help you see where/how in the SQL the objects that have discrepancies are used. The bottom-line is that it's going to require you to research and evaluate the plan execution data to determine where the optimizer may have gone astray. Only in very rare cases would I resort to a 10053 trace.
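A simple way to get that estimate-versus-actual comparison is shown below (a sketch; the gather_plan_statistics hint, or statistics_level=ALL, is needed for the actual row counts to be captured, and the table names are just placeholders):

SELECT /*+ gather_plan_statistics */ COUNT(*)
  FROM employees e JOIN departments d
    ON d.department_id = e.department_id;

SELECT *
  FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALLSTATS LAST'));
-- compare the E-Rows and A-Rows columns line by line to find where
-- the optimizer's cardinality estimates diverge from reality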
Q: If E-ROWS and A-ROWS differ a lot but the execution plan is the same as the one chosen when E-ROWS and A-ROWS match, will there be a performance difference in the SQL?
A: You can't know that unless you find out why the estimates and actuals are different and take the necessary steps to correct things so that the difference is limited or eliminated. Once you find out why there was a difference and correct it, the plan may change and performance may improve. However, depending on the access paths and join methods available to the optimizer, it is entirely possible that the plan may not change after the estimates are improved and performance would therefore remain the same.
Q: How to interpret the COST value/estimate of a given SQL?
A: Cost is the value computed by the optimizer that indicates the estimated amount of work (time and resources) required to produce the result set. It is computed based on statistical formulas utilized by the optimizer to assign a value to each viable set of plan operations possible for a given SQL statement. For the most part, cost is not something I focus on as it is a given that the cost of the plan selected by the optimizer was computed to be the lowest of all possible choices. Therefore, if the response time and resource usage for that plan doesn't meet my expectations, my next step is to determine where/how the optimizer "went wrong" in computing the selected option as the best/lowest cost.
Q: What is the difference between explain plan and execution plan?
A: An EXPLAIN PLAN is simply the proposed plan that the optimizer "might" choose when the query is executed. It is *NOT* a guarantee of the plan that will be used at runtime. The execution plan, on the other hand, is the actual plan that was selected by the optimizer and used to produce the query result set. So, the easiest way to differentiate the two is that EXPLAIN PLAN is the estimate, the execution plan is the actual.
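To see the two side by side (a sketch against a placeholder table):

-- the estimate: what the optimizer *might* do
EXPLAIN PLAN FOR SELECT * FROM employees WHERE department_id = 50;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);

-- the actual: execute the statement, then pull the plan that was really used
SELECT * FROM employees WHERE department_id = 50;
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR);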
Q: How to analyze or find which tables need histograms for the best execution plans depending on bind variable values?
A: When you're considering histograms, you're considering skew in your data. So, you're looking for columns that contain skewed data. If you simply collect statistics using the default METHOD_OPT parameter ('FOR ALL COLUMNS SIZE AUTO'), histograms will be collected automatically for you. Now, the collection may not be perfect, so you may need to use your own knowledge of the data to help you define specific columns that need histograms. Also, if you find that for queries that use binds your performance is wildly variable, that may be a red flag to point you towards columns that need histograms as well as being an indicator that you might need to adjust your use of bind variables to use some literals to help the optimizer make the best plan choices.
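For instance (placeholder table and column names), you could rely on the default collection and then explicitly request a histogram only where you know the data is skewed:

-- default: let Oracle decide which columns get histograms
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'ORDERS', method_opt => 'FOR ALL COLUMNS SIZE AUTO');

-- targeted: force a histogram on a column you know contains skewed data
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'ORDERS', method_opt => 'FOR COLUMNS STATUS SIZE 254');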
Q: Used Memory - what does (0) mean?
A: This column displays the sum of the maximum amount of memory that was used across all executions of the specific plan operation.
Q: What are the advantages of using this tool over Oracle RAT?
A: RAT, or Real Application Testing, is simply a way to do two things: 1) capture and replay executions of application code (SQL) for a representative period of workload and 2) compare the performance (i.e. plan changes and the resulting differences in response times and resource usage) of an original workload and the workload after some change(s). The second element (known as SQL Performance Analyzer) helps automate the process of comparing before and after execution plans and can highlight plans that change. The plans that change may change for the good (they improve) or for the bad (they regress). SPA captures the changes and helps you focus in on the plans that regress so that you can do further work to find and correct the issues. SPA simply compares performance and plans for the same SQL_IDs and displays the difference. You could do this yourself manually, but SPA is a separately licensable product that does much of the work for you. However, if you find a problem, you'll likely still have to do some work on your own to determine why the plans changed and how to correct them. So, this is not an "either/or" situation. RAT uses execution plans and simply automates a bit of the legwork for you.
Stay tuned for details of my November webinar coming soon!
Saturday, September 7, 2013
SQL Tuning Fundamentals: Execution Plans - September 17 Webinar
Register now and join me for my next webinar entitled "SQL Tuning Fundamentals: Execution Plans".
Few tools are as critical to SQL optimization as the execution plan. The ability to read and understand an execution plan allows us to evaluate and optimize SQL performance. Unfortunately, complex plans often seem daunting and can be difficult to understand. During this webinar, I'll walk you through a set of guidelines for how to read an execution plan and how to make sense of the operations and statistics (both estimated and actual) the plan output provides.
Topics covered in this webinar include:
- How to read an execution plan, and how various plan operations work
- How to use the plan to pinpoint performance problems
- Tools to accelerate your analysis of execution plan data
Wednesday, July 24, 2013
My day with the Ohio Oracle Users Group
Thanks so much to everyone who attended the Ohio Oracle Users Group event with me on July 18. I really enjoyed the opportunity to be there. The group appears to be vibrant and growing, and everyone involved made it a top-notch experience (for me at least!).
My topic for the day was "SQL Stuff You Should Know" and was a mash-up of lots of different bits and pieces of information about SQL Tuning and the Oracle Optimizer. Links to my presentation slides will be made available on the OOUG web site, but I thought I'd also provide them here.
Downloads:
Presentation slides
Scripts
User groups are a fantastic way to network and have easily accessible and inexpensive opportunities to learn. I'm always happy to be invited to participate in an event and even more happy when I see a thriving community of my Oracle peers! Thanks again OOUG!!
Tuesday, July 23, 2013
Follow-up: Visual SQL Tuning Webinar
Thanks to everyone who attended my Visual SQL Tuning webinar. My goal was to keep it simple and show the value of using VST to help you know what execution plans "should" do.
Presentation PDF
Webinar recording
I want to thank:
Kyle Hailey for his extensive work on the subject and his recent, very detailed VST presentation from KScope13.
Craig Martin for the diagram on building join order that I used, which I incorrectly attributed to Kyle (sorry Craig!).
I hope to see everyone in September for my next webinar! Stay tuned for details.
Labels:
Embarcadero,
Oracle SQL tuning,
Visual SQL Tuning,
VST
Tuesday, July 2, 2013
Visual SQL Tuning - July 23 Webinar
Register now and join me on July 23 to discover how Visual SQL Tuning can help you understand execution plans and diagnose performance problems. During the webinar I'll cover:
- Visual SQL Tuning basics and how to create a VST diagram
- How to evaluate execution plan effectiveness with Visual SQL Tuning diagrams
- How to identify problems with performance statistics, SQL syntax, etc.
- Tools to accelerate the use of Visual SQL Tuning
Tuesday, May 14, 2013
Follow-up: Using Optimizer Hints for Oracle Performance Tuning Webinar
Thanks to everyone who attended my webinar on using hints for Oracle testing and performance tuning. As usual, it was a great event and I appreciate the comments and questions.
Downloads:
Presentation PDF
Related scripts
Webinar recording
I'll be back in the saddle again in July so keep your eyes open for the announcement of that event. Thanks again and hope to see you then!
Monday, May 6, 2013
Using Optimizer Hints for Oracle Performance Tuning
My next Embarcadero sponsored webinar will be on May 14 and is entitled Using Optimizer Hints for Oracle Performance Tuning.
Register now!
Hints are excellent database performance tuning tools that direct the Oracle optimizer to utilize specific operations in SQL execution plans. We often use hints because the Oracle optimizer doesn’t always come up with the execution plan we want on its own. When used correctly, hints can help stabilize an execution plan to use the same operations over and over allowing the SQL to perform the way we desire.
In this webinar, I'll take a look at using hints specifically for testing. Hints are great, and often overlooked, testing tools.
Register for Optimizer Hints for Oracle Performance Tuning webinar to learn:
- The basics of using optimizer hints to choose desired plan operations
- How to set up and compare tests using different optimizer hints for changes in response time and resource consumption
- How to create and maintain an information repository of testing results to be used for problem analysis in the future
- Tools for applying and testing a broad range of hints
I hope to see you there!
Tuesday, March 26, 2013
Back to the Future AWR Mining Webinar Followup
Thanks to everyone who attended my webinar and thanks to Embarcadero (@DBPowerStudio) for hosting it. The presentation and scripts can be downloaded from the following links:
Presentation (PDF)
Scripts (ZIP)
Webinar recording
Stay tuned for my next webinar coming in May!
Tuesday, March 12, 2013
Webinar - Back to the Future: Oracle SQL Performance Firefighting using AWR
It's webinar time again! On March 26, Embarcadero will once again provide sponsorship for my webinar entitled "Back to the Future: Mining AWR Data for Oracle SQL Performance".
Abstract
Most of us have been in the situation where, for no apparent reason, performance for key SQL takes a nose-dive after having previously performed well. So, how do you handle this situation and stabilize performance back to acceptable levels? One approach is to go back in time using execution data stored in AWR. In many cases, AWR may contain what you need to revert your problem SQL to a better performing alternative.
Register for the webinar now to learn:
- How to mine and analyze AWR data to review the SQL's performance over time (a small sketch of such a query follows this list)
- How to validate that the SQL is using the "good" execution plan
- Tools to accelerate your analysis of AWR data and SQL code
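As a taste of what that mining involves, here's a minimal sketch against the AWR history views (requires the Diagnostics Pack; the sql_id is a placeholder):

SELECT s.snap_id,
       st.sql_id,
       st.plan_hash_value,
       st.executions_delta,
       ROUND(st.elapsed_time_delta / NULLIF(st.executions_delta, 0) / 1000, 2) AS avg_elapsed_ms
  FROM dba_hist_sqlstat st
  JOIN dba_hist_snapshot s
    ON  s.snap_id = st.snap_id
    AND s.dbid = st.dbid
    AND s.instance_number = st.instance_number
 WHERE st.sql_id = '7b2twsn8vgfsq'
 ORDER BY s.snap_id;
-- a change in plan_hash_value that lines up with a jump in avg_elapsed_ms
-- is the classic signature of a plan change gone wrong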
Labels:
Embarcadero,
Oracle AWR,
Oracle performance,
sql tuning
Thursday, February 7, 2013
Webinar Recording Link
Embarcadero has posted the recording of my Understanding Oracle Optimizer Statistics webinar on YouTube. Check it out!
Wednesday, February 6, 2013
Understanding Oracle Optimizer Statistics Webinar Q&A
There were quite a few questions asked during the webinar yesterday that I didn't have time to answer online. So, here's the list of questions from the webinar transcripts with my answers. In some cases, I combined several similar questions and gave a single answer. Hope it's helpful!
1. Can you give your thoughts on bind peeking and how this relates to statistics?
Bind-peeking occurs during the hard parse of a SQL statement that includes a bind variable. At the time of the parse, the optimizer will "peek" at the contents of the bind variable and use that value in its cardinality calculation. The issue occurs when the data is skewed (i.e. there are some frequently occurring values and some infrequently occurring values). If a histogram is present on that column and a literal is used in the SQL - instead of a bind variable - the optimizer is able to determine the cardinality exactly for that value because each query that uses a different literal value would require a new hard parse. Each parse could result in a different plan selection depending on the estimates derived for that specific value. But, the problem is that since the bind is present, there will be only one hard parse of the statement, and thus the plan chosen is based on whichever bind variable value is present during that initial parse. Until that plan gets aged out of the library cache, it will continue to be used. So, if another bind variable value is used later and that value would be better serviced by a different plan operation, it would still use the original plan. This has changed in later versions of Oracle with the advent of features such as adaptive cursor sharing and cardinality feedback. But, the answer to your question is that bind peeking relates to statistics by making it a bit harder for the optimizer to create plans that are optimal for different bind values. Otherwise, how the optimizer uses the statistics is identical to a query that doesn't use binds.
2. How important are the system statistics (no workload statistics and workload statistics)?
Do you recommend to use workload statistics? When to use workload stats?
What about system statistics do you recommend to collect them?
Workload statistics are intended to help the optimizer understand workload characteristics that change over the course of the day (or whatever time period is specific to you). For example, if your database is heavy OLTP between 8am and 6pm and heavy batch/reporting between 6pm and 8am, you could let the optimizer know about these different types of workload by implementing workload stats. The way it would work is that you would collect/capture workload stats twice: once during the daytime OLTP hours and once during the evening batch hours. The captured stats indicate how CPU, IO and throughput look during that time period and could show the optimizer that the daytime hours are full of numerous fast-running queries whereas the evening hours are consumed by long-running queries. This information can help the optimizer choose plans more appropriate to each workload (for example, more index scans during the day and more full scans during the evening). Noworkload stats are in place by default and use fairly innocuous values. If you've never used workload stats, I advise caution before attempting to implement them. Different workload stats can/will affect the optimizer's plan choices, and they may do so in ways that affect more plans than you expect (or want). So, while workload stats can provide the optimizer additional information from which to develop plans, make sure you are prepared to do in-depth and thorough testing before implementing them.
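If you do decide to test workload stats, the capture itself is simple enough (a sketch; choosing representative capture windows is up to you):

-- start capturing at the beginning of the representative workload window
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('START');

-- ... let the normal workload run for the chosen period ...

-- stop capturing; the measured CPU, I/O and throughput figures become the workload stats
EXEC DBMS_STATS.GATHER_SYSTEM_STATS('STOP');

-- review what the optimizer currently has in place
SELECT pname, pval1 FROM sys.aux_stats$ WHERE sname = 'SYSSTATS_MAIN';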
3. Does the order of predicates in the WHERE clause influence in any way the order in which the optimizer will use them? I.e., if I put the filter for my largest table first, will it take it first?
The order of predicates used to make a difference long ago, prior to the advent of cost-based optimization. But today's optimizer, using stats, will calculate which predicates will provide the best filtering and execute those first. So, the only reason to write your predicates in any particular order is so that you can compare what you expect with what the optimizer ends up choosing in the plan. If you think the order should be different than what the optimizer chooses, you can quickly compare your written order with the steps in the plan and see where things are "off". Then, you can verify where the optimizer went wrong by reviewing the statistics and comparing the actual values with the estimates the optimizer made.
4. So are you stuck with no histogram to account for skew if you use bind variables?
No, you're not stuck. You just have to realize that skew and bind variables don't always play nicely together. As I mentioned in the earlier answer on bind peeking, today's optimizer is much better at handling binds and data skew. If, however, you find that a particular SQL's behavior is unpredictable and you have to have stability, you may have to consider writing the SQL specifically to accommodate certain cases of skew. For example, you may write two SQL statements and use IF/THEN logic in your code to execute the correct SQL based on the bind value to be used. It requires extra code and knowing about specific corner cases where skew is a problem, but when it's something that is important enough, that may be your best option.
5. Are terms "selectivity" and "density" used interchangeably?
The difference is that density refers to the computed answer to the expression 1/number-of-distinct-column-values. Each column has its own density. Selectivity is typically thought of as the combined densities of multiple columns (or predicates). For example, if you had a WHERE clause of WHERE gender = 'F' AND district = 12, the density for gender would be 1/2, or .50, and the density for district would be 1/12, or .083. The predicate selectivity would be .50 x .083 = .0415. If you only have a single predicate, you could use the terms interchangeably, but I prefer to keep the two terms separate for clarity.
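You can look at the stored densities yourself (placeholder table name):

SELECT column_name, num_distinct, density, histogram
  FROM user_tab_col_statistics
 WHERE table_name = 'CUSTOMERS';
-- with no histogram present, density is simply 1/num_distinct; the optimizer
-- multiplies the densities of ANDed predicates to arrive at the selectivity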
6. As a general default, what value would you recommend for the "method_opt" parameter of dbms_stats.gather_schema_stats, and would your recommendation be different for Oracle 11.2 vs. 9.2?
In v11, particularly 11.2, I'd recommend using the default method_opt of FOR ALL COLUMNS SIZE AUTO. There have been numerous improvements in how stats are collected and I think the defaults provide "close to perfect" results in most cases. I'd have to say that I didn't start using the defaults until this latest version. For pre-11 Oracle versions, I stuck with FOR ALL COLUMNS SIZE 1 and then did separate collections for tables/columns that I knew would benefit from histograms. The bottom-line is that there really is no "one size fits all" way to collect stats. Your data and your SQL have specific nuances that only you can know. But, if you're running the latest version of Oracle, I'd start with the defaults and modify/adjust from there to meet your specific needs.
7. Which version did Extended Statistics come out in?
11g release 1
8. Is there a way to see the transformed statement?
The transformed statement isn't emitted anywhere. The closest you can get is to use an optimizer trace, 10053, and review that information to see which transformations were considered and selected.
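If you do go down that road, here's a sketch of capturing a 10053 trace for a single statement (the query is just a placeholder; remember the statement must be hard parsed for anything to be written):

ALTER SESSION SET tracefile_identifier = 'my_10053';
ALTER SESSION SET EVENTS '10053 trace name context forever, level 1';

-- add a unique comment so the statement is hard parsed rather than reused
SELECT /* force_hard_parse_1 */ COUNT(*) FROM employees WHERE department_id = 50;

ALTER SESSION SET EVENTS '10053 trace name context off';
-- the resulting trace file in the diagnostic trace directory lists the
-- transformations considered and the final (transformed) query text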
9. While SQL remains the definitive language of DBMS, increasingly, Java in the database through JDBC or other server side programming have increased with Oracle and DB2; are DBAs doing well with the complexity of integrating SQL, PL/SQL, JAVA and JEE in delivering effective DBMS performance and in ensuring enterprise application security?
The biggest issue I see is that there are usually multiple groups that each have expertise in the different disciplines. Java developers have certain biases and PL/SQL developers have theirs. Sometimes these biases, or preferences for how to do something, cause the final product to suffer. I think the key is that all groups, regardless of the tool they are using, must remember that the database is the core (and common) element. Understanding how the database does what it does (i.e. how it executes SQL) is critical. Then, each tool can be utilized to exploit its strengths while making sure to support what the database needs and can do best.
10. After using the hint with 2 predicates, why is the optimizer estimating 100 rows? I am assuming the last row in the stats is showing the optimizer's estimate of the rows the SQL will return.
When using a dynamic_sampling hint at level 4, the optimizer will be able to consider relationships/dependencies between columns it previously considered independently of one another. Since the two columns used in the example were identical (i.e. they contained identical data and thus either used alone would return the same answer), by default, they would be considered independently of one another and would cause the selectivity to be too low. However, when the hint is applied and the relationship between the columns is known, the optimizer computes the selectivity properly so that it is the same as if only a single predicate were used.
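A sketch of what that hint looks like (table, alias, and column values are made up):

SELECT /*+ dynamic_sampling(c 4) */ COUNT(*)
  FROM customers c
 WHERE c.cust_state_province = 'CA'
   AND c.country_id = 52790;
-- at level 4 the sample also evaluates the ANDed predicates together,
-- so the correlation between the two columns shows up in the estimate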
11. What would be the difference between relying on the out-of-the-box stats gathering vs. gathering stats only on the specific schemas used by the application?
Out of the box has multiple meanings. It can refer to the default settings or to the default scheduled maintenance job that gathers stale stats. In either case, I think using the default settings (like method_opt FOR ALL COLUMNS SIZE AUTO) is a good place to start. But, as I already mentioned in a previous answer, I think each site needs to adjust stats collection parameters to properly handle its own unique needs. As for the default scheduled maintenance job that collects stale stats, I'm a little wary of that one. I prefer to control when stats are collected, so the default job makes me feel less in control of things. So, once again I'll say that I think the defaults (all of them) are there because they are intended to suit the needs of most databases, most of the time. Only you can determine if the defaults "as is" work in your situation, and if not, then you must adjust accordingly.
12. How often should the data dictionary stats be gathered?
I'll answer your question with another question: how often, and by how much, does the data dictionary in your database change significantly enough to require updated stats? You have to know the answer to that question before you can decide the best collection strategy for dictionary stats in your environment. I will say that if you are upgrading versions or doing any significant patching that affects data dictionary content, collecting dictionary stats should be done after that effort.
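When you do need to gather them (for example, right after an upgrade or a big patch), it's a one-liner:

-- gathers statistics on the SYS-owned dictionary objects
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;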
13. Does RULE hint prevent dynamic sampling?
Yes, if the RULE hint is used, dynamic sampling will not be used. The reason is because dynamic sampling is used by the cost-based optimizer to gather statistics that are used in the development of the plan. When the rule-based optimizer is used, then these statistics are irrelevant and will not be collected.
14. Does dynamic sampling occur on every SQL run, even within the same session?
Dynamic sampling occurs during the hard parse of a SQL statement when an object used in the SQL has no stats, when a dynamic_sampling hint exists, or when the optimizer_dynamic_sampling instance parameter setting is high enough to "kick in". But, once a single SQL has been parsed and the plan chosen and loaded into the library cache, that plan will be used until it is aged out.
15. What do you do for stats on a highly changing table?
If the changes to the table cause the number of rows to increase enough, or the distribution of values to change enough that skewed values shift within columns, then you need to gather stats frequently enough to allow the optimizer to adjust plans based on the updated stats so that you get (hopefully) the best, optimally performing plan. However, if the changes to the number of rows and distribution of values don't really affect the plans the optimizer should choose, then I wouldn't collect as frequently and would instead collect on a regular schedule that suits your needs (daily, weekly, bi-weekly, etc). In the end, the reason to collect stats is that the plans the optimizer is choosing using the current stats aren't adequate. If you need new plans to be derived, then you'd collect new stats. If you don't want plans to change, then don't collect.
16. You asked for dynamic sampling = 4. Is there a particular reason why 4?
Level 4 is the level at which dynamic sampling will consider complex predicates (an AND or OR operator between multiple predicates on the same table).
17. When would you want to lock stats? Also if you have a transaction table that is cleared out each minute, would you want to prevent stats from being generated on that table?
I'd want to lock stats on a table if
1) the table is static and never or rarely changes
2) if the table is emptied and reloaded frequently but always contains basically the same amount and type of data
3) I don't want changes to stats on this table to cause plan changes
4) I want to delete all stats on a table and prevent any new stats from being collected
If I have a frequently cleared and reloaded table, I'd either want to collect a set of stats at a time when the contents of the table are representative of what is typically queried and then lock them, or I'd want to delete the stats completely and lock them so that future stats collections are not allowed and the table remains without stats, thus allowing dynamic sampling to kick in based on the optimizer_dynamic_sampling parameter setting (which I'd want to set to at least 2 and most likely 4 for that table).
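The mechanics for both approaches look like this (the table name is a placeholder):

-- option 1: gather stats while the table holds representative data, then lock them
EXEC DBMS_STATS.GATHER_TABLE_STATS(USER, 'TXN_STAGE');
EXEC DBMS_STATS.LOCK_TABLE_STATS(USER, 'TXN_STAGE');

-- option 2: remove the stats entirely and lock, so dynamic sampling kicks in at parse time
EXEC DBMS_STATS.DELETE_TABLE_STATS(USER, 'TXN_STAGE');
EXEC DBMS_STATS.LOCK_TABLE_STATS(USER, 'TXN_STAGE');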
1. Can you give your thoughts on bind peeking and how this relates to statistics?
Bind-peeking occurs during the hard parse of a SQL statement that includes a bind variable. At the time of the parse, the optimizer will "peek" at the contents of the bind variable and use that value in its cardinality calculation. The issue that can occur happens when the data is skewed (i.e. there are some frequently occurring values and some infrequently occurring values). If a histogram is present on that column and a literal is used in the SQL - instead of a bind variable - the optimizer is able to determine the cardinality exactly for that value because each query that uses a different literal value would require a new hard parse. Each parse could result in a different plan selection depending on the estimates derived for that specific value. But, the problem is that since the bind is present, there will be only one hard parse of the statement and thus whichever bind variable value is present during the initial parse, that's the plan that is in place for that specific SQL. Until that plan gets aged out of the library cache, it will continue to be used. So, if another bind variable value is used later and that value would be better serviced by a different plan operation, it would still use the original plan. This has changed in later versions of Oracle with the advent of features such as adaptive cursor sharing and cardinality feedback. But, the answer to your question is that bind peeking relates to statistics by making it a bit harder for the optimizer to create plans that are optimal for different bind values. Otherwise, how the optimizer uses the statistics is identical to a query that doesn't use binds.
2. How important are the system statistics (no workload statistics and workload statistics)?
Do you recommend to use workload statistics? When to use workload stats?
What about system statistics do you recommend to collect them?
Workload statistics are intended to help the optimizer understand workload characterizations that change over the course of the day (or whatever time period that is specific to you). For example, if your database is heavy OLTP between 8am and 6pm and heavy batch/reporting between 6pm and 8am, you could let the optimizer know about these different types of workload by implementing workload stats. The way it would work is that you would collect/capture workload stats twice: once during the daytime OLTP hours and once during the evening batch hours. The captured stats indicate how cpu, IO and throughput looks during that time period and could show the optimizer that the daytime hours are full of numerous fast-running queries whereas the evening hours are consumed by long-running queries. This information can help the optimizer choose plans more appropriate to each workload (for example, more index scans during the day and more full scans during the evening). Noworkload stats are in place by default and use fairly innocuous values for these stats. If you've never used workload stats, I advise caution before attempting to implement them. Different workload stats can/will effect the optimizer's plan choices and it may do so in ways that effect more plans than you expect (or want). So, while workload stats can provide the optimizer additional information from which to develop plans, make sure you are prepared to do in-depth and thorough testing before implementing them.
3. Do the order of predicates in the where clause influence in any way the order the optimizer will use them, i.e. if I put the filter for my largest table first, would it take it first?
The order of predicates used to make a difference in the long ago prior to the advent of cost-based optimization. But today's optimizer, using stats, will calculate which predicates will provide the best filtering and execute those first. So, the only reason to write your predicates in any order is so that you can verify what you expect with what the optimizer ends up choosing in the plan. If you think the order should be different than the optimizer chooses, you can quickly compare your written order with the steps in the plan and see where things are "off". Then, you can verify where the optimizer went wrong by reviewing the statistics and comparing the actual vs estimated values the optimizer made.
4. So are you stuck with no histogram to account for skew if you use bind variables?
No, you're not stuck. You just have to realize that skew and bind variables don't always play nicely together. As I mentioned in the earlier answer on bind peeking, today's optimizer is much better at handling binds and data skew. If, however, you find that a particular SQL's behavior is unpredictable and you have to have stability, you may have to consider writing the SQL specifically to accommodate certain cases of skew. For example, you may write two SQL statements and use IF/THEN logic in your code to execute the correct SQL based on the bind value to be used. It requires extra code and knowing about specific corner cases where skew is a problem, but when it's something that is important enough, that may be your best option.
5. Are terms "selectivity" and "density" used interchangeably?
The difference is that density refers to the computed answer to the expression 1/number-of-distinct-column-values. Each column has its own density. Selectivity is typically thought of as the combined densities of multiple columns (or predicates). Such that if you had a WHERE clause of WHERE gender = 'F' and district = 12, the density for gender would be 1/2, or .50, and the density for district would be 1/12, or .083. The predicate selectivity would be .50 x .083 = .0415. But, if you only have a single predicate, then you could use the terms interchangeably but I prefer to keep the two terms separate for clarity.
6. As a general default, what value would you recommend for the "method_opt" parameter of dbms_stats.gather_schema_stats, and would your recommendation be different for Oracle 11.2 vs. 9.2?
In v11, particularly 11.2, I'd recommend using the default method_opt of FOR ALL COLUMNS SIZE AUTO. There have been numerous improvements in how stats are collected and I think the defaults provide "close to perfect" results in most cases. I'd have to say that I didn't start using the defaults until this latest version. For pre-11 Oracle versions, I stuck with FOR ALL COLUMNS SIZE 1 and then did separate collections for tables/columns that I knew would benefit from histograms. The bottom-line is that there really is no "one size fits all" way to collect stats. Your data and your SQL have specific nuances that only you can know. But, if you're running the latest version of Oracle, I'd start with the defaults and modify/adjust from there to meet your specific needs.
7. Which version did Extended Statistics come out in?
11g release 1
8. Is there a way to see the transformed statement?
The transformed statement isn't emitted anywhere. The closest you can get is to use an optimizer trace, 10053, and review that information to see which transformations were considered and selected.
9. While SQL remains the definitive language of DBMS, increasingly, Java in the database through JDBC or other server side programming have increased with Oracle and DB2; are DBAs doing well with the complexity of integrating SQL, PL/SQL, JAVA and JEE in delivering effective DBMS performance and in ensuring enterprise application security?
The biggest issue I see is that there are usually multiple groups that each have expertise in the different disciplines. Java developers have certain biases and PL/SQL developers have theirs. Sometimes these biases, or preferences for how to do something, cause the final product to suffer. I think the key is that all groups, regardless of the tool they are using, must remember that the database is the core (and common) element. Understanding how the database does what it does (i.e. how it executes SQL) is critical. Then, each tool can be utilized to exploit its strengths while making sure to support what the database needs and can do best.
10. After using the hint with 2 predicates, why is the optimizer estimating 100 rows? I am assuming the last row in the stats output shows the optimizer's estimate of the rows the SQL will return.
When using a dynamic_sampling hint at level 4, the optimizer will be able to consider relationships/dependencies between columns it previously considered independently of one another. Since the two columns used in the example were identical (i.e. they contained identical data and thus either used alone would return the same answer), by default, they would be considered independently of one another and would cause the selectivity to be too low. However, when the hint is applied and the relationship between the columns is known, the optimizer computes the selectivity properly so that it is the same as if only a single predicate were used.
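As a sketch, assuming a table t whose columns c1 and c2 contain identical data:

SELECT /*+ dynamic_sampling(t 4) */ COUNT(*)
  FROM t
 WHERE c1 = 1
   AND c2 = 1;
-- At level 4, the dynamic sample evaluates the ANDed predicates together, so the
-- optimizer sees the correlation instead of multiplying the two individual
-- selectivities and under-estimating the row count.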
11. What would be the difference between letting the out-of-the-box stats gathering run versus gathering stats only on the specific schemas used by the application?
"Out of the box" has multiple meanings. It can refer to the default collection settings or to the default scheduled maintenance job that gathers stale stats. In either case, I think using the default settings (like method_opt FOR ALL COLUMNS SIZE AUTO) is a good place to start. But, as I already mentioned in a previous answer, I think each site needs to adjust stats collection parameters to properly handle its own unique needs. As for the default scheduled maintenance job that collects stale stats, I'm a little wary of that one. I prefer to control when stats are collected, so the default job makes me feel less in control of things. So, once again I'll say that I think the defaults (all of them) are there because they are intended to suit the needs of most databases, most of the time. Only you can determine if the defaults work "as is" in your situation, and if not, then you must adjust accordingly.
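If you do decide you want full manual control, a rough sketch of how you might check (and disable) the automatic stats collection task on 11g is:

SELECT client_name, status
  FROM dba_autotask_client
 WHERE client_name = 'auto optimizer stats collection';

BEGIN
  DBMS_AUTO_TASK_ADMIN.DISABLE(
    client_name => 'auto optimizer stats collection',
    operation   => NULL,
    window_name => NULL);
END;
/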
12. How often should the data dictionary stats be gathered?
I'll answer your question with another question: how often and by how much does the data dictionary in your database change significantly enough to require updated stats? You have to know the answer to that question before you can decide the best collection strategy for dictionary stats in your environment. I will say that if you are upgrading versions or doing any significant patching that affects data dictionary content, dictionary stats should be collected after that effort.
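When you do need to gather them (after an upgrade or heavy patching, for example), the call itself is straightforward:

-- Gathers stats on the dictionary (SYS/SYSTEM) objects using the current preferences.
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;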
13. Does RULE hint prevent dynamic sampling?
Yes, if the RULE hint is used, dynamic sampling will not be used. The reason is because dynamic sampling is used by the cost-based optimizer to gather statistics that are used in the development of the plan. When the rule-based optimizer is used, then these statistics are irrelevant and will not be collected.
14. Does dynamic sampling occur on every SQL execution, even within the same session?
Dynamic sampling occurs during the hard parse of a SQL statement when an object used in the SQL has no stats, when a dynamic_sampling hint is present, or when the optimizer_dynamic_sampling instance parameter is set high enough to "kick in". But once a SQL statement has been hard parsed and its chosen plan loaded into the library cache, that plan is reused until it ages out, so the sampling is not repeated on every execution.
15. What do you do for stats on a highly changing table?
If the changes to the table cause the number of rows to increase enough, or the distribution of values to change enough that the skew within columns shifts, then you need to gather stats frequently enough to allow the optimizer to adjust plans based on the updated stats, so that you (hopefully) get the best, optimally performing plan. However, if the changes to the number of rows and the distribution of values don't really affect the plans the optimizer should choose, then I wouldn't collect as frequently; instead, collect on a regular schedule that suits your needs (daily, weekly, bi-weekly, etc.). In the end, the reason to collect stats is that the plans the optimizer is choosing with the current stats aren't adequate. If you need new plans to be derived, then collect new stats. If you don't want plans to change, then don't collect.
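One way to tune how often a volatile table is considered stale, sketched with hypothetical owner/table names using the 11g table-level preferences:

-- Raise the staleness threshold so the table isn't flagged after every burst of changes ...
EXEC DBMS_STATS.SET_TABLE_PREFS(ownname => 'APP_OWNER', tabname => 'ORDERS', pname => 'STALE_PERCENT', pvalue => '25');

-- ... or simply gather on your own schedule whenever you want plans to be able to adapt.
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'ORDERS');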
16. You set dynamic sampling = 4. Is there a particular reason why 4?
Level 4 is the level at which dynamic sampling will consider complex predicates (an AND or OR operator between multiple predicates on the same table).
17. When would you want to lock stats? Also if you have a transaction table that is cleared out each minute, would you want to prevent stats from being generated on that table?
I'd want to lock stats on a table if
1) the table is static and never or rarely changes
2) if the table is emptied and reloaded frequently but always contains basically the same amount and type of data
3) I don't want changes to stats on this table to cause plan changes
4) I want to delete all stats on a table and prevent any new stats from being collected
If I have a frequently cleared and reloaded table, I'd either collect a set of stats at a time when the contents of the table are representative of what is typically queried and then lock them, or I'd delete the stats completely and lock them so that future collections are not allowed and the table remains without stats, which lets dynamic sampling kick in based on the optimizer_dynamic_sampling parameter setting (which I'd want set to at least 2, and most likely 4, for that table). A sketch of both options follows.
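Both options boil down to a couple of dbms_stats calls (the owner and table names here are hypothetical):

-- Option 1: gather stats when the contents are representative, then lock them.
EXEC DBMS_STATS.GATHER_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'STAGE_TXN');
EXEC DBMS_STATS.LOCK_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'STAGE_TXN');

-- Option 2: delete the stats and lock them empty so dynamic sampling takes over.
EXEC DBMS_STATS.DELETE_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'STAGE_TXN');
EXEC DBMS_STATS.LOCK_TABLE_STATS(ownname => 'APP_OWNER', tabname => 'STAGE_TXN');
ALTER SESSION SET optimizer_dynamic_sampling = 4;   -- or set it at the instance level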
Tuesday, February 5, 2013
Understanding Oracle Optimizer Statistics Webinar Follow-up
Thanks to everyone for attending today's webinar on Understanding Oracle Optimizer Statistics sponsored by Embarcadero Technologies. I appreciate everyone who took time to join me today and hope you found it informative.
The webinar recording will be posted within the next couple of days, but you can download the presentation file now.
Understanding Oracle Optimizer Statistics - presentation
I'll provide the link to the recording as soon as it's available and will update this post with Q&A from the webinar sessions shortly.
Thanks again, and see you in March when I'll be presenting "Back to the Future: Oracle SQL Performance Firefighting using AWR."
Thursday, January 31, 2013
Webinar - Understanding Oracle Optimizer Statistics
Embarcadero is sponsoring another Oracle Community webinar, this time on Understanding Oracle Optimizer Statistics. In this session, you will learn:
- Oracle optimizer statistics fundamentals for computing cardinality/selectivity on single and multi-column predicates.
- How statistics can help the optimizer understand data distribution patterns.
- Reasons why certain ways of writing SQL limit the optimizer's ability and how query transformations improve the odds of getting a better execution plan.
Labels: Embarcadero, optimizer statistics, Oracle, Oracle optimizer, webinar
Friday, January 18, 2013
Enkitec E4 2013
The 2nd annual Enkitec Extreme Exadata Expo (E4) is coming your way August 5-6, 2013 at the Four Seasons Hotel & Resort in Irving, TX. Last year's inaugural event received rave reviews from participants and this year's event should be even better!
After the conference, stick around and join me for a 3-day SQL/Exadata Performance Intensive course. I'll be covering how to approach optimizing SQL in both Exadata and non-Exadata environments. We'll start with some fundamentals that apply to how to approach tuning SQL in general and then look at how your focus needs to shift to take advantage of Exadata specific features.
Find and follow Enkitec in your favorite social media outlet to keep up with E4 news and lots more.
Hope to see you in August!