I was recently sent a poorly performing query to review (the database version, if you're curious, was 9.2.0.7). The developer told me that the plan cost for the query in the test environment was 3 (so it should be really fast, right?), but the query took a little over a minute (74 seconds) to complete and returned no rows (that seems slow, right?). But it had been put into production anyway and... well, it's kinda easy to do the math on this one, huh? When it executed in production it took f-o-r-e-v-e-r.
Now, the test database was approximately 10% of the production database size. The plan looked like this:

[execution plan screenshot]
The statement used a WITH clause to separate the accesses of the tables involved in the query into individual pieces. By that I mean the developer was attempting to do the table filtering in separate steps and then, at the end, join the results all back together. There were 10 tables used in the query, and one of the tables was used 3 times. The work required for Oracle to execute each table access as an individual operation, rather than joining the tables in a single multi-table join, meant that thousands of additional blocks had to be retrieved to access rows that eventually got thrown away.
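I can't share the original statement, but the shape of it was something like this. This is just a minimal sketch with made-up table and column names (the real query involved 10 tables, one of them referenced 3 times):

    WITH filtered_orders AS (
         -- filter one table in its own step
         SELECT order_id, customer_id, order_total
           FROM orders
          WHERE order_date >= TO_DATE('2013-01-01', 'YYYY-MM-DD')
    ),
    filtered_customers AS (
         -- filter another table in its own step
         SELECT customer_id, customer_name
           FROM customers
          WHERE region_id = 10
    )
    -- ...several more blocks like these, one per table access...
    SELECT c.customer_name, o.order_total
      FROM filtered_orders o,
           filtered_customers c
     WHERE o.customer_id = c.customer_id;

Written this way, each block tends to get treated as its own unit of work, and only at the very end do the row sources get joined back together.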
By simply rewriting the query as a single multi-table join, the response time came down to under 1 second in the test environment. That means roughly 99% of the work the query did originally was eliminated.
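The rewrite, again sketched with the same made-up names, just put everything into one join and let the optimizer work out the access paths and join order:

    SELECT c.customer_name, o.order_total
      FROM orders o,
           customers c
           -- ...the other tables joined here as well...
     WHERE o.customer_id = c.customer_id
       AND o.order_date >= TO_DATE('2013-01-01', 'YYYY-MM-DD')
       AND c.region_id = 10;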
Don't do work you don't have to do. Well... don't make Oracle do more work than it has to! Sometimes we can write queries in such a way that we help the optimizer do a better job of getting an optimal execution plan. But sometimes what we think is helpful just throws a wrench in the works.
The bottom line:
We need to do our job of writing good, well-thought-out, "clean" code and let Oracle do its job in return. Hopefully, most of the time, both of us will get it right.