
Friday, December 7, 2012

Sharing a Kindle account

I have discovered that sharing a Kindle account with family members can cause Amazon to prompt me with the oddest reading suggestions:


I suppose you never really know someone until you see what they read. My own reading choices may also factor into these recommendations. However, if Jack Reacher novels and Steal Like an Artist result in these recommendations, it just makes me wonder about the algorithm. Ha!

Sunday, October 21, 2012

What Cancer Cannot Do

For my sister and all those who battle cancer...
Cancer is so limited...
It cannot cripple love.
It cannot shatter hope.
It cannot corrode faith.
It cannot eat away peace.
It cannot destroy confidence.
It cannot kill friendship.
It cannot shut out memories.
It cannot quench the spirit.
It cannot silence courage.
It cannot reduce eternal life.




Thursday, September 27, 2012

Making Impactful Performance Changes Webinar Follow-up

Thanks to everyone who attended the 3rd segment of my 3-part DBA Performance webinar series. I enjoyed delivering all three sessions and hope you found them informative and useful.

You can find the recording here and the presentation slide deck here.

There were two SQL statements in the deck that were used to produce execution plan output (one using dbms_xplan.display_cursor and the other using dbms_sqltune.report_sql_monitor). So you won't have to type them in yourself, here they are:

-- SQL to produce execution plan using dbms_xplan.display_cursor
SELECT xplan.*
FROM  
(
select max(sql_id) keep
       (dense_rank last order by last_active_time) sql_id
     , max(child_number) keep
       (dense_rank last order by last_active_time) child_number
  from v$sql
 where upper(sql_text) like '%&1%'
   and upper(sql_text) not like
  '%FROM V$SQL WHERE UPPER(SQL_TEXT) LIKE %'
) sqlinfo,
table(DBMS_XPLAN.DISPLAY_CURSOR
(sqlinfo.sql_id, sqlinfo.child_number, 'ALLSTATS LAST')) xplan ;
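
If you save that first query to a script file, the &1 substitution variable lets you pass the search string on the command line. A minimal usage sketch, assuming you save it as dplan.sql (the file name and search string are just illustrations); note that ALLSTATS LAST only shows rowsource execution statistics if the statement ran with the gather_plan_statistics hint or statistics_level = ALL:

SQL> -- find the most recently active cursor matching the text and show its plan
SQL> @dplan KMFMTST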


-- SQL to produce a SQL monitor report
SELECT dbms_sqltune.report_sql_monitor
(
    SQL_ID         => '&sql_id',
    SESSION_ID     => '&session_id',
    SESSION_SERIAL => '&session_serial',
    TYPE           => '&report_format'
)
FROM  dual ;
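
All of the parameters to report_sql_monitor have defaults, so for a quick look you can usually supply just the SQL_ID. A hedged example (valid report formats include 'TEXT', 'HTML' and, in 11.2, 'ACTIVE'):

-- Text report for the most recent monitored execution of one statement
SELECT dbms_sqltune.report_sql_monitor(sql_id => '&sql_id', type => 'TEXT')
FROM  dual ;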

And since we didn't have enough time to answer all the questions that were posted, I wanted to provide those answers here.

Thanks again to everyone who attended any of the events and I look forward to doing more in the future. Stay tuned!



Questions and Answers

Q: Can spatial queries be optimized, i.e., queries that deal with large spatial data (land registry, geological data)?
A: Any query can be optimized in one way or another, but I'd suggest that you check out the following link to a question posed on the Oracle Forum about spatial performance: https://forums.oracle.com/forums/thread.jspa?threadID=494768.


Q: Could you please repeat when a SQL will be monitored automatically?
A: Queries that run in parallel, or any query that consumes 5 or more seconds of CPU or I/O time.
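
For statements below those thresholds, you can request monitoring explicitly with the MONITOR hint (subject to the Tuning Pack licensing discussed in a later question) and then confirm the execution was captured. A minimal sketch; the table name is illustrative:

SQL> select /*+ MONITOR */ count(*) from orders where order_date > sysdate - 1 ;

SQL> -- confirm the execution was captured
SQL> select sql_id, status, elapsed_time, buffer_gets
  2  from v$sql_monitor
  3  order by sql_exec_start desc ;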


Q: How did you arrive at the 10ms time for one-row index maintenance?
A: The 10ms time is from the book I referred to (Relational Database Index Design and the Optimizers). It is a combination of Queuing time, Seek time, Disk rotation time and Transfer time. Remember that it is intended to be used for doing estimated calculations; your mileage may vary.
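
As a back-of-the-envelope illustration of how the estimate is used: deleting 10,000 rows from a table with 4 indexes would be estimated at roughly 10,000 x 4 x 10 ms = 400 seconds of index maintenance, which is why disabling or dropping indexes around very large bulk operations can pay off.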


Q: Do you need SQL tuning license to use MONITOR hint?
A: Well, you could use the MONITOR hint, but you just couldn't view the SQL Monitor reports of the collected data. So, I suppose you could infer that you couldn't/shouldn't use the MONITOR hint if you don't have the appropriate Tuning Pack license.


Q: STA will recommend new indexes in isolation for just one query, won't it? That may not be useful when you have to worry about 10 different queries accessing one table.
A: Yes, STA isn't looking at index creation from a system-wide view. It only looks at the one SQL statement and determines whether a not-yet-existing index might provide better performance. It cannot tell you what the creation of that index will do, for good or ill, to other queries. In the end, it is your responsibility to thoroughly test any index creation or modification in a representative environment to determine the effects.


Q: On a partitioned table with a local index, if we want a load to be faster, the local index partition being loaded can be made unusable. But queries looking for partitions other than the one made unusable don't use the index, due to the fact that one of the partitions is unusable. Is there a way to tell the optimizer to go ahead and use the index? (It does not work even with a profile.)
A: I'd actually look into other ways of performing loads, like using partition exchange, if possible. I do not know a way around the unusable index problem, so I'd suggest changing your load process to avoid having to use that method.


Q: What is the desired way to benchmark and compare SQL performance?
A: In part, the most appropriate way to benchmark SQL performance will depend on your specific circumstances. If you have an application that has been running in production for a while, you can use AWR data (or Statspack if you're not licensed for AWR) to retain execution information and do comparisons over time. If you're developing new code, you may have performance SLAs (Service Level Agreements) that specify the expected response times; in that case, testing would be evaluated against those metrics. But, regardless of your situation, you need a baseline measurement of the expected/desired response time, and then all tests must meet that minimum standard. You need to be able to measure performance under load and with representative data volumes to do this well. And, you have to keep in mind that response time isn't just about 'now', it's about the future as well. Testing must help ensure that performance will remain stable over time as user load and data volumes increase.
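
For the AWR approach, a simple starting point is trending a single statement's execution statistics across snapshots. A sketch, assuming a Diagnostics Pack license (the substitution variable is a placeholder):

-- How has this statement behaved over time?
select snap_id, plan_hash_value,
       executions_delta execs,
       round(elapsed_time_delta/1e6,2) elapsed_secs,
       buffer_gets_delta buffer_gets
  from dba_hist_sqlstat
 where sql_id = '&sql_id'
 order by snap_id ;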


Q: Does join order still matter in 11gR2?
A: Not really, and in fact it has (theoretically) not mattered since the optimizer began using CPU costing, and not just IO costing, as early as Oracle version 9. The optimizer uses statistics to evaluate and weigh various join orders and doesn't just evaluate joins in the order the SQL is written. For my own purposes, I typically try to write SQL in the order I "hope" it will get executed. If what I hope for and what actually occurs don't match, then I work my way through the differences. But, the optimizer is not limited to choosing the join order in which the SQL is written.
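
If you want to verify this for yourself, hints make it easy to compare the optimizer's chosen join order against the written one. A quick sketch using the standard demo tables:

-- let the optimizer pick the join order
select e.ename, d.dname
  from emp e, dept d
 where d.deptno = e.deptno ;

-- force the written order with LEADING and compare the plans
select /*+ LEADING(e d) */ e.ename, d.dname
  from emp e, dept d
 where d.deptno = e.deptno ;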


Q: Does the Oracle engine have a process that collects statistics information in near real time so that the CBO can always get the latest and correct information?
A: There is a default stats collection job that runs in a nightly maintenance window. This job will check the staleness of the statistics and collect updated stats if needed (by default, statistics are considered stale when the table has changed by 10%). But, there is no automated way in place to collect "near real time" statistics. Of course, you could set up collections to occur as frequently as you wish, but due to the overhead required to do collections, most people don't collect stats except during non-peak times.
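
If you want to see which tables the automatic job currently considers stale, the staleness flag is exposed in the dictionary. A sketch (the owner is a placeholder; flushing monitoring info first makes the flag current):

exec dbms_stats.flush_database_monitoring_info

select table_name, stale_stats, last_analyzed
  from dba_tab_statistics
 where owner = '&owner'
 order by table_name ;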


Q: What was the title of the database tuning/index book Karen mentioned?
A: Relational Database Index Design and the Optimizers by Tapio Lahdenmaki and Michael Leach


Q: In terms of Disk I/O versus Memory, have you seen an increase when adding more I/O or Memory to improve database performance? Or does it depend on OLTP vs. OLAP?
A: Well, the only way to know whether adding disk capacity or memory will help improve your performance footprint is to know whether disk IO or lack of memory is a bottleneck for you. If your bottleneck is CPU, then obviously adding disk or memory isn't going to fix your problem. So, my answer is the old consultant stand-by of "it depends". If you have a known issue with disk capacity or memory, then adding more of those resources should give you more breathing room. But, if you add those resources and something else is your main bottleneck, then you won't really see much improvement.


Q: When looking at a SQL Query Execution Plan, what top 5 areas would you recommend DBAs look at initially? Thanks
A: I'd go so far as to say there's only one thing you need to look at: where the most time and resources are consumed in the plan. If you gather the appropriate execution plan data that includes rowsource execution statistics (using the gather_plan_statistics or monitor hints, for example), then you will be able to quickly find the lines of the plan that are your "big hitters". Once you have that, you must work towards reducing the time and resources consumed by that step. After you fix that, check again and see where the most time is then spent. Keep going until you meet your SLA or can make no further improvement. Admittedly, just finding the "big hitter" steps in the plan is easy; knowing what to do once you find them is the hard part. For that, I'd suggest learning as much about the optimizer and SQL execution fundamentals as you can so you'll be more adept at knowing what to do when you find a problem.
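
To collect those rowsource execution statistics for a single test execution, the gather_plan_statistics hint plus dbms_xplan.display_cursor is the quickest route. A minimal sketch:

SQL> select /*+ gather_plan_statistics */ count(*) ct from km1 where mcal_year = 2011 ;

SQL> -- compare E-Rows to A-Rows to find the "big hitter" steps
SQL> select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST')) ;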






Tuesday, September 25, 2012

Making Impactful Performance Changes Webinar


Join me on Wednesday, September 26 as I present a webinar entitled "Making Impactful Performance Changes". This is the final segment in the 3-part DBA Performance Series sponsored by Embarcadero. Register even if you can't make it and you'll get an email with the link to the webcast video after it's over.

In this webinar, I'll cover:
  • Common ways to rewrite SQL that make it perform better and more consistently
  • How and when to add or modify indexes 
  • How and when the SQL Tuning Advisor helps you and when it doesn't

Hope to see you!

Tuesday, September 4, 2012

Making SQL Performance Solutions Stick - Followup #2

During last week's webinar, there were several questions that I did not have a chance to answer, so I wanted to follow up by answering those questions here. I also wanted to provide links to previous webinar recordings, as many people had asked where to find them.

Webinar Recordings
Best Practices for Developing Optimally Performing SQL

Diagnosing SQL Performance Problems (Part 1 of 3 part series)

Making SQL Performance Solutions Stick (Part 2 of 3 part series)


Question 1
The first question is one I answered during the webinar, but an audience member provided a follow-up that I wanted to comment on after having a chance to do some research.

Q: Is there any way to change the execution plan of an already running SQL? There are some articles on the Internet that suggest this is possible by adding a DBMS_RLS.ADD_POLICY call, which invokes a procedure using literals that forces a hard parse.

Audience member follow-up: There is an article on the Optimizer blog about using dbms_sqldiag_internal to change the execution plan of an existing query on the fly.

My answer was that I didn't know of any way to change the execution plan of a query "on the fly" so I first wanted to clarify my understanding of the question. "On the fly" to me means that the questioner wanted to know if the execution plan could be changed while it was executing. My response to this was "no...not that I know of". There are several ways to influence/change the execution plan of a query after it has completed, but while it is executing, I do not know of any way to cause a plan change that would affect the query while it executes. The original questioner mentioned the use of DBMS_RLS.ADD_POLICY and I don't think that would do what was requested. That package/proc is used for Virtual Private Database and typically is called to provide the dynamic predicate needed to properly service VPD queries. The follow-up regarding dbms_sqldiag_internal also does not provide a way to change the execution plan "on the fly" but does offer a way to change the execution plan via a SQL Patch where you could add a hint, for example, that could change the plan the next time that same SQL was executed.
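
For reference, the SQL Patch technique from that Optimizer blog post looks roughly like the sketch below. dbms_sqldiag_internal is an internal package, so treat this as illustrative only and test carefully (the SQL text, hint and patch name are made up):

begin
  sys.dbms_sqldiag_internal.i_create_patch(
    sql_text  => 'select count(*) from km1 where mcal_year = 2011',
    hint_text => 'FULL(km1)',
    name      => 'km1_full_patch');
end;
/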

So, I'll stick by my original answer and say that I do not know of any way to change the execution plan for a SQL while it is currently executing. If anyone out there does know of a way to do so, I'd really like to hear from you.

Question 2
Q: Does the plan change when the running node in a clustered environment is evicted and the execution is failed over to another node?

A: On this one, I'll give the default answer of "it depends" as a starting point. Typically, the execution plan should not change on failover, but there is always a possibility that it might.

Question 3
Q: A question on cardinality feedback: if a child cursor is generated due to cardinality feedback for a given SQL, will that new cursor be shared between users, assuming the users are executing the same SQL via the application?

A: Yes, a child cursor generated due to cardinality feedback can be shared/used by other users executing the same SQL. The point of cardinality feedback is to be able to adjust row estimates derived in a plan so that they are more accurate. The new child cursor containing the updated cardinality estimates should then take precedence and be used.
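
In 11.2 you can confirm that a child cursor exists because of cardinality feedback by checking the shared cursor mismatch flags. A sketch (the substitution variable is a placeholder):

select sql_id, child_number, use_feedback_stats
  from v$sql_shared_cursor
 where sql_id = '&sql_id' ;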

Question 4
Q: Will reordering the table with respect to an index help the CLUF (clustering factor) to be calculated correctly? Can you explain it, please?

A: Yes...and No. If you had an index on order_date (along with several other indexes on several other columns), and you reordered the table to store it in sorted order by order_date, the CLUF for that one index should certainly improve. However, what about all the other indexes on all the other columns? What happens to the index CLUF for those? The CLUF of those indexes would most likely be impacted as well causing some to get "worse".

Remember, CLUF is used to help weight the cost of using the index. But a table can only be physically stored in one order. The table data sort order can match one index order, but it can't perfectly match the order of numerous indexes at the same time. So, the bottom line is that you typically shouldn't reorder a table to try and get a better CLUF. You will likely adversely affect many more things than you fix in the long run.
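
To see how closely each index's order matches the table's current storage order, compare the clustering factor to the table's block and row counts. A sketch (the table name is a placeholder):

-- a CLUF near BLOCKS is "good"; a CLUF near NUM_ROWS is "bad"
select i.index_name, i.clustering_factor, t.blocks, t.num_rows
  from user_indexes i, user_tables t
 where i.table_name = t.table_name
   and t.table_name = '&table_name' ;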

Question 5
Q: Do you recommend running the following every day (with ownname = SYS, SYSTEM)? dbms_stats.gather_schema_stats(ownname => , options => 'GATHER AUTO', estimate_percent => 40, method_opt => 'for all columns size 1', degree => 2, cascade => true);

A: I'll answer this by saying that Oracle, particularly for the latest releases, recommends collecting statistics on dictionary objects (those owned by SYS) regularly as is evidenced by the fact that the automatic stats collection job includes a collection of these objects. So, if you have the automatic job enabled, you will be getting stats collections on these objects if/when needed.

There are specific procedures available for collecting dictionary stats (e.g. GATHER_FIXED_OBJECTS_STATS, GATHER_DICTIONARY_STATS), and I'd be more apt to use those than come up with my own "home grown" method as you've shown here. However, since I am not familiar with your environment, I can't say positively whether your choice is "good" or "bad". I'd just encourage you to use the defaults unless you have evidence that you need something else that works better for your particular database application environment.
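
For completeness, calling those procedures directly looks like this; the defaults are usually fine:

-- dictionary objects (SYS-owned)
exec dbms_stats.gather_dictionary_stats

-- fixed objects (the X$ structures under the v$ views); gather under representative load
exec dbms_stats.gather_fixed_objects_stats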

Question 6
Q: Can there be a negative impact on database performance of gathering stats for currently missing or null stats?

A: Gathering stats can have multiple impacts. First, there is the impact of the collection itself, as it must use resources to complete. Second, there is the impact of how new statistics may affect the optimizer's plan choices; plans could change based on the new statistics, causing performance to improve or decline. Finally, if you have *never* had stats on certain columns/objects, don't just rush off and collect stats without thoroughly testing your application to discover the impacts. Ultimately, you won't know until you collect and test. Remember that you now have the option of collecting stats without publishing them (i.e. pending stats), so you can actually collect stats and do some specific testing with those stats before you apply them across the board.
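
A minimal sketch of that pending stats workflow (the schema and table names are placeholders):

-- 1. have new stats saved as pending instead of published
exec dbms_stats.set_table_prefs('SCOTT', 'EMP', 'PUBLISH', 'FALSE')
exec dbms_stats.gather_table_stats('SCOTT', 'EMP')

-- 2. test against the pending stats in your session
alter session set optimizer_use_pending_statistics = true ;

-- 3. publish them if you're happy, or throw them away if not
exec dbms_stats.publish_pending_stats('SCOTT', 'EMP')
-- exec dbms_stats.delete_pending_stats('SCOTT', 'EMP')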

Question 7
Q: Where do I check column-level stats?
A: Column stats can be found in the DBA_TAB_COL_STATISTICS view. I have a set of scripts I use to view all the various stats for a specific object and you can download those scripts here.
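
For example (the owner and table names are placeholders):

select column_name, num_distinct, density, num_nulls, histogram, last_analyzed
  from dba_tab_col_statistics
 where owner = '&owner'
   and table_name = '&table_name' ;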

Wednesday, August 29, 2012

Making SQL Performance Solutions Stick Webinar Followup

Thanks again to everyone who attended part 2 of my Oracle SQL Performance webinar series entitled Making SQL Performance Solutions Stick. Embarcadero will be posting the recording within the next few days. In the meantime, I have uploaded the slides and scripts.

Thanks again, and don't forget about the final installment in the series coming next month on September 26, entitled Making Impactful Performance Changes.

I hope to see you then!

Tuesday, August 14, 2012

I was wrong

Thanks to dear Maria Colgan for her intervention, I am publicly admitting the error of my ways and declaring to never use the drug of the optimizer_index_cost_adj parameter again. I publicly admit (again) that I have (on occasion) used an extremely high OICA setting to try and "force" Exadata to use full table scans over index scans. I was wrong and I am sorry.

Snicker...

I've had a great time at the Enkitec Extreme Exadata Expo (E4) this week in Dallas. I eagerly awaited the presentation of Maria Colgan ("The Optimizer Lady") where she promised to "show through the use of real-world examples what you can do to ensure the Optimizer fully understands the capabilities of the platform it is running on without having to mess with initialization parameters or Optimizer hints."

During her presentation today, she did offer a couple of ways (made available in Exadata patch bundles...only one of which is currently available) to help clue the optimizer in on a couple of Exadata features so that it can make adjustments to cost computations. The patch that is available allows you to collect system stats in "Exadata" mode. By doing so, the MBRC system statistic is actually recorded and made available to the optimizer and results in a reduction of the cost of a full scan. This ultimately results in full scans being selected more frequently (versus an index scan operation) which opens up the possibility of more smart scans being able to kick in. As Maria told us, the MBRC value is currently not retained in the system stats values and the optimizer uses a "default" of 8 (and has since Oracle 8 days). The patch allows a system stats collection done using the Exadata mode to collect and retain an actual (more realistic) MBRC value that can provide more accuracy.
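
For reference, the Exadata-mode collection Maria described is invoked as shown below (only with the relevant patch bundle applied), and the captured values, including MBRC, land in the system stats table:

exec dbms_stats.gather_system_stats('EXADATA')

select pname, pval1
  from sys.aux_stats$
 where sname = 'SYSSTATS_MAIN' ;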

I'm still holding out for options to affect specific individual SQL statements on a case-by-case basis versus a more broad brush approach that can cause all SQL to be affected, but this looks like a good start. In the meantime, I promise to do my very best to refrain from resorting to using OICA.

Disclaimer: I'm not promising to *not* use other parameters or hints...yet.  :)

Thanks to everyone who attended E4 and I hope you shared my opinion that it was a fantastic event. I'm already looking forward to doing it again next year!

Saturday, August 11, 2012

Learning to read

Not me...(I learned to read quite a few years ago).  :)

My daughter is learning to read and is sounding out words and leaving notes and signs all over the house. I love it! We've been reading several different series of books together (The Magic Treehouse, Junie B. Jones, Ivy & Bean) and she loves them. I'm thrilled to see her developing a love of reading.

But my favorite part of this adventure so far is her attempts at writing. I watch her concentrating so intensely at her desk and writing each letter. When she first started doing this, I was lucky to be able to decipher anything she had written. But now, I can almost always decode what she writes.

This morning she left this outside my door for me to find.

Can you figure it out? It's "circus ticket" (srkis tikit). She had created a circus in her room with different acts that she performed. But, I had to have a ticket in order to be admitted. I have to say that it was the best s-r-k-i-s I ever attended.  :)

Friday, August 10, 2012

Making SQL Performance Solutions Stick Webinar - August 29


Join me for the 2nd Embarcadero Community webinar in my Performance Series on August 29 entitled "Making SQL Performance Solutions Stick". Registration is now open and I'd love to see you there!

Don't worry if you didn't attend the first session in July. This session will have plenty of new information and does not depend on your having attended the first event.

I'll be talking about:
  • How to adjust statistics collection to accommodate "sensitive SQL".
  • The differences between SQL Profiles and SQL Baselines and how to implement them.
  • How and when you can fool the optimizer into using a desired execution plan without changing SQL text.

The 3rd and final session, "Making Impactful Performance Changes", will be on September 26 and you can also register for it now as well. I'll post again about that session after the August webinar is behind us.

Hope to see you there!

*** By the way, the last session had several SQL Server attendees, so I just wanted to make sure everyone knows these sessions are Oracle-centric.

Tuesday, July 31, 2012

Diagnosing SQL Performance Problems Webinar Followup

Thanks to everyone who joined me for part 1 of my Oracle SQL Performance webinar series! Embarcadero will be posting the recording within the next few days. In the meantime, I thought I'd post the slides and a few scripts that were shown.

Thanks again and I hope to see you again next month for part 2!

Presentation slides (PDF)
Scripts (ZIP)

Tuesday, July 17, 2012

July 31 Webinar


Please join me for another Embarcadero sponsored Community Webinar on July 31. This is the first in a series of three webinars. My topics for this triple feature are:
Diagnosing SQL Performance Problems
Making SQL Performance Solutions "Stick"
Making Impactful Performance Changes

Here's a bit about what I'll cover in this first event:
Karen Morton begins her DBA Performance Series with Diagnosing SQL Performance Problems, designed to help DBAs understand the context in which SQL problems arise and present a framework to efficiently diagnose SQL issues, including: immediate items to identify, rule out, and confirm, as well as identifying "fast" fixes to get performance back on track quickly.


You'll learn:
  • How to distinguish between SQL and non-SQL performance optimization issues.
  • How to target typical cases that cause SQL to perform poorly and resolve them without rewriting SQL.
  • How to identify when a SQL statement needs to be rewritten to permanently solve performance inconsistencies.
Thanks again to Embarcadero for sponsoring the event and I hope to "see" you there!

Wednesday, June 6, 2012

How do you want it? Fast, Cheap, Great, Free?


Found a link to this on Pinterest. It was created in reference to graphic design, but it struck me as being very appropriate for so much I see in my field of work. I think some adjustments could definitely be made to make it more specific to my field, but I liked it enough that I had to share!

Thursday, May 17, 2012

Force Matching Signature

It amazes and amuses me how many times I remember something that I used to know but forgot due to lack of use. I'm not sure if it's because I'm just getting older or what. :)

I had just created a SQL Profile and set the force_match option to TRUE so that any SQL with the same signature would be covered by the profile. If you need a reminder, what the force matching signature does is treat literals in the SQL like binds (think along the lines of cursor_sharing = force). When force matching is in place, all the literals are treated as binds and the same signature is generated for any SQL that differs only in its literal values (white space and case are handled already).
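
As an aside, you can compute signatures yourself to check whether two statements will force-match, using dbms_sqltune.sqltext_to_signature (the SQL text here is illustrative; a PL/SQL block is needed because of the BOOLEAN parameter):

set serveroutput on
declare
  sig number;
begin
  -- force_match => TRUE returns the force matching signature; two statements
  -- differing only in literal values should return the same number
  sig := dbms_sqltune.sqltext_to_signature(
           'select count(*) ct from km1 where mcal_year = 2011',
           force_match => TRUE);
  dbms_output.put_line(sig);
end;
/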

In my case, the force matching signature option works great when I'm working with SQL that has been generated from a reporting tool (like OBIEE) and is formulated with all literals. So, I had this gnarly OBIEE generated SQL statement that was running for over an hour that I needed to "fix". I could see what needed to happen to get a better plan, but I wasn't able to make changes to the model to get it to generate the SQL I wanted. However, I was able to produce a better plan by rewriting the SQL. Once I rewrote it, I decided to create a SQL Profile for the "bad" SQL and attach the "good" plan from my rewritten version of the query to it. I got the profile created and ran a test and everything looked good. The profile kicked in and the SQL used my "good" plan and it took just a few seconds to complete. Cool. All done, right? Well...not really.

A bit later, the same query was executed and instead of using the profile, it didn't and took over an hour again. Sigh...

What happened? Well, the SQL was the same except for a change to a literal. So, it's not really the same, but with force matching turned on, it should've been OK, right? That would've been true but, as it turned out, there was actually a single bind variable in the SQL statement in addition to all the literals. And, it was the presence of that single bind that caused the force matching signatures to be different and therefore my profile was ignored.

But why did the single bind variable mess the force matching signature up? The bind would be the same for every SQL statement, so shouldn't force matching simply convert the literals to binds and I'd be good to go? Something was tickling at the back of my brain and after a brief check of the documentation about force matching, I found this:
By setting force_match to TRUE, the SQL profile additionally targets all SQL statements that have the same text after normalizing literal values to bind variables. This setting may be useful for applications that use only literal values because it allows SQL with text differing only in its literal values to share a SQL profile. If both literal values and bind variables are in the SQL text, or if force_match is set to FALSE (default), then literal values are not normalized.
Ding, ding, ding. There it was. If both literals and binds are present, force matching won't kick in. Just to prove it to my brain, and hopefully make it stick a little better for next time, I did this simple test.
  1. Run two queries that differ only by a single literal.
  2. Verify that the force_matching_signature for both is the same.
  3. Add a predicate that uses a bind variable to each query.
  4. Execute each and check the signatures again.
SQL>variable v1 varchar2(10)
SQL>exec :v1 := 'Sunday';

PL/SQL procedure successfully completed.

SQL>
SQL>select /* kmfmtst00 */ count(*) ct from km1 where mcal_year = 2011 ;

                  CT
--------------------
                 365

SQL>select /* kmfmtst00 */ count(*) ct from km1 where mcal_year = 2012 ;

                  CT
--------------------
                 366

SQL>
SQL>select /* getfm */ sql_id, plan_hash_value, force_matching_signature, substr(sql_text,1,200) sql_text
  2  from v$sql
  3  where upper(sql_text) like '%KMFMTST00%'
  4  and sql_text not like '%/* getfm */%' ;

SQL_ID             PLAN_HASH_VALUE FORCE_MATCHING_SIGNATURE SQL_TEXT
------------- -------------------- ------------------------ -------------------------------------------------------------------------------
6sz5sqg1yu2u7           3996576519      9139782190997132164 select /* kmfmtst00 */ count(*) ct from km1 where mcal_year = 2011
88bgq57sjbtkt           3996576519      9139782190997132164 select /* kmfmtst00 */ count(*) ct from km1 where mcal_year = 2012

SQL>
SQL>select /* kmfmtst00 */ count(*) ct from km1 where mcal_year = 2011 and mcal_day_name = :v1 ;

                  CT
--------------------
                  52

SQL>select /* kmfmtst00 */ count(*) ct from km1 where mcal_year = 2012 and mcal_day_name = :v1 ;

                  CT
--------------------
                  53

SQL>
SQL>select /* getfm */ sql_id, plan_hash_value, force_matching_signature, substr(sql_text,1,200) sql_text
  2  from v$sql
  3  where upper(sql_text) like '%KMFMTST00%'
  4  and sql_text not like '%/* getfm */%' ;

SQL_ID             PLAN_HASH_VALUE FORCE_MATCHING_SIGNATURE SQL_TEXT
------------- -------------------- ------------------------ -------------------------------------------------------------------------------
6sz5sqg1yu2u7           3996576519      9139782190997132164 select /* kmfmtst00 */ count(*) ct from km1 where mcal_year = 2011
88bgq57sjbtkt           3996576519      9139782190997132164 select /* kmfmtst00 */ count(*) ct from km1 where mcal_year = 2012
48rxh2r545xqy           3996576519      5839486434578375421 select /* kmfmtst00 */ count(*) ct from km1 where mcal_year = 2011 and mcal_day_name = :v1
6q610fykrr4d3           3996576519      8791659410855071518 select /* kmfmtst00 */ count(*) ct from km1 where mcal_year = 2012 and mcal_day_name = :v1

As you can see, the first two statements that differed by just a single literal shared the same signature. But, for the two statements that had the bind added, the signatures were different. That's exactly what happened to me with the SQL Profile I had created. When I ran my test after creating it, I didn't check query execution with different literal values, so I didn't catch this.

So, in the end, I remembered something I forgot about force matching *and* I got a good mental slap about thorough testing. I assumed I knew how something was going to work and didn't use due diligence to test and confirm. Want to bet that next time I'll remember? I can't be 100% certain, but I'd say the odds are in favor that I will.  :)

Tuesday, May 8, 2012

All Things Oracle Webinar

Thanks to everyone who attended my webinar sponsored by Red Gate and All Things Oracle. My topic was "How to Gather SQL Resource Consumption Metrics in Oracle". The webinar recording has been posted at AllThingsOracle as of May 9 but I thought I'd also make the materials available here for anyone interested. In the zip file, you'll find a PDF of the presentation slides, several example reports (ASH, AWR, SQL Monitor) as well as several scripts I used and a text file containing all the demo queries and output.

I was also asked a couple of questions that I wanted to follow up on. First, someone asked about the use of dbms_sqltune. As it turns out, it does require an Oracle Tuning Pack license to legally use that package. That means SQL Monitor reports are off limits without that license. Sigh...

Another person asked how to create a SQL Profile, and I included a script I use that was created by Kerry Osborne called create_sql_profile_awr.sql. You can get that script and several others, along with a great overview of SQL Profiles, at Kerry's blog.

Again, thanks to everyone who attended and I hope you found it interesting and helpful. Thanks also to the folks at Red Gate and particularly James Murtaugh who leads the charge at AllThingsOracle.com. Keep your eyes on the site for upcoming webinars, great articles and more!

How to Gather SQL Resource Consumption Metrics in Oracle (2012 May 8)


Saturday, May 5, 2012

Love this quote

Tom Kyte has joined the fun on Twitter (@OracleAskTom) and tweeted this photo that I just had to steal and post here so I'd have it "on file" for future use.  :)


Great quote!

Thanks for this one Tom...and welcome to the madness that is Twitter.

Monday, April 16, 2012

The Magic of Doing One Thing At A Time

I just read an article from the Harvard Business Review entitled "The Magic of Doing One Thing at a Time" and thought it was excellent. I highly recommend reading the whole article (it's not long) but here are a few "policies worth promoting" from the article:

1. Maintain meeting discipline.
Among other things, this includes starting and ending meetings at a "precise" time and insisting that all digital devices be turned off throughout the meeting. Amen!
2. Stop demanding or expecting instant responsiveness at every moment of the day.
Boy, do I agree with this one! When did we get to the point that people must be available NOW, regardless of when now is? I understand that our world is often a 24/7 venture when it comes to technology, but if we never give people a chance to "turn off" what results are we really creating?
3. Encourage renewal.
Again, I really believe in this one. People arrive at the workplace after perhaps being up half the night dealing with a problem, eat lunch at their desks, and often never move until they finally leave the office hours after the normal end of the day (again, due to some problem or another). We all need to truly stop and decompress every now and then. I took up meditation several years ago and find that in 20 minutes I can feel more alert and focused than I did before I unplugged. Whatever way you find that works to help you get some distance and take a real break a couple of times a day, I think it's essential and am a firm believer in the power of renewal.


There's a lot more good stuff in the article and I hope you'll take a few minutes to read the whole thing. It sure rang a bell for me!

Saturday, April 7, 2012

Thanks for attending the webinar!

Thanks to everyone who attended my two Embarcadero sponsored webinar sessions. I promised to make the slides and some scripts demonstrated in the slides available so here they are:

Slide deck (pdf format)
Scripts (zip format)

Keep an eye out for more webinars I'll be conducting in the coming months!

Saturday, March 31, 2012

Best practices? I'm not so sure.

I'll be delivering a webinar this week that has the words "Best Practices" in the presentation title. As I've been preparing, it has really struck me how my definition of a best practice is not the traditional IT definition. Wikipedia says
A best practice is a method or technique that has consistently shown results superior to those achieved with other means, and that is used as a benchmark.
OK. That sounds reasonable, right? So, what's my problem? Well, if this definition is the traditionally accepted standard, there's something missing: context. I suppose you could infer that a specific context is implied for any best practice, but the inference isn't enough. An unfortunate side effect of the label "best practice", when it is attached to any method or technique, is that it makes people blind to the context in which the practice should or could be applied. Can you think of any examples where a practice that is "best" in one context is questionable in another? Sure you can (feel free to share yours in the comments!). But, so often, people will take a best practice at face value without considering the context of their own situation.

I think a best practice is really a contextual practice. I'm borrowing this term from Eric Ries, author of The Lean Startup. Ries writes that we should strive to understand the context in which we find ourselves, and then apply the practice that is best within that context. I couldn't agree more. It's not about blindly following steps or guidelines that may, or may not, be applicable and useful in your situation. It's about understanding. You've got to understand your situation and take steps that make sense in that context, not simply take steps that worked for some other person, in some other context, at some other time, just because they are labeled a "best practice".

My goal for my webinar's list of "best practices" is to offer ways to gain understanding and clarify the context of your specific situation when developing SQL. With a clearer understanding and proper context, you can have confidence to make the right choices that lead to optimal results. Come join me for the webinar and see if I succeed!

Friday, March 23, 2012

April 5 Webinar (sponsored by Embarcadero)

I'll be conducting a webinar entitled Best Practices for Developing Optimally Performing SQL on April 5. Embarcadero is sponsoring the event as part of their commitment to support the Oracle community with growth and learning opportunities.

The webinar will be offered twice on April 5.
- 6:00 am PDT / 9:00 am EDT / 13:00 GMT / 2 pm in UK
or
- 11:00am PDT / 2:00pm EDT / 18:00 GMT / 7 pm in UK

I'm normally hesitant to use the label "Best Practices": things often get labeled as such, people forget to test and verify them in the context of their own situations, and they end up creating as many problems as they solve. So my list of best practices doesn't include specific do's and don'ts. My list is more about how to approach SQL development, with a strong emphasis on several key areas:
  • Plan first - There's almost always more than one way to write a SQL statement. Cover your bases.
  • Ask questions - You have to "tune the question, not the query". The more questions you ask, the deeper your understanding.
  • Focus on the journey, not the destination - If you are only focused on what the query answer/result is, you'll likely overlook how you write SQL. Typically, how and why are more important questions to ask.
  • Gather data - You have to know/understand the resource consumption of your SQL in order to know if it will perform well (both short and long term).
  • Play "what if" - Try to determine what would happen if certain conditions occurred. Things like data volume increase and number of users can have an effect on how your query will perform. "What if" games will help you think about scalability.
  • Instrument your code - Add a way to monitor and trace your code easily on demand.
  • TTT (Test To Destruction) - Test, test, test....then test some more!
I'm looking forward to the event and hope you'll join me for the fun! Register soon and save your spot.

Wednesday, March 21, 2012

Starting young





I think this is seriously awesome. This group of students is using the Beginning Oracle SQL book I co-authored in their high school Oracle class. Yes, that's right: I said high school Oracle class. I just recently became aware that a teacher at this particular Colorado high school has been including Oracle curriculum in her computer science class for a decade or so. Apparently, these kids are using the book to help them prepare for the Oracle SQL certification exam.

It amazes me that these high school students are getting exposed to Oracle this early. Not only are they getting exposed but they are learning enough to take (and pass) the Oracle SQL exam. I even heard there have been some students over the years who have completed their DBA certifications as a result of this program. Heck, when I was in high school I had never touched a computer. Of course that was back in the stone age before computers were as prevalent as bottled water (which also wasn't really around when I was a kid)!

The fact that this educator has put this in her curriculum and kept it there for so long is something special. She also takes some of her students to the RMOUG (Rocky Mountain Oracle Users Group) Training Days each year. That shows real interest in the kids and a desire to let them see how their education is put into action in the real world. I think such an experience would have had a huge impact on me at that age.

So, hats off to this group of students and their teacher. Here's to all of them and the next generation of Oracle professionals!

Wednesday, March 14, 2012

What's important

In reviewing my blog activity over the past few years, I noticed that the first two years I had the blog I averaged about one post per week. The next year, my posts fell off to about one per month. Then last year, I only had 3 posts! Talk about decline!

I want to revitalize my blog and post 1-2 times per month. Given my blogging history, I realize this is a big change to commit to. I think part of me being successful is going to hinge on understanding where making time to blog more regularly fits into my life. My lack of posts hasn't been due to a lack of material or ideas...I have tons of those. My passion for work and interest in continuing to learn and grow professionally is as strong as ever. But I've found that almost everything has taken a back seat to spending time with my family. I get frustrated at times because I do want to blog and work more, but I don't want to be away from my family and I'm jealous of every minute of my time. One thing my daughter has taught me is that every moment, no matter how simple or ordinary it is, can be filled with wonder and joy. And, I really want to appreciate these moments fully. Now.






So, I'm going to work towards spending a few moments each day to compose some ideas and put them into posts that show up more frequently here. To each of you who read this blog, thank you. I'll do my best to provide useful information (along with some personal thoughts, rants and fun) so that the moments you graciously take to stop by and read are well spent.

Saturday, March 10, 2012

Mac|Life and Tom Kyte

I usually have my nose in my Kindle or iPad while traveling, but there's always the stretch between take-off and 10,000 feet, when electronics are verboten, during which I need something to occupy my time. As my time filler, I will often pick up a magazine, and on yesterday's trip home from the Hotsos Symposium in Dallas, I grabbed a Mac|Life magazine to read.

I l-o-v-e my Mac. My entire household is Mac - MacBook Pro, iMac, iPhone, iPad, iTouch, and a Nano...the works. So I get a kick out of picking up a Mac related publication to read and always find some cool stuff lurking in the pages that I didn't know about. This time, however, I found something that I knew quite a lot about, and it really tickled me when I saw it. At the back of the magazine they have a How-To article, and this month's article was on how to read PDFs with iBooks. The thing that most caught my notice was the use of Tom Kyte's book "Expert Oracle Database Architecture" in the demo. Click on the supplied link to see the whole article, but here's a quick shot of what originally caught my eye in the magazine:


I loved it that Tom's book was the example book they chose to demo the how-to. What can I say? The geek in me had spasms when my Mac and Oracle lives crossed paths like that. Admittedly, if it had been one of the Oracle books I had co-authored that they had used, you'd have had to pull me off the ceiling, but Tom's book was still pretty spiffy.  :)

Wednesday, March 7, 2012

Another Hotsos Symposium in the books

Another Hotsos Symposium is in the books (well, the main Symposium is over, with only the Training Day remaining tomorrow). As usual, it was a phenomenal event. Hotsos has delivering a 5 Diamond event down to a science, and it just seems to get better every year. My thanks go out to everyone at Hotsos who works so hard to make this event happen (special nod to Rhonda B and Becky G...sounds like a new girl band, huh?). It's a week chock full of the best technical information on Oracle performance you're going to find anywhere, a lot of great food, superb networking and "get-to-know-you" time with other attendees, and (last, but not least) a fantastic themed party night to top it all off. This year's party theme was "Steampunk". The attire I donned for this photo is indicative of the genre, if you're wondering what the heck Steampunk is (more photos here).

Thanks also to everyone who came to hear me speak. I delivered two presentations this year and had fun with both.

I hope everyone who attended had as good a time as I did and for those who missed it, there's always next year!

Saturday, January 21, 2012

Hotsos Symposium

My favorite conference of the year is just around the corner.

The Hotsos Symposium will be held March 4-8 and will be celebrating its tenth anniversary. I'll be speaking again this year and, as always, am eagerly looking forward to the week. Every year the speakers, topics and opportunities for networking just keep getting better, and I'm sure this year will be no different. This year's Training Day event will be conducted by Jonathan Lewis, so make sure to stay the extra day to take advantage of spending it with him as he discusses "Designing Optimal SQL".

If you haven't signed up yet, there's still time. I'll look forward to seeing you there!