I work closely with the DBA when we are looking at performance. Most performance issues are related to poorly running SQL, and there are tools and techniques you can use to improve slow or high-cost queries that are hogging resources on the system. This article describes a collection of tips I have learnt over some 20 years of developing and supporting Oracle Databases, and Oracle EBS in particular. Many of these issues can be found by modern tools such as the SQL Tuning Advisor, AWR reports, and Enterprise Manager; however, some problems are more elusive and need some thinking outside the box.
Database and application performance is influenced heavily by the SQL running on your database. SQL queries are used to select, insert, update and delete data. A query can be built automatically by design tools such as Discoverer, OBIEE or JDeveloper, or it may come from a seeded or custom PL/SQL programme or script. A query can be native SQL against the underlying database tables or views, but some use custom PL/SQL packages or functions to derive data from more complex calculations.
A number of design decisions will influence the performance of your query, such as effective use of selection criteria, user-entered parameters, tables, join order, partitions and index utilization. However, sometimes you will find that a COTS package like EBS has been optimized for a single ledger (or set of books); should your instance contain multiple ledgers, performance can change significantly as the data from the different entities grows. Another common problem is implicit conversion, say where you store an account number in a VARCHAR2 column and mistakenly join it to a NUMBER column, or supply a parameter in NUMBER format; this ordinarily prevents use of any index and forces a full table scan.
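A minimal sketch of the implicit-conversion trap, assuming a hypothetical INVOICES table whose ACCOUNT_NUMBER column is VARCHAR2 and indexed:

```sql
-- ACCOUNT_NUMBER is VARCHAR2; comparing it to a NUMBER literal makes
-- Oracle apply TO_NUMBER() to the column itself, which disables the
-- index and drives a full table scan:
SELECT invoice_id, amount
FROM   invoices
WHERE  account_number = 10045;   -- becomes TO_NUMBER(account_number) = 10045

-- Matching the column's datatype keeps the index usable:
SELECT invoice_id, amount
FROM   invoices
WHERE  account_number = '10045';
```

The same applies to bind variables: declare them with the same datatype as the column they are compared against.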
All Oracle databases maintain statistics on the objects and queries on your database. These stats should be maintained by your DBA, and will often form part of their schedule of database maintenance tasks. One of the first things to check is that DBMS_STATS is running at the correct frequency for all schemas on the database.
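One quick check on stats freshness is to look at LAST_ANALYZED in the data dictionary for the schema you are investigating (the schema name and 7-day threshold below are just examples):

```sql
-- Tables whose optimizer statistics are missing or more than
-- 7 days old for a given schema:
SELECT table_name, num_rows, last_analyzed
FROM   dba_tables
WHERE  owner = 'APPS'
AND    (last_analyzed IS NULL OR last_analyzed < SYSDATE - 7)
ORDER  BY last_analyzed NULLS FIRST;
```

A NULL LAST_ANALYZED on a large, busy table is usually worth raising with your DBA straight away.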
Every query is allocated an execution plan and a query cost. The cost is a number assigned by the optimizer which represents the estimated work involved in running the query; it has no units of measure. The database parses a query when it hits the server and chooses the best plan based on the current statistics. Some time after initial release it may choose a new plan, which can result in very different performance. This is when your regular users call the helpdesk complaining that a report suddenly takes 10x as long to run as it did yesterday. The cost is heavily influenced by the state of the current database statistics: the optimizer is pretty good, but sometimes it can get it wrong.
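You can preview the plan and cost the optimizer assigns using EXPLAIN PLAN and DBMS_XPLAN; for example (the table name here is hypothetical):

```sql
-- Ask the optimizer to preview the query without executing it:
EXPLAIN PLAN FOR
SELECT customer_id, SUM(amount)
FROM   invoices
GROUP  BY customer_id;

-- Display the chosen plan, including the unitless Cost column:
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);
```

Comparing the plan for a fast run against the plan for a slow run is often the quickest way to confirm a plan change is the culprit.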
There are several ways you can identify poor running queries.
Make sure Gather Stats has been run for the tables in the schemas involved, at the appropriate time intervals and at the optimum percentages. This is more of a DBA task than a developer task, but it does no harm to ask the question.
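A typical DBMS_STATS call your DBA might schedule looks like the following sketch (the schema name is a placeholder):

```sql
-- Gather statistics for one schema, letting Oracle choose the
-- sample percentage (AUTO_SAMPLE_SIZE is the recommended default):
BEGIN
   DBMS_STATS.GATHER_SCHEMA_STATS(
      ownname          => 'APPS',
      estimate_percent => DBMS_STATS.AUTO_SAMPLE_SIZE,
      cascade          => TRUE);   -- also gather stats on the indexes
END;
/
```

On EBS systems the supplied FND_STATS concurrent programmes should be used instead of calling DBMS_STATS directly, but the underlying idea is the same.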
You need to decide when a query should be considered poor in its particular circumstances. Consider when it is run, how long it takes, how much load it places on the system, and so forth.
You need to decide on its use case and the urgency, in terms of how badly it is affecting the users or the wider system.
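One practical way to find the current worst offenders is to query V$SQL, ordering by total elapsed time (this requires access to the V$ views, and the top-10 cut-off is arbitrary):

```sql
-- Top 10 statements by total elapsed time still in the shared pool:
SELECT *
FROM  (SELECT sql_id,
              executions,
              ROUND(elapsed_time / 1e6) AS elapsed_secs,
              buffer_gets,
              SUBSTR(sql_text, 1, 60)   AS sql_text
       FROM   v$sql
       ORDER  BY elapsed_time DESC)
WHERE rownum <= 10;
```

The SQL_ID returned can then be fed into AWR, the Tuning Advisor, or DBMS_XPLAN for deeper analysis.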
Next Page > Tuning - Costs