Introduction
We are in the era of AI, where growing and increasingly complex workloads demand ever more processing power. As enterprise systems scale, database performance challenges become increasingly common, especially under high-throughput operations. In my experience, no other database platform matches Oracle’s flexibility and tooling when it comes to identifying and resolving such performance issues.
Recently, we encountered a performance bottleneck in an Exadata Cloud@Customer (ExaCS) environment, where the database was handling a high volume of insert operations coming from a Kafka stream.
In this article, I’ll walk through the technical details of the SQL version count issue we faced and the solution we implemented to stabilize performance.
The Issue: Excessive SQL Versioning and Hard Parses
The ExaCS database was receiving a continuous stream of dynamically structured INSERT statements from Kafka. The column structure of these inserts varied significantly: some contained 100 columns, while others had up to 150, and the variation was ongoing and unpredictable.
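To illustrate the pattern (the table and column names below are hypothetical, not taken from the actual workload), two consecutive messages from the stream could translate into statements like these, which differ in their column lists and therefore in their SQL text:

```sql
-- Hypothetical example: two inserts from the same stream with different
-- column sets. Oracle hashes the full SQL text, so each distinct column
-- list is treated as a different statement and triggers its own hard parse.
INSERT INTO events (event_id, event_ts, source, payload_col1, payload_col2)
VALUES (:1, :2, :3, :4, :5);

INSERT INTO events (event_id, event_ts, source, payload_col1, payload_col2, payload_col3)
VALUES (:1, :2, :3, :4, :5, :6);
```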
Due to these structural differences, Oracle’s optimizer treated each statement as a unique SQL. As a result, the database began to experience:
- Excessive hard parsing
- Rapid growth in cursor version counts
- Shared pool memory pressure
- Elevated CPU usage
Even though we were running Oracle 19.24, which includes enhancements to SQL plan management and version count handling, the optimizer still created new cursor versions for each structurally distinct INSERT statement, which led to rapid cursor growth and degraded overall performance.
Figure 1: AWR report for Oracle SQL ID version count
Temporary Workaround: Manual Flushing of High Version Count Cursors
As an immediate mitigation step, we identified the SQLs with high version counts and manually flushed them from the shared pool using their memory addresses and hash values. This helped temporarily relieve pressure on CPU and memory by:
- Reducing shared pool bloat
- Freeing up memory consumed by excessive cursor versions
- Preventing further hard parsing on the same overloaded SQL
However, it's important to note that this is only a temporary workaround. The relief is short-lived, as the issue resurfaces once new INSERT statements with varying structures continue streaming in from Kafka.
To clarify, this issue has not been resolved in Oracle 19.24, even though that release includes several recent patches and updates. Here’s the output from the environment confirming the exact patch level:
[oracle@exaprd01-node01 ~]$ $ORACLE_HOME/OPatch/opatch lspatches
34697081;NOT SHIPPING LIBAUTH_SDK_IAM.SO IN 23 SHIPHOME INSTALL
36538667;JDK BUNDLE PATCH 19.0.0.0.240716
36414915;OJVM RELEASE UPDATE: 19.24.0.0.240716 (36414915)
36587798;OCW RELEASE UPDATE 19.24.0.0.0 (36587798)
36582781;Database Release Update : 19.24.0.0.240716 (36582781)
OPatch succeeded.
[oracle@exaprd01-node01 ~]$
To monitor and identify SQL statements with high version counts (typically those contributing to shared pool pressure), you can use the following query:
SELECT version_count, sql_id, sql_text FROM v$sqlarea WHERE version_count > 512;
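For a specific SQL ID, it is also worth checking why Oracle refused to share the existing child cursors. V$SQL_SHARED_CURSOR exposes one Y/N flag column per mismatch reason; the sketch below is a diagnostic I would run under these symptoms (it was not part of the original troubleshooting) and counts children for a few common culprits:

```sql
-- Count child cursors per sharing-failure reason for one SQL ID.
-- The columns shown are a small subset; V$SQL_SHARED_CURSOR has
-- dozens of Y/N mismatch flags.
SELECT sql_id,
       SUM(DECODE(bind_length_upgradeable, 'Y', 1, 0)) AS bind_len_upgrade,
       SUM(DECODE(bind_equiv_failure,      'Y', 1, 0)) AS bind_equiv,
       SUM(DECODE(optimizer_mismatch,      'Y', 1, 0)) AS opt_mismatch,
       COUNT(*)                                        AS child_cursors
FROM   v$sql_shared_cursor
WHERE  sql_id = '&sqlid'
GROUP  BY sql_id;
```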
For any SQLs with unusually high version counts, manual flushing can be performed as a short-term mitigation step using the following commands:
SELECT inst_id, address, hash_value FROM v$sqlarea WHERE sql_id LIKE '&sqlid';
EXEC sys.DBMS_SHARED_POOL.PURGE('-ADDRESS-, -HASH-VALUE-', 'C');
Note: Use manual flushing with caution, especially in production environments, as it may impact performance for frequently executed queries.
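When many cursors are affected, the lookup and purge steps can be combined into a small PL/SQL loop that flushes every cursor over a chosen threshold in one pass. This is only a sketch (the 512 threshold mirrors the monitoring query above), and the same caution about flushing in production applies:

```sql
-- Purge all cursors whose version count exceeds a threshold.
-- Requires EXECUTE on SYS.DBMS_SHARED_POOL; use with the same
-- caution as the manual commands above.
BEGIN
  FOR c IN (SELECT address, hash_value
            FROM   v$sqlarea
            WHERE  version_count > 512)
  LOOP
    -- PURGE expects the literal 'address, hash_value' string and
    -- the flag 'C' for a cursor object.
    SYS.DBMS_SHARED_POOL.PURGE(
      RAWTOHEX(c.address) || ', ' || c.hash_value, 'C');
  END LOOP;
END;
/
```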
Permanent Solution
To address the high version count issue more permanently, Oracle provides specific guidance in the following My Oracle Support note:
High Version Counts For SQL Statements (>1024) Post Upgrade To 12.2 and Above Causing Database Slow Performance (Doc ID 2431353.1)
As per the recommendation, you should set the following initialization parameter: _cursor_obsolete_threshold = <recommended_value>
This parameter helps control the number of obsolete child cursors and can significantly reduce version count growth, improving shared pool performance and overall database stability.
If you’re running in a RAC environment, apply this change and restart the database instances in a rolling fashion to avoid downtime.
alter system set "_cursor_obsolete_threshold"=1024 scope=spfile;
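Because underscore parameters are hidden, they do not appear in V$PARAMETER by default. One way to verify the value after the rolling restart is the classic X$ join, which must be run as SYS (shown here as a sketch):

```sql
-- Verify the hidden parameter's current value (run as SYS).
SELECT x.ksppinm  AS name,
       y.ksppstvl AS value
FROM   x$ksppi  x,
       x$ksppcv y
WHERE  x.indx = y.indx
AND    x.ksppinm = '_cursor_obsolete_threshold';
```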
Conclusion
This issue showed us how dynamic SQL from systems like Kafka can create serious performance problems, even in powerful environments like Exadata Cloud@Customer. Because each INSERT had a slightly different structure, Oracle treated them as new statements, leading to too many cursor versions, high CPU usage, and shared pool pressure.
Even though we were on Oracle 19.24, the problem still occurred. The key was identifying the root cause and taking action, monitoring version counts, applying a temporary fix, and then implementing a permanent solution using the _cursor_obsolete_threshold parameter.
In short, managing SQL behavior and understanding how Oracle handles different workloads is critical for keeping your systems running smoothly, especially in today’s fast-moving, high-volume environments.