John Watson's blog

How indexes can degrade performance


Indexes may improve the performance of SELECT statements, but what about DML? This simple demo shows how bad they can be.
First, I'll create a table and insert a million rows:

orclz>
orclz> create table t1 (c1 number);

Table created.

orclz> set timing on
orclz> insert into t1 select rownum from dual connect by level <= 1000000;

1000000 rows created.

Elapsed: 00:00:02.58
orclz>
And now repeat the test, but this time with the column indexed:
orclz>
orclz> drop table t1;

Table dropped.

Elapsed: 00:00:00.13
orclz> create table t1 (c1 number);

Table created.

Elapsed: 00:00:00.01
orclz> create index i1 on t1 (c1);

Index created.

Elapsed: 00:00:00.02
orclz> insert into t1 select rownum from dual connect by level <= 1000000;

1000000 rows created.

Elapsed: 00:00:10.29
orclz>

Nearly four times as slow! And that is just one simple numeric index. I see tables with twenty or thirty indexes: wide compound indexes, complex function-based indexes. The effect on high-volume DML can be devastating. Of course this simple, not very scientific, test may not apply to your environment, but it serves to emphasize the point that indexes have a cost. Be sure that you really need them.
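The effect is not Oracle-specific: any engine that maintains indexes synchronously pays a price on DML. Here is a portable sketch of the same test using Python's sqlite3 (so the absolute numbers are only illustrative, not comparable to the Oracle timings above):

```python
# Illustrative only: sqlite3, not Oracle, but the same idea -
# every inserted row also costs an index maintenance operation.
import sqlite3
import time

def timed_insert(with_index, n=200_000):
    con = sqlite3.connect(":memory:")
    con.execute("create table t1 (c1 integer)")
    if with_index:
        con.execute("create index i1 on t1 (c1)")
    start = time.perf_counter()
    con.executemany("insert into t1 values (?)", ((i,) for i in range(n)))
    con.commit()
    elapsed = time.perf_counter() - start
    rows = con.execute("select count(*) from t1").fetchone()[0]
    con.close()
    return elapsed, rows

plain, rows_plain = timed_insert(with_index=False)
indexed, rows_indexed = timed_insert(with_index=True)
print(f"no index: {plain:.2f}s  with index: {indexed:.2f}s")
```

On most machines the indexed insert is noticeably slower, for the same reason as in the SQL*Plus demo: each row insertion also has to maintain the index structure.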
Tests done using DB release 12.1.0.2, Windows 10, Dell laptop with SSD disc.
--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com

Oracle 12cR2 - the next release, cloud only?


Database release 12.2 has been in beta for nearly a year; see the announcement here:
Oracle Announces Beta Availability of Oracle Database 12c Release 2
There have been a few hints of what is coming. For example: Multitenant will be able to host 4096 pluggable containers instead of "only" 252; enhancements to Global Data Services and high availability; distributed systems with sharding; some long overdue improvements to In-Memory; lots more. But when will we get it? From hints in some TARs I've had to raise, I was expecting it to be released in July, along with the July PSU - but it wasn't. So all I have to go on is the declared statement that it will be in the second half of the calendar year. Not too much of that left.
So far, so good - but that doesn't mean it will be available for everyone. See this,
Oracle Confirms 12.2 Database Release Will Be Cloud-Only At First
It looks as though Oracle Corporation is going to use 12cR2 as an incentive to move users onto Oracle Cloud hosted systems. I know from various sources that the Oracle Sales line of business is being heavily incentivised to sell Cloud, with sales of on-premises licences being discouraged. This move will also encourage users to move from third party cloud services (such as Amazon RDS) to the Oracle Cloud. So overall, if you aren't going to the Oracle Cloud, then you aren't going anywhere.
Of course there are many users who will never move to cloud hosted services. But for everyone else, it is time to investigate them. I've been working on Oracle Cloud for some time now. It's pretty good. In many ways, you wouldn't know that it is not your own machine. You can get started cheaply and easily, buying a few credits directly or through a partner. If you want 12.2, there may be no alternative for some considerable time.
--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com

How to change column order when using SELECT *


Not infrequently, I see questions in forums such as "how can I add a column to a table, in between two existing columns?" The answers given are always (a) you can't, and (b) why do you want to?
I do not intend to address (b) in any detail. The reason is usually that the developer is using * in the column projection list of their SELECTs and/or not specifying a column list in their INSERTs. There are many reasons why these are poor programming practice.
But let us assume that for some legitimate reason, it is necessary to insert a new column between existing ones. Here is a 12c technique for doing it. Consider the table SCOTT.DEPT:

orclz> desc dept
 Name                                                        Null?    Type
 ----------------------------------------------------------- -------- ----------------------------------------
 DEPTNO                                                      NOT NULL NUMBER(2)
 DNAME                                                                VARCHAR2(14)
 LOC                                                                  VARCHAR2(13)

orclz>
and a requirement to add a new column TOTSAL in between the columns DNAME and LOC. Add the column, and see where it is:
orclz>
orclz> alter table dept add (totsal number);

Table altered.

orclz> desc dept;
 Name                                                        Null?    Type
 ----------------------------------------------------------- -------- ----------------------------------------
 DEPTNO                                                      NOT NULL NUMBER(2)
 DNAME                                                                VARCHAR2(14)
 LOC                                                                  VARCHAR2(13)
 TOTSAL                                                               NUMBER

orclz> select * from dept;

    DEPTNO DNAME          LOC               TOTSAL
---------- -------------- ------------- ----------
        10 ACCOUNTING     NEW YORK
        20 RESEARCH       DALLAS
        30 SALES          CHICAGO
        40 OPERATIONS     BOSTON

orclz>
and the new column is at the end. Now a little 12c hack: make LOC invisible, then visible again:
orclz>
orclz> alter table dept modify (loc invisible);

Table altered.

orclz> select * from dept;

    DEPTNO DNAME              TOTSAL
---------- -------------- ----------
        10 ACCOUNTING
        20 RESEARCH
        30 SALES
        40 OPERATIONS

orclz> alter table dept modify (loc visible);

Table altered.

orclz> select * from dept;

    DEPTNO DNAME              TOTSAL LOC
---------- -------------- ---------- -------------
        10 ACCOUNTING                NEW YORK
        20 RESEARCH                  DALLAS
        30 SALES                     CHICAGO
        40 OPERATIONS                BOSTON

orclz>
How about that? I've adjusted the column order!
Let's try to work out what may be happening:
orclz>
orclz> select column_name,hidden_column,column_id from user_tab_cols where table_name='DEPT';

COLUMN_NAME                    HID  COLUMN_ID
------------------------------ --- ----------
DEPTNO                         NO           1
DNAME                          NO           2
LOC                            NO           4
TOTSAL                         NO           3

orclz> desc dept
 Name                                                        Null?    Type
 ----------------------------------------------------------- -------- ----------------------------------------
 DEPTNO                                                      NOT NULL NUMBER(2)
 DNAME                                                                VARCHAR2(14)
 TOTSAL                                                               NUMBER
 LOC                                                                  VARCHAR2(13)

orclz> alter table dept modify (dname invisible);

Table altered.

orclz> desc dept
 Name                                                        Null?    Type
 ----------------------------------------------------------- -------- ----------------------------------------
 DEPTNO                                                      NOT NULL NUMBER(2)
 TOTSAL                                                               NUMBER
 LOC                                                                  VARCHAR2(13)

orclz> select column_name,hidden_column,column_id from user_tab_cols where table_name='DEPT';

COLUMN_NAME                    HID  COLUMN_ID
------------------------------ --- ----------
TOTSAL                         NO           2
LOC                            NO           3
DNAME                          YES
DEPTNO                         NO           1

orclz> alter table dept modify (dname visible);

Table altered.

orclz> desc dept
 Name                                                        Null?    Type
 ----------------------------------------------------------- -------- ----------------------------------------
 DEPTNO                                                      NOT NULL NUMBER(2)
 TOTSAL                                                               NUMBER
 LOC                                                                  VARCHAR2(13)
 DNAME                                                                VARCHAR2(14)

orclz> select column_name,hidden_column,column_id from user_tab_cols where table_name='DEPT';

COLUMN_NAME                    HID  COLUMN_ID
------------------------------ --- ----------
TOTSAL                         NO           2
LOC                            NO           3
DNAME                          NO           4
DEPTNO                         NO           1

orclz> select * from dept;

    DEPTNO     TOTSAL LOC           DNAME
---------- ---------- ------------- --------------
        10            NEW YORK      ACCOUNTING
        20            DALLAS        RESEARCH
        30            CHICAGO       SALES
        40            BOSTON        OPERATIONS

orclz>
It would seem that marking a column invisible sets its COLUMN_ID to NULL and adjusts the COLUMN_ID of all the other columns accordingly. Then, when the column is made visible again, it is assigned the next available number, and the column sequence is determined accordingly.
How much use is this trick? Well, it could be a quick get-you-out-of-trouble if you have to change column ordering. A better (and supported) solution would be to cover the table with a view. And the real solution is not to use SELECT * but rather to specify a column projection list.
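The view approach is simple enough to sketch. This example uses Python's sqlite3 for portability (sqlite has no invisible columns, but the "cover the table with a view" technique is engine-independent; the table and view names here are made up for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table dept (deptno integer, dname text, loc text)")
# A newly added column always lands at the end of the table:
con.execute("alter table dept add column totsal integer")
con.execute("insert into dept values (10, 'ACCOUNTING', 'NEW YORK', NULL)")

# Cover the table with a view that presents the desired column order;
# applications then SELECT * from the view, not the table.
con.execute("create view dept_v as select deptno, dname, totsal, loc from dept")

cols = [d[0] for d in con.execute("select * from dept_v").description]
print(cols)  # ['deptno', 'dname', 'totsal', 'loc']
```

Unlike the invisible/visible hack, the view survives further table changes and is supported behaviour everywhere.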
--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com

Is row-by-row processing really slow-by-slow? Emphatically: YES


Often I see code that uses cursor loops to manage rows individually. If performance is an issue, I always try to see if the loops can be replaced with multi-row SQL statements. This (not very scientific) test shows the performance of updating a set of rows more than doubling when the cursor loop is replaced with a single statement:

orclz>
orclz> alter system flush buffer_cache;

System altered.

Elapsed: 00:00:00.27
orclz>
orclz> DECLARE
  2    CURSOR c_sales IS
  3      SELECT * from sales FOR UPDATE;
  4  BEGIN
  5    FOR row IN c_sales
  6    LOOP
  7      UPDATE sales SET amount_sold = amount_sold+1 WHERE CURRENT OF c_sales;
  8    END LOOP;
  9  END;
 10  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:58.28
orclz>
orclz> rollback;

Rollback complete.

Elapsed: 00:00:22.18
orclz> alter system flush buffer_cache;

System altered.

Elapsed: 00:00:00.18
orclz>
orclz> UPDATE sales SET amount_sold = amount_sold+1;

918843 rows updated.

Elapsed: 00:00:25.66
orclz> rollback;

Rollback complete.

Elapsed: 00:00:20.92
orclz>
When one considers the use of indexes (not touched in my example) and the possibilities of parallel processing, the benefits of simple SQL may become even more obvious. Of course there are times when cursor loops are needed, but the default position must be "don't use them unless you have to".
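The pattern is easy to reproduce on any engine. A hedged sketch using Python's sqlite3 (not Oracle, so the timings are only indicative; the table and column names mirror the demo above but the data is made up):

```python
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("create table sales (id integer primary key, amount_sold real)")
con.executemany("insert into sales values (?, ?)",
                ((i, float(i)) for i in range(100_000)))

# Row-by-row ("slow-by-slow"): fetch every key, then one UPDATE per row.
t0 = time.perf_counter()
for (rowid,) in con.execute("select id from sales").fetchall():
    con.execute("update sales set amount_sold = amount_sold + 1 where id = ?",
                (rowid,))
row_by_row = time.perf_counter() - t0

# Set-based: one statement touches every row.
t0 = time.perf_counter()
con.execute("update sales set amount_sold = amount_sold + 1")
set_based = time.perf_counter() - t0

total = con.execute("select sum(amount_sold) from sales").fetchone()[0]
print(f"row-by-row {row_by_row:.2f}s, set-based {set_based:.2f}s")
```

Both approaches leave the data in an identical state; the per-row version simply pays statement overhead a hundred thousand times.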
--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com

12cR2 new feature: online table move


I'm sure all DBAs know the ALTER TABLE MOVE command - and its problems. See here:

C:\Users\john>sqlplus scott/tiger@x122

SQL*Plus: Release 12.1.0.2.0 Production on Wed Oct 19 13:44:32 2016

Copyright (c) 1982, 2014, Oracle.  All rights reserved.

Last Successful login time: Wed Oct 19 2016 13:30:14 +01:00

Connected to:
Oracle Database 12c Enterprise Edition Release 12.2.0.0.3 - 64bit Production

x122> alter table emp move tablespace example;

Table altered.

x122> delete from emp where rownum=1;
delete from emp where rownum=1
*
ERROR at line 1:
ORA-01502: index 'SCOTT.PK_EMP' or partition of such index is in unusable state


x122> select index_name,status from user_indexes;

INDEX_NAME                     STATUS
------------------------------ --------
PK_DEPT                        VALID
PK_EMP                         UNUSABLE

x122> alter index pk_emp rebuild;

Index altered.

x122>
Not only is the table locked while the move is in progress, but the move also broke all the indexes on the table. That is massive downtime. But this is release 12.2. Take a look at this syntax:
x122>
x122> alter table emp move tablespace users online update indexes;

Table altered.

x122> select index_name,status from user_indexes;

INDEX_NAME                     STATUS
------------------------------ --------
PK_DEPT                        VALID
PK_EMP                         VALID

x122>
The objects remain usable throughout and after the entire operation. You can move any LOBs, too.
How cool is that?
--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com

12cR2 lots of new instance parameters


Comparing database 12cR1 release 12.1.0.2 to 12cR2 release 12.2.0.0.3, I see these new parameters:

adg_imc_enabled                               TRUE                 Enable IMC support on ADG                                                                                                  
allow_global_dblinks                          FALSE                LDAP lookup for DBLINKS                                                                                                    
allow_group_access_to_sga                     FALSE                Allow read access for SGA to users of Oracle owner group                                                                   
approx_for_aggregation                        FALSE                Replace exact aggregation with approximate aggregation                                                                     
approx_for_count_distinct                     FALSE                Replace count distinct with approx_count_distinct                                                                          
approx_for_percentile                         none                 Replace percentile_* with approx_percentile                                                                                
asm_io_processes                              0                    number of I/O processes per domain in the ASM IOSERVER instance                                                            
cdb_cluster                                   FALSE                if TRUE startup in CDB Cluster mode                                                                                        
cdb_cluster_name                              cfcdba1              CDB Cluster name                                                                                                           
clonedb_dir                                                        CloneDB Directory                                                                                                          
containers_parallel_degree                    65535                Parallel degree for a CONTAINERS() query                                                                                   
cursor_invalidation                           IMMEDIATE            default for DDL cursor invalidation semantics                                                                              
data_guard_sync_latency                       0                    Data Guard SYNC latency                                                                                                    
data_transfer_cache_size                      0                    Size of data transfer cache                                                                                                
default_sharing                               metadata             Default sharing clause                                                                                                     
disable_pdb_feature                           0                    Disable features                                                                                                           
enable_dnfs_dispatcher                        FALSE                Enable DNFS Dispatcher                                                                                                     
enable_pdb_isolation                          FALSE                Enables Pluggable Database isolation inside a CDB                                                                          
enabled_PDBs_on_standby                       *                    List of Enabled PDB patterns                                                                                               
encrypt_new_tablespaces                       ALWAYS               whether to encrypt newly created tablespaces                                                                               
external_keystore_credential_location         +DATA/wallets/tde/us external keystore credential location                                                                                      
inmemory_expressions_capture                  DISABLE              Controls detection of frequently used costly expressions                                                                   
inmemory_expressions_usage                    ENABLE               Controls which In-Memory Expressions are populated in-memory                                                               
inmemory_virtual_columns                      ENABLE               Controls which user-defined virtual columns are stored in-memory                                                           
instance_abort_delay_time                     0                    time to delay an internal initiated abort (in seconds)                                                                     
instance_mode                                 READ-WRITE           indicates whether the instance read-only or read-write or read-mostly                                                      
long_module_action                            TRUE                 Use longer module and action                                                                                               
max_idle_time                                 60                   maximum session idle time in minutes                                                                                       
max_iops                                      0                    MAX IO per second                                                                                                          
max_mbps                                      0                    MAX MB per second                                                                                                          
ofs_threads                                   4                    Number of OFS threads                                                                                                      
one_step_plugin_for_pdb_with_tde              FALSE                Facilitate one-step plugin for PDB with TDE encrypted data                                                                 
optimizer_adaptive_plans                      TRUE                 controls all types of adaptive plans                                                                                       
optimizer_adaptive_statistics                 FALSE                controls all types of adaptive statistics                                                                                  
outbound_dblink_protocols                     ALL                  Outbound DBLINK Protocols allowed                                                                                          
pga_aggregate_xmem_limit                      0                    limit of aggregate PGA XMEM memory consumed by the instance                                                                
remote_recovery_file_dest                                          default remote database recovery file location for refresh/relocate                                                        
sec_protocol_allow_deprecated_rpcs            YES                  Allow deprecated TTC RPCs                                                                                                  
standby_db_preserve_states                    NONE                 Preserve state cross standby role transition                                                                               
target_pdbs                                   525                  Parameter is a hint to adjust certain attributes of the CDB                                                                
uniform_log_timestamp_format                  TRUE                 use uniform timestamp formats vs pre-12.2 formats 

A lot of them are to do with Multitenant. The others? Well, some are fairly obvious, some used to be _underscore parameters. As for the others, I'll have to wait until I can study the docs.

--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com

Compression test, 12cR2


Hybrid Columnar Compression (HCC) is available only on Oracle-supplied storage, such as a ZFS storage appliance or (as in this case) an Exadata engineered system. Furthermore, it occurs only for direct loads: in my examples, using CTAS. This is the script I'm running for this (not very scientific) test:

set timing on
create table t1 as select * from all_objects;
create table t1_bas row store compress basic as select * from all_objects;
create table t1_adv row store compress advanced as select * from all_objects;
create table t1_cql column store compress for query low as select * from all_objects;
create table t1_cqh column store compress for query high as select * from all_objects;
create table t1_cal column store compress for archive low as select * from all_objects;
create table t1_cah column store compress for archive high as select * from all_objects;

select table_name,blocks from user_tables where table_name like 'T1%' order by 2;

The script creates a table using no compression, then using the basic and advanced de-duplication methods, then the four HCC algorithms. Here's what happens:
x122>
x122> set timing on
x122> create table t1 as select * from all_objects;

Table created.

Elapsed: 00:00:01.71
x122> create table t1_bas row store compress basic as select * from all_objects;

Table created.

Elapsed: 00:00:01.54
x122> create table t1_adv row store compress advanced as select * from all_objects;

Table created.

Elapsed: 00:00:01.55
x122> create table t1_cql column store compress for query low as select * from all_objects;

Table created.

Elapsed: 00:00:01.50
x122> create table t1_cqh column store compress for query high as select * from all_objects;

Table created.

Elapsed: 00:00:01.92
x122> create table t1_cal column store compress for archive low as select * from all_objects;

Table created.

Elapsed: 00:00:02.57
x122> create table t1_cah column store compress for archive high as select * from all_objects;

Table created.

Elapsed: 00:00:13.41
x122>
x122> select table_name,blocks from user_tables where table_name like 'T1%' order by 2;

TABLE_NAME                         BLOCKS
------------------------------ ----------
T1_CQH                                 60
T1_CAH                                 62
T1_CAL                                 62
T1_CQL                                128
T1_BAS                                382
T1_ADV                                425
T1                                   1244

7 rows selected.

Elapsed: 00:00:00.45
x122>
x122>
The results show that the deduplication algorithms come in at around a three-to-one compression ratio, and that HCC achieves around ten to one for Query Low and twenty to one for the others. The astonishing figure is that the Archive High algorithm is nearly eight times as slow as no compression. Most of the other algorithms are actually faster than no compression.
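The ratios follow directly from the block counts in the listing above, taking the uncompressed table T1 as the baseline:

```python
# Block counts as reported by USER_TABLES in the test above.
blocks = {"T1": 1244, "T1_BAS": 382, "T1_ADV": 425,
          "T1_CQL": 128, "T1_CQH": 60, "T1_CAL": 62, "T1_CAH": 62}

# Compression ratio of each table relative to the uncompressed T1.
ratios = {t: round(blocks["T1"] / b, 1)
          for t, b in blocks.items() if t != "T1"}
print(ratios)
# Row-store basic/advanced: roughly 3:1; Query Low just under 10:1;
# the remaining HCC levels roughly 20:1.
```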
The lesson from this? Compression may give you huge space savings, but test the algorithms carefully. In another article I'll look at the effects on subsequent SELECTs and DMLs.
Tests done on database release 12.2.0.0.3, Exadata.
--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com

Tuning with equivalent SQLs - a little challenge


I am fascinated by what I call "equal SQL": statements that are equivalent, in that they deliver the same result but may have hugely different performance characteristics. Here's a little case study.

Consider this example, working in the OE demonstration schema. There is a table of warehouses:

jw122pdb> select warehouse_id,warehouse_name from warehouses;

WAREHOUSE_ID WAREHOUSE_NAME
------------ -----------------------------------
           1 Southlake, Texas
           2 San Francisco
           3 New Jersey
           4 Seattle, Washington
           5 Toronto
           6 Sydney
           7 Mexico City
           8 Beijing
           9 Bombay
and an inventory of products at each warehouse:
jw122pdb> desc inventories
 Name                                                        Null?    Type
 ----------------------------------------------------------- -------- ----------------------------------------
 PRODUCT_ID                                                  NOT NULL NUMBER(6)
 WAREHOUSE_ID                                                NOT NULL NUMBER(3)
 QUANTITY_ON_HAND                                            NOT NULL NUMBER(8)

I want to find out which products are stocked in both Toronto and Bombay. These are five solutions:
select product_id from inventories where warehouse_id=5 
intersect 
select product_id from inventories where warehouse_id=9;

select product_id from inventories where warehouse_id=5 
and product_id in (select product_id from inventories where warehouse_id=9);

select product_id from inventories i where warehouse_id=5 
and exists (select product_id from inventories j where j.warehouse_id=9 and j.product_id=i.product_id);

select distinct product_id from (
(select product_id from inventories where warehouse_id=5) 
join
(select product_id from inventories where warehouse_id=9) 
using (product_id));

select product_id from 
(select product_id from inventories where warehouse_id=5
union all
select product_id from inventories where warehouse_id=9)
group by product_id having count(*) > 1;
To me, the first is the most intuitive: find the products in Toronto and the products in Bombay, and the answer is the intersection. The fifth solution is in effect the same thing done manually: add the two queries together, and keep only those products that occur twice (though there could be a bug in that solution - what is it, and how can you avoid it?). The second uses a subquery. The third uses a correlated subquery, which is often an inefficient, iterative, structure. The fourth is perhaps the most convoluted.
Which of the five will be the most efficient? Or will the cost based optimizer be able to re-write them into a common, efficient, form? Are there any other solutions?
These are my results:
jw122pdb> set autotrace traceonly explain
jw122pdb>
jw122pdb> select product_id from inventories where warehouse_id=5
  2  intersect
  3  select product_id from inventories where warehouse_id=9;

Execution Plan
----------------------------------------------------------
Plan hash value: 3944618082

------------------------------------------------------------------------------------
| Id  | Operation           | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |              |   114 |  1694 |     6  (34)| 00:00:01 |
|   1 |  INTERSECTION       |              |       |       |            |          |
|   2 |   SORT UNIQUE NOSORT|              |   114 |   798 |     3  (34)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN | INVENTORY_IX |   114 |   798 |     2   (0)| 00:00:01 |
|   4 |   SORT UNIQUE NOSORT|              |   128 |   896 |     3  (34)| 00:00:01 |
|*  5 |    INDEX RANGE SCAN | INVENTORY_IX |   128 |   896 |     2   (0)| 00:00:01 |
------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("WAREHOUSE_ID"=5)
   5 - access("WAREHOUSE_ID"=9)

jw122pdb> select product_id from inventories where warehouse_id=5
  2  and product_id in (select product_id from inventories where warehouse_id=9);

Execution Plan
----------------------------------------------------------
Plan hash value: 409421562

----------------------------------------------------------------------------------
| Id  | Operation         | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |              |    80 |  1120 |     2   (0)| 00:00:01 |
|   1 |  NESTED LOOPS     |              |    80 |  1120 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN| INVENTORY_IX |   114 |   798 |     2   (0)| 00:00:01 |
|*  3 |   INDEX RANGE SCAN| INVENTORY_IX |     1 |     7 |     0   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("WAREHOUSE_ID"=5)
   3 - access("WAREHOUSE_ID"=9 AND "PRODUCT_ID"="PRODUCT_ID")

jw122pdb> select product_id from inventories i where warehouse_id=5
  2  and exists (select product_id from inventories j where j.warehouse_id=9 and j.product_id=i.product_id);

Execution Plan
----------------------------------------------------------
Plan hash value: 1721271592

----------------------------------------------------------------------------------
| Id  | Operation         | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |              |    80 |  1120 |     2   (0)| 00:00:01 |
|   1 |  NESTED LOOPS SEMI|              |    80 |  1120 |     2   (0)| 00:00:01 |
|*  2 |   INDEX RANGE SCAN| INVENTORY_IX |   114 |   798 |     2   (0)| 00:00:01 |
|*  3 |   INDEX RANGE SCAN| INVENTORY_IX |   128 |   896 |     0   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("WAREHOUSE_ID"=5)
   3 - access("J"."WAREHOUSE_ID"=9 AND "J"."PRODUCT_ID"="I"."PRODUCT_ID")

jw122pdb> select distinct product_id from (
  2  (select product_id from inventories where warehouse_id=5)
  3  join
  4  (select product_id from inventories where warehouse_id=9)
  5  using (product_id));

Execution Plan
----------------------------------------------------------
Plan hash value: 49070421

-----------------------------------------------------------------------------------
| Id  | Operation          | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |              |    80 |  1120 |     3  (34)| 00:00:01 |
|   1 |  SORT UNIQUE NOSORT|              |    80 |  1120 |     3  (34)| 00:00:01 |
|   2 |   NESTED LOOPS SEMI|              |    80 |  1120 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN| INVENTORY_IX |   114 |   798 |     2   (0)| 00:00:01 |
|*  4 |    INDEX RANGE SCAN| INVENTORY_IX |   128 |   896 |     0   (0)| 00:00:01 |
-----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("WAREHOUSE_ID"=5)
   4 - access("WAREHOUSE_ID"=9 AND "PRODUCT_ID"="PRODUCT_ID")

jw122pdb> select product_id from
  2  (select product_id from inventories where warehouse_id=5
  3  union all
  4  select product_id from inventories where warehouse_id=9)
  5  group by product_id having count(*) > 1;

Execution Plan
----------------------------------------------------------
Plan hash value: 352515046

-------------------------------------------------------------------------------------
| Id  | Operation            | Name         | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |              |     5 |    20 |     5  (20)| 00:00:01 |
|*  1 |  FILTER              |              |       |       |            |          |
|   2 |   HASH GROUP BY      |              |     5 |    20 |     5  (20)| 00:00:01 |
|   3 |    VIEW              |              |   242 |   968 |     4   (0)| 00:00:01 |
|   4 |     UNION-ALL        |              |       |       |            |          |
|*  5 |      INDEX RANGE SCAN| INVENTORY_IX |   114 |   798 |     2   (0)| 00:00:01 |
|*  6 |      INDEX RANGE SCAN| INVENTORY_IX |   128 |   896 |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter(COUNT(*)>1)
   5 - access("WAREHOUSE_ID"=5)
   6 - access("WAREHOUSE_ID"=9)

jw122pdb>
This surprised me: I had expected the third solution to be cheapest (assuming that it could be rewritten to a semijoin, as it was) and the fourth solution to be the worst.
The takeaway from all this is that the way you write your code can have a huge effect on the way it runs, and you should always consider alternative formulations.
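For completeness, one more formulation worth benchmarking is a set operator. This is a sketch, not from the original test run, against the same INVENTORIES table; INTERSECT deduplicates implicitly, and the optimizer can often transform it into a join:

```sql
-- Alternative formulation: INTERSECT returns the product_ids common
-- to both warehouses and removes duplicates as part of the set operation.
select product_id from inventories where warehouse_id = 5
intersect
select product_id from inventories where warehouse_id = 9;
```

As with the other variants, the only way to know which wins on your data is to compare the plans and the actual run times.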
Hope you enjoyed that - you will have if you are as much of a SQL headcase as I am.

Demos run using DB 12.2.0.1 on Windows 10.

--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com


Upgrading to 12.2 ? Make sure you won't break JSON


Will upgrading from 12.1 to 12.2 break your applications? It may if your developers are using JSON.
In both release 12.1 and 12.2, there are these keywords:

orclx> select * from v$reserved_words where keyword like 'JSON%' order by 1;

KEYWORD                            LENGTH R R R R D     CON_ID
------------------------------ ---------- - - - - - ----------
JSON                                    4 N N N N N          0
JSONGET                                 7 N N N N N          0
JSONPARSE                               9 N N N N N          0
JSON_ARRAY                             10 N N N N N          0
JSON_ARRAYAGG                          13 N N N N N          0
JSON_EQUAL                             10 N N N N N          0
JSON_EXISTS                            11 N N N N N          0
JSON_EXISTS2                           12 N N N N N          0
JSON_OBJECT                            11 N N N N N          0
JSON_OBJECTAGG                         14 N N N N N          0
JSON_QUERY                             10 N N N N N          0
JSON_SERIALIZE                         14 N N N N N          0
JSON_TABLE                             10 N N N N N          0
JSON_TEXTCONTAINS                      17 N N N N N          0
JSON_TEXTCONTAINS2                     18 N N N N N          0
JSON_VALUE                             10 N N N N N          0

16 rows selected.

orclx>

The SQL functions are also the same in both releases:
orclx> select distinct name from v$sqlfn_metadata where name like 'JSON%' order by 1;

NAME
------------------------------
JSON
JSON_ARRAY
JSON_ARRAYAGG
JSON_EQUAL
JSON_EXISTS
JSON_OBJECT
JSON_OBJECTAGG
JSON_QUERY
JSON_SERIALIZE
JSON_TEXTCONTAINS2
JSON_VALUE

11 rows selected.

orclx>
The problem comes with PL/SQL. According to the 12.2 docs:
Quote:
SQL/JSON functions json_value, json_query, json_object, and json_array, as well as SQL/JSON condition json_exists, have been added to the PL/SQL language as built-in functions (json_exists is a Boolean function in PL/SQL).
This means that if, within your PL/SQL code, you created a function called (for example) JSON_VALUE, it will compile and run in releases up to 12.1, but in 12.2 it will throw errors. This is what our client had done: they had written PL/SQL equivalents of the SQL functions.
That was a nasty problem to detect, and the only solution is to re-write the functions to have different names and adjust all the code that uses them.
Lesson learnt - never use a keyword as an identifier.
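Before upgrading, a dictionary query can flag the danger in advance. This is a sketch, not from the original incident; the schema exclusion list is an assumption you should adapt to your own environment:

```sql
-- Find user-defined objects whose names collide with the JSON
-- keywords that become PL/SQL built-ins in 12.2.
select owner, object_name, object_type
from   dba_objects
where  object_name like 'JSON%'
and    owner not in ('SYS', 'SYSTEM', 'XDB', 'MDSYS');
```

Anything this returns in an application schema is a candidate for renaming before the upgrade, not after.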
--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com

12.2 upgrade - it can break all your outgoing https calls


Do you know about multiple domain certificates? If not you may have to learn quickly, because Oracle has changed the way they are handled in release 12.2. This is going to break a lot of applications.

A multiple domain certificate (aka "Unified Communications Certificate", or UCC) is an SSL certificate that secures multiple domain and host names. There are a lot of them about. Even www.oracle.com is secured by one. In release 12.1 and earlier, there was no problem: you would download the website's root certificate, load it into a wallet, and then you could use UTL_HTTP.REQUEST or UTL_SMTP.STARTTLS or APEX_WEB_SERVICE.MAKE_REQUEST to make the call. It would work if you were going to any of the domains that the certificate secures.

Not in 12.2.

Take this example, using eBay. In 12.1 I can do this:

select utl_http.request(url=>'https://www.ebay.com',wallet_path=>'file:\tmp\wallet') from dual;

or, because I'm based in England, this:
select utl_http.request(url=>'https://www.ebay.co.uk',wallet_path=>'file:\tmp\wallet') from dual;

but in 12.2, only www.ebay.com works. The UK name gives me this:
orclx> select utl_http.request(url=>'https://www.ebay.co.uk',wallet_path=>'file:\tmp\wallet') from dual;
select utl_http.request(url=>'https://www.ebay.co.uk',wallet_path=>'file:\tmp\wallet') from dual
       *
ERROR at line 1:
ORA-29273: HTTP request failed
ORA-06512: at "SYS.UTL_HTTP", line 1501
ORA-24263: Certificate of the remote server does not match the target address.
ORA-06512: at "SYS.UTL_HTTP", line 380
ORA-06512: at "SYS.UTL_HTTP", line 1441
ORA-06512: at line 1

There is a solution - specify a new parameter introduced in 12.2, like this:
select utl_http.request(url=>'https://www.ebay.co.uk',wallet_path=>'file:\tmp\wallet',https_host=>'www.ebay.com') from dual;

It is the same with APEX_WEB_SERVICE.MAKE_REQUEST, the latest release has a new parameter P_HTTPS_HOST.

There are some MOS articles that help, such as Doc ID 2275666.1 and Doc ID 2339601.1.

This may complicate your 12.2 upgrades. It is certainly complicating ours.

--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com
https://www.skillbuilders.com/oracle-dba-training/

19c Standard Edition permits 3 PDBs per CDB


A very nice licensing change in 19c: you can now have up to three PDBs in a Standard Edition Multitenant database. Apart from the obvious advantage of being able to do some database consolidation, it gives SE2 users the ability to do those wonderful PDB clone operations. Just one example: if my test database is currently a single-tenant CDB, I can make a clone of it before some test runs.

Before cloning, I like to create an AFTER CLONE trigger in the source PDB (in this example named atest) to make any changes that might be necessary in the clone, such as disabling the job system:

conn / as sysdba
alter session set container=atest;
create trigger stopjobs after clone on pluggable database
begin
execute immediate 'alter system set job_queue_processes=0';
end;
/

Then do the clone:
conn / as sysdba
create pluggable database atestcopy from atest;
alter pluggable database atestcopy open;

It is that simple because I always use Oracle Managed Files. The new clone will be registered with the listener as service name atestcopy, the trigger will have stopped jobs and then dropped itself. At any stage I can then, for example, use the clone to revert to the version of atest as it was at clone creation time simply by dropping atest and renaming atestcopy:
conn / as sysdba
drop pluggable database atest including datafiles;
alter session set container=atestcopy;
alter system set job_queue_processes=100;
alter database rename global_name to atest;

That is as good as a Database Flashback - which of course you don't have in SE.
--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com

Oracle now supported on VMware


For years (decades?) people have been running Oracle databases on VMs hosted by VMware, and everything has worked fine. But VMware has never been a certified platform, and if you raised a TAR for a VMware hosted environment Oracle Support could, at any time, demand that you reproduce the problem on a certified platform before they would help.
However, according to MOS Doc ID 249212.1, dated 2019-09-24, this has now changed:

Quote:
Oracle customers with an active support contract and running supported versions of Oracle products will receive assistance from Oracle when running those products on VMware virtualized environments.
It always annoyed me that Oracle would happily take your money for a support contract, but could then back out of it if they wanted. An important change, and long overdue.
--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com

How to move a table from one schema to another


Many times I've seen the question on forums "How can I move a table from one schema to another?" and the answer is always that you can't. You have to copy it. Or export/import it. Well, here's a way. It assumes that you are on release 12.x and have the partitioning option.

My schemas are jack and jill. Create the table and segment in jack:

orclz> create table jack.t1(c1) as select 1 from dual;

Table created.

orclz>
Convert it to a partitioned table, and see what you've got:
orclz> alter table jack.t1 modify partition by hash (c1) partitions 1;

Table altered.

orclz> select segment_name,segment_type,partition_name,header_file,header_block from dba_Segments where owner='JACK';

SEGMENT_NAME                   SEGMENT_TYPE       PARTITION_NAME                 HEADER_FILE HEADER_BLOCK
------------------------------ ------------------ ------------------------------ ----------- ------------
T1                             TABLE PARTITION    SYS_P5443                               12           66

orclz>
Create an empty table (by default, no segment) for jill:
orclz> create table jill.t1 as select * from jack.t1 where 1=2;

Table created.

orclz>
And now move the segment from jack to jill:
orclz> alter table jack.t1 exchange partition sys_p5443 with table jill.t1;

Table altered.

orclz>
and now (woo-hoo!) see what we have:
orclz> select segment_name,segment_type,partition_name,header_file,header_block from dba_Segments where owner='JILL';

SEGMENT_NAME                   SEGMENT_TYPE       PARTITION_NAME                 HEADER_FILE HEADER_BLOCK
------------------------------ ------------------ ------------------------------ ----------- ------------
T1                             TABLE                                                      12           66

orclz>
It isn't only Father Christmas who can do impossible things :)

--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com

Database 20c docs

Controlling distributed queries with hints


Recently I've been working on tuning some distributed queries. This is not always straightforward.

This is not a comprehensive discussion of the topic, rather a description of how one might approach the problem. The query I'm using for this demonstration is joining EMP, DEPT, and SALGRADE in the SCOTT schema. EMP and DEPT are at the remote site L1, SALGRADE is local:

SELECT ename,
       dname,
       grade
FROM   emp@l1
       join dept@l1 USING (deptno)
       join salgrade
         ON ( sal BETWEEN losal AND hisal );

This is the plan:
-------------------------------------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |    42 |  2646 |    11  (19)| 00:00:01 |        |      |
|   1 |  MERGE JOIN          |          |    42 |  2646 |    11  (19)| 00:00:01 |        |      |
|   2 |   SORT JOIN          |          |    14 |   742 |     7  (15)| 00:00:01 |        |      |
|*  3 |    HASH JOIN         |          |    14 |   742 |     6   (0)| 00:00:01 |        |      |
|   4 |     REMOTE           | DEPT     |     4 |    80 |     3   (0)| 00:00:01 |     L1 | R->S |
|   5 |     REMOTE           | EMP      |    14 |   462 |     3   (0)| 00:00:01 |     L1 | R->S |
|*  6 |   FILTER             |          |       |       |            |          |        |      |
|*  7 |    SORT JOIN         |          |     5 |    50 |     4  (25)| 00:00:01 |        |      |
|   8 |     TABLE ACCESS FULL| SALGRADE |     5 |    50 |     3   (0)| 00:00:01 |        |      |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")
   6 - filter("EMP"."SAL"<="HISAL")
   7 - access(INTERNAL_FUNCTION("EMP"."SAL")>=INTERNAL_FUNCTION("LOSAL"))
       filter(INTERNAL_FUNCTION("EMP"."SAL")>=INTERNAL_FUNCTION("LOSAL"))

Remote SQL Information (identified by operation id):
----------------------------------------------------

   4 - SELECT "DEPTNO","DNAME" FROM "DEPT""DEPT" (accessing 'L1' )

   5 - SELECT "ENAME","SAL","DEPTNO" FROM "EMP""EMP" (accessing 'L1' )

The IN-OUT R->S operations are remote-to-serial, and tell me that the DEPT table and the EMP table are being sent from L1 to be joined locally, and then the result is joined to SALGRADE. This could be a bit silly, and furthermore there is no chance of using an index driven nested loop or merge join, because the local database can't see any indexes that might exist at L1.
So I'll try the driving site hint:
SELECT /*+ driving_site(emp) */ ename,
                                dname,
                                grade
FROM   emp@l1
       join dept@l1 USING (deptno)
       join salgrade
         ON ( sal BETWEEN losal AND hisal ); 

and that gives me this:
-----------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT REMOTE        |          |    42 |  1512 |    11  (28)| 00:00:01 |        |      |
|   1 |  MERGE JOIN                    |          |    42 |  1512 |    11  (28)| 00:00:01 |        |      |
|   2 |   SORT JOIN                    |          |    14 |   364 |     7  (29)| 00:00:01 |        |      |
|   3 |    MERGE JOIN                  |          |    14 |   364 |     6  (17)| 00:00:01 |        |      |
|   4 |     TABLE ACCESS BY INDEX ROWID| DEPT     |     4 |    52 |     2   (0)| 00:00:01 |  ORCLZ |      |
|   5 |      INDEX FULL SCAN           | PK_DEPT  |     4 |       |     1   (0)| 00:00:01 |  ORCLZ |      |
|*  6 |     SORT JOIN                  |          |    14 |   182 |     4  (25)| 00:00:01 |        |      |
|   7 |      TABLE ACCESS FULL         | EMP      |    14 |   182 |     3   (0)| 00:00:01 |  ORCLZ |      |
|*  8 |   FILTER                       |          |       |       |            |          |        |      |
|*  9 |    SORT JOIN                   |          |     5 |    50 |     4  (25)| 00:00:01 |        |      |
|  10 |     REMOTE                     | SALGRADE |     5 |    50 |     3   (0)| 00:00:01 |      ! | R->S |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("A3"."DEPTNO"="A2"."DEPTNO")
       filter("A3"."DEPTNO"="A2"."DEPTNO")
   8 - filter("A3"."SAL"<="A1"."HISAL")
   9 - access(INTERNAL_FUNCTION("A3"."SAL")>=INTERNAL_FUNCTION("A1"."LOSAL"))
       filter(INTERNAL_FUNCTION("A3"."SAL")>=INTERNAL_FUNCTION("A1"."LOSAL"))

Remote SQL Information (identified by operation id):
----------------------------------------------------

  10 - SELECT "GRADE","LOSAL","HISAL" FROM "SALGRADE""A1" (accessing '!' )


Note
-----
   - fully remote statement

As a "fully remote" statement, the plan is showing the point of view of L1. EMP and DEPT are joined locally (with an indexed merge join, which was not possible before) and SALGRADE is sent across the database link. That too seems a bit silly. Wouldn't it be better to join EMP and DEPT remotely, and send the result across the link and join to SALGRADE locally? Well, the driving_site hint doesn't let you do that. But I can get that effect by using an in-line view:
SELECT ename,
       dname,
       grade
FROM   (SELECT /*+ no_merge */ ename,
                               sal,
                               dname
        FROM   emp@l1
               join dept@l1 USING (deptno))
       join salgrade
         ON ( sal BETWEEN losal AND hisal ); 
------------------------------------------------------------------------------------------------
| Id  | Operation           | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |          |    42 |  1638 |    11  (19)| 00:00:01 |        |      |
|   1 |  MERGE JOIN         |          |    42 |  1638 |    11  (19)| 00:00:01 |        |      |
|   2 |   SORT JOIN         |          |     5 |    50 |     4  (25)| 00:00:01 |        |      |
|   3 |    TABLE ACCESS FULL| SALGRADE |     5 |    50 |     3   (0)| 00:00:01 |        |      |
|*  4 |   FILTER            |          |       |       |            |          |        |      |
|*  5 |    SORT JOIN        |          |    14 |   406 |     7  (15)| 00:00:01 |        |      |
|   6 |     VIEW            |          |    14 |   406 |     6   (0)| 00:00:01 |        |      |
|   7 |      REMOTE         |          |       |       |            |          |     L1 | R->S |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter("SAL"<="HISAL")
   5 - access("SAL">="LOSAL")
       filter("SAL">="LOSAL")

Remote SQL Information (identified by operation id):
----------------------------------------------------

   7 - EXPLAIN PLAN SET STATEMENT_ID='PLUS1550001' INTO PLAN_TABLE@! FOR SELECT /*+
       NO_MERGE */ "A2"."ENAME","A2"."SAL","A1"."DNAME" FROM "EMP""A2","DEPT""A1" WHERE
       "A2"."DEPTNO"="A1"."DEPTNO" (accessing 'L1' )


Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------

   7 -  SEL$64EAE176
         U -  no_merge

Now I have what I want: EMP and DEPT are joined remotely, with the result being joined to SALGRADE locally. This should give the optimizer the capability of using the best access path and join methods, and minimize the network traffic (though it does not give much flexibility for join order).
Note the use of the no_merge hint (which the hint report says was unused): without it, everything happens locally to give the same plan that I started with.

The takeaway from this is that you may be able to control which parts of a query run at each site, but the driving_site hint may be too crude a tool to do this optimally. And, as is so often the case, a hint may have unexpected effects.
--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com


Installing database 19c on Oracle Linux 8


Database release 19.7 (i.e. 19c with the April 2020 RU) is at last certified for OL8, but there may be some hacking needed to get it installed.

This certification is long overdue: our security admin has been pushing for the 5.x kernel for some time, and OL7 still only supports kernel 4.x. I'm starting to move some production systems over now using the July RUR, which takes the release to 19.7.1.

Begin by installing the Oracle Validated rpm from the ol8_UEKR6 repository:

yum install oracle-database-preinstall-19c

That is supposed to sort out everything, but there are still two hassles.

First, the installer will refuse to run because it doesn't recognize the operating system. There is no switch on runInstaller that I can find to avoid this, but there is an easy workaround:

export CV_ASSUME_DISTID=OL7

then it will proceed.

Second, it will throw a warning about a missing rpm, compat-libcap1-1.10, which you can of course ignore, but it is nice to have an install run cleanly. The problem seems to be that this package is missing from the OL8 repos. No problem - you can grab it from a Linux 7 repo:

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/compat-libcap1-1.10-7.el7.x86_64.rpm
yum localinstall compat-libcap1-1.10-7.el7.x86_64.rpm

and now the install goes through with no warnings, and you can proceed to apply the latest RUR (or RU, if you are feeling brave).

Hope this helps someone.

--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com

database block size - does it really matter?


What block size should you use? For what purpose? How about tablespaces in different block sizes? Any opinions?

When support for multiple block sizes was introduced, I was working for Oracle Uni and did have some (very restricted) access to Product Development. It seemed to me that an obvious use case for this was tuning. I was thinking of things like putting LOB segments and IOT overflow segments in large blocks while keeping the base table in small blocks. Product Development was most emphatic: "Don't try to do that". They wouldn't give any reason (they never do) but there was a hint that the buffer cache management algorithms for non-default block size pools are not optimized for normal work; I have no idea if that is, or was, true. They pretty much said that the only reason for multiple block size support was to allow tablespace transport between DBs with different block sizes. Of course there was nothing said that can be quoted, and I have no idea if the situation has changed since.

So if one accepts that all tablespaces should use the db_block_size, what size should this be? I have never seen any justification for the advice about "small blocks for OLTP, large blocks for DW" that has been in the docs for decades. It sounds right instinctively, but that is all. Virtually all the DBs I see use 8KB or 16KB, and I have no opinion on whether one performs better than the other for any purpose. Some people produce algorithms based on block size and db_file_multiblock_read_count, trying to relate the IO size to the RAID stripe or the ASM Allocation Unit, but again I have never seen any proof of this having any effect.

For a long time, I thought that 16KB blocks were more convenient than 8KB because it meant that I could have datafiles up to 64GB. But now that I always use bigfile tablespaces, that reason no longer holds.

With regard to the buffer cache, I now follow the principle that it is best to have one big default buffer pool: do not try to segment it with different block sizes or keep and recycle pools. The only interference a DBA should consider doing is setting the db_big_table_cache_percent_target, which I think can really help when you have a mixed workload. Otherwise, let Uncle Oracle get on with it: he can manage the cache better than me.
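For what it's worth, that one adjustment is simple to try. A sketch, with the 20% figure purely an assumption to be tested against your own workload:

```sql
-- Reserve a slice of the buffer cache for full scans of big tables;
-- 0 (the default) disables the feature in a non-RAC database.
alter system set db_big_table_cache_percent_target = 20 scope = both;

-- Monitor how the big table cache is being used.
select * from v$bt_scan_cache;
```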

So my conclusion is that in the twenty-first century, all DBs should use the 8KB default block size, and the cache should be one default pool. However, I would love to see some science behind this, or behind any other opinions.

ORDS 21.x make sure you have the latest


ORDS version 21 was released in May, I had it tested and rolled it out in June. But once live, a problem popped up: numerous executions of this statement,
SELECT COUNT(1) FROM SYS.ALL_SYNONYMS WHERE OWNER = 'PUBLIC' AND SYNONYM_NAME = 'APEX_RELEASE' AND TABLE_NAME = 'APEX_RELEASE';
which appears to be run whenever you initialize a connection through the ORDS connection pools. It is not a nice query. This is a typical exec plan:

atest> SELECT COUNT(1) FROM SYS.ALL_SYNONYMS WHERE OWNER = 'PUBLIC' AND SYNONYM_NAME = 'APEX_RELEASE' AND TABLE_NAME = 'APEX_RELEASE';
       COUNT(1)
---------------
              1
Execution Plan
----------------------------------------------------------
Plan hash value: 4162468211
-------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                        | Name                       | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                                 |                            |     1 |   198 |       |  4488   (7)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE                                  |                            |     1 |   198 |       |            |          |       |       |
|   2 |   VIEW                                           | ALL_SYNONYMS               | 20002 |  3867K|       |  4488   (7)| 00:00:01 |       |       |
|   3 |    SORT UNIQUE                                   |                            | 20002 |    13M|    14M|  4488   (7)| 00:00:01 |       |       |
|   4 |     UNION-ALL                                    |                            |       |       |       |            |          |       |       |
|   5 |      PARTITION LIST ALL                          |                            |     1 |   369 |       |    23 (100)| 00:00:01 |     1 |     2 |
|*  6 |       EXTENDED DATA LINK FULL                    | INT$DBA_SYNONYMS           |     1 |   369 |       |    23 (100)| 00:00:01 |       |       |
|*  7 |      VIEW                                        | _ALL_SYNONYMS_TREE         | 20001 |  6699K|       |  1545 (100)| 00:00:01 |       |       |
|*  8 |       CONNECT BY WITHOUT FILTERING               |                            |       |       |       |            |          |       |       |
|*  9 |        HASH JOIN RIGHT SEMI                      |                            |     1 |   475 |       |   112 (100)| 00:00:01 |       |       |
|  10 |         VIEW                                     | VW_SQ_1                    |  1190 |   153K|       |    87 (100)| 00:00:01 |       |       |
|* 11 |          FILTER                                  |                            |       |       |       |            |          |       |       |
|  12 |           PARTITION LIST ALL                     |                            | 20000 |  5664K|       |    87 (100)| 00:00:01 |     1 |     2 |
|  13 |            EXTENDED DATA LINK FULL               | _INT$_ALL_SYNONYMS_FOR_AO  | 20000 |  5664K|       |    87 (100)| 00:00:01 |       |       |
|* 14 |           FILTER                                 |                            |       |       |       |            |          |       |       |
|  15 |            NESTED LOOPS                          |                            |     1 |   107 |       |     6   (0)| 00:00:01 |       |       |
|  16 |             NESTED LOOPS                         |                            |     1 |    95 |       |     5   (0)| 00:00:01 |       |       |
|  17 |              NESTED LOOPS                        |                            |     1 |    71 |       |     4   (0)| 00:00:01 |       |       |
|  18 |               TABLE ACCESS BY INDEX ROWID        | USER$                      |     1 |    18 |       |     1   (0)| 00:00:01 |       |       |
|* 19 |                INDEX UNIQUE SCAN                 | I_USER1                    |     1 |       |       |     0   (0)| 00:00:01 |       |       |
|  20 |               TABLE ACCESS BY INDEX ROWID BATCHED| OBJ$                       |     1 |    53 |       |     3   (0)| 00:00:01 |       |       |
|* 21 |                INDEX RANGE SCAN                  | I_OBJ5                     |     1 |       |       |     2   (0)| 00:00:01 |       |       |
|* 22 |              INDEX RANGE SCAN                    | I_USER2                    |     1 |    24 |       |     1   (0)| 00:00:01 |       |       |
|* 23 |             INDEX RANGE SCAN                     | I_OBJAUTH1                 |     1 |    12 |       |     1   (0)| 00:00:01 |       |       |
|* 24 |            FIXED TABLE FULL                      | X$KZSRO                    |     1 |     6 |       |     0   (0)| 00:00:01 |       |       |
|* 25 |            TABLE ACCESS BY INDEX ROWID BATCHED   | USER_EDITIONING$           |     1 |     6 |       |     2   (0)| 00:00:01 |       |       |
|* 26 |             INDEX RANGE SCAN                     | I_USER_EDITIONING          |     2 |       |       |     1   (0)| 00:00:01 |       |       |
|* 27 |            TABLE ACCESS BY INDEX ROWID BATCHED   | USER_EDITIONING$           |     1 |     6 |       |     2   (0)| 00:00:01 |       |       |
|* 28 |             INDEX RANGE SCAN                     | I_USER_EDITIONING          |     2 |       |       |     1   (0)| 00:00:01 |       |       |
|  29 |            NESTED LOOPS SEMI                     |                            |     1 |    29 |       |     2   (0)| 00:00:01 |       |       |
|* 30 |             INDEX SKIP SCAN                      | I_USER2                    |     1 |    20 |       |     1   (0)| 00:00:01 |       |       |
|* 31 |             INDEX RANGE SCAN                     | I_OBJ4                     |     1 |     9 |       |     1   (0)| 00:00:01 |       |       |
|  32 |         PARTITION LIST ALL                       |                            | 20000 |  6699K|       |    21 (100)| 00:00:01 |     1 |     2 |
|  33 |          EXTENDED DATA LINK FULL                 | _INT$_ALL_SYNONYMS_FOR_SYN | 20000 |  6699K|       |    21 (100)| 00:00:01 |       |       |
|  34 |        PARTITION LIST ALL                        |                            | 20000 |  6699K|       |    21 (100)| 00:00:01 |     1 |     2 |
|  35 |         EXTENDED DATA LINK FULL                  | _INT$_ALL_SYNONYMS_FOR_SYN | 20000 |  6699K|       |    21 (100)| 00:00:01 |       |       |
-------------------------------------------------------------------------------------------------------------------------------------------------------
You can see the problem: it is a UNION ALL query. The first branch (operations 5 and 6) is simple and low cost. It is what you get if you query dba_synonyms instead of all_synonyms:
atest> SELECT COUNT(1) FROM SYS.dba_SYNONYMS WHERE OWNER = 'PUBLIC' AND SYNONYM_NAME = 'APEX_RELEASE' AND TABLE_NAME = 'APEX_RELEASE';
       COUNT(1)
---------------
              1
Execution Plan
----------------------------------------------------------
Plan hash value: 1145150501

--------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name             | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                  |     1 |   198 |    23 (100)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE           |                  |     1 |   198 |            |          |       |       |
|   2 |   PARTITION LIST ALL      |                  |     1 |   198 |    23 (100)| 00:00:01 |     1 |     2 |
|*  3 |    EXTENDED DATA LINK FULL| INT$DBA_SYNONYMS |     1 |   198 |    23 (100)| 00:00:01 |       |       |
--------------------------------------------------------------------------------------------------------------
but the second branch (operations 7 through 35) is ghastly. It is a query against the SYS._ALL_SYNONYMS_TREE view. That view is a hierarchical query, which means it must be materialized and cannot be merged. It has to run to completion, and against a database with zillions of synonyms it is slow - possibly several seconds. Why is it there? To account for the possibility that you might have synonyms pointing to synonyms, which ORDS really doesn't need to know about. There is no reason for ORDS to be doing this.
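If your own code only needs to resolve public synonyms, and the querying schema has been granted access to the DBA views, a workaround sketch is to query DBA_SYNONYMS rather than ALL_SYNONYMS, which drives only the cheap plan shown above (the synonym name here is just the one from the earlier example - substitute your own):

```sql
select table_owner, table_name
from   dba_synonyms
where  owner = 'PUBLIC'
and    synonym_name = 'APEX_RELEASE';
```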
We were fortunate: in the databases where I noticed the issue, usage was light and the query usually ran in under a second, but it was still hammering the system.
The solution, if you haven't done so already, is to upgrade your ORDS pronto. Oracle rushed out a quick fix last month as ORDS 21.1.3, and this weekend released ORDS 21.2.0, which should be the real solution. It is looking good so far. Hope this helps someone.
--
John Watson
Oracle Certified Master DBA

12.2 upgrade - it can break all your outgoing https calls


Do you know about multiple domain certificates? If not you may have to learn quickly, because Oracle has changed the way they are handled in release 12.2. This is going to break a lot of applications.

A multiple domain certificate (aka "Unified Communications Certificate", a UCC) is an SSL certificate that secures multiple domain and host names. There are a lot of them about. Even www.oracle.com is secured by one. In release 12.1 and earlier, there was no problem. You would download the website's root certificate, load it into a wallet, and then you could use UTL_HTTP.REQUEST or UTL_SMTP.STARTTLS or APEX_WEB_SERVICE.MAKE_REQUEST to make the call. It would work if you were going to any of the domains that the certificate secures.
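As a variation on that workflow, the wallet can also be attached once per session with UTL_HTTP.SET_WALLET instead of being passed on every call. A sketch, assuming an auto-login wallet already created in \tmp\wallet (so no password is needed):

```sql
begin
  -- attach the wallet for the rest of the session;
  -- null password because this is an auto-login wallet
  utl_http.set_wallet('file:\tmp\wallet', null);
end;
/
select utl_http.request('https://www.ebay.com') from dual;
```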

Not in 12.2.

Take this example, using eBay. In 12.1 I can do this:

select utl_http.request(url=>'https://www.ebay.com',wallet_path=>'file:\tmp\wallet') from dual;

or, because I'm based in England, this:
select utl_http.request(url=>'https://www.ebay.co.uk',wallet_path=>'file:\tmp\wallet') from dual;

but in 12.2, only www.ebay.com works. The UK name gives me this:
orclx> select utl_http.request(url=>'https://www.ebay.co.uk',wallet_path=>'file:\tmp\wallet') from dual;
select utl_http.request(url=>'https://www.ebay.co.uk',wallet_path=>'file:\tmp\wallet') from dual
       *
ERROR at line 1:
ORA-29273: HTTP request failed
ORA-06512: at "SYS.UTL_HTTP", line 1501
ORA-24263: Certificate of the remote server does not match the target address.
ORA-06512: at "SYS.UTL_HTTP", line 380
ORA-06512: at "SYS.UTL_HTTP", line 1441
ORA-06512: at line 1

There is a solution: specify the new HTTPS_HOST parameter introduced in 12.2, naming one of the domains the certificate secures, like this:
select utl_http.request(url=>'https://www.ebay.co.uk',wallet_path=>'file:\tmp\wallet',https_host=>'www.ebay.com') from dual;

It is the same with APEX_WEB_SERVICE.MAKE_REQUEST: the latest release has a new parameter, P_HTTPS_HOST.
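A sketch of the APEX side, assuming an APEX release in which MAKE_REST_REQUEST also accepts the same P_HTTPS_HOST parameter (only MAKE_REQUEST is confirmed above, so check your version's documentation):

```sql
select apex_web_service.make_rest_request(
         p_url         => 'https://www.ebay.co.uk',
         p_http_method => 'GET',
         p_wallet_path => 'file:\tmp\wallet',
         p_https_host  => 'www.ebay.com')
from   dual;
```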

There are some MOS articles that help, such as Doc ID 2275666.1 and Doc ID 2339601.1.

This may complicate your 12.2 upgrades. It is certainly complicating ours.

--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com
https://www.skillbuilders.com/oracle-dba-training/
