Channel: John Watson's blog

19c Standard Edition permits 3 PDBs per CDB


A very nice licensing change in 19c: you can now have up to three PDBs in a Standard Edition Multitenant database. Apart from the obvious advantage of being able to do some database consolidation, it gives SE2 users the ability to do those wonderful PDB clone operations. Just one example: if my test database is currently a single-tenant CDB, I can make a clone of it before some test runs. Like this:

Before cloning, I like to create an after clone trigger in the source PDB (named atest in this example) to make any changes that might be necessary in the clone, such as disabling the job system:

conn / as sysdba
alter session set container=atest;
create trigger stopjobs after clone on pluggable database
begin
execute immediate 'alter system set job_queue_processes=0';
end;
/

Then do the clone:
conn / as sysdba
create pluggable database atestcopy from atest;
alter pluggable database atestcopy open;

It is that simple because I always use Oracle Managed Files. The new clone will be registered with the listener under the service name atestcopy, and the trigger will have stopped the jobs and then dropped itself (after clone triggers fire once in the new PDB and are then removed automatically). At any stage I can then, for example, use the clone to revert to the version of atest as it was at clone creation time, simply by dropping atest and renaming atestcopy:
conn / as sysdba
drop pluggable database atest including datafiles;
alter session set container=atestcopy;
alter system set job_queue_processes=100;
alter pluggable database rename global_name to atest;
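
One caveat, hedged: on the releases I have looked at, the documentation says a PDB must be open in restricted mode before it can be renamed, so the fuller sequence would be something like this (a sketch, not a tested transcript):

conn / as sysdba
drop pluggable database atest including datafiles;
alter pluggable database atestcopy close;
alter pluggable database atestcopy open restricted;
alter session set container=atestcopy;
alter system set job_queue_processes=100;
alter pluggable database rename global_name to atest;
alter pluggable database atest close;
alter pluggable database atest open;

After the final open, the listener should register the service under the new name atest.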

That is as good as a Database Flashback - which of course you don't have in SE.
--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com


Oracle now supported on VMware


For years (decades?) people have been running Oracle databases on VMs hosted by VMware, and everything has worked fine. But VMware has never been a certified platform, and if you raised a TAR for a VMware-hosted environment, Oracle Support could, at any time, demand that you reproduce the problem on a certified platform before they would help.
However, according to MOS Doc ID 249212.1, dated 2019-09-24, this has changed:

Quote:
Oracle customers with an active support contract and running supported versions of Oracle products will receive assistance from Oracle when running those products on VMware virtualized environments.
It always annoyed me that Oracle would happily take your money for a support contract, but could then back out of it if they wanted. An important change, and long overdue.
--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com

How to move a table from one schema to another


Many times I've seen the question on forums "How can I move a table from one schema to another?" and the answer is always that you can't: you have to copy it, or export/import it. Well, here's a way. It assumes that you are on release 12.2 or later (for the ALTER TABLE ... MODIFY conversion to a partitioned table) and have the partitioning option.

My schemas are jack and jill. Create the table and segment in jack:

orclz> create table jack.t1(c1) as select 1 from dual;

Table created.

orclz>
Convert it to a partitioned table, and see what you've got:
orclz> alter table jack.t1 modify partition by hash (c1) partitions 1;

Table altered.

orclz> select segment_name,segment_type,partition_name,header_file,header_block from dba_segments where owner='JACK';

SEGMENT_NAME                   SEGMENT_TYPE       PARTITION_NAME                 HEADER_FILE HEADER_BLOCK
------------------------------ ------------------ ------------------------------ ----------- ------------
T1                             TABLE PARTITION    SYS_P5443                               12           66

orclz>
Create an empty table (by default, no segment) for jill:
orclz> create table jill.t1 as select * from jack.t1 where 1=2;

Table created.

orclz>
And now move the segment from jack to jill:
orclz> alter table jack.t1 exchange partition sys_p5443 with table jill.t1;

Table altered.

orclz>
and now (woo-hoo!) see what we have:
orclz> select segment_name,segment_type,partition_name,header_file,header_block from dba_segments where owner='JILL';

SEGMENT_NAME                   SEGMENT_TYPE       PARTITION_NAME                 HEADER_FILE HEADER_BLOCK
------------------------------ ------------------ ------------------------------ ----------- ------------
T1                             TABLE                                                      12           66

orclz>
It isn't only Father Christmas who can do impossible things :)
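
A couple of hedged footnotes to the trick: the exchange moves only the segment, so objects that belong to the table rather than the segment (grants, triggers, and, unless you exchange including indexes, any index content) stay with jack.t1, which remains behind as an empty partitioned table. A quick sanity check and tidy-up might look like this (a sketch):

orclz> select count(*) from jill.t1;   -- the rows are now here
orclz> select count(*) from jack.t1;   -- and jack's table is empty
orclz> drop table jack.t1 purge;       -- optional tidy-up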

--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com

Database 20c docs

Controlling distributed queries with hints


Recently I've been working on tuning some distributed queries. This is not always straightforward.

This is not a comprehensive discussion of the topic, rather a description of how one might approach the problem. The query I'm using for this demonstration is joining EMP, DEPT, and SALGRADE in the SCOTT schema. EMP and DEPT are at the remote site L1, SALGRADE is local:

SELECT ename,
       dname,
       grade
FROM   emp@l1
       join dept@l1 USING (deptno)
       join salgrade
         ON ( sal BETWEEN losal AND hisal );

This is the plan:
-------------------------------------------------------------------------------------------------
| Id  | Operation            | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |          |    42 |  2646 |    11  (19)| 00:00:01 |        |      |
|   1 |  MERGE JOIN          |          |    42 |  2646 |    11  (19)| 00:00:01 |        |      |
|   2 |   SORT JOIN          |          |    14 |   742 |     7  (15)| 00:00:01 |        |      |
|*  3 |    HASH JOIN         |          |    14 |   742 |     6   (0)| 00:00:01 |        |      |
|   4 |     REMOTE           | DEPT     |     4 |    80 |     3   (0)| 00:00:01 |     L1 | R->S |
|   5 |     REMOTE           | EMP      |    14 |   462 |     3   (0)| 00:00:01 |     L1 | R->S |
|*  6 |   FILTER             |          |       |       |            |          |        |      |
|*  7 |    SORT JOIN         |          |     5 |    50 |     4  (25)| 00:00:01 |        |      |
|   8 |     TABLE ACCESS FULL| SALGRADE |     5 |    50 |     3   (0)| 00:00:01 |        |      |
-------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("EMP"."DEPTNO"="DEPT"."DEPTNO")
   6 - filter("EMP"."SAL"<="HISAL")
   7 - access(INTERNAL_FUNCTION("EMP"."SAL")>=INTERNAL_FUNCTION("LOSAL"))
       filter(INTERNAL_FUNCTION("EMP"."SAL")>=INTERNAL_FUNCTION("LOSAL"))

Remote SQL Information (identified by operation id):
----------------------------------------------------

   4 - SELECT "DEPTNO","DNAME" FROM "DEPT" "DEPT" (accessing 'L1' )

   5 - SELECT "ENAME","SAL","DEPTNO" FROM "EMP" "EMP" (accessing 'L1' )

The IN-OUT R->S operations are remote-to-serial, and tell me that the DEPT table and the EMP table are being sent from L1 to be joined locally, and then the result is joined to SALGRADE. This could be a bit silly, and furthermore there is no chance of using an index driven nested loop or merge join, because the local database can't see any indexes that might exist at L1.
So I'll try the driving site hint:
SELECT /*+ driving_site(emp) */ ename,
                                dname,
                                grade
FROM   emp@l1
       join dept@l1 USING (deptno)
       join salgrade
         ON ( sal BETWEEN losal AND hisal ); 

and that gives me this:
-----------------------------------------------------------------------------------------------------------
| Id  | Operation                      | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-----------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT REMOTE        |          |    42 |  1512 |    11  (28)| 00:00:01 |        |      |
|   1 |  MERGE JOIN                    |          |    42 |  1512 |    11  (28)| 00:00:01 |        |      |
|   2 |   SORT JOIN                    |          |    14 |   364 |     7  (29)| 00:00:01 |        |      |
|   3 |    MERGE JOIN                  |          |    14 |   364 |     6  (17)| 00:00:01 |        |      |
|   4 |     TABLE ACCESS BY INDEX ROWID| DEPT     |     4 |    52 |     2   (0)| 00:00:01 |  ORCLZ |      |
|   5 |      INDEX FULL SCAN           | PK_DEPT  |     4 |       |     1   (0)| 00:00:01 |  ORCLZ |      |
|*  6 |     SORT JOIN                  |          |    14 |   182 |     4  (25)| 00:00:01 |        |      |
|   7 |      TABLE ACCESS FULL         | EMP      |    14 |   182 |     3   (0)| 00:00:01 |  ORCLZ |      |
|*  8 |   FILTER                       |          |       |       |            |          |        |      |
|*  9 |    SORT JOIN                   |          |     5 |    50 |     4  (25)| 00:00:01 |        |      |
|  10 |     REMOTE                     | SALGRADE |     5 |    50 |     3   (0)| 00:00:01 |      ! | R->S |
-----------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   6 - access("A3"."DEPTNO"="A2"."DEPTNO")
       filter("A3"."DEPTNO"="A2"."DEPTNO")
   8 - filter("A3"."SAL"<="A1"."HISAL")
   9 - access(INTERNAL_FUNCTION("A3"."SAL")>=INTERNAL_FUNCTION("A1"."LOSAL"))
       filter(INTERNAL_FUNCTION("A3"."SAL")>=INTERNAL_FUNCTION("A1"."LOSAL"))

Remote SQL Information (identified by operation id):
----------------------------------------------------

  10 - SELECT "GRADE","LOSAL","HISAL" FROM "SALGRADE" "A1" (accessing '!' )


Note
-----
   - fully remote statement

As a "fully remote" statement, the plan is showing the point of view of L1. EMP and DEPT are joined locally (with an indexed merge join, which was not possible before) and SALGRADE is sent across the database link. That too seems a bit silly. Wouldn't it be better to join EMP and DEPT remotely, and send the result across the link and join to SALGRADE locally? Well, the driving_site hint doesn't let you do that. But I can get that effect by using an in-line view:
SELECT ename,
       dname,
       grade
FROM   (SELECT /*+ no_merge */ ename,
                               sal,
                               dname
        FROM   emp@l1
               join dept@l1 USING (deptno))
       join salgrade
         ON ( sal BETWEEN losal AND hisal ); 
------------------------------------------------------------------------------------------------
| Id  | Operation           | Name     | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |          |    42 |  1638 |    11  (19)| 00:00:01 |        |      |
|   1 |  MERGE JOIN         |          |    42 |  1638 |    11  (19)| 00:00:01 |        |      |
|   2 |   SORT JOIN         |          |     5 |    50 |     4  (25)| 00:00:01 |        |      |
|   3 |    TABLE ACCESS FULL| SALGRADE |     5 |    50 |     3   (0)| 00:00:01 |        |      |
|*  4 |   FILTER            |          |       |       |            |          |        |      |
|*  5 |    SORT JOIN        |          |    14 |   406 |     7  (15)| 00:00:01 |        |      |
|   6 |     VIEW            |          |    14 |   406 |     6   (0)| 00:00:01 |        |      |
|   7 |      REMOTE         |          |       |       |            |          |     L1 | R->S |
------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   4 - filter("SAL"<="HISAL")
   5 - access("SAL">="LOSAL")
       filter("SAL">="LOSAL")

Remote SQL Information (identified by operation id):
----------------------------------------------------

   7 - EXPLAIN PLAN SET STATEMENT_ID='PLUS1550001' INTO PLAN_TABLE@! FOR SELECT /*+
       NO_MERGE */ "A2"."ENAME","A2"."SAL","A1"."DNAME" FROM "EMP" "A2","DEPT" "A1" WHERE
       "A2"."DEPTNO"="A1"."DEPTNO" (accessing 'L1' )


Hint Report (identified by operation id / Query Block Name / Object Alias):
Total hints for statement: 1 (U - Unused (1))
---------------------------------------------------------------------------

   7 -  SEL$64EAE176
         U -  no_merge

Now I have what I want: EMP and DEPT are joined remotely, with the result being joined to SALGRADE locally. This should let the optimizer use the best access paths and join methods at each site and minimize the network traffic (though it does not leave much flexibility in join order).
Note the use of the no_merge hint (which the hint report says was unused): without it, everything happens locally, giving the same plan that I started with.

The takeaway from this is that you may be able to control which parts of a query run at each site, but that the driving_site hint may be too crude a tool to do this optimally. And, as is so often the case, a hint may have unexpected effects.
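
For completeness, the plans above can be reproduced with a plain EXPLAIN PLAN; the Inst and IN-OUT columns appear automatically when the statement is distributed (a sketch):

explain plan for
SELECT ename,
       dname,
       grade
FROM   emp@l1
       join dept@l1 USING (deptno)
       join salgrade
         ON ( sal BETWEEN losal AND hisal );

select * from table(dbms_xplan.display);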
--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com

Installing database 19c on Oracle Linux 8


Database release 19.7 (i.e., 19c with the April 2020 Release Update) is at last certified for OL8, but there may be some hacking needed to get it installed.

This certification is long overdue: our security admin has been pushing for the 5.x kernel for some time, and OL7 still only supports kernel 4.x. I'm starting to move some production systems over now using the July RUR, which takes the release to 19.7.1.

Begin by installing the Oracle Validated rpm from the ol8_UEKR6 repository:

yum install oracle-database-preinstall-19c

That is supposed to sort out everything, but there are still two hassles.

First, the installer will refuse to run because it doesn't recognize the operating system. There is no switch on runInstaller that I can find to avoid this, but there is an easy workaround:

export CV_ASSUME_DISTID=OL7

then it will proceed.

Second, it will throw a warning about a missing RPM, compat-libcap1-1.10. You can of course ignore this, but it is nice to have an install run cleanly. The problem seems to be that this package is missing from the OL8 repos. No problem: you can grab it from a Linux 7 repo:

wget http://mirror.centos.org/centos/7/os/x86_64/Packages/compat-libcap1-1.10-7.el7.x86_64.rpm
yum localinstall compat-libcap1-1.10-7.el7.x86_64.rpm

and now the install goes through with no warnings, and you can proceed to apply the latest RUR (or RU, if you are feeling brave).

Hope this helps someone.

--
John Watson
Oracle Certified Master DBA
http://skillbuilders.com

Database block size - does it really matter?


What block size should you use? For what purpose? How about tablespaces in different block sizes? Any opinions?

When support for multiple block sizes was introduced, I was working for Oracle University and had some (very restricted) access to Product Development. It seemed to me that an obvious use case for this was tuning: I was thinking of things like putting LOB segments and IOT overflow segments in large blocks while keeping the base table in small blocks. Product Development was most emphatic: "Don't try to do that." They wouldn't give any reason (they never do), but there was a hint that the buffer cache management algorithms for non-default block size pools are not optimized for normal work; I have no idea whether that is, or was, true. They pretty much said that the only reason for multiple block size support was to allow tablespace transport between databases with different block sizes. Of course nothing was said that can be quoted, and I have no idea whether the situation has changed since.
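
For reference, using a non-default block size is a two-step operation, because a buffer pool of that size must exist before the tablespace can be created. A minimal sketch (the sizes are arbitrary, and Oracle Managed Files is assumed for the datafile):

alter system set db_16k_cache_size = 128m;
create tablespace ts16k datafile size 1g blocksize 16k;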

So if one accepts that all tablespaces should use the db_block_size, what size should this be? I have never seen any justification for the advice about "small blocks for OLTP, large blocks for DW" that has been in the docs for decades. It sounds right instinctively, but that is all. Virtually all the DBs I see use 8KB or 16KB, and I have no opinion on whether one performs better than the other for any purpose. Some people produce algorithms based on block size and db_file_multiblock_read_count, trying to relate the IO size to the RAID stripe or the ASM Allocation Unit, but again I have never seen any proof of this having any effect.

For a long time, I thought that 16KB blocks were more convenient than 8KB because it meant that I could have datafiles up to 64GB. But now that I always use bigfile tablespaces, that reason no longer holds.

With regard to the buffer cache, I now follow the principle that it is best to have one big default buffer pool: do not try to segment it with different block sizes or keep and recycle pools. The only interference a DBA should consider doing is setting the db_big_table_cache_percent_target, which I think can really help when you have a mixed workload. Otherwise, let Uncle Oracle get on with it: he can manage the cache better than me.
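
Enabling the big table cache is a single parameter change, and its effect can be watched in v$bt_scan_cache (the 40 below is an arbitrary example, not a recommendation):

alter system set db_big_table_cache_percent_target = 40;
select * from v$bt_scan_cache;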

So my conclusion is that in the twenty-first century, all DBs should use the 8KB default block size, and the cache should be one default pool. However, I would love to see some science behind this, or behind any other opinions.

ORDS 21.x make sure you have the latest


ORDS version 21 was released in May; I tested it and rolled it out in June. But once live, a problem popped up: numerous executions of this statement,
SELECT COUNT(1) FROM SYS.ALL_SYNONYMS WHERE OWNER = 'PUBLIC' AND SYNONYM_NAME = 'APEX_RELEASE' AND TABLE_NAME = 'APEX_RELEASE';
which appears to be run whenever a connection is initialized through the ORDS connection pools. It is not a nice query. This is a typical execution plan:

atest> SELECT COUNT(1) FROM SYS.ALL_SYNONYMS WHERE OWNER = 'PUBLIC' AND SYNONYM_NAME = 'APEX_RELEASE' AND TABLE_NAME = 'APEX_RELEASE';
       COUNT(1)
---------------
              1
Execution Plan
----------------------------------------------------------
Plan hash value: 4162468211
-------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                        | Name                       | Rows  | Bytes |TempSpc| Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                                 |                            |     1 |   198 |       |  4488   (7)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE                                  |                            |     1 |   198 |       |            |          |       |       |
|   2 |   VIEW                                           | ALL_SYNONYMS               | 20002 |  3867K|       |  4488   (7)| 00:00:01 |       |       |
|   3 |    SORT UNIQUE                                   |                            | 20002 |    13M|    14M|  4488   (7)| 00:00:01 |       |       |
|   4 |     UNION-ALL                                    |                            |       |       |       |            |          |       |       |
|   5 |      PARTITION LIST ALL                          |                            |     1 |   369 |       |    23 (100)| 00:00:01 |     1 |     2 |
|*  6 |       EXTENDED DATA LINK FULL                    | INT$DBA_SYNONYMS           |     1 |   369 |       |    23 (100)| 00:00:01 |       |       |
|*  7 |      VIEW                                        | _ALL_SYNONYMS_TREE         | 20001 |  6699K|       |  1545 (100)| 00:00:01 |       |       |
|*  8 |       CONNECT BY WITHOUT FILTERING               |                            |       |       |       |            |          |       |       |
|*  9 |        HASH JOIN RIGHT SEMI                      |                            |     1 |   475 |       |   112 (100)| 00:00:01 |       |       |
|  10 |         VIEW                                     | VW_SQ_1                    |  1190 |   153K|       |    87 (100)| 00:00:01 |       |       |
|* 11 |          FILTER                                  |                            |       |       |       |            |          |       |       |
|  12 |           PARTITION LIST ALL                     |                            | 20000 |  5664K|       |    87 (100)| 00:00:01 |     1 |     2 |
|  13 |            EXTENDED DATA LINK FULL               | _INT$_ALL_SYNONYMS_FOR_AO  | 20000 |  5664K|       |    87 (100)| 00:00:01 |       |       |
|* 14 |           FILTER                                 |                            |       |       |       |            |          |       |       |
|  15 |            NESTED LOOPS                          |                            |     1 |   107 |       |     6   (0)| 00:00:01 |       |       |
|  16 |             NESTED LOOPS                         |                            |     1 |    95 |       |     5   (0)| 00:00:01 |       |       |
|  17 |              NESTED LOOPS                        |                            |     1 |    71 |       |     4   (0)| 00:00:01 |       |       |
|  18 |               TABLE ACCESS BY INDEX ROWID        | USER$                      |     1 |    18 |       |     1   (0)| 00:00:01 |       |       |
|* 19 |                INDEX UNIQUE SCAN                 | I_USER1                    |     1 |       |       |     0   (0)| 00:00:01 |       |       |
|  20 |               TABLE ACCESS BY INDEX ROWID BATCHED| OBJ$                       |     1 |    53 |       |     3   (0)| 00:00:01 |       |       |
|* 21 |                INDEX RANGE SCAN                  | I_OBJ5                     |     1 |       |       |     2   (0)| 00:00:01 |       |       |
|* 22 |              INDEX RANGE SCAN                    | I_USER2                    |     1 |    24 |       |     1   (0)| 00:00:01 |       |       |
|* 23 |             INDEX RANGE SCAN                     | I_OBJAUTH1                 |     1 |    12 |       |     1   (0)| 00:00:01 |       |       |
|* 24 |            FIXED TABLE FULL                      | X$KZSRO                    |     1 |     6 |       |     0   (0)| 00:00:01 |       |       |
|* 25 |            TABLE ACCESS BY INDEX ROWID BATCHED   | USER_EDITIONING$           |     1 |     6 |       |     2   (0)| 00:00:01 |       |       |
|* 26 |             INDEX RANGE SCAN                     | I_USER_EDITIONING          |     2 |       |       |     1   (0)| 00:00:01 |       |       |
|* 27 |            TABLE ACCESS BY INDEX ROWID BATCHED   | USER_EDITIONING$           |     1 |     6 |       |     2   (0)| 00:00:01 |       |       |
|* 28 |             INDEX RANGE SCAN                     | I_USER_EDITIONING          |     2 |       |       |     1   (0)| 00:00:01 |       |       |
|  29 |            NESTED LOOPS SEMI                     |                            |     1 |    29 |       |     2   (0)| 00:00:01 |       |       |
|* 30 |             INDEX SKIP SCAN                      | I_USER2                    |     1 |    20 |       |     1   (0)| 00:00:01 |       |       |
|* 31 |             INDEX RANGE SCAN                     | I_OBJ4                     |     1 |     9 |       |     1   (0)| 00:00:01 |       |       |
|  32 |         PARTITION LIST ALL                       |                            | 20000 |  6699K|       |    21 (100)| 00:00:01 |     1 |     2 |
|  33 |          EXTENDED DATA LINK FULL                 | _INT$_ALL_SYNONYMS_FOR_SYN | 20000 |  6699K|       |    21 (100)| 00:00:01 |       |       |
|  34 |        PARTITION LIST ALL                        |                            | 20000 |  6699K|       |    21 (100)| 00:00:01 |     1 |     2 |
|  35 |         EXTENDED DATA LINK FULL                  | _INT$_ALL_SYNONYMS_FOR_SYN | 20000 |  6699K|       |    21 (100)| 00:00:01 |       |       |
-------------------------------------------------------------------------------------------------------------------------------------------------------
You can see the problem: it is a UNION ALL query. The first branch (operations 5 and 6) is simple and low cost. It is what you get if you query dba_synonyms instead of all_synonyms:
atest> SELECT COUNT(1) FROM SYS.dba_SYNONYMS WHERE OWNER = 'PUBLIC' AND SYNONYM_NAME = 'APEX_RELEASE' AND TABLE_NAME = 'APEX_RELEASE';
       COUNT(1)
---------------
              1
Execution Plan
----------------------------------------------------------
Plan hash value: 1145150501

--------------------------------------------------------------------------------------------------------------
| Id  | Operation                 | Name             | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
--------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT          |                  |     1 |   198 |    23 (100)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE           |                  |     1 |   198 |            |          |       |       |
|   2 |   PARTITION LIST ALL      |                  |     1 |   198 |    23 (100)| 00:00:01 |     1 |     2 |
|*  3 |    EXTENDED DATA LINK FULL| INT$DBA_SYNONYMS |     1 |   198 |    23 (100)| 00:00:01 |       |       |
--------------------------------------------------------------------------------------------------------------
but the second branch (operations 7 through 35) is ghastly. It is a query against the SYS._ALL_SYNONYMS_TREE view. That view is a hierarchical query, meaning that it has to be materialized and cannot be merged. It must be run to completion, and against a database with zillions of synonyms it is slow, possibly several seconds. Why is it there? To account for the possibility that you might have synonyms pointing to synonyms, which ORDS really doesn't need to know about. There is no reason for ORDS to be doing this.
We were fortunate: in the databases where I noticed the issue, usage was light and the query usually ran in under a second, but it was still hammering the system.
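If you want to check whether your own system is affected, something like this against v$sql should find the statement (a sketch; the LIKE pattern is just my guess at a sufficiently selective match):

select sql_id, executions,
       round(elapsed_time / greatest(executions, 1) / 1000) avg_ms
from   v$sql
where  sql_text like '%ALL_SYNONYMS%APEX_RELEASE%';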
The solution, if you haven't done so already, is to upgrade your ORDS pronto. Oracle rushed out a quick fix last month, ORDS 21.1.3, and this weekend released ORDS 21.2.0, which should be the real solution. It is looking good so far. Hope this helps someone.
--
John Watson
Oracle Certified Master DBA


DB 21c available for download
