David Kurtz

This blog contains things about PeopleSoft, mostly performance related, that DBAs might find interesting.
Or then again they might not! The non-PeopleSoft Oracle stuff is at blog.go-faster.co.uk.

Running nVision in a Mixed OpenXML/Excel mode

Mon, 2021-04-12 05:58
This blog post follows on from a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.
OpenXML is an alternative method for running PeopleSoft nVision introduced in PeopleTools 8.54.  It is recommended by Oracle and is the default configuration in the process scheduler.  It generally outperforms Excel because it is a lightweight process and it evades Excel's concurrency limitations.
Only one Excel nVision report will run concurrently on a Windows server because Excel itself has been a single-threaded process since Microsoft Excel 2010.  If you need to run 10 concurrent nVision reports, you would need 10 Windows servers, each running a process scheduler that runs a single nVision report in Excel mode.  One of the major advantages of OpenXML is that it is not subject to this restriction.
What is the Problem?
I have worked with PeopleSoft Financials customers where it became apparent that some of their nVision layouts did not run correctly on OpenXML, and until they could be redeveloped they had to run on Excel.  The problems seem to be around pagination and nPlosion.  Also, some variables are not populated when the layouts are run in OpenXML.
I have also found a very few layouts that, contrary to expectation, perform better on Excel than OpenXML.
Reverting to Excel is the only other option, but to do so across the board would have a significant performance impact.  However, my experience suggests that customers can run the majority of their layouts perfectly successfully on OpenXML, while a relatively small proportion still has to be run on Excel.
A mixed mode of operation is therefore a good option, where nVision is run on OpenXML by default, but certain reports and report books are run on Excel.  It avoids having to train end-users to run certain reports on certain process schedulers.  
This blog sets out a method by which reports using certain nVision layouts are automatically redirected to process schedulers that run Excel nVision, using a combination of process categories and two database triggers that adjust the request records before they are picked up by the process schedulers.
Excel nVision Process Schedulers and OpenXML nVision Process Schedulers
Whether nVision runs in Excel or OpenXML mode is determined by the UseExcelAutomation variable in the nVision section of the process scheduler configuration file (psprcs.cfg).  It applies to all nVision processes run by that scheduler.  However, this variable is not well documented in the PeopleTools manual.  This is the delivered setting, which selects OpenXML mode:

[nVision]
;=========================================================================
; General settings for nVision
;=========================================================================

UseExcelAutomation=2
Today, most PeopleSoft systems run most of their batch processes on Linux/Unix servers.  nVision is one of the few PeopleSoft process types that had to run on Windows, and that is still the case for Excel nVision.  It is typical for these PeopleSoft systems to have several Windows servers dedicated to nVision execution.  To run in mixed mode, it is necessary to have an Excel nVision process scheduler and an OpenXML nVision process scheduler on each server.
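For example, the two schedulers' psprcs.cfg files on each Windows server would differ only in this one setting.  This is only a sketch: I believe 1 selects Excel automation, but verify the exact values against the PeopleTools documentation for your release.

[nVision]
;PSNT_X1 - OpenXML nVision scheduler
UseExcelAutomation=2

[nVision]
;PSNT_E1 - Excel nVision scheduler (1 is believed to select Excel automation - verify for your release)
UseExcelAutomation=1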
Process Categories 
In the past, I have suggested creating additional process types, but I think it is easier, and more in keeping with the process scheduler structure, to create additional process definitions and process categories.  It is necessary to create two new process categories (in PeopleTools -> Process Scheduler -> System Settings): in this example, nVisionOpenXML and nVisionExcel.
Servers
Process Schedulers (or Servers) should be created in pairs for each physical server available.  In this example, I have created PSNT_X1 to run the OpenXML category and PSNT_E1 to run the Excel category.  
Note that:
  • Maximum concurrency for the Excel server has been reduced to 1.
  • The maximum number of API aware processes for PSNT_E1 has also been reduced to 1.
  • The 'Default' category is disabled on both PSNT_E1 and PSNT_X1.
  • These servers only run nVision and PSJob process types.
  • They load balance within the same operating system (although this is not appropriate for PSNT_X1 if you also run OpenXML on other non-Windows process schedulers).

Process Definitions
The NVSRUN and RPTBOOK process definitions have been duplicated as NVSRUNE and RPTBOOKE respectively.  NVSRUN and RPTBOOK are allocated to the nVisionOpenXML process category, but NVSRUNE and RPTBOOKE are allocated to the nVisionExcel process category.
The whole configuration can be set up manually through the Process Scheduler components. However, I have also written a couple of scripts to manage the configuration that should ensure consistency.
Scripts
When a process request is submitted, either from the PeopleSoft application or from an enterprise scheduling tool (such as AutoSYS or Control-M), request records are inserted into two tables:
  • PSPRCSQUE, which is polled by the process schedulers when looking for work.
  • PSPRCSRQST, which reports the status of processes and is exposed in the Process Monitor component in PeopleSoft.
These two tables are siblings, both keyed on PRCSINSTANCE.  Both have the following columns:
  • PRCSNAME determines the name of the process to be run.
  • PRCSCATEGORY specifies the category of the process in the request.
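For example, a quick query against PSPRCSRQST shows how recent nVision requests were routed, using only the columns discussed in this post:

SELECT prcsinstance, prcstype, prcsname, prcscategory, servernamerqst, runcntlid
FROM   psprcsrqst
WHERE  prcstype LIKE 'nVision%'
ORDER BY prcsinstance DESC;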
Triggers
I have created a pair of almost identical database triggers, one firing on insert into each of these tables, that make the following changes to requests for nVision layouts that should be run on Excel:
  • Change the process name from RPTBOOK to RPTBOOKE, or from NVSRUN to NVSRUNE.
  • Change the process category from nVisionOpenXML to nVisionExcel.  It is changing the process category that makes the difference to which process scheduler picks up the request, but the name is also changed to maintain consistency.
  • If the process request specifies a particular process scheduler (specified in SERVERNAMERQST) that cannot run the updated process category, then the requested server name is blanked out.
If any one layout in a ReportBook is listed as having to run on Excel, then the whole ReportBook is moved to Excel because the ReportBook runs as a single process.  This may become a reason to split ReportBooks where only some layouts need to run on Excel.
Script gfc_nvsion_excel_redirect_triggers.sql creates the meta-data table PS_NVS_REDIR_EXCEL, and the triggers on PSPRCSRQST and PSPRCSQUE.
  • The triggers will only fire when requests for nVision-Report/NVSRUN or nVision-ReportBook/RPTBOOK are submitted.
  • There is an almost identical trigger on each table (apart from the conditionally compiled debug code).  I will only exhibit the trigger on PSPRCSRQST in this post.
  • Normally, one would not code WHEN OTHERS THEN NULL exception handlers in PL/SQL.  However, I do this in triggers on process scheduler tables to prevent any raised error from crashing the scheduler.
  • Conditionally compiled debug code in the trigger is not compiled if the mydebug flag is set to FALSE.

ALTER SESSION SET PLSQL_CCFLAGS = 'mydebug:FALSE';

CREATE OR REPLACE TRIGGER sysadm.gfc_nvision_excel_redirect_rqst
BEFORE INSERT ON sysadm.psprcsrqst
FOR EACH ROW
WHEN (new.prcstype IN('nVision-Report','nVision-ReportBook')
  AND new.prcsname IN('RPTBOOK','NVSRUN'))
DECLARE
  l_excel INTEGER := 0;
  l_maxconcurrent INTEGER;
BEGIN
  $IF $$mydebug $THEN dbms_output.put_line('Entering Trigger sysadm.gfc_nvision_excel_redirect_rqst'); $END

  IF :new.prcstype = 'nVision-ReportBook' THEN
    --check for reportbook running report that uses layout on Excel list
    SELECT 1
    INTO   l_excel
    FROM   psnvsbookrequst b
    ,      ps_nvs_report n
    ,      ps_nvs_redir_excel e
    WHERE  b.oprid = :new.oprid
    AND    b.run_cntl_id = :new.runcntlid
    AND    b.eff_status = 'A'
    AND    n.business_unit = b.business_unit
    AND    n.report_id = b.report_id
    AND    n.layout_id = e.layout_id
    AND    e.eff_status = 'A'
    AND    rownum=1;
  ELSE
    --look in command line for report running layout on Excel list
    SELECT 1
    INTO   l_excel
    FROM   psprcsparms p
    ,      ps_nvs_report n
    ,      ps_nvs_redir_excel e
    WHERE  p.prcsinstance = :new.prcsinstance
    AND    n.report_id = substr(regexp_substr(p.parmlist,'-NRN[^ ]+'),5)
    AND    n.layout_id = e.layout_id
    AND    e.eff_status = 'A'
    AND    rownum=1;
  END IF;

  --update name of process if to be run on Excel
  $IF $$mydebug $THEN dbms_output.put_line('found Excel nVision layout for oprid='||:new.oprid||', runcntlid='||:new.runcntlid); $END
  IF :new.prcsname IN('RPTBOOK') THEN
    :new.prcsname := 'RPTBOOKE';
  ELSE
    :new.prcsname := :new.prcsname||'E';
  END IF;

  --get category of new process definition
  SELECT d.prcscategory
  INTO   :new.prcscategory
  FROM   ps_prcsdefn d
  WHERE  d.prcstype = :new.prcstype
  AND    d.prcsname = :new.prcsname;

  --get max concurrency of new category on requested server
  SELECT maxconcurrent
  INTO   l_maxconcurrent
  FROM   ps_servercategory
  WHERE  prcscategory = :new.prcscategory
  AND    servername = :new.servernamerqst;

  --if request assigned to server where it cannot run, blank out server assignment and allow load balancing to determine it
  IF l_maxconcurrent = 0 THEN
    :new.servernamerqst := ' ';
  END IF;

  $IF $$mydebug $THEN dbms_output.put_line('set process name to '||:new.prcsname); $END
EXCEPTION
  WHEN no_data_found THEN
    $IF $$mydebug $THEN dbms_output.put_line('No excel redirect found'); $ELSE NULL; $END
  WHEN others THEN
    $IF $$mydebug $THEN dbms_output.put_line('Other Error'); $ELSE NULL; $END
END;
/
show errors
MetaData
The script gfc_nvsion_excel_redirect_metadata.sql populates a list of nVision layouts that must run on Excel into the metadata table PS_NVS_REDIR_EXCEL. You have to enter your own list of nVision layouts here.
REM gfc_nvsion_excel_redirect_metadata.sql
REM (c)Go-Faster Consultancy 2021
REM load metadata of layouts that have to run on Excel rather than OpenXML

spool gfc_nvsion_excel_redirect_metadata

INSERT INTO ps_nvs_redir_excel VALUES ('EXCELNVS','A');
commit;

spool off
Other Scripts
  • excel_only_reportbooks.sql determines which nVision ReportBooks contain a mixture of layouts where only some must be run on Excel.  These are candidates to be split up.
                     ReportBooks with both Excel and OpenXML nVision layouts

                                     Number of      Excel
OPRID  RUN_CNTL_ID                     Layouts    Layouts  Excel Layouts       OpenXML Layouts
------ -------------------------- ----------- ----------  ------------------  ---------------------------------------------------------------------
BATCH  ADHOC_NVISION                        8          1  GLXXXO21            GLXYZD03, GLXXXO03, GLXXXO05, GLXXXO22, GLXXXO23, GLXXXO31, GLXXXO32
BATCH  ADHOC_09062016                       3          1  ZYXVIS14            ZYXVBS14, ZYXVIS12
BATCH  GLXXX_GLXXXO02_ABDC_YYY              2          1  GLXXXO02            GLXXXO28
BATCH  GLXXX_GLXXXO21_ABDC_YYY              3          2  GLXXXO21, GLXXXO98  GLXXXO71
BATCH  GLXXX_ZYXB4080_M000_ZZZ             10          2  ZYXVBS64, ZYXVIS14  ZYXVBS04, ZYXVBS14, ZYXVBS14_LED, ZYXVBS16, ZYXVBS16_LED, ZYXVBS54, ZBUVIS04, ZYXVIS16
  • nvision_processsmonitor.sql reports on nVision ReportBooks scheduled in the last 24 hours, their process category, and the Excel layouts within them, so that you can verify that the configuration is working.
                                                       Process          Server   Server   Server
PRCSINSTANCE RUNDTTM                      PRCSNAME     Category         Request  Run      Assign   RUNCNTLID                      STA EXCEL_LAYOUT_IDS
------------ ---------------------------- ------------ ---------------- -------- -------- -------- ------------------------------ --- ------------------------------
    12345680 31-MAR-21 07.42.51.000000 AM RPTBOOKE     nVisionExcel              PSNT_E1  PSNT_E1  GLXXX_GLXXXO10_ABDC_YYY        OK  GLXXXO10
    12345681 31-MAR-21 07.43.25.000000 AM RPTBOOKE     nVisionExcel              PSNT_E1  PSNT_E1  GLXXX_GLXXXO21_ABDC_YYY        OK  GLXXXO21, GLXXXO98
    12345683 31-MAR-21 08.06.42.000000 AM RPTBOOKE     nVisionExcel              PSNT_E1  PSNT_E1  GLXXX_GLXXXO98_ADHOC           OK  GLXXXO98
    12345684 31-MAR-21 08.32.12.000000 AM RPTBOOKE     nVisionExcel              PSNT_E1  PSNT_E1  GLXXX_GLXXXO21_ABDC_YYY        OK  GLXXXO21, GLXXXO98
    12345685 31-MAR-21 09.18.23.000000 AM FBRPTBK      nVisionOpenXML            PSNT_X1  PSNT_X1  GLXXX_GLXYAD06_ABDC_YYY        OK
    12345686 31-MAR-21 09.20.01.000000 AM RPTBOOK      nVisionOpenXML            PSNT_X1  PSNT_X1  GLXXX_GLXYBP00_ABDC_YYY        OK
    12345687 31-MAR-21 09.22.21.000000 AM RPTBOOK      nVisionOpenXML            PSNT_X1  PSNT_X1  GLXXX_GLXYAD03_ABDC_YYY        OK
    12345688 31-MAR-21 09.23.11.000000 AM RPTBOOK      nVisionOpenXML            PSNT_X1  PSNT_X1  GLXXX_GLVLAD03_ABDC_XXX        OK
    12345689 31-MAR-21 09.24.18.000000 AM RPTBOOK      nVisionOpenXML            PSNT_X1  PSNT_X1  GLXXX_ZYXAB001_M000_ZZZ        OK

All the scripts mentioned in this blog have been included in my nVision Github repository.

    Tuning Dynamically Generated SQL from PeopleSoft COBOL and Application Engine

    Tue, 2020-12-01 04:58

When working on a performance issue, you may reach the point where you want to introduce one or more hints into a SQL statement.  It is often tempting simply to add the hints to the source code, and that may appear to be the simpler course.  It is often easy enough in Application Engine programs; however, it is much more difficult in PeopleSoft COBOL programs.

    A strong argument against such code change is that having made it, you have also to functionally test the change and push it through the entire release management process to get it into production.  Then, should you ever want to change or remove the hints, you have to go through the entire process again.

    Oracle provides several mechanisms to define a database object containing a hint or set of hints and to apply them to matching SQL statements.  These mechanisms work best with SQL that uses bind variables rather than literals.  If an otherwise identical SQL statement has different literal values then it has a different SQL_ID and is treated by the database as a new statement.  SQL Plan Baselines, SQL Patches and SQL Profiles match to the specific text of a SQL statement.  Different literal values will prevent matching and defeat these mechanisms.  These techniques must still be tested and migrated in a controlled manner, but they have no functional impact and so only testing of performance is needed.
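For example, a SQL patch can attach a hint to a single statement identified by SQL_ID.  This is a minimal sketch using the DBMS_SQLDIAG.CREATE_SQL_PATCH function (a documented function from Oracle 12.2 onwards); the SQL_ID and hint shown are purely illustrative:

DECLARE
  l_patch_name VARCHAR2(128);
BEGIN
  l_patch_name := DBMS_SQLDIAG.CREATE_SQL_PATCH(
    sql_id    => '5v35fu8sn2wnz', /*hypothetical SQL_ID*/
    hint_text => 'USE_NL(@"SEL$1" "A"@"SEL$1")',
    name      => 'my_example_patch');
END;
/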

SQL Profiles can also perform forced matching, where statements that are similar except for literal values are matched.  However, note that SQL Profiles require licensing of the Tuning Pack.

    Some parts of PeopleTools and the PeopleSoft applications are better than others at producing sharable SQL with bind variables.  Most of the SQL generated by the component processor uses bind variables.  In Application Engine, if the ReUse Statement property is not set, which it is not by default, the %BIND fields are substituted with their literal values in the SQL statement.  However, if the property is set then %BIND fields become bind variables.  Over the years much more PeopleSoft code has been delivered with this attribute enabled.  Doing so has significant performance benefits (see Performance Benefits of ReUse Statement Flag in Application Engine).  

    Where, under normal circumstances, I might use a baseline or patch to inject a hint or profile of hints into a particular SQL statement (i.e. where I am dealing with a single SQL_ID), if the statement has literal values that change, then each statement has a different SQL_ID.  I have experimented with setting CURSOR_SHARING to FORCE at session-level for a specific scheduled process, but I have always had very poor experiences with that approach.  It invariably causes more problems than it solves.  Instead, I use force matched SQL Profiles.

The PeopleTools documentation sets out situations where ReUse Statement cannot be set.  This includes dynamically generated code where %BIND(…,NOQUOTES) is used to embed a piece of SQL held in a variable.  This is a very common technique in PeopleSoft; often dynamically generated code is driven by the application configuration.
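For example, a hypothetical Application Engine step might embed a WHERE clause that was assembled in PeopleCode into a state record field (the record and field names here are illustrative only):

UPDATE PS_EXAMPLE_TBL
SET PROCESS_INSTANCE = %Bind(PROCESS_INSTANCE)
WHERE %Bind(EX_WHERE_SQL, NOQUOTES)

Because the contents of EX_WHERE_SQL are spliced into the statement as text, any values it contains appear as literals, and ReUse Statement cannot be used.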

    We also see a similar design in PeopleSoft's COBOL programs.  Static statements are loaded from the stored SQL statements table (PS_SQLSTMT_TBL) and do use bind variables, but dynamic statements are assembled at runtime from static fragments in the COBOL code and any variable values are embedded as literals rather than using bind variables.
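Those static fragments can be inspected directly in the stored SQL statements table; for example, for the process discussed below (a sketch, assuming the delivered PS_SQLSTMT_TBL columns):

SELECT stmt_type, stmt_name, stmt_text
FROM   ps_sqlstmt_tbl
WHERE  pgm_name = 'FSPCCURR'
AND    ROWNUM <= 5;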

    Forced matching will allow a SQL profile to match a statement that is the same except for different literal values.   However, dynamic SQL statements can change in ways that are beyond that, including:

    • Different instances of working storage tables can be used by different instances of the same process.
    • Different columns can be used in select and group by clauses.
    • Different criteria can be introduced. 
    • A different number of terms in an IN() condition.

    Occasionally, and I really mean very occasionally when I have exhausted other alternatives, I have dynamically created groups of SQL Profiles (still with forced matching) to cover every permutation of the variations of the dynamic SQL statement.

    Example
    Here is a dynamic statement from such a COBOL process, FSPCCURR.  This General Ledger Revaluation process adjusts the base currency value of the account balances by creating adjusting entries for the accounts being revalued. It creates corresponding entries for any gain or loss that results from the revaluation.  It was captured by AWR, and I have extracted it with DBMS_XPLAN.DISPLAY_AWR.
    The ledger table (PS_LEDGER) is joined to a working storage table of tree selectors (PS_TREE_SEL10_R001) and working storage table (PS_CURR_WRK_RT001) and the result is put into another working storage table (PS_CURR_WRK_TBL001).
INSERT INTO PS_CURR_WRK_TBL001 (ACCOUNT,ACCOUNTING_PERIOD,AFFILIATE,AFFILIATE_INTRA1,AFFILIATE_INTRA2,ALTACCT,BOOK_CODE,BUDGET_REF,
CHARTFIELD1,CHARTFIELD2,CHARTFIELD3,CLASS_FLD,CURRENCY_CD,DATE_CODE,DEPTID,FISCAL_YEAR,FUND_CODE,GL_ADJUST_TYPE,LEDGER,
OPERATING_UNIT,POSTED_BASE_AMT,POSTED_TOTAL_AMT,POSTED_TOTAL_CR,POSTED_TOTAL_DR,POSTED_TRAN_AMT,POSTED_TRAN_CR,POSTED_TRAN_DR,
PROCESS_INSTANCE,PRODUCT,PROGRAM_CODE,PROJECT_ID,STATISTICS_CODE,RATE_DIV,RATE_MULT,GAINLOSS_ACCT,RESTATED_AMT,REVAL_ADJUSTMENT,
TARGET_CURRENCY)
SELECT A.ACCOUNT,003,AFFILIATE,' ',' ',' ',A.BOOK_CODE,BUDGET_REF,CHARTFIELD1,CHARTFIELD2,CHARTFIELD3,' ',A.CURRENCY_CD,
DATE_CODE,DEPTID,2020,FUND_CODE,GL_ADJUST_TYPE,'X_UKCORE',OPERATING_UNIT,SUM(A.POSTED_BASE_AMT),SUM(POSTED_TOTAL_AMT),
0,0,0,0,0,0002858795,PRODUCT,PROGRAM_CODE,PROJECT_ID,' ',R.RATE_DIV,R.RATE_MULT,' ',
ROUND(SUM(((((A.POSTED_TOTAL_AMT) / (RATE_DIV))) * (RATE_MULT))),02),
ROUND(SUM(((((A.POSTED_TOTAL_AMT) / (RATE_DIV))) * (RATE_MULT))),02) - SUM(A.POSTED_BASE_AMT),
'USD'
FROM PS_LEDGER A, PS_TREE_SEL10_R001 B, PS_CURR_WRK_RT001 R
WHERE A.LEDGER='X_UKCORE'
AND A.FISCAL_YEAR = 2020
AND A.ACCOUNTING_PERIOD IN (0, 1, 2, 3)
AND B.PROCESS_INSTANCE=0002858795
AND B.CHARTFIELD='ACCOUNT'
AND A.ACCOUNT>=B.RANGE_FROM_10
AND A.ACCOUNT<=B.RANGE_TO_10
AND A.BUSINESS_UNIT='96013'
AND A.CURRENCY_CD <> 'GBP'
AND FROM_CUR = A.CURRENCY_CD
AND TO_CUR = 'GBP'
AND R.PROCESS_INSTANCE = 0002858795
GROUP BY A.ACCOUNT,AFFILIATE,A.BOOK_CODE,BUDGET_REF,CHARTFIELD1,CHARTFIELD2,CHARTFIELD3,A.CURRENCY_CD,DATE_CODE,DEPTID,
FUND_CODE,GL_ADJUST_TYPE,OPERATING_UNIT,PRODUCT,PROGRAM_CODE,PROJECT_ID,RATE_DIV,RATE_MULT
However, there are several copies of each of these working storage tables, and different concurrent instances of this process may be allocated different copies.
There is also an IN() condition that lists the accounting periods to be processed.
So the statement can vary.  Here is another version of what is essentially the same statement, with different literal values, different working storage tables, and a different accounting period (this time period 5).  The parts that vary from statement to statement, other than the literal values, are the tree selector and rate working storage table names and the number of terms in the IN() condition.
INSERT INTO PS_CURR_WRK_TBL001 (ACCOUNT,ACCOUNTING_PERIOD,AFFILIATE,AFFILIATE_INTRA1,AFFILIATE_INTRA2,ALTACCT,BOOK_CODE,BUDGET_REF,
CHARTFIELD1,CHARTFIELD2,CHARTFIELD3,CLASS_FLD,CURRENCY_CD,DATE_CODE,DEPTID,FISCAL_YEAR,FUND_CODE,GL_ADJUST_TYPE,LEDGER,
OPERATING_UNIT,POSTED_BASE_AMT,POSTED_TOTAL_AMT,POSTED_TOTAL_CR,POSTED_TOTAL_DR,POSTED_TRAN_AMT,POSTED_TRAN_CR,POSTED_TRAN_DR,
PROCESS_INSTANCE,PRODUCT,PROGRAM_CODE,PROJECT_ID,STATISTICS_CODE,RATE_DIV,RATE_MULT,GAINLOSS_ACCT,RESTATED_AMT,REVAL_ADJUSTMENT,
TARGET_CURRENCY)
SELECT A.ACCOUNT,005,AFFILIATE,' ',' ',' ',A.BOOK_CODE,BUDGET_REF,CHARTFIELD1,CHARTFIELD2,CHARTFIELD3,' ',A.CURRENCY_CD,
DATE_CODE,DEPTID,2020,FUND_CODE,GL_ADJUST_TYPE,'XXXX',OPERATING_UNIT,SUM(A.POSTED_BASE_AMT),SUM(POSTED_TOTAL_AMT),
0,0,0,0,0,0002991789,PRODUCT,PROGRAM_CODE,PROJECT_ID,' ',R.RATE_DIV,R.RATE_MULT,' ',
ROUND(SUM(((((A.POSTED_TOTAL_AMT) / (RATE_DIV))) * (RATE_MULT))),02),
ROUND(SUM(((((A.POSTED_TOTAL_AMT) / (RATE_DIV))) * (RATE_MULT))),02) - SUM(A.POSTED_BASE_AMT),
'AUD'
FROM PS_LEDGER A, PS_TREE_SEL10_R B, PS_CURR_WRK_RT002 R
WHERE A.LEDGER='XXXX'
AND A.FISCAL_YEAR = 2020
AND A.ACCOUNTING_PERIOD IN (0, 1, 2, 3, 4, 5)
AND B.PROCESS_INSTANCE=0002991789
AND B.CHARTFIELD='ACCOUNT'
AND A.ACCOUNT>=B.RANGE_FROM_10
AND A.ACCOUNT<=B.RANGE_TO_10
AND A.BUSINESS_UNIT='13051'
AND A.CURRENCY_CD <> 'AUD'
AND FROM_CUR = A.CURRENCY_CD
AND TO_CUR = 'AUD'
AND R.PROCESS_INSTANCE = 0002991789
GROUP BY A.ACCOUNT,AFFILIATE,A.BOOK_CODE,BUDGET_REF,CHARTFIELD1,CHARTFIELD2,CHARTFIELD3,A.CURRENCY_CD,DATE_CODE,DEPTID,
FUND_CODE,GL_ADJUST_TYPE,OPERATING_UNIT,PRODUCT,PROGRAM_CODE,PROJECT_ID,RATE_DIV,RATE_MULT
    If I want to use SQL Profiles to introduce hints to control the execution plan, then I will need a different profile for every possible permutation.
    I start by using Carlos Sierra's coe_xfr_sql_profile.sql script.  This is a part of Oracle's SQLTEXPLAIN (SQLT) tool. It generates a SQL script that generates a SQL profile to reproduce a given execution plan for a given SQL statement that was captured by AWR.
From there, it is not a big jump to add a SQL statement that generates all the permutations of the variations in the SQL (other than the literal values) and create a profile inside a loop.  The exact details will vary depending on the behaviour of the program.  However, in this particular example I need the following:
    • Different SQL profiles will be needed for each accounting period because there will be a different list of accounting periods in the IN() condition.  Subquery factors n and n1 produce a list of accounting periods.
    WITH n AS (SELECT rownum n FROM dual CONNECT BY level<=12
    ), n1 AS (
    SELECT n1.n period, LISTAGG(n2.n,', ') WITHIN GROUP (ORDER BY n2.n) periods
    FROM n n1, n n2
    WHERE n2.n <= n1.n
    AND n1.n <= 12
    GROUP BY n1.n
    )
    SELECT * FROM n1
    /

    PERIOD PERIODS
---------- ----------------------------------------
         1 1
         2 1, 2
         3 1, 2, 3
         4 1, 2, 3, 4
         5 1, 2, 3, 4, 5
         6 1, 2, 3, 4, 5, 6
         7 1, 2, 3, 4, 5, 6, 7
         8 1, 2, 3, 4, 5, 6, 7, 8
         9 1, 2, 3, 4, 5, 6, 7, 8, 9
        10 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
        11 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11
        12 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12
    • Lists of the various tables used for working storage can be queried from the PeopleSoft data dictionary, PSRECDEFN.  I can see that there are 5 versions of the current work table that the process can choose from.  Note that these are ordinary tables, so there are 5 different records in PSRECDEFN. 
    SELECT TO_NUMBER(substr(recname,13)) seq
    , decode(sqltablename,' ','PS_'||recname,sqltablename) table_name
    FROM psrecdefn
    WHERE recname like 'CURR_WRK_TBL%'
    /

       SEQ TABLE_NAME
---------- ------------------
           PS_CURR_WRK_TBL
         1 PS_CURR_WRK_TBL001
         2 PS_CURR_WRK_TBL002
         3 PS_CURR_WRK_TBL003
         4 PS_CURR_WRK_TBL004

    • However, if I was working on a temporary record used in an Application Engine program, I would need to look up the number of instances of that record.  
  • The number of non-shared temporary record tables is the sum of all the instances defined on each Application Engine program to which the record is allocated, plus the number of global instances, up to a maximum of 99.  Instance 0 is the shared instance.  The number can be altered in development and the additional tables built by Application Designer.  This can require additional SQL Profiles to be built.
WITH n AS (
SELECT rownum-1 n FROM dual CONNECT BY LEVEL<=100
), c AS (
SELECT c.recname
, n.n instance
, DECODE(r.sqltablename, ' ', 'PS_'||r.recname,r.sqltablename)||DECODE(n.n,0,'',n.n) table_name
FROM n
, pstemptblcntvw c
, psrecdefn r
, psoptions o
WHERE r.recname = c.recname
AND n.n <= c.temptblinstances+o.temptblinstances
)
SELECT instance, table_name
FROM c
WHERE recname = 'WRK_XREF_CAL'
/

  INSTANCE TABLE_NAME
---------- ----------------------------------------------------------
         0 PS_WRK_XREF_CAL
         1 PS_WRK_XREF_CAL1
         2 PS_WRK_XREF_CAL2
         3 PS_WRK_XREF_CAL3
         4 PS_WRK_XREF_CAL4
         5 PS_WRK_XREF_CAL5
         6 PS_WRK_XREF_CAL6
• In this particular example, I know that every permutation of all three tables could occur in every accounting period, so I simply Cartesian join all the subquery factors.
      • In other cases, only some permutations may occur.  This must be handled in the code that is written.  Literal values do not need to be considered because the profile will be created with force matching.
      • In Application Engine, although you often see the same instance of different temporary records used in the same process, there is nothing to prevent different instances of different records being used, and so all permutations must be considered.
• I also concatenate the ID for each table and the accounting period to produce an ID string that I can use in the name of the SQL profile.
    'TS'||ts.seq||'_RT'||rt.seq||'_TBL'||wk.seq||'_'||n1.period
The profile generated by coe_xfr_sql_profile.sql will contain the complete profile of hints for the SQL statement captured by AWR.  That is officially the only way to guarantee a particular execution plan.  However, as in this example, I could specify just the hints that I want to introduce into the statement, effectively treating the SQL Profile as if it were a SQL Patch.
Here is the modified script.  Note the following changes:
    • The SQL statement in the FOR clause returns all the permutations of the variations in the SQL statement in an implicit cursor.  
    • Table names are concatenated into the SQL text from the columns in the implicit cursor.
    • Single quotation marks are doubled so that the string contains the single quotation mark.
    • It is important not to add or remove any spaces when introducing these changes.
    • Profiles are dropped and created inside the loop.  Force Matching is enabled.
REM coe_xfr_sql_profile_FSPCCURR_tree_rate_curr_wrk.sql
SPO coe_xfr_sql_profile_FSPCCURR_tree_rate_curr_wrk.log;

WHENEVER SQLERROR CONTINUE
REM WHENEVER SQLERROR EXIT SQL.SQLCODE;
VAR signature NUMBER;

DECLARE
  sql_txt CLOB;
  h SYS.SQLPROF_ATTR;
  e_no_sql_profile EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_no_sql_profile, -13833);
BEGIN
  FOR i IN(
    WITH n AS (
      SELECT rownum n FROM dual CONNECT BY level<=12
    ), n1 AS (
      SELECT n1.n period, LISTAGG(n2.n,', ') WITHIN GROUP (ORDER BY n2.n) periods
      FROM   n n1, n n2
      WHERE  n2.n <= n1.n
      GROUP BY n1.n
    ), ts AS (
      SELECT TO_NUMBER(substr(recname,13)) seq
      ,      decode(sqltablename,' ','PS_'||recname,sqltablename) table_name
      FROM   psrecdefn
      WHERE  recname like 'TREE_SEL10_R%'
    ), rt AS (
      SELECT TO_NUMBER(substr(recname,12)) seq
      ,      decode(sqltablename,' ','PS_'||recname,sqltablename) table_name
      FROM   psrecdefn
      WHERE  recname like 'CURR_WRK_RT%'
    ), wk AS (
      SELECT TO_NUMBER(substr(recname,13)) seq
      ,      decode(sqltablename,' ','PS_'||recname,sqltablename) table_name
      FROM   psrecdefn
      WHERE  recname like 'CURR_WRK_TBL%'
    )
    SELECT 'TS'||ts.seq||'_RT'||rt.seq||'_TBL'||wk.seq||'_'||n1.period id
    ,      ts.table_name ts_table_name
    ,      rt.table_name rt_table_name
    ,      wk.table_name wk_table_name
    ,      n1.period, n1.periods
    FROM   n1, ts, rt, wk
  ) LOOP
    sql_txt := 'INSERT INTO '||i.wk_table_name||' (ACCOUNT,ACCOUNTING_PERIOD,AFFILIATE,AFFILIATE_INTRA1,AFFILIATE_INTRA2,ALTACCT,
BOOK_CODE,BUDGET_REF,CHARTFIELD1,CHARTFIELD2,CHARTFIELD3,CLASS_FLD,CURRENCY_CD,DATE_CODE,DEPTID,FISCAL_YEAR,
FUND_CODE,GL_ADJUST_TYPE,LEDGER,OPERATING_UNIT,POSTED_BASE_AMT,POSTED_TOTAL_AMT,POSTED_TOTAL_CR,POSTED_TOTAL_DR,
POSTED_TRAN_AMT,POSTED_TRAN_CR,POSTED_TRAN_DR,PROCESS_INSTANCE,PRODUCT,PROGRAM_CODE,PROJECT_ID,STATISTICS_CODE,
RATE_DIV,RATE_MULT,GAINLOSS_ACCT,RESTATED_AMT,REVAL_ADJUSTMENT,TARGET_CURRENCY)
SELECT A.ACCOUNT,011,AFFILIATE,
'' '','' '','' '',A.BOOK_CODE,BUDGET_REF,CHARTFIELD1,CHARTFIELD2,CHARTFIELD3,'' '',A.CURRENCY_CD,DATE_CODE,DEPTID,2016,
FUND_CODE,GL_ADJUST_TYPE,''X_UKCORE'',OPERATING_UNIT,SUM(A.POSTED_BASE_AMT),SUM(POSTED_TOTAL_AMT),0,0,0,0,0,
0001234567,PRODUCT,PROGRAM_CODE,PROJECT_ID,'' '',R.RATE_DIV,R.RATE_MULT,'' '',
ROUND(SUM(((((A.POSTED_TOTAL_AMT) / (RATE_DIV))) * (RATE_MULT))),02
),ROUND(SUM(((((A.POSTED_TOTAL_AMT) / (RATE_DIV))) * (RATE_MULT))),02 )
- SUM(A.POSTED_BASE_AMT),''GBP'' FROM PS_LEDGER A , '||i.ts_table_name||' B
, '||i.rt_table_name||' R WHERE A.LEDGER=''X_UKCORE'' AND A.FISCAL_YEAR =
2016 AND A.ACCOUNTING_PERIOD IN ( 0, '||i.periods||')
AND B .PROCESS_INSTANCE=0001234567 AND B .CHARTFIELD=''ACCOUNT'' AND
A.ACCOUNT>=B .RANGE_FROM_10 AND A.ACCOUNT<=B .RANGE_TO_10 AND
A.BUSINESS_UNIT=''12345'' AND A.CURRENCY_CD <> ''GBP''
AND FROM_CUR = A.CURRENCY_CD
AND TO_CUR = ''GBP''
AND R.PROCESS_INSTANCE = 0001234567
GROUP BY A.ACCOUNT,AFFILIATE,A.BOOK_CODE,BUDGET_REF,CHARTFIELD1,CHARTFIELD2,CHARTFIELD3,A.CURRENCY_CD,DATE_CODE,
DEPTID,FUND_CODE,GL_ADJUST_TYPE,OPERATING_UNIT,PRODUCT,PROGRAM_CODE,PROJECT_ID,RATE_DIV,RATE_MULT';

    h := SYS.SQLPROF_ATTR(
      q'[BEGIN_OUTLINE_DATA]',
      q'[IGNORE_OPTIM_EMBEDDED_HINTS]',
      q'[ALL_ROWS]',
      q'[LEADING(@"SEL$1" "B"@"SEL$1" "A"@"SEL$1" "R"@"SEL$1")]',
      q'[USE_NL(@"SEL$1" "A"@"SEL$1")]',
      q'[SWAP_JOIN_INPUTS(@"SEL$1" "R"@"SEL$1")]',
      q'[END_OUTLINE_DATA]');

    :signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(sql_txt);
    dbms_output.put_line(i.id||':'||:signature);

    BEGIN
      DBMS_SQLTUNE.drop_sql_profile (name => 'FSPCCURR_'||i.id);
    EXCEPTION
      WHEN e_no_sql_profile THEN NULL;
    END;

    IF 1=1 THEN
      DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
        sql_text    => sql_txt,
        profile     => h,
        name        => 'FSPCCURR_'||i.id,
        description => 'coe FSPCCURR '||i.id||' @ '||:signature||'',
        category    => 'DEFAULT',
        validate    => TRUE,
        replace     => TRUE,
        force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );
    END IF;

  END LOOP;
END;
/

column name format a30
select name, status, sql_text
from   dba_sql_profiles
where  name like '%FSPCCURR%'
order  by 1
/
SPO OFF;

    When I implemented this particular example, it created 900 SQL profiles.  Each one has a different force matching signature.  Having this many SQL Profiles is not a problem for the database because they are looked up by the signature. 

    TS1_RT4_TBL4_12:1278395827722096644
    TS2_RT4_TBL4_12:4126874017730826984

    PL/SQL procedure successfully completed.

However, managing that many profiles could become a cause for concern for the DBA.  Therefore, I think it is important that they have a sensible naming convention so that it is obvious to what they relate.
NAME                           STATUS   SQL_TEXT
------------------------------ -------- --------------------------------------------------------------------------------
FSPCCURR_TS1_RT4_TBL4_12       ENABLED  INSERT INTO PS_CURR_WRK_TBL004 (ACCOUNT,ACCOUNTING_PERIOD,AFFILIATE,AFFILIATE_IN
FSPCCURR_TS2_RT4_TBL4_12       ENABLED  INSERT INTO PS_CURR_WRK_TBL004 (ACCOUNT,ACCOUNTING_PERIOD,AFFILIATE,AFFILIATE_IN

900 rows selected.

    PeopleSoft Financials Ledger Partitioning Recommendations

    Tue, 2020-11-24 07:54

    I have written previously about partitioning the PS_LEDGER table in Financials for the benefit of General Ledger (GL) reporting.  I have always recommended top-level range partitioning on the combination of FISCAL_YEAR and ACCOUNTING_PERIOD with separate partitions for each accounting period.  It also leaves sub-partitioning available to use another column, usually LEDGER.

    However, recent research into partition change tracking (PCT) and materialized views has made me question that advice.  The decision is not as clear cut.

    Summary Ledgers or Materialized Views

    You can create summary ledger tables in PeopleSoft Financials that are incrementally maintained by batch processes, and then nVision reports can reference the summary ledgers instead.  If the summary ledgers are not up to date, then the reports will produce out of date results.

    Similarly, materialized views can be used in an Oracle database to create pre-generated reporting tables.  An application can reference a materialized view directly, or the database can dynamically rewrite submitted SQL queries to use the materialized view if they are sufficiently similar to the query that created the materialized view and if the optimizer judges that it is cheaper to query the materialized view.  By default, the database will check that the materialized view is up to date, that is no change has been made to the underlying tables since the last refresh commenced, before it can rewrite the SQL query.  So the query will always return the same data, but if the materialized view is out of date you don't get the performance improvement.

You can optionally choose to configure the database to rewrite SQL queries to use stale materialized views by setting QUERY_REWRITE_INTEGRITY=stale_tolerated at either database or session level.
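For example:

ALTER SESSION SET query_rewrite_integrity = stale_tolerated;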

    Materialized views can be created for the nVision queries that you wish to optimise, and no further code change is required because the database will rewrite the SQL.  You can see a typical example of this in my blog about PCT.

    Partition Change Tracking is a mechanism the Oracle database uses to 'track freshness to a finer grain than the entire materialized view'.  It can identify which partitions and subpartitions are fresh and can be used for query rewrite, and to refresh just the partitions that are stale or that contain stale sub-partitions.  
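The freshness that PCT tracks for each detail partition can be inspected in the data dictionary.  This is a sketch, assuming the materialized view MV_LEDGER_2020 created later in this post:

SELECT detailobj_name, detail_partition_name, freshness
FROM   dba_mview_detail_partition
WHERE  mview_name = 'MV_LEDGER_2020';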

    Alternatives for Partitioning PS_LEDGER

    If you wish to create materialized views on the main ledger table, and rely upon query rewrite, and keep the materialized views up to date with respect to the ledger table, and only use them when they are up to date, then you probably want PCT to help with both rewrite and refresh.

    1. Multi-column composite partitioning 

    I usually like to range partition PS_LEDGER on the combination of FISCAL_YEAR and ACCOUNTING_PERIOD to produce a separate range partition for each accounting period.   Partition pruning works very effectively with the SQL generated by nVision.  It also allows sub-partitioning on another column, usually LEDGER.  This is desirable when a system has multiple actuals ledgers, and especially since 9.2 where the budget data is also stored in PS_LEDGER rather than PS_LEDGER_BUDG.  

However, amongst the documented restrictions of PCT is that partitioned tables must use either range, list, or composite partitioning with range or list as the top-level partitioning strategy.  Also, the top-level partition key must consist of only a single column (see Database Data Warehousing Guide -> Advanced Materialized Views -> About Partition Change Tracking).

If I want to use query rewrite to materialized views for queries on the ledger table, then I have a few choices.

    • If I stick with multi-column range partitioning, then I cannot use PCT.  I must either keep the materialized views fresh, or the queries remain on the ledger table.  Any update to any partition in the ledger table will render the entire materialized view stale and prevent query rewrite.  Many customers run a suite of nVision reportbooks overnight.  I could set QUERY_REWRITE_INTEGRITY=stale_tolerated at session-level for the report books processes using a trigger on the process scheduler request table (PSPRCSRQST) - see Setting Oracle Session Parameters for Specific Process Scheduler Processes.  In this case, I would have to take responsibility for refreshing the materialized views prior to running, say, a suite of report processes.  This is effectively the same situation as using summary ledgers, but without code change to the reports.
      • I have created materialized views on summary ledger tables in order to provide compressed copies of the summary ledger.  Again, in this case, the materialized views had to be refreshed after the summary ledger maintenance process.
    • Or, I have to produce a simpler partitioning strategy for the ledger table that is still compatible with PCT.

    2. Composite Range-Range Partitioning on FISCAL_YEAR and ACCOUNTING_PERIOD

    I could composite partition both the LEDGER table and the materialized views by FISCAL_YEAR and ACCOUNTING_PERIOD, but then I cannot further subpartition by other columns.  This would degrade queries on smaller ledgers that could not be rewritten to dedicated materialized views.

CREATE TABLE ps_ledger
(business_unit VARCHAR2(5) NOT NULL
…
) PCTFREE 10 PCTUSED 80
    PARTITION BY RANGE (FISCAL_YEAR)
    SUBPARTITION BY RANGE (ACCOUNTING_PERIOD)
    SUBPARTITION TEMPLATE
    (SUBPARTITION ap_bf VALUES LESS THAN (1)
    ,SUBPARTITION ap_01 VALUES LESS THAN (2)
    ,SUBPARTITION ap_02 VALUES LESS THAN (3)
    ,SUBPARTITION ap_03 VALUES LESS THAN (4)
    ,SUBPARTITION ap_04 VALUES LESS THAN (5)
    ,SUBPARTITION ap_05 VALUES LESS THAN (6)
    ,SUBPARTITION ap_06 VALUES LESS THAN (7)
    ,SUBPARTITION ap_07 VALUES LESS THAN (8)
    ,SUBPARTITION ap_08 VALUES LESS THAN (9)
    ,SUBPARTITION ap_09 VALUES LESS THAN (10)
    ,SUBPARTITION ap_10 VALUES LESS THAN (11)
    ,SUBPARTITION ap_11 VALUES LESS THAN (12)
    ,SUBPARTITION ap_12 VALUES LESS THAN (13)
    ,SUBPARTITION ap_cf VALUES LESS THAN (MAXVALUE))
    (PARTITION ledger_2018 VALUES LESS THAN (2019) PCTFREE 0 COMPRESS
    ,PARTITION ledger_2019 VALUES LESS THAN (2020) PCTFREE 0 COMPRESS
    ,PARTITION ledger_2020 VALUES LESS THAN (2021)
    ,PARTITION ledger_2021 VALUES LESS THAN (2022)
    )
    ENABLE ROW MOVEMENT NOLOGGING
    /
    The materialized view will be similarly composite partitioned.  Note that I have created the materialized view for specific fiscal years and for a specific ledger.  I would create materialized views for each combination of ledger and each distinct set of analysis columns that are regularly reported upon.
    CREATE MATERIALIZED VIEW mv_ledger_2020
    PARTITION BY RANGE (FISCAL_YEAR)
    SUBPARTITION BY RANGE (ACCOUNTING_PERIOD)
    SUBPARTITION TEMPLATE
    (SUBPARTITION ap_bf VALUES LESS THAN (1)
    ,SUBPARTITION ap_01 VALUES LESS THAN (2)
    ,SUBPARTITION ap_02 VALUES LESS THAN (3)
    ,SUBPARTITION ap_03 VALUES LESS THAN (4)
    ,SUBPARTITION ap_04 VALUES LESS THAN (5)
    ,SUBPARTITION ap_05 VALUES LESS THAN (6)
    ,SUBPARTITION ap_06 VALUES LESS THAN (7)
    ,SUBPARTITION ap_07 VALUES LESS THAN (8)
    ,SUBPARTITION ap_08 VALUES LESS THAN (9)
    ,SUBPARTITION ap_09 VALUES LESS THAN (10)
    ,SUBPARTITION ap_10 VALUES LESS THAN (11)
    ,SUBPARTITION ap_11 VALUES LESS THAN (12)
    ,SUBPARTITION ap_12 VALUES LESS THAN (13)
    ,SUBPARTITION ap_cf VALUES LESS THAN (MAXVALUE))
    (PARTITION ledger_2019 VALUES LESS THAN (2020)
    ,PARTITION ledger_2020 VALUES LESS THAN (2021)
    ) PCTFREE 0 COMPRESS PARALLEL
    REFRESH COMPLETE ON DEMAND
    ENABLE QUERY REWRITE AS
    SELECT business_unit, account, chartfield1, fiscal_year, accounting_period,
    sum(posted_total_amt) posted_total_amt
    FROM ps_ledger
    WHERE fiscal_year >= 2019
    AND ledger = 'ACTUALS'
    AND currency_cd = 'GBP'
    GROUP BY business_unit, account, chartfield1, fiscal_year, accounting_period
    /

Note that I have equality criteria on LEDGER and CURRENCY_CD in the materialized view, but I have not included those columns in the select clause, so they are not in the view.  Oracle can still rewrite queries that specify the same criteria on PS_LEDGER to use this materialized view.

PCT will determine whether any unpruned partitions or subpartitions are stale and, if so, prevent query rewrite.  It is documented behaviour that the materialized view refresh will truncate and rebuild the whole top-level partition, in this case each fiscal year, where the partition or any constituent sub-partitions are stale.  So even if just one subpartition, for one accounting period, is stale, the whole fiscal year is refreshed.
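A PCT refresh can be requested explicitly with refresh method 'P'.  A minimal sketch, assuming the materialized view above and that PCT refresh is possible for it:

BEGIN
  DBMS_MVIEW.REFRESH(list => 'MV_LEDGER_2020', method => 'P');
END;
/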

    3. Composite Range-Range Partitioning on ACCOUNTING_PERIOD and FISCAL_YEAR

    I investigated making ACCOUNTING_PERIOD the top-level partitioning key, and sub-partitioning on FISCAL_YEAR.  

• Partition pruning works correctly, so query performance will be unaffected.
    • PCT refresh processed all years for the single accounting period, rather than all accounting periods for the single year.  That is less work if you have fewer fiscal years than accounting periods.  Generally, this is the case.  I usually see systems that contain 3 to 6 fiscal years of data.

    However, it has a number of problems.

    • I can't specify storage options or compression attributes on sub-partitions in the create table DDL command, so I have to come along afterwards with PL/SQL to alter the sub-partitions.
    • It is not possible to interval range sub-partition an object, so I can't automatically add partitions for future fiscal years on demand.  Instead, I am going to have to add new fiscal year subpartitions to each of the 14 range partitions.

    On balance, I don't think I would choose to implement this.

    Conclusion

    There is no single clear recommendation for partitioning and PCT.  It will depend on the circumstances.

    • If I don't need to introduce materialized views on PS_LEDGER then I would usually stick with the multi-column composite partitioning with the top-level range partition on FISCAL_YEAR and ACCOUNTING_PERIOD.
      • PeopleSoft provides summary ledgers to improve the performance of the ledger queries and compressed materialized views can be built on these.
    • If you only have a single actuals ledger then composite range-range partition on FISCAL_YEAR and ACCOUNTING_PERIOD is attractive.  
      • I do not recommend interval partitioning on FISCAL_YEAR because this affects the scope of the refresh process.  It processes a second top-level range partition.
• If the budget data is stored in the PS_LEDGER table rather than PS_LEDGER_BUDG, then consider building separate materialized views for each value of LEDGER.
  • If you can manage to build materialized views to support rewrite of most of the queries on the smaller ledgers, then the lack of sub-partitioning by LEDGER is not going to be a problem unless the query doesn't get rewritten because the materialized views are stale.  Keeping the materialized views up to date is going to be a bigger challenge.

    Practical Application Performance Tuning: An nVision Case Study

    Mon, 2019-12-02 16:41
    I gave this presentation at the UKOUG Techfest 19 conference.  It is closely based on a previous presentation about PeopleSoft nVision performance tuning, and uses the experience of a PeopleSoft project as a case study, so I am also posting here on my PeopleSoft blog.
    This video was produced as a part of the preparation for this session.  The slide deck is also available on my website.

Learning about and understanding the principles and mechanics of the Oracle database is fundamentally important for both DBAs and developers.  It is one of the reasons we still attend physical conferences.
This presentation tells the story of a performance tuning project for the GL reporting on a Financials system running on an engineered system.  It required various techniques and features to be brought to bear.  Having a theoretical understanding of how the database and its various features work allowed us to make reasonable predictions about whether they would be effective in our environment.  Some ideas were discounted; some were taken forward.
We will look at instrumentation, ASH, statistics collection, partitioning, hybrid columnar compression, Bloom filtering, and SQL profiles.  All of them played a part in the solution, some added further complications that had to be worked around, some had to be carefully integrated with the application, and some required some reconfiguration of the application in order to work properly.
    Ultimately, performance improvement is an experimental science, and it requires a similar rigorous thought process.

    nVision Bug in PeopleTools 8.55/8.56 Impacts Performance

    Tue, 2019-11-12 13:12
    I have recently come across an interesting bug in nVision that has a significant performance impact on nVision reports in particular and can impact the database as a whole.
Problem nVision SQL
This is an example of the problematic SQL generated by nVision.  The problem is that all of the SQL looks like this: there is never any GROUP BY clause, nor any grouping columns in the select clause in front of the SUM().
    SELECT SUM(A.POSTED_BASE_AMT) 
    FROM PS_LEDGER A, PSTREESELECT10 L2, PSTREESELECT10 L1
    WHERE A.LEDGER='ACTUAL' AND A.FISCAL_YEAR=2018 AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 8
    AND L2.SELECTOR_NUM=159077 AND A.ACCOUNT=L2.RANGE_FROM_10
    AND (A.BUSINESS_UNIT='10000')
    AND L1.SELECTOR_NUM=159075 AND A.DEPTID=L1.RANGE_FROM_10
    AND A.CURRENCY_CD='GBP' AND A.STATISTICS_CODE=' '
Each query only returns a single row that populates a single cell of the report, so a different SQL statement is generated and executed for every cell in the report.  Therefore, more statements are parsed and executed, and more scans of the ledger indexes and look-ups of the ledger table are performed.  This consumes more CPU and more logical I/O.
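A rough way to gauge how prevalent these single-cell statements are is to count them in the AWR SQL text repository.  This is only a sketch: the LIKE patterns are illustrative and will need adjusting for your system, and note that querying AWR requires the Diagnostics Pack.

SELECT COUNT(DISTINCT t.sql_id) single_cell_statements
FROM   dba_hist_sqltext t
WHERE  t.sql_text LIKE 'SELECT SUM(%'
AND    t.sql_text LIKE '%PS_LEDGER%'
AND    t.sql_text NOT LIKE '%GROUP BY%';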
Normal nVision SQL
This is how I would expect normal nVision SQL to look.  This example, although obfuscated, came from a real customer system.  Note how the query is grouped by TREE_NODE_NUM from two of the tree selector tables, so this one query populates a whole block of cells.
    SELECT L2.TREE_NODE_NUM,L3.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
    FROM PS_LEDGER A, PSTREESELECT05 L2, PSTREESELECT10 L3
    WHERE A.LEDGER='S_UKMGT'
    AND A.FISCAL_YEAR=2018
    AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12
    AND (A.DEPTID BETWEEN 'A0000' AND 'A8999' OR A.DEPTID BETWEEN 'B0000' AND 'B9149'
    OR A.DEPTID='B9156' OR A.DEPTID='B9158' OR A.DEPTID BETWEEN 'B9165' AND 'B9999'
    OR A.DEPTID BETWEEN 'C0000' AND 'C9999' OR A.DEPTID BETWEEN 'D0000' AND 'D9999'
    OR A.DEPTID BETWEEN 'G0000' AND 'G9999' OR A.DEPTID BETWEEN 'H0000' AND 'H9999'
    OR A.DEPTID='B9150' OR A.DEPTID=' ')
    AND L2.SELECTOR_NUM=10228
    AND A.BUSINESS_UNIT=L2.RANGE_FROM_05
    AND L3.SELECTOR_NUM=10231
    AND A.ACCOUNT=L3.RANGE_FROM_10
    AND A.CHARTFIELD1='0012345'
    AND A.CURRENCY_CD='GBP'
    GROUP BY L2.TREE_NODE_NUM,L3.TREE_NODE_NUM
The Bug
An Oracle support note details an nVision bug:
    "UPTO SET2A-C Fixes - Details-only nPlosion not happening for Single Chart-field nPlosion Criteria.
    And also encountered a performance issue when enabled details-only nPlosion for most of the row criteria in the same layout
    Issue was introduced on build 8.55.19.
    Condition: When most of the row filter criteria enabled Details-only nPlosion. This is solved in 8.55.22 & 8.56.07.
    UPTO SET3 Fixes - Performance issue due to the SET2A-C fixes has solved but encountered new one. Performance issue when first chart-field is same for most of the row criteria in the same layout.
    Issue was introduced on builds 8.55.22 & 8.56.07.
    Condition: When most of the filter criteria’s first chart-field is same. The issue is solved in 8.55.25 & 8.56.10."
    In summary
    • Bug introduced in PeopleTools 8.55.19, fully resolved in 8.55.25.
    • Bug introduced in PeopleTools 8.56.07, fully resolved in 8.56.10.

Which Version Number Corresponds to Which PeopleTools Object?

    Tue, 2019-10-15 10:15
    Recently somebody asked me "Our “CRM” values in PSLOCK and PSVERSION are growing tremendously and we don’t know why. We will be opening a case with Oracle but … do you know what that “CRM” value is related to? We are not using the CRM product in our PeopleSoft installation."
    There is some documentation on the Oracle Support website, but it is not exhaustive and may not be completely up to date.
The page in the PTRef utility that describes the relationship of version numbers to PeopleTools tables is one of the few static pages in the tool.  I have now updated it with the information in those Oracle support notes, but there are other version numbers.
    In the previous blog post, I showed how to increment version numbers before updating PeopleTools objects.  I knew RDM (the Record Definition Manager) is the OBJECTTYPENAME for PSRECDEFN because I worked that out by tracing Application Designer while it saved a record change.  That remains the only completely reliable way to determine the relationship.
    However, version number matching is also a useful technique, though when it does not provide a completely definitive answer, it dramatically cuts down the amount of work then necessary.
    I have written a little PL/SQL code, that is delivered with PTRef, that extracts the maximum version number for each PeopleTools table in PeopleSoft (other than the PS%DEL) tables and stores it on a working storage table (I used the PLAN_TABLE because it is always there on an Oracle database).  Then you can compare the version number on PSLOCK/PSVERSION with the maximum version on the PeopleTools object.
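For example, the comparison can be hand-rolled for a few candidate tables; this sketch reproduces the RECNAME/MAX/CNT output below for three of the PeopleTools tables mentioned in this post:

SELECT 'PSAEAPPLDEFN' recname, MAX(version) max_version, COUNT(*) cnt FROM psaeappldefn
UNION ALL
SELECT 'PSMSGNODEDEFN', MAX(version), COUNT(*) FROM psmsgnodedefn
UNION ALL
SELECT 'PSOPRVERDFN', MAX(version), COUNT(*) FROM psoprverdfn;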
    If the version number is 1, you can’t do the match because the version number has never been incremented, at least since it was last reset by the VERSION application engine.
    If the version is only updated occasionally you may have some trouble with duplicate matches. In my example, 3 tables have a maximum version number of 80, while only one version number is 80.
RECNAME                   MAX        CNT
------------------ ---------- ----------
PSAEAPPLDEFN               80       3504
PSMSGNODEDEFN              80         78
PSOPRVERDFN                80       1468
    I would guess that OPRVM matches PSOPRVERDFN, and the above support notes confirm this, but otherwise, you would have to check that manually with trace.
OBJECTTY    VERSION
-------- ----------
OPRVM            80
    Higher version numbers are easier to match because they are less likely to have duplicate matches.
    So to return to the original question, what is CRM?  In my sample system, version numbers CRM and SCATM are both 3.
OBJECTTY    VERSION
-------- ----------
CRM               3
SCATM             3
    However, only PSCONTDEFN has a maximum version of 3.
RECNAME                   MAX        CNT
------------------ ---------- ----------
PSCONTDEFN                  3       7567
Again, not an absolutely definitive match, but Oracle Support Note 664848.1 confirms that CRM corresponds to PSCONTDEFN.  CRM stands for Content Registry Manager.
    So the question now becomes what is updating the content definitions, and hence increasing the version number?  It turned out to be an entity registry synchronisation process that was run daily.
It is perfectly legitimate for many updated rows on a PeopleTools table to be given the same version number.  The version number does not need to be incremented again for each row being updated, but then the row-level locks on PSVERSION and PSLOCK created by their updates must be held until the update on the PeopleTools table is committed.  That can increase contention on the version number update.  The alternative is to commit after each update and then increment the version numbers again.  Many PeopleSoft processes do exactly that, and it can, in turn, lead to a massive increase in some version numbers.

    Understanding PeopleTools Object Version Numbers

    Mon, 2019-10-14 13:36
    I was recently asked a question about PeopleSoft version numbers, but before I address that directly, I think it would be useful to explain what is their general purpose.
Caching
The PeopleSoft data model and application are mostly stored in the database in PeopleTools tables.  These tables are queried as the application executes.  For example, when you open a component, the component and pages, including all the PeopleCode, the definitions of any records used, and so on, have to be loaded into the component buffer.  Ultimately this information comes from the PeopleTools tables.  To save the overhead of repeatedly querying these tables, PeopleSoft caches this data locally in physical files in the application server and process scheduler domains.  The application servers also cache some of this information in memory to save visiting the local physical cache.  Application Designer also maintains a physical cache.
Over time, as the application executes, the cache files build up.  Occasionally, it is necessary to delete the cache files, and then it becomes clear just how significant the overhead of the PeopleTools queries is, as a period of poor performance is seen while the application builds up fresh cache files.
    Physical cache files are created in directories in the application server and process scheduler Tuxedo domains.  By default, each process maintains its own private cache.  Separate directories of cache files are created for each type of PeopleSoft server process in each domain.    Pairs of cache files are created in each directory for each object type as needed.  There is also a CACHE.LOK file in each directory that is used to ensure that only one process is accessing that cache directory concurrently.
It is possible to run with a shared physical cache, but then it is read-only and must be pre-generated.  It is very rare to see this implemented, because everyone expects to continuously deliver changes over time, and if you had a shared cache you would have to deliver an updated set of shared cache files to every domain every time you delivered a new PeopleTools object.
    The cache files come in pairs.  The name of the cache files is the Object Type Name.  This corresponds to the OBJECTTYPENAME on the PSLOCK and PSVERSION tables.  The .DAT file contains the data to be cached.  The .KEY file is an index for the .DAT file, and it also holds the version number of the cached object.

    -rw------- 1 psadm2 oracle 5228492 Jun 12 06:37 RDM.DAT
    -rw------- 1 psadm2 oracle 69120 Jun 12 06:37 RDM.KEY
    -rw------- 1 psadm2 oracle 0 Oct 26 2015 ROLM.DAT
    -rw------- 1 psadm2 oracle 24192 Oct 26 2015 ROLM.KEY
    -rw------- 1 psadm2 oracle 0 Oct 26 2015 RSM.DAT
    -rw------- 1 psadm2 oracle 24192 Oct 26 2015 RSM.KEY
    Version Numbers
    Version numbers track when a cached PeopleTools object has been changed, either by Application Designer, or a change in configuration, or the application.  The version numbers are sequences generated from two PeopleTools tables PSLOCK and PSVERSION that hold the highest version number for each type of object.  These two tables have the same structure.
SQL> desc psversion
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OBJECTTYPENAME                            NOT NULL VARCHAR2(8 CHAR)
 VERSION                                   NOT NULL NUMBER(38)
    There are now over 100 different version numbers, each with a specific object type name that each track a specific PeopleTools object.  There is a global version number, with the object type name of SYS, that is incremented whenever any other version number is incremented.
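The current version numbers can be queried directly; for example:

SELECT objecttypename, version
FROM   psversion
WHERE  objecttypename IN('SYS','RDM')
ORDER BY objecttypename;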
    I have no idea why two identical tables of version numbers were created.  I can see no reason for it, but it has been like this since the version numbers were changed (if I remember correctly) in PeopleTools 7.  In early versions of PeopleTools, not all version numbers were on both tables, but in at least PeopleTools 8.55 only one object type appears on PSVERSION and not on PSLOCK.
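    A quick way to check this for yourself is to compare the two tables.  This is a sketch of my own, not part of any delivered utility; it simply reports object types that are missing from one table or whose version numbers have diverged.

    SELECT COALESCE(v.objecttypename, l.objecttypename) objecttypename
    ,      v.version psversion_version
    ,      l.version pslock_version
    FROM   psversion v
    FULL OUTER JOIN pslock l
    ON     l.objecttypename = v.objecttypename
    WHERE  v.objecttypename IS NULL  /* on PSLOCK only */
    OR     l.objecttypename IS NULL  /* on PSVERSION only */
    OR     v.version != l.version;   /* version numbers have diverged */

    On a healthy system, this query should return at most the odd object type that exists on only one of the two tables.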
    When an object is changed, the object and global version numbers are incremented, and the incremented object version number is recorded on the object in the PeopleTools table.  The version number on the object is also stored in the physical cache files when the object is cached.  If the version on the database is higher than that in the cache file, then the PeopleSoft process knows it must query the latest version from the PeopleTools table and update the cache file.
    How to Update Version Numbers
    It is not generally recommended, nor strictly speaking supported, to update PeopleTools tables directly with SQL.  Apart from the risk of updating them incorrectly, or to invalid values, you also need to ensure that the changes are picked up by PeopleTools processes and that they do not simply continue to read the cached data.  However, occasionally, it is the pragmatic way to do something.
    Here is an example from Chapter 5 of PeopleSoft for the Oracle DBA that shows how to maintain version numbers so that the change is picked up by PeopleTools processes.  I want to mark alternate search key indexes as unique where there is a unique key on the record: because the unique key is a subset of their columns, they are guaranteed to be unique.  Application Designer will then build the indexes as unique.
    UPDATE psversion SET version = version + 1
    WHERE objecttypename IN('SYS','RDM');

    UPDATE pslock SET version = version + 1
    WHERE objecttypename IN('SYS','RDM');

    UPDATE psrecdefn
    SET version = (
    SELECT version FROM psversion WHERE objecttypename = 'RDM')
    WHERE recname = '';

    UPDATE psindexdefn a
    SET a.uniqueflag = 1
    WHERE a.uniqueflag = 0
    AND a.indextype = 3
    AND EXISTS(
    SELECT 'x'
    FROM psindexdefn k
    WHERE k.recname = a.recname
    AND k.indexid = '_'
    AND k.indextype = 1
    AND k.uniqueflag = 1)
    AND a.recname = '';
    I am updating a PeopleTools object (PSINDEXDEFN) that doesn't have a version number, but its parent, PSRECDEFN, does have one.  I happen to know that object type RDM (the Record Definition Manager) generates the version number for PSRECDEFN.  I found that out by tracing Application Designer while it saved a record change.  That is the only completely reliable method of determining which sequence is used for which record.  However, I will discuss another less onerous matching method in a subsequent blog post.
    I must increment the RDM and SYS version numbers and write the new RDM version number to the updated rows on PSRECDEFN.  Next time a PeopleSoft process needs to read a record definition, it will check the version numbers.  The increment of the SYS version number tells PeopleSoft that an object version number has changed; it will then detect that the RDM version number has changed, and so it will reload and cache objects with version numbers greater than the last cached version number for that object type.

    PeopleTools Table Reference Generator

    Sun, 2019-06-09 15:46
    Like many other PeopleSoft professionals, I spend a lot of time looking at the PeopleTools tables because they contain meta-data about the PeopleSoft application. Much of the application is stored in PeopleTools tables. Some provide information about the Data Model. Many of my utility scripts reference the PeopleTools tables, some of them update them too. Therefore, it is very helpful to be able to understand what is in these tables. In PeopleSoft for the Oracle DBA, I discussed some tables that are of regular interest. I included the tables that correspond to the database catalogue and that are used during the PeopleSoft login procedure. The tables that are maintained by the process scheduler are valuable because they contain information about who ran what process when, and how long they ran for.
    I am not the only person to have started to document the PeopleTools tables on their website or blog; most people have picked a few tables that are of particular interest.  I wanted to produce a complete reference.  However, with over 4000 PeopleTools tables and views, it is simply not viable to document even a significant number of them manually.  So, I wrote some SQL and PL/SQL to dynamically generate a page for each PeopleTools table and view and put the generated pages on my website.  If you use Google to search for a PeopleTools table name you will probably find the reference page.
    I have now revisited that code and made a number of changes and improvements.
    • I have used a later version of PeopleTools to generate the reference on my website.  The list of PeopleTools tables is no longer defined in PeopleTools by object security, so I have used an independent and somewhat broader definition: Table or View records that are either owned by PPT or whose SQLTABLENAME is the same as the record name.
    • There is much less reliance on static pages.  There are now only 3 such pages; everything else is generated.  Instead, additional data is loaded from static scripts into the PLAN_TABLE, which should always be present in an Oracle database and so doesn't have to be created.  It should be a global temporary table, so there is no problem with debris being left behind or interference with other processes (see the sketch after this list).  That data is then combined with data in the PeopleTools tables during the generation process.
    • The utility has been rewritten as a PL/SQL package that should be created in the PeopleSoft Owner ID schema (usually SYSADM).
    • The generated HTML is simply tidier, and more consistent.
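    For example, a static script might stage an additional description for a record along these lines.  This is a sketch of the idea only; the actual PTRef scripts may use different PLAN_TABLE columns.

    INSERT INTO plan_table (statement_id, object_name, remarks)
    VALUES ('PTREF_DESC', 'PSRECDEFN', 'Record definition - one row per record');

    Because PLAN_TABLE is a global temporary table, the staged rows disappear at the end of the session, leaving nothing to clean up.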
    The source is available on GitHub at https://github.com/davidkurtz/PTRef, so you can download and generate your own reference on your own current version of PeopleTools. An example of the generated output can be found on my website.
    The idea of the original PeopleTools tables reference was that people could contribute additional descriptions and information that were not in the PeopleTools tables.  That can still happen, and indeed should be easier, by making changes to the scripts that load the additional data and uploading new versions to GitHub.


    PS360 enhancement: Added report of DDL models

    Fri, 2019-06-07 09:08
    I have written several blogs and presentations recently about how and how not to collect statistics in PeopleSoft.
    • Managing Cost-Based Optimizer Statistics for PeopleSoft recommends
      • If you are going to continue to use DBMS_STATS in the DDL model then
        • Do not specify ESTIMATE_PERCENT because it disables the hash-based number-of-distinct-values calculation, forcing it to go back to the COUNT(DISTINCT ...) method that requires a sort, and may not produce accurate values because it only samples data.
        • Do not specify METHOD_OPT='FOR ALL INDEXED COLUMNS SIZE 1' because this will not collect histograms on indexed columns, and will not update column statistics on unindexed columns.
        • Do specify FORCE=>TRUE so that you can lock and delete statistics on temporary records.
      • However, the alternative is to use the GFCPSSTATS11 package to collect these statistics.  This package is controlled by a metadata table, so you can define statistics collection behaviour for specific records.
    • How Not to Collect Optimizer Statistics in an Application Engine Program
      • This blog explains why you should not explicitly code DBMS_STATS calls into Application Engine programs.
    This has prompted me to add a new report to the PS360 utility that simply reports the various DDL models for Oracle.  Thus during a health check, I can see how statistics are collected during batch processes.

    PS360 can be downloaded from https://github.com/davidkurtz/ps360.




    PeopleSoft Administrator Podcast: #184 – nVision Performance

    Wed, 2019-05-29 06:42
    I recorded a second podcast with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast, this time about nVision.
    (10 May 2019) #184 – nVision Performance You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

    PeopleSoft Administrator Podcast: #183 – Effective Performance Monitoring

    Wed, 2019-05-08 14:09
    I recently recorded a podcast with Dan Iverson and Kyle Benson for the PeopleSoft Administrator Podcast, this time about instrumentation, monitoring the performance of PeopleSoft systems, and Performance Monitor.  There is also just a little about cursor sharing.
    (3 May 2019) #183 – Effective Performance Monitoring You can listen to the podcast on psadmin.io, or subscribe with your favourite podcast player, or in iTunes.

    Effective PeopleSoft Performance Monitoring

    Fri, 2019-03-08 12:00
    This advice note describes how to configure PeopleSoft systems on Oracle so that performance metrics are collected that are useful for performance monitoring and subsequent performance tuning.
    Contents
    • Oracle RDBMS Instrumentation
      • On-Line Component/Page Information
      • Application Engine Step Information
      • Cobol/nVision Instrumentation
      • nVision Layout Instrumentation
      • 2nd Database Connection Instrumentation
    • PeopleTools Performance Metrics
      • Cobol and Application Engine Batch Timings
      • PeopleSoft Performance Monitor
    Summary of Recommendations
    • Set EnableAEMonitoring=1 in all Application Server and process scheduler domains in order to enable PeopleSoft instrumentation on-line, in the integration broker, and in Application Engine programs.
    • Implement the PSFTAPI package and trigger described below to set module and action at the start of all processes.
    • Implement Fine-Grained Audit policy and handler to instrument nVision processes if required.
    • Reduce AWR snapshot frequency from the default of 1 hour to every 15 minutes.
    • Enable Application Engine batch timings to at least database and preferably also file by setting AETrace=1152 in all Application Server and Process Scheduler domain configuration files.
    • Enable the Cobol statement timings report to be written to the log file by setting TraceSQL=128 in all Process Scheduler domain configuration files.
    • Performance Monitor is complex to set up and the delivered analytics are limited.  Nonetheless, the information that can be obtained from the data collected can be worth the effort of configuration to address on-line configuration and performances issues.
    Oracle RDBMS Instrumentation
    Oracle provides the dbms_application_info package to set certain attributes on the current database session.  These attributes are visible in some of the dynamic performance views, are picked up by Active Session History (ASH), and can also be seen in AWR reports, Enterprise Manager screens, SQL trace files, and other performance utilities such as EDB360.  The package was first documented in Oracle 7.3.3.
    "Application developers can use the DBMS_APPLICATION_INFO package to record the name of the executing module or transaction in the database for use later when tracking the performance of various modules… System administrators can also use this information to track resource usage by module. When an application registers with the database, its name and actions are recorded in the V$SESSION and V$SQLAREA views. Your applications should set the name of the module and name of the action automatically each time a user enters that module. The module name could be the name of a … code segment in an … application. The action name should usually be the name or description of the current transaction within a module." Oracle 7 Tuning, release 7.3.3 ©Oracle 1997, Chapter 23 Registering Applications See also One of my Favourite Database Things: DBMS_APPLICATION_INFO
    If module and action are consistently set to meaningful values as the application executes it is then possible to determine from where SQL statements originated and how different parts of the application are performing.
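    For example, this is roughly what the PeopleTools instrumentation does on behalf of an Application Engine program (the module and action values here are illustrative only, taken from the formats discussed later in this note):

    BEGIN
      dbms_application_info.set_module(module_name => 'PSAE.PSPMCSOSUM.1448'
                                      ,action_name => 'PSPMCSOSUM.GETCNT.CNT.P');
    END;
    /

    SELECT sid, module, action, client_info
    FROM   v$session
    WHERE  module LIKE 'PSAE.%';

    The query shows how the attributes then appear in V$SESSION for any instrumented session.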
    ASH is separately licenced as part of the Diagnostics Pack, which is only available on the Enterprise Edition of the database.  However, most PeopleSoft customers running on Oracle do so on Enterprise Edition and are licenced for the Diagnostics Pack.
    PeopleSoft has set client_info since PeopleTools 7.53 so that the PeopleSoft operator ID can be associated with the database session, mainly to allow auditing to be done with database triggers.  This attribute is not persisted to the ASH data, but client_id is also set to the operator ID, and that is collected by ASH.
    PeopleTools has set module and action since PeopleTools 8.50, though the exact values used have changed sometimes with the PeopleTools version.
    A new application server/process scheduler domain parameter, EnableAEMonitoring, was introduced in PeopleTools 8.54 to control this behaviour; it is not enabled by default.  If monitoring is not enabled, module defaults to the program name and action remains blank.
    Recommendation: Set EnableAEMonitoring=1 in all application server and process scheduler domains in order to enable PeopleSoft instrumentation on-line, in the integration broker, and in Application Engine programs.
    See also:
    On-Line Component/Page Information
    In the online application, module and action are set to the component and page name respectively.  In a search dialogue, Action is set to 'xyzzy'.
    In the Integration Broker module and action are set to service name and queue name.
    Application Engine Step Information
    In Application Engine, module is set to a string that includes the name of the Application Engine main program that was called and the sessionid_num for the current process instance, as recorded on the process scheduler request table PSPRCSRQST.  For example: PSAE.PSPMCSOSUM.1448
    Later on, it may be necessary to use regular expressions in SQL to process this string before profiling the ASH data.
    Action is set to string concatenated from the Application Engine program, section, step name and step type.  For example: PSPMCSOSUM.GETCNT.CNT.P
    The program name may be different from that shown in module if one AE program calls another.
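    For example, a profile of ASH data by Application Engine program and section might extract the dot-delimited components with REGEXP_SUBSTR.  This is a sketch only; adjust the predicates to your own module and action formats.

    SELECT REGEXP_SUBSTR(h.module, '[^.]+', 1, 2) ae_program
    ,      REGEXP_SUBSTR(h.action, '[^.]+', 1, 2) ae_section
    ,      SUM(10) ash_secs  -- each AWR ASH sample represents 10 seconds
    FROM   dba_hist_active_sess_history h
    WHERE  h.module LIKE 'PSAE.%'
    GROUP BY REGEXP_SUBSTR(h.module, '[^.]+', 1, 2)
    ,        REGEXP_SUBSTR(h.action, '[^.]+', 1, 2)
    ORDER BY ash_secs DESC;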
    Note: Application Engine still doesn't reset ACTION on DO SELECT steps
    Cobol/nVision Instrumentation
    Cobol, nVision and SQR do not set module and action.  Instead, they can be set at the start of every program initiated by the Process Scheduler using a trigger on PSPRCSRQST.  The first thing a process does when it is initiated is to set the run status on its scheduler request record to 7, indicating that it is processing.  This is visible in the Process Monitor component.  A trigger on this transition can set module and action for the session to the program name and process instance number on that row of data.
    This technique was used prior to PeopleSoft adding instrumentation to PeopleTools.  It can still be applied to all processes, including Application Engine, because any PeopleSoft instrumentation will simply overwrite the value set by the trigger.
    Recommendation: Implement the PSFTAPI package and trigger described above in order to set module and action at the start of all processes.
    The same technique was also used prior to the introduction of ASH to enable Oracle SQL Trace if a particular run control was specified, and is still occasionally useful.
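    The following is a minimal sketch of the technique.  The trigger name is mine, and the real PSFTAPI package referred to above does rather more; this just shows the shape of the idea.

    CREATE OR REPLACE TRIGGER sysadm.set_module_action  -- hypothetical name
    BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
    FOR EACH ROW
    WHEN (new.runstatus = 7 AND old.runstatus != 7 AND new.prcstype != 'PSJob')
    BEGIN
      dbms_application_info.set_module(
        module_name => :new.prcsname               -- the process name
       ,action_name => 'PI='||:new.prcsinstance);  -- the process instance number
    EXCEPTION
      WHEN OTHERS THEN NULL;  -- instrumentation must never break the scheduler
    END;
    /

    The trigger fires in the process's own session because each process updates its own request row, so the attributes are set on the correct session.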
    nVision Layout Instrumentation
    One of the challenges of tuning and monitoring nVision is to be able to identify each report being run.  nVision reports always run under the same process names: NVSRUN for a single report, RPTBOOK for a report book of many reports, or DRILLDWN for an nVision drill-down query.  Knowing the process instance is useful because then we can look up the operator and run control ID.
    However, it would also be useful to know the report ID being run.  When each individual nVision report starts it queries the runtime parameters from the PS_NVS_REPORT PeopleTools table.  There is no update, so it is not possible to capture this with a DML trigger.  Instead, it is possible to define a fine-grained audit policy on the query and set module within a PL/SQL handler package that is invoked by the fine-grained audit.
    Recommendation: Implement a Fine-Grained Audit policy and handler to instrument nVision processes if required.
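    This is a sketch of the shape of such a policy.  The policy name and handler package are hypothetical, and the real handler has to extract the report ID from the query text before setting it on the session.

    BEGIN
      dbms_fga.add_policy(
        object_schema   => 'SYSADM'
       ,object_name     => 'PS_NVS_REPORT'
       ,policy_name     => 'PS_NVS_REPORT_SEL'   -- hypothetical policy name
       ,handler_schema  => 'SYSADM'
       ,handler_module  => 'NVS_FGA.HANDLER'     -- hypothetical handler package.procedure
       ,statement_types => 'SELECT');
    END;
    /

    The handler procedure must have the standard FGA signature (object schema, object name, policy name).  Inside it, the query text, including the report ID literal, is available from SYS_CONTEXT('USERENV','CURRENT_SQL'), and dbms_application_info.set_action() can then record the report ID on the session.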
    2nd Database Connection Instrumentation
    PeopleSoft programs use a second database connection to increment sequence numbers, in order to minimise row-level locking on such tables.  It is like an AUTONOMOUS_TRANSACTION in PL/SQL.  There is no PeopleSoft instrumentation on this session, but it is possible to use an AFTER LOGON trigger to set client_info, module and action.
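    A minimal sketch follows.  The trigger name is mine, and note that a schema-level logon trigger fires for every connection; anything it sets will simply be overwritten later in instrumented sessions, so the values persist only in otherwise uninstrumented sessions such as this second connection.

    CREATE OR REPLACE TRIGGER sysadm.set_conn_info  -- hypothetical name
    AFTER LOGON ON sysadm.schema
    BEGIN
      dbms_application_info.set_client_info(
        sys_context('userenv','os_user')||'@'||sys_context('userenv','host'));
      dbms_application_info.set_action('logon');
    END;
    /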
    Oracle Automatic Workload Repository (AWR) Snapshots
    PeopleSoft generates a lot of non-shareable SQL.
    • Dynamically generated SQL, often in PeopleCode, concatenates strings of SQL with bind variables, thus the bind variables become literals in the final SQL statement.  Statements with different literal values are considered to be different statements with different SQL_IDs.
    • Similarly, dynamic Cobol statements result in literal values in the SQL statement.
    • %BIND() variables in Application Engine will also become literal values in the SQL statement unless the ReUseStatement attribute is set on the AE step; however, this attribute cannot be set on statements where dynamic fragments of code are introduced with %BIND(…,NOQUOTES).
    • Application Engine programs that use temporary records can use different non-shared instances of the record in different executions, and this also results in different statements with different SQL_IDs.
    • See also Performance Benefits of ReUse Statement Flag in Application Engine
    Consequently, library caching is not particularly efficient in PeopleSoft, and dynamic SQL statements are often quickly aged out of the library cache.  AWR snapshots can only capture the SQL that is in the library cache at the time of the snapshot.  If the SQL statement, or at least a statement with the same force matching signature or plan hash value, cannot be found in the AWR, it cannot be identified or analysed.  Therefore, it is advantageous to increase the snapshot frequency on PeopleSoft systems.
    Recommendation: Reduce the AWR snapshot frequency from the default of 1 hour to every 15 minutes.  This change results in only a modest increase in processing overhead and space usage in AWR, and it is worth it for the additional information that is captured.
    This advice also applies to Statspack, which may be used if you are not licenced for the Diagnostics Pack.
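    Changing the snapshot interval is a single call (a sketch; it requires a Diagnostics Pack licence, and the interval is specified in minutes):

    BEGIN
      dbms_workload_repository.modify_snapshot_settings(interval => 15);
    END;
    /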
    PeopleTools Performance Metrics
    Batch Timings
    PeopleSoft Application Engine and Cobol programs can emit batch timings reports on successful completion.
    Application Engine
    Application Engine batch timings are controlled by the AETrace flag in the Process Scheduler domain configuration file and, for on-line AE programs, in the Application Server domain configuration files.
    • AETrace=128: the batch timings report is written to the AE trace file
    • AETrace=1024: batch timings are written to PS_BAT_TIMINGS% tables in the database
    The overhead of batch timings is negligible while the program is running because it is accounted in memory and only written to file or database when the process completes successfully.
    Recommendation: Enable Application Engine batch timings to at least database and preferably also file by setting AETrace=1152 in all Application Server and Process Scheduler domain configuration files.
    The trace setting in the process scheduler configuration can be overridden by setting process-specific command line parameter overrides in the process definition.  This is often done to set other trace settings, and it is common to see these unintentionally left in place longer than necessary.  If trace is set in this way, it should always also set the batch timings flags.
    See PeopleBooks -> Development Tools -> Application Engine -> Enabling Application Engine Tracing
    Cobol
    PeopleSoft Cobol programs can only write batch timings reports to file, not to the database.  This is controlled by a different parameter.
    • TraceSQL = 128: Enable Cobol statement timings report 
    Recommendation: Enable the Cobol statement timings report to be written to the log file by setting TraceSQL=128 in all Process Scheduler domain configuration files.
    This trace setting can also be overridden by setting process specific command line parameter overrides in the process definition.  If trace is set in this way it should always also set the batch timings flags.
    PeopleSoft Performance Monitor
    This provides information about the performance of the PIA, including response times for online transactions.  Metrics are stored in a separate monitoring PeopleSoft system to minimize the effect of measurement intrusion.  It optionally samples the state of each web server, application server and process scheduler, collecting operating system and Tuxedo metrics.  It also has a PIA session trace capability.
    The sampled data includes the number of busy application server processes and length of inbound Tuxedo service queues.  This data can be used to validate the sizing of the application servers.
    Recommendation: Performance Monitor is complex to set up and the delivered analytics are limited.  Nonetheless, the information that can be obtained from the data collected can be worth the effort of configuration to address on-line configuration and performance issues.

    How Not to Collect Optimizer Statistics in an Application Engine Program

    Thu, 2018-09-06 05:24
    I was told about a PeopleSoft customer experiencing an Oracle error when collecting statistics during an Application Engine program.
    ORA-06533: Subscript beyond count 
    ORA-06512: at "SYS.DBMS_STATS", line 36873
    It is possibly a manifestation of a database bug. The workaround was not to use AUTO_SAMPLE_SIZE, so instead, this customer initially coded an explicit call to DBMS_STATS.GATHER_TABLE_STATS with a specific sample size.
    This blog is not about the bug, but how to manage the workaround.
    DO NOT TRY THIS AT HOME!

    I think that there are a number of problems with this approach:
    1. Using a fixed sample size rather than AUTO_SAMPLE_SIZE should only be considered as a temporary workaround. The new hash-based number-of-distinct-values (NDV) algorithm in Oracle 12c only works with AUTO_SAMPLE_SIZE, and it produces more accurate statistics and runs faster because it saves a large sort operation. Coding a fixed sample size into an Application Engine program requires a managed change to be made, tested and released into production; then, when the underlying problem is resolved, the customisation needs to be removed by the same managed process.
    2. DBMS_STATS.GATHER_TABLE_STATS implies a commit. That can lead to problems that PeopleSoft avoids by only calling statistics via the %UpdateStats macro and controlling when that macro does and does not execute. 
    Committing and Restart Checkpointing in Application Engine
    If a restartable Application Engine program fails, it rolls back to the last commit point, and it can then be restarted from that point. Committing is managed by Application Engine at section and step levels, where the program state record is updated accordingly. If an error occurs in a step after the implicit commit in DBMS_STATS, it can result in the data in the application tables being different to the point from which the state record indicates the program can be restarted. The program may not restart, or it could conceivably execute but produce erroneous results.
    Committing inside a do while loop, including in any other Application Engine program called from inside the loop, is suppressed at Application Engine section/step level, and therefore the execution of the %UpdateStats macro is also suppressed. Otherwise, you could get rogue ORA-01555 Snapshot Too Old errors. Suppression of %UpdateStats is reported in the Application Engine step trace.
    -- 19:07:37.199 .(AEMINITEST.MAIN.LOOP) (Do Select)
    %Select(recname) SELECT recname FROM psrecdefn WHERE recname = 'JOB'
    /
    -- Buffers:
    -- 1) JOB
    -- 19:07:37.204 Iteration 1 of .(AEMINITEST.MAIN.LOOP) (Do Select) loop

    -- 19:07:37.208 .(AEMINITEST.MAIN.LOOP) (SQL)
    RECSTATS PS_JOB LOW
    /
    -- 19:07:37.211 UpdateStats ignored - COMMIT required

    -- 19:07:37.212 .(AEMINITEST.MAIN.LOOP) (Do Fetch)
    Even a previously uncommitted SQL step can lead to %UpdateStats being suppressed.
    -- 19:07:35.205 .(AEMINITEST.MAIN.Step01) (SQL)
    UPDATE PS_AETEMPTBLMGR SET AE_DEDICATED = 1 WHERE 0 = 1
    /
    -- Row(s) affected: 0

    -- 19:07:35.213 .(AEMINITEST.MAIN.Step02) (PeopleCode)

    -- 19:07:35.220 .(AEMINITEST.MAIN.US1) (SQL)
    RECSTATS PS_JOB LOW
    /
    -- 19:07:35.227 UpdateStats ignored - COMMIT required
    If you code DBMS_STATS.GATHER_TABLE_STATS explicitly, Application Engine will not recognise the step as having committed. In the following example, you can see that the %UpdateStats on the last step has been suppressed because Application Engine does not recognise that the update in the first step has been committed by the call to DBMS_STATS.
    -- 19:12:06.690 .(AEMINITEST.MAIN.Step01) (SQL)
    UPDATE PS_AETEMPTBLMGR SET AE_DEDICATED = 1 WHERE 0 = 1
    /
    -- Row(s) affected: 0
    -- 19:12:06.696 .(AEMINITEST.MAIN.Step02) (PeopleCode)
    -- 19:12:06.700 .(AEMINITEST.MAIN.GTS) (SQL)
    %Execute(/) BEGIN
    dbms_stats.gather_table_stats('SYSADM','PS_JOB',estimate_percent=>1); END; /
    /
    -- Row(s) affected: 1

    -- 19:12:09.871 .(AEMINITEST.MAIN.US1) (SQL)
    RECSTATS PS_JOB LOW
    /
    -- 19:12:09.877 UpdateStats ignored - COMMIT required
    /
    Perhaps, the safest form of this workaround would be to have the step with DBMS_STATS and the immediately preceding step explicitly commit as in the following example.  I have also made the program restartable.  Now restart data is checkpointed, and the %UpdateStats macro executes at step US1.
    Restart Data CheckPointed
    /
    COMMIT
    /

    -- 19:20:24.792 .(AEMINITEST.MAIN.GTS) (SQL)
    %Execute(/) BEGIN
    dbms_stats.gather_table_stats('SYSADM','PS_JOB',estimate_percent=>1); END; /
    /

    -- Row(s) affected: 1
    /
    /
    Restart Data CheckPointed
    /
    COMMIT
    /

    -- 19:20:29.153 .(AEMINITEST.MAIN.US1) (SQL)
    RECSTATS PS_JOB LOW
    /
    -- Row(s) affected: 1
    /
    However, you have to consider the state of the application data after an error, and whether you wish to restart or cancel the Application Engine, because you can no longer roll back.
    Doing the Right Thing
    I recommend that:
    • You should only ever collect stats in Application Engine with the %UpdateStats macro that in turn executes the command in the DDL model.
    • From Oracle 11g both PeopleSoft statistics gathering DDL models should be the same and should ultimately call DBMS_STATS without any parameters other than the table name. The default value of ESTIMATE_PERCENT is AUTO_SAMPLE_SIZE.
    There are two DDL models in PeopleSoft because %UpdateStats can be invoked with a second parameter to collect the statistics HIGH or LOW. This dates back to Oracle's ANALYZE command, which could either compute or estimate statistics (and other database platforms had similar options). Collecting optimizer statistics with ANALYZE has been deprecated for many years, but the command still has other valid uses.  It was superseded by DBMS_STATS in Oracle 8i (released in 1998).
    Automatic sample size was introduced in Oracle 9i.  In Oracle 9i and 10g, it was usual to use automatic sample size in the high statistics gathering model and a small fixed sample size in the low model for use on very large tables.  The LOW parameter was specified on %Updatestats in Application Engine programs as necessary.
    This approach became redundant from Oracle 11g with the introduction of table preferences. If you need to collect statistics with a specific rather than the automatic sample size, or to specify any other parameters, then a table preference should be defined. It applies whenever statistics are gathered on that table, whether during the statistics maintenance window or in any place where statistics are explicitly collected, rather than just in a single call.
    From Oracle 12c, as the new NDV algorithm only works with AUTO_SAMPLE_SIZE, you should always use the default unless you have an overarching reason to the contrary, and then you should use a table preference. This approach does not require any application code change because the preference is an attribute of a table in the database.
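    For example, to reproduce the old fixed 1% sample on a single table as a preference rather than as code (the table name here is hypothetical):

    BEGIN
      dbms_stats.set_table_prefs(ownname => 'SYSADM'
                                ,tabname => 'PS_XYZ_TAO1'      -- hypothetical table
                                ,pname   => 'ESTIMATE_PERCENT'
                                ,pvalue  => '1');
    END;
    /

    Thereafter, every call to DBMS_STATS.GATHER_TABLE_STATS on that table picks up the preference without any change to the calling code.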
    I recommend using GFCPSSTATS package, calling it from the DDL model (see previous blog Managing Cost-Based Optimizer Statistics for PeopleSoft). The package also includes a mechanism to specify table preferences in metadata, so that they are automatically instantiated when a table is created or altered by Application Designer, and are applied to every instance of a temporary record.
    gfcpsstats11.ps_stats(p_ownname=>[DBNAME], p_tabname=>[TBNAME]);
    Alternatively, just call DBMS_STATS with no additional parameters (other than FORCE, in case you lock statistics on temporary tables) but then you must manage table preferences manually.
    DBMS_STATS.GATHER_TABLE_STATS(ownname=>[DBNAME], tabname=>[TBNAME], force=>TRUE);
    Further Reading

    Managing Cost-Based Optimizer Statistics for PeopleSoft

    Fri, 2018-06-29 06:05
    I gave this presentation to UKOUG PeopleSoft Roadshow 2018

    PeopleSoft presents some special challenges when it comes to collecting and maintaining the object statistics used by the cost-based optimizer.

    I have previously written and blogged on this subject.  This presentation focuses exclusively on the Oracle database and draws together the various concepts into a single consistent picture.  It makes clear recommendations for Oracle 12c that will help you work with the cost-based optimizer, rather than continually fight against it.

    It looks at collecting statistics for permanent and temporary working storage tables and considers some other factors that can affect optimizer statistics.

    This presentation also discusses PSCBO_STATS, that is going to be shipped with PeopleTools, and compares and contrasts it with GFCPSSTATS11.

    Application Engine in Process Scheduler: PSAESRV Server Process -v- Standalone PSAE executable

    Sat, 2018-04-14 06:05
    Whether to use the Application Engine server process (PSAESRV) in the process scheduler Tuxedo domain or the standalone PSAE executable is a frequently discussed point amongst PeopleSoft administrators.  Over the years, I have written various things on the subject.  I am going to draw them together in this blog and restate Oracle's now clear advice about when to use which option.

    In PeopleTools 8.4, the Process Scheduler became a fully fledged Tuxedo domain.  The PSAESRV process was also introduced at this time.  It is a persistent process that handles both Application Engine and Optimization Engine requests.  Each server process creates persistent database connections.  A number of these server processes are started with the domain.   The PSAESRV process does not spawn like other Tuxedo server processes.  Instead, you must configure the number of server processes to match the maximum number of concurrent Application Engine process requests and concurrent Optimization Engine requests that the process scheduler can run.  The server was introduced to handle very short-lived Application Engine programs thus avoiding the overhead of instantiating a new process and new database sessions for each process request.  CRM typically uses Application Engine in this fashion, but generally, you do not see this in other PeopleSoft products.

    Oracle has not always been clear what they mean by a short-lived process.  It has been suggested that if Application Engine processes typically take less than 10-30 seconds, or if you run more than 1000 Application Engine process requests per hour (note 651970.1), you should use PSAESRV.
    PeopleBooks advises you should use PSAESRV because it delivers improved system performance.  However, the PeopleTools Performance Guidelines Red Paper (Doc ID 747389.1) contradicts this somewhat.  Ultimately, if you have any doubts, you should test it each way and determine whether one way gives a clear advantage over the other.

    Oracle Support Note "What will be the impact of disabling PSAESRV on the Process Scheduler (Doc ID 651970.1)" explains that if PSAESRV is disabled in the Tuxedo domain configuration, the Process Scheduler goes back to the legacy behaviour and spawns a stand-alone PSAE process for each Application Engine request.  “The Application Engine will take a bit longer to start, [the] time delay may be range from millisecond to seconds” depending on hardware and configuration.

    The stand-alone process has several advantages.
    • At the end of the Application Engine program, it disconnects from the database and terminates, thus releasing resources held by the process and the database session.  The persistent PSAESRV process, by contrast, has been reported to accumulate allocated memory over time.
    • If you are using Oracle database Global Temporary Tables in an Application Engine program, then you should not use PSAESRV, because the tables are always created ON COMMIT PRESERVE ROWS, and so the rows are only released when the database session terminates.
    • If you set any session parameters within an Application Engine program run via PSAESRV, or enable database trace, then these settings will carry forward from one Application Engine program to the next unless you reset the parameter at the end of the program, or at the start of the next.  This is not a concern with standalone PSAE processes.
    However, there is at least one case where you must use the server process:
    • If you are using Oracle Active Data Guard and wish to mark some Application Engine programs as read-only, then they must be run via the PSAESRV process.
    Conclusion
    • The PeopleTools Performance Guidelines Red Paper (Doc ID 747389.1) sums it up very nicely: “PSAE is as good as PSAESRV for most practical purposes.  If you have an application engine job that runs longer than 10 seconds, PSAE is equivalent to PSAESRV.  PSAE has the added advantage of being recycled at the end of each application engine job, cleaning up any outstanding SQL cursors to the database that may have been left behind.  Because PSAE recycles after each use, PSAE does not have any possible memory leakage problem that may occupy the precious system memory.  In short, PSAE is a cleaner workhorse.”
    • I think it is reasonable to use PSAESRV in CRM.  For all other products, I recommend that PSAESRV should be disabled from all Process Schedulers.
      • If you do have some Application Engine processes that are both short-lived (i.e. less than 10 seconds) and run frequently, then consider creating other process schedulers with PSAESRV processes that are dedicated to running only these processes.  You can then move these processes to a new Process Scheduler category that only runs on these new Process Schedulers.
    • PSAESRV is configured by default, so if you don’t want to use it, and mostly you won’t, then you have to remember to disable it.

    Resetting High Water Marks on On-line Temporary Table Instances

    Tue, 2018-03-20 11:45
    PeopleSoft has always used regular database tables for temporary working storage in batch processes.  Up to PeopleTools 7.x, working storage tables were shared by all instances of a program.  That led to consistent read contention when multiple processes concurrently used the same table, and to much higher high water marks that increased the duration of full scans.
    From PeopleTools 8, many copies of each temporary working storage table are created.  Application Engines that run on the Process Scheduler are allocated exclusive use of a particular copy of the table.  This avoids the inter-process contention.  They start by truncating each allocated table, which resets the high-water mark.
    Some delivered processing uses batch programs that are run apparently synchronously from the PIA.  On-line edit and post in Financials General Ledger is a common example.  Up to PeopleTools 7, the application server would synchronously spawn a batch process and wait for it to complete.  From PeopleTools 8, the process is submitted to the process scheduler, and the PIA polls the scheduler tables waiting for the process to complete.  However, Application Engine can also be run within the component processor.  In Financials General Ledger, this can be chosen by setting an installation configuration option.  The truly on-line method can perform better because you are no longer waiting for the process scheduler to pick up the process request.  A separate Application Engine process is not spawned; instead the Application Engine program is executed by the PSAPPSRV application server process.  One of the limitations is that the Application Engine program cannot commit.  Committing after steps or sections is suppressed, and the %TruncateTable macro generates a delete statement instead.  Therefore, on-line temporary table instances are never truncated by any process, and their high-water marks can be raised by processes that handle larger volumes of data.  This can have an impact on subsequent processes that handle smaller data volumes but still have to full-scan working storage tables up to their high water marks.
    Truncating On-line Temporary Table Instances
    The answer is to implement a periodic process that truncates working storage tables, but only when the table is not currently being used by a process.  Every on-line Application Engine program is allocated a temporary table instance number, and it locks the corresponding row on the table PS_AEONLINEINST.  If it is allocated instance 1, it locks the row where CURTEMPINSTANCE is 1 and uses instance 1 of each temporary record that it needs.
    Therefore the proposed truncate process must also lock the row on PS_AEONLINEINST that corresponds to each table that is to be truncated.  The truncate must be done in an autonomous transaction so that the implicit commit does not release that lock.  The lock can be released after the truncate completes.  Thus, the truncate process waits for any online process to complete before truncating a table with the same instance number, and no process can start while the truncate process is holding the lock.  However, each truncate will be very quick, and so each lock will only be held briefly, and it will have only a minimal effect on any online process that may be running at the time.  
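    The heart of the process is the combination of a row lock and an autonomous transaction, along these lines.  This is a simplified sketch; the table name is hypothetical, and the real package loops through all on-line instances of all temporary records.

    DECLARE
      PROCEDURE trunc_table(p_table VARCHAR2) IS
        PRAGMA AUTONOMOUS_TRANSACTION;  -- its implicit commit does not release the caller's lock
      BEGIN
        EXECUTE IMMEDIATE 'TRUNCATE TABLE '||p_table;
      END;
    BEGIN
      FOR i IN (
        SELECT curtempinstance
        FROM   ps_aeonlineinst
        WHERE  curtempinstance = 1
        FOR UPDATE  -- wait for, then block, any on-line process using instance 1
      ) LOOP
        trunc_table('PS_EXAMPLE_TAO1');  -- hypothetical temporary table, instance 1
      END LOOP;
      COMMIT;  -- release the lock on PS_AEONLINEINST
    END;
    /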

    I have written a PL/SQL packaged procedure to perform this process for all temporary records.  It is available on GitHub as part of my collection of miscellaneous PeopleSoft scripts.
    Package Usage
    Usually, the package will be run without any parameters. The default behaviour is to truncate tables with more than a single extent.  Information on what the package does is emitted to the server output.
    Set serveroutput on 
    EXECUTE xx_onlineinsthwmreset.main;
    The package can be run in test mode when it will list the commands without executing them.  Thus you can see what it will do without actually doing it.
    EXECUTE xx_onlineinsthwmreset.main(p_testmode=>TRUE);
    The package can optionally deallocate any physical storage. Storage will be reallocated next time the table is used.
    EXECUTE xx_onlineinsthwmreset.main(p_drop_storage=>TRUE, p_min_extents=>0);
    The package can be run for certain tables that match a particular pattern.
    EXECUTE xx_onlineinsthwmreset.main(p_recname_like=>'JP%');
    I recommend that the package is run daily. However, it can be run safely while the users are doing on-line edit/post processing, but it would be sensible to choose a quiet time.

    Fewer Platform Flags on Indexes from PeopleTools 8.55

    Wed, 2018-03-07 12:28
    It has always been possible in Application Designer to specify upon which database platforms each index should be built.  This is really a feature that is used by PeopleSoft development, rather than by customers, to deliver indexes that are more appropriate for a particular platform due to differences in the optimizers.
    Over the years, the number of supported PeopleSoft platforms has declined.  In PeopleTools 8.45, it went down from 9 to 6, and in PeopleTools 8.55 it has gone down to just 4, but there are still 9 columns on PSINDEXDEFN that correspond to the original 9 supported platforms.
    I explained in a previous blog that you can have all or none of the platform flags set to the same value, yet the platform radio button on the index properties dialogue box is still set to 'Some' because one or more of the platform flag columns for the unsupported platforms is set differently.  Of course, this is a purely cosmetic problem, but one that can cause confusion in Application Designer.
    [Screenshots: the index properties dialogue in PeopleTools 8.45 and in PeopleTools 8.55]
    I fix this by updating the PeopleTools tables as follows. The first query reports the indexes where the supported platform flags all have the same value and one of the unsupported platform flags is different.
    column RECNAME format a15
    column INDEXID format a3 heading 'Idx|ID'
    column DDLCOUNT format 999 heading 'DDL|Cnt'
    column CUSTKEYORDER format 9999 heading 'Cust|Key|Order'
    column KEYCOUNT format 999 heading 'Key|Cnt'
    column PLATFORM_SBS format 999 heading 'SBS'
    column PLATFORM_DB2 format 999 heading 'DB2'
    column PLATFORM_ORA format 999 heading 'ORA'
    column PLATFORM_INF format 999 heading 'INF'
    column PLATFORM_DBX format 999 heading 'DBx'
    column PLATFORM_ALB format 999 heading 'ALB'
    column PLATFORM_SYB format 999 heading 'SYB'
    column PLATFORM_MSS format 999 heading 'MSS'
    column PLATFORM_DB4 format 999 heading 'DB4'
    column ACTIVEFLAG Format 999 heading 'Active'
    column CLUSTERFLAG format 999 heading 'Clst|Flg'
    column UNIQUEFLAG format 999 heading 'Uniq|Flg'
    column INDEXTYPE format 999 heading 'Idx|Type'
    column IDXCOMMENTS format a60
    spool platformfix855
    SELECT *
    FROM PSINDEXDEFN
    WHERE PLATFORM_DB2=PLATFORM_DBX
    AND PLATFORM_DBX=PLATFORM_ORA
    AND PLATFORM_ORA=PLATFORM_MSS
    AND (PLATFORM_ORA!=PLATFORM_SBS
      OR PLATFORM_ORA!=PLATFORM_ALB
      OR PLATFORM_ORA!=PLATFORM_SYB
      OR PLATFORM_ORA!=PLATFORM_INF
      OR PLATFORM_ORA!=PLATFORM_DB4)
    ;
    These are the indexes that have inconsistent platform flags.  In this case, the unique index on PSPMTRANSHIST is not to be built on DB2/AS400.  You can't update the flag via Application Designer, but you could set the radio button to ALL.
                                               Cust
                    Idx  Idx Uniq Clst          Key  Key  DDL
    RECNAME         ID  Type  Flg  Flg Active Order  Cnt  Cnt  SBS  DB2  ORA  INF  DBx  ALB  SYB  MSS  DB4
    --------------- --- ---- ---- ---- ------ ----- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ----
    IDXCOMMENTS
    ------------------------------------------------------------
    PSPMTRANSHIST   _      1    1    1      1     0    4    1    1    1    1    1    1    1    1    1    0
    It could be a tedious process to do this for a lot of indexes, so the following SQL commands correct all indexes.  They set the flags for the unsupported platforms to the value of the supported platforms if those are all the same. The version number on the record definition is updated so that Application Designer refreshes the object.
    UPDATE PSVERSION
    SET VERSION = VERSION + 1
    WHERE OBJECTTYPENAME IN('SYS','RDM');

    UPDATE PSLOCK
    SET VERSION = VERSION + 1
    WHERE OBJECTTYPENAME IN('SYS','RDM');

    UPDATE PSRECDEFN
    SET VERSION = (
    SELECT VERSION
    FROM PSVERSION
    WHERE OBJECTTYPENAME = 'RDM')
    WHERE RECNAME IN (
    SELECT RECNAME
    FROM PSINDEXDEFN
    WHERE PLATFORM_DB2=PLATFORM_DBX
    AND PLATFORM_DBX=PLATFORM_INF
    AND PLATFORM_INF=PLATFORM_ORA
    AND PLATFORM_ORA=PLATFORM_SYB
    AND ( PLATFORM_ORA!=PLATFORM_SBS
    OR PLATFORM_ORA!=PLATFORM_ALB
    OR PLATFORM_ORA!=PLATFORM_DB4)
    );

    UPDATE psindexdefn
    SET PLATFORM_DB4=PLATFORM_ORA
    WHERE PLATFORM_DB2=PLATFORM_DBX
    AND PLATFORM_DBX=PLATFORM_INF
    AND PLATFORM_INF=PLATFORM_ORA
    AND PLATFORM_ORA=PLATFORM_SYB
    AND PLATFORM_SYB=PLATFORM_MSS
    AND PLATFORM_ORA!=PLATFORM_DB4;

    UPDATE psindexdefn
    SET PLATFORM_ALB=PLATFORM_ORA
    WHERE PLATFORM_DB2=PLATFORM_DBX
    AND PLATFORM_DBX=PLATFORM_INF
    AND PLATFORM_INF=PLATFORM_ORA
    AND PLATFORM_ORA=PLATFORM_SYB
    AND PLATFORM_SYB=PLATFORM_MSS
    AND PLATFORM_ORA!=PLATFORM_ALB;

    UPDATE psindexdefn
    SET PLATFORM_SBS=PLATFORM_ORA
    WHERE PLATFORM_DB2=PLATFORM_DBX
    AND PLATFORM_DBX=PLATFORM_INF
    AND PLATFORM_INF=PLATFORM_ORA
    AND PLATFORM_ORA=PLATFORM_SYB
    AND PLATFORM_SYB=PLATFORM_MSS
    AND PLATFORM_ORA!=PLATFORM_SBS;
    The platform flags now say 'ALL'. Not a tremendous change, but at least I can immediately see that these indexes do build on all platforms without having to open each one.

    The new platformfix855.sql script is available in my collection of miscellaneous scripts on GitHub.

    Setting Oracle Session Parameters for Specific Process Scheduler Processes

    Fri, 2018-03-02 06:01
    This note describes a mechanism for setting Oracle initialisation parameters for specific processes run on the process scheduler. I will demonstrate it in relation to nVision, but it has general application in PeopleSoft.
    A table is used to hold metadata that describes which setting is applied to which processes. A trigger on the process scheduler request table PSPRCSRQST reads that data and alters the session settings. This approach is easier to adjust and understand than static PL/SQL code in a trigger.
    NB: The PSAESRV server process is used to run Application Engine programs in the process scheduler by default.  It should only be used for very short-lived programs and should usually be deconfigured. It maintains a persistent connection to the database. Therefore, session parameters set for one Application Engine program will carry forward into subsequent programs run on the same server process, with the potential for unintended and unpredictable results. All other processes establish their own database session that ends when the process terminates. This trigger mechanism can still be used to set parameters for some processes that run on PSAESRV, but the default parameter value should then be set for all other Application Engine processes.
    Metadata
    The table that holds the metadata should be defined in Application Designer.
    CREATE TABLE PS_PRCS_SESS_PARM
    (PRCSTYPE   VARCHAR2(30)  NOT NULL
    ,PRCSNAME   VARCHAR2(12)  NOT NULL
    ,OPRID      VARCHAR2(30)  NOT NULL
    ,RUNCNTLID  VARCHAR2(30)  NOT NULL
    ,PARAM_NAME VARCHAR2(50)  NOT NULL
    ,PARMVALUE  VARCHAR2(128) NOT NULL
    ) TABLESPACE PTTBL
    STORAGE (INITIAL 40000 NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0)
    PCTFREE 10 PCTUSED 80
    /
    CREATE UNIQUE INDEX PS_PRCS_SESS_PARM
    ON PS_PRCS_SESS_PARM (PRCSTYPE, PRCSNAME, OPRID, RUNCNTLID, PARAM_NAME)
    TABLESPACE PSINDEX
    STORAGE (INITIAL 40000 NEXT 100000 MAXEXTENTS UNLIMITED PCTINCREASE 0)
    PCTFREE 10 PARALLEL NOLOGGING
    /
    ALTER INDEX PS_PRCS_SESS_PARM NOPARALLEL LOGGING
    /
    I will demonstrate this mechanism for nVision. The metadata is simply inserted into the table by script.
    INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
    VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', 'parallel_degree_policy','auto');
    INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
    VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', 'parallel_degree_limit','4');
    INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
    VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', 'parallel_degree_level','150');
    INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
    VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', 'parallel_min_time_threshold','1');
    INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue)
    VALUES ('nVision-ReportBook','RPTBOOK',' ',' ', '_optimizer_skip_scan_enabled','FALSE');
    Here we have 5 session parameters that will apply to all nVision reportbooks, but which I don't want to apply to the rest of the system.
    PRCSTYPE             PRCSNAME     OPRID      RUNCNTLID       PARAM_NAME                     PARMVALUE
    -------------------- ------------ ---------- --------------- ------------------------------ --------------------
    nVision-ReportBook   RPTBOOK                                 parallel_degree_policy         auto
    nVision-ReportBook   RPTBOOK                                 parallel_degree_limit          4
    nVision-ReportBook   RPTBOOK                                 parallel_degree_level          150
    nVision-ReportBook   RPTBOOK                                 parallel_min_time_threshold    1
    nVision-ReportBook   RPTBOOK                                 _optimizer_skip_scan_enabled   FALSE
    The objective is to use limited parallelism only in the nVision reporting, without decorating the underlying ledger tables with a degree of parallelism.
    • parallel_degree_policy=auto enables the 12c automatic degree of parallelism calculation and statement queuing.
    • parallel_min_time_threshold is set to 1 second. The default is 10. Statements whose runtime is estimated to be greater than or equal to this value will be considered for automatic degree of parallelism. 
    • parallel_degree_limit=4 restricts the automatic degree of parallelism to 4 to prevent any one statement using excessive parallelism.
    • parallel_degree_level=150 scales up the automatic degree of parallelism calculation but within the parallel_degree_limit. See Kerry Osborne's blog 12c – parallel_degree_level (control for auto DOP)
    • _optimizer_skip_scan_enabled=FALSE disables index skip scan to promote the use of smart full scan and Bloom filtering. It is recommended for engineered systems in Advice for the PeopleSoft DBA. Skip scan can prevent the optimizer from choosing a smart full scan, so it makes sense to limit the setting to just nVision.
    I can also specify a parameter that will only be set when the reportbook is run by a particular operator with a particular run control.
    INSERT INTO sysadm.PS_PRCS_SESS_PARM (prcstype, prcsname, oprid, runcntlid, param_name, parmvalue) VALUES ('nVision-ReportBook','RPTBOOK','NVOPR','NVSRBK_2', 'parallel_degree_level','200');
    The specific setting for one particular operator ID and run control takes precedence over the generic setting for all reportbooks. In this case, I will scale up the degree of parallelism further for one particular reportbook.
    PRCSTYPE             PRCSNAME     OPRID      RUNCNTLID       PARAM_NAME                     PARMVALUE
    -------------------- ------------ ---------- --------------- ------------------------------ --------------------
    nVision-ReportBook   RPTBOOK                                 parallel_degree_level          150
    nVision-ReportBook   RPTBOOK      NVOPR      NVSRBK_2        parallel_degree_level          200
    Trigger
    When a process starts, the first thing it does is update its own run status to 7 to indicate that it is processing. This is another example of a trigger created on that transition to inject behaviour at the start of a PeopleSoft process. This trigger reads the metadata and applies the settings with an ALTER SESSION command. The process type, name, operator and run control attributes must exactly match the process request, but a blank value is treated as a wildcard. Underscore parameters must be delimited in double quotes.
    CREATE OR REPLACE TRIGGER sysadm.set_prcs_sess_parm
    BEFORE UPDATE OF runstatus ON sysadm.psprcsrqst
    FOR EACH ROW
    WHEN (new.runstatus = 7
      AND old.runstatus != 7
      AND new.prcstype != 'PSJob')
    DECLARE
      l_cmd   VARCHAR2(100 CHAR);
      l_delim VARCHAR2(1 CHAR);
    BEGIN
      FOR i IN (
        WITH x AS (
          SELECT p.*
          ,      row_number() OVER (PARTITION BY param_name
                                    ORDER BY NULLIF(prcstype, ' ') NULLS LAST
                                    ,        NULLIF(prcsname, ' ') NULLS LAST
                                    ,        NULLIF(oprid,    ' ') NULLS LAST
                                    ,        NULLIF(runcntlid,' ') NULLS LAST
                                   ) priority  -- the most specific matching rule wins
          FROM   PS_PRCS_SESS_PARM p
          WHERE  (p.prcstype  = :new.prcstype  OR p.prcstype  = ' ')
          AND    (p.prcsname  = :new.prcsname  OR p.prcsname  = ' ')
          AND    (p.oprid     = :new.oprid     OR p.oprid     = ' ')
          AND    (p.runcntlid = :new.runcntlid OR p.runcntlid = ' ')
        )
        SELECT * FROM x
        WHERE  priority = 1
      ) LOOP
        IF SUBSTR(i.param_name,1,1) = '_' THEN
          l_delim := '"';  -- underscore parameters must be delimited in double quotes
        ELSE
          l_delim := '';
        END IF;

        IF NULLIF(i.parmvalue,' ') IS NOT NULL THEN
          dbms_output.put_line('Rule:'||NVL(NULLIF(i.prcstype,' '),'*')
            ||'.'||NVL(NULLIF(i.prcsname,' '),'*')
            ||':'||NVL(NULLIF(i.oprid,' '),'*')
            ||'.'||NVL(NULLIF(i.runcntlid,' '),'*')
            ||':'||i.param_name||'='||i.parmvalue);

          l_cmd := 'ALTER SESSION SET '||l_delim||i.param_name||l_delim||'='||i.parmvalue;
          dbms_output.put_line('PI='||:new.prcsinstance||':'||:new.prcstype||'.'||:new.prcsname
            ||':'||:new.oprid||'.'||:new.runcntlid||':'||l_cmd);
          EXECUTE IMMEDIATE l_cmd;
        END IF;
      END LOOP;
    EXCEPTION
      WHEN OTHERS THEN
        dbms_output.put_line(sqlerrm);  -- report but never raise, so the status update always succeeds
    END;
    /
    The trigger script set_prcs_sess_parm.sql can be download from my miscellaneous PeopleSoft scripts repository on Github.
    Testing
    The trigger can be tested by updating the process scheduler request table in SQL*Plus, but be careful to roll back the update afterwards rather than committing. The trigger writes debug information to the server output, which can be seen in SQL*Plus, showing the rule being used and the ALTER SESSION command generated. However, this output is not captured in any PeopleSoft log when the trigger is fired by a PeopleSoft process.
    set serveroutput on 
    update psprcsrqst set runstatus = 7 where runstatus != 7
    and prcsname = 'RPTBOOK' and runcntlid = 'NVSRBK_2' and oprid = 'NVOPR' and rownum = 1;
    rollback;

    Rule:nVision-ReportBook.RPTBOOK:*.*:_optimizer_skip_scan_enabled=FALSE
    PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET "_optimizer_skip_scan_enabled"=FALSE
    Rule:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:parallel_degree_level=200
    PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET parallel_degree_level=200
    Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_limit=4
    PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET parallel_degree_limit=4
    Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_policy=auto
    PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET parallel_degree_policy=auto
    Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_min_time_threshold=1
    PI=3543986:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_2:ALTER SESSION SET parallel_min_time_threshold=1
    In the above example, the specific rule for NVSRBK_2 was applied setting PARALLEL_DEGREE_LEVEL to 200, whereas in the next example the generic setting of 150 is applied to NVSRBK_1.
    update psprcsrqst set runstatus = 7 where runstatus != 7
    and prcsname = 'RPTBOOK' and runcntlid = 'NVSRBK_1' and oprid = 'NVOPR' and rownum = 1;
    rollback;

    Rule:nVision-ReportBook.RPTBOOK:*.*:_optimizer_skip_scan_enabled=FALSE
    PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET "_optimizer_skip_scan_enabled"=FALSE
    Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_level=150
    PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET parallel_degree_level=150
    Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_limit=4
    PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET parallel_degree_limit=4
    Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_degree_policy=auto
    PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET parallel_degree_policy=auto
    Rule:nVision-ReportBook.RPTBOOK:*.*:parallel_min_time_threshold=1
    PI=3543964:nVision-ReportBook.RPTBOOK:NVOPR.NVSRBK_1:ALTER SESSION SET parallel_min_time_threshold=1

    Invalid Views in PeopleSoft and the Oracle Database

    Wed, 2018-01-31 14:33
    I was listening to the section on Invalid Views in PSADMIN Podcast #117 (@19:00). Essentially, when you drop and recreate a view that is referenced by a second view, the status of the second view in the database becomes invalid. This is not a huge problem, because as soon as you query the second view it is recompiled. However, you would like to know whether any change to a view prevents any dependent views from compiling, although you would expect to have teased these errors out before migration to production.
The PeopleSoft solution to this is to include all the dependent views in the Application Designer package. However, as pointed out in the podcast, you are now releasing code, possibly unintentionally releasing code changes, and certainly updating last-change dates on record definitions, when really you just need to compile the database objects.  PeopleSoft does this because it is a platform-generic solution, but it is really using Application Designer to solve a database administration issue.
A similar problem occurs in the Oracle database with dependent PL/SQL procedures and packages, where you sometimes get referential loops. Oracle provides a procedure, DBMS_UTILITY.COMPILE_SCHEMA, that recompiles all invalid objects in a schema and reports any errors in the ALL_ERRORS view.  I think this is a much safer option.
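For reference, here is a minimal sketch (assuming the SCOTT schema used in the example below) that recompiles only the objects currently marked invalid, and then lists anything that still will not compile:
set serveroutput on
rem compile_all => FALSE recompiles only objects that are currently INVALID
EXEC DBMS_UTILITY.compile_schema(schema => 'SCOTT', compile_all => FALSE);

column name format a12
select name, type, line, position, text
from user_errors
order by name, sequence
/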

Here is a very simple (non-PeopleSoft) example:
    drop table t purge;
    drop view a;
    drop view b;

    create table t (a number);
    insert into t values (1);
    create view a as select a from t;
    create view b as select a from a;

    column object_name format a12
    select object_type, object_name, status
    from user_objects where object_name IN('T','A','B')
    /

OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            VALID
VIEW                    A            VALID
    Dropping and recreating view A renders view B invalid.
    drop view a;
    create view a as select a from t;

    select object_type, object_name, status
    from user_objects
    where object_name IN('T','A','B')
    /

OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            INVALID
VIEW                    A            VALID

    select * from b;
    select object_type, object_name, status
    from user_objects where object_name IN('T','A','B')
    /
    Just querying B has made it valid again.
    OBJECT_TYPE             OBJECT_NAME  STATUS
    ----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            VALID
VIEW                    A            VALID
Let's make B invalid again by rebuilding A, but this time I will change the name of the column in view A from A to T so that view B cannot compile without an error.
    drop view a;
    create view a (t) as select a from t;

    select object_type, object_name, status
    from user_objects where object_name IN('T','A','B')
    /
    EXEC DBMS_UTILITY.compile_schema(schema => 'SCOTT');
    select object_type, object_name, status
    from user_objects where object_name IN('T','A','B')
    /

OBJECT_TYPE             OBJECT_NAME  STATUS
----------------------- ------------ -------
TABLE                   T            VALID
VIEW                    B            INVALID
VIEW                    A            VALID
And I can query the errors from USER_ERRORS:
NAME
---------------------------------------------------
TYPE           SEQUENCE       LINE   POSITION
------------ ---------- ---------- ----------
TEXT
---------------------------------------------------
ATTRIBUTE MESSAGE_NUMBER
--------- --------------
B
VIEW                  1          0          0
ORA-00904: "A": invalid identifier
ERROR                  0
N.B.: if you use CREATE OR REPLACE VIEW, the dependent view is not left invalid unless there is an error (see the sketch below). Unfortunately, Application Designer always drops and recreates views.
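A quick way to see the difference, reusing the objects above: recreate both views so that everything is valid, then replace A in place. Because the replacement compiles cleanly, B should be left valid (on recent Oracle versions with fine-grained dependency tracking it is not even invalidated):
create or replace view a as select a from t;
create or replace view b as select a from a;

rem replace A in place; B is not left invalid because A still compiles cleanly
create or replace view a as select a from t;

select object_type, object_name, status
from user_objects where object_name IN('T','A','B')
/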

    nVision Performance Tuning 12: Hinting nVision with SQL Profiles

    Wed, 2017-12-20 10:00
    This blog post is part of a series that discusses how to get optimal performance from PeopleSoft nVision reporting as used in General Ledger.  It is a PeopleSoft specific version of a posting on my Oracle blog.

As I explained earlier in this series, it is not possible to add hints to nVision.  The dynamic nature of the nVision SQL means that it is not possible to use SQL Patches.  nVision SQL statements contain literal values and never use bind variables.  When dynamic selectors are used, the SELECTOR_NUM will be different for every execution. A SQL_ID found in one report will not be seen again in another report.  Even static selector numbers will change after the tree is updated or when a new tree is created.
It is possible to use SQL Profiles to introduce hints because they can optionally match the force matching signature of a SQL statement.  SQL statements that differ only in the literal values they contain will have different SQL_IDs but the same force matching signature.  Although you will still have a lot of force matching signatures, you should find far fewer of them than SQL_IDs.  Picking out the signatures that account for the most elapsed execution time and creating profiles for them is manageable.
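As a sketch of how the heaviest signatures might be picked out of the AWR (the module filter RPTBOOK is how nVision report books appear in the database session; FETCH FIRST requires 12c):
select s.force_matching_signature
,      count(distinct s.sql_id) sql_ids
,      sum(s.elapsed_time_delta)/1e6 elapsed_secs
from   dba_hist_sqlstat s
where  s.module = 'RPTBOOK'
and    s.force_matching_signature > 0
group by s.force_matching_signature
order by elapsed_secs desc
fetch first 10 rows only
/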
Note: SQL Profiles require the Tuning Pack to be licensed.
    As far as is possible, good nVision performance should be achieved by setting appropriate tree performance options at tree level.  These are global settings.  You may find that a particular setting on a particular tree is not optimal for all reports.  You may then choose to override the tree-level setting in specific layouts.  You may also find that you still need hints to control execution plans.
In particular, parallel query can be an effective tactic in nVision performance tuning.  However, you should not simply put a degree of parallelism on PS_LEDGER or PS_LEDGER_BUDG, because that will invoke parallelism in many other processes.  I have found that even putting a degree of parallelism on a summary ledger table can easily result in too many concurrent parallel queries.  On OLTP systems, such as PeopleSoft, I recommend that parallelism should be used sparingly, and in a highly controlled and targeted fashion.
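As a simple sanity check (a sketch, using the ledger table names discussed in this series), you can confirm that no blanket degree of parallelism has been left on the ledger tables; on an OLTP system DEGREE should normally be 1:
select table_name, degree
from   all_tables
where  table_name in ('PS_LEDGER','PS_LEDGER_BUDG')
/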
Example
Let's take the following nVision query as an example.
    SELECT L2.TREE_NODE_NUM,L3.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT) 
    FROM PS_XX_SUM_CONSOL_VW A, PSTREESELECT05 L2, PSTREESELECT10 L3
    WHERE A.LEDGER='S_USMGT'
    AND A.FISCAL_YEAR=2017
    AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12
    AND (A.DEPTID BETWEEN 'A0000' AND 'A8999' OR A.DEPTID BETWEEN 'B0000' AND 'B9149'
    OR A.DEPTID='B9156' OR A.DEPTID='B9158' OR A.DEPTID BETWEEN 'B9165' AND 'B9999'
    OR A.DEPTID BETWEEN 'C0000' AND 'C9999' OR A.DEPTID BETWEEN 'D0000' AND 'D9999'
    OR A.DEPTID BETWEEN 'G0000' AND 'G9999' OR A.DEPTID BETWEEN 'H0000' AND 'H9999'
    OR A.DEPTID='B9150' OR A.DEPTID=' ')
    AND L2.SELECTOR_NUM=10228
    AND A.BUSINESS_UNIT=L2.RANGE_FROM_05
    AND L3.SELECTOR_NUM=10231
    AND A.ACCOUNT=L3.RANGE_FROM_10
    AND A.CHARTFIELD1='0012345'
    AND A.CURRENCY_CD='GBP'
    GROUP BY L2.TREE_NODE_NUM,L3.TREE_NODE_NUM
    /
We can tell from the equality join conditions that the two selectors still joined to the query are dynamic selectors.
A third selector, on DEPTID, has been suppressed with the 'use literal values' performance option.  The number of DEPTID predicates in the statement will depend on the tree and the node selected for the report.  Note that if these change, the statement will no longer force match the same profile; SQL profiles might suddenly cease to work after a tree or selection criteria change.
This is the plan I get initially, without a profile. It doesn't perform well.
    Plan hash value: 808840077
-------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                       | Name              | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                                |                   |       |       | 10408 (100)|          |       |       |
|   1 |  HASH GROUP BY                                  |                   |   517 | 50666 | 10408   (1)| 00:00:01 |       |       |
|   2 |   HASH JOIN                                     |                   |   517 | 50666 | 10407   (1)| 00:00:01 |       |       |
|   3 |    PARTITION RANGE SINGLE                       |                   |   731 | 13158 |     3   (0)| 00:00:01 | 10228 | 10228 |
|   4 |     INDEX FAST FULL SCAN                        | PSAPSTREESELECT05 |   731 | 13158 |     3   (0)| 00:00:01 | 10228 | 10228 |
|   5 |    HASH JOIN                                    |                   |   518 | 41440 | 10404   (1)| 00:00:01 |       |       |
|   6 |     PARTITION RANGE SINGLE                      |                   |   249 |  5727 |     2   (0)| 00:00:01 | 10231 | 10231 |
|   7 |      INDEX FAST FULL SCAN                       | PSAPSTREESELECT10 |   249 |  5727 |     2   (0)| 00:00:01 | 10231 | 10231 |
|   8 |     PARTITION RANGE ITERATOR                    |                   |  7785 |  433K | 10402   (1)| 00:00:01 |    28 |    40 |
|   9 |      TABLE ACCESS BY LOCAL INDEX ROWID BATCHED  | PS_X_LEDGER_ACCTS |  7785 |  433K | 10402   (1)| 00:00:01 |    28 |    40 |
|  10 |       SORT CLUSTER BY ROWID BATCHED             |                   |  5373 |       |  5177   (1)| 00:00:01 |       |       |
|  11 |        INDEX SKIP SCAN                          | PS_X_LEDGER_ACCTS |  5373 |       |  5177   (1)| 00:00:01 |    28 |    40 |
-------------------------------------------------------------------------------------------------------------------------------------
    These are the hints I want to introduce (on Oracle 12c).
    SELECT /*+OPT_PARAM('parallel_degree_policy','AUTO') OPT_PARAM('parallel_min_time_threshold',2) 
    OPT_PARAM('parallel_degree_limit',4) REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)*/…
• Use automatic degree of parallelism, statement queuing and in-memory parallel execution.
• Invoke parallelism if the statement is estimated to run for at least 2 seconds.
• However, also limit the automatic parallelism to a degree of 4.
• Force materialized view rewrite.
• Use a Bloom filter when joining to the materialized view.
I have created a data-driven framework to create the profiles. A working storage table holds details of each force matching signature for which I want to create a profile.
    CREATE TABLE dmk_fms_profiles
    (force_matching_signature NUMBER NOT NULL
    ,sql_id VARCHAR2(13)
    ,plan_hash_value NUMBER
,module VARCHAR2(64)
,action VARCHAR2(64)
    ,report_id VARCHAR2(32) /*Application Specific*/
    ,tree_list CLOB /*Application Specific*/
    ,sql_profile_name VARCHAR2(30)
    ,parallel_min_time_threshold NUMBER
    ,parallel_degree_limit NUMBER
    ,other_hints CLOB
    ,delete_profile VARCHAR2(1)
    ,sql_text CLOB
    ,CONSTRAINT dmk_fms_profiles_pk PRIMARY KEY (force_matching_signature)
    ,CONSTRAINT dmk_fms_profiles_u1 UNIQUE (sql_id)
    ,CONSTRAINT dmk_fms_profiles_u2 UNIQUE (sql_profile_name)
    )
    /
Using conditional parallelism with PARALLEL_MIN_TIME_THRESHOLD, limited with PARALLEL_DEGREE_LIMIT, is an effective tactic with nVision, so I have specified columns in the metadata table for those hints; any other hints are injected via a string. I identified the problematic SQL by analysis with ASH, and hence also obtained the FORCE_MATCHING_SIGNATURE. The metadata is keyed by FORCE_MATCHING_SIGNATURE, and I have specified a meaningful name for each SQL profile.
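For illustration, this is a sketch of the sort of ASH query that can be used to find the heaviest statements and resolve their signatures (RPTBOOK is the module set by nVision report books; V$SQLAREA supplies the signature while the statement is still in the cursor cache):
select h.sql_id
,      a.force_matching_signature
,      count(*) ash_secs
from   v$active_session_history h
,      v$sqlarea a
where  h.sql_id = a.sql_id
and    h.module = 'RPTBOOK'
group by h.sql_id, a.force_matching_signature
order by ash_secs desc
/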
    INSERT INTO dmk_fms_profiles (force_matching_signature, parallel_min_time_threshold, parallel_degree_limit, other_hints, sql_profile_name) 
    VALUES (16625752171077499412, 1, 4, 'REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)', 'NVS_GBGL123I_BU_CONSOL_ACCOUNT');
    COMMIT;
Profiles are created using the text of a SQL statement rather than the SQL_ID or FORCE_MATCHING_SIGNATURE directly. Therefore the SQL_TEXT must be extracted from the AWR, so this method also requires that the SQL statement has been captured by an AWR snapshot.
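If a statement of interest has not yet been captured, it can be forced into the AWR while it is still in the cursor cache. A sketch, reusing the SQL_ID from the example report below:
rem check whether the statement has already been captured by AWR
select sql_id from dba_hist_sqltext where sql_id = '5pzxhha3392cs';
rem mark it for capture in every snapshot, then take a manual snapshot
rem while the statement is still in the cursor cache
EXEC DBMS_WORKLOAD_REPOSITORY.add_colored_sql(sql_id => '5pzxhha3392cs');
EXEC DBMS_WORKLOAD_REPOSITORY.create_snapshot;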
    UPDATE dmk_fms_profiles a
    SET (module, action, sql_id, plan_hash_value, sql_text)
    = (SELECT s.module, s.action, s.sql_id, s.plan_hash_value, t.sql_text
    FROM dba_hist_sqlstat s
    , dba_hist_sqltext t
    WHERE t.dbid = s.dbid
    AND t.sql_id = s.sql_id
    AND s.force_matching_signature = a.force_matching_signature
    AND s.snap_id = (
    SELECT MAX(s1.snap_id)
    FROM dba_hist_sqlstat s1
    WHERE s1.force_matching_signature = a.force_matching_signature
    AND s1.module = 'RPTBOOK' /*Application Specific*/
    AND s1.action LIKE 'PI=%:%:%' /*Application Specific*/)
    AND s.module = 'RPTBOOK' /*Application Specific*/
    AND s.action LIKE 'PI=%:%:%' /*Application Specific*/
    AND ROWNUM = 1)
    WHERE sql_id IS NULL
    /

    MERGE INTO dmk_fms_profiles u
    USING (
    SELECT a.sql_id, a.force_matching_signature, p.name
    FROM dmk_fms_profiles a
    , dba_sql_profiles p
    WHERE p.signature = a.force_matching_signature
    ) s
    ON (s.force_matching_signature = u.force_matching_signature)
    WHEN MATCHED THEN UPDATE
    SET u.sql_profile_name = s.name
    /
    Columns REPORT_ID and TREE_LIST contain application specific information extracted from the application instrumentation and tree selector logging.
    /*Application Specific - extract report ID from ACTION*/
    UPDATE dmk_fms_profiles a
SET report_id = substr(regexp_substr(a.action,':([A-Za-z0-9_-])+',1,1),2)
    WHERE report_id IS NULL
    AND action IS NOT NULL
    /
    /*Application Specific - extract financial analysis tree from application logging*/
    UPDATE dmk_fms_profiles a
    SET tree_list =
    (SELECT LISTAGG(tree_name,', ') WITHIN GROUP (ORDER BY tree_name)
    FROM (select l.tree_name, MAX(l.length) length
    FROM dba_hist_sql_plan p
    , ps_nvs_treeslctlog l
    WHERE p.plan_hash_value = a.plan_hash_value
    AND p.sql_id = a.sql_id
    AND p.object_name like 'PS%TREESELECT__'
AND p.partition_start = p.partition_stop
    AND p.partition_start = l.selector_num
    AND l.tree_name != ' '
    GROUP BY l.tree_name)
    )
    WHERE tree_list IS NULL
    /

    Now I can produce a simple report of the metadata in order to see what profiles should be created.
    column sql_text word_wrapped on format a110
    column module format a8
    column report_id heading 'nVision|Report ID'
    column tree_list word_wrapped on format a20
    column plan_hash_value heading 'SQL Plan|Hash Value' format 9999999999
    column parallel_min_time_threshold heading 'Parallel|Min Time|Threshold' format 999
    column parallel_degree_limit heading 'Parallel|Degree|Limit' format 999
    set long 500
    SELECT * FROM dmk_fms_profiles
    /

    SQL Plan
    FORCE_MATCHING_SIGNATURE SQL_ID Hash Value MODULE ACTION
    ------------------------ ------------- ----------- -------- ----------------------------------------------------------------
    Parallel Parallel
    nVision Min Time Degree
    Report ID TREE_LIST SQL_PROFILE_NAME Threshold Limit D
    -------------------------------- -------------------- ------------------------------ --------- -------- -
    OTHER_HINTS
    --------------------------------------------------------------------------------
    SQL_TEXT
    --------------------------------------------------------------------------------------------------------------
    12803175998948432502 5pzxhha3392cs   988048519 RPTBOOK  PI=3186222:USGL233I:10008
USGL233I                         BU_GAAP_CONSOL,      NVS_GBGL123I_BU_CONSOL_ACCOUNT         1        4
                                 GAAP_ACCOUNT
    REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)
    SELECT L2.TREE_NODE_NUM,A.ACCOUNT,SUM(A.POSTED_TOTAL_AMT) FROM PS_LEDGER A, PSTREESELECT05 L2, PSTREESELECT10 L3
    WHERE A.LEDGER='S_GBMGT' AND A.FISCAL_YEAR=2017 AND A.ACCOUNTING_PERIOD BETWEEN 0 AND 12 AND (A.DEPTID BETWEEN
    'A0000' AND 'A8999' OR A.DEPTID BETWEEN 'B0000' AND 'B9149' OR A.DEPTID='B9156' OR A.DEPTID='B9158' OR A.DEPTID
    BETWEEN 'B9165' AND 'B9999' OR A.DEPTID BETWEEN 'C0000' AND 'C9999' OR A.DEPTID BETWEEN 'D0000' AND 'D9999' OR
    A.DEPTID BETWEEN 'G0000' AND 'G9999' OR A.DE
    Next, this PL/SQL block will create or recreate SQL profiles from the metadata. The various hints can be concatenated into a single string and passed as a parameter to SQLPROF_ATTR. The SQL text is passed as a parameter when the profile is created.
    set serveroutput on
DECLARE
  l_signature NUMBER;
  h SYS.SQLPROF_ATTR;
  e_no_sql_profile EXCEPTION;
  PRAGMA EXCEPTION_INIT(e_no_sql_profile, -13833);
  l_description CLOB;
BEGIN
  FOR i IN (
    SELECT f.*, s.name
    FROM dmk_fms_profiles f
      LEFT OUTER JOIN dba_sql_profiles s
      ON f.force_matching_signature = s.signature
  ) LOOP

    BEGIN /*drop any existing profile that is flagged for deletion*/
      IF i.name IS NOT NULL AND i.delete_profile = 'Y' THEN
        dbms_sqltune.drop_sql_profile(name => i.name);
      END IF;
    EXCEPTION WHEN e_no_sql_profile THEN NULL;
    END;

    IF i.delete_profile = 'Y' THEN
      NULL;
    ELSIF i.sql_text IS NOT NULL THEN
      /*concatenate the hints into a single attribute between the outline delimiters*/
      h := SYS.SQLPROF_ATTR(
        q'[BEGIN_OUTLINE_DATA]',
        CASE WHEN i.parallel_min_time_threshold>=0 THEN 'OPT_PARAM(''parallel_degree_policy'',''AUTO'') ' END||
        CASE WHEN i.parallel_degree_limit >=0 THEN 'OPT_PARAM(''parallel_degree_limit'','||i.parallel_degree_limit||') ' END||
        CASE WHEN i.parallel_min_time_threshold>=0 THEN 'OPT_PARAM(''parallel_min_time_threshold'','||i.parallel_min_time_threshold||') ' END||
        i.other_hints,
        q'[END_OUTLINE_DATA]');

      l_signature := DBMS_SQLTUNE.SQLTEXT_TO_SIGNATURE(i.sql_text);
      l_description := 'coe nVision '||i.report_id||' '||i.tree_list||' '||i.force_matching_signature||'='||l_signature;
      dbms_output.put_line(i.sql_profile_name||' '||l_description);

      DBMS_SQLTUNE.IMPORT_SQL_PROFILE (
        sql_text    => i.sql_text,
        profile     => h,
        name        => i.sql_profile_name,
        description => l_description,
        category    => 'DEFAULT',
        validate    => TRUE,
        replace     => TRUE,
        force_match => TRUE /* TRUE:FORCE (match even when different literals in SQL). FALSE:EXACT (similar to CURSOR_SHARING) */ );

    END IF;
  END LOOP;
END;
/
    I can verify that the profile has been created, and the hints that it contains, thus:
SELECT x.profile_name,
       xmltype(x.comp_data) as xmlval
FROM dmk_fms_profiles p
, dba_sql_profiles s
, dbmshsxp_sql_profile_attr x
WHERE x.profile_name = p.sql_profile_name
AND s.name = p.sql_profile_name
AND s.status = 'ENABLED'
ORDER BY 1
/

    PROFILE_NAME
    ------------------------------
    XMLVAL
    ------------------------------------------------------------------------------------------------
    NVS_GBGL123I_BU_CONSOL_ACCOUNT
    <![CDATA[BEGIN_OUTLINE_DATA]]>
    <![CDATA[OPT_PARAM('parallel_degree_policy','AUTO') OPT_PARAM('parallel_degree_limit',4) OPT_PARAM('parallel_min_time_threshold',1) REWRITE PX_JOIN_FILTER(PS_XX_SUM_GCNSL_MV)]]>
    <![CDATA[END_OUTLINE_DATA]]>
    And now when the application runs, I get the plan that I wanted.
• The query runs in parallel.
• The SQL is rewritten to use the materialized view.
• There are no indexes on the materialized view, so it must be full scanned.
• A Bloom filter is generated from PSTREESELECT10 and applied to the materialized view.
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                                           | Name               | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |    TQ  |IN-OUT| PQ Distrib |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT                                    |                    |       |       |  2219 (100)|          |       |       |        |      |            |
|   1 |  PX COORDINATOR                                     |                    |       |       |            |          |       |       |        |      |            |
|   2 |   PX SEND QC (RANDOM)                               | :TQ10004           |   111 |  9879 |  2219   (6)| 00:00:01 |       |       |  Q1,04 | P->S | QC (RAND)  |
|   3 |    HASH GROUP BY                                    |                    |   111 |  9879 |  2219   (6)| 00:00:01 |       |       |  Q1,04 | PCWP |            |
|   4 |     PX RECEIVE                                      |                    |   111 |  9879 |  2219   (6)| 00:00:01 |       |       |  Q1,04 | PCWP |            |
|   5 |      PX SEND HASH                                   | :TQ10003           |   111 |  9879 |  2219   (6)| 00:00:01 |       |       |  Q1,03 | P->P | HASH       |
|   6 |       HASH GROUP BY                                 |                    |   111 |  9879 |  2219   (6)| 00:00:01 |       |       |  Q1,03 | PCWP |            |
|   7 |        HASH JOIN                                    |                    |   536 | 47704 |  2218   (6)| 00:00:01 |       |       |  Q1,03 | PCWP |            |
|   8 |         PX RECEIVE                                  |                    |   536 | 38056 |  2215   (6)| 00:00:01 |       |       |  Q1,03 | PCWP |            |
|   9 |          PX SEND HYBRID HASH                        | :TQ10002           |   536 | 38056 |  2215   (6)| 00:00:01 |       |       |  Q1,02 | P->P | HYBRID HASH|
|  10 |           STATISTICS COLLECTOR                      |                    |       |       |            |          |       |       |  Q1,02 | PCWC |            |
|  11 |            HASH JOIN                                |                    |   536 | 38056 |  2215   (6)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
|  12 |             BUFFER SORT                             |                    |       |       |            |          |       |       |  Q1,02 | PCWC |            |
|  13 |              JOIN FILTER CREATE                     | :BF0000            |   236 |  3776 |     2   (0)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
|  14 |               PX RECEIVE                            |                    |   236 |  3776 |     2   (0)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
|  15 |                PX SEND BROADCAST                    | :TQ10000           |   236 |  3776 |     2   (0)| 00:00:01 |       |       |        | S->P | BROADCAST  |
|  16 |                 PARTITION RANGE SINGLE              |                    |   236 |  3776 |     2   (0)| 00:00:01 | 36774 | 36774 |        |      |            |
|  17 |                  INDEX FAST FULL SCAN               | PSAPSTREESELECT10  |   236 |  3776 |     2   (0)| 00:00:01 | 36774 | 36774 |        |      |            |
|  18 |             JOIN FILTER USE                         | :BF0000            |  8859 |  475K |  2213   (6)| 00:00:01 |       |       |  Q1,02 | PCWP |            |
|  19 |              PX BLOCK ITERATOR                      |                    |  8859 |  475K |  2213   (6)| 00:00:01 |    29 |    41 |  Q1,02 | PCWC |            |
|  20 |               MAT_VIEW REWRITE ACCESS STORAGE FULL  | PS_XX_SUM_GCNSL_MV |  8859 |  475K |  2213   (6)| 00:00:01 |    29 |    41 |  Q1,02 | PCWP |            |
|  21 |         BUFFER SORT                                 |                    |       |       |            |          |       |       |  Q1,03 | PCWC |            |
|  22 |          PX RECEIVE                                 |                    |   731 | 13158 |     3   (0)| 00:00:01 |       |       |  Q1,03 | PCWP |            |
|  23 |           PX SEND HYBRID HASH                       | :TQ10001           |   731 | 13158 |     3   (0)| 00:00:01 |       |       |        | S->P | HYBRID HASH|
|  24 |            PARTITION RANGE SINGLE                   |                    |   731 | 13158 |     3   (0)| 00:00:01 | 36773 | 36773 |        |      |            |
|  25 |             INDEX FAST FULL SCAN                    | PSAPSTREESELECT05  |   731 | 13158 |     3   (0)| 00:00:01 | 36773 | 36773 |        |      |            |
---------------------------------------------------------------------------------------------------------------------------------------------------------------------
Conclusion
SQL Profiles can be used in much the same way as SQL Patches to introduce hints into application SQL without changing the code, the difference being that SQL Profiles can force match SQL.  However, SQL Profiles do require the Tuning Pack to be licensed, whereas SQL Patches and Baselines do not.
Applying force matching SQL profiles to nVision is an effective, though reactive, tactic.   Tree changes can result in changes to the number of literal criteria in nVision SQL statements, which may therefore cease to match existing profiles.  nVision will always require ongoing monitoring and the introduction of new profiles.
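For that monitoring, here is a sketch of one possible check: heavy nVision statements in the cursor cache whose signatures are not yet covered by any force matching profile.
select s.force_matching_signature
,      sum(s.elapsed_time)/1e6 elapsed_secs
from   v$sql s
where  s.module = 'RPTBOOK'
and    s.force_matching_signature > 0
and    not exists (select 1
                   from   dba_sql_profiles p
                   where  p.signature = s.force_matching_signature)
group by s.force_matching_signature
order by elapsed_secs desc
/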
