The Anti-Kyte

Oracle - for when it was like that when you got there

Generating CSV files from PL/SQL the Very Easy Way.

Thu, 2020-02-13 15:31

This post is dedicated to Morten Braten and William Robertson.
Thanks to both of you for saving me a lot of time (and typing)…

XML, YAML, JSON. When it comes to transferring data between systems, they’ve all been flavour of the month at one time or another. However, good old CSV is still the format of choice when the batch window opens.
Not only is it simple, it adds comparatively little overhead to the size of the data file being generated.

OK – strictly speaking, CSV means “Comma-Separated Values”, but these days it’s become synonymous with delimited data in general.
Whatever separator character you use, generating CSV is considerably easier these days…unless you’re using PL/SQL in a batch (as opposed to interactive) process.
That said, Connor MacDonald does have a clever way of utilising the functionality available in a command line tool such as SQL*Plus by means of a DBMS_SCHEDULER job.

If for some reason that won’t work for you, it looks like you’re going to have to hack out some custom PL/SQL to get the job done…or you could have a look at a couple of the options available in what other technologies would refer to as “the ecosystem”.
What I’m going to cover is :

  • The pain of hand-cranking delimited SQL
  • A solution offered in Morten Braten’s Alexandria PL/SQL Utility Library
  • An alternative solution made available by William Robertson
  • The final PL/SQL procedure

    Hand-cranking delimited code

    We’re working in a Data Warehouse running on Oracle. We need to provide a data feed to a downstream system.
    The feed is comma separated with values enclosed by quotes.
    The query to generate the data for the feed is reasonably straightforward :

    select dept.department_id, dept.department_name, loc.city, coun.country_id
    from departments dept
    inner join locations loc
        on dept.location_id = loc.location_id
    inner join countries coun
        on loc.country_id = coun.country_id  
    inner join regions reg
        on reg.region_id = coun.region_id
    where reg.region_name = 'Americas';  -- this will be passed as a parameter in the run.  

    All ok so far. However, when it comes to tweaking the query to generate the output, things get a bit more fiddly.
    The query in the package will have to change to something like this :

    select '"'||dept.department_id||'",'||
        '"'||dept.department_name||'",'||
        '"'||loc.city||'",'||
        '"'||coun.country_id||'"'
    from departments dept
    inner join locations loc
        on dept.location_id = loc.location_id
    inner join countries coun
        on loc.country_id = coun.country_id  
    inner join regions reg
        on reg.region_id = coun.region_id
    where reg.region_name = 'Americas';    

    Typically, these sorts of files are generated with dozens (sometimes hundreds) of attributes. If looking at all of those quotes and commas is going to give you a headache, you might seek to mitigate the worst effects by replacing the characters with their ASCII values :

    select chr(34)||dept.department_id||chr(34)||chr(44)||
        chr(34)||dept.department_name||chr(34)||chr(44)||
        chr(34)||loc.city||chr(34)||chr(44)||
        chr(34)||coun.country_id||chr(34)
    from departments dept
    inner join locations loc
        on dept.location_id = loc.location_id
    inner join countries coun
        on loc.country_id = coun.country_id  
    inner join regions reg
        on reg.region_id = coun.region_id
    where reg.region_name = 'Americas';    

    The double-quotes surrounding the data values are designed to escape any commas in the data and prevent them from being read as field terminators. Unfortunately, this doesn’t help much if the data also contains double-quotes. For this reason, I tend to prefer a pipe character as a delimiter. This allows us to dispense with the double-quotes. As a double-pipe is the SQL concatenation operator, let’s use the ASCII value instead for the delimiter itself :
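    As an aside, the standard CSV convention for the data-contains-quotes problem is to escape an embedded double-quote by doubling it, rather than switching delimiter. Purely for comparison (this Python sketch is not part of the Oracle solution), here is that convention applied by Python's csv module :

```python
import csv
import io

# Quote every field, and double any embedded quote character -
# the escaping convention most CSV consumers understand.
buf = io.StringIO()
writer = csv.writer(buf, quoting=csv.QUOTE_ALL)
writer.writerow(['50', 'Shipping, Freight', 'He said "hello"'])

print(buf.getvalue().strip())
# → "50","Shipping, Freight","He said ""hello"""
```

    Note how the embedded comma is neutralised by the quotes, and the embedded quote by doubling. Using a pipe delimiter simply sidesteps both problems.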

    select dept.department_id||chr(124)||
        dept.department_name||chr(124)||
        loc.city||chr(124)||
        coun.country_id
    from departments dept
    inner join locations loc
        on dept.location_id = loc.location_id
    inner join countries coun
        on loc.country_id = coun.country_id  
    inner join regions reg
        on reg.region_id = coun.region_id
    where reg.region_name = 'Americas';      

    Wouldn’t it be nice if there was some way we could just persuade Oracle to magically write our delimiters for us ?
    Well, I don’t know about Oracle, but Morten Braten will have a go for you…

    The Alexandria CSV_UTIL_PKG package

    As with most Alexandria packages, there’s a demo on how to use CSV_UTIL_PKG in demos/csv_util_pkg_demo.sql.
    However, as explained in Morten’s post on this subject, you need to create some types first …

    create type t_str_array as table of varchar2(4000);
    /

    create type t_csv_line as object (
      line_number  number,
      line_raw     varchar2(4000),
      c001         varchar2(4000),
      c002         varchar2(4000),
      c003         varchar2(4000),
      c004         varchar2(4000),
      c005         varchar2(4000),
      c006         varchar2(4000),
      c007         varchar2(4000),
      c008         varchar2(4000),
      c009         varchar2(4000),
      c010         varchar2(4000),
      c011         varchar2(4000),
      c012         varchar2(4000),
      c013         varchar2(4000),
      c014         varchar2(4000),
      c015         varchar2(4000),
      c016         varchar2(4000),
      c017         varchar2(4000),
      c018         varchar2(4000),
      c019         varchar2(4000),
      c020         varchar2(4000)
    );
    /

    create type t_csv_tab as table of t_csv_line;
    /

    With these types in place, we can install the package from the Alexandria GitHub repo.
    The files we’re looking for are under the ora directory:

    • csv_util_pkg.pks
    • csv_util_pkg.pkb

    Download them and run them in the order they are listed here ( the .pks is the header and the .pkb is the body).

    Now, we can take some of the commas out of our code…

    select csv_util_pkg.array_to_csv(
            t_str_array(dept.department_id, dept.department_name, loc.city, coun.country_id))
    from departments dept
    inner join locations loc
        on dept.location_id = loc.location_id
    inner join countries coun
        on loc.country_id = coun.country_id  
    inner join regions reg
        on reg.region_id = coun.region_id
    where reg.region_name = 'Americas';

    By default, the results are output using a comma as the separator. However, we can change this easily enough:

    select csv_util_pkg.array_to_csv(
        t_str_array(dept.department_id, dept.department_name, loc.city, coun.country_id),
        chr(124))
    from departments dept
    inner join locations loc
        on dept.location_id = loc.location_id
    inner join countries coun
        on loc.country_id = coun.country_id  
    inner join regions reg
        on reg.region_id = coun.region_id
    where reg.region_name = 'Americas';

    The output is pretty much what we’re looking for :

    50|Shipping|South San Francisco|US
    130|Corporate Tax|Seattle|US
    140|Control And Credit|Seattle|US
    150|Shareholder Services|Seattle|US
    210|IT Support|Seattle|US
    230|IT Helpdesk|Seattle|US
    240|Government Sales|Seattle|US
    250|Retail Sales|Seattle|US
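    The payoff on the consuming side is that each record can be split on the pipe with no quote-handling at all. A quick sketch of what a hypothetical downstream loader might do (again, Python purely for illustration) :

```python
# Split a pipe-delimited feed record back into its fields.
# No quote-stripping is needed because the feed never quotes values.
line = "50|Shipping|South San Francisco|US"
fields = line.split("|")

print(fields)  # → ['50', 'Shipping', 'South San Francisco', 'US']
```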

    Now all you need to do is put this into your PL/SQL procedure and write some code to save it to a file ( possibly using the Alexandria FILE_UTIL_PKG package).
    If that seems like too much effort though…

    William Robertson’s CSV package

    Whilst the Alexandria package is a bit of an all-rounder, William Robertson has produced something that’s rather more tailored to producing CSV files.

    The package – simply called CSV – has a function that produces delimited format from a refcursor. It also contains a procedure to write csv data to a file.
    Let’s take a closer look…

    Installation of the package simply involves downloading the code from here, and running it. Both the package header and body are in a single file – csv.pkg.

    The package’s REPORT function takes a slightly different approach, accepting a ref cursor as an argument :

    select column_value
    from table(csv.report(cursor(
        select dept.department_id, dept.department_name, loc.city, coun.country_id
        from departments dept
        inner join locations loc
            on dept.location_id = loc.location_id
        inner join countries coun
            on loc.country_id = coun.country_id  
        inner join regions reg
            on reg.region_id = coun.region_id
        where reg.region_name = 'Americas')));

    Once again, the delimiter character is a comma by default and, once again, we can override this :

    select column_value
    from table(csv.report(cursor(
        select dept.department_id, dept.department_name, loc.city, coun.country_id
        from departments dept
        inner join locations loc
            on dept.location_id = loc.location_id
        inner join countries coun
            on loc.country_id = coun.country_id  
        inner join regions reg
            on reg.region_id = coun.region_id
        where reg.region_name = 'Americas'),
        p_separator => chr(124)));

    We can go even further. The package can add some common things that you may need to include in a csv that’s serving as a feed file for another system. These include :

    • a header record with a delimited list of attributes
    • a label attribute for each row of data to make these rows easier to identify for the program loading the csv
    • a rowcount as a trailer record

    All of which can be accomplished thus :

    select column_value
    from table(csv.report(cursor(
        select dept.department_id, dept.department_name, loc.city, coun.country_id
        from departments dept
        inner join locations loc
            on dept.location_id = loc.location_id
        inner join countries coun
            on loc.country_id = coun.country_id  
        inner join regions reg
            on reg.region_id = coun.region_id
        where reg.region_name = 'Americas'),
        p_separator => chr(124),
        p_label => 'DATA',
        p_heading => 'Y',
        p_rowcount => 'Y'));

    Not only that, but the package also facilitates creating the file itself…

    Generating a file from the CSV package

    First of all, we need access to a directory. So, the DBA needs to do something like :

    grant read, write on directory app_files to hr;

    Now, connected as HR, we can create a PL/SQL procedure to generate our csv files like this :

    create or replace procedure depts_by_region( i_region regions.region_name%type)
    as
        l_fname varchar2(100);
        rc sys_refcursor;
    begin
        l_fname := i_region||'_depts.csv';

        open rc for
        select dept.department_id, dept.department_name, loc.city, coun.country_id
        from departments dept
        inner join locations loc
            on dept.location_id = loc.location_id
        inner join countries coun
            on loc.country_id = coun.country_id  
        inner join regions reg
            on reg.region_id = coun.region_id
        where reg.region_name = i_region; 

        csv.write_file(
            p_dataset => rc,
            p_separator => chr(124),
            p_label => 'DATA',
            p_heading => 'Y',
            p_rowcount => 'Y',
            p_directory => 'APP_FILES',
            p_filename => l_fname);
    end depts_by_region;
    /

    …and run it like this :

    begin
        depts_by_region('Americas');
    end;
    /
    Sure enough, when we look on the server, we can see :

    [mike@frea ~]$ cd /opt/oracle/app_files/
    [mike@frea app_files]$ ls -l *.csv
    -rw-r--r-- 1 oracle oinstall 840 Feb 11 20:56 Americas_depts.csv

    …and the file looks like this :

    DATA|50|Shipping|South San Francisco|US
    DATA|130|Corporate Tax|Seattle|US
    DATA|140|Control And Credit|Seattle|US
    DATA|150|Shareholder Services|Seattle|US
    DATA|210|IT Support|Seattle|US
    DATA|230|IT Helpdesk|Seattle|US
    DATA|240|Government Sales|Seattle|US
    DATA|250|Retail Sales|Seattle|US

    There are some limitations. For example, William points out that any row that’s longer than 4000 characters will cause the package to break. However, unless you’re generating fairly “wide” lines, he’s pretty much written your code for you.

    Running a “background” job in PL/SQL

    Mon, 2020-01-20 14:11

    Teddy has observed the recent General Election campaign with some interest and has concluded that he has what it takes to be the next Prime Minister.

    It’s not just the haircut, which does now look extremely Prime Ministerial…

    Politics is a robust business but Teddy’s more than capable of playing “ruff” :

    He firmly believes in the need to streamline Government at Cabinet level, which has the incumbent Chief Mouser to the Cabinet Office a little nervous.
    He’s also well used to being followed around by a “pooper scooper”. And not to put too fine a point on it, there’s more than one reason that he’s known as a “shaggy” dog.

    If he’s going to make it in politics, Teddy knows that he doesn’t have time to waste waiting for that pesky long-running job he’s just started. Oh no, he needs to use his evenings to get on with building his power base.
    Fortunately, Oracle facilitates detached execution of PL/SQL blocks by means of the DBMS_SCHEDULER package. Now, I know what you’re thinking, that’s going to involve a lot of fiddly setup with schedules and windows and chains and stuff. Well, you may be pleasantly surprised…

    The package

    Teddy has a packaged procedure that he needs to test :

    create or replace package long_runner as
        procedure write_log( 
            i_msg_type in logs.message_type%type,
            i_msg logs.message%type);

        procedure marathon( i_sleep in pls_integer, i_string in varchar2);
    end long_runner;
    /

    create or replace package body long_runner as
        procedure write_log( 
            i_msg_type in logs.message_type%type,
            i_msg logs.message%type)
        is
            pragma autonomous_transaction;
        begin
            insert into logs( message_type, message)
            values( i_msg_type, i_msg);
            commit;
        end write_log;    

        procedure marathon( i_sleep in pls_integer, i_string in varchar2)
        is
            l_msg logs.message%type;
        begin
            write_log('INFO', 'i_sleep : '||i_sleep);
            write_log('INFO', 'i_string : '||i_string);
            for i in 1..26 loop
                l_msg := null;
                for j in 1..i loop
                    l_msg := l_msg||'z';
                end loop;
                write_log('INFO', initcap( l_msg));
                dbms_session.sleep( i_sleep);
            end loop;
            write_log('INFO', 'Completed');
        end marathon;    
    end long_runner;
    /

    Normally, he’d kick it off in a simple PL/SQL block :

    begin
        long_runner.marathon( 30, 'Teddy for PM!');
    end;
    /

    Unfortunately, it’s almost time to leave for the day and he doesn’t trust Windows not to do an update and reboot itself.

    Fortunately, Teddy’s not above a (quick and) dirty trick…

    declare
        stmnt_block varchar2(4000);
    begin
        stmnt_block := q'[
            begin
                long_runner.marathon( 30, 'Teddy for PM!');
            end;
        ]';

        dbms_scheduler.create_job(
            job_name => 'checkit_tomorrow',
            job_type => 'PLSQL_BLOCK',
            job_action => stmnt_block,
            start_date => sysdate, -- run now
            enabled => true,
            auto_drop => true,
            comments => 'My background job - because I have a life');
    end;
    /

    He’s created a DBMS_SCHEDULER job on-the-fly. The job is set to run immediately by setting the start_date to now. The job will be dropped once it’s completed ( auto_drop set to true).

    Run this and it returns almost immediately. That’s because the job has been submitted rather than the PL/SQL block having finished.
    If we want to check progress, we can take a look at the scheduler views…
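    This “submit and return immediately” behaviour is the same pattern you see in any asynchronous job API. As a loose analogy (a Python sketch, purely illustrative and nothing to do with Oracle itself) :

```python
import time
from concurrent.futures import ThreadPoolExecutor

def marathon():
    # stand-in for the long-running PL/SQL call
    time.sleep(0.2)
    return 'Completed'

executor = ThreadPoolExecutor(max_workers=1)

# submit() returns immediately, much as dbms_scheduler.create_job does;
# the work runs detached from the caller
future = executor.submit(marathon)

# ...the caller gets on with other things, and checks the outcome later
print(future.result())  # → Completed
```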

    Tracking job progress

    When the job starts, we can see it in USER_SCHEDULER_JOBS :

    select to_char(start_date, 'DD-MON-YYYY HH24:MI') as start_date, 
        state
    from user_scheduler_jobs
    where job_name = 'CHECKIT_TOMORROW';

    START_DATE                 STATE               
    -------------------------- --------------------
    20-JAN-2020 19:45          RUNNING             

    If, for any reason, Teddy needs to terminate the job, he can simply run :

    exec dbms_scheduler.stop_job('CHECKIT_TOMORROW', true);

    As noted previously, the job will drop itself on completion at which point it will disappear from these views.

    To see the outcome of the job :

    select log_id, log_date, status
    from user_scheduler_job_log
    where job_name = 'CHECKIT_TOMORROW';

        LOG_ID LOG_DATE                         STATUS                        
    ---------- -------------------------------- ------------------------------
         53216 20-JAN-20 GMT                    SUCCEEDED                     

    Sometimes we need further details. For example, if the job failed, the error stack will be included in the ERRORS column of the USER_SCHEDULER_JOB_RUN_DETAILS view :

    select run_duration, status,
        error#, additional_info, errors
    from user_scheduler_job_run_details
    where job_name = 'CHECKIT_TOMORROW'
    and log_id = 53216; -- log id from user_scheduler_job_log
    RUN_DURATION        STATUS                             ERROR# ADDITIONAL_INFO      ERRORS              
    ------------------- ------------------------------ ---------- -------------------- --------------------
    +00 00:05:00.000000 SUCCEEDED                               0                                     

    In this case however, it looks like Teddy’s run was perfect…just like it will be in 2024.

    Upcycling an old TV to a Smart TV with an Amazon FireTV Stick

    Mon, 2020-01-13 15:53

    Once upon a time, a TV was simply something you plugged into an aerial socket and used to watch whatever broadcasters were prepared to show at any given time.
    With the rise of streaming, TVs have now become “smart”. Well, smartish. I mean, some of them use really slick software and have lots of Apps that enable you to stream from a variety of sources.
    Others are a bit less sophisticated and have limited access to such Apps.
    Not only that, the world is now full of the old type of “dumb” but perfectly serviceable TVs.
    So, if you want to :

    • Revitalise an old non-smart TV
    • Make a new not-so-smart TV smarter
    • Place a TV somewhere that’s not within easy reach of an aerial socket

    …then help may be at hand. There are a number of devices you can plug into a TV to beef up its smartness.
    As these devices stream content over the internet, you don’t need an aerial to make use of them.
    Furthermore, they are a fraction of the cost of buying a new Smart TV.

    What follows is an account of how I’ve setup an Amazon Fire TV Stick and used it to access the wealth of streaming content that’s freely available in the UK, as well as some of the main subscription services…

    Why FireTV ?

    Let’s be clear here, I’ve not done any extensive research into streaming sticks so I’m not saying that FireTV is the best. However, it does fulfill all of our requirements whilst being a very cheap way of “smartening up” your TV. It also happens to be the only one I’ve used, although I have used it on several TVs now, with uniformly impressive results.

    Installation pre-requisites

    First up you’re going to need a TV with an HDMI port.
    You’ll also need access to a reasonable broadband connection. At the start of 2020, let’s say that anything over 12Mb/sec should be good enough if you’re not sharing, and maybe superfast broadband speeds (20Mb/sec and up) if you are.
    You can use slower speeds but this can be prone to buffering.

    Oh yes, you’ll also need one of these :


    Other stuff it would be useful to have to hand :

    • Your Wifi network name and password
    • Your Amazon account – if you have one – don’t worry if you haven’t
    • Your BBC Account – if you have one

    One point to note is that, if you do happen to have an Amazon Prime account, there’s a whole bunch of streaming content available to you for free, so it makes sense to use this account when setting up the FireTV.
    If you don’t have one, it’s not a problem however, as we’ll see shortly…

    Initial Setup

    Plug the FireTV stick into its power cable and then into a mains socket. Then connect it to an HDMI port on your TV.

    Now turn on the TV and play hunt-the-HDMI-port to find out which one your FireTV is plugged into.

    FireTV will now take you through the initial setup process which consists of :

    1. Set Language (English UK in my case)
    2. Set up Wifi to connect to your network
    3. Sign in with your Amazon Account ( or create a new one) – NOTE – you can set up an account without providing any payment details !
    4. Select the brand of TV you are using so that the power and volume buttons on the FireTV remote will work

    At this point the setup will give you the opportunity to install some of the more popular apps. Don’t worry if you can’t find what you’re looking for, because we’ll add a lot more apps shortly.
    So, select Netflix, YouTube and anything else being offered that takes your fancy.

    Finally, we have the FireTV Home screen. Before we do anything else however, we need to go to the menu that’s displayed at the top of the screen and select Settings
    Scroll across and select My Fire TV.
    Then select About from the list that pops up.
    Finally you need to select the option Install Update if it’s there.

    If there is a pending update, FireTV will download it and then either automatically re-start or prompt you to re-start it.

    Once it’s re-booted, you can now download your apps.
    Back on the Home screen menu, navigate to Apps. This opens up the FireTV app store.
    If you navigate to Categories then Entertainment, you’ll find the TV services you need to access a significant chunk of the Freeview channels, along with some subscription services.

    Some of these apps will require registration ( or use of an existing account), however all of the ones listed below can be installed and used without having to pay anything.
    Note that, as a TV License payer, you will have access to a veritable treasure trove of content on BBC iPlayer, including classic programs, box sets and even movies.

    Streaming Freeview Channels

    The mapping between Freeview Channels and Apps is as follows :

    Channel            App         Notes
    -------            ---         -----
    BBC 1              iPlayer
    BBC 2              iPlayer
    ITV 1              ITV Hub
    Channel 4          All 4       Catchup only
    Channel 5          My 5        Catchup only
    ITV 2              ITV Hub
    BBC 4              iPlayer
    ITV 3              ITV Hub
    Quest              TV Player
    E 4                All 4       Catchup only
    Really             TV Player
    More 4             All 4       Catchup only
    Dave               UK TV Play  Catchup only
    Drama              UK TV Play  Catchup only
    5 USA              My 5        Catchup only
    ITV 4              ITV Hub
    Yesterday          UK TV Play  Catchup only
    ITVBe              ITV Hub
    5 Star             My 5        Catchup only
    DMax               TV Player
    Food Network       TV Player
    Home               TV Player
    47                 TV Player
    Paramount Network  My 5        Catchup only
    5 Select           My 5        Catchup only
    Blaze              My 5        Catchup only
    PBS America        My 5        Catchup only
    CBBC               iPlayer
    CBeebies           iPlayer
    CITV               ITV Hub
    BBC News           iPlayer
    BBC Parliament     iPlayer
    Sky News           Sky News

    You can stream some channels in more than one App. For example BBC1 can be streamed live on TVPlayer or iPlayer.

    Other useful apps

    If you’re a BT TV subscriber, you can use the BT App to access some of the content and save yourself buying an extra set-top box. Note that BT do charge for access via the FireTV stick.
    Also, if you happen to have a Plex media server configured, well, there’s an App for that ( under the Apps and Games category).

    In fact, if you can’t find an app, it’s always worth asking Alexa.
    Right, back to my catch-up of a repeat of QI on Dave…

    Putting VALIDATE_CONVERSION front and centre, or possibly center

    Thu, 2019-12-19 13:35

    I recently had an interesting conversation with Suzanne, a native of Pittsburgh PA, about how the Americans and British spell some words differently.
    Adding some local colour ( color) to the discussion, Suzanne recounted the story of when the US Postal service, seeking to save money on printing, proposed removing “superfluous” characters from place names.
    Well, the burghers of Pittsburgh weren’t having any of that thank-you-very-much and so the City’s name remains unabridged to this day. The denizens of Harrisburg weren’t so fortunate.
    Whilst we may be separated by a common language, as programmers who work with Oracle, Suzanne and I do share the challenge of loading data into tables when the format of that data may not be entirely what we were expecting.
    If you’re fortunate enough to inhabit the sunlit uplands of Oracle 12c R2 and beyond, we’re about to explore the shiny new VALIDATE_CONVERSION function which promises to make your life that little bit easier.
    For those languishing on 11g, we’ll explore how we might implement a similar function in PL/SQL.

    The Data to be converted

    Let’s pretend we’ve loaded some data from a file. By definition the actual data as it exists in the file is a collection of character strings. We’ve taken a permissive approach to the load (i.e. load everything if at all possible). The target table for our load is :

    create table historic_events_stg(
        id varchar2(4000),
        event_desc varchar2(4000),
        event_ts varchar2(4000));

    A DML representation of the data load would look like this :

    insert into historic_events_stg(id, event_desc, event_ts)
        select 'ONE', q'[Webster's Dictionary first published]', 'April 14 1828' from dual union all
        select '2', q'[Wright Brother's first flight]', 'DEC-17-1903' from dual union all
        select '3', 'NHS established in the UK', '5 July 1948' from dual union all
        select '4', 'First Manned Moon Landing',  'July 20 1969 20:17:40 UTC' from dual union all
        select '5', 'Tim Berners-Lee proposes the World Wide Web',  '19901112' from dual union all
        select '6', q'[JK Rowling's first Harry Potter book published]', '30-JUN-1997' from dual union all
        select '7', 'iPhone released in the USA', '06/29/2007' from dual;

    Now, we’d like to load the data from the staging table into our application table which looks like this :

    create table historic_events (
        id number,
        event_desc varchar2(4000),
        event_ts timestamp with time zone);

    In 11g and previously this might prove a bit tricky without the use of something like Log Errors :

    insert into historic_events select * from historic_events_stg;
    ERROR at line 1:
    ORA-01722: invalid number

    Sometimes, it would be nice if we could just see which values are going to cause problems before we even attempt to load the data…


    Let’s see what might be causing our error…

    select id, 
        validate_conversion(id as number) as isnum
    from historic_events_stg;

    Yes, it’s that simple, just ask validate_conversion what happens when you try to convert all of the IDs in the table to numbers :

    ID  ISNUM
    --- -----
    ONE     0
    2       1
    3       1
    4       1
    5       1
    6       1
    7       1

    If the conversion is going to succeed, the function returns a 1. Otherwise, it returns a zero.
    It works for DATES too, although some extra effort may be needed.
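    If it helps to picture what the function is doing, it is effectively attempting the conversion and reporting whether or not it blew up. An equivalent sketch in Python terms (illustrative only, not how Oracle implements it) :

```python
def is_valid_number(expr):
    """Return 1 if the string converts cleanly to a number, otherwise 0."""
    try:
        float(expr)
        return 1
    except (TypeError, ValueError):
        return 0

print([is_valid_number(v) for v in ['ONE', '2', '3', '4', '5', '6', '7']])
# → [0, 1, 1, 1, 1, 1, 1]
```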

    In my current session, the NLS_DATE_FORMAT is :

    select sys_context('userenv', 'nls_date_format') from dual;

    So, when I check my date strings in the staging table, most of them fail to pass muster…

    select event_ts, 
        validate_conversion(event_ts as date) as isdate
    from historic_events_stg; 
    EVENT_TS                  ISDATE
    ------------------------- ------
    April 14 1828                  0
    DEC-17-1903                    0
    5 July 1948                    1
    July 20 1969 20:17:40 UTC      0
    19901112                       0
    30-JUN-1997                    1
    06/29/2007                     0

    We can specify the date format we’re validating to so we get a slightly different result with :

    select event_ts, 
        validate_conversion(event_ts as date, 'MON-DD-YYYY') as isdate
    from historic_events_stg;
    EVENT_TS                       ISDAT
    ------------------------------ -----
    April 14 1828                      1
    DEC-17-1903                        1
    5 July 1948                        0
    July 20 1969 20:17:40 UTC          0
    19901112                           0
    30-JUN-1997                        0
    06/29/2007                         0

    Unfortunately, it looks like we’re going to have to go through the record set and re-format most of the dates.
    OK, it’s more likely that you’d go back to whoever is giving you the feed and ask them to just pick one date format.

    The other possibility is to use the function to select the correct format to use for each conversion…

    select event_ts,
        case
            when validate_conversion( event_ts as date, 'MON-DD-YYYY') = 1 
                then to_date(event_ts, 'MON-DD-YYYY')
            when validate_conversion( event_ts as date, 'DD Month YYYY') = 1 
                then to_date( event_ts, 'DD Month YYYY')
            when validate_conversion( event_ts as date, 'DD-MON-YYYY') = 1 
                then to_date( event_ts, 'DD-MON-YYYY')
            when validate_conversion( event_ts as date, 'MM/DD/YYYY') = 1 
                then to_date(event_ts, 'MM/DD/YYYY')
            when validate_conversion( event_ts as date, 'YYYYMMDD') = 1 
                then to_date(event_ts, 'YYYYMMDD')
            when validate_conversion(event_ts as timestamp with time zone, 'MON DD YYYY HH24:MI:SS TZR') = 1 
                then to_timestamp_tz(event_ts, 'MON DD YYYY HH24:MI:SS TZR')
        end as converted_event_ts
    from historic_events_stg; 
    EVENT_TS                       CONVERTED_EVENT_TS                                
    ------------------------------ --------------------------------------------------
    April 14 1828                  14-APR-28 EUROPE/LONDON        
    DEC-17-1903                    17-DEC-03 EUROPE/LONDON        
    5 July 1948                    05-JUL-48 EUROPE/LONDON        
    July 20 1969 20:17:40 UTC      20-JUL-69 UTC                  
    19901112                       12-NOV-90 EUROPE/LONDON        
    30-JUN-1997                    30-JUN-97 EUROPE/LONDON        
    06/29/2007                     29-JUN-07 EUROPE/LONDON        
    7 rows selected. 
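    The CASE expression above is simply “try each format in turn until one validates”. The same cascade can be sketched in Python with strptime (the format list here is illustrative, not exhaustive) :

```python
from datetime import datetime

# Candidate formats, tried in order - mirrors the CASE branches above
FORMATS = ['%b-%d-%Y', '%d %B %Y', '%d-%b-%Y', '%m/%d/%Y', '%Y%m%d']

def first_matching(value):
    """Return the datetime parsed by the first format that validates, else None."""
    for fmt in FORMATS:
        try:
            return datetime.strptime(value, fmt)
        except ValueError:
            continue
    return None  # no format validated

print(first_matching('DEC-17-1903').date())  # → 1903-12-17
```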

    To be honest, I think I’d prefer the first option if I had a choice.
    If you’re not on 12c yet, all of this is somewhat academic. If you want to take advantage of similar functionality, you’re going to have to roll-your-own…

    The sort-of Validate Conversion function

    Using the documentation for the VALIDATE_CONVERSION function as a rough guide, we can come up with something reasonably serviceable in PL/SQL :

    create or replace function is_valid_conversion(
        i_expr in varchar2,
        i_target_type in varchar2,
        i_format in varchar2 default null)
        return pls_integer deterministic
    as
    -- Mimic the VALIDATE_CONVERSION function that's available in 12c and above.
    -- NOTE - setting of NLS params specifically excluded here.
    -- This function simply works against the base data types i.e. :
    -- NUMBER
    -- DATE
    -- TIMESTAMP
    -- TIMESTAMP WITH TIME ZONE
    -- This should cover most of the common use-cases for this function.
        dummy_date date;
        dummy_num number;
        dummy_ts timestamp;
        dummy_tstz timestamp with time zone;
        l_type varchar2(30);
        l_format varchar2(50);
        e_missing_type exception;
        e_unsupported_type exception;
    begin
        -- Sanity check the input parameters
        if i_target_type is null then
            raise e_missing_type;
        elsif upper( i_target_type) not in ('NUMBER', 'DATE', 'TIMESTAMP', 'TIMESTAMP WITH TIME ZONE') then
            raise e_unsupported_type;
        end if;

        if i_expr is null then
            -- null will convert to any of the data types we're dealing with here
            return 1;
        end if;

        l_type := upper( i_target_type);

        -- Now test the conversion
        if l_type = 'NUMBER' then
            if i_format is not null then
                dummy_num := to_number( i_expr, i_format);
            else
                dummy_num := to_number( i_expr);
            end if;
        elsif l_type = 'DATE' then
            l_format := coalesce( i_format, sys_context('userenv', 'nls_date_format'));
            dummy_date := to_date( i_expr, l_format);
        elsif l_type = 'TIMESTAMP' then
            l_format := coalesce( i_format, sys_context('userenv', 'nls_timestamp_format'));
            dummy_ts := to_timestamp( i_expr, l_format);
        elsif l_type = 'TIMESTAMP WITH TIME ZONE' then
            select coalesce( i_format, value)
            into l_format
            from v$nls_parameters
            where parameter = 'NLS_TIMESTAMP_TZ_FORMAT';

            dummy_tstz := to_timestamp_tz( i_expr, l_format);
        end if;

        return 1;
    exception
        when e_missing_type then 
            raise_application_error(-20000, 'A type to convert to must be specified');
        when e_unsupported_type then
            raise_application_error(-20001, q'[Target type is not supported by this function. You haven't written that bit yet !]');
        when others then
            -- conversion has failed
            return 0;
    end is_valid_conversion;
    /

    Sure enough, when you run this, the results are the same :

    select id, 
        is_valid_conversion( id, 'NUMBER') as isnum
    from historic_events_stg;
    ID  ISNUM
    --- -----
    ONE     0
    2       1
    3       1
    4       1
    5       1
    6       1
    7       1
    select event_ts, 
        is_valid_conversion(event_ts, 'DATE') as isdate
    from historic_events_stg; 
    EVENT_TS                  ISDATE
    ------------------------- ------
    April 14 1828                  0
    DEC-17-1903                    0
    5 July 1948                    1
    July 20 1969 20:17:40 UTC      0
    19901112                       0
    30-JUN-1997                    1
    06/29/2007                     0
    select event_ts, 
        is_valid_conversion(event_ts, 'DATE', 'MON-DD-YYYY') as isdate
    from historic_events_stg; 
    EVENT_TS                           ISDATE
    ------------------------------ ----------
    April 14 1828                           1
    DEC-17-1903                             1
    5 July 1948                             0
    July 20 1969 20:17:40 UTC               0
    19901112                                0
    30-JUN-1997                             0
    06/29/2007                              0
    select event_ts,
        is_valid_conversion(event_ts, 'TIMESTAMP WITH TIME ZONE') as is_tstz
    from historic_events_stg
    where id = '4';
    EVENT_TS                          IS_TSTZ
    ------------------------------ ----------
    July 20 1969 20:17:40 UTC               1

    Of course, this is no substitute for a built-in SQL function in terms of both functionality and performance. However, it may provide the basis of something useful if you’re not in a position to play with all those 12c goodies just yet.
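
    For example – a sketch, assuming the HISTORIC_EVENTS_STG staging table used above – the function lets you split a load into convertible and non-convertible rows in a single pass :

    ```sql
    -- Load only the rows whose EVENT_TS parses with the expected format mask...
    select id, to_date( event_ts, 'MON-DD-YYYY') as event_date
    from historic_events_stg
    where is_valid_conversion( event_ts, 'DATE', 'MON-DD-YYYY') = 1;

    -- ...and route everything else to a rejects query for manual inspection.
    select id, event_ts
    from historic_events_stg
    where is_valid_conversion( event_ts, 'DATE', 'MON-DD-YYYY') = 0;
    ```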

    Configuring External Tools in SQLDeveloper

    Tue, 2019-11-19 14:51

    In order to distract myself from the lunacy that is the ongoing UK General Election campaign, I’ve been continuing to spend some quality time with the Swiss Army Knife of the Oracle world, SQLDeveloper.
    I know, conventional wisdom says that when you have a hammer, everything is a nail.
    On the other hand, conventional wisdom is rather less forthcoming if instead you have a multi-tool of Central European origin.
    Fortunately, SQLDeveloper pays no heed to conventional wisdom but allows you to configure callouts to other useful software utilities by means of its External Tools functionality.
    What we’re going to look at here is :

    • Adding a call to an external tool in SQLDeveloper
    • Using macros to control the behaviour of an External Tool
    • Configuration settings for common tools in Windows
    • Configuration settings for common tools in Ubuntu

    Setup External Tools

    In the Tools menu, select External Tools.
    If you’re on Windows, you can kick-start matters by clicking the Find Tools button.
    This will configure :

    • Notepad
    • Internet Explorer
    • Word (if available)

    This option is not available on Ubuntu, which is where I am at the moment so…

    Adding a new tool

    I’m going to start with the equivalent to Notepad on Ubuntu.
    I feel I should apologise to the noble Gedit for that last sentence.
    From the Tools menu select External Tools :

    Click New.

    Now we need to enter the location of the program we want to run. As we’re on Linux, we can find this out easily enough :

    which gedit

    If we have a file in focus when we call this editor we’d like Gedit to open it. We’d also like it to use the directory in which that file is located.
    We can achieve this using some of the macros available to us. In the Argument field, click Insert. Now click on File Name :

    Click OK and we can see that the field now has a value of :


    We’re going to use a macro for the Run Directory as well. In this case we’ll select the File Directory macro which gives us a Run Directory value of


    After all that the dialog looks like this :

    Click Next

    Next we can configure how the tool is displayed in SQLDeveloper. We’ll leave the caption as is. We’ll add a tooltip though.
    Oh, and the icon is currently set to the SQLDeveloper default for External Tools.
    On Windows, SQLDeveloper is usually smart enough to derive the appropriate icon from a fully qualified path to the executable. On Ubuntu, we need to point it in the right direction.
    We also need to be a bit careful about the size of the icon file we choose otherwise SQLDeveloper will complain with something like :

    Fortunately, Ubuntu tends to put suitably sized icons for its applications in :


    …less fortunately, the one for Gedit isn’t called gedit.png. Anyhow, what we end up with is :

    Next, we want to determine where Gedit will be available.
    As well as the main Tools menu, we’d like it to be on the context menus and the main toolbar.
    We don’t want to log output messages :

    I’m going to leave the Availability settings at the default :

    Finally click Finish, then close the External Tools window.

    Now, if I have a file in focus, I can open it in Gedit :

    Macro Definitions

    In addition to External Tool configuration, these macros can be used elsewhere in SQLDeveloper to configure various preferences. For example, to set your Worksheet default path to be the same directory as the file that you have open at any point in time, you’d use


    You can get an idea of the value of certain macros when you select them in the External Tool setup dialog:

    The complete list of macros – at least, all of those available in the External Tool config dialog – is :

    • DB Connection String – Database Connection String – ${sqldev.conn}
    • DB UserName – Database username – ${sqldev.dbuser}
    • Environment Variable – The value of an environment variable. The var attribute specifies the name of the environment variable – ${env:var=PATH}
    • File Directory – The directory containing the currently selected file – ${file.dir}
    • File Extension – The extension of the currently selected file – ${file.ext}
    • File Name – The name of the currently selected file – ${}
    • File Name Without Extension – The basename of the currently selected file – ${}
    • File Path – Full path of the currently selected file – ${file.path}
    • File URL – The URL of the currently selected file – ${file.url}
    • IDE Application Name – The name of this application – ${}
    • IDE Classpath – The full classpath of this application – ${ide.classpath}
    • IDE Install Directory – The directory in which this application is installed – ${ide.dir}
    • IDE Oracle Home Directory – The oracle home directory in which this application is installed – ${}
    • IDE User Directory – The user directory – ${ide.user.dir}
    • Prompt – Displays a prompt at runtime to allow user to specify an argument value – ${prompt}
    • Prompt with Label – Prompt user for a value. The label attribute specifies a label to display in the prompt dialog – ${promptl:=prompt text}
    • System Property – The value of a system property. The name attribute specifies the name of the system property – ${prop:name=property}

    Configuration details for common Windows tools

    Here’s a list of configurations that work for me on Windows 10.
    There are a couple of things that may help on Windows when looking for the information required to configure an external tool.
    First, if you’re unsure as to the location of the executable of the program you need to run, you can simply run it, open Task Manager and right-click it. This will give you the exe name and path.
    Second, whilst executables that are in your path do not require a fully qualified path name for SQLDeveloper to find them, it will usually need the full path to use the normal icon for that program.

    • Windows Explorer – Executable: C:\Windows\explorer.exe – Arguments: /select,${file.path}
    • Notepad++ – Executable: C:\Program Files(x86)\Notepad++\notepad++.exe – Arguments: ${file.path} – Run Directory: ${file.dir}
    • Git Bash – Executable: C:\Program Files\Git\git-bash.exe – Run Directory: ${file.dir}
    • Putty – Executable: C:\Program Files(x86)\PUTTY\putty.exe
    • Excel – Executable: C:\PROGRA~1\MICROS~1\Office16\EXCEL.EXE – Arguments: ${file.path} – Run Directory: ${file.dir}
    • FileZilla – Executable: C:\Program Files\FileZilla FTP Client\filezilla.exe – Arguments: “-a ${file.path}”
    • Snipping tool – Executable: C:\Windows\system32\SnippingTool.exe – Icon Location: ide.fileicon.file:/C:/Windows/system32/SnippingTool.exe
    • Notepad – Executable: notepad.exe – Arguments: ${file.path} – Run Directory: ${file.dir} – Icon Location: ide.fileicon.file:/C:/Windows/system32/notepad.exe

    Configuration details for common Ubuntu tools

    Things are slightly different on Ubuntu. No, I’m not about to go into a chauvinistic rant about the innate superiority of life with the Penguins – although I’ve not included Git Bash in this list because Ubuntu has proper Bash…

    • Nautilus – Executable: /usr/bin/nautilus – Arguments: ${file.dir} – Icon Location: /usr/share/icons/gnome/16×16/apps/file-manager.png
    • Gedit – Executable: /usr/bin/gedit – Arguments: ${} – Run Directory: ${file.dir} – Icon Location: /usr/share/icons/gnome/16×16/apps/accessories-text-editor.png
    • Terminal – Executable: /usr/bin/gnome-terminal – Run Directory: ${file.dir} – Icon Location: /usr/share/icons/gnome/16×16/apps/gnome-terminal.png
    • LibreOffice Calc – Executable: /usr/bin/libreoffice – Arguments: --calc ${} – Run Directory: ${file.dir} – Icon Location: /usr/share/icons/gnome/16×16/apps/calc.png
    • FileZilla – Executable: /usr/bin/filezilla – Arguments: “-a ${file.path}” – Icon Location: /usr/share/filezilla/resources/16×16/filezilla.png
    • VS Code – Executable: /usr/bin/code – Arguments: ${} – Run Directory: ${file.dir}
    • Screenshot – Executable: /usr/bin/gnome-screenshot – Icon Location: /usr/share/icons/gnome/16×16/apps/applets-screenshooter.png

    Note that I couldn’t find an appropriately sized icon for VS Code.

    Well that’s used up some time, but there’s still a month to go until the election. I’ll have to find something else to distract myself with.

    Using Git in SQLDeveloper

    Wed, 2019-11-06 14:09

    As I write, South Africa have just emerged victorious from the 2019 version of the Rugby World Cup having vanquished England in the final.
    This explains both the code used in the examples that follow and the fact that I’m writing this to escape from the “commiserations” pouring into my phone from my Welsh In-Laws. Tell me, what is the Welsh for Schadenfreude ?
    Continuing my SQLDeveloper appreciation fest, I’m going to look at the level of Git integration in the latest version of SQLDeveloper (19.2).
    Specifically, what I’ll be covering is :

    • Using the SQLDeveloper Files Tree to work with a local Git Repository
    • Creating a branch
    • Staging changes
    • Committing changes
    • Comparing different versions of a file within a branch
    • Comparing branches
    • Merging branches
    • Deleting branches

    We’re going to do all of this without leaving the comfort of our favourite Oracle IDE…


    Running SQLDeveloper 19.2, we’ll be working on a local Git repository using Git 2.7.4.
    I’m doing this on Ubuntu but SQLDeveloper seems to behave in the same way on Windows. I haven’t had the chance to do any comparison on MacOS but I’ve no reason to believe it won’t be the same story there too.

    Changing your perspective

    By default, SQLDeveloper views the world through objects in the database :

    Fortunately, it can also be persuaded to look at Files on your computer.
    To achieve this, simply go to the View menu and select Files :

    If you start exploring this view, you may well find that you’re seeing spots in front of your eyes :

    Don’t worry, it’s not contagious. It’s just SQLDeveloper realising that this directory is actually a local Git repository (repo).
    We can find out the branch that is currently active by right-clicking any member of the repo and selecting Versioning/Properties.
    In this case, we can see that we’re on master :

    We want to make some changes to this application code. We want to create a branch to do this so…

    Creating a new branch

    Right-click on one of the repo members and select Versioning/Create Branch :

    Enter the name you want to give the new branch. In my case find_winners
    Then click Select Commit and choose the commit from which you want to branch. For me it’s usually the latest one, which is just as well in this case :

    I want to switch to the new branch as soon as it’s created so I need to check Checkout Created Branch :

    Click OK and I’m now in the new branch. To check, I just look at the properties window of one of the repo members again ( in my case it’s the rwc root folder) :

    I’m going to edit the package header file because I want a function to tell me who the winners were for a given tournament :

    create or replace package save_finals as
        function get_winners( i_year rwc_finals.year%type)
            return countries.country_name%type;
        procedure add( i_year rwc_finals.year%type, i_host rwc_finals.host_cid%type, 
            i_team1 rwc_finals.team1%type, i_team2 rwc_finals.team2%type);
        procedure result( i_year rwc_finals.year%type, 
            i_t1_score rwc_finals.t1_score%type,
            i_t2_score rwc_finals.t2_score%type);
    end save_finals;

    I can see that the icon for the changed file is now different in the File Tree :

    Furthermore, if I want to see what has changed since the last Git checkin, I can simply click on the History tab in the code editor and select the last Git checkin:

    Whilst I’m here, I’d like to add another view to my application, so I’m going to create a new file for it :

    create or replace view roll_of_honour_vw as
        select year, save_finals.get_winners(year) as champions
        from rwc_finals

    Now I’ve saved the file, I can see that it has a different icon in the Files Tree :

    Right, I want to start saving my changes to Git. First up then…

    Staging changes

    Now, it is possible to stage your files one-at-a-time by right-clicking the file and selecting Versioning/Add.

    However, if you select Add All instead, you get a list of all candidates for staging and you can then remove any files you don’t want using the Select checkbox.

    When you’re happy with your selection, hit OK

    We can now see that the file icons have changed again. The edited files ( save_finals.pks and save_finals.pkb) now have a green spot, whilst the newly created roll_of_honour_vw.sql now has a big plus icon :

    Incidentally, if you have a file in the tree that you don’t want git to manage, you can add it to the .gitignore file from the same menu.

    For example, I’ve added a todo list so I can keep track of what I’ve still got to write ( I can’t quite afford JIRA at the moment)

    If I right-click on the file and select Versioning/Add to .gitignore, I’ll get :

    Once I’m happy with my selections, I hit OK and the file icon changes :

    Committing changes

    The process for committing changes is similar to staging them. Right-click a repo member and select Versioning/Commit All…

    Note that we could include non-staged files in the commit by checking Commit non-staged files.
    Finally, we add an appropriate commit message :

    Just as SQLDeveloper will allow you to compare the saved version of a file with the last committed version in the repo, you can also compare the file from two different commits :

    If we open save_finals.pkb and switch to the History tab, we can Filter to view only Git commits :

    If we now select the previous Git checkin, we can see what’s changed :

    Alternatively, you can achieve a similar effect by right-clicking the file then selecting Versioning/Version History.

    Comparing branches

    SQLDeveloper will even allow you to compare different branches.
    First, you need to go to the Team menu and select Git/Branch Compare

    Remember, we’re on the find_winners branch so we need to select the master branch from the drop-down :

    You can compare the versions of a file in each branch by right-clicking the file and selecting Compare :

    Merging branches

    We’ve finished our enhancement and we’re happy with it. It’s now time to merge the changes back into master.
    Before we get to merging, we need to switch to the branch that we want to merge to ( in this case, master).

    So, from the Team menu, select Git/Checkout

    Now click Select Branch and choose the branch we want to switch to ( in our case, Master) :

    Finally, hit the OK button and we’re on master.

    Next we need to go back to the Team menu and select Git/Merge

    Select branch you’re merging from and select a commit…

    The merge Window now looks something like this :

    Hit OK

    If we now look at the Commit History ( Team/Git/Commit History…) we can see that the latest commit matches the one that we selected to merge :

    We can also see the list of files that formed a commit by selecting it in the history pane:

    Deleting the branch

    Now we’ve finished with the branch, let’s do a bit of tidying up.

    From the Team menu, select Versions. This will bring up the Versions pane.
    Here, we can expand the Git node until we can see our local branches :

    The current branch has a green icon.

    To delete the find_winners branch, we simply select it in the tree and then click the big red cross :

    …and it’s gone.


    Everything I’ve covered here involves using a local repository.
    Blaine Carter’s Using Git with Oracle SQL Developer video includes steps to connect to a remote repository.

    Galo Balda has also written a series of posts on using Git with SQLDeveloper. You can find them here.

    Oracle and Alternative Facts – generating test data with DBMS_RANDOM

    Tue, 2019-10-01 15:55

    There’s a bug in DBMS_RANDOM.VALUE. It consistently fails to return the correct set of lottery numbers.
    On the plus side, it is surprisingly useful when you find yourself in a situation which is all too common for a Data Warehouse developer.
    There’s a requirement for a new feed into your Warehouse from another system. You’ve managed to agree the file specification – what data will be included in the file, datatypes etc, but the developers working on the upstream system won’t be able to start providing test files for loading for weeks yet. Meanwhile, you need to start writing your code to ingest the feed.
    Fortunately, you have all the tools available to :

    • Generate lots of rows of test data
    • Generate random values for strings, numbers and dates
    • Use a set of pre-defined values randomly in your data generation

    First of all, let’s take a look at…

    The Feed File specification

    The file will contain records with the following attributes :

    Attribute Datatype first_name string last_name string email string phone_number string hire_date date job_id string salary float commission_pct float manager_id integer department_id integer

    Yes, it does look rather familiar. It is, in fact, a new feed into the HR.EMPLOYEES table.
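
    As a sketch of where this feed will eventually land – the directory object (FEED_DIR), file name, delimiter and date mask are all assumptions at this stage, since only the attribute list has been agreed – the ingestion code might start with an external table matching the specification :

    ```sql
    -- Hypothetical external table over the incoming feed file.
    -- Column sizes borrowed from HR.EMPLOYEES; everything else is illustrative.
    create table employees_feed_xt (
        first_name     varchar2(20),
        last_name      varchar2(25),
        email          varchar2(25),
        phone_number   varchar2(20),
        hire_date      date,
        job_id         varchar2(10),
        salary         number(8,2),
        commission_pct number(2,2),
        manager_id     number(6),
        department_id  number(4))
        organization external (
            type oracle_loader
            default directory feed_dir
            access parameters (
                records delimited by newline
                fields terminated by '|'
                missing field values are null
                ( first_name, last_name, email, phone_number,
                  hire_date char date_format date mask "DD-MON-YYYY",
                  job_id, salary, commission_pct, manager_id, department_id))
            location ('employees_feed.csv'))
        reject limit unlimited;
    ```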

    Generating lots of rows

    This is one of those tricks that are easy when you know how, but may leave you scratching your head the first time you see it. Here’s how to generate lots of rows with a single query against dual in Oracle :

    select rownum
    from dual
    connect by level <= 1000

    Yep, a thousand rows of data (or 10000, or a million), very quickly and with little typing.
    Of course, we’re going to need more than just a single column of integers to build our test data file. Fortunately, DBMS_RANDOM does have a few tricks up its sleeve…

    Think of a number

    When it comes to generating a random integer, you may first be tempted to do something like this :

    select dbms_random.random
    from dual; 

    This is a reasonable start but there are a few drawbacks with DBMS_RANDOM.RANDOM, namely :

    • you can’t control the range of numbers between which the result will fall
    • this function is deprecated in later versions of Oracle

    On the face of it, DBMS_RANDOM.VALUE also has its issues. Passing in the minimum and maximum permissible values does offer control over the range of the result. However, a decimal value is returned :

    select dbms_random.value(1,59)
    from dual; 

    Fortunately, it’s a SQL function which means that you can use other SQL functions to offer greater control over the output :

    select trunc(dbms_random.value(1,59))
    from dual;

    This technique also applies when you want to generate a decimal to a set precision – e.g. :

    select round(dbms_random.value(100, 900), 2)
    from dual;

    It’s worth noting that the function returns a value greater than or equal to the lower bound value but less than the upper bound value. Therefore, if you want the possibility of including the maximum value in the result set you’ll need to add one to the maximum value you pass into the function :

    select trunc(dbms_random.value(1,60)) -- lottery numbers now go up to 59 !
    from dual;
    Dates to remember

    Generating random dates is a bit more involved. However, remember that dates are really just numbers underneath it all…

    select to_char(sysdate, 'J')
    from dual;

    The result is the number of days since 1st January 4712 BC.

    This means that we can perform mathematical operations on dates in the same way as numbers. Throw in a bit of randomness and we can generate random dates. For example, if we want to generate a date somewhere in the last decade…

    select to_date(to_char(sysdate, 'J') - trunc(dbms_random.value(1, 3652)), 'J') -- 3652 days in the last 10 years
    from dual;

    As we’re dealing with whole days here, the dates generated from this query all have the time element set to midnight…

    alter session set nls_date_format = 'DD-MON-YYYY HH24:MI:SS';
    select to_date(to_char(sysdate, 'J') - trunc(dbms_random.value(1, 3652)), 'J') -- 3652 days in the last 10 years
    from dual;
    08-MAY-2015 00:00:00

    …but with a bit more jiggery-pokery, we can persuade DBMS_RANDOM to cough up a proper timestamp. Note that there are 86400 seconds in a day so…

    select to_date(
            to_char( sysdate, 'J') 
                - trunc( dbms_random.value( 1, 3652))
        , 'J')
            + trunc(dbms_random.value(1,86400))/86400
    from dual;
    29-MAY-2010 02:57:00
    Stringing things together

    DBMS_RANDOM has one more neat trick up its sleeve – the ability to generate character strings of a defined length…

    select dbms_random.string('u', 20) as uppercase,
        dbms_random.string('l', 20) as lowercase,
        dbms_random.string('a', 20) as mixedcase,
        dbms_random.string('x', 20) as upper_alphanum,
        dbms_random.string('p', 20) as printable
    from dual;
    UPPERCASE            LOWERCASE            MIXEDCASE            UPPER_ALPHANUM       PRINTABLE           
    -------------------- -------------------- -------------------- -------------------- --------------------
    UIASUAOOHMJMCDBFMLTP xmscwbcyzfzukhnwrihj YmPMxDQHOMwrFxXiUGio 8ICORVCRGCLX62VVJ185 :#h(1HHjH1[wqZ$7[\!H

    The first parameter specifies the character set from which the string should be built ( uppercase alpha, lowercase alpha etc).
    The second parameter specifies the length of the string.
    Of course, if you wanted to mix things up a bit in terms of the length of string returned, you can always pass a random number as the string length…

    select dbms_random.string('u', trunc(dbms_random.value(1,21))) as stringy
    from dual
    connect by level <= 5
    Randomly selecting valid values

    Whilst we could make an attempt at generating a test file simply using the techniques we’ve looked at so far, we know that this will only get us so far.
    The ultimate destination for the records in the file is the EMPLOYEES table, which has Foreign Keys to both JOBS (JOB_ID) and DEPARTMENTS (DEPARTMENT_ID).
    Whilst we could simply hard-code values to ensure that they are valid, we could get a little more creative and make a random selection.
    In the event, I’ve decided to both.
    I’ll hard-code the JOB_ID to be “IT_PROG” because, let’s face it, you can never have enough programmers.

    That leaves us with DEPARTMENT_ID and MANAGER_ID, which we can get from the DEPARTMENTS table. We want to select random values from this table. Now, before we go any further, it’s probably worth considering this, seemingly innocuous query :

    with some_values as (select rownum as id from dual connect by level <=5)
    select id
    from some_values
    where id = trunc(dbms_random.value(1,6))

    If you execute this a few times, you’ll start to see something rather strange. The query will sometimes return no rows at all, or more than one row.
    The reason for this behaviour is explained by Mr Kyte himself here.
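
    As an aside, if all you need is a single random row, one way to sidestep this behaviour is to move the DBMS_RANDOM call out of the predicate and into an ORDER BY, where it is evaluated once per row (the FETCH FIRST syntax assumed here is 12c and above) :

    ```sql
    -- Pick exactly one random row : randomise the order of the set,
    -- then take the first row of the ordered result.
    with some_values as (select rownum as id from dual connect by level <= 5)
    select id
    from some_values
    order by dbms_random.value
    fetch first 1 row only;
    ```

    This always returns exactly one row, whereas the predicate version re-evaluates the random value for every row it tests.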

    This means that we need to take an alternative approach to selecting our random DEPARTMENTS records.

    First of all, we’ll need to fetch a set of unique records from the table. As it’s a small table, we may as well get all of them. Although they already have a synthetic key, we’ll want to give them an additional one for reasons which will shortly become apparent…

    select rownum as id, department_id, manager_id
    from departments;
            ID DEPARTMENT_ID MANAGER_ID
    ---------- ------------- ----------
             1            10        200
             2            20        201
             3            30        114
             4            40        203
             5            50        121
             6            60        103
             7            70        204
             8            80        145
             9            90        100
            10           100        108
            11           110        205
            12           120           
            13           130           
            14           140           
            15           150           
            16           160           
            17           170           
            18           180           
            19           190           
            20           200           
            21           210           
            22           220           
            23           230           
            24           240           
            25           250           
            26           260           
            27           270           
    27 rows selected. 

    We’ll need to know how many records we have in the result set so that we can set the maximum value we want to select from when we randomly pick our record.
    So, our query now evolves :

    with dept as ( 
        select rownum as id, department_id, manager_id
        from departments)
    select max(id) from dept

    Using this second query as an in-line view, we can use this value to seed a randomly generated key in another in-line view, one which we can then use to join back to the DEPARTMENT records

    with dept as (select rownum as id, department_id, manager_id from departments),
        dept_count as (select max(id) as num_depts from dept),
        -- The main query that generates most of the data
        emp as (
            select
                dbms_random.string('u', trunc(dbms_random.value(1,22))) as  first_name,
                dbms_random.string('u',trunc(dbms_random.value(1,22))) as last_name,
                dbms_random.string('l',trunc(dbms_random.value(10,27))) as email,
                -- use hard-coded value or null 50/50 chance
                case when mod(trunc(dbms_random.value(1,11)),2) = 0 then null else '555.123.4567' end as phone_number, 
                to_date(to_char(sysdate, 'J') - trunc(dbms_random.value(1,3652)), 'J') as hire_date,
                trunc(dbms_random.value( 1000, 3500)) as salary,
                case when mod(trunc(dbms_random.value(1,11)), 5) = 0 then trunc(dbms_random.value( 0,20)) end as commission_pct,
                -- value to use as a key to join onto the dept ilv
                trunc(dbms_random.value(1, dept_count.num_depts +1)) as dept_key,
                'IT_PROG' as job_id -- hard-coded for every record - ensure it's valid this time around.
            from dual, dept_count
            connect by level <= 10)
    select emp.first_name, emp.last_name, emp.email, emp.phone_number,
        emp.hire_date, emp.job_id, emp.salary, emp.commission_pct,
        dept.manager_id, dept.department_id
    from emp
    inner join dept 
        on emp.dept_key = dept.id

    The output is suitably, well, random :

    Of course, if you want to delimit the file and are using SQLDeveloper or SQLCl, you can simply do something like…

    set sqlformat loader

    …and your query output will be pipe delimited automagically.
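
    Putting that together – the file name is purely illustrative – a SQLcl session to produce the feed file might look something like :

    ```sql
    -- Switch the output format to pipe-delimited "loader" style,
    -- spool the generated rows to a file, then close the spool.
    set sqlformat loader
    spool employees_feed.csv
    -- ...run the data generation query from above here...
    spool off
    ```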

    Extending the range of my Home Network Wi-fi by converting an old router into a Wireless Access Point

    Mon, 2019-09-02 08:11

    The ongoing Ashes series appears to have become all about comebacks.
    First of all we had Steve Smith, in his first test since the end of his Sandpaper-gate ban essaying one of the great feats of sporting redemption.
    Then we had Ben Stokes dragging England to a preposterous victory target of 359 having been skittled for 67 in their first innings.

    Making a rather more modest comeback in this post is an old EE Bright Box 1 router, which has been gathering dust since I got a replacement a couple of years ago.

    What follows is one way to adapt this old piece of kit to extend the WiFi range on a current home network.
    I want to do this by creating an additional Wireless Access Point positioned away from the main router.

    A Typical Home Area Network

    A Home Area Network (HAN) usually consists of a Router that acts as a single gateway for connection to the internet.
    All the devices in the home will connect to the router and therefore get access to the internet, as well as other devices in the HAN.
    Within the HAN, the router will hand out a unique IP address to each device as it connects using the standard Dynamic Host Configuration Protocol (DHCP).
    The DHCP allocated IP addresses are normally somewhere in the range of to (although the start and end addresses of the range will vary).

    Powerline Adapters

    Creating a second wireless access point to your network can be a fairly low-tech, low-cost approach to utilising your old router.
    Essentially, you need to connect it to the main router via an ethernet (network) cable then place it somewhere where your wireless signal isn’t so great. This, in itself, might not seem very practical unless you have a very long network cable.
    Alternatively, you can use a simple Powerline Adapter.
    You actually need two adapters. The first is plugged into a power socket near the main router and connected to it via a network cable.
    The second can then be plugged into any socket on the same electrical circuit.
    You can then connect the old router to the second adapter via a (shorter) network cable. Devices will then be able to connect to the network wirelessly simply by selecting the second router as their wireless access point.

    Information required from your main router

    Once we’re finished, our main router will regard our Bright Box as simply another device attached to the LAN. However, we’ll need to give the Bright Box a fixed IP address that doesn’t conflict with any other connected device.

    To determine what your options are, you’ll need to check the settings on your main router to establish the range of possible DHCP addresses it uses.
    To do this, you need to go to the Router’s admin page.

    In the unlikely event that your Router is using this entire address range to allocate IP addresses to devices connecting to the network, you will need to change it.
    In my case, my Router’s DHCP settings assign addresses from a defined range, so I know that choosing an address outside that range for my Bright Box should be safe.
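    By way of illustration, here's a quick sketch of the check involved, using Python's standard ipaddress module. The start and end addresses below are made-up examples, not values read from my router :

```python
from ipaddress import IPv4Address

# Example values only - read the real range from your router's admin page
dhcp_start = IPv4Address("192.168.1.64")
dhcp_end = IPv4Address("192.168.1.253")

def outside_dhcp_pool(candidate: str) -> bool:
    """True if the address is safe to use as a static IP because
    the main router's DHCP server will never hand it out."""
    addr = IPv4Address(candidate)
    return addr < dhcp_start or addr > dhcp_end

print(outside_dhcp_pool("192.168.1.30"))   # True - below the pool, safe
print(outside_dhcp_pool("192.168.1.100"))  # False - inside the pool, not safe
```

    Any address that passes this check (and isn't already taken by another device) is a reasonable candidate for the old router's static IP.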

    This is something I’ll come back to in a bit. First though…

    Accessing the Admin Page on the Bright Box

    To start playing around with the configuration of the router, you need to be able to access its admin pages.
    In the case of my Bright Box 1, the admin address and login details are included on a sticker on the underside of the router.
    The address for the admin screen is :

    At this point, I don’t want to have the Bright Box connected to the main network, I just need to note the Wireless Network Name (SSID) on the sticker and connect to that network once it’s switched on.
    In my case the Bright Box’s Network name is currently set to EE-BrightBox-qq5xjw.

    Once I connect to this network, I can visit the admin page in the browser :

    …and check/change the wireless password as required :

    Remember, at this point, the Bright Box is on a completely separate network to the main router.

    Backing up the existing Router config

    Before I start changing stuff, I want to make sure that I can get back to a known working configuration – i.e. the one currently in place on the Bright Box.
    Like most routers, it has the option to backup this setting.
    On the Bright Box admin site, I need to go to Advanced/Tools/Configuration/Backup :

    By clicking on the Backup button, I can choose the location to save a file called backup.bin which I can use to restore my router settings to what they are currently should I need to.

    Once that’s done, we can move onto…

    Performing a Factory Reset

    Whilst you can do this via the reset button on the back of the Bright Box, I chose instead to use the Router’s Admin application.
    Navigating to Advanced/Tools/Factory Default, we bring up :

    Clicking the button results in :

    As we’ve already backed up the router configuration, we’re happy to proceed.

    We’ll then have to wait for the reset to complete. In the meantime, we’ll get a screen like this :

    Perhaps unsurprisingly, the router will kick you off the admin site once the reset is complete. However, you can verify the changes by reconnecting and looking at Basic/Broadband Settings :

    Re-configuring the Bright Box as a Wireless Access Point

    To start with, we need to navigate to Basic/Broadband Settings and set :

    • Broadband Type : Fibre/Ethernet
    • Protocol: Bridging

    …and then hit Save Settings.

    Next, we go to Advanced/DHCP. Here is where we specify a static address for the Bright Box ensuring that it is outside of the DHCP range of the main router:

    Gateway IP Address :

    We also disable the Bright Box’s own internal DHCP functionality :
    DHCP Server : Disable

    Next, we want to disable Network Address Translation (NAT) on the Bright Box as we’re just using it to pass traffic through to the main router.
    Select NAT from the left-hand menu and set :

    NAT module function : Disable

    The next two steps are optional.
    Firstly, I want to change the Network Name (SSID) that the Bright Box broadcasts.
    To do this, I need to go to Wireless Settings/SSID Management and set :

    Wireless Network Name(SSID) : mike1

    The second of the optional steps is specific to the Bright Box.
    The default password for this router is not very secure – you can check this article for details – so I’m going to change it.
    To do this, I need to go to Basic/Wireless Settings :

    OK, so you might want to pick a slightly more secure password than this, but you get the idea.

    Finally, click Save Settings.

    Once all this is done, we need to power down the Bright Box and then connect the Ethernet cable from the Powerline Adapter to the WAN port (LAN 4) on it.

    Now we can power up the Bright Box once more and check the effect of our changes :

    Testing the new configuration

    Once the Bright Box is up and running again, the first thing we should notice is that we have a new entry in our list of available wifi networks. For example, if I check this on my Ubuntu laptop by running…

    nmcli dev wifi

    I can connect to this new access point using the password we set during the configuration steps we’ve just completed :

    Thus connected, I can now access the internet in the normal way :

    …and still have time to get comfortable before David Warner’s now inevitable double-hundred at Old Trafford.

    VirtualBox – running a Windows 10 Guest on an Ubuntu Host

    Fri, 2019-08-02 10:02

    Yes, you read that right. There are lots of guides out there on how to set up and run Ubuntu in VirtualBox on a Windows host.
    These days, you even have access to an Ubuntu sub-system in Windows itself.
    If, like me, your OS of choice is Ubuntu but you need to test how something behaves in Windows, is it possible to knock up an appropriate environment ?
    The answer is, of course, yes – otherwise this would be quite a short post.

    The following steps will work for VirtualBox on any host – Linux, Mac, even Windows.

    What I’m going to cover is :

    • Finding a Windows ISO
    • Configuring the VM in VirtualBox
    • Persuading VirtualBox to use a sensible screen size for your new VM

    But first…

    A quick word about versions

    The Host OS I’m running is Ubuntu 16.04 LTS.
    I’m using version 5.0 of VirtualBox.
    NOTE – steps to install VirtualBox on a Debian-based host such as Ubuntu can be found here.
    The Guest OS I’m installing is, as you’d expect, Windows 10.

    Finding a Windows ISO

    Depending on which Windows edition you are after, there are a couple of places you can look.
    Microsoft provides an ISO for a 180-day evaluation version of Windows Server here.

    In this case, I simply want to try Windows 10 so I need to go to this page.

    Once here, I need to select an edition…

    …and the language…

    …before we’re presented with a choice of 32 or 64-bit :

    I’ve chosen 64-bit. After the download, I am now the proud owner of :

    -rw-rw-r-- 1 mike mike 4.7G Jul 10 17:10 Win10_1903_V1_English_x64.iso

    Creating the VirtualBox VM

    Fire up VirtualBox and click on the New button to start the Create Virtual Machine wizard :

    …Next assign it some memory

    I’m going to create a Virtual Hard Disk :

    …using the default type…

    …and being dynamically allocated…

    …of the size recommended by VirtualBox :

    I now have a new VM, which I need to point at the Windows ISO I downloaded so that I can install Windows itself :

    All I have to do now is follow the Windows installation prompts, a process which I’ll not bore you with here.
    However, you may be interested to learn that you don’t necessarily require a Product Key for this installation.
    Chris Hoffman has produced an excellent guide on the subject.

    Installing Guest Additions

    Now I’ve configured Windows, I still need to install VirtualBox Guest Additions. Among other things, this will help to control the screen size of the VM so that I don’t need a magnifying glass !

    First of all, we need to virtually eject the virtual cd containing the Windows ISO. To do this, we actually go to the VM’s VirtualBox menu and select Devices/Optical Drives/Remove disk from virtual drive :

    Now, using the same menu (Devices), we select Insert Guest Additions CD Image :

    When Windows prompts you, choose to install :

    Accept the defaults when prompted and then reboot the VM.

    If, by some chance you are still faced with a small viewport for your Windows VM, you can try the following…

    Resizing the VM display

    Go to the VirtualBox application itself and with the VM selected, go to the File/Preferences menu.

    Click on Display, and set the Maximum Guest Screen Size to Automatic

    When you next re-start the VM, the window should now be a more reasonable size.
    In fact, with any luck, your desktop should now look something like this :

    The best way to run Windows !

    Customizing DML in an APEX Interactive Grid

    Tue, 2019-07-23 16:06

    It should have been quite a relaxing Cricket World Cup final. After all, it was England v New Zealand. I was guaranteed to be on the winning side.
    After several hours of nerve-shredding tension had failed to separate the teams, England were awarded the trophy on the basis of dumb luck hitting more boundaries. The result was borne with stoicism by the Black Caps, whose philosophy would, in other countries, be known as “Elite Niceness”. By a cruel twist of fate, Ben Stokes – England’s star all-rounder and Man of the Match – was actually born in Christchurch.
    Oracle APEX has its own star all-rounder in the shape of the Editable Interactive Grid ( see what I did there ?)
    As well as presenting information in the same way as an Interactive Report, it allows users to perform DML operations on the records it displays – provided it’s based on a single table.
    What we’re going to look at here is how to base an Interactive Grid (IG) on a Query rather than a table whilst retaining the ability to perform DML operations on the displayed records. To achieve this, we’ll be customizing the PL/SQL that is executed when a DML operation is invoked in the IG.

    The Application

    For what follows, I’ll be using APEX 18.2 running against an Oracle 18cXE database.

    We have two related tables which hold information about Men’s 50-over Cricket World Cup Finals :

    The tables were created as follows :

    create table teams(
        cid varchar2(3) primary key,
        team_name varchar2(100) not null)
    /

    create table finals(
        tournament_year number(4) primary key,
        date_played date,
        venue varchar2(100),
        winning_tcid varchar2(3) references teams(cid),
        losing_tcid varchar2(3) references teams(cid),
        winning_margin varchar2(100))
    /

    … and have been populated with some data. The TEAMS table first…

    insert into teams( cid, team_name)
    values('AUS', 'AUSTRALIA');
    insert into teams( cid, team_name)
    values('ENG', 'ENGLAND');
    insert into teams( cid, team_name)
    values('RSA', 'SOUTH AFRICA');
    insert into teams( cid, team_name)
    values('WI', 'WEST INDIES');
    insert into teams( cid, team_name)
    values('IND', 'INDIA');
    insert into teams( cid, team_name)
    values('NZL', 'NEW ZEALAND');
    insert into teams( cid, team_name)
    values('PAK', 'PAKISTAN');
    insert into teams( cid, team_name)
    values('SL', 'SRI LANKA');
    insert into teams( cid, team_name)
    values('ZIM', 'ZIMBABWE');
    insert into teams( cid, team_name)
    values('BAN', 'BANGLADESH');
    insert into teams( cid, team_name)
    values('AFG', 'AFGHANISTAN');
    insert into teams( cid, team_name)
    values('IRL', 'IRELAND');

    …and then FINALS…

    insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
    values(1975, to_date('21-JUN-1975', 'DD-MON-YYYY'), 'LORDS', 'WI', 'AUS', '17 runs');
    insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
    values(1979, to_date('23-JUN-1979' , 'DD-MON-YYYY'), 'LORDS', 'WI', 'ENG', '92 runs');
    insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
    values(1983, to_date('25-JUN-1983' , 'DD-MON-YYYY'), 'LORDS', 'IND', 'WI', '43 runs');
    insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
    values(1987, to_date('08-NOV-1987' , 'DD-MON-YYYY'), 'EDEN GARDENS', 'AUS', 'ENG', '7 runs');
    insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
    values(1992, to_date('25-MAR-1992' , 'DD-MON-YYYY'), null, 'PAK', 'ENG', '22 runs');
    -- deliberate mistake to be corrected later
    insert into finals(tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
    values(1997, to_date('17-MAR-1996' , 'DD-MON-YYYY'), 'QADDAFI STADIUM', 'SL', 'AUS', '8 wickets');

    The data that we wish to present to application users can be retrieved with the following query :

    select f.tournament_year, f.date_played, 
        initcap(f.venue) as venue,
        initcap(win.team_name) as winners,
        initcap(ru.team_name) as runners_up,
        f.winning_margin
    from finals f
    inner join teams win on f.winning_tcid = win.cid
    inner join teams ru on f.losing_tcid = ru.cid

    There are a couple of issues with the data as it stands so we want users to be able to edit the existing application records and add new ones.
    As we’re using APEX, it would be good if we could use an Editable Interactive Grid as this would mean only needing to write a single page to handle all of these actions.
    Of course, we could simply create a view using this query and then knock-up an Instead Of trigger to handle any DML. Alternatively…

    Creating the Interactive Grid

    The first step is to create a Region…

    …and define it as an IG…

    …using the above query as the Source SQL Query

    When we run this (after saving our changes), we can see that the ROW_SELECTOR and ROW_ACTION widgets are missing :

    Making the IG Editable

    In order to persuade APEX to add these widgets, we need to make the IG Editable. We can do this by going to the Region’s Attributes and setting the Edit Enabled property to Yes

    Among other things, this automatically creates a Save Interactive Grid Data process :

    However, if we attempt to run the page now (after saving these changes), we’ll hit an error…

    …so we need to select a Primary Key.
    TOURNAMENT_YEAR will fulfil this purpose in our IG, so we just need to adjust the properties of the column :

    When we save the change and run the page we can see that the ROW_SELECTOR and ROW_ACTION are now present :

    The IG is not based on a table or view, remember, so we still need to tell APEX what to do when any DML actions are initiated by the user.

    Customising the Save Interactive Grid Data process

    Returning to the Processing Tab in the Page Designer we need to change the Type of this process to PL/SQL Code :

    In the Source PL/SQL Code box, we need to enter a PL/SQL block which will be run whenever the Page processes a DML action.
    In order to tell which specific DML action a row is subject to, we can look at the value of the built-in APEX$ROW_STATUS variable.
    The possible values are :

    • C – for Create
    • U – for Update
    • D – for Delete

    Therefore, one approach for our PL/SQL block would be simply to include the actual DML statements we want to execute in-line like this :

    declare
        l_win_tcid finals.winning_tcid%type;
        l_ru_tcid finals.losing_tcid%type;

        cursor c_team_cid( i_name teams.team_name%type) is
            select cid
            from teams
            where team_name = upper(i_name);
    begin
        -- reset the variables for each pass through this process
        l_win_tcid := null;
        l_ru_tcid := null;

        if :APEX$ROW_STATUS = 'D' then
            -- DELETE the record
            delete from finals
            where tournament_year = :TOURNAMENT_YEAR;
        else
            -- As we're either doing an UPDATE or an INSERT, we need to find the 
            -- CID value for each of the team names ( if specified)
            if :WINNERS is not null then
                open c_team_cid(:WINNERS);
                fetch c_team_cid into l_win_tcid;
                close c_team_cid;
            end if;

            if :RUNNERS_UP is not null then
                open c_team_cid(:RUNNERS_UP);
                fetch c_team_cid into l_ru_tcid;
                close c_team_cid;
            end if;

            if :APEX$ROW_STATUS = 'U' then
                -- UPDATE the record
                -- Note that, although DATE_PLAYED is a DATE field, the bind variable
                -- contains a string so we need to handle the conversion to a date here
                update finals
                set date_played = nvl(to_date(:DATE_PLAYED, sys_context('userenv', 'nls_date_format')), date_played),
                    venue = nvl(:VENUE, venue),
                    winning_tcid = nvl(l_win_tcid, winning_tcid),
                    losing_tcid = nvl(l_ru_tcid, losing_tcid),
                    winning_margin = nvl(:WINNING_MARGIN, winning_margin)
                where tournament_year = :TOURNAMENT_YEAR;
            elsif :APEX$ROW_STATUS = 'C' then
                -- CREATE (INSERT) a new record
                -- We need to return the Primary Key of the new record as APEX will
                -- use it to refresh the IG display and show the newly created row
                insert into finals( tournament_year, date_played, venue, 
                    winning_tcid, losing_tcid, winning_margin)
                values( :TOURNAMENT_YEAR, to_date(:DATE_PLAYED, sys_context('userenv', 'nls_date_format')), :VENUE,
                    l_win_tcid, l_ru_tcid, :WINNING_MARGIN)
                returning tournament_year into :TOURNAMENT_YEAR;
            end if;
        end if;
    end;

    The main points to note are :

    • The possible values of APEX$ROW_STATUS
    • the bind variables for the IG column values return a string, hence the explicit date conversion of :DATE_PLAYED
    • when the insert code generates a new primary key value (e.g. an ID taken from a sequence), APEX needs to know what it is so that it can display the new record once processing is completed.

    On that last point, I believe that we may not need to return the PK value in this case because we’re already providing it explicitly in the UI. I’ve left it here as an illustration that this may be the case in other circumstances.

    Anyhow, let’s give this a test.
    First of all, we’re going to update the 1992 record with the venue :

    The record for 1997 is wrong. The Tournament actually took place in 1996, so we’ll create a corrected record and delete the incorrect one :

    In both cases, we get a success message :

    …and if we check in the database, we can see that the DML has worked as expected :

    NOTE : for any cricketing pedants reading – yes, I know that Sri Lanka won by 7 wickets, not 8. I’ll be correcting this when I “notice it” in a few paragraphs time.

    At this point you may be less than thrilled at the prospect of having to maintain large chunks of PL/SQL in your APEX application.
    Luckily for you, as the process code is a PL/SQL block, you can do pretty much anything you like…

    Calling Stored Program Units from our APEX process

    I’ve created a database package to handle DML operations on the FINALS table :

    create or replace package edit_finals as
        procedure save_final( 
            i_year in finals.tournament_year%type, 
            i_date in finals.date_played%type,
            i_venue in finals.venue%type default null,
            i_winners in teams.team_name%type default null,
            i_losers in teams.team_name%type default null,
            i_margin in finals.winning_margin%type default null,
            o_year out finals.tournament_year%type);

        procedure delete_final( i_year in finals.tournament_year%type);
    end edit_finals;
    /

    create or replace package body edit_finals as
        function get_team_cid( i_team in teams.team_name%type)
            return teams.cid%type 
        is
            rtn_cid teams.cid%type;
        begin
            select cid 
            into rtn_cid
            from teams 
            where team_name = upper( i_team);

            return rtn_cid;
        end get_team_cid;

        procedure save_final( 
            i_year in finals.tournament_year%type, 
            i_date in finals.date_played%type,
            i_venue in finals.venue%type default null,
            i_winners in teams.team_name%type default null,
            i_losers in teams.team_name%type default null,
            i_margin in finals.winning_margin%type default null,
            o_year out finals.tournament_year%type)
        is
            win_tcid teams.cid%type := null;
            ru_tcid teams.cid%type := null;
        begin
            if i_winners is not null then 
                win_tcid := get_team_cid( i_winners);
            end if;

            if i_losers is not null then 
                ru_tcid := get_team_cid( i_losers);
            end if;

            merge into finals 
                using dual 
                on ( tournament_year = i_year)
            when matched then update
                set date_played = nvl(i_date, date_played),
                    venue = nvl(i_venue, venue),
                    winning_tcid = nvl(win_tcid, winning_tcid),
                    losing_tcid = nvl(ru_tcid, losing_tcid),
                    winning_margin = nvl(i_margin, winning_margin)
                where tournament_year = i_year
            when not matched then 
                insert( tournament_year, date_played, venue, winning_tcid, losing_tcid, winning_margin)
                values( i_year, i_date, i_venue, win_tcid, ru_tcid, i_margin);

            o_year := i_year;
        end save_final;

        procedure delete_final( i_year in finals.tournament_year%type) is 
        begin
            delete from finals
            where tournament_year = i_year;
        end delete_final;
    end edit_finals;
    /

    This means that the PL/SQL code for the Save IG Process is a bit more compact :

    begin
        if :APEX$ROW_STATUS = 'D' then
            edit_finals.delete_final(i_year => :TOURNAMENT_YEAR);
        elsif :APEX$ROW_STATUS in ('C', 'U') then
            edit_finals.save_final(
                i_year => :TOURNAMENT_YEAR, 
                i_date => to_date(:DATE_PLAYED, sys_context('userenv', 'nls_date_format')),
                i_venue => :VENUE,
                i_winners => :WINNERS,
                i_losers => :RUNNERS_UP,
                i_margin => :WINNING_MARGIN,
                o_year => :TOURNAMENT_YEAR);
        end if;
    end;

    The behaviour is the same…

    …which we can confirm in the database…

    The benefit of this approach is that, if you decide to migrate from APEX to another front-end technology, the Package is there in the database and does not need to change.


    I’ve not managed to find too much else out there on this topic, hence this post.
    There is a very good forum post by Patrick Wolf.
    This Oracle Tips and Tricks article may be worth a read.
    Finally, there’s this rather comprehensive look at Interactive Grids by John Snyders.

    Oracle Express Edition – features new to 18cXE

    Sat, 2019-07-13 15:55

    I learned a number of things watching the recently concluded Women’s Soccer World Cup.

    • it is possible for a human body to be fouled in the penalty area without then falling over as if it has just been shot (see Lisa-Marie Utland for Norway against England for proof)
    • England have developed a happy knack of reaching the Semi-Final of every tournament they enter
    • Alex Morgan is a tea-drinker

    There were some complaints that Morgan’s celebration of her goal against England was disrespectful. Personally, I thought it was rather witty. Let’s face it, if she’d really wanted to stir up some controversy, she’d have mimed putting the milk in first.
    That said, she is going to face a challenge at the Olympics next year, where she may find herself up against a united Great Britain team.
    If you’re not up on your sporting geopolitics, Great Britain (for now at least) comprises four nations – England, Wales, Northern Ireland and Scotland.
    Should Morgan need to celebrate in a similar vein, the tea will be just the start. She’ll then need to neck a pint of Brains SA (known as “Skull Attack” in Cardiff) followed by a Guinness ( there is no border in Ireland when it comes to the Black Stuff) before moving on to a Scotch single-malt chaser.

    Anyone looking for an object lesson in how to up their game could do far worse than have a look at how Oracle Express Edition has evolved from 11g to 18c…

    “Hey Megan, how much extra stuff did Oracle squeeze into 18c Express Edition ?”

    Using the License documentation for 18c XE and that of 11g XE, I’ve compiled a list of features which are now included in Express Edition but were not in 11gXE.
    This is mainly for my own benefit as I keep being surprised when I find another – previously Enterprise Edition only – feature in Express Edition.
    I’ve also listed the new stuff that wasn’t previously available in any edition of Oracle 11g.

    Anyhow, for anyone who might find it useful…

    Extra functionality in 18c

    Using the Functional Categories mentioned in the license documents as a template, the features newly available in 18c Express Edition are :


    Perhaps the most profound structural change is the advent of Multitenant functionality.
    18c XE comes with Oracle Multitenant and allows up to three Pluggable Databases (PDBs).

    Development Platform

    SQLJ is now available.

    High Availability
    • Online Index Rebuild
    • Online table organization
    • Online table redefinition
    • Trial recovery
    • Fast-start fault recovery
    • Flashback Table
    • Flashback Database
    • Cross-platform Backup and Recovery

    Sharded Queues have been introduced in Oracle since 11g.


    Network Compression is also new to Oracle since 11g.

    • Client Side Query Cache
    • Query Results Cache
    • PL/SQL Function Result Cache
    • Adaptive Execution Plans
    • In-Memory Column Store
    • In-Memory Aggregation
    • Attribute Clustering
    • Column-Level Encryption
    • Tablespace Encryption
    • Oracle Advanced Security
    • Oracle Database Vault
    • Oracle Label Security
    • Centrally Managed Users
    • Fine-grained auditing
    • Privilege Analysis
    • Real Application Security
    • Redaction
    • Transparent Sensitive Data Protection
    • Virtual Private Database
    Spatial and Graph Data

    11g XE contained no spatial functionality at all. In 18c you get :

    • Oracle Spatial and Graph
    • Property Graph and RDF Graph Technologies (RDF/OWL)
    • Partitioned spatial indexes
    VLDB, Data Warehousing, and Business Intelligence
    • Oracle Partitioning
    • Oracle Advanced Analytics
    • Oracle Advanced Compression
    • Advanced Index Compression
    • Prefix Compression (also called Key Compression)
    • Basic Table Compression
    • Deferred Segment Creation
    • Bitmapped index, bitmapped join index, and bitmap plan conversions
    • Transportable tablespaces, including cross-platform and full transportable export and import
    • Summary management—Materialized View Query Rewrite
    Stuff that’s not included

    Unlike its predecessor, 18cXE does not come with a version of Application Express (APEX). Fortunately, you can still get APEX and Oracle Rest Data Services for the same low, low price of – well, nothing – and install them separately.

    SQLcl ALIAS – because you can’t remember everything.

    Tue, 2019-06-25 08:47

    I want to find out which file is going to hold any trace information generated by my database session. Unfortunately, I keep forgetting the query that I need to run to find out.
    Fortunately I’m using SQLcl, which includes the ALIAS command.
    What follows is a quick run-through of this command including :

    • listing the aliases that are already set up in SQLcl
    • displaying the code that an alias will execute
    • creating your own alias interactively
    • deleting an alias
    • using files to manage custom aliases

    Whilst I’m at it, I’ll create the alias for the code to find that pesky trace file too.

    In the examples that follow, I’m connected to an Oracle 18cXE PDB using SQLcl 18.4 from my Ubuntu 16.04 LTS laptop via the Oracle Thin Client. Oh, and the Java details are :

    Meet the ALIAS command

    As so often in SQLcl, it’s probably a good idea to start with the help :

    help alias

    …which explains that :

    “Alias is a command which allows you to save a sql, plsql or sqlplus script and assign it a shortcut command.”

    A number of aliases are already included in SQLcl. To get a list of them simply type :

    alias
    …which returns :


    If we want to see the code that will be run when an alias is invoked, we simply need to list the alias :

    alias list tables
    tables - tables <schema> - show tables from schema
    select table_name "TABLES" from user_tables

    Connected as HR, I can run the alias to return a list of tables that I own in the database :

    Creating an ALIAS

    To create an alias of my own, I simply need to specify the alias name and the statement I want to associate it with. For example, to create an alias called whoami :

    alias whoami =
    select sys_context('userenv', 'session_user')
    from dual;

    I can now confirm that the alias has been created :

    alias list whoami
    select sys_context('userenv', 'session_user')
    from dual

    …and run it…

    I think I want to tidy up that column heading. I could do this by adding an alias in the query itself. However, alias does support the use of SQL*Plus commands…

    alias whoami =
    column session_user format a30
    select sys_context('userenv', 'session_user') session_user
    from dual;

    …which can make the output look slightly more elegant :

    A point to note here is that, whilst it is possible to include SQL*Plus statements in an alias for a PL/SQL block (well, sort of)…

    alias whoami=set serverout on
    exec dbms_output.put_line(sys_context('userenv', 'session_user'));

    …when the alias starts with a SQL*Plus statement, it will terminate at the first semi-colon…

    Where you do have a PL/SQL alias that contains multiple statement terminators (‘;’) you will need to run any SQL*Plus commands required prior to invoking it.
    Of course, if you find setting output on to be a bit onerous, you can save valuable typing molecules by simply running :

    alias output_on = set serverout on size unlimited

    I can also add a description to my alias so that there is some documentation when it’s listed :

    alias desc whoami The current session user

    When I now list the alias, the description is included…more-or-less…

    I’m not sure if the inclusion of the text desc whoami is simply a quirk of the version and OS that I’m running on. In any case, we’ll come to a workaround for this minor annoyance in due course.

    In the meantime, I’ve decided that I don’t need this alias anymore. To remove it, I simply need to run the alias drop command :

    alias drop whoami

    At this point, I know enough about the alias command to implement my first version of the session tracefile alias that started all this.
    The query, that I keep forgetting, is :

    select value
    from v$diag_info
    where name = 'Default Trace File'

    To create the new alias :

    alias tracefile =
    select value "Session Trace File"
    from v$diag_info
    where name = 'Default Trace File';

    I’ll also add a comment at this point :

    alias desc tracefile The full path and filename on the database server of the tracefile for this session

    My new alias looks like this :

    The aliases.xml file

    Unlike the pre-supplied aliases, the code for any alias you create will be held in a file called aliases.xml.

    On Windows, this file will probably be somewhere under your OS user’s AppData directory.
    On Ubuntu, it’s in $HOME/.sqlcl

    With no custom aliases defined the file looks like this :

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>

    Note that, even though I have now defined a custom alias, it won’t be included in this file until I end the SQLcl session in which it was created.

    Once I disconnect from this session, the file includes the new alias definition :

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <aliases>
    <alias name="tracefile">
    <description><![CDATA[desc tracefile The full path and filename on the database server of the tracefile for this session]]></description>
    <sql><![CDATA[select value "Session Trace File"
    from v$diag_info
    where name = 'Default Trace File']]></sql>
    </alias>
    </aliases>

    Incidentally, if you’ve played around with SQLDeveloper extensions, you may find this file structure rather familiar.

    The file appears to be read by SQLcl once on startup. Therefore, before I run SQLcl again, I can tweak the description of my alias to remove the extraneous text…

    <description><![CDATA[The full path and filename on the database server of the tracefile for this session]]></description>

    Sure enough, next time I start an SQLcl session, this change is now reflected in the alias definition :

    Loading an alias from a file

    The structure of the aliases.xml file gives us a template we can use to define an alias in the comfort of a text editor rather than on the command line. For example, we have the following PL/SQL block, which reads a bind variable :

    declare
        v_msg varchar2(100);
    begin
        if upper(:mood) = 'BAD' then
            if to_char(sysdate, 'DAY') != 'MONDAY' then
                v_msg := q'[At least it's not Monday!]';
            elsif to_number(to_char(sysdate, 'HH24MI')) > 1200 then
                v_msg := q'[At least it's not Monday morning!]';
            else
                v_msg := q'[I'm not surprised. It's Monday morning !]';
            end if;
        elsif upper(:mood) = 'GOOD' then
            v_msg := q'[Don't tell me West Ham actually won ?!]';
        else
            v_msg := q'[I'm just a simple PL/SQL block and I can't handle complex emotions, OK ?!]';
        end if;
        dbms_output.put_line(v_msg);
    end;

    Rather than typing this in on the command line, we can create a file ( called pep_talk.xml) which looks like this :

    <?xml version="1.0" encoding="UTF-8" standalone="no"?>
    <aliases>
    <alias name="pep_talk">
    <description><![CDATA[How are you feeling ? Usage is pep_talk <emotion>]]></description>
    <sql><![CDATA[declare
        v_msg varchar2(100);
    begin
        if upper(:mood) = 'BAD' then
            if to_char(sysdate, 'DAY') != 'MONDAY' then
                v_msg := q'[At least it's not Monday!]';
            elsif to_number(to_char(sysdate, 'HH24MI')) > 1200 then
                v_msg := q'[At least it's not Monday morning!]';
            else
                v_msg := q'[I'm not surprised. It's Monday morning !]';
            end if;
        elsif upper(:mood) = 'GOOD' then
            v_msg := q'[Don't tell me West Ham actually won ?!]';
        else
            v_msg := q'[I'm just a simple PL/SQL block and I can't handle complex emotions, OK ?!]';
        end if;
        dbms_output.put_line(v_msg);
    end;]]></sql>
    </alias>
    </aliases>

    Now, we can load this alias from the file as follows :

    alias load pep_talk.xml
    Aliases loaded

    We can now execute our new alias. First though, we need to remember to turn serveroutput on before we invoke it :

    Once you’ve terminated your SQLcl session, the new alias will be written to aliases.xml.

    Exporting custom aliases

    There may come a time when you want to share your custom aliases with your colleagues. After all, it’s always useful to know where the trace file is and who doesn’t need a pep talk from time-to-time ?

    To “export” your aliases, you can issue the following command from SQLcl :

    alias save mike_aliases.xml

    This writes the file to the same location as your aliases.xml :

    You can then import these aliases to another SQLcl installation simply by sharing the file and then using the alias load command.
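
    Putting that together, the export/import cycle is simply a save on one machine followed by a load on the other (using the filename from the example above) :

    ```
    alias save mike_aliases.xml
    alias load mike_aliases.xml
    ```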


    As you can imagine, there are a wide variety of possible uses for the ALIAS command.

    As Kris Rice is the original author of this feature, his post on it is probably worth a read.
    Jeff Smith has also written on this topic several times.

    Menno Hoogendijk has an example which employs some Javascript wizardry which he has published on GitHub.

    Right, back to my trace files.

    Installing the OE demo schema on 18cXE

    Sat, 2019-06-15 13:18

    It’s always a good idea to be wary of assuming too much.
    Looking at the current Conservative Party Leadership contest, you might assume that a fantasy Brexit policy and a history of class A drug use were pre-requisites for the job of Prime Minister.
    You may further assume that one is a result of the other.
    That last assumption is unlikely however, unless the smoking, snorting and otherwise ingesting of illicit substances is widespread across all of the other major political parties. Then again…

    For my part, I’ve had to correct some of my assumptions about the process for installing the OE sample schema into Oracle 18cXE running on CentOS 7.
    What follows is a quick guide on how to accomplish this…without all the head-scratching over apparently spurious errors.
    Specifically, I will be covering :

    • getting the OE schema installation scripts
    • checking the pre-requisites for the OE schema installation have been met
    • preparing the scripts for execution
    • performing the installation

    Before we go too much further though, it’s probably wise to state some assumptions…


    These steps assume that you’re running 18cXE on CentOS or some other Red Hat compatible distro ( e.g. Oracle Linux, Fedora).
    We’re only installing the OE schema here. I already have the HR schema installed on the database and I do not want to drop and re-create it.
    If you want to install all of the demo schemas then you’ll need to check the instructions in the file once you’ve downloaded the installation scripts.
    Speaking of which…

    Finding the demo scripts

    As stated in the documentation, only the scripts for the HR schema are included in the Oracle software.
    If you want the other schemas, you need to download them from Oracle’s GitHub repo.

    Although we’re only interested in the OE schema at the moment, the source code is provided in a single archive file.
    Download the zip for the appropriate database release ( 18c in my case) and we should now have a file looking something like :

    -rw-rw-r-- 1 mike mike 28882177 Jun  9 17:03

    Pre-requisites for installing the OE schema

    Before I run off and start playing with my new zip file, I should really check that I’ve got everything I need to ensure that the setup will go smoothly.
    The pre-requisites are :

    • the HR schema must already be installed and unlocked in the PDB that you are installing into
    • Oracle Spatial must be enabled
    • the installation scripts need to run on the database server (or a filesystem visible to it)

    To check that HR is already available in the PDB (xepdb1 in my case) :

    alter session set container = xepdb1;
    select username, account_status
    from cdb_users
    where username = 'HR'
    and con_id = sys_context('userenv', 'con_id');
    USERNAME   ACCOUNT_STATUS 
    ---------- ---------------
    HR         OPEN           

    If the query does not return any rows then you will need to install the HR schema. This can be done following the instructions in the aforementioned documentation.

    NOTE – before you do this it’s a good idea to double check to make sure that you are in the correct container database :

    select sys_context('userenv', 'con_name') from dual;

    If the ACCOUNT_STATUS is LOCKED then you need to unlock the HR user as the OE creation script will attempt to connect to the database as HR. To do this, connect to the target PDB as a user with the ALTER USER privilege (e.g. SYSTEM) and run :

    alter user hr account unlock;
    User HR altered

    As I’m on 18cXE, Oracle Spatial should be enabled. Thanks to Norman, I know that I can confirm this by running :

    select parameter, value       
    from v$option 
    where regexp_like(parameter, 'spatial', 'i')
    PARAMETER            VALUE     
    -------------------- ----------
    Spatial              TRUE      

    Next, we need to upload the zip file to the Database Server (which I’ve done using sftp).

    Preparing the Install scripts

    Now it’s on the correct machine, I need to change the ownership to oracle…

    sudo chown oracle:oinstall
    ls -l 
    -rw-rw-r--. 1 oracle oinstall 28882177 Jun 10 12:34

    …because I want to move it to the appropriate directory in ORACLE_HOME…

    sudo mv $ORACLE_HOME/demo/schema
    cd $ORACLE_HOME/demo/schema

    …and extract it as oracle…

    sudo su oracle
    cd db-sample-schemas-18c

    … and create a directory to hold the log file that’s output when we run the installation…

    mkdir order_entry/logs

    Now we’ve extracted the files, you may be tempted to have a quick peek at the code.

    Initially you may be somewhat surprised. For example, in order_entry/oe_main.sql you can see :

    DEFINE vscript = __SUB__CWD__/order_entry/coe_&vrs

    Is __SUB__CWD__ some SQL*Plus magic that has eluded you all these years ?

    Well, no. Looking at the file, we can confirm that it’s simply a placeholder that we need to replace with a valid absolute path to the scripts.

    Fortunately, the file also contains the code required to achieve this.
    Simply ensure that we’re in the db-sample-schemas-18c directory and run :

    perl -p -i.bak -e 's#__SUB__CWD__#'$(pwd)'#g' *.sql */*.sql */*.dat 

    After running this, we can see that the line in oe_main.sql now reads :

    DEFINE vscript = /opt/oracle/product/18c/dbhomeXE/demo/schema/db-sample-schemas-18c/order_entry/coe_&vrs
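
    If you’re curious about what this command actually does, you can try it on a throwaway file first. The directory and file below are made up purely for the demonstration – the perl flags are the same ones used above :

    ```shell
    # Throwaway demo of the placeholder substitution in a scratch directory.
    cd "$(mktemp -d)"
    echo 'DEFINE vscript = __SUB__CWD__/order_entry/coe_&vrs' > demo.sql

    # -p : loop over the file line by line, printing each (modified) line
    # -i.bak : edit in place, keeping the original as demo.sql.bak
    perl -p -i.bak -e 's#__SUB__CWD__#'$(pwd)'#g' demo.sql

    cat demo.sql      # __SUB__CWD__ has been replaced with the absolute path
    cat demo.sql.bak  # the backup still contains the placeholder
    ```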
    Performing the installation

    The main order entry script is in the order_entry sub-directory and is called oe_main.sql.
    This script accepts 9 positional parameters :

    1. the password for the OE user that will be created as part of this installation
    2. the default tablespace for OE
    3. the temporary tablespace for OE
    4. the password for the pre-existing HR user
    5. the SYS password
    6. the directory path for the data files (these are in the current directory as they are included in the zip file)
    7. the absolute path for the log directory we just created ( including a trailing slash)
    8. the version (it’s v3 in this case)
    9. the SQL*Plus connect string for the target database

    The “version” the script asks for is appended to the basename of some of the scripts that will be run as part of the install.
    To obtain this, simply run :

    ls -1 *_v?.sql

    From this, I can see that the value of version that the script needs is “v3”.

    Now, connect to the database via SQL*Plus as SYSTEM. Note that, as I’ve not setup an Oracle client on the server, I need to specify the server name, tns port and pdb name in the connect string :

    sqlplus system@frea.virtualbox:1522/xepdb1

    Finally, we can run the install script :

    @oe_main.sql oe_user_password users temp hr_user_password sys_password . /opt/oracle/product/18c/dbhomeXE/demo/schema/db-sample-schemas-18c/order_entry/logs/ v3 frea.virtualbox:1522/xepdb1

    The log file generated during the installation will contain everything that now whizzes up the screen.
    You can find it after the run in the logs directory we created earlier :

    ls -l logs
    total 8
    -rw-r--r--. 1 oracle oinstall 6244 Jun 10 13:50 oe_oc_v3.log

    Even if there aren’t any particularly alarming errors in the logfile, it would be nice to verify that all has gone as intended.
    First we can check that the OE schema now contains the expected number of objects of each type :

    select object_type, 
        count(object_name) as "Number of objects"
    from dba_objects
    where owner = 'OE'
    group by object_type
    order by 2 desc;
    OBJECT_TYPE          Number of objects
    -------------------- -----------------
    INDEX                               48
    TYPE                                37
    LOB                                 15
    TABLE                               14
    VIEW                                13
    SYNONYM                              6
    TRIGGER                              4
    TYPE BODY                            3
    SEQUENCE                             1
    FUNCTION                             1
    10 rows selected. 

    We can also confirm that data has been loaded into the tables :

    set serverout on size unlimited
    declare
        l_count pls_integer;
    begin
        for r_tab in (
            select dbms_assert.sql_object_name(owner||'.'||table_name) oe_table
            from dba_tables
            where owner = 'OE'
            and nested = 'NO'
            order by table_name)
        loop
            execute immediate 'select count(*) from '||r_tab.oe_table 
                into l_count;
            dbms_output.put_line(upper(r_tab.oe_table)||' contains '||l_count||' records.');
        end loop;
    end;
    /
    OE.CUSTOMERS contains 319 records.
    OE.INVENTORIES contains 1112 records.
    OE.ORDERS contains 105 records.
    OE.ORDER_ITEMS contains 665 records.
    OE.PRODUCT_DESCRIPTIONS contains 8640 records.
    OE.PRODUCT_INFORMATION contains 288 records.
    OE.PROMOTIONS contains 2 records.
    OE.WAREHOUSES contains 9 records.
    PL/SQL procedure successfully completed.

    The Country may be descending into chaos but at least we’ve got a working OE schema to play around with.

    Great Football Managers and Oracle Unified Auditing

    Fri, 2019-05-31 11:48

    It’s quite a good time for English football at the moment. Not only have English clubs monopolised the finals of the two main European Club competitions this year, but Manchester City have made history by winning all three domestic competitions in the same season.
    Note that this isn’t a British footballing first. Glasgow Rangers managed it way back in 1949. And whilst the European Cup ( Champions League if you must) has eluded City this season, Celtic managed that particular clean sweep in 1967.
    In English football however, this particular treble is unprecedented. In fact, there are remarkably few managers who have been able to win every one of the major domestic honours in their entire career.
    All of which will come in handy when looking for examples to illustrate the topic at hand, namely Oracle Unified Auditing.
    With the aid of 18c Express Edition, we’ll be looking at :

    • The Oracle supplied Unified Auditing Policies that are enabled by default
    • Where to find the Audit Trail
    • How to create our own Unified Auditing Policy to monitor DML operations on specific objects

    Unified Auditing default settings

    Unified Auditing was introduced in 12c. By default, it’s configured to work alongside auditing from earlier database versions.
    We can check that this is the case – and that traditional auditing has not been disabled – by confirming that the Unified Auditing parameter is set to FALSE :

    Looking at the audit initialization parameters is also useful at this point :

    select name, value, description
    from v$parameter
    where name like '%audit%'
    and value is not null
    order by name
    NAME                           VALUE                          DESCRIPTION                                       
    ------------------------------ ------------------------------ --------------------------------------------------
    audit_file_dest                /opt/oracle/admin/XE/adump     Directory in which auditing files are to reside   
    audit_sys_operations           TRUE                           enable sys auditing                               
    audit_trail                    DB                             enable system auditing                            
    unified_audit_sga_queue_size   1048576                        Size of Unified audit SGA Queue                   

    We can see that we have a directory setup for audit files to be written to. However, the audit_trail is set to DB – i.e. a table in the database.
    To start with, we can have a look at the audit_file_dest directory and see what, if anything, is being written there :

    ls -lrt /opt/oracle/admin/XE/adump
    -rw-r-----. 1 oracle oinstall     866 May 30 13:16 XE_ora_3880_20190530131639680705403060.aud
    -rw-r-----. 1 oracle oinstall    1777 May 30 13:16 XE_ora_4375_20190530131639737578276396.aud
    -rw-r-----. 1 oracle oinstall    1202 May 30 13:16 XE_ora_4381_20190530131643953498040745.aud

    Closer inspection of these files reveals that they contain details of the operations performed when Oracle is started but before the database is open. For example :

    cat XE_ora_3880_20190530131639680705403060.aud
    Audit file /opt/oracle/admin/XE/adump/XE_ora_3880_20190530131639680705403060.aud
    Oracle Database 18c Express Edition Release - Production
    Build label:    RDBMS_18.
    ORACLE_HOME:    /opt/oracle/product/18c/dbhomeXE
    System name:    Linux
    Node name:	frea.virtualbox
    Release:        3.10.0-957.1.3.el7.x86_64
    Version:        #1 SMP Thu Nov 29 14:49:43 UTC 2018
    Machine:        x86_64
    Instance name: XE
    Redo thread mounted by this instance: 0 <none>
    Oracle process number: 299
    Unix process pid: 3880, image: oracle@frea.virtualbox (TNS V1-V3)
    Thu May 30 13:16:39 2019 +01:00
    LENGTH : '247'
    DATABASE USER:[1] '/'
    CLIENT USER:[6] 'oracle'
    STATUS:[1] '0'
    DBID:[0] ''
    SESSIONID:[1] '0'
    USERHOST:[15] 'frea.virtualbox'
    ACTION NUMBER:[3] '138'

    Well that could come in useful, but is there any auditing of operations once the database is up and running ?

    Oracle supplied Unified Audit Policies

    There are a number of policies that are set up by default :

    select owner, object_name
    from dba_objects
    where object_type = 'UNIFIED AUDIT POLICY'
    and oracle_maintained = 'Y'
    order by object_name;
    OWNER      OBJECT_NAME                                       
    ---------- --------------------------------------------------
    SYS        ORA_ACCOUNT_MGMT                                  
    SYS        ORA_CIS_RECOMMENDATIONS                           
    SYS        ORA_DATABASE_PARAMETER                            
    SYS        ORA_DV_AUDPOL                                     
    SYS        ORA_DV_AUDPOL2                                    
    SYS        ORA_LOGON_FAILURES                                
    SYS        ORA_RAS_POLICY_MGMT                               
    SYS        ORA_RAS_SESSION_MGMT                              
    SYS        ORA_SECURECONFIG                                  
    9 rows selected. 

    We can confirm which of these are active by running :

    select policy_name, user_name, 
        enabled_option, entity_name, entity_type, 
        success, failure
    from audit_unified_enabled_policies
    order by policy_name
    POLICY_NAME          USER_NAME            ENTITY_NAME  ENTITY_TYPE  SUCCESS    FAILURE   
    -------------------- -------------------- ------------ ------------ ---------- ----------
    ORA_LOGON_FAILURES   ALL USERS            ALL USERS    USER         NO         YES       
    ORA_SECURECONFIG     ALL USERS            ALL USERS    USER         YES        YES       

    One point to note is that the ORA_LOGON_FAILURES policy is configured to record statements on failure and not on success.


    We can see which activities are covered by this policy by running :

    select audit_option, audit_option_type 
    from audit_unified_policies
    where policy_name = 'ORA_LOGON_FAILURES';
    AUDIT_OPTION         AUDIT_OPTION_TYPE 
    -------------------- ------------------
    LOGON                STANDARD ACTION   

    From this, we can infer that this policy will record any failed logon attempts in the audit trail. Let’s test that hypothesis…

    First off, I’m going to issue a failed login attempt :

    sqlplus hr/thisisnotthepassword@xepdb1
    SQL*Plus: Release Production on Thu May 30 14:10:13 2019
    Copyright (c) 1982, 2016, Oracle.  All rights reserved.
    ORA-01017: invalid username/password; logon denied
    Enter user-name: 

    Right, that should do it. Now to check the audit trail in the database :

    select event_timestamp, audit_type, dbusername, action_name, return_code
    from unified_audit_trail uat
    where unified_audit_policies = 'ORA_LOGON_FAILURES'
    and uat.event_timestamp > systimestamp - ((1/24/60) *5) -- happened in the last 5 minutes
    order by event_timestamp desc
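
    Incidentally, the arithmetic in that predicate works because subtracting a number from SYSTIMESTAMP is done in days, so (1/24/60) is one minute. An interval literal expresses the same thing rather more readably – this is just a stylistic alternative to the query above :

    ```sql
    select event_timestamp, audit_type, dbusername, action_name, return_code
    from unified_audit_trail uat
    where unified_audit_policies = 'ORA_LOGON_FAILURES'
    and uat.event_timestamp > systimestamp - interval '5' minute
    order by event_timestamp desc;
    ```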

    Note here that the record includes the return code, which looks to be the Oracle Error that was generated.
    Also note that, in a multitenant environment, you need to be in the relevant container when you query the audit trail.
    In this example, I was attempting to login to the XEPDB1 PDB. I would not see this logon failure if I was querying the audit trail from the CDB.
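
    So, before querying the trail for this particular failure, we need to make sure we’re in the right container. A quick sketch, using the PDB from this example :

    ```sql
    alter session set container = xepdb1;

    select event_timestamp, action_name, return_code
    from unified_audit_trail
    where unified_audit_policies = 'ORA_LOGON_FAILURES';
    ```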

    The UNIFIED_AUDIT_TRAIL contains 99 columns and the relevance of each of them will vary according to what you’re looking at in the audit trail.
    As usual, the columns themselves have been documented with comments which you can find by running :

    select column_name, comments
    from dba_col_comments
    where owner = 'AUDSYS'
    and table_name = 'UNIFIED_AUDIT_TRAIL'
    order by column_name
    COLUMN_NAME                    COMMENTS                                          
    ------------------------------ --------------------------------------------------
    ACTION_NAME                    Name of the action executed by the user           
    ADDITIONAL_INFO                Text comment on the audit trail entry             
    APPLICATION_CONTEXTS           SemiColon seperate list of Application Context Nam
                                   espace, Attribute, Value information in (APPCTX_NS
                                   PACE,APPCTX_ATTRIBUTE=<value>) format             
    AUDIT_OPTION                   Auditing option set with the audit statement      
    AUDIT_TYPE                     Type of the Audit Record                          
    AUTHENTICATION_TYPE            Type of Authentication for the session user       
    CLIENT_IDENTIFIER              Client identifier in each Oracle session        
    XS_SESSIONID                   Real Application User Session Identifier          
    XS_TARGET_PRINCIPAL_NAME       Target principal name in Real Application Security
    XS_USER_NAME                   Real Application User name                        

    By contrast, ORA_SECURECONFIG audits a number of system privileges :

    select audit_option, audit_option_type, object_schema
    from audit_unified_policies
    where policy_name = 'ORA_SECURECONFIG'
    order by 1
    AUDIT_OPTION                             AUDIT_OPTION_TYPE  OBJECT_SCHEMA                 
    ---------------------------------------- ------------------ ------------------------------
    ADMINISTER KEY MANAGEMENT                SYSTEM PRIVILEGE   NONE                          
    ALTER ANY PROCEDURE                      SYSTEM PRIVILEGE   NONE                          
    ALTER ANY TABLE                          SYSTEM PRIVILEGE   NONE                          
    ALTER DATABASE                           SYSTEM PRIVILEGE   NONE                          
    ALTER DATABASE DICTIONARY                STANDARD ACTION    NONE                          
    ALTER DATABASE LINK                      STANDARD ACTION    NONE                          
    ALTER PLUGGABLE DATABASE                 STANDARD ACTION    NONE                          
    ALTER PROFILE                            STANDARD ACTION    NONE                          
    ALTER ROLE                               STANDARD ACTION    NONE                          
    ALTER SYSTEM                             SYSTEM PRIVILEGE   NONE                          
    ALTER USER                               STANDARD ACTION    NONE                          
    AUDIT SYSTEM                             SYSTEM PRIVILEGE   NONE                          
    BECOME USER                              SYSTEM PRIVILEGE   NONE                          
    CREATE ANY JOB                           SYSTEM PRIVILEGE   NONE                          
    CREATE ANY LIBRARY                       SYSTEM PRIVILEGE   NONE                          
    CREATE ANY PROCEDURE                     SYSTEM PRIVILEGE   NONE                          
    CREATE ANY TABLE                         SYSTEM PRIVILEGE   NONE                          
    CREATE DATABASE LINK                     STANDARD ACTION    NONE                          
    CREATE DIRECTORY                         STANDARD ACTION    NONE                          
    CREATE EXTERNAL JOB                      SYSTEM PRIVILEGE   NONE                          
    CREATE PLUGGABLE DATABASE                STANDARD ACTION    NONE                          
    CREATE PROFILE                           STANDARD ACTION    NONE                          
    CREATE PUBLIC SYNONYM                    SYSTEM PRIVILEGE   NONE                          
    CREATE ROLE                              STANDARD ACTION    NONE                          
    CREATE SQL TRANSLATION PROFILE           SYSTEM PRIVILEGE   NONE                          
    CREATE USER                              SYSTEM PRIVILEGE   NONE                          
    DROP ANY PROCEDURE                       SYSTEM PRIVILEGE   NONE                          
    DROP ANY SQL TRANSLATION PROFILE         SYSTEM PRIVILEGE   NONE                          
    DROP ANY TABLE                           SYSTEM PRIVILEGE   NONE                          
    DROP DATABASE LINK                       STANDARD ACTION    NONE                          
    DROP DIRECTORY                           STANDARD ACTION    NONE                          
    DROP PLUGGABLE DATABASE                  STANDARD ACTION    NONE                          
    DROP PROFILE                             STANDARD ACTION    NONE                          
    DROP PUBLIC SYNONYM                      SYSTEM PRIVILEGE   NONE                          
    DROP ROLE                                STANDARD ACTION    NONE                          
    DROP USER                                SYSTEM PRIVILEGE   NONE                          
    EXECUTE                                  OBJECT ACTION      SYS                           
    EXECUTE                                  OBJECT ACTION      REMOTE_SCHEDULER_AGENT        
    EXEMPT ACCESS POLICY                     SYSTEM PRIVILEGE   NONE                          
    EXEMPT REDACTION POLICY                  SYSTEM PRIVILEGE   NONE                          
    GRANT ANY OBJECT PRIVILEGE               SYSTEM PRIVILEGE   NONE                          
    GRANT ANY PRIVILEGE                      SYSTEM PRIVILEGE   NONE                          
    GRANT ANY ROLE                           SYSTEM PRIVILEGE   NONE                          
    LOGMINING                                SYSTEM PRIVILEGE   NONE                          
    PURGE DBA_RECYCLEBIN                     SYSTEM PRIVILEGE   NONE                          
    SET ROLE                                 STANDARD ACTION    NONE                          
    TRANSLATE ANY SQL                        SYSTEM PRIVILEGE   NONE   

    It so happens that I’ve installed the HR demo application on this database by following Oracle’s instructions and running $ORACLE_HOME/demo/schema/human_resources/hr_main.sql as SYSTEM.

    This policy has captured the DDL from this activity :

    select scn, dbusername, system_privilege_used, action_name, object_schema, object_name
    from unified_audit_trail
    where unified_audit_policies = 'ORA_SECURECONFIG'
    and scn >= 2038863
    order by event_timestamp
           SCN DBUSERNAME   SYSTEM_PRIVILEGE_USED                    ACTION_NAME          OBJECT_SCHEMA OBJECT_NAME         
    ---------- ------------ ---------------------------------------- -------------------- ------------ --------------------
       2038863 SYSTEM       CREATE USER                              CREATE USER                       HR                  
       2038869 SYSTEM       ALTER USER                               ALTER USER                        HR                  
       2038874 SYSTEM       ALTER USER                               ALTER USER                        HR                  
       2038880 SYSTEM       GRANT ANY PRIVILEGE                      GRANT                                                 
       2038886 SYSTEM       GRANT ANY ROLE, GRANT ANY PRIVILEGE      GRANT                                                 
       2038897 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           REGIONS             
       2038910 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           REGIONS             
       2038923 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           COUNTRIES           
       2038929 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           COUNTRIES           
       2038937 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           LOCATIONS           
       2038949 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           LOCATIONS           
       2038962 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           DEPARTMENTS         
       2038976 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           DEPARTMENTS         
       2038988 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           JOBS                
       2039000 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           JOBS                
       2039016 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           EMPLOYEES           
       2039030 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           EMPLOYEES           
       2039036 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           DEPARTMENTS         
       2039052 SYSTEM       CREATE ANY TABLE                         CREATE TABLE         HR           JOB_HISTORY         
       2039068 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           JOB_HISTORY         
       2040134 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           DEPARTMENTS         
       2040179 SYSTEM       ALTER ANY TABLE                          ALTER TABLE          HR           DEPARTMENTS         
       2040303 SYSTEM       CREATE ANY PROCEDURE                     CREATE PROCEDURE     HR           SECURE_DML          
       2040318 SYSTEM       CREATE ANY PROCEDURE                     CREATE PROCEDURE     HR           ADD_JOB_HISTORY     
       2041099 SYSTEM       ALTER USER                               ALTER USER                        HR                  
    25 rows selected. 

    NOTE that the audit trail contains the System Change Number (SCN) as well as a timestamp, which can make it easier to select the part of the audit trail you are interested in at any given time.

    As well as the details of who did what, the audit trail holds the SQL that was executed as part of this operation. For example, if we wanted to check what statement was run when the HR.REGIONS table was created, we could run :

    select sql_text
    from unified_audit_trail
    where scn = 2038897;
    CREATE TABLE regions                                                            
        ( region_id      NUMBER                                                     
           CONSTRAINT  region_id_nn NOT NULL                                        
        , region_name    VARCHAR2(25)                                               

    Note that an SCN is unique to a transaction. As we’re looking for a DDL statement, which is almost always contained in its own transaction (except when it isn’t), it’s probably not such a good idea to rely on it as a de facto Primary Key when you’re searching for DML statements, which may share a transaction (and therefore an SCN) with several others. Obviously, an SCN is completely useless when you’re searching for SELECT statements.
    Whilst we’re on the subject of predicates when selecting from the audit trail, it’s worth bearing in mind that the underlying table is actually partitioned by EVENT_TIMESTAMP. By default, these partitions cover one month each. Including EVENT_TIMESTAMP in the predicate will cause the optimizer to perform partition pruning so may help query performance.
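    For instance, a query along these lines (an illustrative example, assuming the default monthly interval partitioning) gives the optimizer a chance to prune partitions rather than scanning the entire audit trail :

    ```sql
    -- Bound the scan by EVENT_TIMESTAMP so that only this month's partition is visited
    select event_timestamp, dbusername, action_name, object_schema, object_name
    from unified_audit_trail
    where event_timestamp >= trunc(systimestamp, 'MM')
    order by event_timestamp;
    ```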

    Anyway, speaking of auditing DML, it’s time to introduce…

    The Demo Application

    We have an application to hold details of each manager who has won each of England’s major domestic trophies. The application is owned by MOTTY, who is a bit of a footie geek. We also have an application user called TEDDY who hasn’t been around long and still thinks that football didn’t exist before the Premier League…

    create user motty identified by motty
    alter user motty quota unlimited on users
    grant connect, create table, create sequence, create procedure to motty
    create user teddy identified by teddy
    grant connect to teddy

    The application objects are the WINNING_MANAGERS table…

    create table winning_managers(
        wm_name varchar2(100) constraint wm_pk primary key,
        league_title number(4),
        fa_cup number(4),
        league_cup number(4))
    comment on table winning_managers is 'Managers who have won a career treble and the years in which they first won each major English domestic trophy'
    comment on column winning_managers.wm_name is 'The name of the manager'
    comment on column winning_managers.league_title is 'The year the manager first won the Title'
    comment on column winning_managers.fa_cup is 'The year the manager first won the FA Cup'
    comment on column winning_managers.league_cup is 
        'The year the manager first won the League (Coca-Cola/Worthington/Carling/Capital One/EFL/Carabao) Cup'

    …and a package…

    create or replace package managers_api as
        procedure ins( 
            i_name winning_managers.wm_name%type, 
            i_title winning_managers.league_title%type, 
            i_fac winning_managers.fa_cup%type, 
            i_lc winning_managers.league_cup%type);
        procedure upd(
            i_name winning_managers.wm_name%type,
            i_title winning_managers.league_title%type, 
            i_fac winning_managers.fa_cup%type, 
            i_lc winning_managers.league_cup%type);
        procedure del( i_name winning_managers.wm_name%type);
    end managers_api;
    /

    create or replace package body managers_api is
        procedure ins( 
            i_name winning_managers.wm_name%type, 
            i_title winning_managers.league_title%type, 
            i_fac winning_managers.fa_cup%type, 
            i_lc winning_managers.league_cup%type)
        is
        begin
            insert into winning_managers( wm_name, league_title, fa_cup, league_cup)
            values( i_name, i_title, i_fac, i_lc);
        end ins;

        procedure upd(
            i_name winning_managers.wm_name%type,
            i_title winning_managers.league_title%type, 
            i_fac winning_managers.fa_cup%type, 
            i_lc winning_managers.league_cup%type)
        is
        begin
            if i_name is null then
                raise_application_error(-20000, 'Manager name must be specified');
            end if;
            update winning_managers
            set league_title = nvl(i_title, league_title),
                fa_cup = nvl( i_fac, fa_cup),
                league_cup = nvl(i_lc, league_cup)
            where wm_name = i_name;
        end upd;

        procedure del(i_name winning_managers.wm_name%type)
        is
        begin
            if i_name is null then
                raise_application_error(-20000, 'Manager name must be specified');
            end if;
            delete from winning_managers
            where wm_name = i_name;
        end del;
    end managers_api;
    /

    For reasons which will shortly become apparent, Teddy has been granted access to the application as follows :

    grant select, insert, update, delete on winning_managers to teddy
    grant execute on managers_api to teddy

    Teddy is working on a project to identify all managers who have won each of the major English domestic trophies in their career.
    He immediately takes advantage of these new privileges to add all of the Premier League winning managers to the application…

    begin
        motty.managers_api.ins('SIR ALEX FERGUSON', 1993, 1994, 2006);
        motty.managers_api.ins('KENNY DALGLISH', 1995, null, 2012);
        motty.managers_api.ins('ARSENE WENGER', 1998, 1998, null);
        motty.managers_api.ins('JOSE MOURINHO', 2005, 2007, 2006);
        motty.managers_api.ins('CARLO ANCELOTTI', 2010, 2010, null);
        motty.managers_api.ins('ROBERTO MANCINI', 2012, 2011, null);
        motty.managers_api.ins('MANUEL PELLIGRINI', 2014, null, 2014);
        motty.managers_api.ins('CLAUDIO RANIERI', 2016, null, null);
        motty.managers_api.ins('ANTONIO CONTI', 2017, 2018, null);
        motty.managers_api.ins('PEP GUARDIOLA', 2018, 2019, 2018);
    end;
    /

    When Motty checks the table, he can see that there are a few problems :

    select wm_name, league_title, fa_cup, league_cup
    from winning_managers
    WM_NAME              LEAGUE_TITLE     FA_CUP LEAGUE_CUP
    -------------------- ------------ ---------- ----------
    SIR ALEX FERGUSON            1993       1994       2006
    KENNY DALGLISH               1995                  2012
    ARSENE WENGER                1998       1998           
    JOSE MOURINHO                2005       2007       2006
    CARLO ANCELOTTI              2010       2010           
    ROBERTO MANCINI              2012       2011           
    MANUEL PELLIGRINI            2014                  2014
    CLAUDIO RANIERI              2016                      
    ANTONIO CONTI                2017       2018           
    PEP GUARDIOLA                2018       2019       2018
    10 rows selected. 

    It looks like someone has not accounted for anything prior to the 1992/93 season.
    Motty needs to find out who is doing this so that he can explain that the FA Cup has been going since 1872, the League since 1888, and the League Cup since 1961.

    Auditing DML

    In order to create an audit policy, you need the AUDIT SYSTEM system privilege or the AUDIT_ADMIN role.
    We’ll connect to the PDB as system and grant the role to MOTTY now :

    grant audit_admin to motty;

    Now, connected as MOTTY, we can create an audit policy for the table :

    create audit policy wm_audit
        actions all on motty.winning_managers
        when 'sys_context(''userenv'', ''session_user'') not in (''SYS'', ''SYSTEM'')' evaluate per session
    Audit policy created.

    …and enable it…

    audit policy wm_audit
    Audit succeeded.
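    To confirm that the policy really is active, we can check the AUDIT_UNIFIED_ENABLED_POLICIES view (the exact column list of this view varies a little between releases, so I’m playing safe here) :

    ```sql
    -- List the enabled state of our new policy
    select *
    from audit_unified_enabled_policies
    where policy_name = 'WM_AUDIT';
    ```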

    Note that the when condition of the policy needs to be formatted as in this example. If you use the standard SQL method for escaping quotes – i.e. :

    when q'[sys_context('userenv', 'session_user') not in ('SYS', 'SYSTEM')]' evaluate per session

    you’ll find yourself on the wrong end of :

    ORA-46368: Audit policy does not have a simple rule condition.

    In terms of specifying which users should be included in, or excluded from, this policy, you could re-create it as follows :

    noaudit policy wm_audit
    drop audit policy wm_audit
    create audit policy wm_audit
        actions all on motty.winning_managers
    audit policy wm_audit except sys, system	

    Either way, with the policy in place and enabled, the next time Teddy connects and checks the table…

    select wm_name 
    from motty.winning_managers

    …MOTTY gets to see an entry in the audit log :

    select dbusername, sql_text
    from unified_audit_trail
    where unified_audit_policies = 'WM_AUDIT'
    DBUSERNAME           SQL_TEXT                                          
    -------------------- --------------------------------------------------
    TEDDY                select wm_name                                    
                         from motty.winning_managers                       

    Looking at the policy, we can confirm that any activities by SYS and SYSTEM on this table are excluded from any logging under this policy :

    select audit_condition
    from audit_unified_policies
    where policy_name = 'WM_AUDIT'
    sys_context('userenv', 'session_user') not in ('SYS', 'SYSTEM')

    So when SYSTEM runs a query against the table :

    select wm_name
    from motty.winning_managers
    where league_title is not null
    and fa_cup is not null
    and league_cup is not null

    It does not show up in the Audit Trail :

    select event_timestamp, sql_text
    from unified_audit_trail
    where unified_audit_policies = 'WM_AUDIT'
    and dbusername = 'SYSTEM'
    no rows selected

    At this point, you may be wondering why you don’t simply dispense with any journalling triggers you have hanging around in your application and just use the audit trail instead. Well, to illustrate one major difference in how these mechanisms might behave consider the following…

    Teddy has overheard Motty grumbling in the office and decides to insert another record into the table …

    insert into motty.winning_managers(wm_name, league_title, fa_cup, league_cup)
    values('MOTTY', 1888, 1872, 1961);

    …before having second thoughts…

    rollback;

    and checking that the record has indeed not been committed :

    select league_title, fa_cup, league_cup
    from motty.winning_managers
    where wm_name = 'MOTTY'
    no rows selected

    Unfortunately, when Motty checks the audit trail again, Teddy is busted…

    select dbusername, action_name, sql_text
    from unified_audit_trail
    where unified_audit_policies = 'WM_AUDIT'
    and event_timestamp > systimestamp - ((1/24/60) * 10) 
    order by event_timestamp
    DBUSERNAME           ACTION_NAME          SQL_TEXT                                
    -------------------- -------------------- ----------------------------------------
    TEDDY                INSERT               insert into motty.winning_managers(wm_na
                                              me, league_title, fa_cup, league_cup)   
                                              values('MOTTY', 1888, 1872, 1961)       
    TEDDY                SELECT               select league_title, fa_cup, league_cup 
                                              from motty.winning_managers             
                                              where wm_name = 'MOTTY'                 

    So, even though the INSERT statement was not committed, it still appears in the audit trail. Probably not the sort of thing you want from a journalling trigger.

    DML from stored procedure calls

    Teddy has now started to clean up the data using the MANAGERS_API package :

    begin
        -- Correct the dates for Sir Alex Ferguson's first FA Cup and League Cup wins as a manager
        motty.managers_api.upd(i_name => 'SIR ALEX FERGUSON', i_title => null, i_fac => 1990, i_lc => 1992);
        -- Delete Kenny Dalglish record...
        motty.managers_api.del(i_name => 'KENNY DALGLISH');
        -- Re-insert the record to reflect the fact that he has been knighted
        motty.managers_api.ins(i_name => 'SIR KENNY DALGLISH', i_title => 1986, i_fac => 1986, i_lc => 2012);
    end;
    /

    We can see each of the DML statements resulting from these stored program unit calls. However, whilst the SQL_TEXT shows the bind variable placeholders, the bind variable values themselves can be found in the SQL_BINDS column :

    select dbusername, action_name, sql_text, sql_binds
    from unified_audit_trail
    where unified_audit_policies = 'WM_AUDIT'
    and event_timestamp > systimestamp - ((1/24/60) * 10)
    order by event_timestamp
    DBUSERNAME      ACTION_NAME     SQL_TEXT                                                     SQL_BINDS                     
    --------------- --------------- ------------------------------------------------------------ ------------------------------
    TEDDY           UPDATE          UPDATE WINNING_MANAGERS SET LEAGUE_TITLE = NVL(:B4 , LEAGUE_  #1(0):  #2(4):1990 #3(4):1992
                                    TITLE), FA_CUP = NVL( :B3 , FA_CUP), LEAGUE_CUP = NVL(:B2 ,   #4(17):SIR ALEX FERGUSON     
                                    LEAGUE_CUP) WHERE WM_NAME = :B1                                                           
    TEDDY           DELETE          DELETE FROM WINNING_MANAGERS WHERE WM_NAME = :B1             #1(14):KENNY DALGLISH        
    TEDDY           INSERT          INSERT INTO WINNING_MANAGERS( WM_NAME, LEAGUE_TITLE, FA_CUP,  #1(18):SIR KENNY DALGLISH #2(
                                     LEAGUE_CUP) VALUES( :B4 , :B3 , :B2 , :B1 )                4):1986 #3(4):1986 #4(4):2012 

    Realising that not completing the list of managers will annoy anyone who has read this far in the hope of finding out, Teddy knuckles down and finishes the job. Once he’s done, we can see the full list :

    select initcap(wm_name) as manager, league_title, fa_cup, league_cup, 
        greatest( league_title, fa_cup, league_cup) as "Career Treble"
    from motty.winning_managers
    order by 5
    MANAGER                        LEAGUE_TITLE       FA_CUP   LEAGUE_CUP Career Treble
    ------------------------------ ------------ ------------ ------------ -------------
    Joe Mercer                             1968         1969         1961          1969
    Bill Nicholson                         1961         1961         1971          1971
    Don Revie                              1969         1972         1968          1972
    Sir Alex Ferguson                      1993         1990         1992          1993
    George Graham                          1989         1993         1987          1993
    Jose Mourinho                          2005         2007         2006          2007
    Sir Kenny Dalglish                     1986         1986         2012          2012
    Pep Guardiola                          2018         2019         2018          2019
    8 rows selected. 

    Useful Links and Acknowledgements

    As ever, the Oracle documentation is a mine of useful information.

    The page for the AUDIT command is a useful starting point, as is the page for CREATE AUDIT POLICY.

    As usual, Tim Hall’s Oracle Base has a number of useful articles on this topic, this one being of particular use to me in putting this post together.

    ORA-01775 – Looping Chain of Synonyms and Epic Football Punditry

    Tue, 2019-04-30 04:59

    In 1990, Liverpool became English League Champions for the 10th time in 15 seasons.
    Despite this impressive track record, my Dad came over all Yoda-esque and confidently predicted that they would not win The Title again in his lifetime.
    Since then, Liverpool have won everything else, including the Champions League – OK Dad , the European Cup – but the prediction has held.
    In fact, it’s gone on so long that it probably qualifies as a prophecy by now.
    Before the start of each season, I can assess Liverpool’s prospects, by simply enquiring after his health.
    “Mustn’t grumble, ‘cos if you do no-one bloody listens !” can be taken as a synonym for “I’ll be around for a while yet, so don’t waste your money on backing Liverpool to win it this season”.
    Which brings us to the subject of this post – namely the apparently random nature of the ORA-01775 error, where synonyms are concerned…

    To demonstrate what’s going on and (hopefully) why, I’ve created a table in the MIKE schema and granted SELECT on it to HR :

    create table predictions (
        predictor varchar2(128), 
        prediction varchar2(4000))
    insert into predictions values('MY DAD', 'Liverpool will not win the League again in my lifetime')
    grant select on predictions to hr

    As a result, I can query the table when connected as HR :

    I can also create a local synonym for the table so I don’t have to remember which schema it’s in …

    create or replace synonym predictions for mike.predictions;

    Now, if we were to drop the table (back in the MIKE schema, remember) …

    drop table predictions;

    … and then attempt to reference it directly, the result is entirely, well, predictable…

    Meanwhile, the synonym is unaffected by the removal of its underlying table …

    …which means we get a different error when we use it to try to access the dropped table…

    I’m using SQLcl here so we only see the first line of the error. However, we can use SQLcl’s OERR function to get the full error text, which is far more informative :

    The error message makes it very clear what might be causing the error and gives you a good idea how to fix it.
    Of course, you’ll always get this error if the synonym you’re referencing is pointing to a non-existent table, right ?

    When Oracle starts yanking your chain

    Let’s re-create the table in MIKE again …

    create table predictions (
        predictor varchar2(128), 
        prediction varchar2(4000))
    insert into predictions values('MY DAD', 'Liverpool will not win the League again in my lifetime')
    grant select on predictions to hr

    This time however, we’re going to create a public synonym (connecting as a user with CREATE PUBLIC SYNONYM privilege) …

    create public synonym predictions for mike.predictions

    …and drop HR’s local synonym…

    drop synonym predictions;

    …which leaves us with only the Public synonym pointing to the table…

    HR can still access the table, but this time it’s using the public synonym…

    Of course, if we drop the table again, we’ll get the same error when we attempt to access it via the synonym…

    Wait, what ? What happened to that nice, explanatory ORA-00980 ?

    OERR isn’t saying anything :

    According to the docs :

    Oracle Database attempts to resolve references to objects at the schema level before resolving them at the PUBLIC synonym level

    In this instance, it seems that Oracle thinks that the synonym is pointing to itself.

    We can confirm this hypothesis by using a public synonym with a different name to that of the table it’s pointing to.

    First of all though, we need to drop our current public synonym or we’ll wind up right back here in looping chain lunacy :

    drop public synonym predictions;
    create table predictions (
        predictor varchar2(128), 
        prediction varchar2(4000))
    insert into predictions values('MY DAD', 'Liverpool will not win the League again in my lifetime')
    grant select on predictions to hr
    create public synonym prophecies for mike.predictions;

    HR now accesses the table using the PROPHECIES public synonym…

    If we drop the underlying table again and attempt to use the synonym we’re back to a rather more reassuring error :

    The moral of this tale ? Well, if you get an ORA-01775 error then it’s probably worth checking the availability of any tables that are a target of a public synonym as a first step to debugging the issue.
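    As a sketch of that first step (an illustrative query, assuming you have access to the DBA views), this should list any public synonyms for local objects which no longer exist :

    ```sql
    -- Public synonyms whose target object is missing (remote synonyms excluded)
    select s.synonym_name, s.table_owner, s.table_name
    from dba_synonyms s
    where s.owner = 'PUBLIC'
    and s.db_link is null
    and not exists (
        select null
        from dba_objects o
        where o.owner = s.table_owner
        and o.object_name = s.table_name);
    ```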

    Oh, and my Dad knows best.

    Easy Listening with Datapump in the SQLDeveloper DBA Module

    Mon, 2019-04-01 12:05

    There are a number of ways to transfer data between Oracle Databases, one of which is to use the PL/SQL Datapump API – DBMS_DATAPUMP.
    If you wish to avail yourself of this utility but find the syntax a bit fiddly, you always have the option of getting SQLDeveloper to do (most of) it for you.
    What we’re talking about here is how to persuade the SQLDeveloper DBA module to :

    • Create and execute a custom Datapump export job
    • Do most of the work creating an import of a subset of the exported data

    The Application

    I’ve got a simple application which consists of a single table and some data :

    select title, artist, original_artist, is_mellow
    from mike.tracks
    TITLE                          ARTIST                ORIGINAL_ARTIST                                    IS_MELLOW
    ------------------------------ --------------------- -------------------------------------------------- ---------
    MOTORHEAD                      COURDUROY             HAWKWIND                                           Y        
    MOTORHEAD                      MOTORHEAD             HAWKWIND                                           N        
    THUNDERSTRUCK                  STEVE N SEAGULLS      AC/DC                                              Y        
    WHISKY IN THE JAR              METALLICA             THIN LIZZY                                         N        
    HIGHWAY TO HELL                CARLA BRUNI           AC/DC                                              Y        
    SHIPPING UP TO BOSTON          GODDESSES OF BAGPIPES DROPKICK MURPHYS                                   Y        
    ANOTHER GIRL ANOTHER PLANET    BLINK 182             THE ONLY ONES                                      N        
    SMOKE ON THE WATER             PAT BOONE             DEEP PURPLE                                        Y        
    EMERGENCY                      MOTORHEAD             GIRL SCHOOL                                        N        
    SMELLS LIKE TEEN SPIRIT        2 CELLOS              NIRVANA                                            Y        
    WHOLE LOTTA LOVE               CHICO ARNEZ           LED ZEPPLIN                                        Y        
    KING OF TWILIGHT               IRON MAIDEN           NEKTAR                                             N        
    PARANOID                       GUNTER NORIS          BLACK SABBATH                                      Y        
    C'MON FEEL THE NOISE           OASIS                 SLADE                                              N        
    14 rows selected. 

    The Datapump Export

    Note that I’ve deliberately kept things simple in this example. If you were looking at ways of transferring this kind of data-volume then Datapump is probably overkill.

    Anyhow, I want to export this table and all of its data. To do this in SQLDeveloper, connected as a DBA user I go to the View menu and select DBA.
    In the DBA tree, I need to right-click the Data Pump node and select the Data Pump Export Wizard

    This brings up the first screen of the Export Wizard. Slightly confusingly, it’s labelled as Step 2.
    The table I want to export is in a different Schema to the one I’m connected as so I select Schemas from the radio group :

    After clicking the Next button we get the chance to choose one or more schemas to include in the export. In my case, I just want the one schema :

    The next screen affords us the opportunity to include or exclude types of object. The Filter Types are the same as those you’d see in the export log file :

    On this occasion however, we’re not going use these filters so the screen for this step looks like this :

    The next screen allows us to select which specific tables we may want to include in the export.

    If I enter a search string in the Name field and click the Lookup button, I will eventually get a list of matching tables :

    I can then highlight the required table(s) and add them to the list for export by clicking the down arrow…

    In this case I want to export all of the data in all of the columns in the table so I have not specified a column list or a predicate.

    Next, I can specify the name and location of a logfile for the export.
    I’ve left the other options on this screen at their default setting…

    I now need to specify the name and location of the dump file. Once again, all other settings are left at the default :

    I’d like to run the job immediately so I don’t need to change any of the scheduling information :

    Finally, I can see a summary of the job :

    Interestingly, if I click on the PL/SQL tab, I can see the code that SQLDeveloper has generated for this export :

    That’s quite interesting. Maybe this feature will come in handy in a bit.
    For now though, I want to run the export immediately, so I just click the Finish button and SQLDeveloper starts the job whilst providing me a nice interactive view of it :

    If I go to the Data Pump/Export Jobs node in the DBA tree, I can see a record of the job.
    It will also display the contents of the log file :
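    Incidentally, the same job details that the DBA tree displays can also be retrieved from the data dictionary, should you prefer the command line :

    ```sql
    -- Datapump jobs known to the database, including completed ones not yet purged
    select owner_name, job_name, operation, job_mode, state
    from dba_datapump_jobs;
    ```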

    If we now check the directory on the OS – the path for which can be found with…

    select directory_path
    from dba_directories
    where directory_name = 'DATA_PUMP_DIR'

    … we can see that the export file has been created, with a timestamp appended in the filename :

    cd /opt/oracle/admin/XE/dpdump/7DE2A3259D9C3747E0530F84F25CE87C
    ls -l
    -rw-r-----. 1 oracle oinstall 331776 Mar 28 12:30 mike_tracks-12_30_11.dmp

    We need to make a note of the file name because we’ll need it when we come to do…

    The Datapump Import

    I want to import this table into the HR schema. However, I don’t want all of the data, just the mellow tracks because I really need something to chill out to. Oh, and drown out the sound of the country going down the toilet.

    Fortunately, SQLDeveloper can get us most of the way.

    Once again, we need to right-click the Data Pump node of the DBA tree and select Data Pump Import Wizard.
    We need to start by telling the Wizard the type of Import we want to do ( Tables) and the name of the dump file ( the export file we’ve just created) :

    Next, we select the table using the shuttle buttons :

    We want to import into the HR schema rather than the original schema :

    I only want to create the table if it does not already exist :

    Once again, I don’t want to schedule the job :

    Finally, we have the summary :

    Unlike the export, you’ll notice that the import Wizard has not given us the option to restrict the actual table rows we will be importing.
    However, we do still get the code that has been generated in the PL/SQL tab of the summary window :

    This time, rather than executing the job, I’m going to copy the code into an editor and add a manual “tweak” – namely :

        -- Make sure we only get the mellow tracks because I really need to chill out...
        dbms_datapump.data_filter(handle => h1, name => 'SUBQUERY', value => q'[where is_mellow = 'Y']', table_name => 'TRACKS', schema_name => 'MIKE');

    The full script now looks like this :

    set scan off
    set serveroutput on
    set escape off
    whenever sqlerror exit 
    declare
        s varchar2(1000); 
        h1 number;
        errorvarchar varchar2(100):= 'ERROR';
        tryGetStatus number := 0;
    begin
        h1 := dbms_datapump.open(operation => 'IMPORT', job_mode => 'TABLE', job_name => 'mike_playlist', version => 'COMPATIBLE'); 
        tryGetStatus := 1;
        dbms_datapump.set_parallel(handle => h1, degree => 1); 
        dbms_datapump.add_file(handle => h1, filename => 'mike_tracks_imp.log', directory => 'DATA_PUMP_DIR', filetype=>DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE); 
        dbms_datapump.set_parameter(handle => h1, name => 'KEEP_MASTER', value => 1); 
        dbms_datapump.metadata_filter(handle => h1, name => 'SCHEMA_EXPR', value => 'IN(''MIKE'')'); 
        dbms_datapump.metadata_filter(handle => h1, name => 'NAME_EXPR', value => 'IN(''TRACKS'')'); 
        dbms_datapump.add_file(handle => h1, filename => 'mike_tracks-12_30_11.dmp', directory => 'DATA_PUMP_DIR', filetype => 1); 
        dbms_datapump.metadata_remap(handle => h1, name => 'REMAP_SCHEMA', old_value => 'MIKE', value => UPPER('HR') ); 
        -- Make sure we only get the mellow tracks because I really need to chill out...
        dbms_datapump.data_filter(handle => h1, name => 'SUBQUERY', value => q'[where is_mellow = 'Y']', table_name => 'TRACKS', schema_name => 'MIKE');
        dbms_datapump.set_parameter(handle => h1, name => 'INCLUDE_METADATA', value => 1); 
        dbms_datapump.set_parameter(handle => h1, name => 'DATA_ACCESS_METHOD', value => 'AUTOMATIC'); 
        dbms_datapump.set_parameter(handle => h1, name => 'TABLE_EXISTS_ACTION', value => 'SKIP'); 
        dbms_datapump.set_parameter(handle => h1, name => 'SKIP_UNUSABLE_INDEXES', value => 0);
        dbms_datapump.start_job(handle => h1, skip_current => 0, abort_step => 0); 
        dbms_datapump.detach(handle => h1); 
        errorvarchar := 'NO_ERROR'; 
    exception
        when others then
            if ((errorvarchar = 'ERROR') and (tryGetStatus = 1)) then 
                dbms_datapump.detach(handle => h1); 
            end if;
            raise;
    end;
    /

    I can now run this code in a SQLDeveloper Worksheet…

    Whilst the feedback is not quite as good in this environment, you can still find the job itself in the DBA module, where you can see all of the details…

    If I now check the new HR.TRACKS table :

    Appropriately enough given current events, eight tracks is the precise number needed to form a Desert Island Discs playlist.

    Adding an item to the Ubuntu Unity Launcher

    Sun, 2019-03-24 17:20

    I would begin this post by saying something pithy about the latest Brexit crisis gripping the nation. However, watching any news or current affairs program at the moment leaves me feeling rather like this :

    Fortunately, I had a new version of SQLDeveloper to install on my Ubuntu 16.04 laptop to take my mind off things.
    After installing the software, I forgot – as I almost always do – how to add a new item to the Unity Launcher, so I thought I’d write down the steps this time.

    Unity is similar to Gnome, KDE and – as it turns out – XFCE in that it seems to follow the Desktop Entry Specification.
    So, all I need to do is :

    • create a .desktop file in a suitable location to tell Unity how to run SQLDeveloper
    • add it to the Unity Launcher
    Creating the desktop file

    The desktop files in this Ubuntu version are in /usr/share/applications so…

    sudo nano /usr/share/applications/sqldeveloper184.desktop

    The file itself looks like this :

    [Desktop Entry]
    Name=SQLDeveloper 18.4

    It’s probably worth briefly covering some of the entries in the file…

    In Unity, the Name is the string which appears in the tooltip on the Launcher.
    Exec is the fully qualified path of the command to run for this menu item.
    Type is mandatory and must be one of Application, Link, or Directory.
    Categories are optional but may be relevant if you are using a desktop which offers access to applications via a menu.
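    Putting those keys together, a complete version of the file might look something like this (a sketch only : the Exec and Icon paths are assumptions, so adjust them to wherever you’ve actually unzipped SQLDeveloper) :

    ```
    [Desktop Entry]
    Name=SQLDeveloper 18.4
    Comment=Oracle SQL Developer IDE
    Exec=/opt/sqldeveloper/sqldeveloper.sh
    Icon=/opt/sqldeveloper/icon.png
    Terminal=false
    Type=Application
    Categories=Development;IDE;Database;
    ```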

    Now we’ve created the file, we need to add it to the Launcher.


    Adding the application to the launcher

    The easiest way to do this, is to simply open the directory where the file is located in the File Manager…

    nautilus /usr/share/applications

    Sorting the files by Modified date (latest first), I can easily find my new file :

    Then, I just need to drag it over to the Launcher…

    Useful Links

    As is so often the case, this sort of thing is easy when you know how…or when someone is kind enough to tell you. So…

    Installing APEX and ORDS on Oracle 18cXE on CentOS

    Fri, 2019-03-01 12:06

    It’s been rather a trying week.
    Wales beat England in the Rugby on Saturday and every Welsh person alive has been observing the ancient tradition of rubbing English noses in it ever since.
    My claim to Welsh heritage by marriage has been given short shrift by Deb, whose accent has become rather more pronounced ever since the final whistle.

    All in all, the onslaught of Welsh chauvinism has left me feeling rather like this :

    Until things blow over, I’ve decided to spend more time in the shed. Fortunately, the Wifi signal is still pretty good so I’ve decided to use the free time by installing APEX 18.2 into an Oracle 18c RDBMS. As I’ve got time on my hands ( celebrations are unlikely to fizzle out for a couple of months yet), I’ve decided to follow Oracle’s recommendation and configure it to run on ORDS 18.4.
    Specifically, what I’ll be covering here is :

    • installing APEX 18.2
    • installing ORDS 18.4
    • configuring APEX to run on ORDS
    • configuring ORDS to run on HTTPS with self-signed SSL certificates
    • using systemd to start ORDS automatically on boot

    That should keep me occupied for a while…

    The Environment

    The server is running CentOS 7, a Linux distro that is functionally identical to the corresponding RHEL (Red Hat Enterprise Linux) release.
    The Oracle Database is 18c. In this context, the edition doesn’t really matter, but I’m using Express Edition.
    The APEX (Application Express) version is 18.2.
    The ORDS (Oracle Rest Data Services) version is 18.4.

    Whilst using ORDS with APEX makes your application architecturally N-tier – the ORDS server is a separate piece of software from the RDBMS hosting APEX – you can physically run ORDS on the same server as the database itself and this is what I’m doing here.
    Once again, this should make little (if any) difference to the steps required to complete the installation.

    I’m assuming that the server you want to install ORDS on will be headless. Therefore, all of the server-side steps described here are performed on the command line.

    Oracle Recommendations

    There are several recommendations spread through the relevant Oracle documentation which I have followed :

    • I’m running a multi-tenant database so APEX is installed in a PDB
    • I’ve installed APEX before ORDS
    • I’ve configured ORDS to run on HTTPS

    I’ll link to the relevant documentation for each recommendation as and when we get to it.

    Helpful Links

    I was going to put these at the end but then I realised you might want to refer to them before you get thoroughly confused by my ramblings. So…

    First up then…

    Installing APEX

    APEX itself sits entirely within the Oracle RDBMS.

    Checking Requirements

    According to the Installation Guide, the database requirements for APEX 18.2 are :

    • Oracle Database or higher
    • a database MEMORY_TARGET of at least 300MB
    • At least 220MB plus 60MB for each additional language in the “Apex” tablespace
    • At least 100MB free in the SYSTEM tablespace
    • If installing the development environment, Oracle XML DB

    Note that the “Apex” tablespace is in quotes because, by default, the APEX users tend to get installed into the SYSAUX tablespace.

    Let’s have a quick check to make sure that all these requirements are met on our system before we go any further.

    For the Database version, we can run the following query :

    select banner
    from v$version
    Oracle Database 18c Express Edition Release - Production             

    The MEMORY_TARGET setting is a bit more convoluted as it’s set to 0 by default :

    select display_value, isdefault, description
    from v$parameter
    where name = 'memory_target'
    DISPLAY_VALUE        ISDEFAULT  DESCRIPTION                             
    -------------------- ---------- ----------------------------------------
    0                    TRUE       Target size of Oracle SGA and PGA memory

    The description of MEMORY_TARGET in this query provides a clue as to where to look instead :

    The sga_target parameter holds the target size of the SGA, whilst
    the pga_aggregate_target parameter holds the “Target size for the aggregate PGA memory consumed by the instance”.


    select sum(value)/1024/1024 as "Total Size (MB)"
    from v$parameter
    where name in ('sga_target', 'pga_aggregate_target')
    Total Size (MB)

    Alternatively, if you’re running 12c or later, you can simply use Enterprise Manager Express :

    As for the tablespace space availability :

    select tablespace_name, 
        round((sum(maxbytes) - sum(bytes))/1024/1024) as "MB Free"
    from dba_data_files
    where tablespace_name in ('SYSTEM', 'SYSAUX')
    group by tablespace_name;
    TABLESPACE_NAME                   MB Free
    ------------------------------ ----------
    SYSTEM                              32398
    SYSAUX                              32148

    Finally, we can check that Oracle XML DB is present with :

    select comp_name, version_full
    from dba_registry
    where upper(comp_name) like 'ORACLE XML%'
    COMP_NAME            VERSION_FULL                  
    -------------------- ------------------------------
    Oracle XML Database                    

    Now that’s all done, we can go and get the software.

    Downloading APEX

    Head over to the APEX Download Page and pick up the latest version ( at the time of writing). Note that there’s no OS specific options because APEX sits entirely within the RDBMS.

    You can choose between the “All languages” version (705MB uncompressed) or “English language only” (310MB uncompressed). I’ve gone for the latter and therefore ended up with this file :

    -rw-rw-r-- 1 mike mike 94421975 Feb 20 11:51

    First we need to change the ownership of the file to the oracle user as that’s the os user we’ll be running the install as :

    sudo chown oracle:oinstall

    Now we can switch to the oracle user and unzip the file to what is usually the $ORACLE_BASE directory (/opt/oracle) :

    sudo su oracle
    unzip -d /opt/oracle
    echo $ORACLE_BASE
    cd $ORACLE_BASE/apex

    Before we connect to the database, it’s worth noting that the Installation Guide has this to say about APEX on a Multitenant Database :

    “Oracle recommends removing Oracle Application Express from the root container database for the majority of use cases, except for hosting companies or installations where all pluggable databases (PDBs) utilize Oracle Application Express and they all need to run the exact same release and patch set of Oracle Application Express. ”

    In my case I’m installing into an 18cXE database which does not have APEX pre-installed. Either way, I want to install into a PDB rather than the CDB.
    It’s also worth noting that you’ll be prompted for the following when you run the installation script :

    • The tablespace for the APEX application user (usually SYSAUX)
    • The tablespace for the APEX files user (SYSAUX)
    • The temporary tablespace (TEMP)
    • The virtual images directory (“/i/”)

    So, still as the oracle user, from /opt/oracle/apex :

    sqlplus /nolog
    conn / as sysdba
    alter session set container = xepdb1;

    If you want to make sure that you are where you should be :

    select sys_context('userenv', 'session_user') as session_user,
        sys_context('userenv', 'con_name') as container
    from dual
    ------------------------------ ------------------------------
    SYS			       XEPDB1

    Next, we need to check that the Default Profile’s password complexity function is disabled :

    select limit
    from dba_profiles
    where profile = 'DEFAULT'
    and resource_type = 'PASSWORD'
    and resource_name = 'PASSWORD_VERIFY_FUNCTION'

    If there is a password complexity function assigned, you’ll need to disable it.
    Remember to make a note of its name first so that you can put it back once the installation is complete.
    To unset it :

    alter profile default password_verify_function null;
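
    If you’re doing this in SQL*Plus, you can get it to remember the function name for you. Here’s a sketch using a substitution variable – it assumes the profile does currently have a verify function assigned (if the LIMIT comes back as NULL, skip the restore step) :

    ```sql
    -- Capture the current password verify function into a substitution variable...
    column limit new_value pwd_func noprint
    select limit
    from dba_profiles
    where profile = 'DEFAULT'
    and resource_type = 'PASSWORD'
    and resource_name = 'PASSWORD_VERIFY_FUNCTION';

    -- ...disable it for the duration of the install...
    alter profile default password_verify_function null;

    -- ...then, once the installation is complete, put it back :
    alter profile default password_verify_function &pwd_func;
    ```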

    Finally, we can start the installation. We want the full development environment so…

    @apexins.sql SYSAUX SYSAUX TEMP /i/

    This causes screens of messages and can run for some considerable time.
    Eventually though, you should end up with :

    Thank you for installing Oracle Application Express
    Oracle Application Express is installed in the APEX_180200 schema.
    The structure of the link to the Application Express administration services is as follows:
    http://host:port/pls/apex/apex_admin (Oracle HTTP Server with mod_plsql)
    http://host:port/apex/apex_admin     (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
    http://host:port/apex/apex_admin     (Oracle REST Data Services)
    The structure of the link to the Application Express development interface is as follows:
    http://host:port/pls/apex (Oracle HTTP Server with mod_plsql)
    http://host:port/apex     (Oracle XML DB HTTP listener with the embedded PL/SQL gateway)
    http://host:port/apex     (Oracle REST Data Services)
    timing for: Phase 3 (Switch)
    Elapsed: 00:00:06.44
    timing for: Complete Installation
    Elapsed: 00:05:51.38
    PL/SQL procedure successfully completed.
    1 row selected.

    According to the Installation Guide, we should now have 3 new users. However, it seems that four are actually created…

    select username, 
        default_tablespace as default_ts, 
        temporary_tablespace as temp_ts
    from cdb_users
    where trunc(created) = trunc(sysdate)
    USERNAME                  DEFAULT_TS                     TEMP_TS
    ------------------------- ------------------------------ ------------------------------
    APEX_PUBLIC_USER          USERS                          TEMP
    FLOWS_FILES               SYSAUX                         TEMP
    APEX_180200               SYSAUX                         TEMP
    APEX_INSTANCE_ADMIN_USER  USERS                          TEMP
    4 rows selected.

    APEX_INSTANCE_ADMIN_USER is not mentioned in the documentation but seems to have been created in addition to the three expected accounts.

    Setting up the APEX Admin User

    The apxchpwd.sql script we run for this purpose will prompt for a password. The script enforces the following password complexity rules :

    • Password must contain at least 6 characters
    • New password must differ from old password by at least 2 characters
    • Password must contain at least one numeric character (0123456789)
    • Password must contain at least one punctuation character (!"#$%&()*+,-/:;?_)
    • Password must contain at least one upper-case alphabetic character


    Setting up the APEX_PUBLIC_USER database account

    As we saw, the APEX_PUBLIC_USER account has been created as part of the installation.
    At this point, it has been created with a randomly generated password, which we’ll need to change to something we know.
    Additionally, you may feel it prudent to make sure that the password, once reset, won’t expire as, if it does, your application will stop working until you change it again.
    Note that this is something you need to consider carefully – does the convenience of not having to worry about password expiration for this account outweigh the security risks raised by never changing it ? In my case I think it’s fine because I’m messing about with a VM on my laptop. If you’re in a more formal environment, you may have a somewhat different risk appetite.

    If you’re horrified by the flagrant disregard for password security that I’m about to demonstrate, look away now…

    First, we need to create a profile where the password does not expire :

    create profile apex_pu limit password_life_time unlimited;

    Note that all of the other profile properties will have default values :

    select resource_name, limit
    from dba_profiles
    where profile = 'APEX_PU'
    order by resource_type, resource_name
    ------------------------------ --------------------

    Next, we assign this profile to APEX_PUBLIC_USER :

    alter user apex_public_user profile apex_pu;

    To confirm that the profile has been assigned :

    select profile
    from dba_users
    where username = 'APEX_PUBLIC_USER';

    Security conscientious objectors can look again now

    To change the password :

    alter user apex_public_user identified by Land0fmyfath3rs;

    …replacing Land0fmyfath3rs with your own choice of non-rugby referencing password.

    Finally, if you’ve unset the password verify function before starting, you now need to put it back :

    alter profile default password_verify_function myfunc;

    …where myfunc was the original password verify function.

    At this point, the APEX installation is pretty much done.

    Until now, I’ve been content to use the PL/SQL Gateway to serve my APEX pages. This involves using the Web Listener that is embedded in the database.
    If you want to go down this route, the installation steps can be found in Appendix A of the Installation Guide.

    However, the Installation Guide has this to say on the subject of choosing a web listener :

    “Oracle HTTP Server and Embedded PL/SQL gateway are considered legacy web listeners and their configuration instructions are in the appendix.”

    This time, I’m going to go with Oracle’s recommendation and use ORDS.

    Installing ORDS

    ORDS – or Oracle Rest Data Services to give it its full name – is a Java EE based Web Listener. For this installation, we’ll be running it standalone using its built-in Jetty Web Server.
    The official Installation and Configuration Guide can be found here.


    ORDS 18.4 requires an Oracle Database running release 11.1 or later. As the APEX install we’ve just done requires a slightly later minimum database version, we should be fine.

    If you really feel the need, you can confirm that we’re good to go with the following query :

    select banner
    from v$version
    Oracle Database 18c Express Edition Release - Production             

    The other requirement for ORDS is the presence of a JDK for version 8 or later.
    This can be ascertained from the server command line by running :

    java -version
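
    If you want to check the version programmatically rather than by eyeball, here’s a sketch of how you might pull the major version number out of a `java -version` style string (the version strings in the calls below are just examples) :

    ```shell
    #!/bin/sh
    # Sketch : derive the JDK major version from a "java -version" style string.
    # Pre-9 JDKs report versions like "1.8.0_242" (major = 8);
    # later ones report "11.0.6" and the like (major = 11).
    jdk_major() {
        case "$1" in
            1.*) v="${1#1.}"; echo "${v%%.*}" ;;
            *)   echo "${1%%.*}" ;;
        esac
    }

    jdk_major "1.8.0_242"   # prints 8
    jdk_major "11.0.6"      # prints 11
    ```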

    Downloading ORDS

    Back to Oracle we go, this time to the ORDS Download Page.
    Unsurprisingly given that we’re downloading a Java application, the download is not OS specific. The current version is 18.4.

    A short time later, you should now be the proud owner of…

    -rwxrwx---. 1 mike mike 59777214 Feb 20 12:01

    As with the APEX install, we’ll want to transfer ownership of the file to oracle…

    sudo chown oracle:oinstall

    …as this is the account we’ll be using for the installation…

    sudo su oracle
    echo $ORACLE_BASE
    mkdir $ORACLE_BASE/ords

    Now we extract the file into the new directory…

    unzip -d /opt/oracle/ords

    …which produces screens of output ending with…

    inflating: /opt/oracle/ords/examples/soda/getting-started/indexSpec1.json  
      inflating: /opt/oracle/ords/examples/db_auth/index.html  
      inflating: /opt/oracle/ords/examples/pre_hook/sql/install.sql  
      inflating: /opt/oracle/ords/examples/pre_hook/sql/uninstall.sql  
      inflating: /opt/oracle/ords/examples/plugins/plugin-echo-cmd/src/  
      inflating: /opt/oracle/ords/examples/plugins/plugin-echo-cmd/src/  
      inflating: /opt/oracle/ords/examples/plugins/plugin-demo/build.xml  
      inflating: /opt/oracle/ords/examples/plugins/plugin-demo/.classpath  
      inflating: /opt/oracle/ords/examples/plugins/plugin-demo/.project  
      inflating: /opt/oracle/ords/examples/plugins/lib/javax.inject-1.jar  
      inflating: /opt/oracle/ords/examples/soda/getting-started/QBE.2.json  
      inflating: /opt/oracle/ords/examples/db_auth/sql/install.sql  
      inflating: /opt/oracle/ords/examples/pre_hook/sql/custom_auth_api.pls  
      inflating: /opt/oracle/ords/examples/soda/getting-started/poUpdated.json  
      inflating: /opt/oracle/ords/examples/soda/getting-started/QBE.3.json  
      inflating: /opt/oracle/ords/examples/soda/getting-started/QBE.5.json  
      inflating: /opt/oracle/ords/examples/soda/getting-started/poPatchSpec.json  
      inflating: /opt/oracle/ords/examples/soda/getting-started/QBE.1.json  
      inflating: /opt/oracle/ords/examples/soda/getting-started/QBE.4.json  
      inflating: /opt/oracle/ords/examples/soda/getting-started/po.json  
      inflating: /opt/oracle/ords/examples/pre_hook/  
      inflating: /opt/oracle/ords/examples/soda/getting-started/qbePatch.json  
      inflating: /opt/oracle/ords/examples/soda/getting-started/POList.json  
      inflating: /opt/oracle/ords/examples/pre_hook/index.html  
    Make sure the PL/SQL Gateway is disabled

    You can check whether the PL/SQL Gateway is currently enabled by running…

    select dbms_xdb.gethttpport
    from dual;

    If this query returns something other than zero then you can disable the PL/SQL Gateway as follows :

    exec dbms_xdb.sethttpport(0)
    Copy the APEX images

    Before we configure ORDS, we need to copy the APEX images somewhere that is visible to the ORDS server so…

    cd /opt/oracle/ords
    mkdir apex


    cd apex
    cp -r $ORACLE_BASE/apex/images .
    ls -l
    total 44
    drwxr-xr-x. 33 oracle oinstall 28672 Feb 20 16:57 images
    Initial ORDS Installation

    To start with, we’re going to install ORDS and configure it to run on HTTP. This is simply so that we can sanity check that ORDS and APEX are working together as expected.
    Note that I’m accepting the default location for the default tablespace of the two new users that will be created as part of the installation. If you’re planning to do the same then you should make sure that you have a USERS tablespace available in your PDB.

    Finally, we can now run the installation. Still connected as oracle :

    cd $ORACLE_BASE/ords
    java -jar ords.war install advanced

    At this point, we now have the option to start ORDS…

    …which causes a lot of feedback…

    It’s perhaps unsurprising that we hit the ORA-28000 error at this stage…

    alter session set container = xepdb1;
    select username, account_status
    from dba_users
    where username like 'ORDS%'
    or username like 'APEX%'
    USERNAME                       ACCOUNT_STATUS                
    ------------------------------ ------------------------------
    APEX_180200                    LOCKED                        
    APEX_INSTANCE_ADMIN_USER       OPEN                          
    APEX_PUBLIC_USER               LOCKED                        
    ORDSYS                         EXPIRED & LOCKED              
    ORDS_METADATA                  EXPIRED & LOCKED              
    ORDS_PUBLIC_USER               OPEN                          
    6 rows selected. 

    We’ll sort that out in a bit. For now though, let’s just check that ORDS’ Jetty server is accessible.
    As ORDS is running in the foreground, we’ll need to leave it running and start a separate session.
    Then we can test it with :

    curl -ISs http://frea.virtualbox:8080

    …which should return the HTTP header :

    Now we’re happy that ORDS itself is running, we can stop it by pressing [CTRL]+C in the Terminal session it’s running in.

    Next, we need to run ords with the validate option :

    cd $ORACLE_BASE/ords
    java -jar ords.war validate 

    The output looks innocuous enough :

    However, if we look at the log file that has been written, we can see that there’s been a fair bit of activity…

    [*** script: ords_alter_session_script.sql] 
    PL/SQL procedure successfully completed.
    [*** script: ords_schema_mapping.sql] 
    INFO: Configuring ORDS_PUBLIC_USER to map APEX Workspaces and ORDS schemas
    Session altered.
    Configuring APEX and ORDS schemas for url mapping
    APEX_REST_PUBLIC_USER does not exist
    APEX_LISTENER.POOL_CONFIG synonym does not exist, stubbing out
    PL/SQL procedure successfully completed.
    PL/SQL procedure successfully completed.
    Grant succeeded.
    INFO: Completed configuring ORDS_PUBLIC_USER to map APEX Workspaces and ORDS Schemas
    Session altered.
    [*** script: ords_repair_proxy_connect.sql] 
    INFO: Checking ords enabled schemas and its proxy user
    Session altered.
    PL/SQL procedure successfully completed.
    [*** script: ords_migrate_grant_priv.sql] 
    Session altered.
    INFO: Verify if Application Express exists to setup the migration privileges for
    INFO: Completed setting up the APEX REST migration privileges for ORDS.
    PL/SQL procedure successfully completed.
    [*** script: ords_validate_objects.sql] 
    Session altered.
    INFO: 15:25:18 Validating objects for Oracle REST Data Services.
    VALIDATION: 15:25:18 Starting validation for schema: ORDS_METADATA
    VALIDATION: 15:25:18 Validating objects
    VALIDATION: 15:25:19 Validating ORDS Public Synonyms
    VALIDATION: 15:25:20 Total objects: 262, invalid objects: 0
    VALIDATION: 15:25:20     72  INDEX
    VALIDATION: 15:25:20      1  JOB
    VALIDATION: 15:25:20     12  PACKAGE
    VALIDATION: 15:25:20     12  PACKAGE BODY
    VALIDATION: 15:25:20     44  PUBLIC SYNONYM
    VALIDATION: 15:25:20      1  SEQUENCE
    VALIDATION: 15:25:20     14  SYNONYM
    VALIDATION: 15:25:20     27  TABLE
    VALIDATION: 15:25:20     26  TRIGGER
    VALIDATION: 15:25:20     20  TYPE
    VALIDATION: 15:25:20      6  TYPE BODY
    VALIDATION: 15:25:20     27  VIEW
    VALIDATION: 15:25:20 Validation completed.
    INFO: 15:25:20 Completed validating objects for Oracle REST Data Services.
    PL/SQL procedure successfully completed.
    Session altered.
    Commit complete.
    [*** script: ords_alter_session_script.sql] 
    PL/SQL procedure successfully completed.

    In case you’re wondering, the scripts referenced in this log file are located in ords.war itself.

    Now we’re ready to…

    Configure APEX to run on ORDS

    As oracle…

    cd $ORACLE_BASE/apex
    sqlplus / as sysdba

    Once connected to the database…

    alter session set container = xepdb1;

    This will create two new user accounts :


    You will be prompted for a password for each of them.

    Once the script is completed, you should be able to confirm that two further accounts have been created :

    select username, account_status
    from dba_users
    where username in ('APEX_LISTENER', 'APEX_PUBLIC_USER')
    order by 1
    USERNAME                       ACCOUNT_STATUS                
    ------------------------------ ------------------------------
    APEX_LISTENER                  OPEN                          
    APEX_PUBLIC_USER               LOCKED                        
    Granting access to the APEX owner via ORDS

    Once again, connect to the database as sysdba :

    alter session set container = xepdb1;

    begin
        dbms_network_acl_admin.append_host_ace(
            host => 'localhost',
            ace => xs$ace_type(
                privilege_list => xs$name_list('connect'),
                principal_name => 'APEX_180200',
                principal_type => xs_acl.ptype_db));
    end;
    /

    alter user apex_public_user account unlock;

    Now if we re-start ORDS…

    java -jar ords.war standalone

    …and in a separate session we should be able to get a sensible header from the apex_admin URL :

    curl -ISs http://frea.virtualbox:8080/ords/apex_admin

    We could just call it a day at this point. However, if you like your applications to be a little more secure than an England three-quarter under a Dan Biggar garryowen, you’ll want to follow Oracle’s recommendation :
    “If you want to use RESTful services that require secure access, you should use HTTPS.”

    In order to do this, we’re going to have to do some messing about with SSL certificates.

    Generating a self-signed SSL Certificate

    We need to connect as root and create a directory to hold the key :

    sudo -s
    mkdir -p /etc/ssl/private/frea.virtualbox
    cd /etc/ssl
    chmod -R 700 private/frea.virtualbox

    Now we can generate the key and the certificate :

    cd /etc/ssl/private/frea.virtualbox
    openssl req -newkey rsa:2048 -nodes -keyout frea.virtualbox.key -x509 -days 3650 -out frea.virtualbox.crt

    Note that the days value I’m using will create a certificate that does not expire for 10 years. Whilst this does mean I won’t have to worry about the certificate expiring and stopping my application from working at an unexpected moment, it’s probably not strictly in line with security best practices. If you find yourself doing this in a production environment, you may want to consider a rather shorter lifetime for your certificate.

    Anyhow, we will be prompted to supply some values. The ones I’m using are :

    • Country Name : UK
    • State or Province Name : England
    • Organization Name : The Anti-Kyte
    • Organizational Unit Name : Mike
    • Common Name : frea.virtualbox

    …all of which looks like this :

    You should now have two new files :

    ls -l
    total 8
    -rw-r--r--. 1 root root 1363 Feb 22 11:45 frea.virtualbox.crt
    -rw-r--r--. 1 root root 1708 Feb 22 11:45 frea.virtualbox.key

    OK, we can stop being root now.

    Incidentally, if you want to verify the expiry date of your new certificate :

    sudo openssl x509 -text -noout -in /etc/ssl/private/frea.virtualbox/frea.virtualbox.crt |grep 'Not After'
     Not After : Feb 19 11:45:07 2029 GMT

    The easiest way to reconfigure ORDS to use the certificate – and HTTPS – is to stop any running instances of ORDS, connect as oracle and then start it again, using the appropriate command line parameters :

    cd $ORACLE_BASE/ords
    java -jar ords.war standalone --secure-port 8443 --secure-host frea.virtualbox --secure-cert-path /etc/ssl/private/frea.virtualbox/frea.virtualbox.crt --secure-cert-key-path /etc/ssl/private/frea.virtualbox/frea.virtualbox.key

    If we test using curl…

    curl -ISs https://frea.virtualbox:8443/ords/apex_admin

    The presence of a self-signed certificate will cause comment :

    …so we’ll have to use a bit of TLC…

    curl -kISs https://frea.virtualbox:8443/ords/apex_admin

    This does mean that your web browser is also likely to object to the cert the first time we point it at this site. We’ll come onto that in a bit.

    For now though, we can see that the ssl settings have been added to the properties file in the standalone sub-directory :

    cd $ORACLE_BASE/ords/ords/standalone

    The file now looks like this :

    #Fri Feb 22 11:48:35 GMT 2019
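
    Apart from the timestamp, the entries of interest are the SSL ones. On 18.4 they look something like this – I’m quoting the property names from memory rather than gospel, so treat this as a sketch and verify against your own file :

    ```
    jetty.secure.port=8443
    ssl.host=frea.virtualbox
    ssl.cert=/etc/ssl/private/frea.virtualbox/frea.virtualbox.crt
    ssl.cert.key=/etc/ssl/private/frea.virtualbox/frea.virtualbox.key
    ```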

    From now on, when ORDS starts, it will use these properties.

    Now ORDS is installed and configured, we need to get it to start when the server boots…

    Creating a Systemd service for ORDS

    The documentation mentions the fact that there is a limit on the size of POST data when running standalone and suggests increasing this limit by starting ORDS like this :

    java -Dorg.eclipse.jetty.server.Request.maxFormContentSize=3000000 -jar ords.war

    We will implement this suggestion in our service.

    So, as root :

    cd /etc/systemd/system
    nano ords.service

    The file should look something like this :

    Description=Oracle Rest Data Services (ORDS) Embedded Jetty WEB Server for APEX
    ExecStart=/usr/bin/java -Dorg.eclipse.jetty.server.Request.maxFormContentSize=3000000 -jar /opt/oracle/ords/ords.war standalone
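
    Those two lines on their own don’t constitute a valid unit file – systemd needs its section headers. A minimal complete version might look something like the following ( the After, User and WantedBy values here are my assumptions – adjust to taste ) :

    ```ini
    [Unit]
    Description=Oracle Rest Data Services (ORDS) Embedded Jetty WEB Server for APEX
    After=network.target

    [Service]
    # assumption : run the service as the oracle os user
    User=oracle
    ExecStart=/usr/bin/java -Dorg.eclipse.jetty.server.Request.maxFormContentSize=3000000 -jar /opt/oracle/ords/ords.war standalone

    [Install]
    WantedBy=multi-user.target
    ```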

    We now need to make the file executable :

    chmod a+x ords.service

    Finally, we need to add the ORDS service to systemctl :

    systemctl daemon-reload
    systemctl enable ords
    systemctl start ords

    Now we can check that ORDS is up :

    systemctl status ords

    It’s alive !!!

    We can test once again with…

    curl -kISs https://frea.virtualbox:8443/ords/apex_admin

    …which should return something like :

    HTTP/1.1 302 Found
    Date: Thu, 28 Feb 2019 22:27:34 GMT
    Content-Type: text/html;charset=utf-8
    X-Content-Type-Options: nosniff
    X-Xss-Protection: 1; mode=block
    Cache-Control: no-store
    Pragma: no-cache
    Expires: Sun, 27 Jul 1997 13:00:00 GMT
    Set-Cookie: ORA_WWV_USER_250198699356158=ORA_WWV-R7rbCSQ886zYN9Q6CXIOpnb2; path=/ords; secure; HttpOnly
    Location: https://frea.virtualbox:8443/ords/f?p=4550:10:14143531583026:::::
    Transfer-Encoding: chunked
    Making APEX available to remote machines

    Now we’ve got everything configured, we simply need to update the server firewall to allow traffic to the HTTPS port :

    sudo firewall-cmd --zone=public --permanent --add-port=8443/tcp
    sudo firewall-cmd --reload

    We can now confirm that the port is available :

    sudo firewall-cmd --list-ports
    1522/tcp 5500/tcp 8443/tcp

    Finally, we can now access APEX from a remote machine.

    When we first hit the APEX URL, Firefox is understandably skeptical of my self-signed certificate…

    …so I need to convince it that I’m trustworthy (or just add an exception)…

    …before I can finally see APEX in the browser :

    That’s it, I can now leave my sanctuary safe in the knowledge that APEX and ORDS are now configured and that the Welsh Nationalist fervour has abated…except that it’s now St. David’s Day. On the plus side, it looks like I’m having Cheese and Leek Pie for tea rather than the Humble Pie I’ve been eating all week.

    Oracle Create Schema – multiple DDL statements in a single transaction

    Thu, 2019-02-14 16:08

    I was going to begin with some extended Brexit metaphor to illustrate the chaos and confusion that can ensue when you first encounter Oracle’s CREATE SCHEMA command.
    Fortunately, the Dutch Government saved me the trouble :


    Much as I’d like to believe that the Cookie Monster has finally overcome his Sesame Street type casting, I can’t help noticing that the Brexit Monster never seems to be in the same room as this guy…


    In Oracle, the term “schema” is used interchangeably with the term “user”. Creating a user in Oracle automatically creates a schema of the same name belonging to that user.
    The process is so seamless that it’s almost impossible to tell where one begins and the other ends.
    You may therefore be somewhat confused the first time you encounter Oracle’s CREATE SCHEMA command…

    Contrary to what you might expect, CREATE SCHEMA does not enable you to create a database user. What it does do is to enable you to run multiple DDL statements inside a single transaction.
    Now, if you thought that any DDL statement in Oracle would end with an implicit commit of the current transaction, well that makes two of us (at least).

    To demonstrate why this is not necessarily true, let’s say we have a user created like this :

    create user brexit_monster identified by lets_go_dutch 
        default tablespace users quota 5M on users;

    grant create session, create table, create view, create procedure to brexit_monster;

    …and a role created like this :

    create role erg

    If we want to create a table in the brexit_monster schema, then a view on that table, followed by a grant to the ERG role we could achieve this with three separate statements in a script…

    create table brexit_types( brexit_cid varchar2(30),  classification varchar2(65000));
    create or replace view promised_land as select brexit_cid from brexit_types where classification = 'HARD';
    grant select on promised_land to erg;

    …however, if the first statement fails, the next two will also fail leaving something rather messy…


    If instead, we were to wrap these statements into a single CREATE SCHEMA…

    create schema authorization brexit_monster
        create table brexit_types( 
            brexit_cid varchar2(30),  
            classification varchar2(65000))
        create view promised_land as 
            select brexit_cid 
            from brexit_types 
            where classification = 'HARD'
        grant select on promised_land to erg

    …the error stack is somewhat more manageable


    Note however, that the View statement has changed a bit. It’s now a straight CREATE rather than CREATE OR REPLACE.
    In fact, if you try to plug any DDL statement into CREATE SCHEMA that is not either a CREATE TABLE, CREATE VIEW or GRANT then you will get :

    ORA-02422: missing or invalid schema element

    If we issue the correct statement…

    create schema authorization brexit_monster
        create table brexit_types( 
            brexit_cid varchar2(30),  
            classification varchar2(5))
        create view promised_land as 
            select brexit_cid 
            from brexit_types 
            where classification = 'HARD'
        grant select on promised_land to erg
    Schema created.

    …we can see that all of the DDL has been executed…

    select object_name, object_type
    from user_objects
    order by 2

    OBJECT_NAME                    OBJECT_TYPE
    ------------------------------ -----------------------
    BREXIT_TYPES                   TABLE
    PROMISED_LAND                  VIEW

    select grantee, privilege
    from user_tab_privs
    where owner = 'BREXIT_MONSTER'
    and table_name = 'PROMISED_LAND'

    GRANTEE                        PRIVILEGE
    ------------------------------ ------------------------------
    ERG                            SELECT

    At this point you may still be somewhat skeptical about whether all this really happens in a single transaction.

    I mean, how do we know that Oracle isn’t just parsing each statement to make sure they’re all valid and then trusting it won’t hit a runtime error ?
    One way to find out is, of course, to engineer a runtime error.

    You remember when I created the BREXIT_MONSTER user and you thought that I was a bit stingy with the tablespace quota allocation ? Well…

    set serverout on
    exec dbms_output.put_line('Current transaction = '||dbms_transaction.local_transaction_id(true));
    create schema authorization brexit_monster
        create table wto_terms(is_problem varchar2(3) default 'NO')
        create table little_objects as select * from all_objects fetch first 5 rows only
        create table my_objects as select * from all_objects
    exec dbms_output.put_line('Current transaction='||dbms_transaction.local_transaction_id(true));

    When we run this we get …


    We can see that the local_transaction_id has changed. So the transaction that the CREATE SCHEMA was running in has ended. The question is – has it been committed or rolled back ?
    Now to check if any of the tables have been created…

    select table_name
    from user_tables
    where table_name in ('WTO_TERMS', 'LITTLE_OBJECTS')
    no rows selected

    We could go into great detail here and do some digging around in trace files.
    Then again, there’s been quite enough procrastination around this whole Brexit business already.
    As we can see, CREATE SCHEMA really does execute multiple DDL statements in a single database transaction – i.e. either all of the statements succeed or none of them do.
    In other words, if it walks like a Brexit Monster and talks about sunlight uplands it’s probably Boris Johnson.

    Pi in a time of Brexit – Remote Controlling Raspberry Pi from Ubuntu using VNC

    Sun, 2019-01-27 14:52

    What with Larry the Downing Street Cat and Palmerston, his counterpart at the Foreign Office, Teddy suspects he knows the real reason for the Country’s current travails.
    Here he is, doing his best Boris Johnson impression :

    “No wonder Brexit’s a Cat-astrophe !”

    In an attempt to distract myself from the prospect of the country being ruined by this feline conspiracy, I’ve been playing with my new Raspberry Pi Model 3 B-spec.
    At some point, I’m going to want to connect remotely to the Desktop on the Pi. What follows is how I can do this using VNC…

    Why VNC ?

    Regular readers (hello Mum!) may be wondering why I’m returning to this topic, having previously used RDP to remote to a Pi.

    Well, this newer model Raspberry Pi is running Raspbian Stretch (version 9), as opposed to the older machine, which was running Jessie (version 8).
    Stretch has VNC included by default so it makes sense to use this protocol for connecting to the desktop remotely.

    Now, the more observant among you will notice that you can simply and easily enable VNC in the same way as you can enable SSH during initial setup.
    You can see this option in the Preferences/Raspberry Pi Configuration menu when you click on the Interfaces tab :

    If, like me, you don’t discover that this is the way to go until after you’ve put away all those old peripherals you had to dig out of the attic to set up your Pi, then fear not : you can also do this from the command line…

    On the Pi

    First of all, we want to make sure that we do, in fact, have the required VNC software on the Pi.
    So, once I’ve connected to the Pi via SSH, I can run this from the command line :

    apt list realvnc*

    …which should come back with :

    Now we want to configure VNC on the Pi so, on the command line, we need to enter …

    sudo raspi-config

    This will bring up the Software Configuration Tool screen below.
    Using the arrow keys on the keyboard, navigate to the line that starts 5 Interface Options and hit the [Enter] key.

    …which brings up a sub-menu. Here, you need to navigate to P3 VNC and hit [Enter]

    …and [Enter] again to confirm you want to enable VNC…

    …before you receive a message confirming that VNC is now enabled :

    To exit, hit [Enter]

    I’m not sure if it’s strictly necessary but, at this point, I restarted the Pi by entering :

    sudo reboot

    In Ubuntu

    Meanwhile, on the Ubuntu machine (I’m running Ubuntu 16.04), we need to head over to the VNC Viewer download site.
    As I’m on a 64-bit version of Ubuntu, I chose the DEB x64 version to download.

    Incidentally, if you want to tell whether you’re running a 32-bit or 64-bit Linux distro, you can run :

    uname -i

    If this returns x86_64 then you’re on a 64-bit platform.
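    As a quick sketch, that check can be scripted – here using uname -m, which also reports the machine hardware name (the echoed advice is just illustrative) :

```shell
#!/bin/sh
# Report which VNC Viewer package to download, based on the
# machine architecture reported by uname -m.
arch=$(uname -m)
case "$arch" in
    x86_64)
        echo "64-bit platform - the DEB x64 package is the one to pick" ;;
    i?86)
        echo "32-bit platform - pick the DEB x86 package instead" ;;
    *)
        echo "Other architecture ($arch) - check the download site for a matching build" ;;
esac
```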

    Anyhow, when prompted, I opted to open the downloaded file – VNC-Viewer-6.19.107-Linux-x64.deb with Software Install

    …which results in…

    Now we simply click Install and enter our password when prompted.

    Once the installation is completed we’re ready to connect remotely.

    Running VNC Viewer

    To start the viewer, you can simply open a Terminal and run :


    After you’ve accepted the licence, enter the address of the server to connect to (in my case pithree) :

    You’ll then be prompted to enter the username and password of a user on the Pi :

    Press OK and…

    You can tweak the display to make it a bit more practical.
    In the VNC Window, move the cursor to the top of the screen so that the Viewer menu slides down then select the cog-wheel icon (second from the right) :

    In the Options tab, set Picture Quality to High and Scaling to Scale to fit window :

    After this, the VNC viewport should scale to the size of the VNC window itself.

    Now all I need is something else to distract myself from the ongoing battle between Project Fear and Project Farce.