Friday, 19 June 2015

Building an Oracle 11G XE Server with APEX 5, ORDS and SQLcl in 30 minutes


This guide assumes the following prerequisites:

1. You have VirtualBox installed

2. You have the necessary Oracle installation files (oracle-xe, apex5, ords3 and sqlcl)

3. You have the installation image for CentOS - I just grabbed the Minimal image from a local mirror.

Set up the virtual machine

Give the virtual machine an appropriate name, and since CentOS is based on Red Hat, specify the type as Red Hat (64-bit).

Allocate 1GB RAM

Create a new virtual hard drive (for this demo, I will go with a 10 GB disk size).

Your machine and virtual hard drive will now have been created, and you should see the machine in your machine list in VirtualBox. The next step is to go into the settings to apply some configuration.

Firstly, you will want to verify your network configuration - for this demo, I will just be using a bridged network connection.

You will also want to load in the Linux disk image. This is done through the Storage tab by selecting the empty disc node and choosing the image - mine has been used in the past, so I can easily select it without locating a new disk.

Now we can start up the machine and install the operating system.

Before you begin the installation process, you will want to go into the network configuration, turn on the device, and give the machine a host name.

Once the installation begins, you will want to set the root password and create a personal user account. Here, I also make myself an administrator - this will allow us to run commands as root using the sudo command.

Once that process has finished, you can reboot the machine to get into your newly installed system.

Transferring the Oracle files to your machine

Before getting started, we need to have all the installation media on our new server. I have already downloaded these files onto my workstation. So first, I'll create a folder on the server to transfer them to.

mkdir ~/orafiles

Then, to transfer the files across, I'll use FileZilla (an FTP program). We can connect using SFTP (which runs over SSH on port 22).
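If you'd rather stay on the command line, scp can do the same transfer over SSH. A sketch, where the user name, host name and file name patterns are placeholders for your own:

```
# Copy the installation media from the workstation to the server's
# staging folder ("trent" and "oraclexe-server" are placeholders)
scp oracle-xe*.zip apex*.zip ords*.zip sqlcl*.zip trent@oraclexe-server:~/orafiles/
```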

Installing Oracle XE, APEX, ORDS and SQLcl

The next step is to install Oracle. There is a project on GitHub with scripts that automate essentially the whole installation process. For this, we will first need to install git.

sudo yum install git

The project is located at:

So, we need to clone the project onto our server, which is done with the following command.

git clone

This will clone all the project files into a new folder named oraclexe-apex in the current directory. The next step is to change into that directory:

cd oraclexe-apex

In this folder, there is a config file that needs updating. One editor that comes installed on a minimal CentOS is vi, but you can install another if you prefer - a popular choice is nano.


The important properties you need to add a value for are:


These all relate to the Oracle installation media that we copied over to the server in the previous step. So the top of the config file should end up looking something like:

#To download Oracle XE, go to:

#To download APEX, go to:

#To download ORDS, go to:
#Note, for now ORDS 2 is in production. ORDS 3 specific scripts have also been included and need to be finalized once it's out of beta

#To download SQLcl, go to:
#Note: at time of writing SQLcl is still in beta
#No default value here, since adding SQLcl is optional (not required)

The path starts with file:///home/trent/orafiles because my user account is named trent, and I placed all the files in a new folder named orafiles.
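As an illustration only, the file properties might then be set like the following - the property names and zip file names here are my assumptions (based on the OOS_ prefix used by the project's other parameters), so check the project's config file for the exact ones:

```
OOS_ORACLE_FILE_URL=file:///home/trent/orafiles/oracle-xe-11.2.0-1.0.x86_64.rpm.zip
OOS_APEX_FILE_URL=file:///home/trent/orafiles/apex_5.0.zip
OOS_ORDS_FILE_URL=file:///home/trent/orafiles/ords.3.0.0.zip
OOS_SQLCL_FILE_URL=file:///home/trent/orafiles/sqlcl.zip
```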

The rest of the parameters are quite self explanatory - mostly relating to ports to use for various services.

The script also creates an APEX workspace during the installation. If you don't wish to create a workspace/user during the installation, you can set the property OOS_ORACLE_CREATE_USER_YN to N. Otherwise, you can update the property OOS_APEX_USER_WORKSPACE to your preferred workspace name.

I'd also suggest updating OOS_APEX_ADMIN_EMAIL to something other than (default).

Now, we can start the installation process.

sudo ./

This installation process should take around 20 minutes to complete. After it completes, the script is set to reboot the machine - so ensure you save any unsaved work before running the above.

Once it has completed and successfully rebooted, you should then be able to access APEX on port 80 at your machine's IP address. The default login details for the INTERNAL workspace are admin/Oracle1! (if you didn't update those configuration parameters).

It's also worth noting that the default configuration doesn't leave port 1521 open for connecting to the database. So you will need to set up SSH port forwarding on your local machine in order to connect (or open that port on the server). SQL Developer has built-in support for this.
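For example, an SSH tunnel forwarding local port 1521 to the server might look like this (the user and host names are placeholders):

```
# Forward local port 1521 to port 1521 on the server, over SSH
ssh -L 1521:localhost:1521 trent@oraclexe-server

# While the tunnel is open, connect as though the database were local, e.g.
# sqlplus system@localhost:1521/XE
```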

Monday, 18 May 2015

Building ORDS plugins

Oracle REST Data Services (ORDS) recently released version 3. If you didn't notice, this version allows you to develop plugins (in Java) that can be imported into your ORDS instance to provide extended functionality.

You will want to refer to the Oracle REST Data Services Plugin API, which itself includes a couple of nice documents:
  1. Getting started guide
  2. Developer guide
This post just re-iterates a lot of what has already been covered in the getting started guide, with a focus on using IntelliJ IDEA.

Setting the project up in IntelliJ IDEA

So the first step is to set up a new project. First however, you will want to verify what version of Java you are using on your server (assuming you are developing on a separate workstation). This can be verified with the command java -version.

[user@host ~]# java -version
java version "1.7.0_79"
OpenJDK Runtime Environment (rhel- u79-b14)
OpenJDK 64-Bit Server VM (build 24.79-b02, mixed mode)

In my instance, I'm running Java 1.7, so I'll want to target that same JVM when setting up the new project.

After setting up the project, you will want to include the library files required for developing plugins. The libraries can be found in ORDS_INSTALL_FOLDER/examples/plugins/lib. This is done in IDEA by opening the Project Structure tool window (File --> Project Structure... or Ctrl+Alt+Shift+S). Select the Libraries node and add all the specified files.

Now that the libraries are available to our project, we can start coding our plugin (or, in this example's case, just add in the demo Java file from ORDS_INSTALL_FOLDER/examples/plugins/plugin-demo/src/

The next step is to get the build process to generate the jar file. In IDEA, this is typically done by setting up an Artifact (again through the Project Structure tool window). 

The only problem is that this method on its own doesn't seem to generate the oracle.dbtools.plugin.api.di.providers file (inside the META-INF folder), which contains a list of classes that define a Provider. The @Provider annotation in the source file in effect advertises the class to the dependency injection (DI) framework.

However, building through Ant does include this necessary file. Here, you can make a copy of the build file in the ORDS_INSTALL_FOLDER/examples/plugins/plugin-demo folder and tweak it as necessary. This requires you to know where the libraries are stored on your file system (since IDEA doesn't actually copy them into your project folder, but references their location when you set them up).

The other strategy is to get IDEA to generate the Ant build file for you. You would still set up the Artifact so that the generated build file includes the jar build task. After the artifact has been set up, generate the build file.

On the build menu, select the option Generate Ant Build...

Once the build file has been generated, you need to set it up as an ant build script. This is done by opening the Ant tools window (View --> Tool Windows --> Ant Build), and then adding that build script.

Then, we can generate our jar file by running the all ant target, which will clean, build and generate the artifact, which is output to PROJECT_FOLDER/out/artifacts/ARTIFACT_NAME (by default). 

Loading the plugin into ords.war

Now that we have a built plugin file (the jar), the next step is to embed it into the ords.war file. This is done with the plugin command. 

java -jar ords.war plugin Demo.jar

Where Demo.jar is the filename of the jar file that was generated from the java program. You will see output like:

May 17, 2015 5:14:43 PM oracle.dbtools.cmdline.plugin.UpdatePlugin execute
INFO: The plugin jar named: Demo.jar was added to or updated in ords.war

What this in effect does is load your plugin into the ords.war/WEB-INF/lib folder. If a library with the same name already exists, it will simply overwrite the existing library. So, if you need to remove your plugin (I'm unaware of any official method), you should be able to do so with the command: zip -d ords.war WEB-INF/lib/<plugin_name.jar> (I say this cautiously, as it's not based on any ORDS documentation).

Testing the plugin

This demo in effect connects to the schema named in the first part of the URL path (after the ords portion), issues a query, and then outputs a hello string combining the schema name and the query string parameter.

First, set up a schema that you would like to test with, granting it the connect and resource roles, as well as enabling ORDS access to the schema.

SQL> create user demo_user identified by demo_user;

User created.

SQL> grant connect,resource to demo_user;

Grant succeeded.

SQL> connect demo_user
Enter password: 

SQL> exec ords.enable_schema;

PL/SQL procedure successfully completed.

Then, in the browser, if you go to the path: /ords/demo_user/plugin/demo?who=thomas (as defined in our plugin code), you will see the message:

DEMO_USER says hello to: thomas

In the above example, where we enabled the schema, this creates a URL mapping pattern to the same name of the schema (with the default parameters). However, this can be customised to another URL pattern by setting the parameter p_url_mapping_pattern on the call to ords.enable_schema to something else (refer to ORDS PL/SQL package documentation for more information). E.g. if we want to map it to demo_plugin instead of demo_user:

exec ords.enable_schema(p_url_mapping_pattern => 'demo_plugin');

The URL mapping pattern is important, because if you have an APEX workspace with the same name as the URL mapping pattern, you will get a clash and the workspace will generally get the priority.

Wednesday, 13 May 2015

Building dynamic forms

How do I build a dynamic form? Well first, you need to have your questions stored somewhere, so we set up the following (basic) model:
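The model diagram isn't reproduced here, but inferring from the inserts and view that follow, a minimal sketch of the two tables might be (column names come from the statements below; sizes are assumptions):

```
create table dynamic_qs (
    id            number primary key,
    question_key  varchar2(30),
    question_type varchar2(10),
    question_seq  number
);

create table dynamic_ans (
    id            number primary key,
    question_key  varchar2(30),
    question_ans  varchar2(4000)
);
```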



create sequence dynamic_qs_seq;

create or replace trigger BI_DYNAMIC_ANS
before insert on DYNAMIC_ANS
for each row
begin
    :NEW.ID := dynamic_qs_seq.nextval;
end;

insert into DYNAMIC_QS values (1, 'NAME', 'TEXT', 10);
insert into DYNAMIC_QS values (2, 'AGE', 'TEXT', 20);
insert into DYNAMIC_QS values (3, 'ADDRESS', 'TEXTAREA', 30);

create or replace view v_dynamic_qs as
select dynamic_qs.id
  , dynamic_qs.question_key
  , dynamic_qs.question_type
  , dynamic_ans.question_ans
  , row_number() over (order by dynamic_qs.question_seq) f_index
from dynamic_qs
    left outer join dynamic_ans on (dynamic_qs.question_key = dynamic_ans.question_key);

There are two strategies that come to mind, to achieve a dynamic form.
  1. Utilising the APEX_ITEM API
  2. Custom jQuery POST using the g_f01 and g_f02 arrays (for item keys and item values)

Utilising the APEX_ITEM API

The first is to make use of the apex_item API for rendering different types of fields, and the apex_application API for submitting the data. So first off, we set up a view on our data with an additional column that starts at 1 and increments by 1 for each question (f_index in my view).

Then, on the page, we set up a dynamic PL/SQL region with the following code:


begin
    for i in (
        select *
        from v_dynamic_qs
        order by f_index
    ) loop
        if i.question_type = 'TEXT' then
            htp.p(apex_item.text(
                p_idx => i.f_index,
                p_value => i.question_ans));
        elsif i.question_type = 'TEXTAREA' then
            htp.p(apex_item.textarea(
                p_idx => i.f_index,
                p_value => i.question_ans));
        end if;
        htp.p('<br /><br />');
    end loop;
end;


That allows us to render the form and display any saved values, should they exist. The next part is an on-submit process that will save the new values. For this, we can re-use the same view, but will need to use some dynamic SQL in order to get values from the apex_application.g_f0x array.

declare
    l_sql varchar2(4000);
begin
    --get rid of old values before saving the data again
    delete from dynamic_ans;

    for i in (
        select id
          , question_key
          , to_char(f_index, 'FM09') f_index
        from v_dynamic_qs
    ) loop
        l_sql := '
            begin
                insert into dynamic_ans (question_key, question_ans)
                values (:1, apex_application.g_f' || i.f_index || '(1));
            end;';
        execute immediate l_sql using i.question_key;
    end loop;
end;

Custom jQuery POST using the g_f01 and g_f02 arrays

Another approach is to build the HTML form controls yourself, and do an AJAX post to an on-demand process. To do this, you need to parse the form controls and use the apex_application.g_f01 and apex_application.g_f02 arrays in your on-demand process.

So, similar to above, set up a dynamic region which will render the HTML. Unlike the previous example, which was dependent on a sequential index, this example will use the question_key field (which will be submitted as an array in f01). At run time, the text "QS_" is stripped from the HTML elements' ID attribute and mapped into an array. Similarly, the item values are mapped to an array and submitted in f02.


begin
    for i in (
        select *
        from v_dynamic_qs
        order by f_index
    ) loop
        if i.question_type = 'TEXT' then
            htp.prn('<input class="dynamicQs" id="QS_' || i.question_key || '" type="text" value="' || i.question_ans || '" />');
        elsif i.question_type = 'TEXTAREA' then
            htp.prn('<textarea class="dynamicQs" id="QS_' || i.question_key || '">' || i.question_ans || '</textarea>');
        end if;
        htp.p('<br /><br />');
    end loop;
end;


Then, set up an AJAX Callback process on your page.

begin
    --get rid of old values before saving the data again
    delete from dynamic_ans;
    for i in 1..apex_application.g_f01.COUNT loop
        insert into dynamic_ans (question_key, question_ans)
        values (apex_application.g_f01(i), apex_application.g_f02(i));
    end loop;
end;

Finally, add a submit button and tie it up to dynamic action that executes JavaScript code:

var _f01 = $.map($('.dynamicQs'), function(el) { return $(el).attr('id').replace('QS_', ''); } );
var _f02 = $.map($('.dynamicQs'), function(el) { return $(el).val(); } );

$.post(
    'wwv_flow.show',
    {
        "p_request" : "APPLICATION_PROCESS=SAVE_ANSWERS", //use your own process name
        "p_flow_id" :  $v('pFlowId'),
        "p_flow_step_id" : $v('pFlowStepId'),
        "p_instance": $v('pInstance'),
        f01: _f01,
        f02: _f02
    },
    function success(data) {
        //handle the response, e.g. notify the user
    }
);
Again, these are obviously simplified versions, but hopefully you get the gist of how to accomplish a dynamic form.

Friday, 8 May 2015

Deploying individual components

The scenario is you have two (or more) environments, some changes are ready from the development environment and others aren't - so you can't deploy the whole application, because some components are broken. Yes, you could employ the use of build options to prevent any issues happening, and only enable components when they are ready, but what other options are there?

Well, APEX has functionality to export pages and individual components. Say, for example, you've been good and kept the schema and workspace names consistent. You've also maintained the same application ID amongst environments.

So, you do a page export from one environment and try to import it into the other, but you receive the following error:

This page was exported from a different application or from an application in different workspace. Page cannot be installed in this application.

These components were exported from a different application or from an application in a different workspace. The components cannot be installed in this application.

For a page or component export respectively.

The missing link?

You need to have consistent workspace IDs amongst environments!

Once you have consistent workspace IDs amongst environments, you will find you have much better success when importing components!

Great, now you know what to do in the future. But what if you haven't done this for existing systems? Don't despair, you still have options - albeit slightly more lengthy ones!

You need to export the application that has all the new enhancements in it, and import it into the same workspace as the target application - but don't replace the existing application; instead, just auto-assign a new ID. In this case, instead of doing page exports and imports, you need to use the copy function that comes with APEX.

So go to your target application and in page designer go to the Create toolbar and select Page as Copy.

Then specify Page in another application

From here, you will specify the application you had just imported and then the page that contains all the enhancements you are after. Follow the steps and you will find the page from the other application in your target application.

Not as elegant as having consistent workspace IDs in your environments - so from here on in: consistent workspace IDs ;-)

Sunday, 22 March 2015

APEX 5 Give me Interactive Reports

One of the statements of direction for APEX 5 is multiple interactive reports on a single page.

Multiple Interactive Reports – Allow any number of Interactive Reports to be defined on a single page.

So, time to explore that functionality.

One pattern I've used in the past is loading particular saved reports (rather than the default primary report) - you may have some pre-defined filters, a row highlight, or specific columns. So for demonstration, I've built an IR on the emp table.

Then I go and apply some filters, and save an alternate report:

Then in the page designer assign an alias to that report.

As you can see, that allows me to enter the request portion of the URL: IR_SAL_TOTAL. However, if we add a second IR to the page and try to access that saved report, we receive the error "Report does not exist". This is because it tries to find that report name in both IRs.

To get around this, you MUST set a static ID property on your IR regions. The request portion of the URL then becomes IR[<static ID>]_<ALIAS>. So, I gave the region a static ID of empIr, making the request portion IR[empIr]_SAL_TOTAL to link to the report I previously saved.

The same concept applies to filtering your IR in the URL. The supported URL operations can be reviewed in the (beta) docs:

  • C = Contains
  • EQ = Equals (this is the default)
  • GTE = Greater than or equal to
  • GT = Greater Than
  • LIKE = SQL Like operator
  • LT = Less than
  • LTE = Less than or equal to
  • N = Null
  • NC = Not Contains
  • NEQ = Not Equals
  • NLIKE = Not Like
  • NN = Not Null
  • NIN = Not In (escape the comma separated values with backslash, \)
  • IN = In (escape the comma separated values with backslash, \)
  • ROWFILTER = Row Text Contains (this searches all columns displayed in the report with type STRING or NUMBER)

So, in the itemNames portion of the URL, you would specify IR[<static id>]<operator as above>_<column>. For example, if we wanted employees with a salary greater than 2000, we would use IR[empIr]GT_SAL, giving us a URL like so:
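To show how the pieces combine, here's a tiny standalone sketch - a hypothetical helper, not part of APEX - that builds the itemNames/itemValues pair for such a filter:

```javascript
// Hypothetical helper composing an IR filter for the URL - not an APEX API.
function irFilter(staticId, operator, column, value) {
  return {
    itemName: 'IR[' + staticId + ']' + operator + '_' + column,
    itemValue: String(value)
  };
}

// "empIr" is the region static ID from earlier in the post
var f = irFilter('empIr', 'GT', 'SAL', 2000);
console.log(f.itemName);  // IR[empIr]GT_SAL
console.log(f.itemValue); // 2000
```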


Friday, 20 March 2015

Identifying functions and procedures without arguments

I wanted a report on all procedures in my schema that accept zero arguments.

There are two views that are useful here: USER_PROCEDURES and USER_ARGUMENTS.

With USER_PROCEDURES, if the row refers to a subprogram in a package, object_name is the package name and procedure_name is the name of the subprogram. For any other subprogram outside the context of a package, object_name is the name of the subprogram and procedure_name is NULL.

With USER_ARGUMENTS, object_name is the name of the subprogram, with package_name being NULL when you are not dealing with a package's subprogram.

For standalone subprograms with no arguments, no rows are returned in the USER_ARGUMENTS view. That differs from a subprogram in a package - there you get a row, but with argument_name set to NULL. You will never get a row with a NULL argument_name if the procedure has at least one argument.

In the case of functions, you will get an additional argument with argument_name set to NULL that relates to the return type. 

So with that information in mind, we can identify procedures like:

select user_procedures.object_name
from user_procedures
    left outer join user_arguments proc_arguments on (user_procedures.object_name = proc_arguments.object_name)
where proc_arguments.object_id is null
    and user_procedures.object_type = 'PROCEDURE'
Then, we can get functions:

with func_arg_count as (
    select func_args.*, count(*) over (partition by func_args.object_id) arg_count
    from user_procedures
        left outer join user_arguments func_args on (user_procedures.object_name = func_args.object_name)
    where user_procedures.procedure_name is null and func_args.package_name is null
        and user_procedures.object_type = 'FUNCTION'
)
select object_name
from func_arg_count
where argument_name is null
and data_type is not null
and arg_count = 1
Giving us a consolidated query for functions and procedures of:

with func_arg_count as (
    select user_procedures.object_name
      , user_procedures.object_id
      , user_procedures.object_type
      , func_args.argument_name
      , func_args.data_type
      , count(case when func_args.object_id is not null then 1 end) over (partition by user_procedures.object_id) arg_count
    from user_procedures
        left outer join user_arguments func_args on (user_procedures.object_name = func_args.object_name)
    where user_procedures.procedure_name is null and func_args.package_name is null
)
select object_name
from func_arg_count
where (
        ( object_type = 'FUNCTION' and argument_name is null and arg_count = 1)
     or ( object_type = 'PROCEDURE' and arg_count = 0 )
)

Then we can do the packages like so:

with pkgs as (
    select user_procedures.object_name
      , user_procedures.object_id
      , user_procedures.object_type
      , user_procedures.procedure_name
      , func_args.in_out
      , func_args.argument_name
      , func_args.data_type
      , func_args.subprogram_id
      , count(*) over (partition by user_procedures.object_id, func_args.subprogram_id) arg_count
    from user_procedures
        join user_arguments func_args on (user_procedures.object_name = func_args.package_name and user_procedures.procedure_name = func_args.object_name)
    where user_procedures.object_type = 'PACKAGE'
)
select object_name pkg, procedure_name
from pkgs
where argument_name is null and arg_count = 1

Monday, 9 February 2015

Reset an Interactive Report (IR)

To reset an IR back to the default settings, you may know you can go to the actions menu, and hit reset:

If you inspect the apply button you will see it's calling gReport.reset()

And there are a bunch of examples using this gReport object both for resetting the report and other IR functions.

The problem? This is not documented, and with APEX 5 supporting multiple IRs, it no longer works. In your console, if you enter gReport, you will see that the object no longer exists.

The other technique you can use is the clear cache portion of the URL. According to the docs:

To reset an interactive report in a link, use the string "RIR" in the Clear-Cache section of a URL. This is equivalent to the end user choosing the Reset option from the interactive report actions menu on the target page. The report is returned to the default report settings specified by the developer or saved by the user.
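For example, a link with RIR in the clear-cache position of the URL (application 100 and page 1 are hypothetical):

```
f?p=100:1:&SESSION.::NO:RIR
```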

Testing this on a 4.2 instance, this only resets the PRIMARY report and takes you back to the PRIMARY report - i.e. any filters added on saved reports will remain. The reset button in the actions menu, on the other hand, just clears the filters added on the current saved report, and you remain on that saved report.

So, what is the best way to programmatically reset an IR?

First, I suggest giving your IR a region static ID so you can easily query the apex dictionary.

Then, if we look at the APEX_IR API, we can see it has the procedure RESET_REPORT, which accepts three parameters: p_page_id, p_region_id and p_report_id. It is acceptable to pass NULL to p_report_id, in which case it will just use the last viewed report for that region - otherwise, you'd need to make use of APEX_IR.GET_LAST_VIEWED_REPORT_ID.

Then, we can have a dynamic action with 2 true actions: Execute PL/SQL code and Refresh region, with the PL/SQL code looking something like this:

declare
    l_region_id apex_application_page_regions.region_id%type;
begin
    select region_id into l_region_id
    from apex_application_page_regions
    where application_id = :APP_ID
    and page_id = :APP_PAGE_ID
    and static_id = 'region-static-id'; --update this accordingly

    apex_ir.reset_report(
        p_page_id => :APP_PAGE_ID
      , p_region_id => l_region_id
      , p_report_id => NULL
    );
end;

This reiterates the importance of only using documented functionality.