
Edition highlights: Upgrading from Oracle Forms 5 to 11g • It matters where your data is • Improve your DevOps with Docker • EBS: remain or move to a co-existence model?

OracleScene #OracleScene | Autumn 17 | Issue 65

Work, REST and the Day-to-Day

BREAKING NEWS: Agendas Launched

www.ukoug.org
An independent publication not affiliated with Oracle Corporation

Welcome to Oracle Scene

Inside this issue

Features
06 Upgrading a Forms 5 Application to Forms 11g by Tom Reid
17 Where is My Data? A Real-World Example of Performance Tuning by Martin Widlake
26 Pimp Your Oracle Business Intelligence DevOps with Docker by Gianni Ceresa
40 The Five Key Challenges Facing Payroll by Claire Milner

TECHNOLOGY
10 REST for the Database Professional: What's in it for You? by Jeff Smith
14 Removing IDENTITY Columns from your Tables by Robert Jackson
48 Ask Jonathan by Jonathan Lewis

APPLICATIONS
30 An Post: Customer Analytics Using Oracle Analytics Cloud by Tony Cassidy
34 Safeguard your Business by Adil Khan
43 Where's Your Head At? by Steve Davis
46 Why Machine Learning Might be the Saviour of Advertising by Daryn Mason

UKOUG
24 UKOUG 2017 Conferences

REGULAR FEATURES
04 News & Reviews

Oracle Scene Editorial Team
Editor: Martin Widlake - Email: [email protected]
Deputy Editor (Apps): Khalil Rehman
Deputy Editor (Apps): Toby Price
Deputy Editor (Tech): Alan McClean
Deputy Editor (Tech): Nicholas Shearer
UKOUG Contact: Karen Smith - Email: [email protected]
Sales: Kerry Stuart - Email: [email protected]

UKOUG Governance
A full listing of Board members, along with details of how the user group is governed, can be found at: www.ukoug.org/about-us/governance

UKOUG Office
UK Oracle User Group, User Group House, 591-593 Kingston Road, Wimbledon, London, SW20 8SA
Tel: +44 (0)20 8545 9670
Email: [email protected]
Web: www.ukoug.org

Produced and Designed by
Why Creative - Tel: +44 (0)7900 246400 - Web: www.whycreative.co.uk

Next Oracle Scene Issue
Issue 66: December 2017 - Content deadline: 12th September

More than 17,000 people follow UKOUG. Join them now: @UKOUG

Oracle Scene © UK Oracle User Group Ltd

The views stated in Oracle Scene are the views of the author and not those of the UK Oracle User Group Ltd. We do not make any warranty for the accuracy of any published information and the UK Oracle User Group will assume no responsibility or liability regarding the use of such information. All articles are published on the understanding that copyright remains with the individual authors. The UK Oracle User Group reserves the right, however, to reproduce an article, in whole or in part, in any other user group publication. The reproduction of this publication by any third party, in whole or in part, is strictly prohibited without the express written consent of the UK Oracle User Group. Oracle is a registered trademark of Oracle Corporation and/or its affiliates, used under license. This publication is an independent publication, not affiliated or otherwise associated with Oracle Corporation. The opinions, statements, positions and views stated herein are those of the author(s) or publisher and are not intended to be the opinions, statements, positions, or views of Oracle Corporation.

More information on submitting an article can be found online at: www.ukoug.org/oraclescene

Oracle Scene Digital: view the latest edition online and join UKOUG to access the archive: www.ukoug.org/os
First Word
This Autumn 2017 edition of Oracle Scene kicks off with
an article on migrating from Forms 5 to Forms 11g. Forms
is a bit of an odd Oracle product in that it has been around
for a very, very long time and, despite there being more
modern products that fill the same niche (Oracle APEX
being the obvious one), it just keeps going. Some tech
products have a very long shelf life.
Like many Oracle technical people of a certain age, my Oracle career started with Forms back in the 1990s - in my case V2.3 very briefly, but for several years I was a Forms 3 builder and then V4 & V4.5. Over time Forms has changed and it often seemed that Oracle would like to kill it off, but it has remained a popular product, is still part of the Oracle environment, and many people with an older V5 or V6-based system will be interested in the article on how to move forward. And as the article covers, Oracle will continue to support and develop the Forms product. Not bad for something that has been used for over 25 years.

At the other end of the age spectrum in the tech sphere, Oracle's Jeff Smith provides an article about why REST is relevant to Oracle development, and Gianni Ceresa discusses how you use Docker to support DevOps for the Business Analytics world. Both of these technologies are relatively young and "hot" at the moment. On the Apps side, Steve Davis' article on Oracle E-Business Suite shows how you can continue with that technology and, again, Oracle is committed to ongoing support into the future - or you can go modern and move to the Cloud. For the leading edge, look no further than Daryn Mason's article on how machine learning & AI are being used to try and make advertising more relevant and less annoying to the customer, improving the customer experience.

As you can see, here at Oracle Scene we try to not just cover the various areas of the Oracle environment (Tech, Apps, BA and the cross-over between them) but also to cover both the latest tools & developments along with the older stuff that many of us still want to use and need to support.

Of course, an ideal place to learn about both what's new in Oracle and what you currently use (even old Forms!) is at the annual UKOUG conferences: Apps17, JDE17 and Tech17 in December. In this edition we announce some details of what you will be able to experience onsite this year. I'll be there; it's the one conference I have made sure I attend every year, for over 10 years.

On the topic of the 2017 UKOUG conferences, the next edition of Oracle Scene will be the conference edition - everyone who attends gets a copy, so the articles and adverts in that copy are seen by a large number of people. If you are presenting or exhibiting at the conference this year, this is the ideal issue to offer articles for. Get your submission to us by 12th September to be considered for inclusion.

ABOUT THE EDITOR
Martin Widlake, Database Architect & Performance Specialist, ORA600
Martin is an independent consultant specialising in Oracle database design, performance, PL/SQL and making systems work better. He has been working with Oracle technology for half his life. Despite this, he is a passionate supporter of user groups, sharing knowledge and explaining how Oracle works. Martin is a regular conference presenter, both in the UK and internationally, an Oracle ACE Director and a member of the Oak Table network. His blog is part technical, part management and part random musings on working in I.T. His real passion is genetics. And cats.
Blog: mwidlake.wordpress.com
uk.linkedin.com/pub/martin-widlake/2/7a2/8b/en
@MDWidlake

News & Reviews

OUG Scotland 2017: Another Great Year
By Debra Lilley, OUG Scotland Board Sponsor

On the 21st June almost 200 delegates met in Glasgow for the OUG Scotland annual
event. There was a great buzz across all streams: APEX, Business Analytics & EPM, Cloud
Apps, EBS Apps, Database and Development.
Oracle Scotland, led by Steve Gold, VP, Scotland Country Lead, was in attendance and talked to many of our delegates. It was interesting to see many Scottish Government, Public Sector and University organisations present, and there were lots of conversations had over coffee and at the end of the day.

UKOUG President Paul Fitton commented: "It was a pleasure to open the tech side of the OUG Scotland conference this year and consume some of the inspiring content on offer. A great keynote was delivered by Gerald Venzl and Patrick Wheeler on the transformation of data with DB 12c. I also enjoyed an opportunity to learn from Jo Haverfield of Expedia on their successful upgrade of EBS, and I got some hands-on experience with Oracle Application Builder Cloud Service and met some more of the brilliant community we serve."

I introduced Nadia Bendjedou as the Apps Keynote, and delivered two sessions myself. I was only able to attend one other session, which was a fantastic "where we are on our journey" from the University of Birmingham. These end user stories are always rated the best sessions by the audience and receive the highest feedback. The rest of the time I had a couple of meetings, one of which was with Oracle Academy, who are based in Scotland, and was around how UKOUG can work together with that next generation of Oracle Users.

Also in attendance at the event was UKOUG Executive Director, James Jeynes: "I was glad I made time to travel to Glasgow to experience the energy at our flagship Scotland event. We began the event build-up the night before, as part of our continued, successful programme of pre-event Meetups, by hosting 10-15 delegates from all parts of the globe at the world renowned Pot Still public house."

The OUG Scotland Project Lead, John Thomas, summarised: "We had a fantastic agenda: we find that presenters who appear at the UKOUG conferences in December often seem to use OUG Scotland as a practice run. That means quick low-cost updates from the best Oracle technical or Apps presentations, six months ahead of the rest of the UK, and you don't even have to book a hotel room!"

Thanks to both the committee and the office for the delivery of another excellent OUG Scotland - see you next year.

WHERE BEST TO CATCH UP? A UKOUG MEETUP


UKOUG has moved forward with its Meetup initiative by running two successful pre-event meetups in June, before UKOUG EPM & Hyperion 2017 and OUG Scotland 2017. These have been primarily to provide those delegates and speakers who are in the vicinity the night before these events with a place to go to meet a friendly face and start networking ahead of their day of learning.

The meetup facility is open to all UKOUG members to host evening Meetups under the UKOUG banner, where themes and topics to help our Oracle communities can be discussed in an informal manner, in more localised areas. To join our group or find out more, head to: www.meetup.com/UK-Oracle-User-Group-Meetup

News & Reviews

Oracle E-Business Suite at OUG Scotland 2017 & at UKOUG Apps17
By Debra Lilley, UKOUG Member Advocate Chair

Nadia Bendjedou, who we need to congratulate on her promotion to Vice President of Product
Strategy for Oracle E-Business Suite earlier in the year, gave the Apps community keynote at OUG
Scotland; talking about EBS - where it is and where it is going. The session was pitched at a high level
to ensure she was able to get through it all, but more about that later.
Support of EBS is very key, and an audience member asked what Oracle had done about their exposed Java as the big browser providers start removing their support for it. A great question, and Nadia was quickly able to answer that it had been addressed in Release 12.1.3.

"I was thinking there are a lot of EBS customers who are happy with extended or 3rd party support on earlier versions and it will be interesting to understand the impact on these customers? Perhaps this could be a question explored in our Apps Tech community?"

Nadia also said 'just google Steven Chan and Java' - and guess what, it works! https://blogs.oracle.com/stevenchan/java-web-start-now-available-for-ebs-121-and-122. Steven Chan is part of Oracle's EBS technical group: this is a great team supporting the development and customer base for EBS. Steven's blog is the must-go-to blog for EBS technical people, and his colleague Elke Phelps posts regularly on their Facebook page - Oracle E-Business Suite: Applications Technology, https://www.facebook.com/groups/EBS.SysAdmin/.

The team also have user groups front and centre of their remit, and every year they bring great content to UKOUG. Steven and Elke have both spoken at UKOUG events, along with Kevin Hudson who will be speaking at Apps17 on online patching.

Cliff Godwin, who is the SVP for EBS, is always available to help us at UKOUG, and his community keynote is a must every year for all the EBS customers in the UK. Nadia and I chatted after her keynote in Scotland, and we discussed how we could make Apps17 even better for this audience.

Cliff will deliver his traditional EBS keynote on the Monday, but instead of trying to cover the detail, he will talk at an information level and, on his summary slide, he will advertise the drill-down sessions that follow throughout the rest of the conference. Cliff, Nadia and others in the team will deliver an in-depth session for each of the highlights. UKOUG have also sourced customer case studies for some of these features. Cliff will also then be able to have a small Q&A in his keynote, and remember that every year he and Nadia invite our end user delegates to have 1-2-1s at conference.

Cliff and his team really enjoy the opportunity to present and interact with their customers and get a huge amount of value from the questions they receive. An example of this is where recently delegates expressed frustration that, although Oracle talk about a long life for EBS, the timeline slide didn't go very much into the distance. This is where user groups shine - feedback like that from a group really does have influence, and the new slide used at OUG Scotland simply showed EBS will be here for a long time.

A ROUNDUP OF THE RECENT JD EDWARDS UPDATES


JD Edwards Orchestrator Now in EnterpriseOne Core Tools and Infrastructure
Licensing for Orchestrator (formerly called Internet of Things Orchestrator) has changed. Rather than being licensed as a discrete product, Orchestrator is now included as part of JD Edwards EnterpriseOne Core Tools and Infrastructure. Customers can employ the power of Orchestrator and Orchestrator Studio like the other integration and interoperability tools. Find out more at: http://bit.ly/2trVil2

Announcing EnterpriseOne Tools Release 9.2.1.4
Oracle recently announced JD Edwards EnterpriseOne Tools Release 9.2.1.4, which provides an improved user experience and is up to date with the latest JD Edwards and partner technologies. This release includes enhancements to EnterpriseOne Search, new Orchestration capabilities, additional flexibility for Media Object storage, and platform certifications. Discover more at: http://bit.ly/2trVil2

The Future of JD Edwards Tools
If you missed the recent webinar with Senior Director of Product Management Jeff Erickson, please listen to the replay, which includes key features that have been delivered in previous Tools releases and how customers are leveraging these features. http://nnf.questdirect.org/questmediaviewer.aspx?video=224948089

LearnJDE Search Now Includes Product Documentation
JD Edwards continues to enhance the user experience of LearnJDE. The most recent enhancement allows customers to find product content in the JD Edwards documentation libraries and display it within the portal. Search for program keywords in LearnJDE to see results from the Applications and Tools documentation libraries. http://learnjde.com

Continued Investments in JD Edwards
A new JD Edwards eBook is available on the JD Edwards EnterpriseOne page of Oracle.com. Read more on the investments being made by Oracle and JD Edwards for the products you use every day. http://www.oracle.com/us/products/applications/jd-edwards-enterpriseone/overview/index.html

Head to JDE17 this December for live updates from the Oracle team. Find out more at: www.jde17.ukoug.org

Technology

Upgrading a Forms 5 Application to Forms 11g
For as long as I care to remember, the demise of Oracle Forms as a tool for developing GUI interfaces to the database has been predicted. I'm happy to say that, in those famous words, the reports of its death have been greatly exaggerated. I thought I'd share my recent experience in upgrading our Oracle Forms 5 application to Oracle Forms 11g.
Tom Reid, Oracle Developer, Euromoney Indices

First, a bit of background. Historically, Oracle Forms applications were the main way for non-technical users to interact with the Oracle database. They presented a GUI interface that enabled users to select, insert, and update data without having to know any SQL or underlying table structures at all. In our own case, before Forms 5, our Forms 4 application was, frankly, a bit rubbish and we did a wholesale update to Forms 5 around 2001; since that time it's been the primary interface to our Oracle database in use by our data analysts. It served us well in all database versions we ran from Oracle 7 through to Oracle 9i. However, when we upgraded our database to Oracle 10g we hit a snag. I know what you're probably thinking: "…those guys are still just on Oracle 10!!" In our defence, our main server is an OpenVMS Compaq Alpha and let us be kind and say that, in terms of database releases and for various reasons, the OpenVMS platform hasn't always been a priority for Oracle.

Although our Forms 5 application could still run against our 10g database, we found we couldn't develop new Forms against it. Compiling any changed Forms whilst connected to it resulted in the Form Builder application crashing. The stop-gap solution was of course to install a separate Oracle 9i database and develop against that but deploy against our Oracle 10 database. This wasn't ideal though, as we had to basically replicate a great deal of the Oracle 10 database table structures and contents to the Oracle 9 database just to get the Form to compile and, in any case, who knows what would have happened when we upgraded to Oracle 11 or 12. So, the decision was taken to upgrade to a more modern version and we chose Forms 11g for no better reason other than we had just happened to receive a shiny new 11g install disk from Oracle. What follows is pretty much a step-by-step process of what we did next.

The first thing you must get your head around is that Forms 11g runs under a totally different architecture than Forms 5 did. The Forms 11g run-time is a web-based 3-tier type architecture compared with Forms 5's client/server architecture. What this means is that you will be accessing your Forms 11 screens via a web browser and, importantly, some of the things you did in Forms 5 either don't work in the same way under Forms 11 or just don't work full stop.

"Believe me when I say that the most difficult step you'll face in upgrading a Forms 5 app to Forms 11g will probably be the installation of the new Forms 11g binaries itself."

There are two main parts to the installation: the webserver - which is easy - and the Fusion/Forms 11g bit itself - which ironically is also easy, as long as your system is correctly set up beforehand! I nearly tore my hair out over this step as I just couldn't get the new Forms installed properly at all. The reason, it turned out, was simply due to the ports in our server not being fully opened due to IT security concerns. Once we had arranged with our IT department to provide a server with the ports open, the installation was a cinch.


My advice is this: if you have taken care of the really obvious and stupid requirements - such as no spaces in environment variables like PATH and TMP and so on - and you are still having issues installing Forms, there is really something more fundamental going on, such as the issue we had with ports, or virus checking software interfering with your install. You need to get this sorted first before doing anything else.

At this stage you will also have to decide which of the two main Forms installation modes you will use: deployment or development. My advice is to go for deployment. Don't worry, deployment doesn't mean you can't develop Forms, but it does hold some advantages over the development mode, such as the installation of the Enterprise Manager tool, which can be very useful indeed.

The next step is to install Oracle Forms and Reports version 6. Come again? I thought this was a Forms 5 to Forms 11g upgrade. Why are we installing Forms 6 too? Well, in case you didn't know already, the bad news is that the upgrade path from Oracle Forms 5 to Oracle Forms 11g requires that you upgrade first of all to Oracle Forms 6 as an intermediate step. Don't worry, this is not as onerous as you might think and generally only involves opening the Forms 5 form using the Forms 6 builder tool, re-compiling it and saving it. If you don't have a copy of Oracle Forms 6 lying around you need to contact Oracle Support. They will give you a link – normally valid for 24 hours - where you can download this version.

At this point you should also take this opportunity to identify exactly what Forms you actually need to convert. In our case, out of approximately 340 Forms, we chose to convert some 200 of them. So use this time to get rid of the Forms you don't use. We used a simple traffic light system: green for absolutely essential Forms that must be converted, amber for "nice-to-haves" that can be done as and when time allows, and red for "bin it".

Update your formsweb.cfg configuration file as required. This file is where you can put things like the start form for your app, the look and feel, window size etc. The other config-type files, such as register.dat and default.env, can also be used for different configuration options, such as the use of user-defined icons for menu items.
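As a minimal illustration, a per-application section in formsweb.cfg looks something like the sketch below. The parameter names are the standard ones, but the section name and values here are invented:

# Hypothetical application section in formsweb.cfg
[myapp]
form=main_menu.fmx       # the start form for the app
lookAndFeel=oracle       # look and feel
width=1024               # applet window size
height=768
separateFrame=true       # run the app in its own window
envFile=myapp.env        # per-application environment file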
Finally, we can now start the conversion process. It's pretty much the same for each form:

• Open the V5 form in the Forms 6 builder
• Compile the form
• Save as a V6 form
• Open the V6 form in the Forms 11g builder
• Compile the form
• Save the V11 form
• You're done!

Although we preferred to convert each form individually by hand, as it were, there are utilities built into both the Forms 6 and 11g builders that enable you to bulk convert forms (see the sketch below). It's your call. If all has gone well you should get similar results to what we achieved.
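If you do go the bulk route, the same two-hop conversion can also be scripted against the command-line compilers rather than the builder GUIs. The sketch below is illustrative only - executable names, flags and defaults vary by platform and release, so check the documentation for your own install:

REM Hop 1: open/upgrade and recompile the Forms 5 .fmb with the Forms 6i compiler
ifcmp60 module=myform.fmb userid=scott/tiger@mydb module_type=form batch=yes upgrade=yes

REM Hop 2: upgrade the resulting .fmb to 11g with the Forms 11g compiler
frmcmp module=myform.fmb userid=scott/tiger@mydb module_type=form batch=yes compile_all=yes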

FIGURES 1 & 2: THE MAIN STOCK DETAILS SCREEN RUNNING UNDER FORMS 11 AND UNDER FORMS 5

The screen-shots above show an example of the same form - our main Stock Details screen - running under both Forms 11 (left image) and Forms 5 (right image). As you can see, they are very similar in look and feel, which was our intention. We took the opportunity under Forms 11 to re-arrange and tidy up our menu structure and got rid of our short-cut menu icons. I mention in the next section some of the potential issues that can affect buttons, and you can see an example of that near the bottom right hand side of the images, where some of the button colours/text under the Links section have changed from grey/black under Forms 5 to white/red under Forms 11 - see Figures 1 & 2.

Potential issues
What follows is a non-exhaustive list of issues you may or may not encounter during your upgrade.

• Colour changes to Forms backgrounds and/or buttons and fields
For reasons unknown, the backgrounds in some of our Forms changed from a light grey colour to teal. Equally inexplicably, some of the button colours and text changed to a red or white colour. In both cases you can simply go into the form under Forms 11g and change back as required.

• Frame/field/button/label sizes
We found that for some Forms a few of the fields/buttons, labels and frames needed a touch of re-adjustment to get them to fit onto their place on the Forms window properly.

• Forms that call either reports or graphics
These will need closer attention. Luckily we had only one form that used an Oracle Graph and, on closer inspection, found that it wasn't really being used, so we could simply scrap the part of the form that displayed the graph. If you can't do this you will need to implement your graph as a JAVA bean. The demo pack that comes with Forms 11g has some examples of this. There is no direct equivalent to the old Oracle Graphics product under Forms 11g.

We also had about half a dozen Forms that called Oracle reports. For these it was a case of opening the reports under the Reports 11g builder, compiling them, then saving them. We didn't have to do the intermediate step of opening them under Reports 6, but you may have to do this for particularly complex reports. Ours were quite simple. Once the reports were in 11g format we turned our attention to how they were being called from Forms. It's likely that your existing call to a report under Forms 5 will be something like this:

Run_Product(REPORTS, 'your_report.rdf',
  ASYNCHRONOUS, BATCH, FILESYSTEM, 'paramdata', NULL);

Under Forms 11g there are a couple of changes you have to make. The first thing you'll notice is that there is a new REPORT node in the object navigator. In here you define certain aspects of your report, such as its name, its associated filename, printer and so on. So do that here. The report name is anything you like - say, REP001. Filename would be your_report.rdf, execution type as Run, the report server name, and so on.

Once you have this you will need to change the call to Run_Product above to be something like:

DECLARE
  ReportServerJob VARCHAR2(254);
  report_id       REPORT_OBJECT;
  pl_id           PARAMLIST;
BEGIN
  pl_id := create_parameter_list('paramdata');
  add_parameter(pl_id, 'P_1', TEXT_PARAMETER, :some_formfield_name_here);
  report_id := find_report_object('REP001');
  /* If you are printing the report out, include the line below */
  SET_REPORT_OBJECT_PROPERTY(report_id, REPORT_DESNAME, 'printer_device_name');
  /* And the line below actually runs the report */
  ReportServerJob := run_report_object(report_id, pl_id);
END;
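run_report_object is asynchronous, so a conversion typically also polls the report job and then fetches the output. The article doesn't show that part; the sketch below is the common pattern, with the report server name invented for illustration:

DECLARE
  v_status VARCHAR2(100);
BEGIN
  -- Poll the job we started with run_report_object
  v_status := REPORT_OBJECT_STATUS(ReportServerJob);
  WHILE v_status IN ('RUNNING', 'OPENING_REPORT', 'ENQUEUED') LOOP
    v_status := REPORT_OBJECT_STATUS(ReportServerJob);
  END LOOP;

  IF v_status = 'FINISHED' THEN
    -- The job id is the part of ReportServerJob after the last underscore;
    -- hand it to the reports servlet to display the output in the browser
    WEB.SHOW_DOCUMENT('/reports/rwservlet/getjobid'
        || SUBSTR(ReportServerJob, INSTR(ReportServerJob, '_', -1) + 1)
        || '?server=your_report_server', '_blank');
  END IF;
END;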
Summary
That's pretty much it. For what it's worth, my advice is this: if you don't need to upgrade your Forms app, then don't. However, you're probably just kicking the inevitable down the road a year or two and at some stage you're going to have to go for it. There is also a lot of talk around moving to, or converting to, Oracle APEX, but again, if you're happy with Forms then I would go for the Forms upgrade; it really isn't as complicated as you might think and I hope this article is of help to you. If you are at all worried about the future of Forms then perhaps the following snippet from Oracle - taken from their Oracle Forms Statement of Direction, last updated in March 2017 - will assuage your concerns. NB: The full article can be accessed at https://support.oracle.com (requires a login account). Search for Doc ID 2009262.1.


“Oracle continues its commitment to Oracle Forms. New releases are being planned and new features and other improvements are currently being reviewed. New releases are planned to include some of the following, as well as many others:

• Design-time productivity improvements
• Application Deployment utilities
• Performance improvements
• New and enhanced object properties
• New runtime UI features
• New and improved integration with various products and technologies
• Support for new Java versions
• Support for new operating systems
And many more...

With Cloud Computing being such an important part of today's IT landscape, Oracle is investigating what possibilities the Cloud may offer for the Oracle Forms product and its customers. Using Oracle Forms in the Oracle Public Cloud could offer significant cost savings simply by reducing the typical cost of hardware upgrades and maintenance. For Independent Software Vendors (ISV), and other software providers, the use of the Oracle Public Cloud could make the delivery and accessibility of Oracle Forms based applications much easier and cost effective.

Also, a significant part of modern computing is mobile technologies. Working closely with Oracle Partners, Oracle will continue to investigate possible mobile solutions and how they may apply to an Oracle Forms customer.”

ABOUT THE AUTHOR
Tom Reid, Oracle Developer, Euromoney Indices
Tom lives and works in Edinburgh for Euromoney Indices. He has been an Oracle developer on and off for over fifteen years, working with such companies as BT, SEMA Group, JP Morgan and HSBC. He graduated with honours in Physics before undertaking a postgraduate diploma in Software Engineering.
www.linkedin.com/in/tom-reid-5a2a3a/

www.eFileReady.com
HMRC RTI eReturns with ORACLE XML Files
Supports full range of RTI Data eReturns (FPS, EPS, NVR, EYU) and RTI Data eReceipts (DPS): P6, P9, SL1, SL2, NINO Notices. Proven and well used by Users.
eFileReady (fully cloud based system) will accept your Oracle generated files in XML, CSV or iXBRL formats and will e-file your data to HMRC via the Internet Channel, a direct replacement for the EDI channel for eReturns to HMRC. We also provide e-filing services for Companies House and Pension providers.
Send in your enquiry E-mail to: Ashley.Thomas@efileready.com, or contact Mr. Ashley Thomas at Tel: 020 8452 9516
© Copyright 2017 eFileReady Ltd, UK. All other ® & TM company or product trademarks are the property of the respective trademark owners.
Technology

REST for the Database Professional: What's in it for You?

The title was an actual question at a recent conference. I think it's more of a reaction than a question - a reaction to being exposed to a paradigm that seems to be in direct conflict with the database's traditional access model, which can be distilled to:
"Persistent connections to the database from a client over an Oracle driver, authenticated via an Oracle user, running through one or more transactions".
Jeff Smith, Product Manager, Oracle

REST is stateless. It's delivered over HTTP(S). Authentication is generally done via the webserver, but definitely by a party OTHER than the database. The transaction lives only during the request.

That's not very appealing to the everyday database developer, administrator, or even user.

So why is it that you keep hearing about this 'REST' stuff? REpresentational State Transfer (REST) has become the 'go to' model for building applications today. And you do not have to take my word for it – in a 2014 survey of published APIs, REST was found to be the delivery model for almost 70% of the collection.

And just in case you would prefer a more first-person account of this phenomenon, our own Oracle Public Cloud services could not work without REST APIs. When a new customer adds a service to their cart and checks out, a series of automated processes are kicked off, and they all speak to each other via REST. In other words, a series of PUTs and POSTs allow for your brand-new Oracle Cloud databases to be stood up and ready to go. Underneath all these calls, the systems are built with shell scripts that undoubtedly look very similar to the ones you have just written for your own shoppers.

Why have Oracle and the rest of the world agreed to use REST? There are a few reasons, but in general, the consensus is that:

1. It's easy
2. It's accessible


FIGURE 1: DEVELOPERS PREFER TO WORK WITH THE SIMPLER OF THE TWO

I have not seen many papers stating that REST has won out over SOAP because it is the 'best' - rather, it always comes down to being easier. The data transformation method is JSON versus XML, and XML is often associated with being overly verbose and complicated.

In terms of accessibility, REST calls are handled via HTTP(S) – there's no need to install and configure a driver for your application stack. Simply make the required calls out to the network: GET, PUT, POST, or DELETE to interact with a resource. And just about ANY programming language is going to have the libraries required to both parse and work with JSON and make HTTP(S) requests.

Even relative newcomers to the internet know that a 404 response means what you are looking for can't be found and that a 500 response means that something 'very bad' has happened. This combination of well-known error codes and verbs, JSON, and portability has made REST almost impossible to resist.

When I'm on my mobile device, I'm not going to wait for it to connect to my database over a 3G signal, persist a connection, and run a series of calls or transactions over several minutes. I mean, I could, but I would HATE my life. And I would HATE the database. No, I want to talk to the database just like I would talk to any website. And my response time had better fall in the range of hundredths or tenths of seconds, not even seconds.

So what does this have to do with MY database?
Imagine your application or IT infrastructure. Everything is a well-oiled machine. Disparate systems easily communicate with each other over REST. If a new component is to be added, simply plug it in via REST.

Enter the database. It's REST in, REST out throughout the entire system, but the database is different. The database is, special. The application now needs some custom code to fake REST calls over SQL*Net or JDBC. This has been 'the way' for 30 to 40 years, but maybe it's time to consider change?

Of course, your job is to protect the organisation's most important asset: the data. Your job is also to maximise the profitability of the business when it comes to taking advantage of that data. REST empowers accessibility of that data WITHOUT sacrificing security.

Ok, so you want to make the database play nice with REST. What does this mean? Here's what it does NOT mean:

• Simply throwing an HTTP interface onto some stored procedures
• Ignoring HTTP conventions around error messages and behaviours
• Bad or missing documentation

Not to be mean, but if you have built a PL/SQL package procedure called 'submit_order', having https://system.com/db1/submit_order accessible by a GET request is NOT REST. Why does that matter? Your existing systems will expect a REST API they can simply plug into using the published API calls. They will expect and DEMAND that a GET will be 'safe' – nothing created, changed, or deleted. They would expect to see an ORDERS resource they could access via PUT to submit an order. And if they wanted to retrieve ORDERS, they would know, by definition, that they could do so via a GET on ORDERS.

Doing these things incorrectly will severely limit the usability and uptake of your database REST APIs. The worst thing you can do is provide an unpredictable system.

Do you want to be the REST hero? Make your database and its data accessible via REST, following the rules of REST. And be sure to include a smart-looking set of REST API docs along the style of Swagger.

FIGURE 2: WHAT'S AVAILABLE, HOW DO YOU CALL IT, WHAT'S THE EXPECTED OUTPUT?

How do we get started?
Included with your Oracle Database license is a technology called Oracle REST Data Services (ORDS). Its core mission is to make your database available over REST.
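To make that concrete, here is roughly the shape of the exchange a consuming system expects. The endpoint and payload are invented purely for illustration:

GET https://system.com/ords/shop/orders/ HTTP/1.1
Accept: application/json

HTTP/1.1 200 OK
Content-Type: application/json

{
  "items": [
    { "order_id": 4711, "status": "SHIPPED" }
  ]
}

A GET retrieves the ORDERS collection and changes nothing; submitting an order is a different verb against the same resource.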


FIGURE 3: ORACLE REST DATA SERVICES (ORDS)

Typically found in the mid-tier, ORDS is a Java application that can map a HTTP(S) URI to 'something' in your database, and access it via JDBC. It also handles converting JSON in your POST request bodies to stored procedure inputs, or taking a query result or REFCURSOR and sending it back down to your application as JSON.

So to access your database via REST, you don't need to know Java. You don't need to worry about setting up a JDBC driver URL for your application, or even worrying if your application can REACH your database via JDBC. You simply need a copy of the API Docs and the appropriate login credentials.

ORDS provides a security model that allows you to protect your database resources so that ONLY authenticated users with the appropriate roles and privileges can get to the /ords/workforce/peeps/ POST handler that would give folks a raise or holiday bonus. You're of course aliasing HR and EMPLOYEES objects such that they're not being exposed via your URIs.

FIGURE 4: ARE WE GOING TO USE SQL OR PL/SQL?

You are NOT limited to just running SELECT * FROM queries when implementing your RESTful Service GET handlers. In fact, the opposite is true – the entirety of the SQL and PL/SQL languages are available. They will be executed via JDBC in the ORDS connection pool servicing your database. In this case we are using a PL/SQL anonymous block to call a stored procedure.

ORDS is friendly enough to see that your REFCURSOR might contain data types such as INTERVALs, TIMESTAMPs, and custom types. It will nest and convert them to JSON as appropriate when returning the results as necessary.
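The article doesn't show the step itself, but none of these URIs resolve until the owning schema has been REST-enabled. With the ORDS PL/SQL package that is a one-off call along the lines of the sketch below; the schema name and URL mapping are illustrative:

BEGIN
  -- REST-enable the schema so ORDS will service /ords/workforce/... URIs
  ORDS.ENABLE_SCHEMA(
    p_enabled             => TRUE,
    p_schema              => 'WORKFORCE',  -- illustrative schema name
    p_url_mapping_type    => 'BASE_PATH',
    p_url_mapping_pattern => 'workforce',  -- the alias that appears in the URI
    p_auto_rest_auth      => FALSE);
  COMMIT;
END;
/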

Database Collections via REST…How?
ORDS provides two primary methods for defining the database workloads behind any collection. For this discussion, we will consider your EMPLOYEES table as a REST collection. A member of this collection, an employee, would be modeled as a resource.

ORDS allows the database professional to create one or more modules. A module is a way to organise related collections, each of which has a unique Universal Resource Identifier (URI). Modules serve much in the same way as packages do for PL/SQL units. And a URI is the 'address' that defines how a collection can be accessed in your HTTP(S) call, e.g. /ords/workforce/places/.

When you want to make the collection of PLACES available to your application, you simply need to write the code that will execute when you use the GET verb (see the sketch after the next section). This code is where the experienced database professional can jump right in and optimize the database transaction that is about to take place.

A low code alternative
Instead of writing SQL or PL/SQL to power your REST handlers, ORDS also offers an 'Auto' implementation. For tables and views, ORDS can publish a full CRUD REST API – GET one or more entries, insert a new entry, update an entry, or delete an entry. The feature is flexible enough to allow for predicates on your 'SELECT * FROMs', ordering your result sets, and even querying AS OF a point in time.

To learn more about this feature in particular, you may want to check out this post in Oracle Magazine: goo.gl/dbxNgG

If you prefer to always interact with your data via existing PL/SQL APIs, then ORDS also allows you to make your PL/SQL available via an RPC scheme – effectively executing your PL/SQL via POST. I will just say that you need to remember that putting your PL/SQL out there via HTTP(S) isn't REST-enabling your database. You need to follow the rules and the overall REST paradigm.
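As a hedged sketch of the hand-written route using the ORDS PL/SQL API - the module, template and PLACES table below are illustrative, not from the article:

BEGIN
  -- A module organises related collections, much as a package organises PL/SQL units
  ORDS.DEFINE_MODULE(
    p_module_name => 'workforce.v1',
    p_base_path   => '/workforce/');

  -- The template supplies the collection's URI pattern
  ORDS.DEFINE_TEMPLATE(
    p_module_name => 'workforce.v1',
    p_pattern     => 'places/');

  -- The GET handler holds your SQL; ORDS renders the result set as JSON
  ORDS.DEFINE_HANDLER(
    p_module_name => 'workforce.v1',
    p_pattern     => 'places/',
    p_method      => 'GET',
    p_source_type => ORDS.source_type_collection_feed,
    p_source      => 'SELECT * FROM places');
  COMMIT;
END;
/

A handler defined with ords.source_type_plsql and an anonymous block as its source gives you the stored-procedure-plus-REFCURSOR pattern that Figure 5 describes.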

Database operations
Automation is the name of today's game. Having someone on standby to run a SQL*Plus or bash script as needed is not going to win you any innovation awards. No, instead you'll be expected to make the day to day lifecycle operations and database maintenance work also available via REST.

Need to create a database? That's a PUT. Need to get a list of locking sessions? That's a GET. Need to kill a session? That's a DELETE or POST.

FIGURE 5: THE GET IS SERVICED BY CALLING THE ANON BLOCK, WHICH IN TURN CALLS THE STORED PROCEDURE, WHICH RETURNS A REFCURSOR


The ORDS development team is now working on making a database REST API available with your ORDS installs. It will be fully documented using a Swagger style markup. I would say more, but that would require legal, and let's just say you can expect more news on this front in the very near future.

Conclusion and takeaways


REST is here. Your IT and development organisations are already using REST APIs to build and deliver services internally and
externally to your customers. If you have not already been approached to ‘open’ your database to REST, you soon will be. ORDS
makes this not only possible, but easy. Don’t take our word for it – go to www.oracle.com/REST and get started today.

ABOUT THE AUTHOR
Jeff Smith, Product Manager, Oracle
Jeff is a Product Manager in the Database Development Tools Group at Oracle, and has been obsessing over saving people clicks and keystrokes for the last decade.
Blog: www.thatjeffsmith.com
@thatjeffsmith

Velocity
Full Stack Oracle Apps Expertise So You Don't Have To Do IT. Get there faster with Velocity.
300+ Global Oracle Resources. Managed Services Partner for Oracle Cloud. Unparalleled applications expertise delivered by VCAMP™.
CONTACT US: VelocityCloud.co.uk/Contact | +44 (0) 141 202 6300

ConfigSnapshot from Rookery Software - www.configsnapshot.com
Oracle® E-Business Suite Configuration & Lifecycle Management
Covering configurations and master data for more than 130 modules and technical areas across all versions from 11.0.3 to 12.2.6, ConfigSnapshot is enabling hundreds of organisations to reduce time, reduce errors, reduce cost and reduce risk in managing their Oracle E-Business Suite environments.
• Document: clear, concise and highly flexible; BR100, DS030 etc. in minutes; accurate/current; repeatable
• Plan: extract and transform setups; enter new setups; environment harmonisation; cross version transformation
• Analyse: target/format specific data; customisation identify/impact; upgrade/patch impact; understand setups
• Migrate: load setup to any environment; automated; pre-validation; action dynamically determined
• Compare: instances (DEV/TEST/PROD); entities (OUs, Ledgers etc.); versions (11i, R12 etc.); planned vs. actual
• Track: identify change over time; point to point change control; full audit track available; what/who/when
• Monitor: monitor setups; define reports without coding; schedule reporting; identify policy violations
• Comply: user access reporting and analysis; segregation of duties reporting; support audit & control requirements; check setups vs. required settings
"Every time we use ConfigSnapshot we find new functionality and new ways of saving time and effort" - Applications Manager, Communications Infrastructure Company
www.configsnapshot.com

Technology

Removing IDENTITY Columns from your Tables
For as long as I can remember, Oracle has used structures called SEQUENCES to generate numbers, automating the creation of unique keys for database tables. To INSERT a row into a table you would write an insert statement providing all of the values you wish to save in the table, along with the unique key. The key used a specialised syntax noting the SEQUENCE name and the function NEXTVAL. Insert statements looked much like this, unless you used a TRIGGER to auto-populate the key.
Robert Jackson, Senior Software Engineer, KBRwyle

INSERT INTO my_table VALUES(my_seq.NEXTVAL, 'ABC', 456);

I had always thought it was a bit of a hassle to use a SEQUENCE and write the associated BEFORE INSERT and AFTER UPDATE triggers when implementing a primary key for a table, but being an Oracle Developer I was just used to it.

CREATE TABLE departments (
  id          NUMBER(10) NOT NULL,
  description VARCHAR2(50) NOT NULL);

ALTER TABLE departments ADD (
  CONSTRAINT dept_pk PRIMARY KEY (id));

CREATE SEQUENCE dept_seq START WITH 1;

CREATE OR REPLACE TRIGGER dept_pk_upd AFTER UPDATE OF id
ON departments FOR EACH ROW
BEGIN
  RAISE_APPLICATION_ERROR(-20010, 'Cannot update column ID in table DEPARTMENTS as it uses sequence.');
END;
/

CREATE OR REPLACE TRIGGER dept_bi
BEFORE INSERT ON departments
FOR EACH ROW
BEGIN
  SELECT dept_seq.NEXTVAL
  INTO :new.id
  FROM dual;
END;
/

Now with 12c you can create a default value for a column from a SEQUENCE by using the .NEXTVAL function. This eliminates the need for writing the BEFORE INSERT and UPDATE triggers; it's all done for you now.

CREATE SEQUENCE event_pk_seq;

CREATE TABLE my_table
( event_pk   INT
             DEFAULT event_pk_seq.NEXTVAL
             PRIMARY KEY,
  event_desc VARCHAR2(50)
);

Another new feature in 12c is that it now has IDENTITY fields. These are fields that take yet another step out of generating primary keys. Using IDENTITY fields is much less cumbersome than dealing with SEQUENCES and TRIGGERS. Merely by identifying a column default as IDENTITY, it automatically generates a new sequential value for any record that gets inserted into that table. This really streamlines and simplifies your database design process. As with a sequence, you can issue an ALTER TABLE command and change the increment value and start value for the IDENTITY column, just like you could with a SEQUENCE.
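The article doesn't show the syntax, but for illustration: the identity clause accepts the familiar sequence options at creation time, and ALTER TABLE ... MODIFY can change them later. The table and values below are invented:

-- Hypothetical table with identity options set at creation time
CREATE TABLE events (
  event_pk   NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY
               (START WITH 1000 INCREMENT BY 1)
             PRIMARY KEY,
  event_desc VARCHAR2(50)
);

-- ...and changed later, for example altering the increment
ALTER TABLE events MODIFY
  (event_pk GENERATED BY DEFAULT ON NULL AS IDENTITY (INCREMENT BY 10));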
Knowing this, I felt confident that this was a good idea and decided to implement IDENTITY columns in a new application we were developing. Some of the other databases in our department had already upgraded to 12c, so I felt confident that there would be an Oracle 12c database available for me to deploy on. We then proceeded to create all of the tables with IDENTITY columns.
CREATE TABLE mytab (
  c1 NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY,
  c2 VARCHAR2(10)
);

Things were going smoothly. Our C# developers were happy and .NET Entity Framework was working well with the IDENTITY columns. We used Tools for Oracle Application Development (TOAD) to create the initial database design, and TOAD handled the IDENTITY columns. We created the database objects and development began. Life was good!

Problem:
We went to implement our code in the customer's environment. Due to circumstances outside their control, our customer's environment was unprepared to stand up our 12c database. We were faced with having to implement our new application in their Oracle 11g environment. Of course, the 12c IDENTITY fields would not work in that environment. At this point we were faced with ALTERing all the tables and creating SEQUENCES and TRIGGERS. After attempting a couple of ALTER commands, it was apparent that altering an IDENTITY field was not an option! The field has to be dropped and a new Primary Key created, or the entire table has to be dropped and recreated excluding the IDENTITY syntax. We had loaded quite a bit of data in the database and did not want to lose or have to recreate the data, so we chose the ALTER table path.

Solution:
These are the steps we took to eventually fix our 12c tables and make them 11g compatible.

1. ALTER each table to create a new alternate Primary Key
2. UPDATE new Primary Key with the data contained in the old Primary Key
3. ALTER the table and DROP the old Primary Key
4. ALTER the table and rename the new Primary Key to the old Primary Key's name
5. ALTER the table and add a constraint to make the new field the Primary Key
6. Create all the SEQUENCES
7. Create all the BEFORE INSERT triggers

Since we had a lot of tables at this point, we developed "SQL from SQL" scripts to automate the task, as we had to do it in our TEST, DEVELOPMENT, LOAD and PRODUCTION environments. Here are the statements we used to create the scripts, and a sample of the output used to ultimately update the database.

1. ALTER each table to create a new alternate Primary Key. We named the new column the same as the old column, but appended a '1' to the end.

select 'ALTER TABLE '||table_name||' ADD ('||column_name||'1 '||data_type||');'
from user_tab_columns where identity_column = 'YES';

ALTER TABLE XYZDATA ADD (XYZDATAPK1 NUMBER);

2. UPDATE new Primary Key with the data contained in the old Primary Key.

select 'UPDATE '||table_name||' SET '||column_name||'1 = '||column_name||';'
from user_tab_columns where identity_column = 'YES';

UPDATE XYZDATA set XYZDATAPK1 = XYZDATAPK;

3. ALTER the table and DROP the old Primary Key. This will also drop foreign key constraints that refer back to this primary key constraint.

select 'ALTER TABLE '||table_name||' DROP COLUMN '||column_name||' cascade constraints;'
from user_tab_columns where identity_column = 'YES';

ALTER TABLE XYZDATA DROP COLUMN XYZDATAPK cascade constraints;

4. ALTER the table and rename the new Primary Key to the old Primary Key's name. When the new column is added, it is added to the bottom of the table and is assigned the highest COLUMN_ID. The select grabs the highest COLUMN_ID, strips off the last character and renames it back to the original column name.

select 'ALTER TABLE '||table_name||'
RENAME COLUMN '||column_name||' TO '||substr(column_name,1,length(column_name)-1)||';'
from user_tab_columns a where column_id = (select max(column_id)
from user_tab_columns b where b.table_name = a.table_name);

ALTER TABLE XYZRECORD
RENAME COLUMN XYZRECORDPK1 TO XYZRECORDPK;

5. ALTER the table and add a constraint to make the new field the primary key. Re-enable the primary key constraint for the newly renamed Primary Key. Again, use the MAX(COLUMN_ID) to get the last column added to the table.

select 'ALTER TABLE '||table_name||'
ADD CONSTRAINT '||substr(column_name,1,27)||'_PK
PRIMARY KEY
('||column_name||')
ENABLE VALIDATE;'
from user_tab_columns a where column_id = (select max(column_id)
from user_tab_columns b where b.table_name = a.table_name);

ALTER TABLE XYZDATA
ADD CONSTRAINT PK_XYZDATA
PRIMARY KEY
(XYZDATAPK)
ENABLE VALIDATE;

6. Create all the SEQUENCES. Create a sequence based on the table name for each of the tables. Note that we started the sequences well above the max of the existing records in the database.

select 'CREATE SEQUENCE '||table_name||'_SEQ
START WITH 80000
NOMAXVALUE
NOMINVALUE
CACHE 20
;'
from user_tables;

CREATE SEQUENCE XYZMAINREC_SEQ
START WITH 80000
NOMAXVALUE
NOMINVALUE
CACHE 20
;

7. Create all the BEFORE INSERT triggers. Finally, create the triggers needed to populate the Primary Keys.

select 'CREATE OR REPLACE TRIGGER BI_'||table_name||' BEFORE INSERT
ON '||table_name||' FOR EACH ROW
BEGIN
  :new.'||column_name||' := '||table_name||'_SEQ.nextval;
END;
/'
from user_tab_columns where column_name like '%PK';

CREATE OR REPLACE TRIGGER BI_XYZDATA BEFORE INSERT
ON XYZDATA FOR EACH ROW
BEGIN
  :new.XYZDATAPK := XYZDATA_SEQ.nextval;
END;
/

Other issues:
Another issue we ran into was that some of our newly created fields, TRIGGER names and SEQUENCES were too long and ran over the column, table, constraint or sequence name length restrictions. We had to do a little tweaking to fix these issues as they arose, but using the scripts performed a lion's share of the changes we had to make. Since we were using 12c, we cloned our development environment, creating a new database, and used that environment to test and finalise the script needed to ultimately fix our database.

We then ran the script in our development environment. Our front end was developed using Microsoft Visual Studio .NET with Microsoft's object-relational mapping framework, Entity Framework, and C#. Since Entity Framework had initially been created with the IDENTITY database fields, we had to correct this within Visual Studio. To do this we deleted the existing database model within Entity Framework, then updated it from the development database, which pulled the updated schema that held our changes. We did this by right-clicking each entity (table) in the edmx file and selecting "Delete from Model", NOT "Remove from Diagram". We then right-clicked the empty canvas and selected "Update Model from Database" to re-add the entities. Then we clicked the Primary Keys and selected "Identity" in the "StoreGeneratedPattern" property.
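The article doesn't show how the generated statements were captured and run. One way to drive such "SQL from SQL" scripts from SQL*Plus is sketched below; the file name and formatting settings are illustrative, not the authors' actual script:

set heading off feedback off pagesize 0 linesize 200 trimspool on
spool fix_identity_cols.sql

-- Step 1 shown here; repeat for the other generator queries
select 'ALTER TABLE '||table_name||' ADD ('||column_name||'1 '||data_type||');'
from   user_tab_columns
where  identity_column = 'YES';

spool off

-- Review the generated script before executing it!
@fix_identity_cols.sql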

Conclusion
There are probably other ways to handle this problem, but this worked for our situation. As this situation illustrates, it's important to make sure you are developing to your target environment. While this little hiccup was temporarily problematic, it wasn't terribly difficult to remedy. We look forward to the 12c upgrade so we can start utilising more of the features available in the multitenant architecture.
One thing we are looking into is going the opposite way and changing from using SEQUENCES to implementing IDENTITY
columns and dropping the SEQUENCES and TRIGGERS. I hope to provide you with an update as we start that undertaking after
our 12c upgrade.

ABOUT THE AUTHOR
Robert Jackson, Senior Software Engineer, KBRwyle
Robert Jackson is a Senior Software Engineer for KBRwyle. He has been a Defense contractor for 30 years. Robert has been using Oracle tools for two decades. His experience runs the gamut of database design, database tuning, application development and application integration with large government database systems.
Technology

Where is my Data? A Real-World Example of Performance Tuning
Martin Widlake, Database Architect & Performance Specialist, ORA600
There are many aspects to database performance tuning. Most of us know about SQL Hints,
adding indexes to support queries, re-writing the SQL to aid optimisation, and ensuring your
object statistics are good enough to allow the optimiser to choose a good execution plan.
There are other considerations too. One of them that is often overlooked is where the rows
you want actually are.

Another key aspect of performance tuning is deciding what to do to improve the performance of slow-running SQL. You often have several options, and what you choose to do should be influenced by factors such as the design of the application, how easy it is to change things and the likely impact on the rest of the system. The following article is based on a real-world situation.

The Client's Pain
A client was having issues with one part of their application that was occasionally taking too long to run and was timing out - the middle tier would only wait 30 seconds or so for the database to respond to a request. It was not just that the users of the system had to wait for so long (and this was usually when dealing with a customer who also had to wait) but that it would then take several minutes to sort out the resulting problems caused by the timeout (and still with the customer waiting).

The in-house developers had identified the SQL statement that was taking up the time and it was a simple one (thank goodness!) - see Figure 1. They checked out the indexes on the tables and the execution path, and it seemed to them to be about as good as it could be. They tried a few things (maybe the hint /*+ RUN_FASTER */) but nothing helped, at least not consistently. The developers asked the DBAs to help and they ruled out a few things, such as the SQL swapping between an optimal and suboptimal plan - a common problem with bind variables on V10 & V11. The same SQL plan was used every time the code ran, but the duration varied from milliseconds to occasionally over 30 seconds.


SELECT (SUM(NVL(T.TRAN_VALUE_CR,0))-SUM(NVL(T.TRAN_VALUE_DB,0))) ,
COUNT(*)
FROM ACCOUNTS A ,
TRANSACTIONS T
WHERE A.ACC_XYZ_IND =:3
AND A.ACC_ACCOUNT_NO =:1
AND A.ACC_SUBACC_NO =:2
AND T.TRAN_XYZ_IND =A.ACC_XYZ_IND
AND T.TRAN_ACCOUNT_NO =A.ACC_ACCOUNT_NO
AND T.TRAN_SUBACC_NO =A.ACC_SUBACC_NO
AND T.TRAN_P_IND =:4
AND T.TRAN_DLM_DATE >=TO_DATE(:5,’YYYYMMDD’)

-------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)|
-------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | 46 | 43 (0)|
| 1 | SORT AGGREGATE | | 1 | 46 | |
| 2 | NESTED LOOPS | | 1 | 46 | 43 (0)|
| 3 | NESTED LOOPS | | 1 | 46 | 43 (0)|
|* 4 | INDEX RANGE SCAN | ACC_PRIME | 1 | 16 | 3 (0)|
| 5 | PARTITION RANGE MULTI-COLUMN | | 1 | | 39 (0)|
|* 6 | INDEX RANGE SCAN | TRAN2_3 | 1 | | 39 (0)|
|* 7 | TABLE ACCESS BY LOCAL INDEX ROWID| TRANSACTIONS | 1 | 30 | 40 (0)|
-------------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
4740 consistent gets
3317 physical reads

FIGURE 1: THE SQL STATEMENT AND EXECUTION PLAN

One thing they did identify was that this problem only occurred when it was processing data for a customer who had not had their records updated for several years. But not each time.

This problem had been looked at on and off for several weeks and it was occurring more often, so I was called in. I should mention that this is a non-RAC, 11.2 Enterprise Edition database with 2 physical standby databases being maintained in maximum availability mode (causing no issues), and the workload in the database is not high.

Investigation of Performance Issues
This client has called on me before and they give me information about what is happening and what they have seen - but not what they think the underlying issue is. Clients usually say what they think the problem is, but I like this approach for two reasons:

• Firstly, if I come to the same conclusion as them, it's not that I have just parroted back to the client what they have said to me. They know I came to that conclusion for my own reasons.
• Secondly, if I confirm their thoughts, it gives them confidence that they are heading in the right direction. One of the challenges of solving problems, whether they are performance or other, is feeling you can rely on your skills and judgement.

With this client, I do not have direct access to the systems. If I want information, they get it for me. If we need to do ad-hoc investigations into one of the DBAs' "drives", I tell them the queries I want running.

"This is not at all unusual, especially with sensitive data such as financial or medical information. It is far from ideal but it's the real world."

In this case, I had AWR reports (not much help here), the SQL statement, information on run durations for different spreads of data, the explain plan output and some information about the underlying tables.

Looking at the SQL statement and execution plan in Figure 1, some things immediately stood out:

• The expected number of rows is 1 for each step. That often indicates something wrong with the object statistics, unless you are fetching only a small number of rows. This was fetching about 4,000 rows.
• Partitions are being used, and I had no information about the partitioning.
• It is a simple query. It is getting a small number (probably 1) of accounts and, for each, getting all the transactions. A very standard sort of query - a master-detail query.
• Look at the consistent gets and physical reads. The majority of consistent gets were not satisfied from the buffer cache but require a physical read from storage.

I was already thinking of several possible reasons for the occasional slow performance, which included: failure to do proper partition exclusion; a huge variation in the number of transactions for an account; was the index really that suitable?; the physical spread of data.

We needed more information on the object statistics and the partition information. See Figure 2 for the relevant table and index statistics for the two tables involved. I do not show it, but the statistics for tables, indexes and partitions were "good", gathered recently, sample size "full", and there were no column histograms to confuse matters.

The ACCOUNT table is not a problem - it is not that large and it is being accessed by all the columns in the primary key. Why the range scan and not a unique scan? The last column had an implicit data conversion on it, but it is only ever 1 value. It's not "right" to have the data conversion but it was not the issue here. Part of performance tuning, or any other problem investigation, is identifying what is wrong but is probably ignorable. A single range scan to find one row, compared to a unique scan, was insignificant compared to a 30 second run time.

TABLE_NAME     NUM_ROWS      BLOCKS      AVG_L
-------------- ------------- ----------- -----
ACCOUNTS       7,407,450     475,348     436

INDEX_NAME      TYP PRT UNQ BL L_BLKS     DIST_KEYS   CLUSTF
--------------- --- --- --- -- ---------- ----------- ------------
ACC_PRIME       NOR NO  UNI 2  31,936     7,435,253   1,670,452

INDEX_NAME                   TABLE_NAME       PSN COL_NAME
---------------------------- ---------------- --- --------------
ACC_PRIME                    ACCOUNTS         1   ACC_ACCOUNT_NO
ACC_PRIME                    ACCOUNTS         2   ACC_SUBACC_NO
ACC_PRIME                    ACCOUNTS         3   ACC_XYZ_IND

TABLE_NAME     NUM_ROWS      BLOCKS      AVG_L
-------------- ------------- ----------- -----
TRANSACTIONS   271,871,132   9,354,820   215

INDEX_NAME      TYP PRT UNQ BL L_BLKS     DIST_KEYS   CLUSTF
--------------- --- --- --- -- ---------- ----------- ------------
TRAN2_1         NOR YES NON 3  1,164,996  223,271,268 219,688,148
TRAN2_2         NOR YES NON 3  909,090    11,657      36,728,844
TRAN2_3         NOR YES NON 3  1,137,733  161,789,971 223,148,544
...

INDEX_NAME                   TABLE_NAME       PSN COL_NAME
---------------------------- ---------------- --- --------------
...
TRAN2_3                      TRANSACTIONS     1   TRAN_ACCOUNT_NO
TRAN2_3                      TRANSACTIONS     2   TRAN_SUBACC_NO
TRAN2_3                      TRANSACTIONS     3   TRAN_DLM_DATE
...

FIGURE 2: RELEVANT TABLE AND INDEX DETAILS

FIGURE 3: EFFICIENT AND INEFFICIENT RANGE SCANS (diagram)

The other thing I ignored, which may surprise you, was those inaccurate values for 1 row in the explain plan steps. I could see that the reason these were so low was Oracle was using estimates for the multiple filter clauses and not allowing for data correlation, plus one or two other things. But the plan was still a nested loop plan, which was (probably) the best option.

The TRANSACTION table is key. It is relatively large, over 9 million blocks, and though it had 9 indexes (not shown) Oracle was picking the one for which it had all the columns, TRAN2_3. I've highlighted three pieces of information in the statistics – the number of rows and blocks in the table and the clustering factor of the index. The clustering factor is a vital piece of information about an index, it basically says how well the order of the index entries matches the order of rows in the table.

Imagine scanning the whole index and for each entry going to the table for the row. You get the first index entry and this links to a row in a block in the table. If the next index entry links to a row in a different block then the clustering factor increases by 1. If the third index entry links to a row in the same block as the previous index entry, the clustering factor is not increased. Each index entry is checked and each time the associated table block changes the clustering factor is incremented, eventually giving the final clustering factor.

If the clustering factor is close to the number of blocks in the table then the order of entries in the index matches the order of rows in the table and access via a range scan will be efficient. If it's close to the number of rows in the table, the order of the index is totally different to the order in the table and such access will be inefficient. See Figure 3.

The clustering factor of an index has to be interpreted in respect of the number of rows and blocks in the associated table

(Note: Oracle introduced a new stats-gathering option in V12.1 called TABLE_CACHED_BLOCKS that modifies how the clustering factor is calculated)

In our case, the clustering factor of TRAN2_3 is very close to the number of rows in the table. It is a poor index to support an index range scan and subsequent access to the rows in the table. Almost each index entry will be to a different block in the table. This is why so many of the consistent gets require a physical read, most of those blocks will not be in the cache and won't hold other rows the query needs.
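If you want to run the same sanity check on your own system, the comparison is easy to script. Here is a minimal sketch (the table name is from this example; adapt it to your own schema) that sets each index's clustering factor against the blocks and rows of its table:

SELECT i.index_name,
       t.blocks,
       t.num_rows,
       i.clustering_factor,
       ROUND(i.clustering_factor / t.blocks, 1)   AS cf_per_block, -- near 1 = well clustered
       ROUND(i.clustering_factor / t.num_rows, 2) AS cf_per_row    -- near 1 = badly scattered
FROM   user_indexes i
JOIN   user_tables  t ON t.table_name = i.table_name
WHERE  i.table_name = 'TRANSACTIONS'
ORDER  BY i.index_name;

A CF_PER_ROW close to 1, as for TRAN2_3 here, is the warning sign that a range scan will visit a different table block for almost every row it fetches.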
I had a bit of an advantage in being able to spot this problem – I've already seen it several times, as it is a consequence of how many "activity over time" based systems work, be it financial accounts, customer activity (think telephone calls) or patient records. Let's consider the classic Account/Transactions situation, as that is our real-world example in the article.

An account has activity - transactions. These transactions are inserted into the table as they occur, at the "growing end" of the table. A record or two are inserted. A day or few days pass by and then another transaction record is created at the end of the table. Over time the transaction data is spread throughout the table as it grows. When you want to process all the transactions for a single account, they are spread all through the table.

One other thing I was suspicious of was the partitioning, as I had no information on how the TRANSACTION table was partitioned. Were there hundreds or thousands of partitions that were being needlessly checked for data? It turned out that there were only a few partitions – about 9 - holding data, in a manner designed for a specific application need for data life-cycle management. I wondered if we could target a specific partition for this query, but the required data was spread across all the partitions and partition exclusion was occurring, via TRAN_DLM_DATE when possible.

Corroborating the root cause
The above analysis had taken maybe a couple of hours and it was looking like the root cause was due to the required rows


                          Months ago              "Today"
                          12     3     0          10:00   13:00   17:00
db file scattered read     9    13    20             38      23      19
db file sequential read    5     6    15             17       7       8

FIGURE 5: PHYSICAL READS WERE SLOWING AND ERRATIC

FIGURE 4: CORRELATION OF RUN DURATION AGAINST DAYS COVERED AND RECORDS PROCESSED (chart)

for the query being spread over a large number of table blocks, the vast majority of which were not cached. It was a physical IO limitation. But whenever you think you know the root cause of an issue you should try and corroborate it.

The issue as reported was occurring when several years of data was processed, but not each time. Was it actually the temporal spread of the data or the number of records being processed that determined the SQL execution time? The data was a bit messy and the performance more variable than I would have liked but the correlation was more consistently with the number of rows processed than years processed. See Figure 4.

Errors had been reported as occurring generally when more than 6 years of data was processed, but it was more like when 4,000 TRANSACTIONS or more were processed. Processing 4,000 records resulted in about 5,200 consistent gets and 3,640 physical reads (70% of consistent gets required a physical read).

Some of you are probably thinking that 30 seconds to process 4,000 records is a long time for a modern Oracle database. It is unusually long and the timeout failures for large numbers of TRANSACTIONS occurred only sometimes.

There was another factor. The storage was performing poorly and erratically. The client has a central enterprise-level storage solution which is used by many different systems, only one of them being this Oracle database. During our investigations, the storage performance would vary significantly. It was one of the things that had hampered the investigations by the in-house staff - something would run fine at some point and then run poorly later that day. There was lots of anecdotal evidence across many systems for this and in fact a new storage solution was in development - but it was not coming on line for a few months. It kept being claimed that the current storage solution was slowing but coping, but that was mostly based on only one aspect of the storage.

There are two main performance criteria for storage and networks:
• Bandwidth - how much data you can push out, which is measured in MB/GB per second
• Latency - how long it takes to respond to a single IO request, measured in milliseconds; the closely related throughput measure is I/O operations per second (IOPS)

The central storage was doing OK for bandwidth, but IOPS to the Oracle server were poor and varied. We could see this from the "DB file x read" wait event statistics in AWR reports going back 12 months and on the specific day tested. Figure 5 shows the average response times in milliseconds. The event db file sequential read is where a single 8K block is being read, scattered reads are generally 1MB multi block reads.

AWR reports are not ideal for spotting short term issues if the reporting period is too wide – in this case daily and hourly. But an average of 17ms for db file sequential read is high. Evidence of the variability was provided by the AWR section on histograms of wait times, for an hour period when the storage was really suffering, see Figure 6.

46% of the db file sequential reads (physical 8K block IO requests) were under 1ms - they were being provided from the cache at the storage level. But if the data was required from the actual disks, wait times were generally between 8ms and 32ms. 15% of them take over 32ms!

Periods when the storage was struggling, and we saw average db file sequential reads of 16ms or more, coincided with the timeouts. If you quickly do the maths you can see why:

4,000 records processed resulting in 3,640 physical reads at 16ms each is…58 seconds. That is a bit more than the 30 second timeout.

Running the SQL statement with PARALLEL set to 4 or 8 at these times does not significantly improve performance as we are limited by the IOPS the storage can provide to the database. Parallel processing only helps when you have spare resource.

Summary of the issue and potential fixes
To summarise the problem: due to the physical spread of relevant records in the table, the SQL code is resulting in a relatively large number of physical single block reads. When around 4,000 or more TRANSACTIONS are processed and the central storage is struggling, this causes the SQL to exceed the timeout limit and fail.

Now the question is, what can be done about it? Thankfully, as this is an in-house developed system we could change code, database structure etc. The SQL is so simple that a rewrite would not help. The current access path is the best available so forcing a different one via hints, baselines, altering object statistics etc. will not help. As already mentioned, parallel processing does not help. Let's list some alternatives (there were a few more we came up with):

1. Add the other filtering columns to the index:
It could be that many records are being visited in the table that are then filtered out. Adding the filtering columns to the index


Event                     Total Waits    <1ms  <2ms  <4ms  <8ms  <16ms  <32ms  <=1s  >1s
db file sequential read       1570.9K    46.3    .5    .9   3.6   14.0   19.8  14.9   .0

Event                     Waits 64ms-2s  <32ms  <64ms  <1/8s  <1/4s  <1/2s  <1s  <2s  >=2s
db file sequential read          233.8K   85.1   11.4    2.9     .4     .1   .0   .0    .0

FIGURE 6: WAIT TIMES FOR DB_FILE_SEQUENTIAL_READS VARIED WIDELY & COULD BE VERY LONG

would prevent this, but detailed analysis showed that this was not the case. 95% of the records that the current index identified are needed.

2. Alter the timeout period:
Changing the timeout period from 30 seconds to, e.g. 1 minute, would stop most of the errors. However, that change might have to be system wide.

3. Alter the application to detect the timeout for this specific action and handle it:
This would be an application code change. It was discussed briefly with the developers who thought it might be possible but not as simple as it initially seems.

4. Pre-create summary records:
We discussed several ways in which summary data could be calculated now for all unprocessed accounts that have not been updated for e.g. 1 year or more. All involved new code, changing the application and handling of any changes to historic data. A lot of work.

5. Create a Materialized View on the TRANSACTIONS table to support this query:
In theory, a materialized view (MV) could be used to support the query. But the MV would constantly be made stale by the activity on the TRANSACTIONS table. MVs are far more suitable to semi-static data such as in Data Warehouses.

6. Manually order the data in the TRANSACTION table segments to be in Account order:
The fundamental issue is that the TRANSACTIONS records for a given account are spread out in the table. This can be improved by manually ordering the records by creating a copy of the data with a CTAS (Create Table As Select) with the relevant ORDER BY clause. This is then copied back or partition swapping utilised. However, it is a "one shot" fix and potentially impacts SQL that gathers a range of data by another column(s) such as by date.

7. Convert the TRANSACTIONS table to be an Index Organized Table, organized by TRAN_ACCOUNT_ID and related columns:
Index Organized Tables (IOTs) are designed to keep data in the structure of an index, i.e. ordered by that index. They are ideal where child records are nearly always accessed by the parent key, e.g. ACCOUNT_ID or similar. All new data is automatically stored next to the existing data. This would benefit this code and might benefit much of the processing within the application but it is a significant architectural change to the database.

8. Create a new index to specifically support this query and hold all columns required:
This option is not the same as option 1, adding the extra filter columns to an existing index. This is holding all the columns the query needs from that table in one index. This means Oracle does not actually need to visit the table blocks, all the data it needs is in this index. This is sometimes called an "overloaded index".

Option 8 was the simplest to test and implement, and going to have the least impact on the rest of the application. It is in effect creating a mini "Index Organized Table" structure for use by this key query.

A new index was created rather than adding columns to the existing TRAN2_3 index as it would be a substantial change to that index which could negatively impact any SQL using it. This is the index we created. Note, it contains the columns from the SELECT list as well as the WHERE clauses. If a single column from the TRANSACTIONS table that is referenced in the query is missed from the index, the table blocks will still need to be visited:

create index TRAN2_FQ on TRANSACTIONS
(TRAN_ACCOUNT_NO ,TRAN_SUBACC_NO ,TRAN_DLM_DATE
,TRAN_P_IND ,TRAN_XYZ_IND
,TRAN_VALUE_CR ,TRAN_VALUE_DB)
local
(...partition clause stating tablespaces...)

Testing the new index
At this point in an article like this you might expect me to say something like "the index was tested and it gave X performance improvement", because testing a new index is a simple task. Except it is not that simple.

To test the new index, we need to identify a set of accounts with a large number of transactions that need to be processed. We do that by running a select to find such accounts. Only now we have cached the blocks holding the data in the buffer cache. The thing we are testing is where the data is coming from the disk. Having cached the data by identifying it, running the problem SQL statement with either the old or new index in place showed them both to run in under 1/10th of a second. I could point to the reduction in key metrics, which you as Oracle people would accept, but management wanted to see a "slow" version and a "fast" version. Sometimes performance tuning is as much about managing perceptions as solving technical issues.
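As an aside, if you want to capture the same run-time metrics quoted in this article, a quick way in SQL*Plus is the following sketch (it assumes you have the PLUSTRACE role granted):

set timing on
set autotrace traceonly statistics
-- now execute the problem statement for a chosen account; SQL*Plus
-- suppresses the rows and prints the run-time statistics, including
-- the "consistent gets" and "physical reads" figures used here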


(SUM(NVL(T.TRAN_VALUE_CR,0))-SUM(NVL(T.TRAN_VALUE_DB,0)))   COUNT(*)
-----------------------------------------------------------  ----------
                                                     -65.86        3551
Elapsed: 00:00:00.05

---------------------------------------------------------------------------------
| Id | Operation                      | Name      | Rows | Bytes | Cost (%CPU)|
---------------------------------------------------------------------------------
|  0 | SELECT STATEMENT               |           |    1 |    46 |    42   (0)|
|  1 |  SORT AGGREGATE                |           |    1 |    46 |            |
|  2 |   NESTED LOOPS                 |           |    1 |    46 |    42   (0)|
|* 3 |    INDEX RANGE SCAN            | ACC_PRIME |    1 |    16 |     3   (0)|
|  4 |    PARTITION RANGE MULTI-COLUMN|           |    1 |    30 |    39   (0)|
|* 5 |     INDEX RANGE SCAN           | TRAN2_FQ  |    1 |    30 |    39   (0)|
---------------------------------------------------------------------------------
Statistics
----------------------------------------------------------
56 consistent gets
52 physical reads

FIGURE 7: THE IMPROVED PLAN USING THE NEW INDEX AND NO TRANSACTION TABLE ACCESS

We could flush the buffer pool (or even the whole SGA) before each run but not only is this a bit of a fake situation (as you flush out the index root and branch nodes which tend to be cached, so the first runs are unusually slow and then get quicker as the index structure is cached) but it did not really work. The root issue was the slow performance of a shared storage array. They have their own large caches. Remember the histogram of db_file_sequential_reads and many were below 1ms? They were coming from the storage array cache. So even after flushing the Oracle buffer cache, the SQL ran very quickly. Of course, we could wait a few hours for the data to move out the storage array cache but we did not know how long to wait and it made testing more cumbersome.

Remember, performance variation seen within the database may be due to caching and latency outside the database

The solution was, in the end, relatively simple. We queried for the test accounts on the primary database and then ran the tests on the two standby instances (which were opened during the day for general testing purposes), one with the test index and the other without. There was a gap of several hours between creating the new index and running the tests, helping to remove any issue with storage level caching.

Account ID    Records Processed  Duration (secs)  Consistent Gets  Disc Reads
New Index
522322509                  1550            0.533               46          28
21853535                   1676            0.459               44          25
524464997                  1832            0.346               45          26
35355742                   1910            0.463               47          29
188346460                  2099            0.644               47          28
34755245                   2307            0.283               49          30
17141429                   2530            1.111               50          34
160353504                  2728            0.924               49          33
Old Index
75637856                    995            5.594            1,358         955
52146762                   1043            8.008            1,262         977
23465114                   1511            5.742            1,818       1,177
353539060                  2299           11.520            2,985       2,163
336361299                  2323           14.923            2,880       2,369
520464685                  2923           12.928            3,186       2,355
523535314                  3670           16.527            4,711       3,121
353575590                  5459           23.699            6,622       4,608
TABLE 1

Table 1 shows a set of test runs, first where the new "overloaded" index is in place, and then with the original index. It can be clearly seen that with the index in place the number of consistent gets and, more importantly, physical disk reads required to satisfy the query is greatly reduced - by about 40x. All of you who understand anything about Oracle performance will know this is a significant improvement. For managers who want "absolute proof", the elapsed times are improved by a similar amount.

Figure 7 is the plan for the SQL statement with the new index in place. If you look at the plan you can see there is now no reference to the TRANSACTIONS table – all data required for transactions is obtained from the new index.

Conclusion
There are many potential causes of slow select performance in an Oracle database. One that is often overlooked is where your row data is physically stored or, more accurately, how the row data is grouped in database blocks. If the data required from a table is scattered as individual records in many blocks then a lot more work will be required to collect these blocks, potentially requiring physical IO which is always slow compared to memory access. Oracle SQL performance can also be impacted by external factors - in this case, a struggling storage array.

Once you know the actual cause(s) of slow performance you can come up with solutions to the problem, some of which might not involve doing anything in the database, such as changing the application or fixing hardware issues.

In this example, we needed to alter the grouping of records to reduce IO. If the problem had been impacting many SQL statements, altering the structure of the database may have been called for. In this case, we needed to create a more suitably organised version of the required data by using an overloaded index. The improvement in performance was significant.

For Martin's biography, please see page 3, First Word

Financial Planning

How to Make the Rolling Forecast a Reality
Malcolm Hewlett, ACMA ACIS, Oracle Practice Manager Europe, Excel4apps

The rolling forecast is a major trend in financial planning. It lets you adjust for changing technological and market forces, making it more useful for managing business performance than an annual budget alone. Still, many companies haven't achieved rolling forecasts because they assume expensive planning software like Oracle Hyperion Planning is necessary. However, incremental improvements to existing processes and tools offer a realistic route to the rolling forecast.
Big Solutions without Big Costs
Most companies use Microsoft Excel in their planning process. But if your
Excel planning model isn’t directly linked to your Oracle database, it’s
too manual and cumbersome for monthly forecasts. Accuracy is also a
concern, as spreadsheet data extracted from your ERP system at different
times can introduce version control errors.
Alternatively, Excel planning add-ins, such as Excel4apps Budget Wand,
integrate planning functionality into an Excel interface that links directly
to your central Oracle database, so you can achieve real-time planning
modeling without a big-bang EPM implementation. Direct Oracle
integration with Excel eliminates manual historical data collection steps,
supports refreshable planning models and automates uploading of
Excel-based planning back into Oracle to cost-effectively support rolling
forecasts with minimal user training.

Small Steps to Substantial Benefits
If you choose an Excel add-in that integrates to Oracle, here are steps to make the journey to rolling forecasts a smoother one:
1. Conduct a current state assessment of your planning process - Use data inflows and outflows to map your current process and document where and how you use Excel. Identify process gaps, as well as key personnel in support of the rolling forecast transition. Finally, create process mapping for your desired future state.
2. Prepare an account structure for planning - You can either modify your existing hierarchy for both financial and managerial reporting or create an alternate hierarchy for planning and management reporting.
3. Integrate your current Excel planning framework with live ERP bi-directional data using Excel add-ins - Tweak existing templates and models for quick results and ongoing improvements.

Following these steps, you could achieve modest improvements monthly, significantly improving your forecasting capability before the next annual planning cycle starts. Find out more about how you can easily achieve the rolling forecast with the tools you already have.

Visit: go.excel4apps.com/rollingforecast

Take the Leap and Get Started
- Effective Excel-based E-Business Suite reporting tools - Alternatives to OBIEE, Discoverer, Client/Web ADI, and FSGs that reduce dependency on IT
- Real-time subledger reporting in Excel - Access a library of pre-built reports and a Discoverer converter for self-service reporting
- Achieve a 360-degree month end close - Report on real-time data, analyze variances, then adjust your GL journals in Oracle – all from Excel

Download Today! go.excel4apps.com/freetrial

Excel4apps is the Gartner-recognized, award winning solution provider that enables 27,000+ Oracle E-Business Suite and SAP users in 78+ countries to streamline their financial reporting, analysis, budgeting and data uploading in Microsoft Excel. To receive regular tips on how to shorten your month end and other financial processes, please subscribe to our blog: go.excel4apps.com/bottomline

UKOUG Events

3 DAYS of Carefully Crafted Content for the Oracle Community
Make Apps17, JDE17 and Tech17 - UKOUG's co-located conferences - your destination of choice this December.

4-6 DEC 2017 #ukoug_apps17 | 5-6 DEC 2017 #ukoug_jde17 | 4-6 DEC 2017 #ukoug_tech17

Join the largest, independent gathering of Oracle professionals in the UK and benefit from:

400 presentations | exhibitors | 2 social events

One of the main reasons delegates travel to our conferences is to "gain insight into product roadmaps" and "further their Oracle knowledge" - and where better to start than by attending the sessions from our Community Keynotes.


Introducing a few of our many UKOUG 2017 keynotes:

CX
Martin Ward, Senior Director, Solution Consulting (Western Europe) for CX SaaS Solutions
Join Martin at Apps17 for “To Transform the Experience You Deliver to Your Customers, Start With Your Culture.”
Digital transformation efforts for many companies are being driven by the desire to re-invent the end-customer experience.
Technology has a critical role to play in this process, but there is also a need to re-invent culture to introduce a new digital
mindset. Change at this deep cultural level is critical to the overall success of digitalisation efforts and in this presentation
Martin will give the audience his insights into why this is the case and how enterprises can go about making such
fundamental changes.

DATABASE
Connor McDonald, Developer Advocate; Maria Colgan, Master Product
Manager; Chris Saxon, Oracle Developer Advocate for SQL
Connor, Maria & Chris, aka the Ask The Oracle Masters Team, will be
teaming up at Tech17 to deliver “Solving the Most Common User Request
.... Make it go Faster!”

Database architectures are constantly changing due to new technologies and new requirements, from hierarchical to relational, object-relational,
NoSQL, document, XML, binary, graph, ... the list is endless. Yet throughout this landscape of change, one goal has remained common throughout,
and it’s a very simple one: “Can you make my program go faster?”. From indexes to In-Memory, execution plans to Exadata, optimizers to options,
compression to complexities, these Oracle technologists will, using real-world examples, cover the vast array of facilities and techniques at our disposal to meet this time-honoured goal.

MIDDLEWARE
Regis Louis, Vice President of Product Management & Strategy, Oracle Cloud Platform, EMEA/APAC
Regis invites you along to “Scaling Innovation with Oracle Cloud Platform” at Tech17.
As the pace of cloud adoption accelerates, enterprises must leverage current IT investments and simultaneously innovate
for the future. Join this session to understand Oracle’s Cloud Platform strategy and direction, and learn about new Platform-
as-a-Service capabilities for cloud-native application development, API management, data and enterprise integration,
Internet of Things, machine learning, security, and mobile chatbots. The session will also showcase many examples of
customer innovations with Oracle Cloud Platform to illustrate these capabilities and adoption trends.
*Information correct at time of print.*

ORACLE E-BUSINESS SUITE


Cliff Godwin, Senior Vice President for Applications Development
Cliff will deliver his traditional “Oracle E-Business Suite - Update, Strategy and Roadmap” on day 1 at Apps17.

View the full conference agendas and register your place at: www.apps17.ukoug.org | www.jde17.ukoug.org
& www.tech17.ukoug.org. Delegates can attend any session across the three conferences. Find out about the
most cost effective ways to attend on page 51.

Business Analytics

Pimp your Oracle Business Intelligence DevOps with Docker
Modern DevOps, the process of applying Agile/Fast Provisioning practices
to Production environments, is a hot topic across the whole IT industry at
present. Business Analytics (BA) is not an exception to this and needs to adopt
the iterative processes of Continuous Integration and Continuous Deployment.
However, BA faces some issues due to its complexity: Docker can help facilitate
DevOps by making the processes simpler and smoother.
Gianni Ceresa, Managing Director, DATAlysis GmbH
Baking an OBIEE Docker image

Fitting OBIEE inside a container isn't the most natural thing to do! Docker is primarily intended for a "one service = one container" architecture, each process in its own container. OBIEE is everything but Docker-friendly: a big, monolithic, old-school application requiring multiple services and processes as well as a database to store schemas for internal use.

But… Docker can build an image automatically by reading a set of instructions from a Dockerfile. Also, OBIEE can be installed in silent mode in a fully automated way by using response files (just like many Oracle products). By combining these two elements it is possible to bake a nice OBIEE Docker image.

The steps to produce the image are similar to installing OBIEE on a physical server or a VM:

- Start with an Oracle Linux 7 image (available and pulled automatically from Docker Hub)
- Copy the required binaries inside the image
- Add an "Oracle" OS user (by default only root exists) because it's never good to run things as root itself
- Install the required packages and Java using YUM
- Set some variables defining locations of installed components like ORACLE_HOME etc.
- Install Weblogic and then OBIEE
- Clean up by deleting binaries and any unnecessary files
- Expose the ports which your image will be allowed to listen on
- Define the default command to execute when a container is started

It's important to note that the configuration steps for OBIEE, the ones creating the BI domain, the RCU schemas etc. are not performed inside the image. These activities will be performed when a container is first started. This allows you to have as many OBIEE containers running at the same time as you want, without conflicts that would result from trying to use a fixed database schema. When creating a container, the target database for the RCU is provided as a parameter and, by default, the RCU prefix is dynamic, based on the unique identifier of the container.

A little script is required to detect that OBIEE isn't configured on the first start-up and perform the configuration steps, otherwise just start the stack if everything looks good. The same script will also listen for stop signals sent to the container, in order to do a clean shutdown of OBIEE and avoid corrupting anything.

The full recipe with detailed instructions can be found on GitHub: https://github.com/gianniceresa/docker-images/tree/master/OracleBIEE
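To give a flavour of the approach, the skeleton of such a Dockerfile might look like the sketch below. All names, paths, ports and the install script here are illustrative only - the real, working recipe is the GitHub one above:

FROM oraclelinux:7

# Add an "oracle" OS user and the packages the installers need
RUN yum -y install unzip tar which hostname && \
    useradd -m oracle && \
    mkdir -p /opt/oracle && chown -R oracle:oracle /opt/oracle

# Locations of the installed components
ENV ORACLE_BASE=/opt/oracle \
    ORACLE_HOME=/opt/oracle/product/bi

# Copy the (pre-downloaded) JDK, WebLogic and OBIEE binaries plus
# the silent-install response files into the image
COPY install/ /tmp/install/
RUN chown -R oracle:oracle /tmp/install

USER oracle
# Silent install of Java, WebLogic and OBIEE, then delete the
# binaries so the image stays as small as possible
RUN /tmp/install/install_all.sh && rm -rf /tmp/install

# Ports the container will be allowed to listen on
EXPOSE 9500-9514

# Default command: a start script that configures the BI domain and
# RCU schemas on first start, starts the stack on later starts, and
# traps stop signals for a clean shutdown
CMD ["/opt/oracle/scripts/start_obiee.sh"]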


DevOps is everywhere nowadays: it's one of the highest trending topics in I.T. The reason is simple. After adopting agile methodologies for development it was natural to extend this to the test and release phases as well.

That's what DevOps is: a set of principles and practices, a bit like "best practices", providing ways and possible methods that can be adapted and customised - not blindly applied. It's also important to remember that DevOps isn't itself a tool, but is possible thanks to a set of tools.

DevOps and OBIEE are not the most natural combination. Automating tests in OBIEE can be challenging because it is a heavy, monolithic application, making it complex to achieve real Continuous Integration (CI) using physical hardware or classical virtual machine setups.

The basic elements required to set up automated testing of OBIEE are simple on paper: You need an OBIEE server, the RPD (the file containing the metadata of the physical sources and business models) and Catalog (containing the front-end objects like dashboards and analysis) you want to test, and the test tool. Set it up, shake it, wait 10-30 minutes and it's done: the test results are ready to be analysed.

DevOps: Extend Agility to Production by implementing Continuous Integration and Continuous Delivery

There are a few aspects to this.

Baseline Validation Tool
The main (and only) test tool provided by Oracle for OBIEE is the Baseline Validation Tool (BVT). It was introduced by Oracle with the first release of OBIEE 12c as a tool to make upgrading simpler, taking care of the comparison between the old and new environment and producing a list of the regressions found.

The idea of BVT is to execute a set of operations simulating the behaviour of an end user on a given OBIEE environment. When running, it will take a copy of the generated LSQL (the "logical SQL" used by OBIEE internally), the structure of objects in the Catalog, exports of analysis in Excel, PDF and CSV and also screenshots of how pages look. The same operation is performed on a different environment and the two "snapshots" are compared, producing a report identifying differences if any exist.

DevOps processes
Continuous Integration (CI) can be achieved by defining a process, often called a "pipeline" by many CI tools, defining various steps and the different operations to execute for each step. There isn't a standard, single process. The pipelines are always closely related to the existing processes and internal rules of an organisation, designed to conform to the existing policies for quality control, auditing etc. See Figure 1.

FIGURE 1 (diagram)

A main assumption is that a version control system, like GIT, is already used to version the code. For OBIEE it would be at least the RPD and the Catalog that needs to be version controlled. Versioning tools generally use a branching model to control new versions of objects and this is required for the DevOps process. GitFlow is probably the best known; GitLab Flow is also an interesting one: similar but with some little differences trying to make it even simpler for developers.

A potentially simple set of DevOps processes for OBIEE could look like the following: 3 different sets of steps based on the "trigger", the committing of a branch in the versioning tool. When a commit happens, the steps triggered depend on the activity performed and are intended to support Continuous Integration and potentially Continuous Delivery.

New feature development
When a new commit happens on a work (feature) branch, where something new is being developed, a set of actions can be performed. Execute an RPD consistency check to validate no warnings or errors have been added to the RPD. Run a BVT test and compare the result with the main development (stable) branch checking for unwanted regressions. Documentation can also be part of a CI process by generating a simple list of the changes detected in the RPD compared to the main one (from the development branch), with the same for the Catalog. This


generates a simple list of the objects impacted by the commit.

In order to extend CI to Continuous Delivery (CD) it can be helpful to give the developer a simple way to provide access to the newly developed functionality to a functional end user in order to get approval for the new feature. This requires having a way to trigger the deployment of the current developed RPD and Catalog version on an OBIEE instance accessible by the user, which we will come to later.

Merge to the main development branch
Once a new feature has been developed a merge is generally done to bring this new feature, the new code, into the main development branch. At this point some tests can also be performed like executing a RPD consistency check to guarantee that we still have a clean RPD at this stage. We should also execute a BVT test and keep the results ready as "reference", so other branches can compare with this snapshot of the code and detect regressions. Finally, we can create documentation here as well. With this being the main development branch it's important to track, at each commit, the changes that happen in the RPD and Catalog.

Deploy to production
When using Gitflow the production code is often represented by an independent branch where code is pushed when approved, meaning it now needs to be deployed to the production server as the new official running code. No Continuous Integration is needed here but we definitely benefit from Continuous Delivery: when a commit to the production branch is detected the code is deployed on the real production server automatically (or scheduled for a nightly deployment).

With those three types of activity in mind, how does Docker support this?

Docker makes it easy!
The DevOps processes described above are not really big, but they have multiple steps, some being unique based on the context and others being performed for multiple contexts (like documentation). The BVT steps are the most challenging ones as they require a real, running OBIEE environment where the tested RPD and Catalog must be deployed, then BVT must be executed pointing to that environment.

During the execution of these steps nobody and nothing else can use that environment. It is "locked" for the tests and other people or processes will need to queue and wait for the tests to be over to be able to use the environment for their own needs. This may not be an issue for teams with 1-2 developers but things get worse when it is 3-5 or more developers. Tests can be long and people will maybe have to wait days before having access to the test environment: that's not what CI is!

Here is where Docker changes things: instead of having a single server reserved for OBIEE tests, any server with Docker installed and with enough resources to execute OBIEE for a short period of time (as long as the tests take) can be used.

Install, use, drop OBIEE on the fly
An OBIEE Docker container can be started, without any requirement other than Docker installed on the host (see the box for information on how to create an OBIEE Docker image). In a few minutes OBIEE will be configured (including the creation of the RCU schemas in the designated database: the database isn't part of the OBIEE image) and ready to be used, the RPD will be deployed as well as the Catalog and tests can be performed. Once done, the results are stored in a safe location (the CI tool in general) and the OBIEE instance is stopped, the RCU schemas dropped from the database and the container removed. There is no visible sign left that OBIEE was running there a few minutes earlier, there is nothing but the saved test results.

The database isn't included in the OBIEE container: it can be an already existing database or even an empty one running in Docker as well (Oracle has published an official Oracle Database Docker image in the Docker Store). A connection string and credentials of a user with SYSDBA are all you need to pass as parameters to the OBIEE Docker container.
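In practice, creating and disposing of an instance comes down to a couple of commands, something like the sketch below (the image name, port and environment variable names are illustrative, not the exact ones used by the GitHub recipe):

# start a disposable OBIEE instance, pointing it at the database that
# will host the dynamically-prefixed RCU schemas
docker run -d --name obiee_test \
  -p 9502 \
  -e DB_CONNECT_STRING=dbhost:1521/testpdb \
  -e DB_SYSDBA_USER=sys \
  -e DB_SYSDBA_PASSWORD=MySecretPwd \
  obiee:12.2.1.2.0

# ...deploy the RPD and Catalog, run BVT, save the results...

# stop (the container's stop handling shuts OBIEE down cleanly)
# and remove all traces of the instance
docker stop obiee_test
docker rm obiee_test

Note the bare -p 9502: publishing the port without fixing a host port lets Docker pick a random free one, which is what makes running many instances side by side conflict-free.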

FIGURE 2 (diagram)

Thanks to this flexibility all your tests can be executed in parallel, spinning up and down OBIEE instances on the fly for as many users and time as per the requirements. There is no need to allocate some hardware full time for OBIEE needs: resources will be shared with other applications using the Docker hosts, which can be both physical servers or virtual machines. (Docker Swarm is a clustering and scheduling tool for Docker containers, which allows you to easily set up and manage a cluster composed of multiple hosts). All these activities can be performed by hand


or scripted, automated and scheduled. Tools like GitLab CI or Jenkins (probably one of the better known CI tools) provide frameworks to simplify the scheduling and automation.
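To give an idea of the shape of such a pipeline, here is a hypothetical GitLab CI definition for the feature-branch steps described earlier - the stage names, scripts and image tag are all invented for illustration:

# .gitlab-ci.yml - sketch of a feature-branch pipeline
stages:
  - check
  - test

rpd_consistency_check:
  stage: check
  script:
    - ./scripts/check_rpd.sh repository/project.rpd

bvt_regression_test:
  stage: test
  script:
    # disposable OBIEE container: create, test, destroy
    - docker run -d --name obiee_$CI_JOB_ID -p 9502 obiee:12.2.1.2.0
    - ./scripts/deploy_and_run_bvt.sh obiee_$CI_JOB_ID
    - docker stop obiee_$CI_JOB_ID && docker rm obiee_$CI_JOB_ID
  artifacts:
    paths:
      - bvt_results/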

Docker brings agility to your process by making parallel execution simple and transparent

One of the steps of the CD process was the ability to allow a business user to access the current development and provide approval or feedback. If multiple features are being developed and tested in parallel it means having multiple and independent OBIEE instances. Now think the Docker way: need a preview environment? Spin up an OBIEE Docker container, deploy the required code inside. Send the unique URL to the users for their validation (Docker containers can be exposed by binding to random numbered ports on the hosts, avoiding port number conflicts). Once done the container can be removed, usually a few hours later. No need for multiple VMs waiting to be used or having hardware resources locked for nothing. Just start one or many OBIEE containers when needed.

Patching and Upgrades
Applying bundle patches in OBIEE and upgrades is often a really time-consuming and expensive operation: The new or patched version must be tested and then the deployment is done on all the environments one after the other. It often has an impact on productivity as you freeze development for a few days (or even weeks) and resources are used for tests instead of development. The more environments you have, the more effort it will take. If you have some physical servers or virtual machines dedicated to tests and used as sandbox by the developers you will have to carefully plan which one to update and when.

Docker helps with this activity as well: by patching the Docker image it's easy to execute all the regression tests and compare with the old OBIEE version. If it looks good then the newly patched OBIEE Docker image is tagged as the new reference one - and all the future CI & CD processes will automatically pick that one for their automated executions.

In Summary
- DevOps for OBIEE can be a tedious process
- BVT is helpful but requires a fully working OBIEE environment
- Docker helps by bringing flexibility in providing the OBIEE environment
- Continuous Integration can be parallelised, hardware resources are shared
- OBIEE instances are created and dropped on the fly
- Multiple OBIEE containers can be spun up when required
- The shared database required by OBIEE for internal use can also be a Docker container

ABOUT THE AUTHOR
Gianni Ceresa, Managing Director, DATAlysis GmbH
Gianni Ceresa is an OBIEE enthusiast more widely interested in BI/DW/EPM solutions with a special focus on Oracle products. Currently working for DATAlysis, his own consulting company in Switzerland. His other activities include OBIEE training delivery, R&D and supporting the Oracle community on the OTN forums, blogging and speaking at various conferences.

Business Analytics

An Post: Customer Analytics Using Oracle Analytics Cloud
Tony Cassidy, CEO, Vertice
An Post is the provider of postal services in Ireland…the most reputable organisation in Ireland (*RepTrak 2017), a trusted and well known name to many Oracle Users in Ireland and also a long term Oracle customer, with lots of internal experience in Oracle Technology, Applications and now also Analytics in the Cloud.

An Post has started its journey to the cloud in recent years and is now utilising Oracle Public Cloud as part of its Hybrid Big Data and Analytics Solution…

They have been an early adopter of Oracle Analytics Cloud Service (OACS) and they have chosen it to fit a recently emerging and innovative business need - which created the requirement for a Customer Experience Management (CXM) project. The remit of which was to securely provide both An Post's customer account managers and also the end customer account managers with an analytics suite and related KPIs, which they could both utilise simultaneously, both online and near real time, in order to succinctly and expediently manage any issues in the normal parcels delivery process, whereby An Post and their customers interact on a daily basis.

The customers are varied and some examples are:
- global names in terms of Parcel delivery who utilise the An Post network to deliver in Ireland
- other well known online retailers

The CXM project enabled customers to analyse An Post's quality of service on successfully delivered items, highlight downfalls and identify the status of specific items. It was decided that the solution to best serve this requirement would include Oracle Analytics Cloud Service (OACS) and Database Cloud Service (DBCS) fed by the on-premise Data Warehouse via ODI 12c and secured by Virtual Private Database (VPD). The source data is collated from track and trace data for all registered items.

Architecture
Recently, An Post has restructured their business to focus upon parcels and packet delivery in an attempt to capture a bigger
time, in order to succinctly and expediently manage any issues parcels and packet delivery in an attempt to capture a bigger


SAMPLE SOURCE EXTRACT PHYSICAL VIEW (screenshot)

share of the €600 million a year online shopping market and focus less upon the declining letter business. Before the CXM project was initiated, An Post's business analysts had been using the track and trace subject area in the existing BI solution for quality of service KPIs and data analysis. The data from the 11g based track and trace source system was previously being extracted twice daily via ODI 12c to the 12c Data Warehouse hosted on Exadata. Now, after the high demand both within An Post and its customers, data is being loaded to the on-premise Data Warehouse then directly pushed to DBCS near real time.

The Architecture and Implementation is managed by us, their trusted managed service partner, Vertice. Please refer to Figure 1.

Security
To enable data to be securely extracted from the Data Warehouse and pushed to the Cloud, the An Post security team opened a route through their firewall out to DBCS. Secure Shell (SSH) tunnels were then created to enable the BI team to create a connection between the Data Warehouse, ODI and DBCS. Paired SSH-2 RSA public and private keys were generated which were used during the configuration of the cloud services to create a secure connection between the An Post network and the Cloud via the SSH tunnel.
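The forwarding itself is standard SSH tunnelling; conceptually it is no more than the following sketch (the host name, OS user and ports are illustrative):

# forward local port 1521 to the DBCS listener, authenticating with
# the private half of the generated key pair
ssh -i ~/.ssh/dbcs_key -N -L 1521:localhost:1521 opc@dbcs-host.example.com

ODI and the Data Warehouse can then address the cloud database as if it were local, while all traffic travels through the encrypted tunnel.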

As sensitive data is being provided to customers outside of the business, it is crucial that customers can only view An Post data related to their own business. VPD was configured on DBCS and the RPD to enforce this row-level security. A context is used to set and store the identity of the person logging into OACS. This context is passed to the database via the RPD configuration. This context value is then used by the RDBMS Policy to identify which customers' data the user is configured to view. This policy alters the SQL generated by OACS to implement row-level data filtering. A user with no configuration will be unable to view any data. When customers have a query, An Post support employees can 'Act As' the customer to view data as the customer would.
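The building blocks of such a set-up are a context-reading policy function and a call to DBMS_RLS.ADD_POLICY. The following is a minimal sketch of the pattern only - the object, context and column names are invented for illustration and are not An Post's actual code:

-- policy function: builds a predicate from the OACS user name that
-- the RPD connection pool placed in an application context
CREATE OR REPLACE FUNCTION cxm_customer_policy (
  p_schema IN VARCHAR2,
  p_object IN VARCHAR2
) RETURN VARCHAR2
IS
BEGIN
  RETURN 'customer_id IN (SELECT customer_id
                          FROM   cxm_user_customers
                          WHERE  user_name = SYS_CONTEXT(''CXM_CTX'',''OACS_USER''))';
END;
/

-- attach the policy: every SELECT against the table is transparently
-- rewritten to include the predicate above
BEGIN
  DBMS_RLS.ADD_POLICY(
    object_schema   => 'CXM',
    object_name     => 'TRACK_TRACE_FACT',
    policy_name     => 'CXM_CUSTOMER_FILTER',
    function_schema => 'CXM',
    policy_function => 'CXM_CUSTOMER_POLICY',
    statement_types => 'SELECT');
END;
/

A user with no row in the mapping table matches nothing and therefore sees no data, which is the "no configuration = no data" behaviour described above.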

FIGURE 1: ANPOST OAC ARCHITECTURE DIAGRAM

ETL
Business analysts had been using track and trace data for years which was being refreshed twice daily by ODI packages taking 40 minutes to complete on average. The CXM project required this data to be pushed to the Cloud hourly for consumption by customers. As there was a high risk of the existing ELT overrunning and not being able to be refreshed hourly, the existing packages had been split into two separate load plans, one to load the Dimensions once daily and another to load the Fact data hourly. The load plans had been optimised to minimise


this risk, the Dimension load plan completes in less than 1 minute with the Fact load plan taking 20 minutes on average, dependent on volumes at runtime.

Data is loaded from the Track and Trace source system to the staging area on Exadata (Oracle 12cR1) where business logic is applied using various functions, views and staging tables. The data is transformed as part of the ELT load into the on-premise Data Warehouse into the STAR schema using the LKM Oracle to Oracle (DBLINK view Target) Knowledge Module to guarantee good performance.

Once the incremental load to the Data Warehouse has completed, the load to DBCS is started. Only the data that has changed since the last extract is required to be pushed to DBCS. As the business logic has been performed on Exadata, no further transformations are required and the delta records are pushed using the LKM Oracle to Oracle Push (DB Link).GLOBAL Knowledge Module.

As the push to DBCS is an extra step in the ELT process, we needed to ensure that any failure in this step would not affect the on-premise data. For example, if the SSH tunnel was closed due to a networking or firewall issue. The load plan had been constructed so that the on-premise loads would continue to run hourly, populating the Data Warehouse even though there was an issue with the push to DBCS. Once issues had been resolved with the DBCS load, it automatically synchs the data using a 'Last Extract' variable and a 'Record Modified On' column, the next time the load plan is run. To ensure the data is in synch after a recovery, a series of scripts are available to the BI team to perform comparisons efficiently. Documentation is also available to the An Post BI team detailing past failures and the recovery steps.

Oracle Analytics Cloud Service (OACS)
The VPD policy is applied at the database level and in order to apply row-level data filtering, VPD was also configured in the RPD. This involved enabling the 'Virtual Private Database' check box in the database properties, setting a security sensitive USER variable and adding a call in the database connection pools which executes on connection and sets the user's data filter.

When users login to OACS, they are directed to the CXM Dashboard which contains three Dashboard pages: KPIs Overview, Daily Summary and Delivery Sites Map. The Delivery Sites Map displays parcel delivery information on a map of Ireland which helps to identify the performance of delivery offices and highlights any downfalls in the An Post delivery network. The customer's logo appears on each of the pages, text displaying date ranges selected, a series of prompts and the last data refresh time.

Users have the ability to drill on each of the data points provided to view individual items and export data if required.

The CXM dashboard is available both on-premise and OACS and a strict release management process is in place to ensure that any changes are fully tested before releasing to users. Any changes are completed in each of the BI environments, signed off and post-release checks are completed to ensure any changes have been released correctly. Dashboard changes are migrated from OBIEE to OACS while a cut down version of the RPD including only relevant subject areas is deployed.

This helps to ensure that both on-premise and OACS users are not left with stale data for an extended period of time.

While the project was in the development phase, caching on the CXM tables had been disabled as the data was being refreshed hourly and users needed to view fresh data. Before the project went live, caching had been enabled to improve user experience even further. Users still view fresh data each time the ELT process has been completed. This was achieved by setting up the S_NQ_EPT event polling table in the RPD. Each time the ELT process completes, a record is entered into the S_NQ_EPT table to

identify which tables have been updated so any associated cache is purged.
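The polling mechanism itself only needs one row per refreshed table. A minimal sketch of what the final ELT step could execute (the S_NQ_EPT columns are the standard ones; the schema and table names are illustrative):

INSERT INTO s_nq_ept
  (update_type, update_ts, database_name, catalog_name,
   schema_name, table_name, other_reserved)
VALUES
  (1, SYSDATE, NULL, NULL, 'CXM_STAR', 'TRACK_TRACE_FACT', NULL);
COMMIT;

The BI Server polls the table on its configured interval, purges any cache entries that reference the listed physical table, and then removes the processed row.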
Future/Further Enhancements
The CXM project has proved very popular with An Post employees and customers, with a series of enhancements planned for the next phases of the project. There are new requirements to extract more data, for example, parcel recipient contact details so recipients can easily be contacted by customer services. There are new business processes where An Post is aiming to improve delivery rates by attempting to deliver an item three times before providing the customer with a docket to collect their item at their local post office.

Changes are required so this new information can easily be identified if a recipient reports they have not yet received their item. This could be something as simple as there was no answer during the first two delivery attempts.

With all of the new requirements and more data to be processed within the same ELT timeframe, further enhancements are being carried out to ensure refresh times are not affected. In the short term, ODI 12c will still be the tool of choice, however with higher demand for data, this is likely to involve GoldenGate and Oracle GoldenGate Cloud Service in the future.

Business Value
In summary, the solution has fulfilled a requirement for An Post; it has been validated as a value add by the "Voice of the Customer" and the respective interaction and feedback received therein. It has also strengthened the An Post business offering in terms of parcels delivery, where Quality of Service is paramount, and this solution not only enables improved process, but improved Quality of Service metrics and KPIs, based on Analytics and Big Data aspects from Oracle technology. Overall it helps enhance the existing processes in place and improves their parcel business customer experience as intended.

ABOUT THE AUTHOR
Tony Cassidy, CEO, Vertice
Tony has been an Oracle practitioner for close to 20 years and has covered most roles in the Oracle Eco-system from both a Commercial and Delivery perspective. He has been involved and volunteered for both UKOUG and OUG Ireland for 8 years or more, in varied roles including SIGs, Conference Committees, Oracle Scene reviewer and more recently the Appointments Group. Tony is the Founder and CEO of Vertice who are an Oracle Platinum Partner, focused on end to end Oracle Solutions in the UKI region, with key specialisms in Oracle Analytics, EPM and Big Data. Vertice were awarded Gold in the Analytics UKOUG Partner of the Year 2016/2017 category.

Applications

Safeguard your Business


with access controls that mitigate the risk of cyberthreats,
financial misstatements and fraud in Oracle Applications
In this article, we will provide practical techniques to streamline the user security management process with workflows that prevent security risks arising from user access requests. We will also cover audit analytics that detect Segregation of Duties (SoD) violations, data breaches, fraud, and other security risks.

Adil Khan, CEO, SafePaaS

Oracle Applications include security forms that are used to manually create, modify, and disable user accounts and their responsibilities across Oracle Applications. This standard user-responsibilities assignment process is inefficient and inconsistent, where users are granted access without the necessary policy checks and approvals. As the security risks are growing with the adoption of Cloud and Mobile, businesses are looking to streamline the user-provisioning process by consistently enforcing access policies (such as SoD rules) before violations get introduced into the ERP environment. This protects sensitive business information from potential threats and vulnerabilities.

We will share best practices to automate onboarding, offboarding and other user administration processes such as new hires, transfers, and promotions. We will also provide techniques to automatically aggregate and correlate identity data from HR, CRM, email systems and other "identity stores". You will also learn to streamline the user access certifications that are a critical requirement of many data security and privacy regulations, including UK Privacy Laws, US Sarbanes-Oxley, the EU Directive and others. By regularly validating the appropriateness of user access privileges, your organisation can effectively meet audit and compliance requirements and improve its overall risk posture.

Application access risks
Application access risks are growing as organisations rapidly add users to their enterprise applications to execute all major business processes. Enterprise applications such as Oracle Cloud ERP, E-Business Suite, PeopleSoft and JD Edwards enable organisations to better engage and empower employees in the workplace, improve collaboration with business partners, and effectively manage customer relationships. However, ineffective access control within enterprise applications can result in operational losses, financial misstatements, and fraud.

Deficiencies in standard user access management
Managing user access to application entitlements has grown in complexity with the increase in functionality, transaction data and complexity of security configurations. The standard application access management tools to provision user security and maintain access controls over roles, responsibilities, and entitlement configurations no longer meet access policy management needs. This can impede effective process enablement.

Business managers responsible for access controls often cannot obtain accurate function-mapped entitlement listings from enterprise applications, and thus have difficulty in building effective access controls to enforce SoD policies.

Access monitoring reports within the enterprise applications are not well designed to identify SoD violations, especially when it comes to policy-based user provisioning, cross-application SoD control monitoring, and the ability to validate user access rights across disparate systems.

User Access Provisioning tools such as Identity Management (IDM) systems operate at such a high level that they cannot see what is going on in an enterprise application at the user and function level. They also do not consolidate detailed user activity logs unless those logs pertain to the administrators of the IDM. Consolidated activity logs, which are critical for compliance reporting, auditing and forensics, cannot be accomplished with IDM alone.

Mitigate risk of fraud, waste and errors with access controls
Organisations require Segregation of Duties access controls in ERP applications to mitigate the risk of fraud, waste and error. SoD is an internal control that prevents a single person from completing two or more tasks in a business process.

Actual job titles and organisational structures may vary greatly from one organisation to another, depending on the size and nature of the business. Therefore, it is important for management to analyse the skillset and capabilities of the individuals involved based on the likely risk and impact to business processes. Critical job duties can be categorised into four types of functions: authorisation, custody, record keeping, and reconciliation. In a perfect system, no one person should handle more than one type of function.


You can apply the following options to segregate job duties:

• Sequential separation (two signatures principle)
• Individual separation (four eyes principle)
• Spatial separation (separate action in separate locations)
• Factorial separation (several factors contribute to completion)

Many companies find it challenging to implement effective SoD controls in their ERP systems, even though the concept of SoD is simple, as described above. To a large extent, this is due to the complexity and variety of the applications that automate key business processes. Also, the ownership and accountability for controlling those processes requires complete analysis of thousands of functions available across roles and responsibilities assigned to users. For example, to analyse the SoD risk that enables users in Oracle E-Business Suite to create a supplier and pay that supplier, you must identify all Oracle functions that constitute the entitlements granted through one or more responsibilities such as Payables Manager, Purchasing Manager, etc. However, you must also exclude any false positives from the SoD control violation results that may occur as a result of overriding attributes, profiles, page-level configurations or customisations that prevent such access.

Create access policies using the entitlements matrix
The Access Entitlement Matrix lists potential conflicts to determine what risk may be realised should a user have access or authorisation to a combination of entitlements. For example, what is the likelihood that a user can create a fictitious supplier and make a payment to that supplier? The risk likelihood and impact varies based on industry, business model and even individual business unit. It is not uncommon for a large global company to have more than one matrix due to differences in the business processes by location or business unit. For example, a company may have a manufacturing business unit with a large amount of inventory, requiring a SoD matrix that focuses on specific inventory transactions. They may also have a service-based business unit necessitating a focus on project accounting, requiring a different SoD matrix. Though knowledge of similar businesses and industries can help to establish the entitlement conflict matrix, each business unit must perform a customised analysis of its conflicting transactions in order to capture the real risk for that particular business model. See Figure 1.

FIGURE 1

In this example, the matrix provides a financial risk rating of access roles called "responsibilities" in Oracle E-Business Suite that are assigned to a user. Each responsibility should be designed to mitigate the access control violation risks. A responsibility design consists of the menus, functions and options a user can access to process a transaction, change a setup or update a data object.

Ensure access policy compliance
To ensure compliance with access policies, you must test the security design to ensure that the responsibilities assigned to a user do not grant access to conflicting entitlements marked in the matrix. You can test access control effectiveness by extracting the security configuration from the security tables and creating an access violation program that tests the security configuration for violations of the access policies defined in the entitlement matrix. The IT security team, with the approval of a remediation plan from business process owners, can correct any access violations by removing entitlements from users and roles. Auditors can use this "Access Violation" report to provide independent opinions regarding the effectiveness of access controls. Figure 2 shows how SoD rules are applied to the Oracle E-Business Suite security model.

FIGURE 2
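As a first-cut illustration of such an access violation program, the sketch below is an outline only: the FND_% tables are the standard E-Business Suite security tables, while SOD_RULES is a hypothetical table holding the conflicting pairs from the entitlement matrix.

-- List users whose active responsibility assignments include a pair of
-- responsibilities that the entitlement matrix marks as conflicting.
SELECT u.user_name,
       r1.responsibility_name AS conflict_side_1,
       r2.responsibility_name AS conflict_side_2
FROM   fnd_user              u,
       fnd_user_resp_groups  g1,
       fnd_user_resp_groups  g2,
       fnd_responsibility_tl r1,
       fnd_responsibility_tl r2,
       sod_rules             s
WHERE  g1.user_id           = u.user_id
AND    g2.user_id           = u.user_id
AND    r1.responsibility_id = g1.responsibility_id
AND    r2.responsibility_id = g2.responsibility_id
AND    r1.language          = USERENV('LANG')
AND    r2.language          = USERENV('LANG')
AND    (g1.end_date IS NULL OR g1.end_date > SYSDATE)   -- active assignments only
AND    (g2.end_date IS NULL OR g2.end_date > SYSDATE)
AND    s.responsibility_1   = r1.responsibility_name
AND    s.responsibility_2   = r2.responsibility_name;

A production version would also need to match on the responsibility's application_id and, as discussed below, drill past responsibilities to the menu and function level to avoid false positives.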
Analyse access violations
Violations of access policies must be analysed to change user access assignments and correct application security configurations. You can start this analysis by examining the application function-level access mapped in the rule sets that are tested in the relevant ERP security model. For example, vendor-update rights may be executed through a series of Responsibilities, Menus and Functions ("access points") within a Payables and Purchasing application. The presence of these access points assigned to specific users should be verified, walked through and documented in order to accurately verify a particular conflict. The challenge is that in most modern applications there is more than one way to execute the same transaction. For example, there may be ten ways to pay a vendor in a payables application, but the company may use only five of them. Moreover, the company is typically not aware of the other five ways and usually does not restrict access to, or control, these other methods of executing a vendor payment.

The access violation analysis requires that you discover all the potential methods for executing a transaction in order to understand the full potential for fraud, not just the limited view of the known methods. Analysing all of the ways a user could potentially execute an application function is critical to accurate remediation and prevention of access risk.

User access management challenges
Today, organisations are challenged to ensure an effective and efficient access management process with a rapidly growing assortment of Cloud, on-premise and mobile applications. These challenges can result in management fatigue, materialised risk and operational losses:


• User access requests, processed through various fragmented channels without effective audit trails, waste time and money
• Lack of visibility into potential access policy violations during the request approval can compromise the security of enterprise applications and sensitive data such as financial statements, customer orders and supplier payments
• Companies can risk their reputation with headline-making security breaches, if security vulnerabilities in unprotected systems are exploited from outside or inside the company

User access request management
Many organisations process hundreds of user add, change and delete requests every day, received through multiple sources including emails, paper forms, help desk tickets, etc. The user request process is inconsistent, ad-hoc and platform dependent. It is difficult to test access requests against company policies because the approval request is granted without testing the security risks against policies at the granular functional level. Therefore, auditors cannot rely on the access controls and require management to manually test application access across disparate provisioning tools and workflows that consist of many human touch points including business managers, help desk, IT Security, etc, as shown in Figure 3.

FIGURE 3

Consequently, there is no consistent access policy enforcement within and across applications. Lack of common access controls and centralised audit trails increases the threat of data breach and the cost of audits. IT security and management are burdened with time-consuming remediation tasks to ensure compliance with access policies.

User access assignment
User access assignment in ERP applications requires a security administrator to enter or update user details such as user ID, password, and associated employee information before assigning roles which entitle the user to access application functions and data. The standard application user assignment process is inefficient and inconsistent because this process does not prevent the security administrators from granting access to one or more roles that may violate an access policy.

For example, in Oracle E-Business Suite, when a user is assigned a responsibility using the standard security form available to the security administrator, there are no messages, warnings or approval workflows generated if any of the functions available within a responsibility violate any access policy within the assigned responsibility, or result in any SoD violation due to the combination of functions available to the user across responsibilities. The screenshot in Figure 4 of the Oracle E-Business Suite User Security Form shows all the direct, as well as indirect, user security and functional assignment attributes granted without any preventive policy enforcement.

FIGURE 4

Access control deficiencies
Ineffective access request management across fragmented channels with limited audit trails, and a lack of visibility into potential access policy violations, leave mission-critical systems unprotected against data breach, fraud and financial misstatement risks. As a result, deficient application access controls are a common source of internal abuse and a top focus for IT audits. According to a recent Gartner survey, 44% of IT audit deficiencies are related to user access management. External audit firms are increasing their focus on application access management testing as major regulations around the world require companies to comply with data privacy policies and ensure the effectiveness of internal control over financial statements. In a report published by Ernst & Young, 7 of the top 10 control deficiencies relate to user access control.

The following diagram shows the common access control deficiencies reported by auditors:

FIGURE 5

Automated access controls management
In this section, we will describe methods to automate and streamline the application access controls management process. We will provide examples to:

• Monitor access policies using user and responsibility violation reports
• Manage access roles to remediate access violations by excluding functions from responsibilities, simulating the impact and deploying the corrected security model in Oracle
• Deploy a self-service user provisioning workflow that provides access risk information to the approvers, to ensure that access policy violations are prevented before a user is assigned one or more responsibilities in Oracle E-Business Suite
• Certify user access to assigned responsibilities by notifying managers of user access and capturing information to disable access that is no longer required


Figure 6 shows the complete access controls management life-cycle.

FIGURE 6

Monitor policy compliance
Once you have established the entitlement matrix based on access risks identified by management, you can create access rules that identify conflicting business activities. For example, in Oracle E-Business Suite, the business activities are assigned to users through responsibilities, which enable the user to access functions on forms and pages through menus. Therefore, to monitor policy compliance in Oracle, you must define the function sets that enable business activities.

The following screenshot shows a SoD rule to detect user access violations where a user can Create Supplier and Create a Payment for that Supplier:

FIGURE 7

You will note that there are five functions in Oracle to create suppliers and six functions to pay suppliers. The grouping of functions into business activities enables business managers and application security administrators to assess the business risk as well as make technical configuration changes to remediate it.

Once the rules are created, you can run the access violations program that tests the rules against a "snapshot" of the ERP security tables, to find where users and responsibilities do not comply with the policy. The results can be viewed in an access policy violations report.

For example, the report in Figure 8 shows that a user named Bruno has the potential access risk of creating a supplier and paying that supplier.

FIGURE 8

Notice that the report shows the responsibilities, menus and functions in each row that enable the user to access the business activities in conflict. Also, we can expedite the remediation actions by reporting the organisation, Vision France, and the name of Bruno's manager, Mr. Mareul Vincent, from the HR table.

Remediate control defects
Access risk remediation requires two major types of corrective action: firstly, updating the security configuration in the application roles that pose "inherent" risk by enabling the user to access conflicting entitlements within a single role; secondly, reassigning user roles where the violation is caused by the user having access to two or more conflicting roles.

User role security configuration is the root cause of the majority of access policy violations. However, updating roles in a production ERP system with hundreds or even thousands of active users can negatively impact business performance. Many companies and their auditors get bogged down during remediation because of the difficulty in changing the security design while business users need to perform their tasks. Therefore, we recommend automating the role redesign process by analysing the source roles with violations and creating "target" roles that can be reconfigured and tested for access policy compliance in a simulated environment, before deploying the compliant roles into the production system.

For example, the following image shows a new target role "FWY Payable Manager" that is derived from the source role "Payables Progress UK Super User" in Oracle E-Business Suite:

FIGURE 9

You will note that this role has a number of SoD access policy violations including "Create Supplier and Create Payments". Let's say that we want to remove the Create Supplier entitlement to correct the security configuration in the target role. We can use the exclusion method available in the Oracle User Security Form to exclude all the functions associated with the Create Supplier entitlement: Figure 10 shows that we can simply check off the supplier functions.

FIGURE 10

Once the configuration is saved, the program simulates the access policy test to ensure that the target role is compliant with the policy. The program also generates an LDT file that can be loaded directly into EBS using the standard FNDLOAD program, without impacting users in the production system and saving costly administration activities.
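For reference, the upload side of that process is typically a single command on the EBS application tier. The sketch below uses the seeded loader control file commonly quoted for user/responsibility security definitions, but you should confirm the correct .lct file for the object type you are loading; the LDT file name here is invented:

FNDLOAD apps/<apps_password> 0 Y UPLOAD $FND_TOP/patch/115/import/afscursp.lct fwy_payables_manager.ldt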


Provision users with policy compliance
Once you have detected and remediated the user access violations in your production ERP system, it is important to prevent the violations from recurring as new user requests are processed and the security model is updated to meet new business requirements. Otherwise, all your effort will have to be repeated in the next audit cycle if users' role assignments are changed without testing for access policy impact.

The swim-lane diagram in Figure 11 shows the key activities, by business role, that are required to support a self-service user provisioning process and ensure compliance with access policies.

FIGURE 11

The first step in setting up a user access request workflow is to determine the approval levels and roles. In Figure 12, we have set up three levels of approval so that the employee's manager is the first approver. The manager information is obtained from the HR tables as part of the ERP security "snapshot" that is processed at the frequency defined by management. After the manager approval, the approval request goes to a primary and a secondary approver as well. The primary approver can be a "functional" manager most familiar with the functions available in the requested Oracle Responsibility. A technical manager with an understanding of the Oracle security model may be assigned the approval responsibility as a secondary approver. See the approver workflow setup below:

FIGURE 12

Once the workflow is configured and the approvers are assigned to active Oracle E-Business Suite responsibilities, a registered user can use the access request page to request new responsibilities. The following image shows a user requesting the Payables Vision Services R&D (USA) responsibility:

FIGURE 13

Once the user submits the access request, it is routed by the pre-configured workflow to each person assigned an approval role in the workflow. The IS Security Administrator can monitor all access requests and change or cancel a request if required. The following report shows an example of the display screen that provides real-time status for all self-service user-provisioning requests:

FIGURE 14

The approvers receive workflow notifications to approve or reject each user access request routed to them. The request includes the responsibilities requested as well as any potential access risks based on the policies defined in the access management system. If the request is approved, the user access request is executed in Oracle E-Business Suite using standard security APIs to provision user and responsibility access. Otherwise, if the approver rejects the request and provides a comment, it's logged in the audit report and the information is sent back to the requester. It's also possible for the approver to approve a user request where an access risk is reported, but the approver can provide "compensating" controls that mitigate that risk. For IT users that need emergency access to the production system, the approver may provide temporary access called "Firefighter" access, where all the activities are tracked and an audit trail is made available to ensure compliance with access policies.
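The "standard security APIs" referred to above are the seeded FND packages. As a hedged sketch (the user name, responsibility key and application short name below are invented for illustration), an approved request might ultimately be applied with a call such as:

BEGIN
  -- Grant one responsibility to an existing application user.
  fnd_user_pkg.addresp(
    username       => 'BRUNO',
    resp_app       => 'SQLAP',                  -- application short name
    resp_key       => 'PAYABLES_VISION_RD_USA', -- responsibility key
    security_group => 'STANDARD',
    description    => 'Granted via approved self-service request',
    start_date     => SYSDATE,
    end_date       => NULL
  );
  COMMIT;
END;
/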

The pages and reports in the examples above were created using an Oracle APEX application available on SafePaaS.


Conclusion
The standard user security administration tools available within enterprise applications are no longer sufficient to mitigate the growing risk of fraud, financial misstatement and operational losses. Business Managers, Application Security Administrators and Auditors can't rely on the standard user responsibilities assignment process, where users are granted access without the necessary policy checks and approvals.

You can automate and streamline the application access controls management process by detecting user access risks in the existing ERP security model where users have access to sensitive or conflicting functions. The access risk can be mitigated by reconfiguring the application roles that contain inherent access risk. In addition, you can reassign user roles so that the combined entitlements across all the roles assigned to a user are in compliance with your company's access policies. After remediation of access risks, it's important to prevent any future access policy violations by establishing an access request workflow where all new access requests are analysed for policy violations, and approvers can make decisions based on the access risks before a user is granted access to new application privileges.

In this article, we have provided best practices to remediate access risks and prevent their recurrence in the future. However, we also recognise that most organisations must tolerate some level of access risk where business resources are constrained. For example, in a small remote business unit, you may have the same person enter and post journal entries. In such cases, you can deploy Continuous Controls Monitoring (CCM) to identify suspicious transactions, alert process owners when key application configurations are changed by "super users" and maintain an audit trail over data changes such as customer credit limits, supplier bank accounts, etc. We did not cover CCM here, but you should consider it as part of your compensating control strategy to manage overall access risks.

ABOUT THE AUTHOR
Adil Khan, CEO, SafePaaS
Adil Khan is the CEO at SafePaaS, a firm specialising in Governance, Risk and Compliance solutions with over 250 customers in the Americas, EMEA and Asia Pacific. Adil has authored the "Governance, Risk and Compliance Handbook for Oracle Applications" and he serves on the board of the Oracle Applications Users Group (OAUG) GRC Special Interest Group. He has given over fifty presentations on GRC trends, best practices and case studies at many industry conferences including the Gartner GRC Summit, IIA, ISACA, Collaborate, UKOUG and Oracle OpenWorld.

www.eFileReady.com

HMRC E-filing Specialists
Specialising in RTI, CIS, VAT, iXBRL and Pension eReturns to HMRC via the Internet Channel (a direct replacement for the EDI channel)

Proven and well used by Users

eFileReady (a fully cloud-based system) will accept your Oracle generated files in XML, CSV or iXBRL formats and will e-file your data to HMRC via the Internet Channel, a direct replacement for the EDI channel for eReturns to HMRC. We also provide e-filing services for Companies House and Pension providers.

Send in your enquiry e-mail to [email protected], or contact Mr. Ashley Thomas at Tel: 020 8452 9516
© Copyright 2017 eFileReady Ltd, UK. All other ® & TM company or product trademarks are the property of the respective trademark owners.

Applications

The Five Key Challenges Facing Payroll

Your people are your biggest asset and as such you need to engage, motivate, develop and retain them.

Claire Milner, Head of Outsourced Payroll, Symatrix

However, employee costs are the single greatest expense within most organisations, so having accurate payroll is one of the fundamental parts of this process; as such, it is vital that payroll is a seamless and error-free function.

Whilst the last few years have produced significant hurdles for payroll teams (RTI and auto-enrolment to name a couple), today continues to bring real challenges and perhaps also a few red herrings.

In this article we consider the five main challenges faced by payroll professionals today: compliance, accuracy, complexity, security and cost.

Compliance
There are many issues to consider in the realms of compliance, even more so as there are various changes that have come into force, or are imminent, that put further demand on payroll. Your payroll needs to consider:

Gender Pay Gap Regulations 2017
Gender Pay Gap Reporting will apply to employers with 250 or more UK employees. This will include LLPs, partnerships, companies, unincorporated bodies or other types of employing entity. It is likely to affect around 7,000 companies, covering just over 10 million employees (Department for Business, Energy and Industrial Strategy (BEIS)). Employee pay must be reported to demonstrate any differences between male and female pay. This includes a range of remuneration, including the mean and median pay gap figures, but there are some exceptions.


The Apprenticeship Levy
Those businesses with a wage bill of more than £3 million are now subject to a 0.5% Apprenticeship Levy. Also, companies that are connected to other companies or charities for employment allowance purposes and have an annual pay bill of £3 million or more may be affected.

National Living Wage and Minimum Wage
The Government's declaration to increase the "National Living Wage" by 4% from £7.20 to £7.50 an hour from April 2017 is a little below the £7.60 figure that the Office for Budget Responsibility indicated would be needed to reach the £9.00 an hour originally targeted by the Government by 2020.

Holiday Pay for Regular Overtime/Commission
There have been many cases taken to court regarding the issue of accounting for regular overtime or commission when calculating holiday pay. It has been ruled that UK law must be interpreted in a way that conforms to EU law, by requiring employers to take into account non-guaranteed overtime payments when calculating holiday pay.

Salary Sacrifice
Employers may need to reconsider their benefit offerings, as the tax savings through many salary-sacrifice schemes were abolished from 6 April 2017. Schemes related to pension savings (including pensions advice), childcare, cycle-to-work and ultra-low emission cars will not be affected.

Digitisation of HMRC
HMRC announced that they were going to open registration for including benefits in kind in payroll. Areas that are included in this are stocks and shares, commodities and financial instruments, including long service awards and anything that can be readily converted into assets. Whilst this isn't yet compulsory, it is envisaged that it soon will be in 2018-2019.

Statutory Leave Pay Rate Changes
From April 2017, rates of pay for Statutory Maternity, Paternity, Adoption and Sick leave changed. Whilst these rates had been frozen since April 2015, the increase in the Consumer Price Index (CPI) to September 2016 sees these rates rise for 2017-2018.

Changes to Rules for Employing Foreign Nationals
Since April 2017, employers sponsoring foreign workers with a tier 2 visa are required to pay an immigration skills charge of £1,000 per worker (£364 for small employers and charities). The immigration skills charge will be in addition to current fees for visa applications.

The minimum salary threshold for "experienced workers" applying for a tier 2 visa also increased to £30,000. New entrants to the job market, and some health and education staff, will be exempted from the salary threshold until 2019.

"National Living Wage to increase by 4% to £7.50 per hour, with a target of £9.00 per hour by 2020"

Accuracy
The demand for accuracy in payroll is imperative. That's why balancing, auditing and segregation of duties are critical. These steps ensure you minimise error rates and can also help avoid off-cycle payments. Manual workarounds and manual processes can lead to the need for rekeying of data. Differing interfaces between the technologies used to carry out payroll calculations can also cause issues. Using a single system will help overcome these challenges, and outsourcing your payroll function can mitigate and eliminate these risks.

Also, ensuring that you have a single system which takes all legislative compliance into account can help ensure that your company adheres to all legal requirements.

Complexity
There are many elements that continually interact with payroll: the management of employee data, the process of calculating the payable salary, declarations to the government and the impact of geography, amongst others. But what is it that really makes payroll so complex? Some would attribute this to the balance of managing internal specific policies whilst staying legislatively compliant.

With the ever-increasing amount of new legislative requirements that impact the administrative volumes facing a payroll department, it can be very time consuming just getting to grips with the background, impact and application of the new rules, not to mention the additional effort that is now required to produce new reports and perform additional checks and calculations.

Another aspect that adds to the complexity is the amount of time it takes for payroll to be carried out, due to the amount of collation and reconciliation that is required to ensure nothing falls through the cracks (or, more often, no one). Often, the payroll system is different to the HR system, so the data itself is maintained in multiple places; many organisations find that payroll ends up being their most accurate version of the truth when it comes to employee data, as so much effort goes into ensuring everything is captured to comply with legislation. In short, payroll is effectively audited to an extent on a monthly basis.

Security
All data held by an organisation is sensitive and, rightly so, is governed by legislation to protect it; however, arguably payroll information is one of the most sensitive data types alongside medical and banking details.


As a mixture of remuneration information, personnel files and bank details, the security risks are significant if it ends up in the wrong hands.

Confidentiality of employee payroll data is critical because a leak can result in discord among employees, adverse publicity and fines.

Identifying the potential risks that your HR and Payroll departments are exposed to is vital to avoid the consequences of any critical information leakage.

Most companies are aware of the paramount importance of data security and that cyber-attacks are an increasing challenge. Hackers are often highly skilled and utilise sophisticated technology, having varying motives from simple identity theft to more extreme activity. In May 2018, the forthcoming European Union General Data Protection Regulation (GDPR) will be introduced. It was designed to harmonise data privacy laws across Europe, to protect and empower all EU citizens' data privacy, and to reshape the way organisations across the region approach data privacy.

Cost
When considering the cost, it is not just the financial cost of your payroll that must be taken into account; the real cost is made up of a combination of hidden costs, running costs and inaccuracies which might lead to legal action or fines against your company, and which could ultimately affect your reputation as an employer.

Hidden costs can be varied but must also be taken into consideration, specifically if they are triggered by a legislative change (i.e. IR35 for contractors, the Apprenticeship Levy). Unfortunately, the impact of new requirements isn't necessarily limited to the financial costs associated with new legislation (e.g. pensions auto-enrolment in recent years). There is also additional administration required, such as collating and submitting new reports to governing bodies, and in some cases investment in new technology to be able to address these additional activities.

Conclusion
Payroll is not a one-size-fits-all endeavour. Businesses today must take into account their organisational goals and their cultural
and industry practices when deciding how best to approach payroll strategically. Managing payroll has become a complex and
sensitive process that can have ripple effects on an organisation when things go wrong. Many companies make the decision to
outsource some or all of their payroll, allowing their HR and Finance functions to devote more time to the company’s strategic
plans, helping drive productivity, leadership, engagement and innovation.
Organisations who implement modern cloud based HCM solutions that integrate both HR and Payroll see real efficiencies and
cost savings; one area where this is evident is in reducing the need to maintain interfaces or double keying between HR and
Payroll. Furthermore, being on a platform that is automatically updated as and when new legislation is enforced removes the
need to invest in technology or adopt manual workarounds.
In such circumstances a change in payroll mind-set is required to be more focused on value creation and business performance.
The core employee data is no longer ‘owned’ by payroll in an integrated HR and Payroll system. The core data is owned by HR
where employee policies and contracts reside. Payroll now take care of the business of processing payroll and carrying out
the usual and necessary checks and balances. They no longer double-key, double-check or approve the changes. Approval is
managed as part of the HR process and data is shared with payroll automatically, to be processed in the pay run. Bearing these
important changes in mind, HR and Payroll need to work together to harmonise their efforts across teams, processes and
systems.

For more information on payroll, take a look at the two whitepapers from Symatrix: https://www.symatrix.com/the-knowledge-centre/

ABOUT THE AUTHOR
Claire Milner, BPO Manager, Symatrix
Claire Milner is BPO Manager at Symatrix, who have been supplying fully managed outsourced payroll on Oracle HCM for several years. She has 20 years' experience working in HR & Payroll, 16 of those as a Payroll Manager. She has experience of payroll for companies from 1 to 45,000 employees, both in-house and for outsource providers, looking after clients across all sectors. A Six Sigma Green Belt, Claire is focused on utilising technology and driving efficiencies.
www.linkedin.com/in/claire-milner-64bb1956/

Applications

Where’s Your
Head At?

Oracle announce "Continued Investment and Support for Years to Come" on E-Business Suite releases[i]. Back in Issue 58 I wrote an article, "Should I Stay or Should I Go", looking at whether organisations are ready and willing to move from Oracle E-Business Suite (EBS) to the Cloud.

Now 2 years on, what has changed, if anything? And with Oracle’s recent
announcement stating there will be another R12 dot release with Premier Support
running until at least 2030, how does this affect the current R12 user base?
Steve Davis, COO, Namos Solutions

So, it appears there is plenty of life in the old EBS dog yet (or is
there? – read on, I dare you), however for the decision makers in
your organisation is it enough to put your “imminent” move to
a Cloud solution on hold? And surely for existing EBS users this
is the crux of the dilemma facing organisations at the moment: should I, and if so, when do I go to Cloud?

Whilst there are plenty of customers on Cloud, be it CX, ERP/EPM


or HCM etc. not many of them were existing EBS users in the
first place, and if they were, they have added a Cloud module or
pillar (Cloud HCM with EBS Financials) rather than moving their
entire Oracle EBS estate over (there has been some traction, and
numbers are increasing of course).

A majority of Cloud customers are new to Oracle, so is this a change of dynamic within Oracle to pay attention to EBS users whilst still, on the other side, proactively moving customers to a cloud solution? Well, instead of one or the other, it's actually providing choice to remain on EBS or move to a co-existence model, and invest in the future; "Options" being the key word!


Whilst improvements in functionality will no doubt reach EBS in what will clearly be 12.2.7, 12.2.8, and dare we say 12.3, they will be few and far between in the core forms, core products and core processes.

But let's not forget those on R12.0 or R12.1; there has already been a whole host of new features with 12.2, so this extension of development and support may tempt them to stay on EBS and upgrade rather than go Cloud. They could have plenty to cheer, as who can forget this graphic?

Plenty of good stuff, and we haven't even touched on the introduction of online patching and the efficiencies that provides, particularly predictable downtime and, more importantly, minimal downtime for global or 24/7 businesses.

So, without much investment in changed forms and core functionality, instead the Cloud look and feel (already available in R12.2) will be enhanced, and there'll be serious improvements in mobile applications and user interfaces. The ability to use mobile apps for approvals, expenses, procurement and timecards will be a major plus to those living in the forms-based world (albeit self-service style HTML forms). The modern world is really changing for the better, and it's all proven functionality.

I know how alien all that sounds. Clearly modern agile cloud-based applications are the future and you just can't get away from that, so instead…embrace it people!!!

Using all the best design from their raft of Cloud products, Oracle are giving you the best of both worlds: preservation of your known solution for many more years to come, but with the facility to link up with the latest technology. However…

Food for thought
On the face of it, then, there is a main course of EBS improvements and lengthy support until 2030, but it does have a large side order of Cloud in the detail. Not only those mentioned regarding Apps above, but also the offer to take your whole customised EBS estate and put it in the Cloud using various options in IaaS and PaaS.

Well, as I said earlier, options, options, options!!! The graphic below illustrates the steer perfectly; more than happy for you to carry on with EBS, but how can Oracle help you?

Re-platform perhaps, or maybe extend with the SaaS Apps already spoken about, but then ultimately migrate through to SaaS altogether, although more options are here with co-existence by:

• Moving certain regions, countries or parts of your business to SaaS or,
• Adding new pillars (or modules), as we've seen with a number of customers having ERP on EBS R12 but running PBCS or HCM in the Cloud. This route is proving very popular and for us as system integrators is an area where customers are getting actively engaged. The ability to protect your current investment (keep it close at home), but sometimes you've just got to go out for a night (or out out if you're really bold!), and so dipping your toe in the Cloud pool gives you an understanding of the differences in delivery, environment or pod management, the "as-a-service" model, the Oracle relationship and your support relationship with your implementation partner (I've a lot of thoughts on what you should expect from that, but not enough column inches here).


Where's your head at? More like where's your tech at? Whatever your combination, there is a solution for you; hey, even the "do nothing" option is still on the table. But you can't stand still in competitive business for too long, so please talk about it. Conversation is the way forward, so discuss these options asap. I'm more than happy to talk with you about your future road-map, as every organisation is unique and has their own drivers for change, and now is the perfect time to get going.

Who'd have thought 2 years on that I'd be saying "the future of EBS is the Cloud", or that you can have your E-Business Suite in the Cloud or a Cloud Machine on-premise? But that's the future of tech, always evolving and surprising! Time for a lie down I think, to sort my head out. I'll try not to leave it 2 years next time, dear reader.

Oracle will go into more detail on all the areas of this announcement at the UKOUG Conferences, so be sure to attend 4th-6th December 2017 at ICC Birmingham.

ABOUT THE AUTHOR
Steve Davis, COO, Namos Solutions
Steve Davis, COO of Namos Solutions Ltd, is a certified Oracle Cloud specialist and a highly experienced, industry-recognised programme and project manager delivering Oracle solutions in multiple industry sectors. Steve manages and mentors a number of consultants in the Namos Solutions organisation, taking responsibility for resourcing the most appropriate individuals to clients' needs and ensuring quality of delivery every time. Plus he sits on a number of committees at UKOUG, and always has an opinion if you ask him!
www.linkedin.com/in/steve-davis-081298a/
@DeadlyNamos

i http://www.ukoug.org/what-we-offer/library/applications-keynote-oracle-e-business-suite-update-strategy-and-roadmap/

CX

Why Machine Learning Might be the

Saviour of
Advertising
Advertisers have come up against a wall. The use of ad blockers is on the rise as
consumers look to control how much interruptive advertising they receive online
and over social media. eMarketer predicted[i] that more than one quarter of US internet users will be using an ad blocker by the end of 2016.

Daryn Mason, Oracle CX Evangelist

People may not welcome what they see as an endless barrage of messaging, but that doesn't mean they want to block everything. They just want communications that are relevant. In the UK, for instance, the Internet Advertising Bureau (IAB) found[ii] that more than half of consumers would turn ad blocking off to receive the content they want.

Machine learning may be the technology that takes advertising to this next level. A wave of "intelligent" software will allow advertisers to probe deeper into each shopper's digital interactions to learn which brands, products and services most appeal to them, and target them with genuinely impactful content.

On the right track
Today's online marketing is definitely more personalised than it used to be. The use of programmatic advertising has been a major leap forward in terms of communicating with customers based on their online activity and purchasing habits, but customers still find themselves overwhelmed by the volume of messaging they are sent. Hence their growing preference for ad blockers.

The problem is that ad blockers are generally inflexible. As Aiden Joyce[iii], Chief Executive of anti-ad-blocking start-up Oriel, has said:

"Ad blocking technology is a blunt instrument which, by default, makes no differentiation between poor and quality advertising."

Some do offer a limited degree of control. AdBlock Plus allows users to whitelist advertisers and publishers who have agreed to abide by user-generated criteria.

A battle of the digital wits
Many companies' response to ad blockers has been to implement software that circumvents them.

Facebook launched a workaround to AdBlock Plus and other similar services this year, and as a result saw its desktop advertising revenue jump 18%[iv]. The practice has also gained traction in the publishing space[v], with major media outlets like The New York Times and The Guardian rolling out ad blocking workarounds to secure their advertising revenue.


What we're seeing is essentially a battle of the wits, where ad-blockers and merchant workarounds are trying to outsmart each other in increasingly sophisticated ways. This dynamic doesn't breed better content and is not a sustainable way forward.

Welcome to the machine
Machine learning brings greater nuance to the way consumers manage content and to how brands distribute it. The technology is turning ad blockers from all-or-nothing sorting programs into sophisticated 'content brokers' that scrutinise adverts for relevance based on each customer's needs.

Just like Google anticipates search terms based on our browsing history, ad blockers will increasingly rely on artificial intelligence (AI) to curate brand messages on a customer's behalf. And, much like Google, these programs will develop a deeper understanding of each user over time, which means they'll only become more accurate.

The use of AI will continue to evolve for both merchants and customers, and by 2020 we will see volume-based marketing dialled back as more targeted, productive communications take its place.

Syncing up marketing and sales
It's worth highlighting the importance of timing in determining the impact of marketing. Overexposing a prospect to messaging when they aren't primed to buy, or if they are already in talks with your brand's sales team, runs the risk of turning them off. That's why marketing and sales departments need to align more closely when dealing with customers.

Consider the case of a mobile software start-up that needs to buy additional bandwidth from a telecoms provider to reach more customers. Chances are the company has already researched a number of vendors online, and a programmatic approach to advertising would see them targeted with adverts to capitalise on their interest. However, if the start-up is already speaking with sales then the continuous wave of marketing might just come off as aggressive and unhelpful.

The keys to effective advertising are relevance and timing, and as people continue to adopt sophisticated ad 'brokers', brands will need to work harder to deliver content that is both opportune and valuable. Those companies that put in the effort will find themselves getting closer to their customers and prospects, rather than continuing to drive them back behind a wall.
its place.

ABOUT THE AUTHOR
Daryn Mason, Oracle CX Evangelist
Daryn has been working in the IT industry for 33 years (including 21 years in management), where he has built a reputation for effective leadership through change and transition. His areas of expertise include: Digital Marketing, Customer Experience (CX), CRM, SaaS, Cloud Computing, Enterprise Social Networks, Sales Effectiveness and Solution Selling. He is an avid user of social media as a way to engage customers, partners and colleagues in stimulating discussions.

i https://www.emarketer.com/Article/US-Ad-Blocking-Jump-by-Double-Digits-This-Year/1014111
ii https://iabuk.net/about/press/archive/iab-uk-reveals-latest-ad-blocking-behaviour
iii https://www.ft.com/content/abf110aa-00b0-11e6-99cb-83242733f755
iv https://techcrunch.com/2016/11/02/add-cash-plus/
v http://www.thedrum.com/news/2016/09/09/which-media-outlets-are-blocking-ad-blockers

#ukoug_lme
UKOUG LICENCE MANAGEMENT 2017
24 OCTOBER 2017, CAVENDISH CONFERENCE CENTRE, LONDON
Does Oracle licensing still send shivers through your business? Get expert advice and your questions answered on best practice, compliance and managing risk.
Find out more at www.ukoug.org/lme


Ask
JONATHAN
In this issue: How accurate is Oracle in its hint about the temporary space you'll need to create an index; why did Oracle ignore my parallel() hint; and why are my archived redo log files so much smaller than the online redo log files?

Submit your questions, listing the topic area, to [email protected]. Jonathan may summarise your question and, with your prior agreement, may contact you to fill in a bit of background.

QUESTION 1:
Creating indexes

I can use explain plan on a "create index" statement to get a note telling me how big the resulting index will be; but is the indication of "bytes" in the plan a good indicator of how much space I will need in the temporary tablespace if the sort spills to disc while creating the index?

This question appeared recently on the OTN database forum with the following code and plan, see Figure 1.

exec dbms_stats.gather_table_stats(USER,'table1');

explain plan for create index ind1 on table1(col1, col2, col3);
select * from table(dbms_xplan.display);

Plan hash value: 1872740920
----------------------------------------------------------------------------------
| Id | Operation              | Name   | Rows | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|  0 | CREATE INDEX STATEMENT |        | 734M |   17G | 9903K  (1)| 33:00:43 |
|  1 |  INDEX BUILD NON UNIQUE| IND1   |      |       |           |          |
|  2 |   SORT CREATE INDEX    |        | 734M |   17G |           |          |
|  3 |    TABLE ACCESS FULL   | TABLE1 | 734M |   17G | 8902K  (1)| 29:40:28 |
----------------------------------------------------------------------------------

Note
-----
   - estimated index size: 30G bytes

FIGURE 1

Before saying anything else, it's worth making a couple of related points. First – the estimated index size doesn't make any allowances for compression; secondly, if the indexed columns contain a lot of nulls the estimate won't be very accurate; thirdly, although the question asked explicitly about the temporary tablespace and the 17G shown in the plan, it's worth remembering that there are two different reasons for getting "ORA-01652: unable to extend temp segment …" when creating an index – one is that you didn't have enough space in the temporary tablespace for the necessary sorting, the other is that you didn't have enough space in the target tablespace to build the final copy of the index: make sure you check the tablespace name reported in the error before you start adding space to the wrong tablespace.

So, the question here is whether or not the 17G is (roughly) the amount of space we will need in the temporary tablespace. The answer is no – if that's all the space you've got (and assuming you can't build the index in memory) then the statement will fail. The number is simply the result of the calculation "user_tables.num_rows * sum(user_tab_cols.avg_col_len)" – and there are lots of extra bits to add when that volume of data goes into the temporary tablespace:

• The value avg_col_len includes one length byte, but when the extracted row goes into temp each column is allowed two bytes to hold its length
• To create an index you need the rowid for each row so (for a simple heap table) that's another 6 bytes, plus 2 bytes for the length
• Each row in the temp space has a 4 byte header
• Then there's the block overhead

So, we have 734 million rows of three columns, and the space estimated for each row has been understated by: 3 (length bytes) + 8 (rowid + length) + 4 = 15 bytes. Multiply by 734M and you get a total of 11 billion bytes of extra space needed – for a total of 28 billion bytes. (In passing, G = 10^9 in the execution plan, not 2^30 – that's a difference of more than 7%.)

If you assume you need a couple of hundred bytes per block for overhead then we can divide by 8,000 to convert to blocks, and multiply by 8192 to convert to the actual temp storage requirement, giving a total of roughly 29 gigabytes. And that's just the raw storage requirement. When a sort spills to disc Oracle will write multiple sorted streams of data as it scans the table – that's where we first get 29GB written to disc – then Oracle has to start merging the initial sorted streams to create fewer, larger, sorted streams, and repeats the process until the output is a single sorted stream that can be written as an index into the final tablespace.

This means we need even more space so that we can write one large stream while reading several smaller streams. In principle this might mean that Oracle could use twice the initial space – but I think the code is much smarter than that and manages to keep the extra space to a minimum; even so I think it might be possible to use up to 30% more space than the basic requirement, and to play safe I'd simply double the original estimate rather than risk wasting a lot of time on a failed index build.
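If you want to avoid doing the arithmetic by hand, a rough dictionary-driven version of the same calculation (using the 15 bytes of per-row overhead and roughly 8,000 usable bytes per 8KB block from the argument above, with the object names from the example) might look like this:

select round(
           t.num_rows * (c.sum_col_len + 15)  -- column data plus per-row overhead
         / 8000 * 8192                        -- allow ~200 bytes overhead per 8KB block
         / 1e9
       , 1) as est_temp_gb
from   user_tables t,
       ( select sum(avg_col_len) sum_col_len
         from   user_tab_cols
         where  table_name  = 'TABLE1'
         and    column_name in ('COL1', 'COL2', 'COL3')
       ) c
where  t.table_name = 'TABLE1';

Then, as suggested above, double the result before comparing it with the free space in the temporary tablespace.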


QUESTION 2:
Missing parallelism

Why did Oracle ignore my parallel hint?

This question came from the Oracle-L list server, and has to be met with my standard response: Oracle doesn’t ignore hints if you’ve
managed to apply them perfectly correctly (unless you’ve hit a bug). In this case the supplied SQL and plan were as follows:

SELECT /*+ parallel(pv,4) */
       DISTINCT(PV.PARAMETER_VALUE_NAME), D.DEVICETYPE_ID
FROM
DEVICE D,
PARAMETERVALUE PV,
TMP_HDM_CLEANUP_INSTANCE TMP
WHERE
D.CACHED_DATA_RECORD_ID = PV.DATA_RECORD_ID
AND D.DEVICETYPE_ID = TMP.DEVICETYPE_ID
AND PV.PARAMETER_VALUE_NAME LIKE TMP.PARAMETER_VALUE_NAME ESCAPE :B1

-------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows |Bytes | Cost (%CPU)| Pstart| Pstop |
-------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | | | 12395 (100)| | |
| 1 | HASH UNIQUE | | 1 | 100 | 12395 (1)| | |
| 2 | NESTED LOOPS | | 1 | 100 | 12393 (1)| | |
| 3 | NESTED LOOPS | | 1 | 57 | 12390 (1)| | |
| 4 | TABLE ACCESS FULL | TMP_HDM_CLEANUP_INSTANCE | 4125 | 185K| 11 (0)| | |
| 5 | TABLE ACCESS BY INDEX ROWID| DEVICE | 1 | 11 | 3 (0)| | |
|* 6 | INDEX RANGE SCAN | SYS_C0016783 | 1822K| | 3 (0)| | |
| 7 | PARTITION HASH ITERATOR | | 1 | 43 | 3 (0)| KEY | KEY |
|* 8 | INDEX RANGE SCAN | UQ_PARAM_NEW | 1 | 43 | 3 (0)| KEY | KEY |
-------------------------------------------------------------------------------------------------------------

FIGURE 2
The parallel hint has been used correctly – it specifies an object by its alias and supplies a degree. But that doesn't mean any part of the query has to operate in parallel, it simply means that the optimizer should use the arithmetic for parallel degree 4 any time it's estimating the cost of a full tablescan of the parametervalue table. That doesn't mean the final plan is required to use a tablescan of parametervalue, though, and if there's a serial plan that's cheaper than every plan that includes a (parallel) tablescan of parametervalue the optimizer will choose the serial plan.

In fact, this plan gives us a clue about why the costing may favour avoiding a tablescan of parametervalue. If you look at operation 2 you can see that the optimizer's estimate of the number of rows coming out of the join between tmp_hdm_cleanup_instance and device is just one, and we can see at operation 8 that the optimizer thinks this will join to a single row (identified within the index) from parametervalue: it's probably not a good idea to do a tablescan of a large table to find one row.

But we can see there's definitely a problem with the arithmetic – the index range scan at operation 6 has a cost of 3 and is expected to return 1.8 million rows each time. Even with very good compression rates you can't squeeze that many index entries into a small enough number of blocks to give you a cost of 3: clearly there's some sort of problem with the statistics on the table (columns) or on the index (or maybe this piece of SQL has found an optimizer bug). Without a detailed investigation of the table and index definitions and statistics it's not possible to say more – but if Oracle got a better cardinality estimate for the join and a more realistic cost for that index range scan perhaps a tablescan of the parametervalue table would look sensible, at which point parallel execution would take place.
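As a sanity check on that claim you could force the costing you expected and compare – this is just a sketch, with a literal backslash standing in for the original escape bind: if the optimizer really were ignoring the hint, the full scan below wouldn't be costed at degree 4.

-- Hypothetical experiment: force the tablescan the hint prices at degree 4
-- and compare its reported cost with the serial plan above.
EXPLAIN PLAN FOR
SELECT /*+ full(pv) parallel(pv,4) */
       DISTINCT (PV.PARAMETER_VALUE_NAME), D.DEVICETYPE_ID
FROM   DEVICE D,
       PARAMETERVALUE PV,
       TMP_HDM_CLEANUP_INSTANCE TMP
WHERE  D.CACHED_DATA_RECORD_ID = PV.DATA_RECORD_ID
AND    D.DEVICETYPE_ID = TMP.DEVICETYPE_ID
AND    PV.PARAMETER_VALUE_NAME LIKE TMP.PARAMETER_VALUE_NAME ESCAPE '\';

SELECT * FROM TABLE(dbms_xplan.display);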

QUESTION 3:
Big logs, little archives

Why are my archived redo log files much smaller than my online redo log files?

There's one obvious cause – someone is making this happen deliberately: maybe a dbms_scheduler task or a cron job has been set up to "archive log current" every 30 minutes, or perhaps some parameter like archive_lag_target has been set with the intention of limiting the loss of redo information. If this is the case then it's likely that there will be a regular interval visible in the timestamps of the files, and the files might show a significant variation in size.
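Checking for that pattern is straightforward – the following sketch assumes you have access to v$archived_log and that only one archive destination matters:

-- A suspiciously regular gap (with varying sizes) suggests a scheduled
-- "archive log current" or an archive_lag_target setting.
SELECT sequence#,
       completion_time,
       ROUND(blocks * block_size / 1024 / 1024) AS size_mb,
       ROUND( (completion_time
               - LAG(completion_time) OVER (ORDER BY sequence#)) * 24 * 60
            ) AS gap_minutes
FROM   v$archived_log
WHERE  dest_id = 1
ORDER  BY sequence#;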
But what if the files are all similar in size and the timestamps have wildly random gaps? A recent case on OTN reported online log files of 512MB with archived log files a fairly constant 44MB, and timestamps showing gaps of anything between one minute and several hours. This is indicative (though a fairly extreme case) of space reserved for private redo threads.

Since 10g Oracle has allowed for public and private redo log buffers (a.k.a. redo threads, or redo strands). A session that does a small amount of work and then commits will first write its redo into a private buffer, and on the commit the private redo buffer will be copied to one of the public buffers before being written to file by the log writer.

When a log file is full Oracle will perform a "log file switch", but when that happens there may be lots of private redo buffers that still have some pending redo which needs to be transferred to the public buffers and written to file – which is why you sometimes see waits for "log file switch (private strand flush incomplete)". Oracle wants to transfer all the pending private redo to file before performing the switch.

This is the critical clue – Oracle anticipates the threat of private redo strands being full at the moment it needs to switch log files, so it "pre-allocates" space in the log file to ensure that any extreme condition can be met. In 64-bit systems the private redo buffer is 128KB, so once you've discovered how many private threads there are you can work out how big a difference there could be between the size of the online and the archived redo logs.

So how did this user see 512MB online logs turning into 44MB archived logs? Well, the number of private redo threads Oracle allocates at startup is transactions / 10, but the default value for the transactions parameter is 5 + sessions * 1.1, and the default value for sessions is processes * 1.1 – and the last time I saw figures for this system it had processes set to 30,000. So the gap could be as large as: 128KB * 30,000 * 1.1 * 1.1 / 10 = 453.75MB – and that's a pretty good match to the gap in the question. The machine in question also reported 192 CPUs, and the number of public redo threads is cpu_count / 16 – which would be 12 public redo threads – and I have seen some indications that the log file switch may also leave space pre-allocated for the public threads as well.

The numbers you see when you start checking file sizes can be a little puzzling because Oracle seems to have some ability to recognise dynamically how many private and public redo threads have been active in the recent past, and seems to reduce the pre-allocation accordingly; but as a guideline you may find that your archived redo log files can be smaller than the online log files by a value equal to the total space allocated to the log buffer (which might be accurately reflected in the parameter log_buffer).

Does this matter? Normally I'd think not – but if your log files are 512MB and you're getting a switch and archive after only 44MB it's possible that you're going to see critical time lost waiting for "log file switch completion" that you're not expecting. Your response might be to increase the size of the redo log files – on the other hand I'd probably be looking at why the setting for processes was so high and whether the hardware could actually support that many processes.
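A quick way to see whether that time is already being lost – a sketch against the instance-wide wait statistics, so it reports totals since startup – is to check the log file switch wait events:

-- Time spent on log file switch related waits since instance startup.
SELECT event,
       total_waits,
       ROUND(time_waited_micro / 1e6) AS seconds_waited
FROM   v$system_event
WHERE  event LIKE 'log file switch%'
ORDER  BY time_waited_micro DESC;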
Footnote: At the time of writing this question is still open, and I'm waiting to get some details of parameter settings from the originator.

ABOUT THE AUTHOR

Jonathan Lewis
Freelance Consultant, JL Computer Consultancy

Jonathan's experience with Oracle goes back more than 25 years. He specialises in physical database design, the strategic use of the Oracle database engine and solving performance issues. Jonathan is the author of 'Oracle Core', 'Cost Based Oracle – Fundamentals' and 'Practical Oracle 8i – Designing Efficient Databases' and has contributed to three other books about Oracle. He is one of the best-known speakers on the UK Oracle circuit, as well as being very popular on the international scene, having worked or lectured in 50 different countries. Further details of his published papers, presentations and tutorials can be found through his blog.

Blog: jonathanlewis.wordpress.com
@JLOracle

REGISTER NOW

4-6 Dec 2017 #ukoug_apps17 | 5-6 Dec 2017 #ukoug_jde17 | 4-6 Dec 2017 #ukoug_tech17

Biggest UK Oracle Community Event of the Year

Join us and the leading lights of the Oracle world for the largest independent gathering of Oracle professional users in the UK.

Education: Benefit from expert knowledge and learn from user experiences.
Networking: An opportunity to build valuable networks and collaborate.
Collaboration: Through the collective strength of the combined Oracle community, UKOUG acts as a single independent voice into Oracle.

MEMBERS
Platinum and Gold members are entitled to a 50% discount on additional Conference 2017 tickets* when purchased in September. Not sure of your membership entitlement? Contact [email protected] with any enquiries.

NON-MEMBERS
New to UKOUG? Benefit from our Platinum for Gold** offer! Start your membership with the benefits of a platinum package, for the price of gold. That's less than £170 per conference day pass. This includes: 6 conference day tickets, 10 Special Interest Group passes, access to UKOUG's ebulletin, Oracle Scene magazine and full library access.

All information and prices correct at time of printing.
*Full conference attendance. **For new first-time members only. UKOUG will be in contact regarding the allocation of any inclusive conference day tickets.

UK Oracle User Group (UKOUG) is a not-for-profit membership association organising over 40 events a year. Our events enable you to connect with other organisational users and solution providers in an unbiased, collaborative environment, coming together for education, innovation and information.

If you would like to apply or for more information, contact us on: +44 (0)20 8545 9670 or [email protected]
ORACLE GOLD PARTNER

Providing an End-to-End ERP Support Services Platform for Oracle ERP Clients and Partners

Migration | Integration | Implementation | Support

We can help your organization with:

• Data Migration Services (Extraction/Transformation/Loading/Reconciliation), Cloud/On-Premise (R12.x.x)
• Custom Development Services (OAF/ADF/APEX/Other)
• Fixed Price R12 Upgrades
• Strategic/Operational Reporting Services (OBIEE/OBIA/BICS/Custom Analytics/XML)
• On-Premise/Cloud Co-Existence – Integration/Development Services
• ODI Services
• Oracle Cloud Integration Services

Local Government Solutions
"eAppSys has deep domain knowledge and extensive experience in delivering solutions to Local Government"

For more info visit: www.eappsys.com