
Posts tagged: database

Load/Drop Jars to/from Oracle database using Ant

The Oracle Client package provides the loadjava and dropjava tools for loading Java classes, JARs, and resources into an Oracle database and dropping them from it again.

Sometimes, however, it is necessary to run this functionality on a machine that doesn't have the Oracle Client package installed.

This post describes how to achieve this using Ant.

Note: these instructions apply to Oracle 11g.

Prerequisites

From a machine that has the Oracle Client installed, copy ojdbc5.jar (typically located in $ORACLE_HOME/product/11.x/client_1/jdbc/lib) and aurora.zip (typically located in $ORACLE_HOME/product/11.x/client_1/javavm/lib) to a folder accessible to your Ant script.

Below I'll assume that these two files are located in the same folder as the Ant script.

Load Java Target

<target name="loadjava" description="target for deploying jars to the database">
	<java classname="oracle.aurora.server.tools.loadjava.LoadJavaMain" fork="true">
		<jvmarg value="-Xint" />
		<classpath>
			<pathelement location="aurora.zip" />
			<pathelement location="ojdbc5.jar" />
		</classpath>
		<arg line="-thin -user SCOTT/TIGER@DBHOST:1551:DBSID -resolve my-jar-to-upload.jar" />
	</java>
</target> 

This target deploys the my-jar-to-upload.jar file to the Oracle database identified by the SCOTT/TIGER@DBHOST:1551:DBSID connection string.

Drop Java Target

<target name="dropjava" description="target for dropping jars from the database">
	<java classname="oracle.aurora.server.tools.loadjava.DropJavaMain" fork="true">
		<jvmarg value="-Xint" />
		<classpath>
			<pathelement location="aurora.zip" />
			<pathelement location="ojdbc5.jar" />
		</classpath>
		<arg line="-thin -user SCOTT/TIGER@DBHOST:1551:DBSID my-jar-to-upload.jar" />
	</java>
</target> 

This target drops the my-jar-to-upload.jar file from the Oracle database identified by the SCOTT/TIGER@DBHOST:1551:DBSID connection string.

Database-driven unit tests with Liquibase

In the previous article “Database change management with Liquibase” I demonstrated the standard Liquibase usage for managing database changes.

This post describes how to construct an infrastructure for executing database-driven unit tests, a rather untypical task for Liquibase.

To be able to execute database-driven tests we have to put our database into a known state between test runs. This is where Liquibase will help us.

First, let’s create a sample set of changes that populates the database with the test data:

<databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-2.0.xsd">
	<changeSet id="testdata" author="bar">
		<loadData tableName="t_role" file="db/testdata/roles.csv">
			<column name="name" type="STRING" />
			<column name="description" type="STRING" />
		</loadData>
		<loadData tableName="t_user" file="db/testdata/users.csv">
			<column name="id" type="NUMERIC" />
			<column name="username" type="STRING" />
			<column name="role" type="STRING" />
		</loadData>
		<loadData tableName="t_address" file="db/testdata/addresses.csv" />
		<rollback>
			<delete tableName="t_address" />
			<delete tableName="t_user" />
			<delete tableName="t_role" />
		</rollback>
	</changeSet>
</databaseChangeLog> 

In the changelog above we make use of the <loadData> tag, which loads data from a CSV file and inserts it into the database (alternatively you may use the <insert>, <update> and <delete> tags to manipulate the database contents). Furthermore, the <rollback> block describes how to remove the inserted changes from the database.

As a brief overview, here is an example of the roles.csv file:

name,description
USER,A simple user
ADMIN,Administrator user
ANONYMOUS,NULL 

The first row in the CSV file specifies the column names to populate. All subsequent rows contain the test data.

Please consult GitHub for other CSV files used in this post: https://github.com/bigpuritz/javaforge-blog/tree/master/liquibase-sample/db/testdata

Now, let's make our project ready to use Liquibase together with JUnit.
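
As a rough sketch of where this is heading, a common base test class can apply the test-data changeset before each test and roll it back afterwards. The class name, changelog path and connection settings below are my own placeholders (not taken from the sample project), and the code assumes Liquibase's Java API as of version 2.x:

import java.sql.Connection;
import java.sql.DriverManager;

import org.junit.After;
import org.junit.Before;

import liquibase.Liquibase;
import liquibase.database.Database;
import liquibase.database.DatabaseFactory;
import liquibase.database.jvm.JdbcConnection;
import liquibase.resource.ClassLoaderResourceAccessor;

public abstract class AbstractDatabaseDrivenTest {

	// placeholders -- adjust to your environment and changelog location
	private static final String JDBC_URL = "jdbc:h2:mem:testdb";
	private static final String CHANGELOG = "db/testdata/changelog-testdata.xml";

	private Connection connection;
	private Liquibase liquibase;

	@Before
	public void populateTestData() throws Exception {
		connection = DriverManager.getConnection(JDBC_URL, "sa", "");
		Database database = DatabaseFactory.getInstance()
				.findCorrectDatabaseImplementation(new JdbcConnection(connection));
		liquibase = new Liquibase(CHANGELOG, new ClassLoaderResourceAccessor(), database);
		// applies the "testdata" changeset and fills the tables from the CSV files
		liquibase.update("");
	}

	@After
	public void cleanUpTestData() throws Exception {
		// executes the <rollback> block of the last changeset, emptying the tables again
		liquibase.rollback(1, "");
		connection.close();
	}
}

Concrete tests then simply extend this base class and run against a database that contains exactly the CSV data shown above (assuming the schema itself has already been created by the regular changelogs).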

Keep reading

Database change management with Liquibase

Consult https://github.com/bigpuritz/javaforge-blog/tree/master/liquibase-sample for the sample project sources.

Quote:

Liquibase is an open source (Apache 2.0 Licensed), database-independent library for tracking, managing and applying database changes. 

It is built on a simple premise: All database changes are stored in a human readable yet trackable form and checked into source control.

This post is a simple tutorial demonstrating how to use Liquibase in a real world project. We’ll assume that our sample project lives through multiple phases, each of which adds diverse changes to the database.

Let's prepare our sample project to use Liquibase within the Maven build first. We need to define the liquibase-maven-plugin within the <plugins>…</plugins> block and point it to the liquibase.properties file, containing all properties required by Liquibase at runtime. Both are demonstrated below.

Keep reading

Database-driven message source in Spring

Spring's typical message source (see the MessageSource Javadoc) relies on resource bundles. However, it is quite easy to create a custom MessageSource implementation that resolves messages from some other external data source.

This post demonstrates how to initialize a database-driven MessageSource and presents two approaches to storing messages in the database.
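
As a rough illustration of the idea (not necessarily the approach taken in the full post), a JDBC-backed message source can extend Spring's AbstractMessageSource and look up each code in a database table. The t_message table and its columns below are made up for this sketch:

import java.text.MessageFormat;
import java.util.Locale;

import javax.sql.DataSource;

import org.springframework.context.support.AbstractMessageSource;
import org.springframework.dao.EmptyResultDataAccessException;
import org.springframework.jdbc.core.JdbcTemplate;

public class JdbcMessageSource extends AbstractMessageSource {

	private final JdbcTemplate jdbcTemplate;

	public JdbcMessageSource(DataSource dataSource) {
		this.jdbcTemplate = new JdbcTemplate(dataSource);
	}

	@Override
	protected MessageFormat resolveCode(String code, Locale locale) {
		try {
			// t_message(code, lang, text) is a hypothetical table layout
			String msg = jdbcTemplate.queryForObject(
					"select text from t_message where code = ? and lang = ?",
					String.class, code, locale.getLanguage());
			return createMessageFormat(msg, locale);
		} catch (EmptyResultDataAccessException e) {
			// unknown code -> let the parent message source or default message kick in
			return null;
		}
	}
}

Registered as a bean named messageSource, such an implementation is picked up by the application context just like the usual resource-bundle variants.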

Keep reading

Oracle: Aggregating SQL statement results as single XML CLOB

Once again, a short reminder of how to aggregate SQL statement execution results into a single XML CLOB in Oracle.

Assume there is an Oracle table containing user data:

create table t_user(
	user_id		number(15) not null,	
	user_name	varchar2(100) not null,		
	role		varchar2(100) not null,			
	email		varchar2(100)	
)
/

Aggregating all the user entries into a single XML CLOB can now be done with the following statement:

select xmlelement("users",
                  xmlagg(
                    xmlelement("user",
                               xmlattributes(t.user_id   as "id",
                                             t.user_name as "name",
                                             t.role      as "role",
                                             t.email     as "email"))
                  )
       ).getclobval() xml
  from t_user t

This will result in a CLOB that looks like this:

<users>
	<user id="1" name="foo" role="USER" email="[email protected]"></user>
	<user id="2" name="bar" role="ADMIN" email="[email protected]"></user>
	...
	<user id="999" name="xyz" role="USER" email="[email protected]"></user>
</users>  
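
If the aggregated XML has to be consumed from Java, it can be read as an ordinary CLOB column over JDBC. A minimal sketch (connection settings are placeholders; only the standard java.sql API is used):

import java.sql.Clob;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class UserXmlReader {

	public static void main(String[] args) throws Exception {
		String sql = "select xmlelement(\"users\", "
				+ "         xmlagg(xmlelement(\"user\", "
				+ "           xmlattributes(t.user_id as \"id\", t.user_name as \"name\", "
				+ "                         t.role as \"role\", t.email as \"email\")))"
				+ "       ).getclobval() xml "
				+ "  from t_user t";

		// placeholders -- adjust to your environment
		try (Connection con = DriverManager.getConnection(
					"jdbc:oracle:thin:@DBHOST:1551:DBSID", "SCOTT", "TIGER");
			 Statement stmt = con.createStatement();
			 ResultSet rs = stmt.executeQuery(sql)) {
			if (rs.next()) {
				// read the aggregated document from the CLOB column
				Clob clob = rs.getClob("xml");
				System.out.println(clob.getSubString(1, (int) clob.length()));
			}
		}
	}
}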

Configuring a Spring-based web application from a database or other external data source

The most common way to configure Spring's application context with external properties is a combination of a property file and a PropertyPlaceholderConfigurer bean. If you deploy your application in multiple environments (e.g. development, pre-production, production), it is a good idea to externalize this configuration file and have the application load it from an external location on startup.

This post demonstrates how to replace the property file with another external data source (in particular, a database) and configure a Spring-based web application from it on startup.
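
One possible direction (not necessarily the one taken in the full post) is to subclass PropertyPlaceholderConfigurer and override loadProperties() so that the key/value pairs come from a database table instead of a file. The t_app_config table and its columns are invented for this sketch:

import java.io.IOException;
import java.util.Properties;

import javax.sql.DataSource;

import org.springframework.beans.factory.config.PropertyPlaceholderConfigurer;
import org.springframework.jdbc.core.JdbcTemplate;

public class DatabasePropertyPlaceholderConfigurer extends PropertyPlaceholderConfigurer {

	private final JdbcTemplate jdbcTemplate;

	public DatabasePropertyPlaceholderConfigurer(DataSource dataSource) {
		this.jdbcTemplate = new JdbcTemplate(dataSource);
	}

	@Override
	protected void loadProperties(Properties props) throws IOException {
		// load any file-based properties configured the usual way first
		super.loadProperties(props);
		// then overlay key/value pairs read from the (hypothetical) t_app_config table
		jdbcTemplate.query("select cfg_key, cfg_value from t_app_config",
				rs -> props.setProperty(rs.getString(1), rs.getString(2)));
	}
}

One thing to keep in mind with this approach: the configurer is a BeanFactoryPostProcessor, so the DataSource it uses must be constructible before the rest of the context, i.e. its own definition cannot rely on placeholders.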

Keep reading

Oracle: returning table contents as a custom table type

Often it is necessary to return table contents or some select-statement execution results as a custom table type (e.g. from within a stored procedure or function). 

A short reminder of how this can be done.

Suppose the following structures are defined in the database (a table, an object type, and a corresponding table type):

create table t_user (
  id                 number(15) not null,
  username           varchar2(100) not null,
  email              varchar2(100),
  date_of_birth      date
)
/

create or replace type obj_user as object (
  id                 number,
  username           varchar2(100),
  email              varchar2(100),
  date_of_birth      date
)
/

create or replace type user_list as table of obj_user
/

Now, using a combination of Oracle's cast and multiset operators, we can retrieve the contents of the t_user table and return them as the user_list type:

select cast(
         multiset(select obj_user(id, username, email, date_of_birth)
                    from (select * from t_user))
         as user_list) as users
  from dual
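
On the Java side, such a collection can be read through plain JDBC, assuming the Oracle driver exposes user_list as java.sql.Array and obj_user as java.sql.Struct (connection settings below are placeholders):

import java.sql.Array;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Struct;

public class UserListReader {

	public static void main(String[] args) throws Exception {
		String sql = "select cast(multiset(select obj_user(id, username, email, date_of_birth) "
				+ "                        from t_user) as user_list) as users "
				+ "  from dual";

		// placeholders -- adjust to your environment
		try (Connection con = DriverManager.getConnection(
					"jdbc:oracle:thin:@DBHOST:1551:DBSID", "SCOTT", "TIGER");
			 Statement stmt = con.createStatement();
			 ResultSet rs = stmt.executeQuery(sql)) {
			if (rs.next()) {
				Array users = rs.getArray("users");
				for (Object element : (Object[]) users.getArray()) {
					// each element is one obj_user instance: id, username, email, date_of_birth
					Object[] attrs = ((Struct) element).getAttributes();
					System.out.println(attrs[0] + " " + attrs[1]);
				}
			}
		}
	}
}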

Random test data generation in Oracle

There is nothing new or extraordinary in this statement.

Just a reminder for myself, so I don't have to dig around Google the next time I need this.

select level id,
       mod(rownum, 100) city_id,
       trunc(dbms_random.value(20, 60), 0) age,
       trunc(dbms_random.value(1000, 50000), 2) income,
       decode(round(dbms_random.value(1, 2)), 1, 'M', 2, 'F') gender,
       to_date(round(dbms_random.value(1, 28)) || '-' ||
               round(dbms_random.value(1, 12)) || '-' ||
               round(dbms_random.value(1900, 2010)),
               'DD-MM-YYYY') dob,
       dbms_random.string('x', dbms_random.value(10, 30)) address
  from dual
connect by level <= 1000;

It will generate 1000 rows of random data similar to this:

[Screenshot: a sample of the generated rows]

Getting n-th maximum row with SQL

This SQL statement will return the n-th maximum row from a table:

select distinct a.myColumn
	from myTable a 
	where &n = (
		select count(distinct(b.myColumn))
		from myTable b
		where a.myColumn <= b.myColumn)

where &n is the rank of the value to return (1 = maximum, 2 = second maximum, and so on). For each row a, the subquery counts how many distinct values are greater than or equal to a.myColumn; the row for which this count equals n holds the n-th maximum.

For example, this statement will return the employee with the second highest salary:

select distinct a.name, a.salary
  from employee a
  where 2 = (select count(distinct(b.salary))
             from employee b
             where a.salary <= b.salary)
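
Note that &n is a SQL*Plus substitution variable; from application code you would bind the rank as an ordinary parameter instead. A minimal JDBC sketch (connection settings are placeholders; the employee table is the one from the example above):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class NthMaxSalary {

	public static void main(String[] args) throws Exception {
		String sql = "select distinct a.name, a.salary "
				+ "  from employee a "
				+ " where ? = (select count(distinct b.salary) "
				+ "              from employee b "
				+ "             where a.salary <= b.salary)";

		// placeholders -- adjust to your environment
		try (Connection con = DriverManager.getConnection(
					"jdbc:oracle:thin:@DBHOST:1551:DBSID", "SCOTT", "TIGER");
			 PreparedStatement ps = con.prepareStatement(sql)) {
			ps.setInt(1, 2); // 2 = second highest salary
			try (ResultSet rs = ps.executeQuery()) {
				while (rs.next()) {
					System.out.println(rs.getString("name") + ": " + rs.getBigDecimal("salary"));
				}
			}
		}
	}
}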