postgresql and jdbc

In postgresql – selecting maximum for each group, we saw the actual stored function.  That was the easy part.  Then there was dealing with PostgreSQL and JDBC, where I encountered a few surprises.

Calling the stored function

I expected to use JDBC’s CallableStatement since I was calling a stored procedure.  Nope.  Because the function returns a set of rows, it gets called like an ordinary query; the proper way to call it is:


PreparedStatement stmt = conn.prepareStatement("select * from highlighted_topic_per_forum(?) limit ?");

stmt.setString(1, formattedDate);
stmt.setInt(2, limit);
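
Since the function returns rows, executing it is just a normal query.  A minimal sketch of reading the results (the column name below is a guess for illustration, not the actual schema):

ResultSet rs = stmt.executeQuery();
while (rs.next()) {
    // "topic_id" is a hypothetical column name used only for illustration
    int topicId = rs.getInt("topic_id");
    // ... read the remaining columns as needed
}
rs.close();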

Passing a timestamp

Originally, I was passing a timestamp into the stored function by calling stmt.setTimestamp() with a parameter of type “timestamp without time zone”.  Try as I might, PostgreSQL didn’t like this; it complained that $1 was not found.  I eventually decided to pass the timestamp as a string.
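
A minimal sketch of the string approach, assuming a format PostgreSQL can parse (the exact pattern here is an assumption, not necessarily the project’s format):

// Format the date as text and let PostgreSQL parse it.
// SimpleDateFormat comes from java.text; the pattern is illustrative only.
SimpleDateFormat format = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
String formattedDate = format.format(date);
stmt.setString(1, formattedDate);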

Passing an array

I was originally planning to pass the category ids via JDBC.  Given what I experienced with timestamps, I wasn’t so enthusiastic about trying this on an application I work on in my free time.  When I saw that the advice online was to extend the Array class, I decided to hard code the two category ids.
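
For what it’s worth, drivers that implement JDBC 4’s Connection.createArrayOf() make this less painful than hand rolling an Array implementation.  A sketch of what passing the ids might have looked like, assuming a hypothetical version of the function that takes an integer array (the real ids are hard coded instead, and the ids below are placeholders):

// Hypothetical two-parameter version of the function; the ids are placeholders.
Integer[] categoryIds = {1, 2};
Array idArray = conn.createArrayOf("integer", categoryIds);
PreparedStatement stmt = conn.prepareStatement(
    "select * from highlighted_topic_per_forum(?, ?) limit ?");
stmt.setString(1, formattedDate);
stmt.setArray(2, idArray);
stmt.setInt(3, limit);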

clone a postgresql database for testing cleanly

I’m looking at writing integration tests for the back end of JavaRanch’s JForum install.

A few “pesky” requirements/constraints

  • Multiple developers all over the world have their own local test databases filled with data in different states.  The tests must work for everyone.  Ideally they won’t leave data floating around either.
  • The tests must use PostgreSQL.  While the original JForum supported multiple databases, the JavaRanch version has been scaled down to just run with the one we need.  We do have some PostgreSQL-specific SQL, which rules out using an embedded database like HSQLDB or Derby.
  • Developers are using both Eclipse and IntelliJ.  Tests shouldn’t care about the IDE anyway, so this isn’t a big constraint.
  • Developers are using a variety of operating systems and OS languages.  While the code is in English, the tests can’t make assumptions about the OS state.

Strategy

I think the best strategy is to create a second database just for testing.  The JForum database would remain untouched and a jforum_integration_test database can be created for the tests.  dbUnit can control the state of that special database.
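
As a rough sketch of what that control could look like with dbUnit’s flat XML datasets (the file name, URL, and credentials here are assumptions, not the project’s actual configuration):

// Seed jforum_integration_test to a known state before the tests run.
Connection jdbc = DriverManager.getConnection(
        "jdbc:postgresql://localhost/jforum_integration_test", "postgres", "secret");
IDatabaseConnection dbUnitConnection = new DatabaseConnection(jdbc);
IDataSet dataSet = new FlatXmlDataSetBuilder().build(new File("test-data.xml"));
// CLEAN_INSERT deletes the rows in the dataset's tables and reinserts them.
DatabaseOperation.CLEAN_INSERT.execute(dbUnitConnection, dataSet);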

The problem

Before I even started thinking about dbUnit, I did a proof of concept to ensure I could create a new database from scratch using the command line.  Creating a database is the easy part.  The “hard” part is that JForum doesn’t come with a schema.  It comes with an installation servlet that creates the schema.  While few people will be creating a schema for JForum, the technique I used applies elsewhere.

The procedure “before”

  1. Start up the JForum war
  2. Go to the JForum install URL and enter some information which creates the tables
  3. Run the JavaRanch customizations.

How to clone a database for which you only have a partial script

  1. Create an empty database
    createdb jforum_integration_test
  2. Arrive at the base schema
    1. Go to the JForum installation URL
    2. Enter the information to create the tables
  3. Export the schema thus far
    pg_dump -U postgres jforum_integration_test > c:\temp\postgres.sql
  4. Provide instructions for running the rest of the SQL, which was created by our developers.

How to import

Now for the easy part!

Importing this dump is a matter of a single command:

psql -U postgres jforum < "pathToWorkspace\JForum\javaranch-docs\deployment\file.ddl"

Lessons learned afterward

The next day I learned that this wasn’t enough.  We also needed some test data from the server.  I ran the following command a few times to export the relevant tables.

pg_dump --data-only --inserts -U user -W database --file roles  --table tableName

Conclusion

My next step will be to actually configure dbUnit against this new database and start writing tests.

Live from JavaOne: JDBC Spec Enhancements

Live from the JavaOne conference again, this time attending a session about enhancements in the JDBC 4.1 specification. Four members of the JDBC specification team presented a number of useful enhancements due in the upcoming version of JDBC. Some of these features may not make it into 4.1, but will be in future versions of JDBC down the road.

1. Named Prepared Statements
As many of you may already know, Prepared Statements support parameters referenced by numeric index, such as ps.setString(1, "val1"), where 1 corresponds to the first “?” in the SQL string. Perhaps rightfully so, a lot of developers new to JDBC complain that JDBC should support named parameters as well as numeric indexes. Well, those developers are in luck: future versions of JDBC may support syntax such as the following:

NamedPStatement nstmt = con.createNPStatement(
     "SELECT f1, f2, f3 FROM foo WHERE f1 = :f1 AND f2 < :f2");
nstmt.setParameter("f1", "val1");
nstmt.setParameter("f2", "val2");
ResultSet rs = nstmt.executeQuery();


2. Chaining Commands
Using the previous example, the team intends to support chaining commands such that setParameter() returns a reference to the statement object, allowing single-line commands such as the following.

ResultSet rs = nstmt.setParameter("f1", "val1").setParameter("f2", "val2").executeQuery();

3. Changing schemas on the fly
For databases that support the notion of database schemas, you will be able to change your current schema dynamically after the connection is established, using the following command:

Connection connection = ...
connection.setSchema("mySchema");

For those databases that do not support such commands, this would likely be a no-op.

4. Timezone support
Better support for time zones is due in more distant future versions, after Java 1.7 adds a number of new features related to time zones.
