Atlassian application releases come out fairly quickly, and they support all sorts of OS and DB combinations.
As a cross-product of JIRA/Confluence/Bitbucket version * operating system * database * etc., the number of possible combinations quickly grows to several hundred.
And we want to be sure that we ship high-quality add-ons for all of those combinations!
Of course, we have automated tests (unit and integration), but we also run ad-hoc or planned manual smoke testing.
In our experience, testing against every point release is not that important, and luckily the OS or DB is rarely the source of bugs, but even so, you may need to maintain tens of test instances of JIRA and Confluence. Plus, you want those instances pre-populated with test data and shared among your team members.
How do you guys handle that?
Do you have a server running all the combinations for your staff to use? VM images? Docker? Vagrant? Puppet? Chef?
I’m curious… why would your add-on be affected by OS / DB combinations? The Atlassian host product should take care of that for you. If your add-on uses the Atlassian APIs (for instance for caching) or Active Objects for database operations, you should not need to care about the underlying OS / DB.
Which operations are you performing that might break due to quirks in the OS / JVM / DB?
@remie Active Objects is not DB-agnostic! And any add-on code that writes to the file system (for example) will need testing across OSes.
We tend to write integration tests and run the same set of tests against different versions of the applications/DB etc. using Maven profiles or similar.
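Concretely, each test just points at whatever instance the active profile started. A minimal sketch of what that can look like; the `baseurl` system property and the profile name in the comment are assumptions, not our exact setup:

```java
import static org.junit.Assert.assertEquals;

import java.net.HttpURLConnection;
import java.net.URL;

import org.junit.Test;

public class SmokeIT {

    // The Maven profile (or the AMPS plugin) injects the target instance,
    // e.g. mvn verify -Pjira-latest -Dbaseurl=http://localhost:2990/jira
    private static final String BASE_URL =
            System.getProperty("baseurl", "http://localhost:2990/jira");

    @Test
    public void statusEndpointRespondsWithOk() throws Exception {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(BASE_URL + "/status").openConnection();
        assertEquals(200, conn.getResponseCode());
    }
}
```

The same test class then runs unchanged against every profile in the build matrix.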
Arquillian (and an extension we wrote for it) is worth looking at for straightforward setup/teardown of application content.
> Which operations are you performing that might break due to quirks in the OS / JVM / DB?
My first memory is that we needed to write a SQL query with an IN (…) clause that potentially enumerated lots of items. MS SQL has an upper limit on the number of items, while other DB vendors don’t have that limitation.
So, AO clearly hides most of the DB-dependent parts, but definitely not everything (and that isn’t a realistic goal, either).
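The usual workaround for such limits, by the way, is to chunk the list. A minimal sketch, assuming Guava is on the classpath (the table name and batch size are made up):

```java
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.List;

import com.google.common.base.Joiner;
import com.google.common.collect.Lists;

public class ChunkedDelete {

    // MS SQL starts rejecting queries once the item count gets into the
    // thousands, so stay well below that; 500 is an arbitrary choice.
    private static final int BATCH_SIZE = 500;

    /** Runs the IN (...) query in batches instead of one huge clause. */
    static void deleteByIds(Connection conn, List<Long> ids) throws SQLException {
        for (List<Long> batch : Lists.partition(ids, BATCH_SIZE)) {
            // Safe to inline: the values are numeric IDs, not user input.
            String sql = "DELETE FROM MY_TABLE WHERE ID IN ("
                    + Joiner.on(',').join(batch) + ")";
            try (Statement stmt = conn.createStatement()) {
                stmt.executeUpdate(sql);
            }
        }
    }
}
```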
And this is a “fine” example of a bug in Confluence core that only happens with certain JVM versions!
TimSort was introduced in Java 7; it is a stable sort designed for relatively expensive comparison operations. It is only used for sorting objects, not primitives, and its problematic code path is only reached for collections of size >= 32.
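For anyone who hasn’t hit this: the way it usually bites is a Comparator that subtly violates its contract. A sketch of the mechanism (the data is hypothetical, and whether it actually throws depends on the input order):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;
import java.util.Random;

public class BrokenComparatorDemo {

    public static void main(String[] args) {
        // Subtly broken: never returns 0, so for equal elements both
        // compare(a, b) and compare(b, a) return 1, which violates the
        // Comparator contract.
        Comparator<Double> broken = new Comparator<Double>() {
            @Override
            public int compare(Double a, Double b) {
                return a < b ? -1 : 1;
            }
        };

        // More than 32 elements, with duplicates, so TimSort's merge path
        // (the part that notices the violation) is actually exercised.
        List<Double> values = new ArrayList<Double>();
        for (int i = 0; i < 64; i++) {
            values.add((double) (i % 4));
        }
        Collections.shuffle(values, new Random(42));

        // Java 6's mergesort silently tolerated this; on Java 7+ TimSort
        // can throw "Comparison method violates its general contract!"
        Collections.sort(values, broken);
        System.out.println(values);
    }
}
```

So the exact same add-on code can pass on one JVM and blow up on another.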
And for OS-dependent functionality, our automation actions (here and here) that write to the filesystem are a great example of stuff to test at least on U*x and Windows.
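Even something as innocent as this (a hypothetical export action, not our real code) behaves differently per OS:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class ExportAction {

    /** Hypothetical "write export to disk" action. */
    static Path exportFile(Path baseDir, String relativeName) throws IOException {
        // Hard-coding '/' or '\\' breaks on the other OS; let NIO resolve it.
        Path target = baseDir.resolve(relativeName);

        // "Report.txt" and "report.txt" are one file on Windows but two on
        // most Linux filesystems, and Windows rejects names like "CON" or
        // names with trailing dots that Unix accepts happily. None of this
        // shows up until the code runs on the other platform.
        Files.createDirectories(target.getParent());
        Files.deleteIfExists(target);
        return Files.createFile(target);
    }

    public static void main(String[] args) throws IOException {
        Path tmp = Paths.get(System.getProperty("java.io.tmpdir"));
        System.out.println("Wrote " + exportFile(tmp, "exports/report.txt"));
    }
}
```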
We’ve run into a couple of similar issues. One of them was deleting more than 2000 entities in one fell swoop (MS SQL didn’t like that). Another was using .count() (Oracle didn’t like that).
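For the delete case, batching is the usual fix. A sketch, assuming the standard ActiveObjects service and Guava (the batch size is arbitrary, not a documented limit):

```java
import java.util.Arrays;
import java.util.List;

import com.atlassian.activeobjects.external.ActiveObjects;
import com.google.common.collect.Lists;

import net.java.ao.RawEntity;

public class BatchedDelete {

    // MS SQL choked somewhere above 2000 entities, so stay well below that.
    private static final int BATCH_SIZE = 500;

    /** Deletes the entities in small batches instead of one huge statement. */
    static void deleteAll(ActiveObjects ao, RawEntity<?>[] entities) {
        for (List<RawEntity<?>> batch :
                Lists.partition(Arrays.asList(entities), BATCH_SIZE)) {
            ao.delete(batch.toArray(new RawEntity<?>[0]));
        }
    }
}
```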
That said, I think there are enough issues documented at ActiveObjects - Issues - Ecosystem Jira (do issues count as documentation?) to warrant testing on multiple combinations…