Opinions on how to handle a development problem

I’m wondering if I’m going in the right direction, so I would love to have input from more experienced plugin developers.

The need: we have a self-service project in Jira where people can create projects based on templates, specify which users will be admins, activate Structure, etc. It’s all based on workflows, screens and post functions running Groovy code (ScriptRunner).

Now, we want to extend that to display information coming from CSV files that contain a list of applications and components that people can select, and the new Jira project will be named based on the selected information.

My plan was to create a plugin that uses the scheduler to import the data into Jira in tables (entities) unique to the plugin, create a REST module to expose the data, and use ScriptRunner’s Select List Conversions to display details coming from the REST module.

I did work on this, but I hit two main issues: serialization of the Active Objects entities is not working because the entities are interfaces, and the scheduler examples don’t work with the current SDK (that part could probably be worked out).
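
For the scheduler part, what I’m trying against the current atlassian-scheduler API looks roughly like this. It’s only a sketch: the keys and the interval are made up, and the actual import logic is elided. RUN_ONCE_PER_CLUSTER is the part I’m counting on for cluster safety:

    import java.util.Date;

    import com.atlassian.scheduler.JobRunner;
    import com.atlassian.scheduler.JobRunnerRequest;
    import com.atlassian.scheduler.JobRunnerResponse;
    import com.atlassian.scheduler.SchedulerService;
    import com.atlassian.scheduler.SchedulerServiceException;
    import com.atlassian.scheduler.config.JobConfig;
    import com.atlassian.scheduler.config.JobId;
    import com.atlassian.scheduler.config.JobRunnerKey;
    import com.atlassian.scheduler.config.RunMode;
    import com.atlassian.scheduler.config.Schedule;

    public class CsvImportScheduler {

        private static final JobRunnerKey RUNNER_KEY = JobRunnerKey.of("com.example.csv-import.runner");
        private static final JobId JOB_ID = JobId.of("com.example.csv-import.job");

        private final SchedulerService schedulerService;

        public CsvImportScheduler(SchedulerService schedulerService) {
            this.schedulerService = schedulerService;
        }

        public void register() throws SchedulerServiceException {
            // The job runner that would re-read the CSV files and refresh the AO tables
            schedulerService.registerJobRunner(RUNNER_KEY, new JobRunner() {
                @Override
                public JobRunnerResponse runJob(JobRunnerRequest request) {
                    // ... CSV import into Active Objects goes here ...
                    return JobRunnerResponse.success();
                }
            });

            // RUN_ONCE_PER_CLUSTER so only one node runs the import on Data Center
            schedulerService.scheduleJob(JOB_ID, JobConfig.forJobRunnerKey(RUNNER_KEY)
                    .withRunMode(RunMode.RUN_ONCE_PER_CLUSTER)
                    .withSchedule(Schedule.forInterval(60L * 60L * 1000L, new Date()))); // hourly
        }
    }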

Am I going in the right direction? Has anyone had any luck with REST services and AO entities? I didn’t find anything useful in my searches.
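
In case it’s clearer with code, this is the shape I have in mind for the REST part: keep the entity as an interface, but copy it into a plain JAXB bean before returning it from the resource. All the names here (ApplicationEntry, ApplicationBean, etc.) are placeholders I made up, and in a real plugin each type would live in its own file with the entity declared public and listed in the <ao> module:

    import java.util.ArrayList;
    import java.util.List;

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import javax.ws.rs.core.Response;
    import javax.xml.bind.annotation.XmlElement;
    import javax.xml.bind.annotation.XmlRootElement;

    import com.atlassian.activeobjects.external.ActiveObjects;
    import net.java.ao.Entity;

    // AO entity: stays an interface (this is what breaks direct JSON serialization)
    interface ApplicationEntry extends Entity {
        String getAppId();
        void setAppId(String appId);
        String getName();
        void setName(String name);
    }

    // Plain bean that JAXB/Jackson can serialize, with no AO proxy behind it
    @XmlRootElement
    class ApplicationBean {
        @XmlElement public String appId;
        @XmlElement public String name;
    }

    @Path("/applications")
    public class ApplicationResource {

        private final ActiveObjects ao;

        public ApplicationResource(ActiveObjects ao) {
            this.ao = ao;
        }

        @GET
        @Produces(MediaType.APPLICATION_JSON)
        public Response getAll() {
            List<ApplicationBean> beans = new ArrayList<>();
            // copy each AO row into a bean before handing it to the REST layer
            for (ApplicationEntry entry : ao.find(ApplicationEntry.class)) {
                ApplicationBean bean = new ApplicationBean();
                bean.appId = entry.getAppId();
                bean.name = entry.getName();
                beans.add(bean);
            }
            return Response.ok(beans).build();
        }
    }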

FYI, we are running Jira Data Center; that’s partly why I want to build a plugin, so that it’s cluster-safe.

Hello Pascal…

Have you considered using Resources? There is an example there of reading your CSV files using JDBC. If you have multiple CSV files you can put them in the same directory and treat them as a database. I think this may be simpler than attempting to suck it all into AO and keep it in sync.
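
To give an idea of what that looks like, assuming a CSV JDBC driver such as CsvJDBC, every .csv file in the directory behaves like a table named after the file, so a plain query works against it. The path and the file/column names below are just examples:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;
    import java.sql.Statement;
    import java.util.Properties;

    public class CsvQueryExample {
        public static void main(String[] args) throws Exception {
            Class.forName("org.relique.jdbc.csv.CsvDriver");   // CsvJDBC driver class

            Properties props = new Properties();
            props.put("separator", ",");                        // adjust if the files use ';'

            // The JDBC URL points at the directory containing the CSV files;
            // 'applications' below assumes a file called applications.csv in it.
            try (Connection conn = DriverManager.getConnection(
                         "jdbc:relique:csv:/path/to/csv/files", props);
                 Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT id, name FROM applications ORDER BY name")) {
                while (rs.next()) {
                    System.out.println(rs.getString("id") + " - " + rs.getString("name"));
                }
            }
        }
    }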

We also have a picker field - Database Picker - that can pick from a SQL query configured to use one of the Resources, so you could have a field that represents a query on one or more of the CSV files.

This is rather new… in fact it’s so new that it was in a release we unfortunately had to pull from the Marketplace yesterday due to an unrelated issue, but we will be re-releasing it today.

If you like, get in touch and we can talk it through on a video conference…

cheers, jamie


Hi Jamie,

I hadn’t seen that. I will try it, but the tricky thing is that the CSV files are in a bad format: they embed relationships in the same file :-/ For example, a line contains the ID and name of an application and may include up to four components (ID and name) for that application on the same line :-/ And I have no control over the files I receive. But I might be able to run some Java or Python code to produce decent CSV first…
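
If I end up normalising the files myself, I imagine something along these lines. It assumes a layout of appId,appName followed by up to four componentId,componentName pairs, a header row, and no quoted commas, which may well not match the real files:

    import java.io.BufferedWriter;
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.util.List;

    public class NormalizeCsv {
        public static void main(String[] args) throws IOException {
            Path in = Paths.get("applications_raw.csv");            // example file names
            Path out = Paths.get("application_components.csv");

            List<String> lines = Files.readAllLines(in, StandardCharsets.UTF_8);
            try (BufferedWriter writer = Files.newBufferedWriter(out, StandardCharsets.UTF_8)) {
                writer.write("app_id,app_name,component_id,component_name");
                writer.newLine();
                for (int row = 1; row < lines.size(); row++) {      // skip the header row
                    String[] cols = lines.get(row).split(",", -1);  // naive split, no quoted commas
                    // columns 0-1: application; columns 2+: up to four id/name pairs
                    for (int i = 2; i + 1 < cols.length; i += 2) {
                        if (!cols[i].trim().isEmpty()) {
                            writer.write(String.join(",",
                                    cols[0], cols[1], cols[i], cols[i + 1]));
                            writer.newLine();
                        }
                    }
                }
            }
        }
    }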

The other tricky thing is that we run Data Center, so I guess the JDBC host will be localhost, with a copy of the CSV files on each node.

Hi Pascal,

Well, you have to mung it at some point. You could either mung it when you receive the files and put it into new CSV files or some external database, or query it as-is and then manipulate the results…

“The other tricky thing is that we run Data Center, so I guess the JDBC host will be localhost”

The CSV driver I used doesn’t have any server component… you would put the CSV files in a place that is mounted the same on each node (e.g. a sub-directory under the shared home directory), and then it should work fine.

cheers, jamie

True. I’m playing with CsvJDBC right now, and the URL is the directory where the CSV files are. Putting them on NFS will work.
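
A quick sanity check I’m planning is to list the tables the driver exposes, to confirm it sees the files on the shared mount. The shared-home path here is just an example:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.ResultSet;

    public class ListCsvTables {
        public static void main(String[] args) throws Exception {
            Class.forName("org.relique.jdbc.csv.CsvDriver");

            // Directory under the Jira shared home, mounted the same on every node
            String url = "jdbc:relique:csv:/var/atlassian/application-data/jira/shared/csv-imports";

            try (Connection conn = DriverManager.getConnection(url);
                 ResultSet tables = conn.getMetaData().getTables(null, null, "%", null)) {
                while (tables.next()) {
                    // one "table" per CSV file in the directory
                    System.out.println(tables.getString("TABLE_NAME"));
                }
            }
        }
    }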

Happy to video conference if you get stuck…