Creating Custom Link Types for Rational Team Concert

It can be useful to create custom link types for RTC. This is interesting if a special business logic/behavior needs to be implemented and the available link types don’t fit. How can this be done?

It is surprisingly easy to do as Eduardo describes in Creating a New Link Type in his blog Extending Rational Team Concert – RTC Extending. I had to do it recently and thought it would be useful to describe the experience, adding a bit more detail to the content of the blog above.

License and Getting Started

The post contains published code, so our lawyers reminded me to state that the code in this post is derived from example code as well as the RTC SDK. The usage of that example source code, and therefore of this code, is governed by this license. I found a section relevant to source code at the end of the license. Please also remember, as stated in the disclaimer, that this code comes with the usual lack of promise or guarantee. Enjoy!

As always, please note: if you are just getting started with extending Rational Team Concert, or creating API-based automation, start reading this and the linked posts to get some guidance on how to set up your environment. Then I would suggest reading the article Extending Rational Team Concert 3.x and following the Rational Team Concert 4.0 Extensions Workshop at least through the setup and Lab 1. This provides you with a development environment and with a lot of example code and information that is essential to get started. You should be able to use the following code in this environment and get your own extension working.

You can download the code for this extension here.

Creating a Custom Link Type

All that needs to be done to create a custom link type is to create a plug-in.

Oh, no! Another extension! Can’t I just define a new link type in the process configuration? I can literally hear it 8). Unfortunately that is not supported in RTC today. Vote for this work item if you want this kind of capability.

On the other hand, if there is really a need for business logic and operational behavior for the link type, an extension would be needed anyway, so it would not matter.

To make the new link type available in the Eclipse UI as well as in the Web UI, the plug-in needs to be deployed in the client as well as in the server. The RTC SDK calls this common API and it makes sense to follow this example.

As explained in other posts already, it is crucial to come up with a good naming schema for the various projects and IDs needed. Have a unique part in it (in my case js as infix) to be able to find it on disk and in the UI.

When creating the link type, an ID for the link is needed, as well as IDs for the endpoints of the link. It is crucial to keep these values once they are chosen. If you change them while developing, you introduce dangling references into your test model. I managed to damage my test database and got unreliable results when doing this. If this happens, it is possible to delete the folder server that sits in the same folder as the workspace and create a new test repository by running the [RTCExt] Create RTC Test Database JUnit test as described in the Extensions Workshop.

This is the extensions editor for the plug-in I created:

Link Type Common Plugin


Unlike in the aforementioned blog post, the plugin defines a component as well as the link type. The reason is that the component allows you to check whether the plugin was successfully deployed on the server. At some time in the future a component might become required as well, so I always define a component for my extensions.

You can use the context menu of the editor to add the extension elements for the source, target, endpoints and the itemReferenceTypes.

The endpoints allow specifying the multiplicity. This actually has an impact on how the links behave. If an endpoint specifies 0..1 as multiplicity, only one item can be referenced with the endpoint. If an item is selected already and another item is chosen in the add link dialog, the old item is replaced by the new one.

The plugin.xml looks as follows

Final plugin.xml


The implications of the values in the plugin.xml can be found in the extension point description, which can be opened from the extension point itself. Review it to understand the options. From that description:

  • id – String id for the link type.
  • editors – Semicolon separated string of ids of those components permitted to create and delete links of this type. For example, the string “;” specifies the two components “” and “”. If the attribute is not present, this indicates that there is no restriction regarding which components (or clients) are permitted to create and delete links of this type.
  • constrained – If true, the defining component requests that Links of this link type not be created by other components (without permission, or without going through an API provided by the defining component).
  • internal – If true, this link type is an internal detail of the implementation of the defining component, and is not intended as a generic link type which users can freely create, delete and view.
  • componentId – The id of the component defining this link type. Component ids are declared using the extension point. If set and if constrained=true, then only services that are part of that component may save and delete links of this type.

The itemReferenceType entry is still a bit mysterious. It is easy to review existing examples: on the extension, select “Show References” in the context menu and browse through the examples. Especially the references in are of interest. It is possible to define different kinds of endpoints, dependent on what items to link.

In this case work items are supposed to be linked to work items. Both ends select this itemReferenceType. Please be aware that this kind of link will only work within one CCM repository. It does not allow linking across repository borders. There are other CLM link types that would allow this to happen.
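Putting the pieces together, a declaration of such a link type in plugin.xml might look roughly like the following sketch. All ids, display names and icon paths are made-up examples, and the exact element and attribute details should be verified against the extension point description and the existing examples in the SDK:

```xml
<extension point="com.ibm.team.repository.common.linkTypes">
   <!-- A custom work item to work item link type; all ids below are example values -->
   <linkType
         componentId="com.example.js.common.component"
         constrained="false"
         id="com.example.js.linktype">
      <source>
         <endpoint
               displayName="JS Relates To"
               icon="icons/source.gif"
               id="jsRelatesToSource"
               multiplicity="0..n"/>
         <!-- both ends reference work items; such links stay within one CCM repository -->
         <itemReferenceType id="com.ibm.team.workitem.WorkItem"/>
      </source>
      <target>
         <endpoint
               displayName="JS Related From"
               icon="icons/target.gif"
               id="jsRelatesToTarget"
               multiplicity="0..n"/>
         <itemReferenceType id="com.ibm.team.workitem.WorkItem"/>
      </target>
   </linkType>
</extension>
```

Using 0..1 instead of 0..n on an endpoint would give the replace-on-add behavior described above.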

I added some icons for the link types that show up in the editors. For the final deployment, I made sure that the folders icons and META-INF as well as the plugin.xml file are selected in the binary build.

I tested the new link type in a Jetty based test server OSGi launch as well as in an Eclipse Application launch.

Prepare to Deploy

As already mentioned, the common plugin needs to be deployed in the server as well as in the Eclipse client. Please follow Lab 6 in the Rational Team Concert 4.0 Extensions Workshop if you have never done this and want to understand how deployment works in general. The workshop explains in great detail which files you need to deploy on the server. The procedure below follows the deployment procedure on the server side. Unlike in the workshop, the client extension gets deployed using an update site.

To make this easier I created a Feature project as well as an Update Site project. To make deploying it in the server easier I also created a normal Eclipse project that contains the deployment folder structure as well as the provision profile INI file. The projects look as follows:

Final Project Structure


Once the Update Site project is built, it is easy to copy the site.xml file and the folders features and plugins into the folder js_custom_linktype. The provision profile js_custom_linktype.ini looks as follows:
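The file content is not shown here; based on the standard provision profile format used in the Extensions Workshop, a sketch might look like this (the feature id is an assumed example following the naming schema above):

```ini
url=file:ccm/sites/js_custom_linktype
featureid=com.example.js.linktype.feature
```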


It references the sub folder in the site folder that will contain the feature and plugins folders.

TIP: Always delete all content of the Update Site project except the site.xml before building the update site again. I have seen cases where subsequent builds and deploys did not successfully pick up all changes.

Deploy in the Eclipse client

To deploy in the Eclipse client, use the Help>Install New Software menu, add the Update Site and install the extension. In a company context you can package the Update Site or provide it on a web server.

Deploy in the Server

Copy the generated folders and the site.xml file into the folder underneath the sites folder which is referenced by the provision profile js_custom_linktype.ini file in the serverdeploy project folder. This would be easy to automate with a build script, by the way.

Now select the folders provision_profiles and sites with all their content and copy them into the server’s folder /server/conf/ccm. Allow overwriting the folders and request a server reset. Then restart the server.

Add the New Link Type to the Quick Information Presentation

As Sam suggests in his comment below, you also want to add the new link type so that it shows up in the Quick Information presentation. This is done in Process Configuration>Project Configuration>Configuration Data>Work Items>Quick Information Presentations. It allows the work item summary page’s Quick Information section to show a count of, and a quick link to, the new links in a work item. This would look like below:

Quick Information Presentation


Admire your work

You can now admire the result of your work in the work item editors. If it does not show up, check your deployment setup. As usual, the first attempt was, of course, on a test server, to not affect the production system until you have perfected your deployment process.

Too many Link Types! What to do?

If you have created all the new link types your business demanded, users might start to complain that there are too many link types and many are not needed anyway. You knew that this would be coming and decided long ago to review the article Customization of work item editor presentation to show or hide link types in Rational Team Concert to understand how to fix this issue once needed.

That was easy, wasn’t it? Now you can create behavior based on the link type.

I have tested it against a test server on Tomcat and Derby. There is no real code this time. However, as always, I hope the post is an inspiration and helps someone out there to save some time. If you are just starting to explore extending RTC, please have a look at the hints in the other posts in this blog on how to get started.

Posted in CLM, Jazz, RTC, RTC Automation, RTC Extensibility | 5 Comments

The Process Sharing API

RTC process sharing allows you to manage a process in one project area and to use that process in multiple other project areas. This post explains the internal API that allows using this feature in automation to make operations easier.

The RTC process sharing feature allows you to minimize process customization while having a common process in many project areas. This is interesting for all kinds of users, especially with a growing number of project areas that need a common process.

I worked with a team that needed to create many project areas, to contain the information for each project in one project area while limiting access to other project areas. So the idea was to automate the process of creating project areas, which required being able to use an API. I found hints to this API provided by Sam and consolidated the code into an example that I could share with others.

You can find more about how this feature works in the related articles.

The code in this post is client API.

This blog post uses internal API which can be changed at any time. If the internal API changes, the code published here will no longer work.

The post contains published code, so our lawyers reminded me to state that the code in this post is derived from example code as well as the RTC SDK. The usage of that example source code, and therefore of this code, is governed by this license. I found a section relevant to source code at the end of the license. Please also remember, as stated in the disclaimer, that this code comes with the usual lack of promise or guarantee. Enjoy!

As always, please note: if you are just getting started with extending Rational Team Concert, or creating API-based automation, start reading this and the linked posts to get some guidance on how to set up your environment. Then I would suggest reading the article Extending Rational Team Concert 3.x and following the Rational Team Concert 4.0 Extensions Workshop at least through the setup and Lab 1. This provides you with a development environment and with a lot of example code and information that is essential to get started. You should be able to use the following code in this environment and get your own extension working.

The Code

To provide at least some wrapper around the internal code calls, I created a utility class ProcessProviderUtil. This utility wraps the API to set a project area to provide its process for use by other project areas, as well as the API to set a project area to use another project area’s process. The methods just delegate to the internal API calls at this time.

You can download the code from here.

This is how the code looks:

/*
 * Licensed Materials - Property of IBM
 * (c) Copyright IBM Corporation 2014. All Rights Reserved.
 * Note to U.S. Government Users Restricted Rights:  Use, duplication or
 * disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
 */

import com.ibm.team.process.common.IProjectArea;
import com.ibm.team.process.common.IProjectAreaHandle;
import com.ibm.team.process.internal.common.ProjectArea;

/**
 * Skeleton for a plain-Java client, see
 */
public class ProcessProviderUtil {

	/**
	 * Set a project area to provide its process for sharing
	 *
	 * @param projectArea
	 * @param isProcessProvider
	 */
	public static void setIsProcessProvider(IProjectArea projectArea, boolean isProcessProvider) {
		((ProjectArea) projectArea).setIsProcessProvider(isProcessProvider);
	}

	/**
	 * Test if the project area provides its process
	 *
	 * @param projectArea
	 *            Determines whether other project areas can get their process
	 *            from this project area.
	 * @return true if this project area allows other project areas
	 *         to use its process, false if not.
	 */
	public static boolean isProcessProvider(IProjectArea projectArea) {
		return ((ProjectArea) projectArea).isProcessProvider();
	}

	/**
	 * Share the process of one project area with another project area
	 *
	 * @param usingProjectArea
	 * @param providingProcessArea
	 */
	public static void setProcessProvider(IProjectArea usingProjectArea,
			IProjectArea providingProcessArea) {
		((ProjectArea) usingProjectArea).setProcessProvider(providingProcessArea);
	}

	/**
	 * Get the project area from which we use the process
	 *
	 * @param projectArea
	 * @return {@link IProjectAreaHandle} or null if this project
	 *         area does not rely on another project area for its process.
	 */
	public static IProjectAreaHandle getProcessProvider(IProjectArea projectArea) {
		return ((ProjectArea) projectArea).getProcessProvider();
	}
}

Example Usage

The methods are so simple that there is no reason to explain them in detail. The code below shows how they can be used. The example needs two project areas to be provided: one, e.g., based on the Scrum process template, and a second based on the ‘Unconfigured Process’ template. The user that runs this automation needs to be a member of the JazzAdmins repository group.

The code to set the first project area to provide its process to be used by another projects looks like below.

IProcessItemService processItemService = (IProcessItemService) teamRepository
		.getClientLibrary(IProcessItemService.class);

System.out.println("\nProcessing project area: " + providingProjectAreaName);
URI uri = URI.create(providingProjectAreaName.replaceAll(" ", "%20"));
IProjectArea providingProjectArea = (IProjectArea) processItemService.findProcessArea(uri, null, monitor);
if (providingProjectArea == null) {
	System.out.println("....Error: Project area not found: " + providingProjectAreaName);
	return false;
}
providingProjectArea = (IProjectArea) providingProjectArea.getWorkingCopy();

System.out.println("...Project area provides its process for usage: "
		+ ProcessProviderUtil.isProcessProvider(providingProjectArea));
// Set to provide the process for sharing
ProcessProviderUtil.setIsProcessProvider(providingProjectArea, true);
System.out.println("...Set project area process to provided for usage");
System.out.println("...Project area provides its process to other project areas: "
		+ ProcessProviderUtil.isProcessProvider(providingProjectArea));

// Save the project area that provides the process
processItemService.save(providingProjectArea, monitor);

The code basically finds the project area by its name. It then gets a working copy to be able to modify it. It then prints whether the project area is already providing its process to others. Regardless of this information, it sets the project area to provide its process for usage and again prints whether the process is provided. Finally, the changes are saved.
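The lookup by name deserves one note: findProcessArea() expects a URI, and project area names may contain spaces, which are not legal in a URI. The conversion used above can be shown in isolation with plain Java (the class and method names here are illustrative only):

```java
import java.net.URI;

public class ProjectAreaUriExample {

	// Project area names may contain spaces, which are not legal in a URI,
	// so they are percent-encoded before the lookup.
	static URI toProjectAreaUri(String projectAreaName) {
		return URI.create(projectAreaName.replaceAll(" ", "%20"));
	}

	public static void main(String[] args) {
		System.out.println(toProjectAreaUri("My Project Area"));
		// prints My%20Project%20Area
	}
}
```

Note that only the space character is handled here, matching the code above; names with other URI-unsafe characters would need full percent-encoding.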

The code to use the process of a project area that provides it looks as below.

System.out.println("\nProcessing project area: " + usingProjectAreaName);
uri = URI.create(usingProjectAreaName.replaceAll(" ", "%20"));
IProjectArea usingProjectArea = (IProjectArea) processItemService.findProcessArea(uri, null, monitor);
if (usingProjectArea == null) {
	System.out.println("....Error: Project area not found: " + usingProjectAreaName);
	return false;
}
usingProjectArea = (IProjectArea) usingProjectArea.getWorkingCopy();

System.out.println("...Project area provides its process to other project areas: "
		+ ProcessProviderUtil.isProcessProvider(usingProjectArea));
boolean used = false;
IProjectAreaHandle handle = ProcessProviderUtil.getProcessProvider(usingProjectArea);
if (null != handle) {
	used = true;
}
System.out.println("...Project area uses a process provided by another project area: " + used);
// Set to share the process of the providing project area
ProcessProviderUtil.setProcessProvider(usingProjectArea, providingProjectArea);
System.out.println("...set to use a process provided by another project area");
used = false;
handle = ProcessProviderUtil.getProcessProvider(usingProjectArea);
if (null != handle) {
	used = true;
}
System.out.println("...Project area uses a process provided by another project area: " + used);
// Save the project area that uses the provided process
processItemService.save(usingProjectArea, monitor);

As in the part above, the code first finds the project area. For informational reasons it then detects whether the project area provides its own process. Then it checks whether it uses the process of another project area. It then sets the project area to use the process of the first project area, prints a check of the fact and saves the process change.

Please note, there is no real error handling here; e.g. whether the project area provides its own process, and whether it can actually share another process, is not tested.

In the inverse case, if providing the process is supposed to be disabled, the save would fail if project areas still use the provided process.

The code is experimental. I have tested it against a test server on Tomcat and Derby. It is by no means production ready and can be enhanced for various scenarios. However, as always, I hope the code is an inspiration and helps someone out there to save some time. If you are just starting to explore extending RTC, please have a look at the hints in the other posts in this blog on how to get started.

Posted in Jazz, RTC, RTC Automation, RTC Extensibility | Leave a comment

Give Me A REST

Recently I worked with OSLC/REST and discovered a nice REST client that I find really useful. I thought I would share it with you here.

If you have found tools that help you with this and want to share, please comment on the post. Please use English, describe the tool, what it does and why it is useful. If possible, provide the URL in the comment as well. This will help me distinguish the valuable information from the occasional spam that slips through.

* Update * See Postman as another alternative. Links can be found below.

I have actually not done that much work with OSLC/REST in the past. I worked with the Java APIs most of the time. I have, however, done the Open Services for Lifecycle Collaboration Workshop in the past and also helped deliver it in classes.

The workshop suggests using Firefox and one of the available REST client add-ons. I did exactly that and it was a pain in the, err.., neck. Why? The whole URL was maintained in one line that, every time I changed the focus, snapped back to the position at the front. I found myself frantically scrolling back and forth trying to find the last edit location. In general, the display did not give me a good way to understand what the request did and whether there were issues with it. Not pretty. I tried out alternatives, but I couldn’t find anything satisfying.

I decided to try a different browser – despite the fact that I really like Firefox.

I looked into Chrome, which I recently started to use for JavaScript based Attribute Customization as described in this Wiki page and the Process Enactment Workshop (Lab 5). It suits me way better for JavaScript debugging, as I find it a lot easier to find the script, compared to Firefox Firebug.

The Nugget – Advanced REST Client

I discovered the Advanced REST client. And it’s a nugget. A really big one actually, I think. You can simply download and install it in Chrome and have it in your Apps tile in the bookmarks bar.

AppBookMarks

This makes it easy to reach and does not take away a lot of real estate.

Now what are the things that distinguish this from other REST tools I have seen so far?

URL Management

The URL management is really nice as it basically allows you to work in two modes.

There is the traditional mode where you have the whole URL in the URL field. This is most efficient to use if one has a complete URL available and wants to copy/paste it.

Traditional URL Management

The first really great difference, however, is the small triangle in front of the URL field. It allows you to automatically decompose the URL into its interesting parts with respect to REST and also to easily manage the query parameters.

Note, I found some URLs where the decomposition does not work as expected. You can still use the feature, but folding/unfolding needs some manual work during unfolding. I hope this gets corrected soon.

Decomposed URL

This is very useful, as it allows you to focus on the parts that are really interesting and removes the need to parse the URL visually. It is really easy to edit, add and remove query parameters to create the needed content.

There are also several ways to help with encoding and decoding the request data.

The same is available for the request headers.

Request Header

It is possible to use the Raw mode to copy and paste complete headers and it is possible to switch to a Form view like above that works similar to the query parameter section.

 Manage Projects And Save Requests

The second really useful feature is the ability to manage requests and to save them in projects. This allows you to store requests that work, or are under construction, for later use.

Manage And Save Requests

While I was looking into OSLC, I had issues with my requests several times and found myself maintaining my URLs in a text file while exploring. Being able to save the requests makes it a lot easier and it is no longer necessary to switch between applications.

If something works, it can be kept and changed to work towards the final goal.

You can have multiple projects to manage the requests, for example one for each project you work on.

Other REST Tools

I found another REST tool that has similar capabilities.

Postman is available as an application (online usage) and as a Packaged App. It provides similar capabilities, except that I miss the encoding/decoding option. The advantage of Postman is that it keeps a request history. This is easier to use compared to saving the change every time.


The Advanced REST client together with Chrome made my life a lot easier and I can only recommend using it. If you do, give it a big thumbs up and donate to help support it.

As always I hope this is useful to the Rational Team Concert and CLM users out there.

Posted in CLM, Jazz, REST/OSLC, RTC | 4 Comments

Using RTC to Work with DevOps Services and With Bluemix

I recently had a look into Bluemix and how to use it with Eclipse to develop cloud applications. That blog post also mentions that there is an integration with DevOps Services that enables using work items for planning. It also allows using Git or Jazz SCM to manage the source code.

Recently I had a look into how that works and I would like to share here what I learned. This post assumes you have performed the first steps to setup your environment following the Getting started With Bluemix post already.

Please note: DevOps Services as well as Bluemix are evolving quickly, adapting to new needs as they arise, and what is described here might not be the only possible solution, or might be outdated if you look at it later. It might be a good idea to check with the current documentation of DevOps Services.

Creating a new DevOps Services Project

The first step to get started with DevOps Services, is to create a new project to manage work items and the source code.

After signing in to DevOps Services using the IBM ID created for Bluemix, it is possible to create a project. The screen shot below shows the information needed to do this. Basically it needs a name, a choice of how the source code should be managed, and a choice of project template. There is also a choice to integrate Bluemix with the project.

For the following part of this blog I am assuming that Jazz SCM was chosen.

New DevOps Services Project

For the Bluemix integration provide the organization – basically the Bluemix ID and the password.

Clicking the Create button creates an RTC project (RTC is working under the hood of DevOps Services).

On the overview page, you can select to edit the code, track and plan work with work items, and configure and manage build and deployment.

Configure Eclipse Project

There is also a “Configure Eclipse Client” choice available. Clicking it provides the information of an invitation that can be used in the RTC Eclipse client to set up the connection.

Configure Eclipse Client

Just copy the invitation data and paste it into the ‘Accept Invitation’ action, provide the password and the connection is created. We will look into the next steps done with Eclipse later.

Enabling the Bluemix Integration

Switch to the Build & Deploy section using the button. This page allows you to configure the build and deploy mechanism, request a new build and deploy, and view the deployment status.

Configure Deploy and Build

The Build and Deploy page has basically two settings. Click Simple to select the simple settings, which are adequate for now (this means I haven’t been able to use the advanced settings). Then click the configure button.

Configure Deployment

This basically defines the structure needed to deploy an app.

The integration expects the manifest.yml in the root folder in the Jazz SCM system. Since there currently is no example code, the first builds and deploys will probably fail.

Jazz SCM in the Project Web UI

Switching to the Edit Code page allows you to access the SCM information.

Please note: I had issues seeing the stream information, versioned files and other data with the latest version of the Firefox ESR browser (31.2.0).

Chrome worked for me, so I would suggest using that browser. It is unclear why, because other users apparently don’t have that problem. It might as well be one of those weird effects we have to put up with in a browser-based world.

The project creation dialog created a Stream, a repository workspace and a component already. The names are based on the name of the project.

You can browse the repository workspace and create files and folders in the Orion editor in the web UI and deliver your changes to the stream to be deployed.

My task was doing this with the Eclipse client, so there I went first.

Jazz SCM in the Eclipse Client

There is a description for this step that I could find here in the documentation. However, I had problems performing those steps. This might be different today; however, if you run into anything, it might have similar reasons.

At this point the assumption is that the invitation from DevOps Services has been used to create a repository connection and the client is logged into the project.

As a first step, a new repository workspace is needed. The easiest way to create one is to find the stream in the Team Artifacts view and create the repository workspace from that. This creates the repository workspace and sets the default and current flow to the stream. Tip: name the repository workspace distinctively, e.g. by putting ‘Eclipse’ into the workspace name, to not confuse this workspace with the one used by the Web UI in the Orion editor. The reason is that repository workspaces are not designed to support one instance being loaded and modified multiple times in different places (streams are designed for this).

The next step would be to load the repository workspace. Before attempting this, keep in mind that the Build & Deploy step assumes the manifest.yml file to be in the root folder. To achieve that using the Eclipse client and RTC Jazz SCM, there is only one option: load the component as root folder as shown below. Trying this, however, failed for me the first time around. The reason is that the default name of the component is derived from the project name and has a pipe ‘|’ symbol in it. This is not allowed as a name for a file or folder on the file system (on Windows at least). The best approach is to rename the component to some useful name, or at least replace the pipe symbol with a valid one, for example a dash.

After this has been done the component can be loaded.

Load Repo Workspace Component

In the second step of the load wizard select the component to be loaded and press finish.

Select Load Repo Workspace Component As Folder

While loading the data to disk, the RTC Eclipse client creates an artificial project file to mark this folder as an Eclipse project. Dependent on the scenarios one wants to perform later, one might or might not want this file to be checked into version control. If one would like to have Eclipse projects on a deeper level, the file could get in the way.

Since the file is always created if the data is loaded this way, I added the file to the Jazz ignore file.

It is now possible to add the files for the application. For example, the files from the Bluemix example in my last post can be used. The result would look like below:

Example File Structure

Why this structure? The project.json file is from configuring the project. It contains the property for the project name. I left it there.

The manifest.yml file is needed for the boilerplate/runtime our sample is using. It needs to be in the root folder. It is specific to how Bluemix builds and deploys. In the example above, I basically moved the original manifest.yml from the enclosed Eclipse project rsjazz01 into the root folder. Then I changed the path to point into my Eclipse project/folder rsjazz01. The content is changed to reflect the path to the Node.js project in the sub folder rsjazz01.

Manifest File

If the path set above were just the root folder, the package.json file would also be required in the root folder. As it is above, the file is needed in the sub folder.
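For illustration, a minimal manifest.yml matching this layout might look like the following sketch. The memory and host values are assumptions; the point here is only the path entry referencing the sub folder:

```yaml
applications:
- name: rsjazz01      # application name, as used in this example
  memory: 128M        # assumed memory quota
  path: rsjazz01      # the Node.js project lives in this sub folder
```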

The way it is now allows loading the repository workspace, finding the rsjazz01 folder as a Node.js project, and doing local debugging on it.

Working with the Code

Once the general structure is set up, it is possible to edit the code in the Web UI as well as in the Eclipse client. Once you deliver the code to the stream it gets automatically built and deployed. Delivery would usually require a work item connected to the code change for traceability.

Build And Deploy

The application is also accessible for testing and, of course, monitoring in Bluemix.

Pros and Cons

Looking at this post and the Bluemix post, there are obviously several valid approaches. The approach described here allows to have one application developed with one DevOps Services RTC project and have a continuous build and deploy for free.

The approach described in the Bluemix post would allow using Eclipse to work on several projects and managing the work and code in one or more DevOps Services RTC projects, as best fits. If I want to manage multiple applications in one RTC project, the automatic build and deployment would not be available. That, however, can easily be scripted into continuous integration build scripts as well.


I hope this and the Bluemix post provide you with some insight into how the DevOps Services and Bluemix work together and how you can use Eclipse and RTC to develop your applications.

Posted in BlueMix, RTC

Getting started with BlueMix

Recently everyone has their heads in the clouds (no pun intended) and I decided to have a peek to find out what it is all about.

This post is a summary of my first experiences with the IBM BlueMix Cloud Computing offering and how I got started with developing my first applications for it.

Note: this is not an RTC API post. However, RTC is involved.

There are several posts by my peers. Look into Dan Toczala’s and Tekehiko Amano-san’s blogs and see these posts about BlueMix:

There are more posts available.

BlueMix has been around for some time now here at IBM and I wanted to understand what it is providing. I have seen some high level presentations and demos already. Unfortunately I am not the kind of person that can learn to fly by reading books and looking at slides. I have to get things into my hands, and use and experiment with them to understand how they work. This usually also involves accidents, painful crashes and recovery from them. It is, however, the best way for me to learn and is most beneficial from my point of view.

You can read in the BlueMix documentation about what BlueMix provides.

Citing the web site, IBM® Bluemix is an open-standards, cloud-based platform for building, managing, and running apps of all types, such as web, mobile, big data, and smart devices. It can be used to develop and run server applications.

You can use your own development environment as well as IBM Dev Ops Services to develop the applications and manage the source code.

You should familiarize yourself with the architecture of BlueMix to understand the details. I will try to use the concepts described there in the post with only a short summary what they represent.

BlueMix provides several ways to start with developing applications.

  • Runtimes are a preconfigured set of resources used to run applications
  • Boilerplates are preconfigured containers used to run applications that usually also contain services
  • Services hosted by BlueMix provide capabilities such as session caches, persistence and other capabilities

Looking at the Runtimes, there is support for Node.js applications, Liberty for Java (a lean profile for WebSphere Application Server), Ruby on Rails and others.

Since I am not a Web developer but have a little JavaScript experience, I decided I wanted to go for Node.js. I got myself some material to learn about it first. After understanding the basic concepts by reading, I started to set up a local development environment.

Setup a local Node.js Development Environment

I followed the instructions to install the local Node.js environment. I downloaded Node.js and installed it as suggested in the previous link. I skipped CoffeeScript and I had a JDK 7 already on my machine.

Setup Eclipse for Node.js Development

I needed a local environment to be able to play with it and have a quick turn-around time. I downloaded Eclipse Juno, because I heard that would be the best option, and followed the instructions to install what is needed into Eclipse.

Having done this, I was good to go and I was able to create Node.js projects in Eclipse and run and debug them locally.

I ran some examples until I felt reasonably familiar with how the language works and decided to pursue my quest to BlueMix.

Setup Eclipse with RTC

Since I intended to use Eclipse with RTC embedded to be able to use RTC against IBM Dev Ops Services, and not using Git for SCM (sorry guys, but I can’t do that), I downloaded the RTC 5.0 p2 install package from the RTC All downloads Page. After the download succeeded, I installed RTC into Eclipse. I logged into IBM Dev Ops Services from RTC using my ID and my IBM ID password. Weird.

However, now my local environment works with RTC and I can use any RTC repository, including IBM Dev Ops Services, to manage my work and source code.

Logging into BlueMix

I logged into BlueMix. Please note, you can use or create an IBM ID. This basically provides you with an evaluation period of some months. This should be easy to follow and work like a charm.

If you are an IBM’er you can use your intranet ID. I would suggest doing that. I unfortunately used my IBM ID and had to follow the explanation text and links to the right of the user and password fields to link both up. There still seem to be problems with this, because I happened to end up on a staging version of BlueMix that did not work for me.

Note: After logging into BlueMix, make sure your URL is

Creating a Sample Project on BlueMix

To get started, I created a sample project on BlueMix. I went into my dashboard and clicked the tile Create An App. I picked the Runtime SDK for Node.js, provided a unique host name, for example rsjazz01, and accepted all the default settings.

Note: The host name needs to be unique which basically means, anyone following this will have to pick a different name and replace it in the images and text below.

The project gets opened, but won’t run, since there is nothing in there yet. In the top section to the left, underneath the application icon and name, is a link named View Guide. This link provides more information about how to get started. The following is what it shows if you choose a project name RsJazzTest03. The project name will be reflected in the downloaded sample files in some places.

BlueMix Sample

Install The CloudFoundry Commandline

BlueMix uses CloudFoundry to upload and deploy applications. Follow the link and description in the guide to download and install the CloudFoundry command line.

Also download the example code for the application. Store the compressed code somewhere and extract the file into a folder, for example c:\temp. Assuming the application name is rsjazz01, there would be a folder C:\temp\rsjazz01 that contains the source code of the project.

You can follow the instructions to push the example to BlueMix and run it. However, let’s get it into Eclipse so that we can look at it in a more convenient way.

Create an Eclipse Project

Create an Eclipse Node.js project. It can have any name as far as I can tell, but in the context of Eclipse choose the name of the application as the project name, e.g. rsjazz01.

From the folder C:\temp\rsjazz01 that contains the uncompressed example, select all files and folders. Copy the files and folders using CTRL+C and paste them into your Eclipse project. You can do this in the Eclipse project explorer or in the file system. If you did it in the file system, refresh the Eclipse project to see the files. The project content should look like this:

Example Project

The main application file is represented by the file app.js. The folders public and views and their contained files are used by the framework used to create web pages.

Run the Application on the Local Development Environment

Before trying to run the application in the cloud, let’s try to run it on the development environment. In order to do so, let’s examine the application first. The file app.js looks as follows:

Example App

The application prepares itself first, then gets some data, such as the host name and the port it is using, from the environment, or uses some defaults if they are not set. Then it starts to listen as a server on that port and host.

The description of the sample mentions some other pieces it is using. Let’s look at what they are.

The files

  • manifest.yml
  • package.json

marked in the project screen shot above, are used by BlueMix to deploy and run the application. Any application that runs on BlueMix needs this kind of information to be able to deploy and run.

Lets look at the manifest.yml file first. This is the content for our sample.

Manifest

This describes some of the properties of the application, such as the host, the application name, the command to start it as a Node.js application, the domain, the number of instances, and the required memory and disk. When creating an application from scratch, this is important information to look at.

The package.json file looks like this:

Package

This file describes the application and, more importantly, it describes the packages that the application requires to be able to run. It needs the Express web application framework, version 3.4.7, and the Jade template engine, version 1.1.4, to run on a node engine.
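In case the screenshot is not available, a package.json with such dependencies could look roughly like this (name and description assumed; the dependency versions are the ones named above):

```json
{
  "name": "NodejsStarterApp",
  "version": "0.0.1",
  "description": "A sample nodejs app for Bluemix",
  "dependencies": {
    "express": "3.4.7",
    "jade": "1.1.4"
  },
  "engines": {
    "node": "0.10.*"
  }
}
```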

Install Express and Jade 

To install these packages on your local machine, in order to be able to run the application, open a shell and use the package manager npm. Type each line below and hit enter. The versions needed are taken from the dependencies. Note, the newest version of express won’t work. There have been changes to it that will break the application.

npm install express@3.4.7
npm install jade@1.1.4

Wait for Node.js to download and install the packages.

Now right click on app.js and select to run it as a Node application. Open http://localhost:3000/ and see the web page displayed.

It is now possible to develop the application further on the local development environment. It is possible to use RTC to put it under version control, to share it and to plan the work. Any other source control providers that Eclipse supports can be used as well.

Deploy the application on BlueMix

Let’s try to deploy the application on BlueMix. How this works is described in the guide above. Open a shell. The first three commands can be run anywhere.

First set the API URL for Cloud Foundry:

cf api

Log into the server (use your own ID):

cf login -u 

This prompts for a password. Provide your password and finish the login.

Set the target space for the application. By default the space is called dev.

cf target -o  -s dev

Now change the directory to the folder that represents the project on disk, named rsjazz01 in this example. This folder is directly in the workspace folder you chose to use with Eclipse when you started it.

Now push the application to the BlueMix server:

cf push rsjazz01

The data gets uploaded, deployed and started. In the BlueMix Dashboard on the application tile you should be able to see that there are activities happening in BlueMix, while they show up in the shell. Once the process finishes the application is deployed and you can open the URL and see the same result you had from the local run.

Create a Simple Custom Sample Application

Running a sample application that is supposed to be running is – relatively – easy. But what about running a custom application? What is needed to do that?

Create a new Node.js project and give it a name. In the example we will use rsjazz02. Pick a name that suits you if you want to perform this as well.

The new project is empty. Create a new JavaScript file and call it app.js. The file should have the following content:

/*jshint node:true*/
/*
 * New node file
 */
var http = require("http");

function onRequest(request, response){
	response.writeHead(200, {"Content-Type": "text/plain"});
	response.write("Hello World - this is rsjazz's first BlueMix application!");
	response.end();
}

//There are many useful environment variables available in process.env.
//VCAP_APPLICATION contains useful information about a deployed application.
var appInfo = JSON.parse(process.env.VCAP_APPLICATION || "{}");
//TODO: Get application information and use it in your app.

//VCAP_SERVICES contains all the credentials of services bound to
//this application. For details of its content, please refer to
//the document or sample of each service.
var services = JSON.parse(process.env.VCAP_SERVICES || "{}");
//TODO: Get service credentials and communicate with bluemix services.

//The IP address of the Cloud Foundry DEA (Droplet Execution Agent) that hosts this application:
var host = (process.env.VCAP_APP_HOST || 'localhost');
//The port on the DEA for communication with the application:
var port = (process.env.VCAP_APP_PORT || 3000);
console.log('Start my server on port ' + port);
//Start server
http.createServer(onRequest).listen(port, host);

console.log('App started on port ' + port);

This application basically waits for an HTTP request on a port on a host and responds with a simple text. It reuses the parsing of the environment variables we saw in the sample application to get the port and the host name.

Run the application on the local Node.js and connect to it using http://localhost:3000/. It should run and provide the expected output in the browser window.

It does not have any dependencies on other packages. However, it would not yet run on BlueMix. It lacks the information required to deploy and run it there.

Copy the manifest.yml and the package.json files from the sample application over. You can also copy the readme files, but these are not required.

Open the manifest.yml file and edit it to use a new host name. To make sure the host name is unique you can create an empty project on BlueMix, but you don’t have to. BlueMix will tell you if the host name is already taken. In the code below I use rsjazz02 as name of the application and as host name.

applications:
- disk_quota: 1024M
  host: rsjazz02
  name: rsjazz02
  command: node app.js
  path: .
  instances: 1
  memory: 128M

The line

  command: node app.js

can stay as it is. If you chose to use a different name for the main JavaScript file, you would put the name in here.

Open the package.json file and edit it to match the new situation. You can change the name and the description. Remove the dependencies, as there are no dependencies to other packages needed. Keep the rest as it is.

{
	"name": "RSJazzSampleApp",
	"version": "0.0.1",
	"description": "A sample nodejs app for Bluemix - by rsjazz",
	"dependencies": {},
	"engines": {
		"node": "0.10.26"
	},
	"repository": {}
}

Save all the changes to these files.

The application is now ready to deploy on BlueMix. Change the directory of your shell to the new folder e.g. using cd ../rsjazz02.

Now push the application to the BlueMix server using the shell command:

cf push rsjazz02

The data gets uploaded, the application deployed, and you can test it using its URL once it is running and the health shows green (replace the name of the application in the URL with your application). The result should be the same as in the local run.

Use RTC and IBM Dev Ops Services

You can use IBM Dev Ops Services to develop and deploy BlueMix Applications with RTC. You would basically create a DevOps Services project to manage your source code and use it to deploy your application. I will try to blog about this later.

You would still do all the above steps to set up your local development environment.

Enable Eclipse To Deploy Directly to BlueMix

So far a local shell and the cf command is used to push the application up to BlueMix. As mentioned above you could also use IBM Dev Ops Services to do this.

There is a third option available. You can configure your Eclipse client to connect to BlueMix and to deploy the application automatically when you make changes, if you desire.

You can install the IBM Eclipse Tools for Bluemix into your local Eclipse Client.

Once you have done that, you can open the Eclipse View Servers and add a new server to it.

The server view would look like below. The overview shows the configured BlueMix connection. The Applications and Services shows the applications and services you have configured. The server view shows the applications on the server as well as the locally connected ones.

BlueMix Eclipse Tools

To be able to deploy your Node.js application, you have to change it a bit first. You have to convert it to a faceted form, using Configure in the context menu of the project.

Configure Faceted Form

In the following dialog you have to select the application type, in this case Node.js Application. Once you have done that, you can see it in the Add and Remove dialog for the server.

Configure Server

You can add applications and remove them. If configured to do so, any save will trigger a deployment.


This post shows how you can use RTC and Eclipse to start developing Node.js applications for BlueMix. It shows how to configure the environment and the first basic steps in a way to support getting over the first questions. After reading this you should be able to do some basic experiments in half a day or so.

As always I hope this helps someone out there to save some time and I appreciate feedback.


Posted in BlueMix, cloud, Jazz

Reading and Writing Files Directly from and to an RTC SCM Stream

Can you read files directly from an RTC SCM stream or write them directly into it, avoiding having to use a repository workspace? Kevin, a colleague, was recently facing this challenge writing automation for a customer. We discussed this challenge when we met on a trip.

I was pretty sure it should be possible, but I had no clue how. I have worked with parts of the RTC SCM API and published the results in this blog. However, I have always used a repository workspace and the usual workflow and I had no answer. So how can you do this, provided the API always wants a workspace connection?

Kevin got this puzzle solved with the help of one of our developers and has published the resulting code and reasoning in this blog post. If you are interested, check his code out and give him a thumbs up!

Other posts in this blog about using the SCM API



Posted in Jazz, RTC, RTC Automation, RTC Extensibility

Manage Scheduled Absences Using The PlainJava Client Libraries

I have seen questions in the forum around how to manage public holidays or other scheduled absences for a large user base. I have heard this kind of question from others as well, and thought a solution would be quite interesting, so here goes.

Since I hate repetitive, boring, time-consuming tasks as much as anyone else, I wanted to do something about this for a while. In the context of this question two colleagues from Japan, Saitoh-san and Kobayashi-san, approached me. They had already created a solution but were not sure how to publish it. They invited me into their IBM DevOps Services project to share what they had done. I looked into the code and found they had actually implemented an Eclipse wizard to import scheduled absences for a user.

Since I can’t blog about things I haven’t done, I decided to take a deeper look at what they had done and create a solution from there. I finally ended up creating some tooling that allows managing single absences and collections of scheduled absences for one or many users.

The code in this post is client API.

Warning, some of the code uses internal API that might change in the future. If the internal API changes, the code published here will no longer work.

The code in this post hides the RTC API for scheduled absences, which is actually an internal API, from the user. It provides methods to conveniently work with absences. It allows creating, reading and deleting absences for one or many users. Before we continue, the usual ceremony:

The post contains published code, so our lawyers reminded me to state that the code in this post is derived from examples from as well as the RTC SDK. The usage of code from that example source code is governed by this license. Therefore this code is governed by this license. I found a section relevant to source code at the end of the license. Please also remember, as stated in the disclaimer, that this code comes with the usual lack of promise or guarantee. Enjoy!

As always, please note, If you just get started with extending Rational Team Concert, or create API based automation, start reading this and the linked posts to get some guidance on how to set up your environment. Then I would suggest to read the article Extending Rational Team Concert 3.x and follow the Rational Team Concert 4.0 Extensions Workshop at least through the setup and Lab 1. This provides you with a development environment and with a lot of example code and information that is essential to get started. You should be able to use the following code in this environment and get your own extension working.

The Code

The code discussed in this post can be downloaded from here. Please note, the code might change over time, although I hope to keep the interfaces stable.

The Absence Manager Overview

The source code of the Absence Manager is separated into three projects. The core project contains all the code required to create tooling to manage absences.

Core Absence Manager Project

The package with suffix core contains the interfaces you will work with most of the time.

  • IAbsence represents the interface to scheduled absences in an external format, used to store the data in a common format and to make the data accessible
  • IAbsenceFactory is an interface that provides ways to create scheduled absences in the external format in different ways; the pattern to convert strings and timestamps can be set in the constructor; see the section Date, Timestamp and String Representation Troubles – Here be Dragons below
  • IAbsenceManager is an interface that the Absence Manager provides to allow creating, reading and deleting absences; it uses an IAbsenceFactory to create absence objects where needed

The package with suffix impl contains implementations for the interfaces that do the real work.

The package with suffix utils contains a utility class that basically manages the conversion of timestamps and string representations.

I ended up with this structure because I wanted clear abstractions of the concepts and to allow easily enhancing or replacing the implementations if one so desires. Before I finally ended up with this clear structure, there were a lot of inter-dependencies in the code that were hard to handle. They tended to break the code when introducing small changes and were very confusing in general.

This is by far the most complex automation I have blogged about so far and I needed to be able to test it during refactoring. Once I had the first snippets available, I used a test driven approach to finalize the solution. This also made the whole refactoring required to get to a clean structure possible in the first place.

The project contains unit tests for the core classes and interfaces. The unit tests should cover most of the interface and its implementation. I did not make sure everything is covered, but the main functionality should be.

  • AbsenceDataTest basically runs some simple tests to create absences in different formats
  • AbsenceManagerTest runs tests against a test repository and manages test scheduled absences in that repository for the logged-in user, leaving the user with no scheduled absences after testing
  • AllTests is a suite that runs all the tests above

The third project basically has two classes that implement CSV import and export of scheduled absences for all active (not archived) users.

CSV Absence Manager

The classes can be used as prototypes for a custom implementation.

To read and write CSV files I used opencsv to avoid having to implement CSV reading and writing myself. This made it very easy to implement the functionality after the fundamental interfaces were working.

NOTE: I will not include opencsv in the download. You can download it from sourceforge, unzip it and place the library in the lib folder of the project.

There are other open and free Java implementations of CSV file readers available for download as well, for example SuperCSV.

  • ScheduledAbsenceCSVImporter uses a CSV file with a comma separated format to read scheduled absences and adds them to all users that are not archived
  • ScheduledAbsenceCSVExporter exports all scheduled absences for all active users to a CSV file, with a similar format, except it contains the user ID as a leading column

Here is an example file to use as import source:

CSV Import Example File

Other Uses of the AbsenceManager

If you have a common system with an interface to get at absence data, you can create an integration with that system using the attached code. Such an integration could, as an example, synchronize the absences between the other system and Rational Team Concert servers. The simplest approach would be to always delete all absences for a user in RTC and then recreate the absences from that system. This could be done in a nightly run.

RTC, Absences and the AbsenceManager

RTC stores the absences as java.sql.Timestamps in the CCM database. Absences basically have the following data:

  • Summary – a text that describes the absence
  • StartDate – a Timestamp of the start date
  • EndDate – a Timestamp of the end date, the same as the start date in case of one day long absences
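A plain value object matching these three attributes could be sketched like this (an illustration only; the actual IAbsence interface and its implementation in the download differ):

```java
import java.sql.Timestamp;

// Sketch of an absence value object with the three attributes listed above.
public class Absence {
	private final String summary;
	private final Timestamp startDate;
	private final Timestamp endDate;

	public Absence(String summary, Timestamp startDate, Timestamp endDate) {
		this.summary = summary;
		this.startDate = startDate;
		this.endDate = endDate;
	}

	public String getSummary() { return summary; }
	public Timestamp getStartDate() { return startDate; }
	public Timestamp getEndDate() { return endDate; }

	// One-day absences use the same start and end date.
	public static Absence oneDay(String summary, Timestamp day) {
		return new Absence(summary, day, day);
	}

	public static void main(String[] args) {
		Absence vacation = Absence.oneDay("Vacation",
				Timestamp.valueOf("2014-07-17 00:00:00"));
		System.out.println(vacation.getSummary() + " "
				+ vacation.getStartDate() + " - " + vacation.getEndDate());
	}
}
```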

Absences are defined by these three attributes. To be able to find an absence, it is necessary to find one with the same summary and the same dates. While developing the Absence Manager, it became apparent that matching for the exact data is sometimes not desirable. Therefore the dates are, in some cases, only compared down to the same day, to avoid missing matches. For the summary the match is implemented as case-insensitive.

It would be easy to implement a way to find all absences by the summary. This would potentially be a collection of items. For the use cases so far it was not necessary to implement it and thus I left it out.

During testing, when absences were manually created, the time created for the absence seemed to be 2pm in the timezone of the server. While specifying the absence, the user actually only selects the date and not the time. If using automation, first check what the server would create and specify the times accordingly.
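The day-precision and case-insensitive matching described above can be sketched with plain JDK code (helper names assumed; this is not the actual Absence Manager implementation):

```java
import java.sql.Timestamp;
import java.text.SimpleDateFormat;

public class SameDayMatch {
	// Compare two timestamps with day precision by formatting both
	// down to the date portion only and comparing the results.
	public static boolean isSameDay(Timestamp a, Timestamp b) {
		SimpleDateFormat dayOnly = new SimpleDateFormat("yyyy/MM/dd");
		return dayOnly.format(a).equals(dayOnly.format(b));
	}

	// The summary comparison ignores case.
	public static boolean sameSummary(String a, String b) {
		return a.equalsIgnoreCase(b);
	}

	public static void main(String[] args) {
		Timestamp morning = Timestamp.valueOf("2014-07-17 02:00:00");
		Timestamp afternoon = Timestamp.valueOf("2014-07-17 14:00:00");
		Timestamp nextDay = Timestamp.valueOf("2014-07-18 02:00:00");
		System.out.println(isSameDay(morning, afternoon)); // prints true
		System.out.println(isSameDay(morning, nextDay));   // prints false
		System.out.println(sameSummary("Vacation", "VACATION")); // prints true
	}
}
```

This is why an absence entered manually (and stored at 2pm server time) can still be found by an automation that specifies midnight for the same date.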

The RTC Absence API

The API to get absences is very easy. The code below shows how to access the scheduled absences for a contributor. All the code is hidden in the AbsenceManagerImpl.

	/**
	 * Get the internal representation of all absences of a contributor.
	 * 
	 * @param contributor
	 * @param monitor
	 * @return
	 * @throws TeamRepositoryException
	 */
	private ItemCollection<IContributorAbsence> getContributorAbsences(
			IContributorHandle contributor, IProgressMonitor monitor)
			throws TeamRepositoryException {
		final IResourcePlanningClient resourcePlanning = (IResourcePlanningClient) fTeamRepository
				.getClientLibrary(IResourcePlanningClient.class);
		IContributorInfo info = resourcePlanning.getResourcePlanningManager()
				.getContributorInfo(contributor, true, monitor);
		ItemCollection<IContributorAbsence> absences = info
				.getAbsences(contributor);
		return absences;
	}

The code basically gets the IResourcePlanningClient to get the ResourcePlanningManager and uses this to get the IContributorInfo. This contains the absences as well as the team allocations. The call .getAbsences(IContributorHandle) returns an ItemCollection with all the IContributorAbsences. All the classes and interfaces, except IContributorAbsence, are internal API. This is the reason why they should be encapsulated, so that most of the implementation does not interfere with them. This will make it easier to adjust to changing APIs later.

The code below shows how to create an IContributorAbsence.

	/**
	 * Create an internal IContributorAbsence from an IAbsence.
	 * 
	 * @param contributor
	 * @param iAbsence
	 * @return
	 */
	private IContributorAbsence createContributorAbsence(
			IContributorHandle contributor, IAbsence iAbsence) {
		ContributorAbsence absence = (ContributorAbsence) IContributorAbsence.ITEM_TYPE
				.createItem();
		// the contributor, summary, start date and end date are set on the
		// new item from the data in iAbsence
		return absence;
	}

The data provided during creation is String and timestamps.

This code shows how new absences are saved using saveAbsences().

	 * Add absences from a collection to a user using the contributor object of
	 * the user. The method checks if an absence with the same summary, same
	 * start- and end- date already exist. The comparison converts the dates and
	 * uses a precision of a day to find matches.
	 * @throws TeamRepositoryException
	public void addAbsences(IContributorHandle contributor,
			Collection<IAbsence> absences, IProgressMonitor monitor)
			throws TeamRepositoryException {
		final IResourcePlanningClient resourcePlanning = (IResourcePlanningClient) fTeamRepository

		List<IContributorAbsence> absencesToBeCreated = new ArrayList();

		IContributorInfo info = resourcePlanning.getResourcePlanningManager()
				.getContributorInfo(contributor, true, monitor);
		 * Can't access all absences, need to narrow down to contributor
		ItemCollection<IContributorAbsence> existingAbsences = info

		for (Iterator<IAbsence> iterator = absences.iterator(); iterator
				.hasNext();) {
			IAbsence iAbsence = (IAbsence);

			 * Check if the absence is already there to avoid entering it
			 * multiple times. The check is for an match of all data. It does
			 * not prevent from entering absences that overlap or different
			 * summaries. The check of the dates is not precise, but on a day
			 * level.
			if (!exists_SameDay(existingAbsences, iAbsence)) {
				IContributorAbsence absence = createContributorAbsence(
						contributor, iAbsence);
				.toArray(new ContributorAbsence[absencesToBeCreated.size()]),

The code for removing absences is similar; only the call is to a different method – deleteAbsences().

	/**
	 * Remove a collection of absences from the absences. Matches by summary, as
	 * well as start- and end- date. Date match is done on a same-day basis.
	 * 
	 * @param contributor
	 *            contributor to remove absences from, must not be null.
	 * @param absences
	 *            collection of absences, must not be null.
	 * @param monitor
	 * @throws TeamRepositoryException
	 */
	public void removeAbsences(IContributorHandle contributor,
			Collection<IAbsence> absences, IProgressMonitor monitor)
			throws TeamRepositoryException {
		final IResourcePlanningClient resourcePlanning = (IResourcePlanningClient) fTeamRepository
				.getClientLibrary(IResourcePlanningClient.class);

		List<IContributorAbsenceHandle> absencesToBeRemoved = new ArrayList<IContributorAbsenceHandle>();

		IContributorInfo info = resourcePlanning.getResourcePlanningManager()
				.getContributorInfo(contributor, true, monitor);
		/*
		 * Can't access all absences, need to narrow down to contributor
		 */
		ItemCollection<IContributorAbsence> existingAbsences = info
				.getAbsences(contributor);

		// For all absences
		for (Iterator<IAbsence> iterator = absences.iterator(); iterator
				.hasNext();) {
			IAbsence iAbsence = (IAbsence) iterator.next();

			// search if the absence is available (match same day)
			IContributorAbsence found = findContributorAbsence(
					existingAbsences, iAbsence);
			if (null != found) {
				absencesToBeRemoved.add(found);
			}
		}
		IContributorAbsenceHandle[] remove = absencesToBeRemoved
				.toArray(new IContributorAbsenceHandle[absencesToBeRemoved
						.size()]);
		resourcePlanning.deleteAbsences(remove, monitor);
	}

That’s it. Nothing big. But….

The Rest of the Code

The remaining nine tenths of the code basically exist to make it easy and convenient to manage the scheduled absences and to hide the internal API within. In addition, tests and usage examples make up a reasonable amount of the code.

Date, Timestamp and String Representation Troubles – Here be Dragons

The biggest trouble in the whole implementation was the conversion of date and time to timestamps. This occurs in several areas. The general problem here is that there is no general tool that is able to parse any string defining a date/time without problems.

To overcome this, the AbsenceManager uses java.text.SimpleDateFormat. This requires a string expression to parse and map the external data to the internal representation.

By default the mapping pattern used is “yyyy/MM/dd hh:mm:ss z”. However, this requires providing a date, time and timezone. If necessary, e.g. to avoid the time and timezone, a different mapping string can be used. To provide a different mapping string, use the second constructor of the AbsenceFactoryImpl and provide the mapping string as shown below.

		IAbsenceManager absenceManager = new AbsenceManagerImpl(teamRepository, new AbsenceFactoryImpl("yyyy/MM/dd"));

Please note, if no time is provided, the server will pick an hour on its own.
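As a plain-JDK illustration of this mapping (the class and method names below are made up for the example and are not part of the shipped code), the following sketch parses both the full default pattern and a date-only pattern into a java.sql.Timestamp, the representation the absence code converts to:

```java
import java.sql.Timestamp;
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class DateMappingSketch {

	// Parse a string with the given SimpleDateFormat pattern and return it as
	// a Timestamp. Returns null if the string does not match the pattern.
	public static Timestamp toTimestamp(String pattern, String value) {
		try {
			Date parsed = new SimpleDateFormat(pattern).parse(value);
			return new Timestamp(parsed.getTime());
		} catch (ParseException e) {
			return null;
		}
	}

	public static void main(String[] args) {
		// Full default pattern: requires date, time and time zone in the input
		Timestamp withTime = toTimestamp("yyyy/MM/dd hh:mm:ss z", "2014/07/17 02:00:00 GMT");
		// Reduced pattern: date only, time-of-day defaults to local midnight
		Timestamp dayOnly = toTimestamp("yyyy/MM/dd", "2014/07/17");
		System.out.println(withTime != null); // prints true
		System.out.println(new SimpleDateFormat("yyyy/MM/dd").format(dayOnly)); // prints 2014/07/17
	}
}
```

Note that a string that does not match the configured pattern simply fails to parse, which is why sticking to one mapping string throughout is the safest choice.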

The IAbsenceFactory also provides ways to create new absence instances with different representations. However, keep in mind that some comparisons are done internally, so it is best to use a common schema.
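The “(match same day)” comparison mentioned in the removal code above can be sketched in plain Java. The helper name and the day-granularity semantics here are assumptions based on that code comment, not the actual internal implementation:

```java
import java.util.Calendar;
import java.util.Date;

public class SameDayMatcher {

	// Compare two dates at day granularity, ignoring the time-of-day.
	public static boolean isSameDay(Date d1, Date d2) {
		Calendar c1 = Calendar.getInstance();
		Calendar c2 = Calendar.getInstance();
		c1.setTime(d1);
		c2.setTime(d2);
		return c1.get(Calendar.YEAR) == c2.get(Calendar.YEAR)
				&& c1.get(Calendar.DAY_OF_YEAR) == c2.get(Calendar.DAY_OF_YEAR);
	}

	public static void main(String[] args) {
		Calendar cal = Calendar.getInstance();
		cal.set(2014, Calendar.JULY, 17, 2, 0, 0);
		Date morning = cal.getTime();
		cal.set(2014, Calendar.JULY, 17, 18, 30, 0);
		Date evening = cal.getTime();
		System.out.println(isSameDay(morning, evening)); // prints true
	}
}
```

This kind of comparison is exactly why a common date schema matters: two absences created with different time-of-day values still count as the same day.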

How to use the AbsenceManager

How to use the absence manager is shown in the classes AbsenceManagerTest and ScheduledAbsenceCSVImporter. You have to be connected to a team repository with a user that has sufficient permissions to read and write absences.

To get the AbsenceManager use:

		IAbsenceManager manager = new AbsenceManagerImpl(fTeamRepository,new AbsenceFactoryImpl());

You can create absences using the IAbsenceFactory:

		String absence1Summary="Absence 1";
		Date today= new Date();
		IAbsence absence1= manager.getAbsenceFactory().newInstance(absence1Summary, today);
		IAbsence absence1_sameday= manager.getAbsenceFactory().newInstance(absence1Summary, new Timestamp(today.getTime()+60000));
		IAbsence absence2= manager.getAbsenceFactory().newInstance(absence1Summary, "2014/07/17 02:00:00 CEST");
		IAbsence absence3= manager.getAbsenceFactory().newInstance(absence1Summary, "2014/07/17 02:00:00 CEST", "2014/07/30 02:00:00 CEST");

You can add single or multiple absences like this:

		// Add a single absence using a contributorHandle
		manager.addAbsence(contributorHandle, absence1, monitor);
		// Add a collection of absences
		ArrayList<IAbsence> addAbsences= new ArrayList<IAbsence>();
		manager.addAbsences(contributorHandle, addAbsences, monitor);

		// Add a single absence using a userId
		manager.addAbsence(manager.getContributor("ralph"), absence1, monitor);
		// Add a collection of absences
		ArrayList<IAbsence> addAbsences= new ArrayList<IAbsence>();
		manager.addAbsences(manager.getContributor("ralph"), addAbsences, monitor);

The interface implements a convenience method getContributor(String userID) to get the IContributor interface, which also extends IContributorHandle, from the userId.

You can get the absences for a user:

		ArrayList<IAbsence> userAbsences = manager.getAbsences(contributorHandle, monitor);

You can test for an existing absence:

		if (manager.hasAbsences(contributorHandle, absence1, monitor)) {

You can remove specific absences:

		// Remove a single absence
		manager.removeAbsence(contributorHandle, absence3, monitor);
		// Remove a collection of absences
		manager.removeAbsence(contributorHandle, absenceCollection, monitor);

You can delete all absences, or all absences up to a specific date, for a single user or for all users:

		// Clear all absences for a user
		manager.purgeAbsences(contributorHandle, null, monitor);
		// Clear all absences up to a certain date for a user
		manager.purgeAbsences(contributorHandle, new Date(), monitor);
		// Clear all absences for all users up to a certain date (including archived users)
		manager.purgeAbsences(new Date(), monitor);
		// Clear all absences for all users (including archived users)
		manager.purgeAbsences(null, monitor);
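To illustrate the cutoff semantics (a null date selecting everything), here is a hypothetical plain-Java filter. Whether the real purgeAbsences treats the cutoff date as inclusive is not spelled out in this post, so the sketch simply picks one interpretation:

```java
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

public class PurgeFilterSketch {

	// Select entries at or before the cutoff; a null cutoff selects everything,
	// mirroring the "clear all" purge calls described above.
	public static List<Date> toPurge(List<Date> entries, Date cutoff) {
		List<Date> result = new ArrayList<Date>();
		for (Date d : entries) {
			if (cutoff == null || !d.after(cutoff)) {
				result.add(d);
			}
		}
		return result;
	}

	public static void main(String[] args) {
		List<Date> entries = new ArrayList<Date>();
		entries.add(new Date(0L));        // the epoch
		entries.add(new Date(86400000L)); // one day later
		System.out.println(toPurge(entries, null).size());         // prints 2
		System.out.println(toPurge(entries, new Date(0L)).size()); // prints 1
	}
}
```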

Import the code

The code can be downloaded here. Save the code on the local disk. Set up an Eclipse client, for example the RTC Eclipse client. Follow the sections Setting Up The Plain Java Client Libraries and Setting Up Your Workspace for Plain Java Development in the post Setting up Rational Team Concert for API Development to set up at least the Plain Java Client Libraries. You don’t have to set up the SDK to run the code; that step would however provide you with access to the API classes and their source code. Make sure to create the user library for the Plain Java Client Libraries.

Import the compressed file as an archived project file into Eclipse.

The example does not ship the opencsv library. Download it by following the opencsv link and clicking on the download link in the General section of the project description.

Open CSV download link


Store the downloaded file, e.g. opencsv-2.3-src-with-libs.tar.gz, in a temporary folder. Use 7Zip to extract the file, then use 7Zip again to extract the contained archive. Find the folder deploy in the extracted folder structure, e.g. C:\temp\opencsv-2.3\deploy, and copy the enclosed JAR file, e.g. opencsv-2.3.jar, into the folder lib underneath the project. Please note that opencsv also ships the file junit.jar; that is not the file you want.

For all the projects you just imported, run a clean build and check the build path for errors. If you used the proposed name PlainJavaApi for the Plain Java Client Libraries user library, you should be fine. If you see errors, you probably used a different name. In this case, configure the build path: remove the Plain Java Client Libraries user library from the build path and add your own user library.

Finally, all errors should be gone.

Run the ScheduledAbsence Code

You are now ready to use the code and run the Unit Tests, the CSV importer and the CSV exporter.

The code ships with launches. They are located in a sub folder named Launches in the projects. The launches should now be available in the Eclipse Debug Configurations and Run Configurations menus.

Open the configuration, e.g. for the ScheduledAbsenceCSVImporter launch. The Arguments tab will show:

"" "ralph" "ralph" "USFederalHolidays2014.csv"

The ScheduledAbsenceCSVImporter and the ScheduledAbsenceCSVExporter require

  • the repository URL
  • a user ID
  • a password
  • a name for the comma-separated (CSV) file

Replace the information with your own values. Then you can run the importer and the exporter.
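The importer reads the CSV file with opencsv. Just to illustrate the idea, a minimal plain-Java reader for a simple two-column line (summary, date) could look like the sketch below. The actual column layout of the shipped file and opencsv’s quoting rules are not shown here, so treat this purely as an illustration with an assumed layout:

```java
public class MinimalCsvLine {

	// Split an unquoted CSV line into fields. No quoting or escaping support,
	// which is an assumption that does not hold for general CSV; the real
	// importer uses opencsv for that.
	public static String[] fields(String line) {
		return line.split(",", -1);
	}

	public static void main(String[] args) {
		String[] f = fields("Independence Day,2014/07/04");
		System.out.println(f[0] + " on " + f[1]); // prints Independence Day on 2014/07/04
	}
}
```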

The same information is required in the AbsenceManagerTest JUnit test. It is hard coded in the AbsenceManagerTest. Replace the values with your own information if you intend to run the test against your own test system.

Next Steps

I will try to find some time to create a version that can be shipped as binaries with batch files, so that it can easily be run from the command line. In general, the post Understanding and Using the RTC Java Client API should give you all the information you need to get this done yourself.


This post provides you with all the code and information needed to automate managing scheduled and other absences in RTC repositories. The code is obviously not production code, so you should make sure it works as advertised in your environment. The provided tests should help you to fix errors, should they occur.

As always, I hope this post saves users out there some time – or is at least fun to read.
