Jazz Community Contributions


The Jazz community has started sharing their tools here: http://jazz-community.org/. The code for their tools can be found here.

There is a very active Jazz user community in Europe, with members from several companies that heavily use Jazz products such as Rational Team Concert, Rational Quality Manager and Doors Next Generation.

The community members try to meet to share their experience with using, administering and running the Jazz tools in their environments. It became clear that the different companies and community members face similar challenges, and that it would be beneficial if they could share the tools they created to make running such an environment easier.

The community has now started sharing their tools in this community project and in this code repository.


Some of the tools have already been shared on other sites. I have linked the ones I am aware of on the Interesting Links page in the ‘Extensions Provided by the Community’ section, and the code for some of them is already available in the community repository.

I am looking forward to seeing more community-created tools soon. Visit the community to find out what they have to offer.

Last but not least, a special thanks to Dani for getting this awesome user group started, and to the members of the community for their spirit, engagement and willingness to contribute and help each other. You know who you are!


Upgrading your CLM system? – Definitive checklist.


Upgrading the IBM CLM solution is sometimes perceived as complex, and users struggle with it. Yesterday I was notified that a colleague had published a blog post providing a checklist that helps make the upgrade easier and more likely to succeed. I thought this was worth reblogging to make it easier to find.

Please find the information in Paul’s just-published blog post Upgrading your CLM system? – Definitive must-read checklist. The link points to our Deployment Wiki, which is always worth a look for all things related to deployment, installation, setup and configuration.

How to filter RTC email notifications by project or by user


Spamming the inboxes of your project members with notification mails? This is a reblog of Mark’s internal blog.

The RTC Mail Filtering Problem

A common requirement for Rational Team Concert administrators is to limit email notifications for a single RTC project or user. There are many scenarios that might drive this need. Here are some examples:

  • Thousands of work items are imported and you do not want to trigger notifications for the new work items.
  • A new field is added and data from the old field is copied over, affecting many work items. You don’t want users to be notified.
  • A subset of users now uses a different RTC server; they no longer want notifications from the old project, but they need to remain active in the old project.

Unfortunately, the current RTC product as of version 6.0.1 does not support this requirement. Notifications are controlled at the Jazz Team Server level. The JTS may control multiple RTC, Rational Quality Manager, and Rational Doors Next Generation repositories. Turning off notifications in the JTS turns off mail for all RTC, RQM, and RDNG projects from all repositories controlled by that JTS server. Mail generated for any of those RTC, RQM, and RDNG projects while notifications are off in the JTS is lost.
This leaves administrators with a tough choice: lose all mail for everything connected to the JTS, or live with excessive and unwanted notifications for a single project.

A Solution:  Milters

We have found and implemented a solution for this requirement: milters. Milters are plugins that add additional functionality to Sendmail. The “milter-regex” plugin permits filtering mail using regular expressions.

These are the overall steps for utilizing the regex milter:

  • Set up a machine with Sendmail and install the milter-regex plugin
  • Configure Sendmail to allow mail from the JTS server machine in /etc/mail/access
  • Develop a set of regular expressions that match mail from a particular project or user, and add them to the /etc/mail/milter-regex.conf configuration file
  • Restart the milter-regex service
  • Edit the email settings of the JTS to point to your Sendmail machine as the mail SMTP Server

You can easily add and remove projects and users from the filtering list by editing the /etc/mail/milter-regex.conf file.  You can turn off filtering completely by restoring the JTS SMTP Server value back to its original setting.
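For the second step in the list above, a minimal sketch of what the /etc/mail/access entry could look like follows; the JTS host name is an example, and the makemap invocation assumes a typical Linux Sendmail installation:

# /etc/mail/access - allow the JTS machine to relay mail through this Sendmail host
jts.example.com        RELAY

# rebuild the access database after editing the file
makemap hash /etc/mail/access < /etc/mail/access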

Regular Expressions

You’ll have to examine your email templates to determine the project and user information for the various email formats.  Here’s an example of the configuration statements for our email templates to filter out all mail for project “Project XYZ”:

reject "Mail filtered for project: Project XYZ"
body ,Team Area:.* Project XYZ,i
body ,Project Area:.* Project XYZ,i

Both “body” statements are required.  The “reject” statement defines a message which is logged into the /var/log/messages log file when a piece of mail is filtered out.

This is an example of filtering for a single user:

reject "Mail filtered for user: Mark E. Ingebretson"
body ,The user 'Mark E. Ingebretson' made a .* request,i
body ,Mark E. Ingebretson mentioned you in,i
body ,Mark E. Ingebretson.*changed on,i

If you are a user of our IBM Systems servers and need email filtering, you can submit an ITHELP request.

Other Approaches

There is another approach from Sam provided on the Jazz.net forum here: Manage User E-mail Preferences for Mass Updates. It has been sitting on my Interesting Links page for a while. Time to show it here.

A new solution is available for RTC 6.0 starting with iFix03 if you are using the Java API: use the Skip Mail Save Parameter.

 

Summary

I found the topic very interesting, and related questions also come up on Jazz.net, so when Mark showed this to me I decided to reblog and promote it. I hope this solution helps RTC administrators all over the world address this important requirement.

Alien Skies II – JavaScript Yet Another Computer Language?


I have tried to look into Web UI development, especially in the context of RTC extensions such as dashboards, several times in the past. I couldn’t get it to fly. It was a total disaster so far. I couldn’t understand how it all works and what the fuss was all about. The more I looked into it, the more confused I became.

Surprisingly, looking into Doors Next Extensions seems to have revealed the missing link(s) I needed to better understand the whole topic around JavaScript. I have talked to other people who had similar issues with JavaScript, so I hope it is worth sharing my thoughts here.

I am well aware that the topic and my approach to it could be very controversial. If you would like to disagree with any of my assessments, please feel free to do so in the comments. I rarely dismiss comments, but please be aware that the comments in this blog are moderated xD.

I have a book about JavaScript, but I rarely see JavaScript like it is described in that book. So, how does this JavaScript stuff work?

It’s Not The Language

As far as I can tell, the language itself does not really matter that much. The syntax of JavaScript is very similar to that of other languages such as Java. The language has inheritance mechanisms, and it supports event-based, asynchronous and functional aspects. All in all, a collection of properties that are shared across several language families.

There are some issues with the language (at least from my, still uneducated, point of view). For a start, it is not type safe. While this is very flexible, it is also asking for trouble in my experience. JavaScript is very dynamic and does not get compiled, so a lot of issues only show up at run time and not during coding.

Anyway, having a common language with a common syntax and semantics is probably useful if you have to switch domains often. At least you don’t have to learn a new language over and over.

It’s Not The Development Environment

What I am missing is good library handling and context-sensitive editor support. There might be some cool development environment out there, but it has eluded me so far. Any suggestions are welcome, as always.

I saw the first computers making their appearance in schools: enormous keyboards with a display of two lines of 40 characters, nothing one would associate with a computer today. I was lucky enough to write my first own programs on the Commodore PET, VIC-20 and 64 and on the first IBM PCs (ignoring my very first program on a Texas Instruments calculator with 32 programmable steps). I have developed without a debugger, because debuggers were either not available or not affordable; you could buy in-circuit debuggers for embedded systems, but those were sometimes unaffordable even for companies back then.

I have seen bad development environments and debugging by printf. It is not that bad with JavaScript in browser-based applications. Whether a good debugging environment is available basically depends on the use case.

Feeding The Confusion

I am usually pretty good at spotting patterns, but looking into the JavaScript domain I had a hard time finding the basic ones. A lot of the confusion I felt is, in hindsight, probably related to the fact that there is not just one JavaScript. There are multiple, and they are used in different ways and with different patterns. In addition, the expression JavaScript is often used in contexts where it would probably be better to talk about Web or JavaScript libraries or frameworks instead.

  1. There is the JavaScript that runs in browsers. This JavaScript usually works together with CSS and HTML. It is used to create dynamic content and presentation on the Web, to provide more or less usable UIs in a browser that was designed to show more or less static HTML documents. The JavaScript engine is built into and shipped with the browser, and there are different engines and development environments for different browsers. This type of JavaScript basically requires understanding how JavaScript, HTML and CSS work in combination and how to control their interaction.
  2. Browser-based JavaScript is integrated into several different frameworks and toolkits such as Ajax/Dojo and others.
  3. There is Node.js, which is server-side JavaScript processing plus a ton of libraries and frameworks that are available for it. The only common part I can identify here is the language that is used. I did not look into the details, however, so maybe there is more; the run time used could, for example, be one of the common ones.

So when I started to look at the Doors Next extensions, which are labeled JavaScript, I quickly realized that there were constructs in the examples that I had never seen before in any JavaScript example. The usual pattern again: every time I looked at something with JavaScript inside and thought I had some basic knowledge already, I saw something different. This time, however, I was able to figure out what I was missing, and this seems to help with all the other cases as well.

The Doors Next Extension Mechanism

The Doors Next extension mechanism is based on JavaScript, HTML and CSS. The API supports TypeScript, which provides a more type-safe and compile-like way of developing JavaScript. The Doors Next API itself comes with a TypeScript interface definition. Some of the examples that are provided use jQuery, a JavaScript-based library and framework. The latter means that the extensions look a bit uncommon.

Uncommon Common

To some extent this seems to be fairly typical of my experience with JavaScript. Almost every solution I have seen has some common part that can be found in other solutions. But looking across all the examples, there is only a tiny common core, which often is basic JavaScript functions. If the JavaScript-based solution deals with UIs, there are usually the common CSS and HTML; if it does not focus on UIs, this is missing. Since there are so many frameworks and libraries, JavaScript-based solutions can look very different, and it is important to understand the specific frameworks and resources that are used in each example.

Summary

Granted, it took a while for me to come to terms with JavaScript, but looking into the Doors Next Generation extension mechanism provided me with some important insight. This will help me to understand these technologies in other contexts as well. I am looking forward to playing with Doors Next extensions and adding the results to this blog.

Publishing XML and Other Data With the WAS Liberty Profile


Since CLM now ships the WebSphere Application Server Liberty Profile as the default, you might want to publish XML and other data in a way similar to the easy way that was available with Tomcat, just using the application server that is already there. This avoids having to set up an additional web or application server.

Problem

Sometimes it is necessary to provide access to some files using the HTTP/HTTPS protocol.

This is interesting for the Process Enactment Workshop, or if you want to publish your own XML data to be used with HTTP Filtered Value Sets, host custom Doors Next Extensions, or if you want to publish Build results.

It is convenient if there is already an application server available to host these files and make them accessible, instead of having to set up a web server. This especially holds if it is necessary to provide the data using HTTPS and not HTTP, because browsers these days refuse to process content that is mixed from HTTP and HTTPS sources. The latter is important for Doors Next Generation extensions. Not having to set up a new server also avoids having to create a trusted certificate for it, removes an additional failure point and extra backup and maintenance, and provides URI stability as well, since you don’t want to change the URI of the hosted CLM applications anyway.

Tomcat

With Tomcat it was relatively easy to do that: just add a folder under \tomcat\webapps and place your files there. Assuming the folder name is myfiles, Tomcat then serves the files as https://serverpath:port/myfiles/fileName.

Another way was to simply drop the file into the tomcat/webapps/ROOT/ folder or a sub folder of it, which has the same effect.

Is there a similar way to achieve this with the WAS Liberty Profile shipped with CLM?

Example Scenario

Let’s assume, like in the scenario explained in Publish and Host XML Data Using Tomcat – The Easy Way, we want to publish a file makers.xml to be accessible with a context root PEWEnactmentData on the server with the public URI root https://clm.example.com:9443/.

The expected outcome is to be able to access the XML file content using the URL https://clm.example.com:9443/PEWEnactmentData/makers.xml.

Solutions

There are two relatively simple solutions for WAS Liberty. The solutions below are based on information from Lars, one of our WebSphere experts, and the IBM WebSphere Application Server Liberty Profile Guide for Developers.

For the WAS Liberty Profile, all solutions require deploying an application. There are several ways to create and deploy an application in the WAS Liberty Profile.

  1. Deploy an application in some folder and declare the context root and location of the application in the server.xml file or in an XML file included from it
  2. Deploy an application in the dropins folder; for this to work, the application server must have dropins monitoring enabled

Deploying an application in this context means one of the following:

  1. Create a folder with some content in a location (or copy the folder into the location)
  2. Deploy a compressed folder (archive) with some content

Please note: Any changes to any of the server configuration XML files are automatically picked up by WAS Liberty Profile. There is no need to reboot the server for the steps below to work.

1. Deploying an Application – Standard

The trick in this solution is to pretend to deploy an uncompressed web application such as an extracted WAR file. It is obviously possible to create a real WAR file or the structure that is contained in it, but that is not really necessary. See the last section if you are interested in deploying the application as a compressed WAR file.

1.1 Deploying the Application

The only thing necessary to trick the WAS Liberty Profile into making the data available with a correct context root is to create a folder with the context root as name and an extension .war, and to make the application location known in an XML file.

In the default setup of WAS Liberty Profile with CLM, after the first launch, all installed CLM applications end up in the folder <Install Folder>\server\liberty\servers\clm\apps. The screen shot below shows this structure.

FolderStructure

The applications are deployed as compressed WAR file and have been extracted to a folder with the name of the application. As an example the CCM application (RTC) came shipped as ccm.war.zip and was extracted to a folder ccm.war. The context root of the CCM application is ccm, which is basically the name without the extension war. The same pattern holds for all the other applications.

It is possible to create a folder to host the files to be published. The folder should be named <context root>.war to result in an application with the desired context root.

Files within this folder will be available with their file name.

A file in that folder with the name <filename> will be accessible as <public URI root>/<context root>/<filename>.

If the files are within sub folders, the folder path within the root folder and the file name compose the file name part of the URL. For example, if the root folder has a sub folder named myfiles with a file test.xml in it, the file is accessible as <public URI root>/<context root>/myfiles/test.xml.

In our example with the given public URI root being https://clm.example.com:9443/, the folder name <context root> being PEWEnactmentData and <filename> being makers.xml the URL would be https://clm.example.com:9443/PEWEnactmentData/makers.xml.
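Putting this together for the example, the published folder could look like the following sketch (the myfiles sub folder and test.xml are only there to illustrate the sub folder case):

<Install Folder>\server\liberty\servers\clm\apps\
    PEWEnactmentData.war\            (a plain folder, despite the .war extension)
        makers.xml                   -> https://clm.example.com:9443/PEWEnactmentData/makers.xml
        myfiles\
            test.xml                 -> https://clm.example.com:9443/PEWEnactmentData/myfiles/test.xml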

1.2 Declaring the Application Location

The WAS Liberty Profile needs to know which application has to be published and where to find its resources. WAS Liberty Profile defines an XML element to declare which application is located where. The element is <application …/>. It supports various attributes and can come in many different forms. The most important attributes are

  • id: Must be unique and is used internally by the server
  • location: The path to the application resource
  • context-root: The context root of the application
  • name: The name of the application
  • type: Specifies the type of application (war or ear)

The location and the id are required, and either a name equal to the context root or an explicit context-root must be provided. The type can be omitted. I did not provide the id and it worked for me, but since the server requires it, it should be provided; use the same value as the name or context root for the id.

It is necessary to add the application declaration to the server configuration, for example by adding it to the server.xml file. In a CLM deployment a file application.xml is included in the server.xml and declares all the applications; it makes sense to use that file instead.

Save the xml files after any change to activate the new setting. The server will pick up the change.

2. Deploying an Application – dropins

The WAS Liberty Profile has the capability to monitor a special dropins folder on a regular basis. If new content is detected, it is automatically made available by the server. The automatic monitoring needs to be enabled for this to work. A standard CLM deployment disables the monitoring to reduce server load.

2.1 Deploying the Application – dropins

In a default CLM install the dropins folder is located here:

<Install Folder>\server\liberty\servers\clm\dropins

Create a folder named <context root>.war in the dropins folder and add the files to publish. After the addition is detected, the files will be accessible as explained in 1.1.

The folder <Install Folder>\server\liberty\servers\clm\dropins should now look like below.

Published application with file
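In plain text, the expected structure is simply:

<Install Folder>\server\liberty\servers\clm\dropins\
    <context root>.war\
        <filename>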

2.2 Enable Automatic Monitoring for Dropins

The automatic monitoring of the dropins folder is disabled in a standard CLM setup with WAS Liberty Profile. This saves processor cycles and I/O during server operation. It has to be enabled once in the server.xml file for this solution to work. Find the XML element <applicationMonitor> and change the attribute dropinsEnabled from "false" to "true", then save the file. The image below shows the changed configuration file.

Enable Dropins Monitoring

The attribute pollingRate determines how often the server checks the location for changes. Adjust the value to your needs.
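After the change, the element in server.xml could look similar to the following line (the pollingRate value shown is only an example; keep whatever value your configuration already uses):

<applicationMonitor dropinsEnabled="true" pollingRate="500ms"/>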

Save the server.xml file after any change to activate the new setting. The server will pick up the change.

The XML file content should now be accessible.

Example 1 – Standard

In the example we want to publish a document with the URI https://clm.example.com:9443/<context root>/makers.xml, where <context root>=PEWEnactmentData. The public URI root is already given by the server set up.

1.1 Create the “Application”

As a first step, create the application: create a new folder named PEWEnactmentData.war in the folder <Install Folder>\server\liberty\servers\clm\apps. The folder could be created anywhere, but it makes sense to put it where the other applications are. If it is necessary to use a different folder, for example to allow other users to modify the content, place the folder in a different location and note the path; the path to the location will become important in the next step.

Put the files to be published underneath this folder. In the given example, put the file makers.xml into the folder.

The folder structure should look like the image below:

New Application folder

1.2 Publish the “Application”

As explained in section 1.2 Declaring the Application Location above, the WAS Liberty Profile needs to know which application has to be published and where to find its resources. This is declared with the <application …/> element and its attributes id, location, context-root, name and type; the location and the id are required, plus either a name equal to the context root or an explicit context-root.

The element can be put into several places. In a CLM installation, by default, the applications are declared in the file

<Install Folder>\server\liberty\servers\clm\conf\application.xml

The file is included in the file <Install Folder>\server\liberty\servers\clm\server.xml. It would be possible to place the application declaration in the file server.xml, but it seems to make more sense to put it into the file application.xml instead.

Add the lines

<!-- My Application -->
<application id="PEWEnactmentData" name="PEWEnactmentData" location="${server.config.dir}/apps/PEWEnactmentData.war"/>

to the file application.xml and save the file.

The file should look like below:

Declare new Application

In this case the location is relative to the server configuration folder, which in our case is <Install Folder>\server\liberty\servers\clm\. The location could really be anywhere. If it is necessary to use a folder that, for example, can be accessed by regular users, you could change the location to somewhere with less read and modification restrictions.

After saving the application.xml file, the published XML file should be accessible using the URL https://clm.example.com:9443/PEWEnactmentData/makers.xml.

The image below shows the browser access.

Browser Access To XML File

Example 2 – dropins

In the example we want to publish a document with the URI https://clm.example.com:9443/<context root>/makers.xml, where <context root>=PEWEnactmentData. The public URI root is already given by the server set up.

2.1 Create the “Application”

As a first step, create the application: create a new folder named PEWEnactmentData.war in the folder <Install Folder>\server\liberty\servers\clm\dropins.

Put the files to be published underneath this folder. In the given example, put the file makers.xml into the folder.

The folder structure should look like the image below.

Published application with file

Assuming the automatic application monitoring is enabled, the file should be accessible after the next polling cycle (at the latest after waiting one full pollingRate interval). If automatic application monitoring is not enabled, enable it as described in 2.2 and save the configuration file.

The XML file content should now be accessible using the URL https://clm.example.com:9443/PEWEnactmentData/makers.xml.

Deploying Compressed Folder Structures

It is also possible to publish real WAR or EAR files in a way similar to what is described above. For example, create a WAR file as described in Publish XML Data Using Tomcat – Hotfix for The Process Enactment Workshop and attached to that post. Deploy the compressed WAR file, for example PEWEnactmentData.war, in the dropins folder, or declare it as an application and put it into another location as explained above. This automatically deploys the application and makes the contained files accessible.

Just zipping the folder structure we created above, naming the archive PEWEnactmentData.war and trying to deploy it does not seem to work, however. In this case the WAR file needs to provide the required supporting structures.
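The post does not spell out which supporting structures are required. As a general hint (an assumption, not verified against Liberty here), a regular WAR file contains at least a WEB-INF folder with a web.xml deployment descriptor, so a working compressed PEWEnactmentData.war would contain roughly:

PEWEnactmentData.war  (zip archive)
    WEB-INF/
        web.xml
    makers.xml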

Related Posts

The RM Extensions Hosting Guide for CLM 6.0.1 and later versions explains the dropins example as well.

Summary

This post shows how easy it is to publish supporting files on the WAS Liberty Profile. This is important if you want to host custom Doors Next extensions, or if you want to publish your own XML data to be used with HTTP Filtered Value Sets or for the Process Enactment Workshop, and it can possibly also be used for publishing build results. In the latter case, you would likely publish a root folder that contains all the build results.

As always I hope this post has some value to users out there, or is at least interesting to read.

Creating Custom Link Types for Rational Team Concert


It can be useful to be able to create custom link types for RTC. This is interesting if special business logic or behavior needs to be implemented and the available link types don’t fit. How can this be done?

It is surprisingly easy to do as Eduardo describes in Creating a New Link Type in his blog Extending Rational Team Concert – RTC Extending. I had to do it recently and thought it would be useful to describe the experience, adding a bit more detail to the content of the blog above.

License and Download

The post contains published code, so our lawyers reminded me to state that the code in this post is derived from examples from Jazz.net as well as the RTC SDK. The usage of code from that example source code is governed by this license. Therefore this code is governed by this license. I found a section relevant to source code at the end of the license. Please also remember, as stated in the disclaimer, that this code comes with the usual lack of promise or guarantee. Enjoy!

You can download the code for this extension here.

Just Starting With Extending RTC?

If you are just getting started with extending Rational Team Concert or creating API-based automation, start with the post Learning To Fly: Getting Started with the RTC Java API’s and follow the linked resources.

You should be able to use the following code in this environment and get your own automation or extension working.

Important Update

Please note that, at least up to RTC 6.0.2, it is not possible to create custom link types that connect items in different repositories. The built-in link types, for example Tracks/Contributes To and Related Change Request, are available, but it is not possible to create custom links that behave this way.

Please note that custom link types, at least up to RTC 6.0.2, are ignored by the data collection component and cannot be used for reporting purposes.

Creating a Custom Link Type

All that needs to be done to create a custom link type is to create a plug-in.

Oh, no! Another Extension! Can’t I just define a new link type in the process configuration? I can literally hear it 8). Unfortunately that is not supported in RTC today. Vote for this work item if you want this kind of capability.

On the other hand, if there is really a need for business logic and operational behavior for the link type, an extension would be needed and it would not matter.

To make the new link type available in the Eclipse UI as well as in the Web UI, the plug-in needs to be deployed in the client as well as in the server. The RTC SDK calls this common API and it makes sense to follow this example.

As explained in other posts already it is crucial to come up with a good naming schema for the various projects and ID’s needed. Have a unique part in it (in my case js as infix) to be able to find it on disk and in the UI.

When creating the link type, an ID for the link is needed as well as IDs for the endpoints of the link. It is crucial to keep these values once they are chosen; if you change them while developing, you introduce dangling references into your test model. I managed to damage my test database and got unreliable results when doing this. If this happens, it is possible to delete the folder server that sits in the same folder as the workspace and create a new test repository by running the [RTCExt] Create RTC Test Database JUnit test as described in the Extensions Workshop.

This is the extensions editor for the plug-in I created:

Link Type Common Plugin


Different from the aforementioned blog post, the plug-in defines a component as well as the link type. The reason is that the component makes it possible to see whether the plug-in was successfully deployed in the server. At some time in the future a component might become required as well, so I always define a component for my extensions.

You can use the context menu of the editor to add the extension elements for the source, target, endpoints and the itemReferenceTypes.

The endpoints allow specifying the multiplicity. This actually has an impact on how the links behave: if an endpoint specifies 0..1 as multiplicity, only one item can be referenced with the endpoint. If an item is already selected and another item is chosen in the add link dialog, the old item is replaced by the new one.

The plugin.xml looks as follows

Final plugin.xml


The implications of the values in the plugin.xml can be found in the extension point description, which can be opened from the extension point itself. Review it to understand the options. From that description:

  • id – String id for the link type.
  • editors – Semicolon separated string of ids of those components permitted to create and delete links of this type. For example, the string “com.ibm.team.repository;com.ibm.team.scm” specifies the two components “com.ibm.team.repository” and “com.ibm.team.scm”. If the attribute is not present, this indicates that there is no restriction regarding which components (or client) is permitted to create and delete links of this type.
  • constrained – If true, the defining component requests that Links of this link type not be created by other components (without permission, or without going through an API provided by the defining component).
  • internal – If true, this link type is an internal detail of the implementation of the defining component, and is not intended as a generic link type which users can freely create, delete and view.
  • componentId – The id of the component defining this link type. Component ids are declared using the com.ibm.team.repository.common.components extension point. If set and if constrained=true, then only services that are part of that component may save and delete links of this type.

The itemReferenceType entry is still a bit mysterious. It is easy to review existing examples: on the com.ibm.team.repository.common.linktype extension select “Show References” in the context menu and browse through the examples. Especially the references in com.ibm.team.workitem.common are of interest. It is possible to define different kinds of endpoints, depending on which items are to be linked.

In this case work items are supposed to be linked to work items, so both ends select this itemReferenceType. Please be aware that this kind of link will only work within one CCM repository; it does not allow linking across repository borders. There are other CLM link types that would allow this to happen.

I added some icons for the link types that show up in the editors. For the final deployment, I made sure that the folders icons and META-INF as well as the plugin.xml file are selected in the binary build.

I tested the new link type in a Jetty based test server OSGI2 launch as well as in an Eclipse2 Application launch.

Prepare to Deploy

As already mentioned, the common plug-in needs to be deployed in the server as well as in the Eclipse client. Please follow Lab 6 in the Rational Team Concert 4.0 Extensions Workshop if you have never done this and want to understand how deployment works in general. The workshop explains in great detail which files you need to deploy on the server. The procedure below follows the deployment procedure on the server side. Unlike in the workshop, the client extension gets deployed using an update site.

To make this easier I created a Feature project as well as an Update Site project. To make deploying it in the server easier I also created a normal Eclipse project that contains the deployment folder structure as well as the provision profile INI file. The projects look as follows:

Final Project Structure


Once the Update Site project is built, it is easy to copy the site.xml file and the folders features and plugins into the folder js_custom_linktype in this project. The folders and contained files in this project can then be used to deploy the extension on the server using copy and paste. It is easy to do, reliable and repeatable.

The provision profile js_custom_linktype.ini looks as follows:

url=file:ccm/sites/js_custom_linktype
featureid=com.ibm.team.js.workitem.custom.linktype.feature

It references the sub folder in the sites folder that will contain the features and plugins folders, once it is copied over to the server for deployment.

TIP: Always delete all content of the Update Site project except the site.xml before building the update site again. I have seen cases where subsequent builds did not successfully pick up all changes.

Deploy in the Eclipse client

To deploy in the Eclipse client, use the Help>Install New Software menu, add the update site (browsing to the folder that represents the Update Site project) and install the extension. In a company context you can also package the update site or provide it on a web server.

Deploy in the Server

Copy the generated folders and the site.xml file into the folder underneath the sites folder which is referenced by the provision profile js_custom_linktype.ini file in the  serverdeploy project folder. This would be easy to automate with a build script by the way.

Now select the folders provision_profiles and sites with all their content and copy them into the server folder /server/conf/ccm. Allow overwriting the folders, request a server reset and then restart the server.
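A minimal sketch of a build script for the two copy steps above (all paths, the install location and the use of a Unix-style cp are assumptions; adjust them to your platform and project layout):

# copy the generated update site content into the server deploy project
cp -r updatesite/site.xml updatesite/features updatesite/plugins serverdeploy/sites/js_custom_linktype/
# copy the deployment structure to the server configuration (overwriting existing content)
cp -r serverdeploy/provision_profiles serverdeploy/sites /opt/IBM/JazzTeamServer/server/conf/ccm/

The server reset can be requested with the usual admin command URL, for example https://clm.example.com:9443/ccm/admin/cmd/requestReset (using the example public URI root from the other posts), followed by a restart of the server.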

Add the New Link Type to the Quick Information Presentation

As Sam suggests in his comment below, you also want to add the new link type so that it shows up in the Quick Information presentation. This is done in Process Configuration>Project Configuration>Configuration Data>Work Items>Quick Information Presentations. It allows the Quick Information section of the work item summary page to show a count of, and a quick link to, the new links in a work item. This would look like below:

Quick Information Presentation


Admire your work

You can now admire the result of your work in the work item editors. If not, check your deployment setup. As usual, the first attempt was, of course, on a test server so as not to affect the production system until you have perfected your deployment process.

 Too many Link Types! What to do?

If you have created all the new link types your business demanded, users might start to complain that there are too many link types and many are not needed anyway. You knew that this would be coming and decided long ago to review the article Customization of work item editor presentation to show or hide link types in Rational Team Concert to understand how to fix this issue once needed.

Summary

That was easy, wasn’t it? Now you can create behavior based on the link type.

I have tested it against a test server on Tomcat and Derby. There is no real code this time. However, as always, I hope the post is an inspiration and helps someone out there to save some time. If you are just starting to explore extending RTC, please have a look at the hints in the other posts in this blog on how to get started.

Give Me A REST


Recently I worked with OSLC/REST and discovered a nice REST client that I find really useful. I thought I would share it with you here.

If you have found tools that help you with this and want to share them, please comment on this post. Please use English, describe the tool, what it does and why it is useful. If possible, provide the URL in the comment as well. This will help me distinguish the valuable information from the occasional spam that slips through.

* Update * See Postman as another alternative. Links can be found below.

I have actually not done that much work with OSLC/REST in the past; I worked with the Java API’s most of the time. I have, however, done the Open Services for Lifecycle Collaboration Workshop in the past and also helped delivering it in classes.

The workshop suggests using Firefox and one of the available REST client add-ons. I did exactly that and it was a pain in the, err.., neck. Why? The whole URL was maintained in one line which, every time I changed the focus, snapped back to the front position. I found myself frantically scrolling back and forth trying to find the last edit location. In general, the display did not give me a good way to understand what the request did and whether there were issues with it. Not pretty. I tried out alternatives, but I couldn’t find anything satisfying.

I decided to try a different browser – despite the fact that I really like Firefox.

I looked into Chrome, which I recently started to use for JavaScript based Attribute Customization as described in this Wiki page and the Process Enactment Workshop (Lab 5). It suits me way better for JavaScript debugging, as I find it a lot easier to find the script, compared to Firefox Firebug.

The Nugget – Advanced REST Client

I discovered the Advanced REST client. And it’s a nugget. A really big one actually, I think. You can simply download and install it in Chrome and have it in your Apps tile in the bookmarks bar.

AppBookMarks

This makes it easy to reach and does not take away a lot of real estate.

Now what are the things that distinguish this from other REST tools I have seen so far?

URL Management

The URL management is really nice as it basically allows you to work in two modes.

There is the traditional mode where you have the whole URL in the URL field. This is most efficient to use if one has a complete URL available and wants to copy/paste it.

Traditional URL Management

The first really great difference, however, is the small triangle in front of the URL field. It allows you to automatically decompose the URL into its interesting parts with respect to REST and also to easily manage the query parameters.

Note: I found some URLs where the decomposition does not work as expected. You can still use the feature, but unfolding then needs some manual work. I hope this gets corrected soon.

Decomposed URL

This is very useful, as it allows you to focus on the parts that are really interesting and removes the need to parse the URL visually. It is really easy to edit, add and remove query parameters to create the needed content.

There are also several ways to help with encoding and decoding the request data.

The same is available for the request headers.

Request Header

It is possible to use the Raw mode to copy and paste complete headers, and it is possible to switch to a Form view like the one above that works similarly to the query parameter section.
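As an example, a typical OSLC discovery request that one might build in such a client looks like this (the server URI is the example one used elsewhere on this blog):

GET https://clm.example.com:9443/ccm/rootservices
Accept: application/rdf+xml
OSLC-Core-Version: 2.0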

 Manage Projects And Save Requests

The second really useful feature is the ability to manage requests and to save them in projects. This makes it possible to store requests that work, or that are under construction, for later use.

Manage And Save Requests

While I was looking into OSLC, I had issues with my requests several times, and I found myself maintaining my URLs in a text file while exploring. Being able to save the requests makes this a lot easier, and it is no longer necessary to switch between applications.

If something works, it can be kept and changed to work towards the final goal.

You can have multiple projects to organize the requests, for example one for each project you work on.

Other REST Tools

I found another REST tool that has similar capabilities.

Postman is available as an application (online usage) and as a packaged app. It provides similar capabilities, except that I miss the encoding/decoding option. The advantage of Postman is that it keeps a request history, which is easier to use than saving the change every time.

Summary

The Advanced REST client together with Chrome made my life a lot easier and I can only recommend using it. If you do, give it a big thumbs up and donate to help support it.

As always I hope this is useful to the Rational Team Concert and CLM users out there.

New Version – Does Your Backup Still Work?


Just upgraded? Does your backup really still work?

Upgrading to new versions of the Jazz based Collaborative Lifecycle Management solution is important. However, the upgrade does not necessarily stop with the upgrade process. It is also very important to check if the backup procedure still works and covers all required data.

Since RTC 1.0 was released, the Jazz based solutions have undergone several changes in the storage architecture, adding new applications, databases and index files that require back up.

Adjust and check the backup procedure for version 5.x.

Version 5.0 is another case where this happens. The Requirements Management application Rational Doors Next Generation now gets its own database storage and index files. Before version 5.0 the JTS was used to store and index the RM data. It is important to add the database and the index file location to your backup.

On the Backup CLM Deployment Wiki page we try to explain the backup steps for your solution. The page has been updated for version 5.0. Please carefully check whether your backup still works. It is also a good idea to try a restore on a test system, to make sure your data is valid and can be used.

JMX Monitoring of a CLM Tomcat Server only with the CLM Webinterface


Stefan is doing some amazing work here and I think you might be interested. This is an example of server monitoring in a dashboard.

Steve Blog

[Update: Source code now with garbage collector]
Sometimes it is necessary to track down CLM performance issues. Monitoring OS resources like CPU, I/O, memory etc. is not really helpful, because Java is like a black box for an OS like Linux. With JMX you can look “behind the scenes”.

You can monitor the heap, threads, memory and other things of a CLM Tomcat server. With the help of some JMX settings you can access this information with jconsole. jconsole is included in every Java JDK (not JRE) since 1.5.
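For reference, remote JMX access for jconsole is typically enabled with JVM system properties like the following; this is a minimal, unauthenticated sketch that should only be used on test systems, and the port number is an example:

-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9999
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false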

But JMX has some disadvantages:

  • Only Java
  • Problems with firewalls. Production CLM servers especially are in a protected environment, so it is nearly impossible to get access with your jconsole.
  • “All or nothing” protection

Today we will create an OpenSocial widget that will show the number of running threads, the heap size, CCM requests and garbage collector activity with the help of JMX…


Beware of the Underscore


I ran into a funny situation today and that cost me a lot of time. Maybe this post helps other users to avoid the issue.

This has nothing to do with extending RTC, but it shows that the devil is in the details, so I wanted to share it.

Do you know which characters you can use in host names? I obviously didn’t and that cost me 2 hours.

While trying to finalize a workshop, I was blocked by an odd error message during the setup. I had never seen this error before, and I had installed the same version last week – several times. The error said: “An unspecified server error has occurred”.

hostname_illegal

When I continued to the next page of the wizard, no public URI was visible. I was very confused. I looked into the jts log file and there was an exception that complained about an illegal host name. The host name I used was prod_a. I am relatively sure that I had used the same name in the past, without issues. It was still in my hosts file on my machine.

So I blamed auto update, new Java versions and all other potential root causes for it. Trying to find a potential reason and workaround, I installed and set up RTC over and over. I tried different versions of RTC, to no avail. I even tried localhost – but the old name was in the configuration somewhere, so this did not fly either. Finally I did a full new install with localhost and that worked.

Since I had also changed the hosts file, I blamed myself for some issue with editing, tidied that up a bit and tried the original name prod_a again. Same result as before.

I finally gave in and looked at the Wikipedia entry on host names. Only a-z, 0-9 and - (hyphen) are allowed in host names. I was not aware of that fact, and others might not be either.

The disturbing issue is that the browsers I used do not have any problem with the host name. They use it, and Tomcat works with it too. It is only in the JTS server, when the name is validated, that an exception is thrown. This is not a defect in the programming: basic Java classes for managing URIs throw the exception. The only thing that could be done in the application would be a better error message – and a hint to Wikipedia.
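The behavior is easy to reproduce with java.net.URI. The following small sketch only illustrates the point; it is not the actual JTS validation code:

import java.net.URI;
import java.net.URISyntaxException;

public class HostNameCheck {
    public static void main(String[] args) throws Exception {
        URI bad = new URI("https://prod_a:9443/jts");
        // The underscore makes "prod_a" an invalid host name, so URI does not
        // recognize a server-based authority and getHost() returns null.
        System.out.println("host = " + bad.getHost()); // prints: host = null

        try {
            // Explicitly requesting a server-based authority throws.
            bad.parseServerAuthority();
        } catch (URISyntaxException e) {
            System.out.println("rejected: " + e.getMessage());
        }

        // A hyphen is allowed, so this works as expected.
        System.out.println("host = " + new URI("https://prod-a:9443/jts").getHost()); // prints: host = prod-a
    }
}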

My learning: what could I have done differently?

  • Trust the error messages and log files and investigate in that direction
  • Read the available information on the internet, even if you think you already know all the facts
  • Regardless of how often you have done a task, you can still run into trouble

I hope this was at least interesting to read and helps others who run into something similar and are smart enough to look on the internet for solutions.