Registering Custom Resource Intensive Scenarios to CLM Applications

There is no such thing as limitless computing power. This unfortunate truth can cause problems running the CLM and other tools as usage grows. To understand what the systems actually do under heavy load, more and more monitoring was introduced over the last years. Resource intensive scenarios were identified, and the CLM tools can record information about their frequency and duration. Plan loading and SCM compare workspace are examples built into the product.

Custom Resource Intensive Scenarios

In addition to resource intensive scenarios that are built in, it is also possible to introduce custom resource intensive scenarios. Some examples are:

  • Custom automation that executes long-running operations on work items, SCM data, requirements, or test artifacts. Typical scenarios are custom export/import, mass updates, and custom analysis of source code, baselines, and linked work items.
  • Follow-up actions
  • Long-running custom dashboards

This is by no means a comprehensive list. It is also possible to bring your clients and servers to their knees with custom themes that do not scale, with work item attribute customization that adds more and more custom attributes, JavaScript providers, value providers with thousands of values to choose from, and other customization.

What is your server up to?

When users complain about performance problems, it is hard to find the root causes, even if a server is obviously overloaded, because a typical server does so many things.

The monitoring that has been added over time has helped, but it is still hard, and sometimes it is even hard to understand the situation at all. As an example of how complex this can become: users complained about performance.

Our performance architect looked at the server load and the build load and found a huge number of calls that we were not able to account for. The server was unarguably under heavy load created by builds, but the build users and SCM users were not complaining. The developers we talked to had no real issues. Some users at a different location, who used work items and ran work item queries, did.

Because we could not explain the inconsistent feedback, I finally went to the location where the users were complaining. I met the users, followed their day-to-day work, and found the work item performance unacceptable. The web browser was even locking up on them.

Knowing this, we were able to reproduce the use case and look into what happened. We found that loading a work item was slow, especially on slow laptops, because it had to load so many team areas and iterations. This was specific to how the project area was configured and used.

We also found that the browser flooded the server with requests that were definitely not part of what the product UI sent. This forced the web browser to process and cache thousands of calls, reserving more and more memory and exhausting the CPU of the relatively weak laptops used by the complaining users.

The final verdict was that a custom extension to the theme created all these calls. It took us weeks, and it was luck that we found this out. If we had known there was such an extension, we would have found it a lot faster. The server was still under a heavy build load, but the reported performance issue was not related to that.

Needless to say, this extension was also deployed in other environments. Whether it had a detrimental impact there depended heavily on the timeline and iteration structure of a project area: the more team areas and iterations, and the deeper the structure, the worse.

It would have helped if we could have seen the extension working, and seen how long its operations took.

Registering Custom Resource Intensive Scenarios

The same mechanism that is used to register resource intensive scenarios in the product code can be used to register custom resource intensive scenarios. Unfortunately, we were lacking a good description and supporting code that we could provide to customers for use in their extensions.

This has now changed. Some colleagues and I had, independently, started creating a customer-usable description of how to register resource intensive scenarios. A colleague wrote some cURL code to do this; I wrote Java code and started creating a presentation. When we found out, we decided to combine the efforts. Here is the result.

The Deployment Wiki page Register Custom Scripts as a Resource Intensive Scenario explains, using an example, how the API works in general. It also explains how to retrieve and monitor this information.

It then provides example code to perform this using cURL, Eclipse Lyo OSLC4J based Java code, and RTC Plain Java Client Libraries based Java code.

The Java code comes with main classes to run it. It is basically example code, but it can also be used directly in command line based automation.

Open Source Code

Disclaimer and Download

Any code downloadable or accessible in this post is provided as is, without support, and is used at your own risk. Part of the code was developed in Java using Eclipse and is based on the Eclipse Lyo Client. It was published as open source, under the Eclipse Public License – v 1.0, in the incredible (mostly German speaking) Jazz Community and can be found here: custom-expensive-scenario-notifier-oslc4j.

Another part of the code was developed in Java using Eclipse and is based on the Plain Java Client Libraries. This was published as open source, under MIT license, in the incredible (mostly German speaking) Jazz Community and can be found here: custom-expensive-scenario-notifier-plainjava.

See the Deployment Wiki page Register Custom Scripts As a Resource Intensive Scenario for the other examples.

How does it work?

There is basically a REST API to register the start and the stop of a scenario. All there is to do is register the start of the scenario at the beginning and register the stop after you are done. See Register Custom Scripts as a Resource Intensive Scenario for more details on the code.

What should your automation do?

If you have written automation tools or extensions, you should use the methods described in Register Custom Scripts as a Resource Intensive Scenario to register your extension as a resource intensive scenario. Add the code to register the start and stop in a way that allows for disabling it easily, as shown in the sketch below.

Monitor the various resource intensive scenarios over time. For a scenario that takes only a fraction of a second, you could temporarily disable the registration. Scenarios that take a second or longer should continue to be monitored.

Feedback

If you have questions about the custom resource intensive scenario code, ask them in the Jazz.net forum instead of commenting on the article or this blog post. Tag the question as a clm question and add the tag custom-resource-intensive-scenarios to mark it for the readers.

Summary

Please use the method above to enhance your automation and extensions to allow monitoring their duration, frequency and deviation.

As always I hope this helps users out there with the Jazz products.

Jazz Community Contributions

The Jazz Community has started sharing their tools here: http://jazz-community.org/. The code for their tools can be found here.

There is a very active Jazz user community of members of several companies in Europe that are heavily using the Jazz products such as Rational Team Concert, Rational Quality Manager and Doors Next Generation.

The community members try to meet to share their experience with using, administrating and running the Jazz tools in their environments. It became clear that the different companies and community members face similar challenges and that it would be beneficial if they could share tools they created to make running such an environment easier.

The community has now started sharing their tools in this community project and in this code repository.

(Screenshot: the Jazz Community site)

Some of the tools have already been shared on other sites. I have linked the ones I am aware of to the Interesting Links page in the ‘Extensions Provided by the Community’ section; the code for some of them is already available in the community repository.

I am looking forward to seeing more community-created tools soon. Visit the community to find out what they have to offer.

Last but not least, a special thanks to Dani for getting this awesome user group started, and to the members of said community for their spirit, engagement and willingness to contribute and help each other. You know whom I address here!

DIY stream naming convention advisors

Organizations might be interested in enforcing naming conventions for streams, or in making sure that stream names are unique. This post shows simple example advisors that check for a naming convention and make sure the stream name is unique.

License

The post contains published code, so our lawyers reminded me to state that the code in this post is derived from examples from Jazz.net as well as the RTC SDK. The usage of code from those examples is governed by this license. Therefore this code is governed by this license. I found a section relevant to source code at the end of the license. Please also remember, as stated in the disclaimer, that this code comes with the usual lack of promise or guarantee. Enjoy!

Just Starting With Extending RTC?

If you are just getting started with extending Rational Team Concert or creating API based automation, start with the post Learning To Fly: Getting Started with the RTC Java API’s and follow the linked resources.

You should be able to use the following code in this environment and get your own automation or extension working.

The example in this blog post shows RTC Server and Common API.

Compatibility

This code has been used with RTC 6.0 and is prepared to be used with RTC 6.0.x with no changes. It is pretty safe to assume that the code will work with newer versions of RTC; it should run with any version that provides the operation ID.

The code in this post uses common and server libraries/services that are available in the RTC Server SDK.

Download

The code can be downloaded from here. Please note that access to Dropbox, and therefore to the code, might be restricted in your company or download location.

Solution

For some versions now, RTC has provided the operation ID com.ibm.team.scm.server.modifyStream. The operation allows writing RTC advisors (preconditions) and participants (follow-up actions) that trigger on saving a stream. The data that is made available in these operations allows detecting a variety of change types, including a rename of a stream.

RTC (version 6.x) ships the following out of the box preconditions for this operation ID:

  1. Ensure that snapshot names are unique for streams in the process area.
  2. Prevent Adding Component to Stream When Component and Stream Owners are Different.
  3. Prevent Adding User Owned Component
  4. Restrict Stream Visibility to be set to Public

The implementing classes are shipped with the SDK:

  1. com.ibm.team.scm.service.internal.process.advisors.UniqueBaselineSetNameAdvisor
  2. com.ibm.team.scm.service.internal.process.advisors.StreamAddComponentAdvisor
  3. com.ibm.team.scm.service.internal.process.advisors.StreamAddUserOwnedComponentAdvisor
  4. com.ibm.team.scm.service.internal.process.advisors.StreamVisibilityAdvisor

Looking at the source code can be a great inspiration.

This post shows two advisors:

  1. StreamNamingPatternAdvisor
  2. UniqueStreamNameAdvisor

StreamNamingPatternAdvisor checks for a simple naming convention and prevents creation or renaming of streams that violate the naming convention.

UniqueStreamNameAdvisor checks if the name of the stream is already used by another stream and prevents saving the stream if that is the case. Please note that the operation com.ibm.team.scm.server.modifyStream does not trigger if someone creates or saves a repository workspace. This means it is not possible to provide this capability for workspaces.

General Approach

The operation ID provides a special interface IModifyStreamOperationData which makes the type of change and the related data available.

(Screenshot: the IModifyStreamOperationData interface)

The code for the advisors first checks if the operation data provided is of the type IModifyStreamOperationData. If not, the operation terminates. If so, it casts, to be able to use the interface to access the change details. If the data indicates anything other than an IModifyStreamOperationData.CHANGES operation type, the advisor terminates.

IModifyStreamOperationData opData = (IModifyStreamOperationData) operationData;
if (!opData.isOperationType(IModifyStreamOperationData.CHANGES)) {
	// Nothing to do
	return;
}

If the operation is for such a change, the operation data can contain several different changes, so the code gets all the change IDs for the changes in a set. The code then iterates the change IDs and gets each change, using its ID, as an IStreamChange. This interface allows getting more information, for example the identifier of the change. In our case, if it is a name change IModifyStreamOperationData.NAME, the advisors are responsible for dealing with it.

	// Get the set of change IDs and look through the changes
	Set<String> changeIDs = opData.getChangeIds();
	for (String changeID : changeIDs) {
		IStreamChange change = opData.getChange(changeID);
		// Is this a stream name change?
		if (change.getIdentifier().equals(IModifyStreamOperationData.NAME)){
		........

The interface IStreamChange itself does not provide a whole lot of data to find out what actually happened, but the identifier gives a good idea. IStreamChange is implemented by several interfaces that provide more information.

(Screenshot: the IStreamChange interface hierarchy)

  • com.ibm.team.scm.common.process.IStreamComponentChanges is an interface to describe component changes on the stream
  • com.ibm.team.scm.common.process.IStreamFlowChange is an interface to describe flow target related changes on the stream
  • com.ibm.team.scm.common.process.IStreamPropertyChange is an interface to describe property changes such as a name change on the stream

The code can now cast to the specific interface IStreamPropertyChange and access the current and the new value of the property change. These values are the current name and the new name of the stream.

With the new name the code can run a basic validation. If the validation succeeds, the advisor terminates, otherwise it complains and fails the advisor.

The code below shows the whole advisor implementation. In this example the code looks for a very simple prefix ‘TEST_’ on the stream name. If the prefix is present the validation succeeds; otherwise it fails. This validation can be replaced by a more complex and potentially configurable method.

/*******************************************************************************
 * Licensed Materials - Property of IBM
 * (c) Copyright IBM Corporation 2017. All Rights Reserved. 
 *
 * Note to U.S. Government Users Restricted Rights:  Use, duplication or 
 * disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
 *******************************************************************************/
package com.ibm.js.scm.naming.advisor.service;

import java.util.Set;

import org.eclipse.core.runtime.IProgressMonitor;

import com.ibm.team.process.common.IProcessConfigurationElement;
import com.ibm.team.process.common.advice.AdvisableOperation;
import com.ibm.team.process.common.advice.IAdvisorInfo;
import com.ibm.team.process.common.advice.IAdvisorInfoCollector;
import com.ibm.team.process.common.advice.runtime.IOperationAdvisor;
import com.ibm.team.repository.common.TeamRepositoryException;
import com.ibm.team.scm.common.process.IModifyStreamOperationData;
import com.ibm.team.scm.common.process.IStreamChange;
import com.ibm.team.scm.common.process.IStreamPropertyChange;
import com.ibm.team.scm.service.internal.AbstractScmService;

@SuppressWarnings("restriction")
/**
 * This advisor tests the name of a stream against a naming convention pattern.
 * If the stream name does not match the pattern, the stream can not be saved.
 * 
 * Note that this extension point does not get any change events for repository workspaces. It only works for streams.
 * 
 * 
 * There are several examples for extensions that are shipped with the product that can be looked into in the SDK.
 * @see com.ibm.team.scm.service.internal.process.advisors.UniqueBaselineSetNameAdvisor
 * @see com.ibm.team.scm.service.internal.process.advisors.StreamVisibilityAdvisor
 * @see com.ibm.team.scm.service.internal.process.advisors.StreamAddComponentAdvisor
 * @see com.ibm.team.scm.service.internal.process.advisors.StreamAddUserOwnedComponentAdvisor
 *
 */
public class StreamNamingPatternAdvisor extends AbstractScmService implements
		IOperationAdvisor {

	public static final String REQUIRED_PREFIX = "TEST_";
	public static final String STREAM_NAMING_ADVISOR = "com.ibm.js.scm.naming.advisor.service.streamNaming";

	@Override
	public void run(AdvisableOperation operation,
			IProcessConfigurationElement advisorConfiguration,
			IAdvisorInfoCollector collector, IProgressMonitor monitor)
			throws TeamRepositoryException {
		Object operationData = operation.getOperationData();
		if (!(operationData instanceof IModifyStreamOperationData)) {
			// There are some stream modify operations that are not process
			// enabled so they do not provide any data.
			return;
		}
		// com.ibm.team.scm.server.modifyStream is only called for streams.
		// Saving a repository workspace (which is really like a stream) does
		// not trigger the extension point.
		// This especially implies that it is not possible to test for unique
		// names across streams and workspaces.
		IModifyStreamOperationData opData = (IModifyStreamOperationData) operationData;
		if (!opData.isOperationType(IModifyStreamOperationData.CHANGES)) {
			// Nothing to do
			return;
		}

		// Get the set of change IDs and look through the changes
		Set<String> changeIDs = opData.getChangeIds();
		for (String changeID : changeIDs) {
			IStreamChange change = opData.getChange(changeID);
			// Is this a stream name change?
			if (change.getIdentifier().equals(IModifyStreamOperationData.NAME)) {
				/***
				 * Get the change details based on the change we identified
				 * 
				 * The supported types of changes for this extension point are
				 * 
				 * @see com.ibm.team.scm.common.process.IStreamPropertyChange
				 * @see com.ibm.team.scm.common.process.IStreamComponentChanges
				 * @see com.ibm.team.scm.common.process.IStreamFlowChange
				 * 
				 *      A name change is stored as IStreamPropertyChange
				 */
				if (change instanceof IStreamPropertyChange) {
					IStreamPropertyChange streamPropertyChange = (IStreamPropertyChange) change;
					String newName = streamPropertyChange.getNewValue()
							.toString();
					if (validateName(newName)) {
						// We are fine
						return;
					}
					String description = "The stream name violates the naming conventions stream name must have prefix '"
							+ REQUIRED_PREFIX + "'!";
					String summary = description;
					IAdvisorInfo info = collector.createProblemInfo(summary,
							description, STREAM_NAMING_ADVISOR);//$NON-NLS-1$
					collector.addInfo(info);
				}
			}
		}
	}

	/**
	 * Validate the name against the convention
	 * 
	 * @param name
	 * @return
	 */
	private boolean validateName(String name) {
		// Implement your own naming convention here
		if (name.startsWith(REQUIRED_PREFIX)) {
			return true;
		}
		return false;
	}
}

The UniqueStreamNameAdvisor only replaces this validation with a more complex one that queries the SCM system for streams with the given name.

The code creates workspace search criteria to search for a stream that already has the name this stream is about to get. If one is found, the advisor fails. The search is limited to streams, because it is not possible to do this check for repository workspaces, and a conflict with a workspace name would be hard to handle.

/**
 * Validate the name against the convention a stream must have a unique name
 * 
 * @param name
 * @return
 * @throws TeamRepositoryException
 */
private boolean validateName(String name) throws TeamRepositoryException {
	// Get the query service and set criteria
	IScmQueryService queryService = getService(IScmQueryService.class);
	final IWorkspaceSearchCriteria criteria = IWorkspaceSearchCriteria.FACTORY
			.newInstance();
	criteria.setExactName(name); // Look for the same name
	// Only for streams, we can not prevent creation of workspaces with the
	// same name either and a user could create a workspace with the same
	// name and prevent us from saving the stream later.
	criteria.setKind(IWorkspaceSearchCriteria.STREAMS);
	// We only have to find one other
	ItemQueryResult result = queryService.findWorkspaces(criteria, 1, null);
	if (!result.getItemHandles().isEmpty()) {
		return false;
	}
	return true;
}

Consolidation

It is relatively obvious and straightforward to refactor the solution shown here: create an abstract class that contains all the shared code of both classes, with an abstract method validateName(); then create implementations providing the method validateName() and register these implementation classes in the plugin.xml. That way it is simple to provide new variants for various naming conventions, as sketched below.

Code Structure

The code structure follows the general approach that has been used in this blog for quite some time now. The code comes in the following Eclipse projects used for the purpose explained below.

com.ibm.js.scm.naming.advisor.common is the project that defines the Jazz component to be used by this extension. It is a separate project so that it could also be used by an aspect editor, should one be implemented.

com.ibm.js.scm.naming.advisor.launches is a project that contains the launches that can be used to debug the extension with Jetty.

com.ibm.js.scm.naming.advisor.service is the project that contains the server side extension code.

com.ibm.js.scm.naming.advisor.service.feature defines the feature for the server side deployment of the extension.

com.ibm.js.scm.naming.advisor.service.updatesite contains the update site to build the server side extension.

com.ibm.js.scm.naming.advisor.service.serverdeploy contains the provision profile as well as the folder structure needed for the final deployment. To prepare deployment, build the update site and copy the plugins and features folders into the js_scm_naming_advisor folder. To deploy, copy the provision_profiles and sites folders with all their contents into the server/conf/ccm folder of the RTC server.

(Screenshot: the code structure of the Eclipse projects)

Configuring the precondition

When running the custom advisor in Jetty, or once it is finally deployed, the advisor can be configured as usual.

(Screenshot: configuring the precondition)

If configured, the advisors prevent saving streams that do not conform to the specific naming convention. The advisor displays the problem like below.

(Screenshot: the advisor failing for a non-unique stream name)

Summary

As always I hope this helps someone out there with running RTC. Please keep in mind that this is as usual a very basic example with no or very limited testing and error handling. Please see the links in the getting started section for more examples, especially if you are just getting started.

A component naming convention advisor

Organizations sometimes would like to implement naming conventions for components, for example based on the architecture. This post shows a simple example advisor that checks for a naming convention.

License

The post contains published code, so our lawyers reminded me to state that the code in this post is derived from examples from Jazz.net as well as the RTC SDK. The usage of code from those examples is governed by this license. Therefore this code is governed by this license. I found a section relevant to source code at the end of the license. Please also remember, as stated in the disclaimer, that this code comes with the usual lack of promise or guarantee. Enjoy!

Just Starting With Extending RTC?

If you are just getting started with extending Rational Team Concert or creating API based automation, start with the post Learning To Fly: Getting Started with the RTC Java API’s and follow the linked resources.

You should be able to use the following code in this environment and get your own automation or extension working.

Compatibility

This code has been used with RTC 6.0 and is prepared to be used with RTC 6.0.x with no changes. It is pretty safe to assume that the code will work with newer versions of RTC; it should run with any version that provides the operation ID.

The code in this post uses common and server libraries/services that are available in the RTC Server SDK.

Download

The code is included in the download in the post DIY stream naming convention advisors.

Solution

Over the last few versions of RTC, several operations have been made available for operational behavior. At least since RTC 5.0.2, the operation to modify a component is available with the operation ID com.ibm.team.scm.server.component. This allows creating advisors/preconditions as well as participants/follow-up actions that operate on such events. An example shipped with the product is implemented in the class com.ibm.team.scm.service.UniqueComponentNameAdvisor.

There are examples shipped with the product in the SDK that you can look at for more sample code. For example: com.ibm.team.scm.service.internal.process.advisors.UniqueComponentNameAdvisor

The code below shows a very basic example of how to test the component name against a simple naming schema.

The important information to take away is that the information about the save operation is provided in a special interface, IComponentModificationData, which allows access to the type of the operation, to the old and new properties, and to the component directly.

(Screenshot: the IComponentModificationData interface)

So it is possible to find out which operation is performed on the component and, based on that, look at the properties the component has.

The code below does exactly that: it checks which operation is going on, then takes the new name of the component and checks it.

/*******************************************************************************
 * Licensed Materials - Property of IBM
 * (c) Copyright IBM Corporation 2017. All Rights Reserved. 
 *
 * Note to U.S. Government Users Restricted Rights:  Use, duplication or 
 * disclosure restricted by GSA ADP Schedule Contract with IBM Corp.
 *******************************************************************************/
package com.ibm.js.scm.naming.advisor.service;

import org.eclipse.core.runtime.IProgressMonitor;

import com.ibm.team.process.common.IProcessConfigurationElement;
import com.ibm.team.process.common.advice.AdvisableOperation;
import com.ibm.team.process.common.advice.IAdvisorInfo;
import com.ibm.team.process.common.advice.IAdvisorInfoCollector;
import com.ibm.team.process.common.advice.runtime.IOperationAdvisor;
import com.ibm.team.repository.common.TeamRepositoryException;
import com.ibm.team.scm.service.internal.AbstractScmService;
import com.ibm.team.scm.service.internal.process.IComponentModificationData;
import com.ibm.team.scm.service.internal.process.IComponentModificationData.OpType;

/**
 * An example advisor that checks the name of a component for some naming convention
 *
 * Also @see com.ibm.team.scm.service.UniqueComponentNameAdvisor for product example code
 */
@SuppressWarnings("restriction")
public class ComponentNamingAdvisor extends AbstractScmService implements
		IOperationAdvisor {
	public static final String REQUIRED_PREFIX = "TEST_";
	public static final String COMPONENT_NAMING_ADVISOR = "com.ibm.js.scm.naming.advisor.service.componentNaming";

	@Override
	public void run(AdvisableOperation operation,
			IProcessConfigurationElement advisorConfiguration,
			IAdvisorInfoCollector collector, IProgressMonitor monitor)
			throws TeamRepositoryException {

		IComponentModificationData data = (IComponentModificationData) operation
				.getOperationData();
		if (data == null) {
			throw new TeamRepositoryException("Missing component data"); //$NON-NLS-1$
		}

		String compName;

		if (data.getOpType() == OpType.CREATE) {
			compName = data.getNewName();
		} else if (data.getOpType() == OpType.RENAME) {
			compName = data.getNewName();
		} else {
			return;
		}
		if (validateName(compName)) {
			// Nothing to do
			return;
		}
		String description = "The component name violates the naming conventions component name must have prefix '"
				+ REQUIRED_PREFIX + "'!";
		String summary = description;
		IAdvisorInfo info = collector.createProblemInfo(summary, description,
				COMPONENT_NAMING_ADVISOR);//$NON-NLS-1$
		collector.addInfo(info);
	}

	/**
	 * Validate the name against the convention
	 * 
	 * @param compName
	 * @return
	 */
	private boolean validateName(String compName) {
		// Implement your own naming convention here
		if (compName.startsWith(REQUIRED_PREFIX)) {
			return true;
		}
		return false;
	}
}

Summary

This is as usual a very basic example with no or very limited testing and error handling. See the links in the getting started section for more examples. As always I hope this helps someone out there with running RTC.

Build On State Change Work Item Save Participant

In the unlikely event you have missed it, or just to complete the hit list if you search for examples on this blog: the Build On State Change Work Item Save participant/follow-up action is a complete example that is part of the Rational Team Concert Extensions Workshop.

The Build On State Change Work Item Save participant monitors state changes of configured work item types. If a qualifying change happens, it issues a build request for a configured build definition. The example comes as a complete package, including the configuration UI.

Just Starting with API work?

If you are just getting started with extending Rational Team Concert or creating API based automation, start with the post Learning To Fly: Getting Started with the RTC Java API’s and follow the linked resources.

RTC Process Customization – What you can and cannot do

Rational Team Concert Process Customization – What you can and cannot do: that is the title of the webinar I presented two days ago.

If you are interested in my view on this, you can find the replay of the webinar here in the Rational Team Concert Enlightenment Series.

The slides are shared here.

Also see What API’s are Available for RTC and What Can You Extend? for more links.

How to filter RTC email notifications by project or by user

Spamming the inboxes of your project with notification mails? This is a reblog of Mark’s internal blog.

The RTC Mail Filtering Problem

A common requirement for Rational Team Concert administrators is the need to limit email notifications for a single RTC project or user. There are many scenarios which might drive this need. Here are some examples:

  • Thousands of work items are imported, and you do not want to trigger notifications for the new work items.
  • A new field is added and data from the old field is copied over, affecting many work items. Users should not be notified.
  • A subset of users now use a different RTC server and no longer want notifications from the old project, but they need to remain active in the old project.

Unfortunately, the current RTC product as of version 6.0.1 does not support this requirement. Notifications are controlled at the Jazz Team Server level. The JTS may control multiple RTC, Rational Quality Manager, and Rational Doors Next Generation repositories. Turning off notifications in the JTS turns off mail for all RTC, RQM, and RDNG projects from all repositories controlled by that JTS server. Mail generated for all of those projects while notifications are off in the JTS is lost.

This leaves administrators with a tough choice:  lose all mail for everything connected to the JTS, or live with excessive and unwanted notifications for a single project.

A Solution:  Milters

We have found and implemented a solution for this requirement: milters. Milters are plugins that add additional functionality to Sendmail. The milter-regex plugin permits filtering mail using regular expressions.

These are the overall steps for utilizing the regex milter:

  • Set up a machine with Sendmail and install the milter-regex plugin
  • Configure Sendmail to allow mail from the JTS server machine in /etc/mail/access
  • Develop a set of regular expressions which cause mail from a particular project or user to be found and filtered out and update the /etc/mail/milter-regex.conf configuration file
  • Restart the milter-regex service
  • Edit the email settings of the JTS to point to your Sendmail machine as the mail SMTP Server

You can easily add and remove projects and users from the filtering list by editing the /etc/mail/milter-regex.conf file.  You can turn off filtering completely by restoring the JTS SMTP Server value back to its original setting.

Regular Expressions

You’ll have to examine your email templates to determine the project and user information for the various email formats.  Here’s an example of the configuration statements for our email templates to filter out all mail for project “Project XYZ”:

reject "Mail filtered for project: Project XYZ"

body ,Team Area:.* Project XYZ,i

body ,Project Area:.* Project XYZ,i

Both “body” statements are required.  The “reject” statement defines a message which is logged into the /var/log/messages log file when a piece of mail is filtered out.

This is an example of filtering for a single user:

reject "Mail filtered for user: Mark E. Ingebretson"

body ,The user 'Mark E. Ingebretson' made a .* request,i

body ,Mark E. Ingebretson mentioned you in,i

body ,Mark E. Ingebretson.*changed on,i

If you are a user of our IBM Systems servers and need email filtering, you can submit an ITHELP request.

Other Approaches

There is another approach from Sam provided on the Jazz.net forum here: Manage User E-mail Preferences for Mass Updates. It has been sitting on my Interesting Links page for a while. Time to show it here.

A new solution is available for RTC 6.0 starting with iFix03 if you are using the Java API: use the Skip Mail Save Parameter.

Summary

I found the topic very interesting, and related questions also come up on Jazz.net, so I decided to re-blog and promote this when Mark showed it to me. I hope this solution helps RTC administrators all over the world address this important requirement.

JavaScript Attribute Customization – Where is the log file?

A lot of users try JavaScript based attribute customization and often run into issues. They ask on the Jazz.net forum to get the issue solved. Unfortunately, the questions usually lack the information required to help. This post explains how to retrieve the log information needed to provide it.

Where are the Script Log files?

JavaScript attribute customization can use the console to log text messages into a log file.

console.log("My message");

The question is, where are the log files?

The script context

JavaScript attribute customization scripts are, as far as I can tell, run in one of the following contexts:

  1. The Eclipse Client
  2. The Web Browser
  3. The RTC Server

Depending on the context in which a script runs, the log information can be found in a log file that is created and maintained by either

  1. The Eclipse Client
  2. The RTC Server

Please note that the logging information is not in the RTC Application log file CCM.log.

The Jazz.net Wiki entry about attribute customization provides hints about how to log data and how to debug scripts in the section Debugging Scripts. Similar information is provided in the Process Enactment Workshop for the Rational solution for Collaborative Lifecycle Management.

Unfortunately, both only talk about how to find the server log information for Tomcat. Since WebSphere Application Server and WAS Liberty are also valid options, how can one find the log files in those cases?

Find The Eclipse Workspace Log

As background, note that the Eclipse Client as well as the RTC Server are based on Eclipse technology. This common technology is used to log the data and determines the log file location and name.

Each running Eclipse has a workspace location and stores meta data and log information in this workspace. The workspace is basically a folder in the file system. The metadata is stored in a sub folder with the name .metadata. The log file is in this folder and named .log.

For the RTC Eclipse client and for scripts that run in this context, open the Eclipse workspace folder that is used and find the .log file in the .metadata folder.

For the RTC server, the easiest way I have found to locate this workspace and the enclosed log file is to search for the folder .metadata. For Tomcat and WAS Liberty standard installs, go to the folder where the RTC server was installed and then into the sub folder server. From there, search for the folder .metadata.

For WebSphere Application Server (WAS), go to the profile folder for the profile that includes the RTC server deployment and search there.

Here is an example search for a test install based on WAS Liberty:

(Screenshot: log file locations for a WAS Liberty based test install)

Note that every Jazz application has its own Eclipse workspace with a metadata folder and workspace log file. The one interesting for RTC attribute customization is the workspace of the RTC server. The folder structure includes the context root of the Jazz application. Each application has a different context root, which typically matches the name prefix of the application war file. The RTC application typically has the context root and war file name prefix ccm. Open the workspace for this application and find the log file.

Looking Into the Log File

You can look into the log file. Please make sure to use a tool that does not block the server from writing to the log file while you are browsing its content. The log file is kept open by the server while it is running, and blocking it from writing is not desirable. Use more or an editor that reads the file without locking it. For Windows users: Notepad does lock the file for writing; use a different tool such as Notepad++.

The Process Enactment Workshop for the Rational solution for Collaborative Lifecycle Management provides some examples of what logs look like and how log entries can be created. If you can’t find the log entry you are looking for, always check the server log as well. Maybe the script runs in a different context than you expect.

Here is an example of log entries:

(Screenshot: example log entries)

Load Errors

It might happen that an expected log entry is not found in any of the log files. In this case, make sure to check for script loading errors, as well as exceptions thrown at the time the script was supposed to run.

Load errors can be caused by different reasons.

One reason can be that attachment scripts are not enabled. These days there are enough indicators in the attribute customization editor in Eclipse that a user should spot this.

Another reason can be that the script is syntactically incorrect and cannot be interpreted as valid JavaScript. One cause I have seen recently for a script not being recognized as valid is an incorrect encoding. If an external editor is used to edit the script and the script is then loaded from the file, make sure that the script has a correct UTF-8 encoding. If in doubt, change the encoding to UTF-8 and reload the script.

Why would the encoding be important? The encoding controls the format of the content. It is hard to determine the encoding from a file, and it is often not checked. Expecting a specific encoding but loading a file that was encoded in a different one can lead to unexpected content. This can cause the JavaScript not to be recognized as JavaScript, and the load fails.

Debugging vs. Logging

Using the debugging techniques explained in the Wiki entry in the section Debugging Scripts and in the Process Enactment Workshop for the Rational solution for Collaborative Lifecycle Management should be the preferred option and is usually more effective.

Looking at the logs is still a valid option, especially to be able to log execution times and to find script loading issues and for scripts that are run in the background in the server context, such as conditions.

I found using the Chrome browser and its built-in Developer Tools to be most effective. The scripts can easily be found in the Sources tab under the node (no domain). Make sure to enable the debug mode as explained here: Debugging Scripts.

(Screenshot: debugging a script in the Chrome Developer Tools)

Summary

This post explains how to find the log files that contain log information written by JavaScript attribute customization scripts. I hope that this helps users out there and makes their work a little bit easier.

Publishing XML and Other Data With the WAS Liberty Profile

Since CLM now ships the WebSphere Application Server Liberty Profile by default, you might want to be able to publish XML and other data in a way similar to the easy way that was available in Tomcat, by just using the application server that is already there. This avoids having to set up an additional web or application server.

Problem

Sometimes it is necessary to provide access to some files using the HTTP/HTTPS protocol.

This is interesting for the Process Enactment Workshop, or if you want to publish your own XML data to be used with HTTP Filtered Value Sets, host custom Doors Next extensions, or publish build results.

It is convenient if there is already an application server available to host these files and make them accessible, instead of having to set up a web server. This especially holds if it is necessary to provide the data using HTTPS, and not HTTP, because browsers these days refuse to process content that is mixed from HTTP and HTTPS sources. The latter is important for Doors Next Generation extensions. Not having to set up a new server also avoids having to create a trusted certificate for it. It reduces the additional failure points, backup and maintenance, and provides URI stability as well, since you do not want to change the URI of the hosted CLM applications anyway.

Tomcat

With Tomcat, it was relatively easy to do that, by just adding a folder under \tomcat\webapps and placing your files there. Assuming the folder name is myfiles, Tomcat then provides the files with the public URI root https://serverpath:port/myfiles/fileName.

Another way was to simply drop the file into the tomcat/webapps/ROOT/ or a sub folder, which has the same effect.

Is there a similar way to achieve this with the WAS Liberty Profile shipped with CLM?

Example Scenario

Let’s assume, like in the scenario explained in Publish and Host XML Data Using Tomcat – The Easy Way, we want to publish a file makers.xml to be accessible with a context root PEWEnactmentData on the server with the public URI root https://clm.example.com:9443/.

The expected outcome is to be able to access the XML file content using the URL https://clm.example.com:9443/PEWEnactmentData/makers.xml.

Solutions

There are two relatively simple solutions for WAS Liberty. The solutions below are based on information from Lars, one of our WebSphere experts, and the IBM WebSphere Application Server Liberty Profile Guide for Developers.

For the WAS Liberty Profile, all solutions require deploying an application. There are several ways to create and deploy an application in the WAS Liberty Profile.

  1. Deploy an application in some folder and declare the context root and location of the application in the server.xml or a specific location XML file
  2. Deploy an application in the dropins folder; for this to work, it is necessary to make sure the application server has dropins monitoring enabled

Deploying an application in this context means

  1. Creating a folder with some content in a location (or copying the folder into the location)
  2. Deploying a compressed folder with some content

Please note: any changes to any of the server configuration XML files are automatically picked up by the WAS Liberty Profile. There is no need to reboot the server for the steps below to work.

1. Deploying an Application – Standard

The trick in this solution is to pretend to deploy an uncompressed web application such as a WAR file. It is obviously possible to create a real WAR file or the structure that is contained in it, but that is not really necessary. See the last section if you are interested in deploying the application as a compressed WAR file.

1.1 Deploying the Application

The only thing necessary to trick the WAS Liberty Profile into making the data available with a correct context root is to create a folder with the context root as its name and the extension .war, and to make the application location known in an XML file.

In the default setup of the WAS Liberty Profile with CLM, after the first launch, all installed CLM applications end up in the folder <Install Folder>\server\liberty\servers\clm\apps. The screenshot below shows this structure.

(Screenshot: folder structure of the deployed CLM applications)

The applications are deployed as compressed WAR files and have been extracted to folders with the name of the application. As an example, the CCM application (RTC) comes shipped as ccm.war.zip and was extracted to a folder ccm.war. The context root of the CCM application is ccm, which is basically the folder name without the .war extension. The same pattern holds for all the other applications.

It is possible to create a folder to host the files to be published. The folder should be named <context root>.war to result in an application with the desired context root.

Files within this folder will be available with their file name.

A file in that folder with the name <filename> will be accessible as <public URI root>/<context root>/<filename>.

If the files are within sub folders, the folder path within the root folder and the file name compose the file name part of the URL. For example, if the root folder has a sub folder named myfiles containing a file test.xml, the file is accessible as <public URI root>/<context root>/myfiles/test.xml.

In our example, with the public URI root being https://clm.example.com:9443/, the folder name <context root> being PEWEnactmentData, and <filename> being makers.xml, the URL would be https://clm.example.com:9443/PEWEnactmentData/makers.xml.

1.2 Declaring the Application Location

The WAS Liberty Profile needs to know which application has to be published and where to find its resources. For this it defines an XML element to declare which application is located where: <application .../>. It supports various attributes and can come in many different forms. The most important attributes are

  • id: Must be unique and is used internally by the server
  • location: The path to the application resource
  • context-root: The context root of the application
  • name: The name of the application
  • type: Specifies the type of application (war or ear)

The location and the id are required, and either a name equal to the context root or an explicit context-root must be provided. The type can be omitted. I did not provide the id and it worked for me, but since the server requires it, it should be provided. Use the same value as the name or context root for the id.

It is necessary to add the application declaration to the server configuration, for example by adding it to the server.xml file. In a CLM deployment a file application.xml is included by the server.xml, declaring all the applications, and it makes sense to use that file instead.

Save the XML files after any change to activate the new setting. The server will pick up the change.

2. Deploying an Application – dropins

The WAS Liberty Profile has the capability to monitor a special dropins folder on a regular basis. If new content is detected, it is automatically made available by the server. The automatic monitoring needs to be enabled for this to work; a standard CLM deployment disables the monitoring to reduce server load.

2.1 Deploying the Application – dropins

In a default CLM install the dropins folder is located here:

<Install Folder>\server\liberty\servers\clm\dropins

Create a folder named <context root>.war in the dropins folder and add the files to publish. After detecting the addition, the server makes the files accessible as explained in 1.1.

The folder <Install Folder>\server\liberty\servers\clm\dropins should now look like below.

(Screenshot: the published application with its file)

2.2 Enable Automatic Monitoring for Dropins

The automatic monitoring of the dropins folder is disabled in a standard CLM setup with the WAS Liberty Profile. This saves processor cycles and I/O during server operation. It has to be enabled in the server.xml file for this solution to work. Find the XML element <applicationMonitor> and change the attribute dropinsEnabled from "false" to "true", then save the file. The image below shows the changed configuration file.

(Screenshot: enabling dropins monitoring in server.xml)

The attribute pollingRate determines how often the server checks the location for changes. Adjust the value to your needs.

Save the server.xml file after any change to activate the new setting. The server will pick up the change.

The XML file content should now be accessible.

Example 1 – Standard

In this example we want to publish a document with the URI https://clm.example.com:9443/<context root>/makers.xml, where <context root> = PEWEnactmentData. The public URI root is already given by the server setup.

1.1 Create the “Application”

As the first step, create the application: create a new folder named PEWEnactmentData.war in the folder <Install Folder>\server\liberty\servers\clm\apps. The folder could be created anywhere, but it makes sense to put it where the other applications are. If it is necessary to use a different folder, for example to allow other users to modify the content, place the folder in a different location and note the path. The path to the location will become important in the next step.

Put the files to be published underneath this folder. In the given example, put the file makers.xml into the folder.

The folder structure should look like the image below:

(Screenshot: the new application folder)

1.2 Publish the “Application”

As explained in section 1.2 above, the WAS Liberty Profile needs to know which application has to be published and where to find its resources; this is done with the <application .../> element and its attributes id, location, name, context-root, and type.

The element can be put into several places. In a CLM installation, by default, the applications are declared in the file

<Install Folder>\server\liberty\servers\clm\conf\application.xml

The file is included by the file <Install Folder>\server\liberty\servers\clm\server.xml. It would be possible to place the application declaration in the file server.xml, but it seems to make more sense to put it into the file application.xml instead.

Add the lines

<!-- My Application -->
<application id="PEWEnactmentData" name="PEWEnactmentData" location="${server.config.dir}/apps/PEWEnactmentData.war"/>

to the file application.xml and save the file.

The file should look like below:

(Screenshot: declaring the new application in application.xml)

In this case the location is relative to the server configuration folder, which in our case is <Install Folder>\server\liberty\servers\clm\. The location could really be anywhere. If it is necessary to use a folder that, as an example, can be accessed by regular users, you could change the location to somewhere with fewer read and modification restrictions.

After saving the application.xml, the published XML file should be accessible using the URL https://clm.example.com:9443/PEWEnactmentData/makers.xml.

The image below shows the browser access.

(Screenshot: browser access to the XML file)

Example 2 – dropins

In this example we want to publish a document with the URI https://clm.example.com:9443/<context root>/makers.xml, where <context root> = PEWEnactmentData. The public URI root is already given by the server setup.

2.1 Create the “Application”

As the first step, create the application: create a new folder named PEWEnactmentData.war in the folder <Install Folder>\server\liberty\servers\clm\dropins.

Put the files to be published underneath this folder. In the given example, put the file makers.xml into the folder.

The folder structure should look like the image below.

(Screenshot: the published application with its file)

Assuming the automatic application monitoring is enabled, the file should be accessible after the next polling cycle (at the latest after waiting the full pollingRate). If automatic application monitoring is not enabled, enable it as described in 2.2 and save the configuration file.

The XML file content should now be accessible using the URL https://clm.example.com:9443/PEWEnactmentData/makers.xml.

Deploying Compressed Folder Structures

It is also possible to publish real WAR or EAR files in a way similar to the one described above. For example, create a WAR file as described in Publish XML Data Using Tomcat – Hotfix for The Process Enactment Workshop and attached to that post. Deploy the compressed WAR file, for example PEWEnactmentData.war, in the dropins folder, or declare it as an application in another location as explained above. This automatically deploys the application and makes the contained files accessible.

Just zipping the folder structure we created above into a zip file, naming it PEWEnactmentData.war, and trying to deploy it does not seem to work, however. The WAR file needs to provide the required supporting structures in this case.

Related Posts

The RM Extensions Hosting Guide for CLM 6.0.1 and later versions explains the dropins example as well.

Summary

This post shows how easy it is to publish supporting files on the WAS Liberty Profile. This is important if you want to host custom Doors Next extensions, or if you want to publish your own XML data to be used with HTTP Filtered Value Sets or for the Process Enactment Workshop, and it can possibly also be used for publishing build results. In the latter case, you would likely publish a root folder that contains all the build results.

As always I hope this post has some value to users out there, or is at least interesting to read.

Manage Access Control Permissions for Work Items and Versionables

A customer requires very fine grained access control for work items and for versionable objects such as files in the SCM system. In addition, the customer had the requirement to be able to set attributes on elements in the SCM system.

I knew that access control can be set for work items, and roughly how to do it, because I had briefly looked at it in the RTC WorkItem Command Line Version 3.0. But I was not completely sure about the rules, or about how to access the SCM part and the attributes in SCM.

Since the customer needed this urgently I looked at what the rules are and what can be done.

The content is way too big for just one post, so this is going to be a series. This first post explains the rules around access control permissions and some basic APIs for finding the objects that can be used to provide the access control context.

Related posts

The posts in this series are:

  1. Manage Access Control Permissions for Work Items and Versionables – this post
  2. Setting Access Control Permissions for Work Items
  3. Setting Access Control Permissions for SCM Versionables
  4. Setting Attributes for SCM Versionables

Also see

Controlling access to source control artifacts in Rational Team Concert

License and Download

The post contains published code, so our lawyers reminded me to state that the code in this post is derived from examples from Jazz.net as well as the RTC SDK. The usage of code from those examples is governed by this license. Therefore this code is governed by this license. I found a section relevant to source code at the end of the license. Please also remember, as stated in the disclaimer, that this code comes with the usual lack of promise or guarantee. Enjoy!

Just Starting With Extending RTC?

If you are just getting started with extending Rational Team Concert, or creating API-based automation, start with the post Learning To Fly: Getting Started with the RTC Java API's and follow the linked resources.

You should be able to use the following code in this environment and get your own automation or extension working.

Compatibility

This code has been used with RTC 5.0.2 and is prepared to be used with RTC 6.0.x with no changes. It is pretty safe to assume that the code will work with newer versions of RTC.

The code in this post uses common libraries/services that are available in the Plain Java Client Libraries, the Eclipse client, and the Jazz Eclipse server API. If client or server API is used, this is stated.

Problem Description

The first question was whether it is possible to set the permission to access

  1. Work Items
  2. Items in the SCM system such as versionables

The second question was whether the attributes introduced in RTC 5.0 can be specified and set using the API. The documentation only mentioned the SCM command line.

The third question was how team areas and access groups can be created automatically.

The fourth question was what exactly the rules are:

  1. Can this be done in the server?
  2. Can this be done in the client?
  3. Can this be done in a command line tool or from another application?
  4. What permissions are required and what are the rules in the different contexts?

Solution Approach

The approach we chose was to implement a minimal demonstrator as a proof of concept for the capabilities. This demonstrator was also intended to serve as a platform to find the rules.

Learning

Here is what we learned creating the demonstrator.

The Rules – Work Items

Work items have an attribute called “Restricted Access” that can be used to control access to them. Basically, the following modes are available for work items:

  1. Access control using access control of the project area
  2. Restricted access by category
  3. Restricted access by setting the restricted access attribute

Option 1: Only users that have access to the context set for the project area can access these items. This can be set to everyone, basically exposing the work items to everyone. Other options are project area membership, an access control list, etc. The rules here are very clear.

Option 2: It is possible to set up RTC in a project area to automatically determine access permissions based on the category (Filed Against) of the work item. This sets the restricted access to the project area or team area associated with the category. Only members of the team area (and members of sub team areas of the team area associated with the category) have access to the work items that are restricted by category. It is worth noting that a user associated with the project area can not look into work items filed against teams if these limit access; it is quite the opposite.

Option 3: Use a special editor presentation to manually set the restricted access attribute to a project area or an access group. This could potentially be supported by automation.

The most important learning for option 3 is that it is only possible to set the restricted access attribute to a project area or an access group this way. It is not possible to set the access context to a team area; if the context is set to a team area, the containing project area is automatically picked up instead. This also holds for automation.

See this help topic on how to set up the work items to allow setting the restricted access.

The Rules – Versionables

Prior to RTC 6.0.1, which has just been released, it is only possible to set read access control to the default (access is controlled by the component), to project areas and team areas (called process areas if the distinction is unimportant), or to a single user.

If access control is set to a team area, any member of that team area and its sub team areas have access. Note that this is different from option 2 for work items. If access control is set to a project area, any member who has access to the project area has access to the item.

RTC 6.0.1 introduces the capability to set access control to an access group. Only members of this access group have access to the item. This seems to be the best option by far, especially since access groups can contain project areas and team areas, which adds their members to the access group.

Please note that access control applies to the whole item and its history.

JazzAdmin Repository Role

Users with the JazzAdmin repository role have access to any work item or SCM controlled item, regardless of the access control setting. Users with this role (and sufficient licenses) can access all data.

Project Area Administrators

Administrators of a team or project area who don’t have the JazzAdmin repository role can not access items that have access control set to a context they don’t have access to.

General Rules

Users can only set the access context of work items or items in the SCM system to an access context that preserves their own access permission. It is not possible to find or select an access group or process area the user has no access to (or is no member of), or a different user, and set the access context to it. This would remove the user's read permission and is prevented.

Users with the JazzAdmin role and sufficient licenses, however, can do this, because they don’t lose read access.

General Rules For Automation

All automation can only perform the operations that are permitted to the user that is used to run the automation. This especially means that automation running in the context of a user with JazzAdmin repository role and the required licenses can perform all operations. The user context available in an automation depends on the automation.

  1. Follow up actions (participants) run in the context of the user that is performing the operation on the client or the server. They are especially not elevated to run with a JazzAdmin repository role if the user that performs the operation does not have this role. It is also not possible to use a different user or service to elevate the permissions.
  2. Preconditions (advisors) must not change the triggering elements. Otherwise, the same rules apply as described for follow up actions.
  3. Asynchronous tasks or services in RTC are run with JazzAdmin permissions and can use the IImpersonationService to change the operation to a different user context.
  4. Plain Java Client Library or other client based automation run in the context of the user that was used to log into the system.

In the context of this post, 1 means that it is not possible to set access control in such an operation if that would remove read access from the user that performs the operation. If you require rules where this could be necessary, you would need to run in an administrative context, such as an asynchronous task, or in a client application run by a user with the JazzAdmin repository role.

Finding ProcessAreas, Access Groups and Contributors

How do you find the access context information needed using the Java API?

Finding ProcessAreas

There are many ways to find process areas (project areas and team areas) using the API. The simplest one uses a java.net.URI constructed from the name path of the process area, with the names separated by ‘/’.

Let’s assume a project area named TestProject1, a team area within the project area named TestTeam1, and a sub team area underneath TestTeam1 named TestSubTeam1.

The URI for the project area could be constructed from the string “TestProject1”. The URI for TestTeam1 could be constructed from the string “TestProject1/TestTeam1”. Similarly, the URI for TestSubTeam1 could be constructed from the string “TestProject1/TestTeam1/TestSubTeam1”.

Process area names can contain spaces. Spaces are not allowed in URIs, therefore they need to be replaced by the encoding character string “%20”.

To construct a java.net.URI from a string value, use the following code.

String namePath = "TestProject1/TestTeam1/TestSubTeam1";
// Encode spaces, then create the URI from the resulting string
URI anURI = URI.create(namePath.replaceAll(" ", "%20"));

The service to find the process area by its URI is provided by the interface IAuditableCommon. This interface, as its name implies, is a common interface that is available in the RTC Plain Java API as well as in the RTC Eclipse client and Eclipse server SDK. This makes it possible to use this API call in any possible context, either by

IAuditableCommon auditableCommon = (IAuditableCommon) teamRepository
	.getClientLibrary(IAuditableCommon.class);

in Plain Java and client SDK based code, or using

IAuditableCommon auditableCommon = getService(IAuditableCommon.class);

in server extensions, which all extend AbstractService (or, in the case of SCM server extensions, AbstractScmService, which extends AbstractService).

For code that has already retrieved the common service IWorkItemCommon, for example to find work items, IAuditableCommon is available using the call

IAuditableCommon auditableCommon = workItemCommon.getAuditableCommon();

So the following code would retrieve the access context for the team area “TestSubTeam1”, nested in the team area “TestTeam1” in the project area “TestProject1”:

String namePath = "TestProject1/TestTeam1/TestSubTeam1";
IProcessArea area = auditableCommon.findProcessAreaByURI(
	URI.create(namePath.replaceAll(" ", "%20")), null, monitor);
UUID context = area.getItemId();

Note that for work items only a project area is a valid context. Setting the context above will result in the project area containing the process area being set. Alternatively use

UUID context = area.getProjectArea().getItemId();

for work items, and document that fact, to save the time of figuring it out the hard way during testing (like I did).
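
To summarize the calls above in one place, a small helper could be sketched as follows. The method and parameter names are mine, assuming the usual RTC SDK imports; only the IAuditableCommon calls are taken from the API as shown above.

// Minimal sketch: resolve an access context UUID from a name path.
// For work items, only the containing project area is a valid context.
private UUID findAccessContext(IAuditableCommon auditableCommon,
		String namePath, boolean forWorkItem, IProgressMonitor monitor)
		throws TeamRepositoryException {
	URI uri = URI.create(namePath.replaceAll(" ", "%20"));
	IProcessArea area = auditableCommon.findProcessAreaByURI(uri, null,
		monitor);
	if (area == null) {
		// No process area was found for the given name path
		return null;
	}
	if (forWorkItem) {
		// Work items only accept a project area as access context
		return area.getProjectArea().getItemId();
	}
	return area.getItemId();
}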

Finding the Public Access Context

Work items can also be set to public read access. The constant com.ibm.team.repository.common.IContext.PUBLIC is available for that.

UUID publicContext = IContext.PUBLIC.getUuidValue();

provides the public access context.

Finding Access Groups

Access Groups are managed in the administration pages of the RTC application. It is possible to create, modify, and delete access groups. The image below (Access Group Management) shows this section.

Access Groups are also accessible using the IAuditableCommon common interface that was already used in the section above.

To get all access groups use this code:

IAccessGroup[] groups;
groups = auditableCommon.getAccessGroups(null, Integer.MAX_VALUE,
	monitor);

This returns all access groups up to a maximum number that is passed to the method.

It is possible to pass a filter to select only access groups with a specific name pattern, like below.

IAccessGroup[] groups;
groups = auditableCommon.getAccessGroups("My*", Integer.MAX_VALUE,
	monitor);

The access groups can be iterated like below to compare the name or do something similar.

for (IAccessGroup group : groups) {
	// Compare the value to the access group name.
	if (group.getName().equalsIgnoreCase(value)) {
		return group;
	}
}

Similar to the process areas above, there is also a public access group which can be obtained like this:

return auditableCommon.getAccessGroupForGroupContextId(
	IContext.PUBLIC, monitor);
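
Putting the pieces above together, a lookup helper could be sketched like this. The method name and the special handling of a “public” keyword are my own conventions, not product API; the IAuditableCommon calls are the ones shown above.

// Minimal sketch: find an access group by name, case insensitive.
// The special value "public" returns the public access group.
private IAccessGroup findAccessGroup(IAuditableCommon auditableCommon,
		String value, IProgressMonitor monitor)
		throws TeamRepositoryException {
	if ("public".equalsIgnoreCase(value)) {
		return auditableCommon.getAccessGroupForGroupContextId(
			IContext.PUBLIC, monitor);
	}
	IAccessGroup[] groups = auditableCommon.getAccessGroups(null,
		Integer.MAX_VALUE, monitor);
	for (IAccessGroup group : groups) {
		if (group.getName().equalsIgnoreCase(value)) {
			return group;
		}
	}
	// No matching access group was found
	return null;
}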

Finding Contributors

Finding contributors is different in the client and the server. There is no common API available.

In the client API the IContributorManager can be used, for example with the following methods.

IContributorManager contributorManager = teamRepository.contributorManager();
// Find a user by the user's ID; returns a handle
IContributorHandle contributorHandle = contributorManager
	.fetchContributorByUserId(attributeValue, monitor);

// Get all users; returns a list of full IContributor items
List contributors = contributorManager.fetchAllContributors(monitor);

In the server API the IContributorService is available and provides the following method.

IContributorService contributorService = getService(IContributorService.class);
contributorService.fetchContributorByUserId(userId);

I was not able to locate a way to search for all users like in the client API. The server API usually gets a user passed in, and there is no need to find all users.

Summary

This post explains the rules around read access control for work items and RTC SCM versionables, and some basic APIs for finding the objects that can be used to provide the access control context. The next post will explain the details of setting access control for work items.