I had not used the EWM Extensions Workshop for a while. Recently I wanted to look into the API and noticed that I was unable to work with my setup for the EWM Extensions Workshop. I was unable to connect to the Jetty based debug system
Using a browser
Using an Eclipse client
Both issues are due to the fact that the built-in Jetty only supports TLS 1.1, which is no longer supported by current web browsers and is also disabled in the most recent Java versions. I have created defect 563338 for this issue. The work item describes a workaround that I was able to find.
To allow Firefox to connect to the Jetty debug server, type about:config into the search/URL field. Agree to the disclaimer and open the Advanced Properties. Find and change the setting security.tls.version.min to 1.
Connecting an Eclipse based client to the server requires allowing Java to use TLS 1.1 again. Go to the JRE folder jre\lib\security and open the file java.security. Remove TLSv1.1 from the jdk.tls.disabledAlgorithms setting.
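The entry looks similar to the following excerpt (the exact algorithm list differs between Java versions, so treat this only as an illustration of the change, not as the literal file content):

# before: TLSv1.1 is listed as disabled
jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, DES, MD5withRSA
# after: remove only the TLSv1.1 entry and keep everything else
jdk.tls.disabledAlgorithms=SSLv3, TLSv1, RC4, DES, MD5withRSA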
Make sure that this Java version is used to run the Eclipse you want to connect to the debug server. If you run an external Eclipse, set the VM setting to the modified Java VM. If you run an Eclipse debug client from within the Eclipse development environment, add the modified Java to the Installed JREs in the preferences, set it as the active JRE, and make sure this Java is used in the build path when you start the Eclipse client.
I tested this with 7.0.2 SR1, but it should work with previous versions as well.
As always, I hope this helps users out there with their work.
While working on Debugging EWM Work Item Attribute Customization JavaScript I thought about all the questions I have seen asked about Attribute Customization on the Jazz.net forum. There are some common questions I have seen a lot, some common things users would like to do, and some limitations that might get in the way. I thought I would create a blog post that summarizes what I know about attribute customization and also provides the important links. I also want to provide a short summary of what, as far as I know, can and cannot be done with Attribute Customization and the provided API. I already created something like that in RTC Process Customization – What you can and cannot do. I will follow its structure and reuse some content, but will try to be more specific in this blog. This blog is limited to work item attribute customization, with a few exceptions. The blog post will mention possible solutions that go beyond attribute customization, where this seems reasonable.
Important links
The following links are the most important references about EWM work item configuration and attribute customization.
A lot of the input that I take into consideration comes from questions and answers on the Jazz.net forum. Some of the examples originate from customer requests.
Questions on Jazz.net, and even customer requests, often do not provide good requirements. They often provide some implementation idea instead. The real problem or business case is usually hidden and not exposed. So it is important to question the request, to drill down and expose why the question was asked the way it was and what its background is. Once the real reason for the question is exposed, the answer can be completely different.
One example is: “Can I auto subscribe users to a work item?”. In pretty much all cases I can remember, the purpose of this was to send an e-mail notification to a user, e.g. when the work item was created. Adding a user to the subscribers might or might not be a viable solution, but knowing the real purpose clearly exposes the underlying problem.
So take the questions on Jazz.net with a grain of salt and try to dig deeper to understand what the question is really about.
What is Process Configuration and Attribute Customization?
EWM allows configuring the process of a project area. There are several options that can be used:
Work items, work item attributes, editor presentations and workflows can be configured.
Roles can be configured.
Permissions can be configured.
Operational behavior can be configured using out of the box preconditions and follow up actions.
Attribute Customization is a work item related capability provided by EWM. It provides built-in capabilities that can be configured and can thus be considered process configuration as mentioned above.
The capability types available for configuration are:
Calculated value providers to support calculations with work item attributes for certain processes.
Value sets to provide customized sets of selectable values for work item attributes. Built-in value sets allow extracting string values from an XML document on the Web, selecting contributors by role, and selecting enumeration values based on other, dependent enumerations.
Validators to validate attribute values based on ranges or regular expressions. Validators can show a warning in the UI and can be used with a built-in advisor to prevent saving a work item unless the offending attribute values are corrected.
Conditions to be used with built-in advisors to make work item attributes read-only or required. There is no built-in condition available at this time.
What is special about Attribute Customization?
What sets attribute customization apart from the other capabilities is that it provides a JavaScript API that allows creating custom scripts for provider configurations. The JavaScript can be uploaded to the project area's process by any user with the required permission. This does not require write access to the application server's file system. For this reason it is possible to create and deploy custom JavaScript scripts even if the server is maintained by a cloud provider.
JavaScript is also reasonably easy to learn and a lot of practitioners have some JavaScript skills. This is a double-edged sword.
In contrast to Eclipse extension plugins, another extension mechanism available in EWM, the JavaScript API has a relatively low experience threshold for beginners. It is much easier to develop for than the Eclipse plugin based customization options available in EWM, which require solid skills, understanding of the framework and familiarity with the available API. Since those options are based on Java and Eclipse plugin extension development, they require the plugins to be deployed, which means they need to be copied into a file system, typically on the server where the CCM application is deployed. This is usually not possible in a cloud hosted environment, although some cloud services will perform such deployments for the customer at a higher price tier.
The danger is that the low entry threshold attracts very inexperienced users, which can eventually lead to issues, including server performance issues. Some of the capabilities can be overused. There is also a tendency to overcomplicate the work item workflow and the process.
The JavaScript Attribute Customization API constraints and limitations
Attribute Customization is configured on the attribute level. This means that any attribute customization will apply to all work item types that have the attribute configured.
JavaScript attribute customization scripts have to implement the required interface for their type. All the interfaces require the script to return some data. The data can be empty or undefined, depending on the attribute type, e.g. an empty string or null. Nevertheless, the script has to return something that matches the attribute type it is configured for.
It is possible to detect the type of the work item the script is executed for; this allows different behavior for different work item types in the JavaScript attribute customization script. Based on the work item type attribute value, the script decides what to return: some custom value, or the current attribute value to effectively do nothing. This strategy allows adjusting for type specific attribute customization to some extent.
The API description in the Work Items Attribute Customization Wiki page explains which work item attribute value types are supported. Only very basic attribute types such as strings, numbers, enumerations, timestamps and dates are really supported. Especially note that the support for complex item based attribute types such as team areas, contributors etc. is very limited. For those attributes, all you can access is the UUID and the display name. If you want to set such a value, you must return the UUID of the item.
The documented API does not provide methods to discover information beyond the work item's current attribute values. There is no documented, simple JavaScript API to discover details about information that is not attribute based. There is no documented, simple API to follow or create links to work items or other artifacts. There is no documented API to access process information, such as users and roles, or any other details. There is also no mechanism to detect a state change or workflow action. The workflow action is only available in conditions, but the detection of a state change would be desirable in calculated values as well.
The return value of JavaScript attribute customization scripts is processed in the API. Depending on the work item attribute type the script is configured for, some kind of conversion is performed. As an example, a string can be converted into a timestamp, provided the format is correct.
As far as I can tell, the conversion that is performed for HTML and string based attributes, including the summary and description, converts the content returned from JavaScript into a string, escaping any special content such as XML or HTML tags. This conversion happens outside of the JavaScript and there is no way to change the behavior. For this reason it seems to be impossible to return HTML with working links for HTML or string based attributes, including the summary and description. Only the built-in, configurable default values work correctly. The only current attribute type that can be set by JavaScript attribute customization to contain links is the type Wiki, because the Wiki syntax involves no XML/HTML tags that would have to be escaped.
Performance considerations
Some users seem to think there is such a thing as limitless size and performance. This does not apply to the real world. It is possible to heavily impact the performance of clients or servers with process configuration and especially customization. I am aware of a case where JavaScript added to a theme caused so much communication that the browsers on smaller laptops were drained of resources and almost came to a halt. Loading a work item took several minutes. This made working with the tools impossible for many users. So be careful and consider what you might do to performance.
It is also very easy to annoy users with overcomplicated processes, thousands of choices, required attributes nobody knows the values for, etc. In general, big numbers are bad and small numbers are good. When drop-down boxes grow too big and the number of items keeps increasing, users will be affected in their ability to successfully use the process. I have seen cases where users got confused, did not realize that they had to enter a value, and complained that the UI had stopped working. Overcomplicated processes will likely cause performance complaints and wear away the will to support the chosen tool and process.
There is a limit on the number of custom attributes for a work item type (100). Having a lot of work item types also does not help to make the process easier. Keep numbers low; about 10 work item types is a reasonable guideline.
Also keep in mind that there are size limits for all work item attribute types. Especially note that list type attributes have a size limit: how many selections can be stored is limited by the size of the list attribute and the size of the IDs or item UUIDs that are stored.
Where does the JavaScript run?
The JavaScript runs in a JavaScript interpreter, either in the web browser or in the EWM Eclipse client.
Where and when is Attribute Customization not executed?
Attribute customization, including the JavaScript, is not executed when work item operations are performed using an API such as OSLC, the SDK, or the Plain Java Client Libraries.
Work Item Proxy
When debugging the JavaScript, it is possible to see more objects accessible to the JavaScript in the debugger. One object, called the work item proxy, looks very promising. Please note that this object is considered internal API and should not be used by the JavaScript. The supported API is documented here. It is also important to understand that this work item proxy is only available in the web browser and not if the script runs in the Eclipse client.
There are several posts on Jazz.net that discuss how the proxy could be used. Again, it is internal API and should not be used; if you do, you are on your own. There might be unwanted side effects and the internal API could change at any time.
Why does my script not work?
This question comes up often on the forum. It is pretty much impossible to debug someone else's script on Jazz.net. There are reasons for that:
JavaScript attribute customization depends on the process used in the project area. The community does not have access to that process and, due to privacy, security and other concerns, will not get it.
It is quite some effort to set up a test project area, even if one had a process template.
So it is usually impossible to help on this level. What can be done is scanning the script and the description in the question for typical issues.
Please note that there are different levels of issues with JavaScript. At the most fundamental level:
Attachment scripts must be enabled in the advanced properties (teamserver.properties) to work – see below.
The script must be accessible in the process.
The script file's encoding must be UTF-8 and consistent so that the file can be read, interpreted and executed.
If an error happens at this level, the best you can hope for is an entry in the log file. Only if the JavaScript is successfully decoded and loaded can it be debugged.
What you can and cannot do with JavaScript Attribute Customization
There are certain limitations in the JavaScript Attribute Customization API as far as I am aware. I provided a more comprehensive presentation here: Process Customization – What you can and cannot do.
I will try to provide some details here.
Attribute customization JavaScript is always defined as a function. Regardless of the type of attribute customization, the script must return a value (which can be null or empty). Otherwise this will likely show up as an error in the log. I am not sure what happens in the worst case.
Attribute customization is defined for the attribute. This especially means that the customization will be executed for all work item types that have the attribute configured.
In JavaScript, you can get the work item type by reading the respective attribute. You can then decide what to return. As explained in 1., you cannot return nothing. You can, however, get the current value of the attribute and return that, which helps in some cases.
As far as I am aware, attribute customization only runs in the context of a user's browser or the Eclipse client. As explained above, this also means that automation based on attribute customization does not work if you access work items using any of the APIs. This is important, as preconditions/advisors and follow up actions/participants can be implemented as server extensions that always run, regardless of who accesses the work item and how. When the JavaScript is run in the Eclipse client, only the infrastructure of the Eclipse client is available.
Attribute customization scripts can run multiple times; for example, calculated values might execute multiple times based on changes to the other attributes they depend on.
JavaScript attribute customization only has access to the data of the work item in whose context it runs. It is not possible to follow work item links using the documented API.
Even the available data of a work item is limited. As an example, a workflow action for a state change is only available in conditions. It is not available in any other attribute customization types, especially not in calculated values. This essentially means, there is no simple way of tracking and detecting a workflow state change.
Java Eclipse plugin extension based Attribute Customization
Beyond JavaScript, it is possible to provide Java based work item attribute customization plugin extensions. This is possible for all available attribute customization types (default values, calculated values, value sets, conditions and validators), as shown in these examples. These Java based plugin extensions have access to more of the EWM Java SDK API. This makes it more feasible to create complex custom attribute customization. It is also easier to control which return result types are supported and, based on these, to create richer attribute customization. It would, for example, be entirely possible to have a default value provider that returns an XML or HTML result that shows working HTML links. Whether to return a result that escapes the HTML tags or to return the correct format is an implementation detail. A more complex example with additional background information can be found here and in this post.
The Java based attribute customizations would show up in the admin UI in the same way the built-in attribute customization does. To make this possible, the Java based Eclipse plugins need to be deployed in the Eclipse client as well as on the EWM server (see here). The API that is available to the Eclipse client and the server is the EWM Common SDK API. Consider reading this blog post to better understand what this means. The requirement to deploy these extensions on the EWM server is a potential inhibitor for cloud based deployments.
EWM Eclipse Plugin Extensibility
Java based attribute customization is only a subset of the Java Eclipse plugin extensibility available in EWM. The SDK provided for EWM allows developing all kinds of extensions, including preconditions, follow up actions and asynchronous tasks. It is possible to develop such extensions to run in the context of the Eclipse client or on the server. These extensions need to be deployed in the context they are developed for. Server extensions have access to the full capabilities of the EWM Common SDK API and the EWM Server SDK API.
Preconditions, also called advisors, can act as process advisors that prevent saving if the data violates some standard. Preconditions can be used to implement attribute customization validators or conditions. Example advisors can be found here.
Follow up actions, also called participants, can perform additional operations on the EWM data. As an example, follow up actions can collect data across linked artifacts, perform calculations on the data, set attributes and properties for items and save the changes. This includes the capability to create new work items and link the new work items to other work items. Example participants can be found here.
In some cases it is better to use these capabilities instead of attribute customization, because they provide access to more API than even the Java based attribute customization.
All plugin based solutions have to be deployed. Server extensions have to be deployed on the server, which might be a blocker in an environment hosted as a service in the cloud.
Popular requests
In the next sections, I will try to provide some examples of popular requests. I will describe the background these requests have, as far as I can tell, and which implementation options might be viable.
Popular: Custom E-Mail Notification
This comes up in various flavors. Examples are:
Can I automatically set an owner of a work item?
Can I automatically subscribe a user to a work item?
With respect to attribute customization, the answer would be yes, you can do this to some extent. It is possible to set or add a subscriber and to set the work item owner. In the JavaScript API this needs to be done using the contributor's UUID. The UUID will have to be configured or hard coded when using the JavaScript API. There is no API to search for contributors in JavaScript.
A Java based attribute customization plugin would have more API access and could provide better control and automation.
A real custom e-mail notification is possible using the following approaches:
Custom follow up actions that trigger at work item save.
The last three options require writing custom Java extension plugins and deploying them on the application server, which makes them less attractive in a cloud environment.
As an alternative, EWM has other mechanisms that might be better suited. For example, there are RSS feeds for various change events, work item queries and dashboards. These do not send e-mail, but they might scale better than e-mail notifications.
Keep in mind that the e-mail notification options are controlled at user level by the user. A real Java Eclipse plugin that sends custom e-mail notifications could work without depending on the correct configuration at user level.
Popular: Calculation with work item attributes
This is often requested to calculate effort or something similar.
With respect to JavaScript based attribute customization, it is possible to perform calculations on attributes of the same work item. It is possible to perform numerical, boolean and other operations in calculated values. It is necessary to set the dependencies right, so that the chain of events executes until all the calculations have been performed. If multiple work item attributes need to be set, keep in mind that a calculated value is a function that only works for one work item attribute; a calculation for each attribute is necessary. As already mentioned, it is not possible to calculate across values of multiple work items.
If it is necessary to calculate across multiple work items, JavaScript attribute customization does not work. It is necessary to use the Java plugin extension mechanisms such as follow up actions to implement this. An example is the RTC Update Parent Duration Estimation and Effort Participant.
It is also possible to use external, API based tools to collect data and then update work items, e.g. in scheduled jobs.
Popular: Calculate how often a work item was in a specific state
I have seen this several times. There is no easy way to detect work item changes. It might be possible to use the approach sketched below, but I am not sure it would really work.
With some tricks and several hidden custom attributes for state tracking (ST1, ST2) and counting (STC), it might be possible to cobble something together. E.g. initialize the hidden state tracking attributes ST1 and ST2 with a default based on the state ‘New’.
The calculated value of ST1 is triggered and calculated based on the current value of the work item state attribute. If the current state is not the value of ST1, return the new current value.
Calculate the counter based on a triggered change of ST1. If ST1 contains the desired state, return the current value of STC increased by one.
Calculate the value of ST2 when ST1 has changed. Return the value of ST1 as the result of the calculated value.
This requires some intelligent usage of the dependent attributes. It might be necessary to use more artificial attributes to get this working. This approach, however, has many downsides; in particular, the custom attributes all show up in the history and create additional changes in the work item.
If this is really necessary, it would be easier to develop a follow up action that actually acts on the state change. It would also be worth checking whether reporting can be used instead.
Popular: Make work item attributes read-only or required
EWM has preconditions/advisors to make work item attributes read-only or required based on conditions. The question is, how far can you go with this? The JavaScript API is very limited in what data is available. Conditions can obviously work on this data, so you can make attributes read-only or required based on work item attribute information.
But since you have no access to any complex information such as team membership, roles or the current contributor, JavaScript cannot be used to create more complex conditions. Any condition that needs access to more complex information would have to be written as a Java plugin. An example requested by a customer, with additional background information, can be found here and in this post. It is important to know that these condition based attribute customizations are the only way to customize the rules that make attributes read-only or required. There is no other extension point available that you could hook into. This is one of the rare cases where custom preconditions/advisors cannot be used.
In my experience, requests like the ones here and in this post, to make attributes required or read-only, are often motivated by the desire to have more fine grained control. Consider using permissions to prevent roles from changing attributes.
Requests like the above are also often motivated by the desire to make EWM work like some legacy tool. My experience so far has always been that trying to make a tool work against its design is usually fatal. It usually makes the process hard to understand and follow, and it often degrades usability and performance.
Since every user in EWM has at least the default role (Everyone), can have many roles, and the roles have an order, it is also debatable what ‘by role’ really means.
Popular: Dynamically Hiding Work Item Attributes
I have also seen several requests that want to make attributes hidden by some complex, usually role based rule. This is not possible today, regardless of the approach.
There are very limited capabilities to control visibility built into EWM. Attributes can be hidden if they are empty, if the work item is in a certain workflow state, if some endpoint is empty, if there is no project link, or if the work item has just been created. An editor presentation tab can be hidden if all contained attribute presentations are empty.
There is no extension point and no attribute customization available, that I am aware of, that would allow customizing the behavior in any other way.
There are various enhancement requests for EWM in this context. I think it would be nice if there was a custom attribute customization option that could be used with a precondition, e.g. ‘Hide for condition’, similar to ‘Read only for condition’. This would allow hiding attributes that are unimportant. The question is whether it is helpful when attributes show up and vanish for the same attribute type. This is at least debatable; as a user I would likely find it confusing. As always, keep in mind that this could open the door for performance issues and abuse. The remark about roles and their usage in EWM also applies.
Please note that just because an attribute is not visible in the work item editor presentation does not mean it is inaccessible to work item queries or reports. EWM was designed for collaboration, not for hiding data.
Popular: Dynamic workflow
I don’t know where this comes from. I doubt this is a thing in other change tracking tools, but it came up more than once.
EWM/RTC does not support dynamically changing the workflow of an instance of a work item. The workflow is tied to the work item type. It would be possible to use a follow up action/participant to change the work item type under certain circumstances. Due to the lack of support for detecting a state change and the lack of access to other data in attribute customization, it would not be feasible to use attribute customization. A participant is a far better option.
Popular: Dynamic creation of attributes
This came up several times. EWM does not officially support this. It is possible to create an attribute using the API, but such an attribute does not surface in the admin UI. It is necessary to add statements to the process XML to make it configurable in the process and the editor presentations.
This is used by some integrations just once and is not meant to be used constantly during operations. Also mind the limit on custom attributes and consider the usability of the process.
Restrict Read Access To Data
Some customers want to have more control over who can access which data. EWM has the capability to restrict work item and SCM item read access based on an access context. If an item cannot be read, it also cannot be written.
For work items, the restricted access context can be an access group, project area, or team area. It is possible to automate setting the work item's restricted access based on categories. This sets the access context to the process area associated with the work item category. This is the only built-in automation for category based work item access control. There is no other built-in automation for work items, e.g. automatic selection of an access group for the work item.
For SCM data there are more options to set the access context/visibility. Components can have their access control set to a project area or an access group. SCM content in a component (folders, files) can be configured in a more fine grained way. It is possible to limit access to a contributor, a project or team area, or an access group. There is no built-in automation to set the access context for SCM items.
It is possible to create custom automation to set the access context for work items and SCM items. One implementation approach is using follow up actions. For work item restricted access this can be configured for the work item save operation. For SCM objects follow up actions for the check in and deliver operation could be developed to compute the desired access context. I have explained some of the options in the blog series Setting Access Control Permissions for SCM Versionables.
In contrast to what I knew back then, I know today that it is possible to elevate the operation context to an administrator; why this is important is explained in the second to last paragraph below.
The question would be: can JavaScript attribute customization be used for custom automation of the restricted access? Attribute customization is only available for work items; there is no such API for SCM content. It might be possible to set the restricted access attribute in JavaScript attribute customization by returning the UUID of the access context, but as already mentioned, because of the very limited amount of data and API access available, JavaScript would most likely not be feasible for anything beyond that.
Java based attribute customization could be more feasible, but it is most likely not possible to elevate the operation to super user access because the available API is limited to the common API. I would consider this approach unfeasible for anything of relevant complexity.
If it is necessary to develop a Java Eclipse plugin anyway, it seems more suitable to create a follow up action to automate setting the access context. This allows using the full capabilities of the EWM SDK common and server API, and it is possible to elevate the operation to run in an administrator/super user context.
The reason why it would be beneficial to run the operation in an administrator context is that operational behavior usually runs with the permissions and limitations of the user who performs the save operation. This also means the operation fails if permissions prevent it, which limits what can be done. For example, it is impossible to change the access context of an item in a way that makes it inaccessible to the user performing the operation; the operation would fail. Elevating the operation to run in an administrator context would allow such an operation to be executed successfully, regardless of the repository access and permissions of the regular user triggering the operation.
I would suggest not trying to use attribute customization for these use cases.
Summary
In summary, attribute customization, especially using JavaScript, can be used for some interesting purposes. There are, however, a lot of limitations in the JavaScript API that prevent several use cases. The biggest benefit is that it can be used without having to deploy anything on a server. This allows it to be used in cloud deployments, where users have no access to the server file system.
For more complex problems, Eclipse plugin extensibility is a better option, regardless of whether it is Java based attribute customization or Eclipse plugin based EWM extensions, simply because the APIs provided in the EWM SDK are so much richer.
This was quite some effort. As always, I hope that this is useful to someone out there.
I recently updated the EWM / Rational Team Concert Extensions Workshop for 7.0.x and for 7.0.2 SR1. There is an issue when using the P2 install explained in the workshop with more recent Eclipse clients. I think I have seen this with the 6.0.6.x as well as with the 7.0.x versions, depending on the Eclipse version I used.
EWM P2 install with more modern Eclipse client versions
The Eclipse client fails to connect to the EWM server with the error message “Could not get value of field private transient sun.util.calendar.BaseCalendar$Date java.util.Date.cdate in 2022-07-31 09:37:08.301”. I was recently contacted by a colleague, Thomas, who provided a solution for this issue. The solution is explained in this forum question. I added the following lines to the eclipse.ini file:
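The lines are the --add-opens JVM options that re-open the affected java.base packages to reflection; an example set (the exact list may differ, check the forum answer) added below -vmargs looks like this:

--add-opens=java.base/java.util=ALL-UNNAMED
--add-opens=java.base/sun.util.calendar=ALL-UNNAMED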
This solved the issue for me for Eclipse 2022-06 and might be a solution for other versions as well.
I tried several other things, such as different Java versions, but I never found a really reliable solution that allowed me to stay close to the workshop documentation.
Update
After discussing the findings with some of our developers, I found out that there was actually a setting in the eclipse.ini that defined the use of a Java 17, which overrode my -vm setting to use a different Java version. The setting was buried somewhere in the middle of the ini file and looked like this:
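It was a -vm entry pointing to the Java 17 JRE embedded in the Eclipse package, similar to the following (the plugin path is only a placeholder; the real path depends on the Eclipse version):

-vm
plugins/org.eclipse.justj.openjdk.hotspot.jre.full.<platform>_<version>/jre/bin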
This specific setting also obscured the fact that a Java is shipped with Eclipse. After removing that setting and providing a Java 11, the error no longer happens. The settings above are only needed to allow in Java 17 an operation that is still allowed in Java 11. The defect that causes this is Illegal reflective access operation in EWM Eclipse Client (P2) (547430) and is fixed in 7.0.3. After removing the duplicate -vm entry pointing to Java 17, the issue went away.
UI Changes in newer Eclipse versions
The UI for adding the plugins to the search path has changed in more recent versions. The menu item “Add to Java Search”, used in Lab 1.3 and 1.7, is renamed to “Add to Java Workspace Scope” in newer Eclipse versions like the one above.
Caveat with the P2 Install
Please note that there is still an issue with the P2 install of EWM. After the install it is impossible to use Help->Install New Software and the Marketplace. There is a plugin that causes an error, as Thomas rightly pointed out to me.
Feature based Launches
Every now and then, I have seen issues with installing the Feature Based Launches in certain Eclipse versions. The launch options added by the Feature Based Launches (such as OSGI2 and Eclipse2) sometimes do not appear to work; they do not show up in the Debug Configurations.
If this is the case, there is a reasonably safe approach that works: install the latest EWM Eclipse client that ships as a zip with EWM/RTC bundled. As an example, unzip it into C:\RTC702SR1Dev\installs\TeamConcert\. Adding the dropins folder and the Feature Based Launches has always worked for me so far. The dropins folder needs to be C:\RTC702SR1Dev\installs\TeamConcert\jazz\dropins if you unzip the client that way.
Import the RTC Client Feature
Unfortunately there is another caveat with this approach. In 1.8__53 Import a feature to make launching the RTC Eclipse client much easier, the path to import the feature com.ibm.team.rtc.client.feature is C:\RTC702SR1Dev\installs\TeamConcert\jazz and not C:\RTC702SR1Dev\installs\TeamConcert\eclipse with the example install above. Once you have done this successfully, you can use the standard EWM Eclipse client for development.
Using multiple versions of Eclipse
It is possible to use both approaches in parallel by installing into different folders such as TeamConcertVanilla and TeamConcert_2022_06. Please be aware that there will be warnings when using the same workspace with different Eclipse versions.
Summary
This blog post should help with overcoming some new issues that show up when performing the EWM/RTC Extensions Workshop for the latest versions of EWM/RTC. These solutions make the work easier for me, and I hope sharing them helps others too.
It was requested to support 7.0.2 SR1 with the WorkItem Command Line. I had a look and was able (I think) to update the code of the WCL to work with the new Plain Java Client Libraries. This removes the dependency on Log4j 1 and changes the functionality to use Log4j 2. The code is on a different branch named Log4j2, because it is incompatible with previous versions of EWM. A new pre-release was created and is available here as Work Item Command Line 6.0 prerelease. I have only done minimal testing. If you have an opportunity to run it, please report issues.
EWM/RTC Extensions Workshop for 7.0.2 SR1
To be able to perform the updates, I had to perform (most parts of) Lab 1 of the Rational Team Concert Extensions Workshop. The workshop still works. There are some smaller issues, and here are my errata so far.
I downloaded Eclipse IDE 2022-06 R and installed EWM into it. Unfortunately there is an issue when connecting to the EWM server. The error is “Could not get value of field private transient sun.util.calendar.BaseCalendar$Date java.util.Date.cdate in 2022-07-31 09:37:08.301”. Here is an approach for how that can be solved.
The UI for adding the plugins to the search path has changed; the menu item in the Eclipse version above is named “Add to Java Workspace Scope”.
When trying to create the Test Database with the JUnit launch, the test ran out of memory. I increased the memory settings in the launch.
Summary
The changes in ELM 7.0.2 SR1 break the WorkItem Command Line and require a new setup of the Extensions Workshop to develop extensions. The WorkItem Command Line is available as a prerelease for ELM 7.0.2, and the Extensions Workshop seems to be working with minor limitations.
After looking into how to create and update work items, the question becomes how to find work items to begin with. OSLC provides a query mechanism that allows querying for items. This post intends to show how OSLC queries work, including some examples that work for me. The techniques explained in the previous posts in this series are important. If necessary, go back to the previous posts to understand the details. As usual, the focus in this blog is EWM/RTC; however, the OSLC query mechanism works for all products supporting it. So what is explained here based on EWM will also work with ETM, DNG etc.
Context of the blog post is the series
This is the series of planned posts I intend to publish over time. Most of the examples will be EWM based, but quite a lot of the content applies to other ELM applications as well. The examples were performed with versions 6.0.6.1 and 7.0.x.
I used at least the following links for exploring the OSLC Query mechanism. The latest version of the OSLC Query V3 document contains several examples which are very helpful. I still found it a challenge to get some of the queries working.
The examples in this blog use the content type application/rdf+xml. Some examples also work with the content type application/json, provided the functions that parse the responses are switched to a JSON parser; some do not. The posts in this blog series use RDF/XML, and I decided to stay with it to make it easier to follow.
Query Base
Before being able to execute a query, it is necessary to discover the query base. The query base is the URI that defines the root for the OSLC query. The first step is to get the service provider catalog from the rootservices document, as described in the blog post about EWM OSLC Discovery. Get the rootservices document:
GET https://elm.example.com:9443/ccm/rootservices
Accept application/rdf+xml; charset=utf-8
OSLC-Core-Version 2.0
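As a minimal sketch, the same request can be made with the Python requests library that I use throughout this series (the host name is the example host used throughout this post; certificate checking is disabled because I work against a test system):

import requests

CCM = "https://elm.example.com:9443/ccm"
headers = {
    "Accept": "application/rdf+xml; charset=utf-8",
    "OSLC-Core-Version": "2.0",
}
session = requests.Session()
session.verify = False  # test system with a self signed certificate
# The rootservices document is public, no authentication is needed yet
rootservices = session.get(CCM + "/rootservices", headers=headers)
print(rootservices.status_code)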
Get the service provider catalog as shown below. Pass the OSLC-Core-Version and the Accept content type headers. This step requires authentication.
The catalog lists the services available for each project area. To narrow down to a project area, perform a GET on the work item services document for the desired project area, providing the headers mentioned above. As an example:
GET https://elm.example.com:9443/ccm/oslc/contexts/_8e5qfFpmEeukW7cqqDjAuA/workitems/services
The resulting response body contains the various services that are provided for the various work item types of the project area:
Various dialogs such as OSLC selection, creation dialogs and pickers
The OSLC creation factories for all the work item types
The OSLC resource shapes for all the work item types
The OSLC query capabilities
To be able to query for work items, it is necessary to analyze the query capabilities provided by the service provider. The query capabilities can be found by searching for the oslc:queryCapability nodes.
There are usually at least two query capabilities in the work item service provider for an EWM project area. One is the query capability for deliverables. A deliverable is also referred to as a release and is a work item attribute type. The query capability for deliverables can be found using the resourceType http://open-services.net/ns/cm#Deliverable.
The second query capability is for work items (in OSLC referred to as change requests). The image below shows the OSLC query capability for work items. It can be identified by the resourceType http://open-services.net/ns/cm#ChangeRequest.
Query capability for work items.
The oslc:queryBase is the URI for the work item OSLC query mechanism of the selected project area. It will be used when constructing the OSLC query URI. The query base has the form:
https://elm.example.com:9443/ccm/oslc/contexts/<project area UUID>/workitems
The resource shape provides the information about the attributes provided by the TRS. The resource shape contains an oslc:valueShape for work items based on the work item type defect.
Value shape for work items
This is the value shape that defines the common work item attributes, their types, and the allowed values for these attribute types. It is possible to GET the oslc:valueShape (providing the OSLC related headers) like below:
GET https://elm.example.com:9443/ccm/oslc/context/_8e5qfFpmEeukW7cqqDjAuA/shapes/workitems/defect
This value shape provides all the common attribute and link types. A work item of a specific type might still have additional custom attributes. The post EWM Work Item OSLC CM API explains how to get that information using the work item type specific resource shape associated with the creation factory.
The image below shows the code that gets the query capabilities out of the project area work item service provider document.
Extract the query capabilities from the service provider document
This code shows how the query base is extracted from the service provider document. To do this, the code identifies the query capability with the change request resource type and gets the associated query base.
Get the query base for the work items
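The original code is only shown as a screenshot. A sketch of the same idea, using the rdflib library to parse the RDF/XML service document, could look like the following; it builds on the session and headers from the sketch above, and it assumes the document uses the oslc:resourceType and oslc:queryBase predicates mentioned earlier:

from rdflib import Graph, Namespace

OSLC = Namespace("http://open-services.net/ns/core#")
OSLC_CM = Namespace("http://open-services.net/ns/cm#")

# 'session' and 'headers' are the authenticated session and OSLC headers from above
services_url = "https://elm.example.com:9443/ccm/oslc/contexts/_8e5qfFpmEeukW7cqqDjAuA/workitems/services"
services = session.get(services_url, headers=headers)

graph = Graph()
graph.parse(data=services.text, format="xml")

query_base = None
# Only the query capability for change requests carries an oslc:queryBase
for capability in graph.subjects(OSLC.resourceType, OSLC_CM.ChangeRequest):
    for base in graph.objects(capability, OSLC.queryBase):
        query_base = str(base)
print(query_base)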
Having the query base, it is now possible to execute OSLC queries. The code below executes the simplest form of an OSLC query:
Querying the query base.
The code performs the GET method on the query base.
GET https://elm.example.com:9443/ccm/oslc/contexts/_8e5qfFpmEeukW7cqqDjAuA/workitems
Accept application/rdf+xml; charset=utf-8
OSLC-Core-Version 2.0
Please note the OSLC headers Accept and OSLC-Core-Version above. The same headers are used in all calls in this blog.
Paging
The request is redirected to support paging, as shown below. The paging mechanism is used to split queries with large result sets into smaller, more manageable chunks.
Paging redirect
The response body contains the work item URIs for the first page of work items. Note that, because there is no query parameter, the query only returns the work item URIs and no other property data. Getting more properties of the resulting work items, e.g. to display the summary, would in this case require getting each work item using the URI from the response.
Response body containing work item URIs and information about paging.
In addition, the response contains information about the total number of results and the URL to get the next page, in case the number of results exceeds the maximum number of items returned per page. The image above shows the oslc:ResponseInfo, which contains oslc:nextPage with the URI to get the next query result page and oslc:totalCount with the total count of query results.
The code below shows how the query is executed. In a while loop, a GET request to the query URI is executed. The result of the call is analyzed; this provides the next page of the query, if paging is needed. If no next page is available, the query is finished.
Execute an OSLC Query
The code below processes a result page and gets the resulting work items to display them. It prints each work item URI and also tries to get and print the identifier (id) and the title (summary) of each work item in the result. If the identifier or the summary is not available, nothing is printed for it. The code then analyzes the paging information: it returns the next page URL, if there is one, and the total count of the results.
Try to get at the result details of the query page
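The original code is again a screenshot; a condensed sketch of both pieces together could look like this. It reuses the session, headers and query_base from the sketches above, and it assumes that the result page links its members with rdfs:member, which is what I have observed with EWM; identifier and title are only present if the query selected them:

from rdflib import Graph, Namespace, RDFS

OSLC = Namespace("http://open-services.net/ns/core#")
DCTERMS = Namespace("http://purl.org/dc/terms/")

next_page = query_base
total = None
while next_page:
    response = session.get(next_page, headers=headers)
    graph = Graph()
    graph.parse(data=response.text, format="xml")
    # The work item URIs are the rdfs:member values of the result page
    for member in graph.objects(None, RDFS.member):
        identifier = graph.value(member, DCTERMS.identifier)
        title = graph.value(member, DCTERMS.title)
        print(member, identifier, title)
    # Paging information is provided by the oslc:ResponseInfo resource
    total = graph.value(None, OSLC.totalCount)
    next_info = graph.value(None, OSLC.nextPage)
    next_page = str(next_info) if next_info else None
print("Total results:", total)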
URL Encoding
OSLC Query provides a mechanism to create simple queries to retrieve items. These queries are sent as a URL in a GET HTTP request. There are limitations on which characters can be sent in a URL, and some characters have specific meanings in the URL. To avoid creating and sending wrong or illegal URLs, parts of the OSLC query parameters need to be URL encoded. See some explanations about URL encoding here:
In the code in this post, URL encoding is done using the function urlEncodeString, as shown below.
comm.urlEncodeString('URL Encode me')
The code URL encodes the data in a way that has worked for me. I am not totally sure I understand the URL encoding in all its details and there might be issues in the code below. If I got something incorrect, please leave a comment with a suggestion to correct it.
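The helper comm.urlEncodeString is part of my own utility module; a minimal equivalent in plain Python, which encodes everything except the unreserved characters, could look like this:

import urllib.parse

def url_encode_string(value):
    # Encode all reserved characters; only unreserved characters are left as-is
    return urllib.parse.quote(value, safe="")

print(url_encode_string("dcterms:identifier=1"))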
More complex OSLC Queries
The code below uses the following constants (mind the = at the end of each term) which are used to compose more complex OSLC Queries:
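The constants were shown as an image in the original post; they are simply the names of the query statements, each ending with the equals sign (the variable names here are mine):

select_stmt = "oslc.select="
properties_stmt = "oslc.properties="
where_stmt = "oslc.where="
orderby_stmt = "oslc.orderBy="
searchterms_stmt = "oslc.searchTerms="
prefix_stmt = "oslc.prefix="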
The statements represented by above constants and how they work are explained in the following sections.
The oslc.select Statement
OSLC queries support selecting the properties that should be returned by the query. The statement used for that is the oslc.select statement. The code below shows how the select statement is used. The first three lines only control running the queries and creating the file name for logging and can be ignored.
The interesting part is selectString1. It defines the list of properties that should be returned for each query result. In the case below, for each work item we want the type, the id, the title, the description, when it was last modified, who modified it, when it was created, and who created it. This information is encoded in selectString1 as a comma separated list of property identifiers. The select statement term is then created by concatenating the select statement ‘oslc.select=’ with the URL encoded version of selectString1. This term is added to the query base as the first query parameter, using the separator ‘?’.
Select statement is provided to the OSLC query
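A sketch of how such a query URI can be composed; the dcterms property names are my assumption of the standard OSLC CM properties for the attributes listed above, the original code may use a slightly different list:

import urllib.parse

query_base = "https://elm.example.com:9443/ccm/oslc/contexts/_8e5qfFpmEeukW7cqqDjAuA/workitems"
selectString1 = ("dcterms:type,dcterms:identifier,dcterms:title,dcterms:description,"
                 "dcterms:modified,dcterms:contributor,dcterms:created,dcterms:creator")
# URL encode the property list and append it as the first query parameter
query_uri = query_base + "?" + "oslc.select=" + urllib.parse.quote(selectString1, safe="")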
When this query is run, the query result contains the selected properties for the work items that are returned. The image below shows the resulting data in the response.
The query result for a query using a select statement
The select statement can be nested to provide nested data. It is possible to use ‘*’ to select all available properties. The amount of data that is transferred can impact the performance of the communication. The specification mentions that servers SHOULD accept rdf:nil as a single property. Both extremes are commented out in the code above.
The oslc.properties Statement
The oslc.properties statement can be used to limit the set of properties or attributes of a work item that are considered in the OSLC request. The oslc.properties can be used in a GET request. In this case it can be used to specify which properties the requestor is interested in. The server does not have to collect and transmit all properties, but only the ones of interest.
The oslc.properties statement can also be used when updating a work item. This allows performing partial updates. There is no example provided in this blog, but the syntax and encoding of oslc.properties and oslc.select are the same.
The oslc.where Statement
The oslc.where statement can be used to specify which work items to query. The first example below shows a query for the work item with the identifier 1. The select statement uses * to select all work item properties.
Get the work item with the ID 1 and return all properties.
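A sketch of the corresponding URI construction (assuming dcterms:identifier is the property for the work item id, which is what I have seen with EWM; query_base is the query base discovered above):

import urllib.parse

# query_base as discovered above
where_string = 'dcterms:identifier=1'
query_uri = (query_base
             + "?" + "oslc.select=" + urllib.parse.quote("*", safe="")
             + "&" + "oslc.where=" + urllib.parse.quote(where_string, safe=""))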
The URL that is created can be seen below. The encoding makes it hard to understand. The important request headers are the OSLC-Core-Version and the Accept header.
The GET request with headers and URL encoding
The image below shows parts of the resulting RDF-XML for the work item with all the data for its properties.
The response with all properties
The in statement can be used to search for properties whose value is in a given list. The code below searches for the work items with the identifiers 1, 3, 9 and 50. The query still runs successfully if some of the IDs are not found.
Find the work items out of a list of ID’s
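A sketch of the corresponding where term, using the in operator of the OSLC query syntax (dcterms:identifier is again assumed as the id property):

import urllib.parse

# query_base as discovered above
where_string = 'dcterms:identifier in [1,3,9,50]'
query_uri = query_base + "?" + "oslc.where=" + urllib.parse.quote(where_string, safe="")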
Please note, as documented here, especially in the syntax section, that the oslc.where statement only supports composing complex conditions using the boolean operation ‘and‘. The operations ‘not’ and ‘or’, or other complex nesting, are not supported.
Before looking into more examples, let's introduce the next statement.
Sorting – The oslc.orderBy Statement
The statement oslc.orderBy can be used to define the order of the result set. The example below finds all work items with the type defect and requests the result set to be ordered by severity (descending) and then by id (ascending). The order is defined with a prefix for each property: + for ascending and – for descending order. Multiple order specifications can be given in a comma separated list.
Search for all defects and sort by descending severity and ascending id
Unfortunately, orderBy does not seem to work for EWM 7.0.2 with the accept format application/rdf+xml. I have seen it working with application/json.
Full text Search – the oslc.searchTerms Statement
The statement oslc.searchTerms can be used for full text search. The query below finds all work items that can be found by a full text search for the term ‘dividend‘; it returns 20 hits with the JKE Banking sample.
Full text search for ‘dividend’
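A sketch of the full text search query; note that the search term itself is a quoted string inside the oslc.searchTerms value:

import urllib.parse

# query_base as discovered above
query_uri = query_base + "?" + "oslc.searchTerms=" + urllib.parse.quote('"dividend"', safe="")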
Define a Namespace Prefix – oslc.prefix
The OSLC query mechanism allows defining new namespace prefixes that can be used in the oslc.where, oslc.orderBy, and oslc.select statements. My experimentation with this feature was quite problematic. What worked for me is the following:
To define a namespace, create a variable with the namespace prefix definition, including the equals sign, and URL encode it. Only define one prefix for one namespace in each oslc.prefix statement.
The code below defines several namespaces (mind the encoding part). Based on these namespaces, I create a prefix term for each namespace from ‘oslc.prefix=’ and the namespace definition. To define multiple namespaces in a query, concatenate the prefix terms with ‘&’. Some discussions on Jazz.net indicate that it might be possible to use a comma separated list; I could not get this to work.
Defining prefixes
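A sketch of how I define the prefixes; each oslc.prefix term defines exactly one prefix, and multiple terms are concatenated with ‘&’ (the namespaces shown are the standard dcterms and OSLC core namespaces):

import urllib.parse

dcterms_prefix = "oslc.prefix=" + urllib.parse.quote("dcterms=<http://purl.org/dc/terms/>", safe="")
oslc_prefix = "oslc.prefix=" + urllib.parse.quote("oslc=<http://open-services.net/ns/core#>", safe="")
all_prefixes = dcterms_prefix + "&" + oslc_prefix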
To use the prefixes in a query, just include the prefix terms as query parameters.
Using prefixes in queries
The query URI can become very long when providing many prefixes.
The prefixes can make the query quite lengthy.
Search For Items With a Specific Enumeration Literal Value
The image below shows a query that selects only work items that have the severity set to ‘Critical’. The enumeration literal to be provided for the selection can be looked up by querying the resource shape and allowed values for each work item type associated with the creation factories. For common work item attributes, the available attributes and allowed values can be found by following the chain from the resourceShape to the valueShape associated with the query base.
Queries for the severity literal representing the severity ‘Critical’
Search For Items Modified After a Certain Date
The image below shows a where term that selects all work items that were modified after a certain date.
Items modified after
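A sketch of such a where term; the timestamp is written as a typed xsd:dateTime literal, which requires the xsd prefix to be defined as described above (the concrete date is just an example):

import urllib.parse

# query_base as discovered above
xsd_prefix = "oslc.prefix=" + urllib.parse.quote("xsd=<http://www.w3.org/2001/XMLSchema#>", safe="")
where_string = 'dcterms:modified>"2021-07-01T00:00:00Z"^^xsd:dateTime'
query_uri = (query_base + "?" + xsd_prefix
             + "&" + "oslc.where=" + urllib.parse.quote(where_string, safe=""))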
Compound Where Terms
The OSLC query mechanism supports compound where terms that are built from simple terms. The only supported boolean operation is ‘and’. ‘Or’ is not supported, and neither is ‘not’. The code below shows a query that looks for work items of type ‘defect’ and severity ‘Critical’.
Items of type ‘defect’ and severity ‘Critical’
Summary
This concludes my short blog about the OSLC query mechanism. My intention, in addition to understanding it better myself, was to provide some working examples and show how to create the queries. I have tried this in the past and got somewhere, but it was quite challenging. The documentation available to me in the past was often lacking some small but important piece of information. The documentation for OSLC CM 3.0 has improved a lot, but apparently there are still issues you can run into.
So as always I hope I was able to provide simple but relevant examples that help users of this technology to achieve their goals and save some time.
I have collected a lot of experience with the Java APIs for RTC/EWM over the years. Until 2020 I did not use the RTC/EWM REST and OSLC APIs at all. Luckily I got involved in several engagements where I had the opportunity to explore these APIs and learn how to use them.
I found the documentation for these APIs that I was able to find underwhelming. The available documentation often lacked complete working examples. There was usually some critical part missing, or there were no examples at all, just the API specification. The latter seems to be deliberate, to focus on the specification and not the specifics. However, the lack of samples was confusing and left too much room for interpretation. I ended up using search engines a lot, and I had to experiment a lot to get things moving.
Obviously I learned a lot, and I would like to spare others the hassle. So, as usual, I want to share the experiences and lessons learned with the community. My intention is to provide some relevant and working examples that are easy enough to perform on your own. I hope this can save people some time when trying to use these APIs as well.
This post is the first in a series of posts and provides links to the other posts for easy navigation. In addition, this post discusses which development environments and tools were used to explore and use the APIs. I will share some of the code I have developed over time to ease the exploration.
The planned blog posts for the series
This is the series of planned posts I intend to publish over time. Most of the examples will be EWM based, but quite a lot of the content applies to other ELM applications as well. The examples were performed with versions 6.0.6.1 and 7.0.x.
Since back then, the ELM API Landing Page has been added to provide a more comprehensive overview of the available ELM APIs. If you are interested in ELM related APIs, go over that page and find out what APIs are available. The page also points to other resources such as the OSLC standards and available workshops.
Finally the Interesting links page is a collection of, well, interesting links I found over the years.
Development environment choices
The first information I can share is how I explored and used the APIs, a little bit about the development environment options available, and which development environments I used to explore the APIs.
I like developing with Java a lot. The EWM/RTC Java APIs are very rich and it is relatively easy to develop code for EWM/RTC, provided the development environment has been set up by performing at least the complete Lab 1 of the Rational Team Concert Extensions Workshop. Eclipse, the RTC SDK and the Plain Java Client Libraries allow the development of extensions and automation based on the EWM Java SDK and API. The same environment can also be used to develop code against the REST and OSLC APIs.
It is also possible to use Java, or any other language supporting HTTP, to develop code for the EWM/RTC REST and OSLC APIs, just using the libraries and frameworks available for that language.
I have already used the Java based Eclipse Lyo framework to develop a client automation for Doors Next Generation (DNG), and I used the Eclipse Lyo Designer code generation framework to develop integration servers. My experience was that Lyo is a nice framework that helps a lot, if you know what you are doing. When I did not, I found it challenging, especially debugging and understanding what was going on in the HTTP requests.
I have looked into and used Postman and the Firefox add-on RESTClient to experiment with REST and OSLC APIs. They are very useful for experimentation and I use them in parallel to the other development environments. A typical use case is to log in and experiment with one call to figure out how it works. If the call sequence and the amount of data become too big, this approach is no longer really efficient, and I would switch to something else.
I started using Python and Jupyter Notebook in 2020 when I needed some automation for importing, manipulating, consolidating and querying a lot of CSV data for a customer. I was very impressed with the quality of the available libraries and the turnaround times that were achievable. When I was asked to help one of our customer teams with information about the RTC/EWM APIs for the development of a prototype of a customer specific mobile client, I decided to use Python instead of Java. As mentioned above, I also used RESTClient or Postman for experimentation with one or two API requests.
There are various Python development environments around. I do not think it matters much which one you use. I used Spyder, which comes with Anaconda. There are also PyCharm and Kite. I am not opinionated. I just noticed that these development environments are still far away from the quality of Eclipse with its built-in compiler and debugger. There are always tradeoffs, I guess.
Python – Libraries and Code Samples
The focus of the blog posts is more on the APIs, how they work and how they can be used, and not so much on Python and how to use it. However, I figured that I want to share some code I developed over time that enables easier data collection and debugging. So I will provide examples where I see fit.
The most important aspect of HTTP based APIs is to understand which method is used with which URI, which headers are used, which formats are sent and accepted, and which request body (if any) is sent. The response data is also key, especially the status, response headers such as Location, and obviously the response body. A mechanism that can log all this information is key to understanding the APIs and to faster turnaround times.
Python has a lot of libraries for various purposes. The libraries that I used are shown below, loosely grouped by what they are used for. First the libraries used for operating system and system specific purposes such as logging, files and execution.
import os
import sys
from datetime import datetime, timezone
from pathlib import Path
Then the requests library, which is used for session handling and HTTP communication in my code.
import requests
from requests.auth import HTTPBasicAuth
from requests.packages.urllib3.exceptions import InsecureRequestWarning
The code below is necessary to suppress warnings about self signed or invalid certificates. This is a typical situation for me, as I usually develop against a local test system.
# Disable warnings for self signed or invalid SSL certificates
# to be able to talk to test systems
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
# Start a session
session = requests.Session()
The libraries I used for RDF XML and JSON parsing and representation.
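In my code this boils down to something like the following sketch, assuming rdflib for the RDF graph handling (which is also what the serialization code later in this post uses) and the standard json module; the exact imports may vary.
# RDF graph parsing and serialization (rdflib)
from rdflib import Graph, URIRef, Namespace
from rdflib.namespace import RDF, DCTERMS
# JSON parsing and pretty printing
import json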
I ended up creating a base library for the communication with the ELM system that allows better reuse. I will not share all the code at the moment, but I will share some basic learnings and code that were key for me to be able to do my work. The library is imported as follows:
from elmcommlib import ELMCommLib as elmcomm
The library is initialized with a session, the public URI and a name for the log folder to be created.
publicURI = 'https://elm.example.com:9443/ccm'
paName='JKE Banking (Change Management)'
user='ralph'
password='ralph'
# Disable warnings for self signed or invalid SSL certificates
# to be able to talk to test systems
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)
# Start a session
session = requests.Session()
comm=elmcomm(session, publicURI,'logCreateWiRDF')
The library also provides a mechanism to create and set log file folders using createlogFolder("FolderName"). In case the folder already exists, it can alternatively be set with setlogFolder("FolderName").
The log folder is used by the method writeResult() shown below, which dumps the complete communication into a text file when a file name is provided. The file names should be constructed and numbered to make the flow of the call sequence easier to follow. Such a numbered sequence is shown below as an example.
The communication logs are always created in the current log folder. This makes it possible to split the logs for the API usage into smaller sequences by switching the current log folder.
Content of a log folder.
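A hypothetical call sequence with numbered file names could look like this; the result variables and URIs are placeholders for earlier requests.
comm.setlogFolder('logDiscovery')
comm.writeResult('01_get_rootservices.txt', result1, rootServicesURI)
comm.writeResult('02_get_service_provider_catalog.txt', result2, catalogURI)
comm.writeResult('03_get_services_xml.txt', result3, servicesURI)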
A debug print dPrint() helps to avoid chatty logging. You can keep the logging statement in the code and force it to show only when you want. Printing a timestamp using timeStamp() is sometimes useful, especially when looking at the performance of calls.
# Folder for log files
def createlogFolder(self, folderName):
    defaultLogFolder = 'commlogs'
    if(folderName==None):
        folderName = defaultLogFolder
    if(folderName==''):
        folderName = defaultLogFolder
    script_dir = os.path.dirname(__file__)  # <-- absolute dir the script is in
    logFolder = os.path.join(script_dir, folderName)
    Path(logFolder).mkdir(parents=True, exist_ok=True)
    return logFolder

# Folder for log files
def setlogFolder(self, folderName):
    self.logFolder = self.createlogFolder(folderName)

# Log the HTTP communication for the request in a folder
def writeResult(self, fileName, result, url=None):
    if(fileName!='fileName'):
        self.dPrint(f"Execute: '{fileName}'")
        logFileName = os.path.join(self.logFolder, fileName)
        with open(logFileName, 'w') as f:
            if(url!=None):
                f.write(f"Destination URL: {url}\n\n")
            reqMethod = result.request.method
            reqURL = result.request.url
            reqBody = result.request.body
            f.write(f"Request: {reqMethod} {reqURL}\n")
            f.write("\nRequest Headers:\n")
            for header in result.request.headers:
                value = result.request.headers[header]
                f.write(f"\t{header} {value}\n")
            f.write(f"\nBody:\n{reqBody}\n")
            f.write(f"\nResponse Status: {result.status_code}\n")
            f.write("\nResponse Headers:\n")
            for header in result.headers:
                value = result.headers[header]
                f.write(f"\t{header} {value}\n")
            cookies = result.cookies._cookies
            f.write(f"\nResponse Cookies:\n {cookies}\n")
            f.write(f"\nResult Body:\n{result.text}\n")

# Debug print if debugging is on
def dPrint(self, message=None, doPrint=True):
    # DebugPrint, switch off by sending doPrint=False
    if(message!=None):
        if(doPrint==True):
            print(message)
        else:
            pass

# Print timestamp
def timeStamp(self, message):
    now = datetime.now(timezone.utc)
    self.dPrint(f"{now}: {message}")
The image below shows what a log file created using the method writeResult() looks like. Note that the log contains all the important pieces of the request/response pair. I used tooling in the editor Notepad++ to “pretty print” the XML section in the response body. This makes it much easier to understand.
Logged http request – response
Help with RDF XML
The REST and OSLC APIs provide different serializations for the content that they accept and provide. One of the older ones is XML based, using the Resource Description Framework (RDF) specification. Newer serializations such as JSON might also be available, depending on the API. I have experience with RDF and JSON, and I prefer JSON.
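As a rough sketch, the serialization is negotiated via the Accept header of the request; whether JSON is actually available depends on the specific API and resource, and resourceURI is just a placeholder here.
# Ask for RDF XML
rdfHeaders = {'Accept': 'application/rdf+xml', 'OSLC-Core-Version': '2.0'}
# Ask for JSON, where the API supports it
jsonHeaders = {'Accept': 'application/json', 'OSLC-Core-Version': '2.0'}
result = session.get(resourceURI, headers=jsonHeaders, verify=False)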
RDF is not for me. I always struggle to understand what and how I should be searching to get at the data I want, especially when the data is full of namespaces and the like. This was one of my biggest struggles with Eclipse Lyo. The HTTP client was hard to use for debugging, because the content was usually consumed when I tried to dump the response into a log file. So I could either have a log entry for debugging and the call would not proceed, or the call would work and I had no log data. Maybe I overlooked something.
In Python I was able to use the method writeResult() and still continue processing the response data. I was also able to use the function below to serialize RDF response bodies into a form that shows all the subject, predicate and object data and saves it to a file. That made it easier for me to work with RDF. I still prefer the JSON format, if available, but the OSLC discovery mechanism supported by RTC/EWM requires RDF XML in the first steps, so you will have to deal with it.
# Serializes a graph (based on RDF) in the nt format
# This format shows all graph nodes as Subject->Predicate->Object
# This allows to better understand what to search for
def debugSerialize(self, graph, fileName='fileName'):
    # Serializes a graph into the NT format. This provides
    # a great source to look into RDF triples in the graph
    if(fileName!='fileName'):
        logFileName = os.path.join(self.logFolder, fileName)
        graph.serialize(logFileName, format="nt")
The serialization formats supported out of the box are “xml”, “n3”, “turtle”, “nt”, “pretty-xml”, “trix”, “trig” and “nquads”. For me, NT and Turtle seem to be the most useful. I built in capabilities to save the XML data in NT and Turtle format to help me understand how to access the data later.
Update: The preferred option is to serialize the graph in Turtle format and look into that.
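A sketch of how I use this, assuming rdflib and the debugSerialize() method above; the response variable result and the file names are placeholders.
from rdflib import Graph
# Parse the RDF XML response body into a graph
graph = Graph()
graph.parse(data=result.text, format='xml')
# Dump the triples in NT format for exploration
comm.debugSerialize(graph, '04_services.nt')
# Alternatively serialize the graph in Turtle format
graph.serialize(os.path.join(comm.logFolder, '04_services.ttl'), format='turtle')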
This is how the parsed RDF graph data from above looks in the NT format. Every row (mind the word wrap) is a triple of subject, predicate and object. This provides hints on how to search for the data.
The RDF-XML in NT format, providing the triples in the model.
The capabilities above were absolutely key for me to be able to explore and understand the EWM/RTC APIs and to document them for my colleagues.
Operating on RDF requires the RDF namespace definitions in Python. I used the ones below and defined them in my library.
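A sketch of such namespace definitions, using rdflib; the exact set and the constant names in my library may differ.
from rdflib import Namespace
DCTERMS = Namespace('http://purl.org/dc/terms/')
OSLC    = Namespace('http://open-services.net/ns/core#')
OSLC_CM = Namespace('http://open-services.net/ns/cm#')
RTC_CM  = Namespace('http://jazz.net/xmlns/prod/jazz/rtc/cm/1.0/')
RDF_NS  = Namespace('http://www.w3.org/1999/02/22-rdf-syntax-ns#')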
This is the first of a series of posts; I hope to publish more soon. I will try to keep this post maintained and I am looking forward to the next posts. As always, I hope that my content, especially in this blog, helps someone in the ELM community out there. If it does, feedback would be awesome.
I am very passionate about the RTC Extensions Workshop as you might be able to tell from the content of this blog. Performing it with EWM 7.0.x provided several challenges. It became apparent that an update to the workshop would be beneficial.
I spent a considerable amount of time in the past two months updating the workshop. In summary, the following items were addressed:
Since the CCM server is shipped with the WebSphere Liberty profile, configuring the server for debugging needed to be changed. The old way to configure the server still worked in the 6.0.x versions, so this went unnoticed. With EWM 7.0.1 this is no longer the case, and the workshop was updated to address this.
The advanced capabilities introduced in the EWM SCM system in 6.x and later caused the screen shots showing the pending changes to deviate. The workshop setup tool was slightly changed to fix this.
The workshop setup tool and its shell script have been tested with Linux and macOS.
I wanted to add a section to Lab 1 explaining how to set up the existing Eclipse client/server development workspaces to better support development and debugging of the Plain Java Client Libraries. The new last optional section addresses this. For this reason, Lab 1 of the workshop is a must for anyone intending to create Java based automation or extensions for RTC/EWM.
I had an errata list with a number of small issues, typos, naming inconsistencies and the like that were fixed. During reviews a bunch more showed up and were fixed.
A colleague ran the workshop on his Mac, so this works. Use whatever is available for macOS, such as Eclipse, and where something is not specifically available, use the Linux versions.
The RTC Extensions Workshop has been published with an additional section for the new EWM versions and is now available for download. I will update recent posts around the workshop in the next few days.
As always, I hope that this blog post helps the users in the Jazz Community.
As a mitigation I suggested replacing the JRE with an earlier one, or one that works. Unfortunately, this does not look sustainable. I have tried downloading the newest IBM Java SDK and the issue still happens.
I have looked a little bit deeper into it and discovered that the server.startup script provides a debug option. Using this option allows remote debugging of the EWM server. I am looking into an update of the RTC/EWM Extensions Workshop for 7.0.x. Until this is available, here is a procedure for how to follow the workshop with the 7.0.x versions.
In Lab 1.2, do not add the following lines to the server.startup script; if you have already added them, remove them:
set JAVA_OPTS=%JAVA_OPTS% -Xdebug
set JAVA_OPTS=%JAVA_OPTS% -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=3388
Scroll to the bottom of the server.startup script and inspect the statements behind the setting of JAVA_OPTS. Note that an action and an option are supported.
Special parameters supported for starting Liberty Profile
Start server for remote debugging
The action -debug can be used to start the Liberty Profile Server in debug mode. When following the workshop, continue to Lab 1.5.
Follow the lab until step 1.5__32, then add the following steps:
Open a console/cmd window and change directory into the server folder containing the server.startup script. Stop the development server by running server.shutdown.
>server.shutdown
Wait until the server is successfully stopped. Start the development server by running the following command.
>server.startup -debug
Note that the server does not start up completely; it suspends, listening for a remote connection on a dt_socket port. The default port address is 8000.
Server waits in debug console
The server will not continue starting unless a remote connection is opened on that port.
Connect Eclipse as remote debugger
Follow the workshop and open the [RTCExt] Debug Running Liberty Profile launch configuration. This launch is used to connect to a remote Java application. In our case the application is the RTC/EWM development server waiting on port 8000.
Change the port in the launch to 8000 then click Apply and Debug.
Change debug port to 8000 apply and start debugging.
Switch to the command prompt. After a while, the server continues to start. Wait for the server application to start successfully.
Continue with the workshop and remote debug the server.
When disconnecting from the remote Java application, the RTC development server will again show that it is waiting for a remote connection.
Reprovision the Jazz applications
The option -clean can be used to force re-provisioning of the Jazz applications installed on the Liberty Profile server. This is necessary to force rereading custom server extensions when deploying them. The option can also be used together with the -debug action.
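For example (the exact combination and order of the parameters is an assumption on my part; check the output of the script for the supported syntax):
>server.startup -clean -debug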
Summary
Following the solution described above allows you to successfully perform Lab 1.5 of the Rational Team Concert Extensions Workshop. I am currently trying to find out if there
is an option to change the default port used for debugging
is an option to avoid suspending the server startup until a remote debugger has connected
I will update the workshop and publish the changes. I have already done several other changes to adjust the workshop for the newer EWM/RTC versions.
As always, I hope that the blog posts here help users in the Jazz community.
While running the test, I found that I had to increase the maximum heap size for the launch [RTCExt] Jetty RTC Server, because the JUnit test that is run to create the test database ran out of heap memory. I doubled the value from -Xmx256M to -Xmx512M and the error was gone.
Increase the available heap
The fix is only in the EWM/RTC Client SDK, so it is only necessary to download that again.
Please note that due to the name changes and newer versions of Eclipse, there might be some small differences between the screen shots in the RTC Extensions Workshop and the newer versions. However, they are so minor that this should not be an issue. I am currently looking into updating the current Extensions Workshop material with small changes to help guide the user through these differences.
I have seen issues with the RTC Client SDK for the 7.0.x releases recently. The Server SDK seems not to be affected.
Client plug-in development
UPDATE: The issue with the Client SDK has been found and corrected. Just download and use the new Client SDK.
UPDATE: The issue with the server debugging can be solved as explained below.
If you are using the Rational Team Concert Extensions Workshop with the EWM/RTC versions 7.0.0, 7.0.1 or 7.0.2 and try to start an Eclipse client from the client launch as described in section 1.9 Test the RTC Eclipse client launch, you might run into a problem. I reported the issue in Defect 511733 (use your Jazz.net credentials).
This issue has been solved and you should not see it any longer. Make sure to download the latest Client SDK from Jazz.net.
If you need to develop client extensions and run into this problem, check the status of the defect. Worst case, use a 6.0.6.1 SDK to develop your extension. This is absolutely possible if you do not rely on a new client API feature from 7.0.x.
Server debugging
If you are using the Rational Team Concert Extensions Workshop with EWM/RTC version 7.0.2 and try to launch the development server in debug mode, which allows attaching an Eclipse debugger to that server, the server startup fails. In ‘1.2__15 start the server using server.startup.bat’, after adding the debug configuration, the server does not start. The log folder only contains the files start.log and console.log. Their content indicates that the connection to the debug port failed because the port was already in use. When checking the port usage, however, the port is not in use. This has not been seen in any version before. See Defect 527867 for workarounds and solutions.