The EWM / Rational Team Concert Extensions Workshop and recent Eclipse Versions

I recently updated the EWM / Rational Team Concert Extensions Workshop for 7.0.x and for 7.0.2 SR1. There is an issue when using the P2 install explained in the workshop with more recent Eclipse clients. I think I have seen this with the 6.0.6.x as well as with the 7.0.x versions, depending on the Eclipse version I used.

EWM P2 install with more modern Eclipse client versions

The Eclipse client fails to connect to the EWM server with the error message “Could not get value of field private transient sun.util.calendar.BaseCalendar$Date java.util.Date.cdate in 2022-07-31 09:37:08.301”. I was recently contacted by a colleague, Thomas, who provided a solution for this issue. The solution is explained in this forum question. I added the following lines to the eclipse.ini file:

--add-opens=java.base/java.lang=ALL-UNNAMED
--add-opens=java.base/java.util=ALL-UNNAMED

An example is shown in the image below.

This solved the issue for me for Eclipse 2022-06 and might be a solution for other versions as well.

I tried several other things, such as different Java versions, but I never found a really reliable solution that allowed me to stay close to the workshop documentation.

Update

After discussing the findings with some of our developers, I found out that there was actually a setting in the eclipse.ini that configured a Java 17, overriding my -vm setting to use a different Java version. The setting was buried somewhere in the middle of the ini file and looked like this:

-vm
plugins/org.eclipse.justj.openjdk.hotspot.jre.full.win32.x86_64_17.0.3.v20220515-1416/jre/bin

This specific setting also obscured the fact that a Java is shipped with Eclipse. After removing that entry and providing Java 11, the error no longer happens. The --add-opens settings above are only needed to allow, under Java 17, an operation that is still allowed in Java 11. The defect that causes this is Illegal reflective access operation in EWM Eclipse Client (P2) (547430) and is fixed in 7.0.3. After removing the duplicate -vm pointing to Java 17, the issue went away.
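For orientation, here is a sketch of where the relevant eclipse.ini entries belong: the -vm entry must appear before -vmargs, and the --add-opens entries belong after -vmargs. The Java path below is only a placeholder for a local Java 11 installation; use either the -vm entry with Java 11, or keep Java 17 together with the --add-opens lines.

-vm
C:\java\jdk-11\bin
-vmargs
--add-opens=java.base/java.lang=ALL-UNNAMED
--add-opens=java.base/java.util=ALL-UNNAMED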

UI Changes in newer Eclipse versions

When adding the plugins to the search path, the UI has changed in more recent versions. The menu item “Add to Java Search”, used in Lab 1.3 and 1.7, has been renamed to “Add to Java Workspace Scope” in newer Eclipse versions like the one above.

Caveat with the P2 Install

Please note that there is still an issue with the P2 install of EWM. After the install, it is no longer possible to use Help->Install New Software or the Marketplace. There is a plugin that causes an error, as Thomas rightly pointed out to me.

Feature based Launches

Every now and then, I have seen issues with installing the Feature Based Launches in certain Eclipse versions. The launch options added by the Feature Based Launches (such as OSGI2 and Eclipse2) sometimes do not appear to be working: they do not show up in the Debug Configurations.

If this is the case, there is a reasonably safe approach that works: install the latest Eclipse EWM client that ships as a zip bundled with EWM/RTC. As an example, unzip it into C:\RTC702SR1Dev\installs\TeamConcert\. Adding the dropins folder and the Feature Based Launches has always worked for me so far. The dropins folder needs to be the folder C:\RTC702SR1Dev\installs\TeamConcert\jazz\dropins if you unzip the client that way.

Import the RTC Client Feature

Unfortunately, there is another caveat with this approach. In 1.8__53 Import a feature to make launching the RTC Eclipse client much easier, the path to import the feature com.ibm.team.rtc.client.feature is C:\RTC702SR1Dev\installs\TeamConcert\jazz and not C:\RTC702SR1Dev\installs\TeamConcert\eclipse with the example install above. If you have done this successfully, you can use the standard EWM Eclipse client for development.

Using multiple versions of Eclipse

It is possible to use both approaches in parallel by installing into different folders such as TeamConcertVanilla and TeamConcert_2022_06. Please be aware that there will be warnings when using the same workspace with different Eclipse versions.

Summary

This blog should help with overcoming some new issues that show up when performing the EWM/RTC Extensions workshop for the latest versions of EWM/RTC. These solutions make the work easier for me, and I hope sharing them helps others too.

WorkItem Command Line and RTC/EWM Extensions Workshop for 7.0.2 SR1

Work Item Command Line for 7.0.2 SR1

The Work Item Command Line had dependencies on the Plain Java Client Libraries, including dependencies on Log4j 1 in a few places. This breaks the Work Item Command Line for IBM Engineering Lifecycle Management 7.0.2 SR1, especially IBM Engineering Workflow Management 7.0.2 SR1 (EWM). WCL 5.3.1 and earlier will not work with 7.0.2 SR1.

It was requested to support 7.0.2 SR1 with the Work Item Command Line. I had a look and was able (I think) to update the code of WCL to work with the new Plain Java Client Libraries. This removes the dependencies on Log4j 1 and changes the functionality to use Log4j 2. The code is on a different branch named Log4j2, because it is incompatible with previous versions of EWM. A new pre-release was created and is available here as Work Item Command Line 6.0 prerelease. I have only done minimal testing. If you have an opportunity to run it, please report issues.

EWM/RTC Extensions Workshop for 7.0.2 SR1

To be able to perform the updates, I had to perform (most parts of) Lab 1 of the Rational Team Concert Extensions Workshop. The workshop still works. There are some smaller issues, and here are my errata so far.

  • I downloaded Eclipse IDE 2022-06 R and installed EWM into it. Unfortunately, there is an issue when connecting to the EWM server. The error is “Could not get value of field private transient sun.util.calendar.BaseCalendar$Date java.util.Date.cdate in 2022-07-31 09:37:08.301”. Here is an approach for how that can be solved.
  • When adding the plugins to the search path, the UI has changed and the menu item in the Eclipse version above is named “Add to Java Workspace Scope”.
  • When trying to create the test database with the JUnit launch, the test ran out of memory. I increased the memory settings in the launch (see the example VM argument after this list).
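For reference, the increase can be done in the VM arguments of the JUnit launch configuration; a value like the one below worked for me, although the exact size is just an example and may need to be adjusted for your system.

-Xmx2048m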

Summary

The changes in ELM 7.0.2 SR1 break the Work Item Command Line and require a new setup of the Extensions Workshop to develop extensions. The Work Item Command Line is available as a prerelease for ELM 7.0.2 SR1, and the Extensions Workshop seems to be working with minor limitations.

EWM Discovery

Discovery is the name for the method used to locate the entry points for the OSLC API in EWM and other ELM tools. The mechanism is the same for all applications, but there are differences in the details. This post aims to help with understanding the discovery process, with a focus on EWM and work items. Ultimately, we want to be able to create a new work item and need to discover everything that is required to do that.

Context of the blog post is the series

This is the series of planned posts I intend to publish over time. Most of the examples will be EWM based, but quite a lot of the content applies to other ELM applications as well. The examples were performed with versions 6.0.6.1 and 7.0.x.

External Links

Rootservices

The main entry point into EWM and other Jazz applications is the rootservices document. It is an XML document that is based on RDF. The document can be accessed using an HTTP GET on rootservices in the context root of the ccm and other applications. For example:

GET https://elm.example.com:9443/ccm/rootservices

The document is not password protected and does not require special headers to be accessed. It is only available as an RDF XML document; there is no other format to get it in. It is possible to use the URI of the rootservices document directly in the browser to display it.

The rootservices document contains information about all the resources and services provided by the Jazz Server.
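As a minimal sketch (the server URL is just an example value), the rootservices document can be fetched with the Python requests library like this:

import requests

# Public URI of the EWM/ccm application (example value)
rootservices_url = 'https://elm.example.com:9443/ccm/rootservices'

# The rootservices document is not protected, a plain GET is sufficient.
# verify=False is only used because the test system has a self signed certificate.
response = requests.get(rootservices_url, verify=False)
print(response.status_code)
print(response.text)  # RDF XML content of the rootservices document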

Service Provider Catalog

To create a work item we need to find the Work Item Service Provider Catalog. We are interested in the rdf:resource entry of the element oslc_cm:cmServiceProviders.

There are certainly many ways to achieve this, but the best way is to use RDF. RDF support exists in various languages. I am not the best person to explain RDF. My summary would be that RDF defines subject, predicate and object relationships in a graph, which here is serialized as XML. Once the graph is created, it can be queried. For example, if one has the subject and the predicate, it is possible to locate the objects.

The trick is to understand what the subject, predicate and object could be. I have always struggled to figure that out on my own, just looking at the RDF XML document. I have found two possible solutions that work for me.

  1. Serialize the graph as N-Triples and look into these
  2. Even better, serialize the graph as Turtle format and look into that

Here is a part of the rootservices document that was parsed as RDF XML and serialized into Turtle.

Turtle serialized rootservices document showing the subject

In this document, line 19 shows the subject. Please also note line 17, which shows a prefix definition we are interested in. The next part of the document of interest is shown below.

Turtle serialized rootservices document with predicate and object we are after

Line 100 shows the object we are after, the URI

https://elm.example.com:9443/ccm/oslc/workitems/catalog

The predicate to locate the object is

oslc_cm1:cmServiceProviders

Based on line 17 above, the predicate has the following namespace prefix

@prefix oslc_cm1: <http://open-services.net/xmlns/cm/1.0/>

Lines 104 to the end of the file show additional subjects and their predicates and objects.

In Python, it is possible to use the library rdflib to work with RDF XML. A core communication support library that I have developed for this defines the URIs and namespaces for the domains used. Also see the blog post Using the EWM REST and OSLC APIs for more information about this. The documentation for rdflib can be found here.

The image below shows an example of defining the namespace used above.

Defining a new namespace.
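In rdflib this boils down to a Namespace definition like the sketch below; the variable name is my own choice, and the URI matches the prefix definition from the Turtle output above.

from rdflib import Namespace

# Namespace for the OSLC CM 1.0 XML vocabulary (the oslc_cm1 prefix above)
oslc_xml_cm1_ns = Namespace('http://open-services.net/xmlns/cm/1.0/')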

The image below shows the import statements used for rdflib in my communication library that has all the RDF support infrastructure.

Imports for RDF XML rdflib

The image below shows the code that is used to serialize the rootservices document and the function to get the service provider catalog. The last argument is the predicate that locates the work item service provider catalog we are interested in. In this example the predicate is

comm.oslc_xml_cm1_ns.cmServiceProviders

Which translates to

http://open-services.net/xmlns/cm/1.0/cmServiceProviders

or

oslc_cm1:cmServiceProviders

used here to search for the service providers.

Searching the service provider catalog

The function below composes the subject and then gets the objects selected by the subject and the predicate.

Get the objects based on the subject and predicate

The function creates the RDF graph. This binds all the namespace definitions and aliases.

Then the rootservices document is parsed based on this RDF definition to create the graph's content. The code creates the URI for the rootservices document that represents the subject – it is basically the URI of the rootservices document. Finally, the code gets all objects selected by the subject and predicate, and creates an array of these URIs.

The code segment also shows how the work item service provider catalog URI is then taken out of the result array. Please note that all these activities did not require a login, username, password or anything in addition to the rootservices document.

The image below shows how the RDF graph is created and the desired aliases and namespaces are bound to the graph.

Create the graph by binding the namespace aliases and URIs
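Putting these pieces together, a simplified, self-contained sketch of the lookup (with hard coded example values instead of my library) could look like this:

import requests
from rdflib import Graph, Namespace, URIRef

# Namespace of the predicate we search for (oslc_xml_cm1_ns in my library)
oslc_xml_cm1_ns = Namespace('http://open-services.net/xmlns/cm/1.0/')

rootservices_url = 'https://elm.example.com:9443/ccm/rootservices'
response = requests.get(rootservices_url, verify=False)

# Create the graph and bind the namespace alias used in the Turtle output
graph = Graph()
graph.bind('oslc_cm1', oslc_xml_cm1_ns)

# Parse the rootservices document; publicID makes the document URI the base,
# so the rootservices URI can be used as the subject
graph.parse(data=response.text, format='xml', publicID=rootservices_url)

subject = URIRef(rootservices_url)
predicate = oslc_xml_cm1_ns.cmServiceProviders

# All objects for the subject/predicate pair; for work items this yields the
# URI of the work item service provider catalog
catalogURIs = [str(o) for o in graph.objects(subject, predicate)]
print(catalogURIs)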

Work Item Service

We just discovered the work item service provider catalog shown in the GET command below. The next step in the discovery chain is to look up the work item services for the project areas. This is done using the service provider catalog that was discovered above.

GET https://elm.example.com:9443/ccm/oslc/workitems/catalog

and the listed OSLC request headers.

Accept application/rdf+xml; charset=utf-8
OSLC-Core-Version 2.0

Note that this request is for a protected resource. The server will redirect to authentication. The details are explained in the previous post ELM Authentication.
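With the Python requests library, such a GET could look like the sketch below; authentication handling is omitted here, the session is assumed to be already authenticated as described in the post on ELM Authentication.

import requests

# Session that already passed authentication (see the post ELM Authentication)
session = requests.Session()

headers = {
    'Accept': 'application/rdf+xml; charset=utf-8',
    'OSLC-Core-Version': '2.0'
}

catalog_url = 'https://elm.example.com:9443/ccm/oslc/workitems/catalog'
result = session.get(catalog_url, headers=headers, verify=False)
print(result.status_code)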

The resulting RDF XML document contains the information we are interested in. The image below shows the Turtle serialization of the service provider catalog. It shows a structure similar to the one below.

Turtle format of the service provider catalog

The section with the predicate oslc:ServiceProviderCatalog shows the project areas' services documents. For some purposes this might be enough. It is possible to iterate all the service providers and get more information by performing a GET on each URI. However, the response we already got contains more information that can be used.

Service Provider details

By searching for the objects that have the predicate oslc:ServiceProvider instead, it is possible to access the URI of the service provider, as well as additional information such as the project area's name using dcterms:title. This can help reduce the number of subsequent calls to get these details.

The code below shows how to get that information from the serialized document. It builds arrays of project area names and URIs, which are used later, e.g. to find the index for a project area name and then get the related URI. There are likely better ways to store the information in Python.

Retrieve the service providers

The first step is to look up the subject for the predicate oslc:ServiceProvider. The resulting URI is the service provider for the project area. For example

https://elm.example.com:9443/ccm/oslc/contexts/_8e5qfFpmEeukW7cqqDjAuA/workitems/services.xml

As explained above, the code uses the found subject to get the project area URI in the oslc_cm1:details and the project area name in the dcterms:title attribute.
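A simplified sketch of this lookup with rdflib could look like the code below. It assumes the entries are typed oslc:ServiceProvider, as the Turtle output suggests, and follows the text above in reading the project area details from oslc_cm1:details; adjust the predicate if your own Turtle output shows a different namespace.

from rdflib import Graph, Namespace
from rdflib.namespace import DCTERMS, RDF

oslc_ns = Namespace('http://open-services.net/ns/core#')
oslc_xml_cm1_ns = Namespace('http://open-services.net/xmlns/cm/1.0/')

graph = Graph()
# result is the response of the GET on the work item service provider catalog above
graph.parse(data=result.text, format='xml')

paNames = []
paURIs = []
# Every entry typed oslc:ServiceProvider is the services.xml of one project area
for provider in graph.subjects(RDF.type, oslc_ns.ServiceProvider):
    name = graph.value(provider, DCTERMS.title)
    details = graph.value(provider, oslc_xml_cm1_ns.details)  # project area URI
    paNames.append(str(name))
    paURIs.append(str(provider))
    print(name, provider, details)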

Project Area OSLC Services

In EWM, work item services are project area specific. The reason is that each project area can have a different process, and the process defines the work item types, attributes, workflows and all that. The next step in the discovery process is to get the project area's services.xml.

GET https://elm.example.com:9443/ccm/oslc/contexts/_8e5qfFpmEeukW7cqqDjAuA/workitems/services.xml

This requires the OSLC headers:

Accept application/rdf+xml; charset=utf-8
OSLC-Core-Version 2.0

The resulting RDF XML document contains information about various OSLC related capabilities and services. By looking through the information, especially using the Turtle format, it is easy to identify:

  • The OSLC query capability
  • Various dialogs such as OSLC selection, creation dialogs and pickers
  • The OSLC creation factories for all the work item types
  • The OSLC resource shapes for all the work item types

Similar to the work item services above, it is possible to get the desired information. Here is example code for getting the creation factories and related resource shapes for all the work item types.

Getting the creation factories

The pattern repeats. To create a work item of a specific type, get the creation factory for that type. To get information about the attributes to create in the work item, get the resource shape for the type. For the attributes in the resource shape, get the type, allowed values and multiplicities.
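A sketch of that lookup, again with rdflib, could look like the code below. It assumes the creation factories are typed oslc:CreationFactory with oslc:creation and oslc:resourceShape properties, as the OSLC core vocabulary defines; services_response is a hypothetical variable holding the GET result for the project area services.xml above.

from rdflib import Graph, Namespace
from rdflib.namespace import DCTERMS, RDF

oslc_ns = Namespace('http://open-services.net/ns/core#')

graph = Graph()
# services_response is the GET result for the project area services.xml above
graph.parse(data=services_response.text, format='xml')

# Each creation factory carries the URI to POST a new work item to (oslc:creation)
# and the resource shape describing the attributes for that type (oslc:resourceShape)
for factory in graph.subjects(RDF.type, oslc_ns.CreationFactory):
    title = graph.value(factory, DCTERMS.title)
    creation = graph.value(factory, oslc_ns.creation)
    shape = graph.value(factory, oslc_ns.resourceShape)
    print(title, creation, shape)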

Other Formats

The rootservices document is only available in RDF XML. Depending on the tool, other formats might be supported for subsequent documents. For example, using the header

Accept application/json; charset=utf-8

it is possible to try to get the JSON representation. This works for EWM in various places, but there is no guarantee. The discovery mechanism in this case is similar to the XML example above; the only difference is that JSON is, in my opinion, a format that is easier for humans to read. Instead of creating a graph, it is necessary to use a JSON library to access the data.

The code below shows how to get the service providers when using JSON format instead of RDF XML.

Getting the service providers JSON

The discovery is analogous to what has been shown above with RDF. Even the structure of the code is similar, because the data structure is similar. It is just in a different format that is easier to digest than RDF, at least for me.
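As a rough sketch, the same catalog request with the JSON format could look like the code below. The exact keys in the returned JSON depend on what the server delivers, so the navigation here only prints the top level keys for exploration rather than assuming specific names.

import json
import requests

# Session that already passed authentication, as before
session = requests.Session()

headers = {
    'Accept': 'application/json; charset=utf-8',
    'OSLC-Core-Version': '2.0'
}

catalog_url = 'https://elm.example.com:9443/ccm/oslc/workitems/catalog'
result = session.get(catalog_url, headers=headers, verify=False)

# Instead of building an RDF graph, parse the response body as JSON
catalog = json.loads(result.text)

# Print the top level keys to explore the structure; the key names mirror the
# property names seen in the Turtle output
print(catalog.keys())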

Summary

I have struggled for a while with how to explain the discovery process. Especially getting the data out of the RDF XML has been hard. Since I discovered the Turtle format, it makes a lot more sense to me. So, as always, I hope that this was of interest and helps users and the Jazz community make their lives easier.

Using the EWM REST and OSLC APIs

I have collected a lot of experience with the Java APIs for RTC/EWM over the years. Until 2020 I did not use the RTC/EWM REST and OSLC APIs at all. Luckily I got involved in several engagements where I had the opportunity to explore these APIs and learn how to use them.

I found the documentation I was able to find for these APIs underwhelming. The available documentation was often lacking complete working examples. There was usually some critical part missing, or there were no examples at all, just the API specification. The latter seemed to be systematic, focusing on the specification and not the specifics. However, the lack of samples was confusing and left too much room for interpretation. I ended up using search engines a lot and had to experiment a lot to get things moving.

Obviously I learned a lot, and I would like to spare others the hassle. So, as usual, I want to share the experiences and lessons learned with the community. My intention is to provide some relevant, working examples that are easy enough to perform on your own. I hope this can save people some time when trying to use these APIs as well.

This post will be the first in a series of posts and provide links to the other posts for easy navigation. In addition, this post will discuss which development environments and tools were utilized to explore and use the APIs. I will share some of the code I have developed over time to ease the exploration.

The planned blog posts for the series

This is the series of planned posts I intend to publish over time. Most of the examples will be EWM based, but quite a lot of the content applies to other ELM applications as well. The examples were performed with versions 6.0.6.1 and 7.0.x.

I wrote Learning To Fly: Getting Started with the RTC Java API’s a couple of years back and it is still relevant. Read it to understand What API’s are Available for RTC and What Can You Extend? and how to get started with the RTC/EWM Java APIs. The post and the linked posts contain even more valuable links with respect to APIs.

Since back then, the ELM API Landing Page has been added to provide a more comprehensive overview about the available ELM APIs. If you are interested in ELM related APIs go over that page and find out what APIs are available. This page also points to other resources such as the OSLC standards and available workshops.

Finally the Interesting links page is a collection of, well, interesting links I found over the years.

Development environment choices

The first thing I can share is how I explored and used the APIs, explaining a little bit about the development environment options available and which development environments I used.

I like developing with Java a lot. The EWM/RTC Java APIs are very rich and it is relatively easy to develop code for EWM/RTC, provided the development environment was set up by performing at least the complete Lab 1 of the Rational Team Concert Extensions Workshop. Eclipse, the RTC SDK and the Plain Java Client Libraries allow development of extensions and automation based on the EWM Java SDK and API. The same environment can also be used to develop code against the REST and OSLC APIs.

It is also possible to use Java, or any other language supporting HTTP, to develop code for the EWM/RTC REST and OSLC APIs, just using libraries and available frameworks.

I have already used the Java based Eclipse Lyo framework to develop a client automation for Doors Next Generation (DNG), and I used the Eclipse Lyo Designer code generation framework to develop integration servers. My experience was that Lyo is a nice framework that helps a lot if you know what you are doing. When I did not, I found it challenging, especially debugging and understanding what was going on in the HTTP requests.

I have looked into and used Postman and the Firefox add-on RESTClient to experiment with REST and OSLC APIs. They are very useful for experimentation, and I use them in parallel to the other development environments. A typical use case is to log in and experiment with one call to figure out how it works. If the call sequence and the amount of data become too big, this is not really efficient any more, and I would use a different approach.

I started using Python and Jupyter Notebook in 2020 when I needed some automation for importing, manipulating, consolidating and querying a lot of CSV data for a customer. I was very impressed with the quality of the available libraries and the turnaround times that were achievable. When I was asked to help one of our customer teams with information about the RTC/EWM APIs for the development of a prototype for a customer-specific mobile client, I decided to use Python instead of Java. As mentioned above, I also used RESTClient or Postman for experimentation with one or two API requests.

There are various Python development environments around. I do not think it matters which one you use. I used Spyder, which comes with Anaconda. There are also PyCharm and Kite. I am not opinionated. I just notice that these development environments are far from the quality of Eclipse with its built-in compiler and debugger. There are always tradeoffs, I guess.

Python – Libraries and Code Samples

The focus of the blog posts is more on the APIs, how they work and how they can be used, and not so much on Python and how to use it. However, I figured that I want to share some code I developed over time that enables easier data collection and debugging. So I will provide examples where I see fit.

The most important aspect of HTTP based APIs is to understand which method is used with which URI, which headers are used, which formats are sent and accepted, and which request body (if any) is sent. The response data is also key, especially the status, response headers such as Location and obviously the response body. A mechanism that can log all this information is key to understanding the APIs and to faster turnaround times.

Python has a lot of libraries for various purposes. The libraries that I used are shown below, loosely grouped by what they are used for. First, the libraries used for operating system and system specific purposes such as logging, files and execution.

import os
import sys
from datetime import datetime, timezone
from pathlib import Path

Then the requests library, which is used for session handling and HTTP communication in my code.

import requests
from requests.auth import HTTPBasicAuth
from requests.packages.urllib3.exceptions import InsecureRequestWarning

The code below is necessary to suppress warnings about certificates. This is a typical situation for me, as I usually develop against some local test system.

# Disable warnings for self signed or invalid SSL certificates 
# to be able to talk to test systems
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)

# Start a session
session = requests.Session()

The libraries I used for RDF XML and JSON parsing and representation:

import json

from rdflib import Graph, URIRef, Namespace
from rdflib.namespace import CSVW, DC, DCAT, DCTERMS, DOAP, FOAF, ODRL2, ORG, OWL, PROF, PROV, RDF, RDFS, SDO, SH, SKOS, SOSA, SSN, TIME, VOID, XMLNS, XSD

A miscellaneous import used for encoding:

from base64 import b64encode

Python Logging and Reuse

I ended up creating a base library for the communication with the ELM system that allows better reuse. I will not share all the code at the moment, but I will share some basic learnings and code that I found to be key for getting my work done. The library is imported and referred to as follows:

from elmcommlib import ELMCommLib as elmcomm

The library is initialized with a session, the public URI and a name for the log folder to be created.

publicURI = 'https://elm.example.com:9443/ccm'
paName='JKE Banking (Change Management)'
user='ralph'
password='ralph'

# Disable warnings for self signed or invalid SSL certificates 
# to be able to talk to test systems
requests.packages.urllib3.disable_warnings(InsecureRequestWarning)

# Start a session
session = requests.Session()
comm=elmcomm(session, publicURI,'logCreateWiRDF')

The library also provides a mechanism to create and set log file folders using createlogFolder("FolderName"). If the folder already exists, it can alternatively be set with setlogFolder("FolderName").

The log folder is used by the method writeResult() shown below, which dumps the complete communication into a text file when a file name is provided. The file name should be constructed and numbered to better understand the flow of the sequences. The image below shows such a sequence with file name numbering as an example.

The communication logs are always created in the current log folder. This allows splitting the logs for the API usage into smaller sequences by switching the current log folder.

Content of a log folder.

A debug print method dPrint() helps avoid chatty logging. You can keep the logging entry and force it to show if you want. Printing a timestamp using timeStamp() is sometimes useful, especially when looking at the performance of calls.

    # Folder for log files
    def createlogFolder(self,folderName):
        defaultLogFolder= 'commlogs'
        if(folderName==None):
            folderName=defaultLogFolder
        if(folderName==''):    
            folderName=defaultLogFolder
        script_dir = os.path.dirname(__file__) #<-- absolute dir the script is in
        logFolder=os.path.join(script_dir, folderName)
        Path(logFolder).mkdir(parents=True, exist_ok=True)
        return logFolder

    # Folder for log files
    def setlogFolder(self, folderName):
       self.logFolder=self.createlogFolder(folderName)


    # Log the HTTP communication for the request in a folder
    def writeResult(self, fileName, result, url=None):
        if(fileName!='fileName'):
            self.dPrint(f"Execute: '{fileName}'")
            logFileName=os.path.join(self.logFolder, fileName)
            with open(logFileName,'w') as f:
                if(url!=None):
                    f.write(f"Destination URL: {url}\n\n")
                reqMethod = result.request.method
                reqURL=result.request.url
                reqBody=result.request.body
                f.write(f"Request: {reqMethod} {reqURL}\n")
                f.write("\nRequest Headers:\n")    
                for header in result.request.headers:
                    value=result.request.headers[header]
                    f.write(f"\t{header} {value}\n")
                f.write(f"\nBody:\n{reqBody}\n") 
                f.write(f"\nResponse Status: {result.status_code}\n")
                f.write("\nResponse Headers:\n")
                for header in result.headers:
                    value=result.headers[header]
                    f.write(f"\t{header} {value}\n")
                cookies=result.cookies._cookies    
                f.write(f"\nResponse Cookies:\n {cookies}\n")
                f.write(f"\nResult Body:\n{result.text}\n")        
      
      
    # Debug print if debugging is on
    def dPrint(self, message=None, doPrint=True):
        # DebugPrint, switch off by sending doPrint=False
        if(message!=None):
            if(doPrint==True):
                print(message)
            else:
                pass
    

    # Print timestamp
    def timeStamp(self, message):
        now = datetime.now(timezone.utc)
        self.dPrint(f"{now}: {message}")
   

The image below shows what a log file created using the method writeResult() looks like. Note that the log contains all the important pieces of the request/response pair. I used tooling in the editor Notepad++ to “pretty print” the XML section in the response body. This makes it much easier to understand.

Logged http request – response
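A typical usage in my scripts looks roughly like the sketch below; it assumes the comm, session and publicURI variables from the initialization above, and the file names are just examples, with the numeric prefix documenting the order of the calls in the sequence.

headers = {
    'Accept': 'application/rdf+xml; charset=utf-8',
    'OSLC-Core-Version': '2.0'
}

# Switch to a dedicated log folder for this sequence
comm.setlogFolder('logDiscovery')

url = publicURI + '/rootservices'
result = session.get(url, headers=headers, verify=False)
comm.writeResult('01_GET_rootservices.txt', result, url)

url = publicURI + '/oslc/workitems/catalog'
result = session.get(url, headers=headers, verify=False)
comm.writeResult('02_GET_workitem_catalog.txt', result, url)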

Help with RDF XML

The REST and OSLC APIs provide different serializations for the content that they accept and provide. One older one is XML based, using the Resource Description Framework (RDF) specification. Newer formats such as JSON might also be available. I have experience with RDF and JSON, and I prefer JSON.

RDF is not for me. I always struggle to understand what and how I should be searching to get the data I want, especially when the data is full of namespaces and what have you. This was one of the biggest struggles I had with Eclipse Lyo. The HTTP client was hard to use for debugging, because the content was usually consumed when I tried to dump the response into a log file. So I could either have a log entry for debugging while the call would not proceed, or the call would work and I had no log data. Maybe I overlooked something.

In Python, I was able to use the method writeResult() and continue processing the response data. I was able to use the function below to serialize RDF response bodies into a form that shows all the subject, predicate and object data and save it into a file. That made it easier for me to work with RDF. I still prefer the JSON format, if available. The OSLC discovery mechanism supported by RTC/EWM requires RDF XML in the first steps, so you will have to deal with it.

    # Serializes a graph (based on RDF) in the nt format 
    # This format shows all graph nodes as Subject->Predicate->Object
    # This allows to better understand what to search for
    def debugSerialize(self, graph, fileName='fileName'):
        # Serializes a graph into the NT format. This provides 
        # a great source to look into RDF triples in the graph
        if(fileName!='fileName'):
            logFileName=os.path.join(self.logFolder, fileName)
            graph.serialize(logFileName, format="nt")

The serialization formats supported out of the box are “xml”, “n3”, “turtle”, “nt”, “pretty-xml”, “trix”, “trig” and “nquads”. For me, NT and Turtle seem to be the most useful, so I built in capabilities to save the XML data in NT and Turtle format to help understand how to access the data later.

Update: The preferred option is to serialize the graph as Turtle format and look into that.
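Usage is straightforward; here is a sketch assuming a graph parsed from an earlier response and the comm library instance from above, which keeps the current log folder in its logFolder attribute.

import os
from rdflib import Graph

graph = Graph()
graph.parse(data=result.text, format='xml')  # result from an earlier GET

# NT dump using the helper shown above
comm.debugSerialize(graph, '03_catalog.nt')

# Turtle dump into the same log folder (the preferred format)
graph.serialize(os.path.join(comm.logFolder, '03_catalog.ttl'), format='turtle')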

This is how the parsed RDF graph data from above looks in the NT format. Every row (mind the word wrap) is a triple of subject, predicate and object. This provides hints on how to search for the data.

The RDF-XML in NT format, providing the triples in the model.

The capabilities above were absolutely key for me to be able to explore and understand the EWM/RTC APIs and document them for my colleagues.

Operating on RDF requires the RDF definitions in Python. I used the ones below and defined them in my library.

    #RTC CM RDF definitions
    rtc_cm = 'rtc_cm'
    rtc_cm_URI = 'http://jazz.net/xmlns/prod/jazz/rtc/cm/1.0/'
    rtc_cm_ns = Namespace(rtc_cm_URI)
    
    oslc = 'oslc'
    oslc_URI = 'http://open-services.net/ns/core#'
    oslc_ns = Namespace(oslc_URI)
    
    oslc_cm = 'oslc_cm'
    oslc_cm_URI = 'http://open-services.net/ns/cm#'
    oslc_cm_ns = Namespace(oslc_cm_URI)
    
    oslc_xml_cm1 = 'oslc_cm1'
    oslc_xml_cm1_URI = 'http://open-services.net/xmlns/cm/1.0/'
    oslc_xml_cm1_ns = Namespace(oslc_xml_cm1_URI)
    
    jfs_process = 'jfs_proc'
    jfs_process_URI ='http://jazz.net/xmlns/prod/jazz/process/1.0/'
    jfs_process_ns = Namespace(jfs_process_URI)

    oslc_rm = 'oslc_rm'
    oslc_rm_URI = 'http://open-services.net/xmlns/rm/1.0/'
    oslc_rm_ns = Namespace(oslc_rm_URI)
    
    oslc_config = 'oslc_config'
    oslc_config_URI = 'http://open-services.net/ns/config#'
    oslc_config_ns = Namespace(oslc_config_URI)

Summary

This is the first of a series of posts; I hope to publish more soon. I will try to keep this post maintained, and I am looking forward to the next posts. As always, I hope that my content, especially in this blog, helps someone in the ELM community out there. If it does, feedback would be awesome.

The RTC Extensions Workshop has been updated for EWM 7.0.x

I am very passionate about the RTC Extensions Workshop as you might be able to tell from the content of this blog. Performing it with EWM 7.0.x provided several challenges. It became apparent that an update to the workshop would be beneficial.

I spent a considerable amount of time in the past two months updating the workshop. As a summary, the following items were addressed:

  1. Since the CCM server is shipped with the WebSphere Liberty profile, configuring the server for debugging needed to be changed. The old way to configure the server still worked in the 6.0.x versions, so this went unnoticed. With EWM 7.0.1 this is no longer the case, and the workshop was updated to address this.
  2. The advanced capabilities introduced in the EWM SCM system in 6.x and later versions caused a deviation from the screenshots showing the pending changes. The workshop setup tool was slightly changed to fix this.
  3. The workshop setup tool and its shell script have been tested with Linux and macOS.
  4. I wanted to add a section to Lab 1, explaining how to set up the existing Eclipse client/server development workspaces to better support development and debugging of the Plain Java Client Libraries. The new, last optional section addresses this. For this reason, Lab 1 of the workshop is a must for anyone intending to create Java based automation or extensions for RTC/EWM.
  5. I had an errata list with a number of small issues, typos, naming inconsistencies and the like that were fixed. During reviews a bunch more showed up and were fixed.
  6. A colleague ran the workshop on his Mac, so this works. Use whatever is available for the Mac, like Eclipse, and where something is not specifically available, use the Linux versions.

The RTC Extensions Workshop has been published with an additional section for the new EWM versions and is now available for download. I will update recent posts around the workshop in the next few days.

As always, I hope that this blog post helps the users in the Jazz Community.