JavaScript Attribute Customization – Where is the log file?


A lot of users try JavaScript-based attribute customization and run into issues. They then ask on the Jazz.net forum to get the issue solved. Unfortunately, the questions usually lack the information required to help. This post explains how to retrieve the log information needed to provide that context.

Where are the Script Log files?

JavaScript attribute customization scripts can use the console to log text messages into a log file:

console.log("My message");
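For context, here is a minimal sketch of how such a call typically sits inside a script-based value provider; the class name, the logged message and the returned value are purely illustrative:

dojo.provide("com.example.MyValueProvider");

(function() {
    dojo.declare("com.example.MyValueProvider", null, {

        // Called to compute the attribute value; logs a message on every call
        getValue: function(attribute, workItem, configuration) {
            console.log("MyValueProvider.getValue() was called");
            return "Some value";
        }
    });
})();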

The question is, where are the log files?

The script context

JavaScript attribute customization scripts are, as far as I can tell, run in one of the following contexts:

  1. The Eclipse Client
  2. The Web Browser
  3. The RTC Server

Depending on the context the script runs in, the log information can be found in a log file that is created and maintained by one of the following:

  1. The Eclipse Client
  2. The RTC Server

Please note that the logging information is not in the RTC Application log file CCM.log.

The Jazz.net Wiki entry about attribute customization provides hints about how to log data and how to debug scripts in the section Debugging Scripts. Similar information is provided in the Process Enactment Workshop for the Rational solution for Collaborative Lifecycle Management.

Unfortunately, both only explain how to find the server log information for Tomcat. Since WebSphere Application Server and WAS Liberty are also valid options, how can one find the log files in those cases?

Find The Eclipse Workspace Log

As background, note that the Eclipse Client as well as the RTC Server are based on Eclipse technology. This common technology is used to log the data and determines the log file location and name.

Each running Eclipse has a workspace location and stores metadata and log information in this workspace. The workspace is basically a folder in the file system. The metadata is stored in a sub folder with the name .metadata. The log file is in this folder and is named .log.

For the RTC Eclipse client and for scripts that run in this context, open the Eclipse workspace folder that is used and find the .log file in the .metadata folder.

For the RTC Server, the easiest way I have found to locate this workspace and the enclosed log file is to search for the folder .metadata. For Tomcat and WAS Liberty standard installs, go to the folder where the RTC Server was installed and then into the sub folder server. From there, search for the folder .metadata.

For WebSphere Application Server (WAS), go to the profile folder for the profile that includes the RTC server deployment and search there.
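On a Linux server, for example, such a search could look like the sketch below; the install path is an assumption and will differ in your environment:

# Search for Eclipse workspace metadata folders below the server folder
# (the install path /opt/IBM/JazzTeamServer is an assumed example)
find /opt/IBM/JazzTeamServer/server -type d -name ".metadata"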

Here is an example search for a test install based on WAS Liberty:

LogFileLocations_2016-06-17_13-10-14

Note that every Jazz application has its own Eclipse workspace with a metadata folder and workspace log file. The one interesting for RTC attribute customization is the workspace of the RTC server. The folder structure includes the context root of the Jazz application. Each application has a different context root, which typically matches the name prefix of the application war file. The RTC application typically has the context root and war file name prefix ccm. Open the workspace for this application and find the log file.

Looking Into the Log File

You can look into the log file. Please make sure to use a tool that does not block writing to the log file while you are browsing its content. The log file is kept open by the server while it is running, and blocking it from writing is not desirable. Use more or an editor that reads the file without locking it. For Windows users: Notepad does lock the file for writing; use a different tool such as Notepad++.
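On Linux, for example, something like the following works when run from the workspace folder found above:

# View the workspace log without blocking the server from writing to it
less .metadata/.log

# Or follow the log live while exercising the script
tail -f .metadata/.log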

The Process Enactment Workshop for the Rational solution for Collaborative Lifecycle Management provides some examples of how log entries look and how they can be created. If you can’t find the log entry you are looking for, always check the server log as well. Maybe the script runs in a different context than you expect.

Here is an example of log entries:

Log_Examples_2016-06-17_14-58-43

Load Errors

It might happen that an expected log entry is not found in any of the log files. In this case, make sure to check for script loading errors as well as thrown exceptions at the time the script was supposed to run.

Load errors can have several causes.

One cause can be that attachment scripts are not enabled. These days there are enough indicators in the attribute customization editor in Eclipse that a user should be able to spot this.

Another cause can be that the script is syntactically incorrect and cannot be interpreted as valid JavaScript. One reason for a script not being recognizable as valid that I have seen recently is an incorrect encoding. If an external editor is used to edit the script and the script is then loaded from the file, make sure that the script has a correct UTF-8 encoding. If in doubt, change the encoding to UTF-8 and reload the script.

Why would the encoding be important? The encoding controls how the file content is interpreted. It is hard to determine the encoding from a file, and it is often not checked. Expecting a specific encoding but loading a file that was saved in a different one can lead to unexpected content. This can cause the JavaScript not to be recognized as JavaScript, so the load fails.

Debugging vs. Logging

Using the debugging techniques explained in the Wiki entry in the section Debugging Scripts and in the Process Enactment Workshop for the Rational solution for Collaborative Lifecycle Management should be the preferred option and is usually more effective.

Looking at the logs is still a valid option, especially for logging execution times, for finding script loading issues, and for scripts that run in the background in the server context, such as conditions.

I found using the Chrome browser and its built-in Developer Tools to be most effective. The scripts can easily be found in the Sources tab under the node (no domain). Make sure to enable the debug mode as explained here: Debugging Scripts.

JavaScript_Debug_Chrome_2016-06-17_14-24-43

Summary

This post explains how to find the log files that contain log information written by JavaScript attribute customization scripts. I hope that this helps users out there and makes their work a little bit easier.


Raspberry Pi Unleashed – Setup the GrovePi+


This is the next post in my series around the Raspberry Pi and the Internet of Things. To talk about the Internet of Things, our Raspberry Pi should actually do something and provide some data or a service. In this post we will set up the Raspberry Pi with the GrovePi+ I/O board and sensors. The Raspberry Pi will also be referred to as RPi in the posts.

Related Posts

  1. Raspberry Pi Unleashed – Getting Started with the Internet of Things and the Raspberry Pi
  2. Raspberry Pi Unleashed – Setup the GrovePi+ – this post

Start All Over

I had already set up the GrovePi+ once on a NOOBS-based Raspbian Jessie full desktop image for testing. This was just to understand the process a bit and get used to the RPi. To be able to blog about it, I wanted to go through this process again. So I decided to start with Raspbian Jessie Lite, a minimal image based on Debian Jessie. Devices in the Internet of Things are probably usually only set up with the essential software they need to function, so I was curious what I needed to do to get this up and running.

Initial Install

So I started all over again.

After power up, the Raspberry Pi started without any issues and I was able to log in using the keyboard and my monitor. There is only a shell available in the new image.

PIE_With_Peripherals

Install Without a Display and Keyboard

It should be possible to avoid having to use a display and peripherals on the RPi to set it up and to only use the network connection. The best approach would be to use an image instead of the NOOBS distribution. It is necessary to connect to the RPi once it is up, which requires being able to determine its IP address. In a local environment, you might have access to the router and be able to determine and even set the RPi’s IP address. In other environments, follow the tips in this blog to find the IP address using special tools such as nmap or Zenmap.

Once you have the IP address, use an SSH shell connection (e.g. using Putty on Windows) to the RPi. This allows copy and paste as well as screenshots to create documentation.

Use a SSH Shell Connection

The big display is more useful as a secondary display for the laptop. I set up my network so that the RPi always gets the same IP address for Ethernet and WLAN. At this point Ethernet was working, which allowed me to connect to the RPi with SSH and to log in.
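From a Linux or macOS terminal this is a one-liner; the IP address below is just an example and pi is the default user of the Raspbian image:

# Connect to the RPi over SSH (replace the IP address with the one of your RPi)
ssh pi@192.168.1.42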

Connected_1

This shell is going to be used to set up the system and add all needed packages.

Install GIT

From my prior experiments I knew that I would need GIT to be able to set up the GrovePi+. Raspbian Jessie Lite does not have GIT installed – I tried running GIT from the shell, and the program was not found. A quick search on the internet shows that this can be changed easily by running

sudo apt-get install git-core

The image loaded all needed packages and installed GIT. I had to press ‘y’ to perform the install process.

Install the GrovePi software

Shutdown and Switch Off

It is necessary to shut down the system before switching it off. Run

sudo shutdown

This initiates the shutdown. The connection will be lost after some time and then it is safe to switch the power off.
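If you prefer the system to halt immediately instead of waiting for the default delay, a variant like the following can be used:

# Halt the system immediately
sudo shutdown -h now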

Connect GrovePi+ Board

Make sure the Raspberry Pi is shut down and the power is off. Unfold the first section Connect the GrovePi+ to the Raspberry Pi in the description to connect your GrovePi+ to the Raspberry Pi.

Setting Up The Software for the GrovePi+

Make sure everything is nicely connected and start the RPi again. Connect to the RPi and log in.

To run the GrovePi+ it is necessary to install some additional software. The manufacturer provides this Getting Started description for the steps. I found this a bit confusing, as some of the steps refer to a special Raspbian image I don’t have here. The following steps worked for me.

Follow this link to set up the software for the GrovePi+. The link can be found on the right side of the Getting Started description.

The original description uses sub folders in the folder /home/pi/Desktop to install this software. Since there is no UI and thus no desktop the following steps install the necessary software into sub folders of the folder /home/pi/.

The first step is to download the software and drivers using GIT.

cd /home/pi/
sudo git clone https://github.com/DexterInd/GrovePi.git

It will take a short while to download the software. Once the data is available locally in the folder GrovePi, it can be installed by running

cd /home/pi/GrovePi/Script
sudo chmod +x install.sh
sudo ./install.sh

Please note that all the executable and shell scripts in the folder tree cloned with GIT are missing the executable permission, so running a script always requires setting the executable permission first; this is a recurring pattern.

After the install has been performed, the system will automatically reboot unless this is prevented. In any case, if you haven’t already connected the GrovePi+ board, shut down the Raspberry Pi, switch off the power and install the GrovePi+ board.

To test if the GrovePi+ board is available you can run

sudo i2cdetect -y 1

The result should look like the image below:

Test_GrovePi

If you can see a "04" in the output, the Raspberry Pi is able to detect the GrovePi+ board. The command checks the I2C ports; I2C is a connection standard that is often used as an interface for sensors and devices. You can find the technical details and a tutorial for the GrovePi here. This includes the information about the available connection ports here.

If you followed my advice to go for option 2 in the last post and bought an LED for the GrovePi+, you can test that now as described in step 10 in this link to set up the software for the GrovePi.

If you followed Tim’s shopping list you can use the Grove Barometer for testing.

If you haven’t connected your sensor or device to your GrovePi+ yet, you should shut down and power off the RPi first. It is always a good idea to power off before connecting something to hardware, unless stated otherwise.

Look up the connection method for your sensor or device. Connect the device to the port required.

Example: the Grove Barometer sensor details for the RPi are described here in the Wiki. Look at the GrovePi+ port description and look up an I2C port. Connect the Grove Barometer Sensor to the chosen port.

Power up the RPi, open a connection and log in.

Example: the Grove Barometer Sensor. Make the sensor’s Python example script executable.

# Make the high accuracy barometer sensor example script executable
cd /home/pi/GrovePi/Software/Python/grove_barometer_sensors/high_accuracy_hp206c_barometer 
sudo chmod +x high_accuracy_barometer_example.py

Once this has been performed successfully, run the example script to read the sensor as shown below:

# Read high accuracy barometer sensor using the example script
cd /home/pi/GrovePi/Software/Python/grove_barometer_sensors/high_accuracy_hp206c_barometer 
sudo ./high_accuracy_barometer_example.py

You should see something similar to the image below.

Barometer_Sensor_Test

The sensor was successfully read and you can now go ahead and use the GrovePi+ board and the I/O devices you purchased for it. If you run into issues, you might want to upgrade the firmware for the GrovePi+ if you have not already done so.

Firmware Update for the GrovePi+

It is always a good idea to make sure your peripherals have the latest firmware. The procedure to upgrade the firmware for the GrovePi+ is described here.

Essentially you perform the steps below

# Firmware update 
cd /home/pi/GrovePi/Firmware
sudo chmod +x firmware_update.sh
sudo ./firmware_update.sh

I ended up having problems with the first NOOBS Raspbian image I used: the described procedure was unable to locate the GrovePi+ board. The sensor board was not recognized and the update never started. A blog entry hinted at using the Raspbian_For_Robots update scripts instead, which worked.

# Firmware update Raspbian_For_Robots
cd /home/pi/
sudo git clone https://github.com/DexterInd/Raspbian_For_Robots.git

cd /home/pi/Raspbian_For_Robots/upd_script
sudo chmod +x update_GrovePi.sh
sudo ./update_GrovePi.sh

sudo chmod +x update_GrovePi_Firmware.sh
sudo ./update_GrovePi_Firmware.sh

Enable the Wireless Connection

After getting rid of the dedicated monitor, keyboard and mouse for the Raspberry Pi, it was time to see how to enable the wireless connection with this image. A search on the internet turns up this Wiki page on how to set up WIFI in Raspbian using the command line.

The steps are simple, assuming there is an Edimax Wi-Fi USB Adapter Nano Size or another supported WIFI USB adapter connected to the RPi. Start a detection run to find the available networks like this:

# Detect networks
sudo iwlist wlan0 scan

You should get a list of networks with details such as the ESSID and the authentication used, similar to below:

Wireless

As described in WIFI in Raspbian using the command line, edit the configuration file and add the required information.

# Edit WIFI configuration file
sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

Modify the file as described. My result looked like below: I changed the country code and added the network that should be chosen.

Wireless_Config
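For reference, a minimal sketch of the kind of entries involved in wpa_supplicant.conf; the country code, SSID and passphrase below are placeholders for your own values:

# Entries in /etc/wpa_supplicant/wpa_supplicant.conf (values are placeholders)
country=DE

network={
    ssid="MyHomeNetwork"
    psk="MySecretPassphrase"
}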

After saving the configuration changes and rebooting (the other suggested measures did not work for me) my Raspberry Pi was connected to the wireless network.

Please note: like for the Ethernet connection, I configured my VDSL router/wireless network to provide the same IP address to the Raspberry Pi. This allows using a remote connection reliably in my network. Otherwise I would have needed DNS or some other name resolution to be able to reliably find the IP address of my Raspberry Pi to open the wireless SSH connection.

Up and Running

All the important parts are now up and running on the device and most of the periphery that made it look like a desktop computer is gone. Now it is ready to join the Internet of Things.

PiWireless

Summary

Now the Raspberry Pi is configured to use the GrovePi+ I/O board and the sensors. The driver software is available.

This was actually really easy to do and the documentation I found was good enough to get me going quickly. This is no comparison to bringing up a custom RTOS on a custom board, where you might even have to create your own device drivers to get the system working. I am not sure if Python scripts can be debugged, and I am not yet sure if it is possible to set up a cross-development and debugging environment on my laptop to develop and deploy on the RPi; that remains to be seen.

I will now have a look at how to connect to Bluemix to provide some data/service and try to get my thing into the internet…… Hmmmm, that does not sound right.


Raspberry Pi Unleashed


This is not the usual RTC API post although it has something to do with RTC in the long run, I hope. I have been tinkering and found some interesting things I wanted to share.

Since the beginning of the year I have been a part of what is now the IBM Watson Internet of Things (IoT) business unit, working in the Unleash the Labs team. Like my colleague Tim, I am currently involved with our CLM solution, which basically provides development teams with the required capabilities to plan, develop, build and test.

Internet of Things

Internet of Things means that more and more devices interact, provide and share data on the internet. This basically means these devices will have more and more software, sensors, processing and network capabilities. The software as well as the devices need to be planned, developed, integrated, run and maintained. That is basically what our CLM solution is for. Interaction also means making the data available and providing services based on it. Bluemix is a cloud platform that allows doing this.

It is not obvious to everyone, but in the last 20 years software has crept into almost all devices. From airplane to electrical toothbrush, software is embedded everywhere and provides more and more of the added value to make the difference over the competition.

In the past, the software embedded in most products usually worked isolated from its environment: it did what it was made for, maybe used sensors and motors to interact and control, but most of the data stayed inside the system. The devices were often not connected and rarely provided data outside of the system. For a long time, sensors in cars were added for specific subsystems and their data was rarely available to or shared with other subsystems.

Since connectivity to the Internet, even mobile, has become reasonably cheap, more and more products can now also be connected to the Internet. Products in the past have usually been isolated and only used the data directly available to them. With reliable internet connections, products can also provide live data or use data provided somewhere else. The potential benefits for the user typically come from integrating the device data with other data or services available on the internet. It is nice that your runner’s watch can record your GPS position. The real benefit is to be able to see the pace data on a map and to understand how you improved over time. So the next value chain will be in integrating multiple systems and sharing data. Devices will become chatty and integrated into the Internet of Things.

Today, Jet engines of huge passenger airplanes constantly report their status over satellite and other connections and the company that built and maintains them uses this data to plan maintenance and detect possible issues before they become an expensive problem.

Similarly, today your toothbrush might be able to tell you that you are pressing it too hard or not hard enough onto your teeth. Pretty soon it might be able to talk to your health insurance company about your tooth brushing habits and, together with the data of your runner’s watch, hopefully get you a rebate.

There are a lot of new business models, benefits and services waiting to be found and implemented using the Internet of Things.

This can be great or not so great, so as a user it might be a good idea to carefully check which data you want to share, who benefits from that data and who can see or use it. As the borders between local devices and data and the internet are getting thinner, it can be hard to even judge who could access which data and what data you share. If your local reality and the devices you rely upon are so interwoven with the internet, it might also be good to consider that this makes your devices and infrastructure vulnerable to breakdowns, errors and attacks. It is, to some extent, up to us users how the devices, services, data sharing and usage will look in the future and how dependent we are on it. The truth is, we are heading in the direction of the Internet of Things today, and fast.

Reading

Unlike in George Orwell’s novel Nineteen Eighty-Four, we are actually paying for the cameras, microphones and network connection ourselves, rather happily, and carry them around with GPS tracking too! 8). It is worth knowing this book. With its content in mind, consider some of the measures and desires of governments, intelligence and security services performed or discussed in the past years.

Another author that discusses the consequences of these developments in his books is Daniel Suarez. Check the novels Daemon, Freedom and Kill Decision. Also check other SF, especially the Cyberpunk genre.

Obviously most of the literature above shows the bad side of the possibilities. I have read most of the books above, so I can talk about them at least a bit. I will try to create a reading list here if I come across good books or get suggestions from my peers. Any suggestion in the comments will be welcome, as well.

Our technology is even more advanced than a lot of these authors expected it to be. And it is worth having and extending it, as well.

Anyone who has used an app that helps managing travel and connections does not want to miss that anymore. There is also a lot of potential to improve the value of this information and connectivity. Here in Europe we have something called “Public Transportation” in most of the areas, not only in the metropolitan regions. Bus, subway, train – it is a great system. The software, however, that is supposed to help me with public transportation in my region is suboptimal, to say it nicely. Basics such as access to favorites, clearing input fields or searching for hubs based on the current position are non-existent or hard to find. Frustrating. It does not require a genius to find a better design, I think. Today a lot of good ideas also suffer from inaccessibility of information. For example, travel apps are often not allowed to use data available for regional/city travel; the local company owns the IP for that data. Even if there were a better system across the country or Europe, the IP of the data prevents its more global usage and success.

My Background

As a student I was involved in the development of embedded software for print products.

Back then, embedded software development used to be challenging. The electronics were usually custom designed around a special CPU. Memory, usually static RAM, was expensive and scarce. DRAM was usually not supported in these embedded devices. There were all kinds of development environments such as cross compilers and real time operating systems (RTOS) provided by specialized companies or developed in-house. If you were lucky you had a debugger. Debuggers for embedded devices often required special and expensive hardware to support them. Reasons were that there was no standard connection available that could be used, and the available resources were an issue too. Also keep in mind that embedded systems often control machines. You can’t just set a breakpoint in the control code of a continuously running machine without potentially breaking the controlled process. So often you could only debug by printf().

Later, in the last years before I joined IBM Rational in 2001, I was involved in developing the Nexpress 2100 printer. The Nexpress 2100 was a system of systems with multiple CPUs and custom I/O electronics coordinating motors, chargers, heaters and other electrical devices over a network, also communicating with other devices that provide the printing data. I found some pictures and videos that show it in action. It was a huge machine with its own environment and air conditioning system and loads of moving parts. It was a very interesting task, and we at least had chosen to use various tools that supported debugging the system.

Still, setting up the development environment and bringing up your CPU board with the RTOS of your choice was often a challenge in itself. Running a full UNIX system on an embedded device was not an option; the processors simply did not have the performance or the resources for it.

Back then, it was also pretty unlikely that you could do anything without electronics design support to build the boards and get the sensors and actuators connected.

Raspberry Pi Unleashed

So back in the day it was pretty hard to get embedded systems to work. But when I saw Tim’s post “My first foray into IBM Internet of Things Foundation” I thought that I wanted to refresh my experience, play around with something like this and see how things have progressed over the last 15 years. I always need to do some real work to learn how things work (one of the reasons I create working examples for the RTC API). So I decided to get stuck in.

The Plan

The idea for this series of posts is to

  1. Talk about the Internet of Things
  2. Get a small device up and running and share the experience and learning
  3. Show some interesting things that one can do with this kind of device
  4. Get a development environment up and running that includes RTC/CLM to develop software for such a device on my laptop using a cross compiler
  5. Get the GrovePi+ up and running in a Docker image and connected to Bluemix

Related Posts

  1. This post
  2. Raspberry Pi Unleashed – Setup the GrovePi+

Shopping Options and Considerations

So I ordered a Raspberry Pi. The Raspberry Pi will also be referred to as RPi in the posts. I basically followed Tim’s shopping list.

In hindsight, if someone wants to get into this, I would suggest one of the following options:

  1. Only order the Raspberry Pi and get an LED for the GPIO
  2. Get Tim’s shopping list with the GrovePi+ but add at least one LED and possibly some of the cool additional sensor and output devices
  3. Go bananas and build a RPi Cluster for a Docker Swarm

The robots are also tempting!

Option 1 basically allows you to more or less stay away from the hardware aspect, or only touch it by blinking the GPIO-based LED. This is the least expensive start. If you find it interesting, you can always add more later.

Option 2 with the additional LED allows you to follow the GrovePi+ setup and see the LED work very early. Additional input and output devices are certainly fun and there are interesting choices such as GPS and motion sensors! I was able to set up the GrovePi+ using the Python examples and the Grove – Barometer (High-Accuracy) as well, but a flashing LED is probably more impressive.

Option 3 is probably a bit odd, but maybe fun! See the section Docker RPiCloud below for more information.

It is necessary to have a card reader/writer that can be used to write the initial operating system to the micro SD card. If your computer does not have one built in, there are small cheap devices with USB connector available for the various common types of laptops and desktops.

8GB for the micro SD card should be enough. It makes sense to buy two or more micro SD cards rather than a bigger one. Multiple SD cards allow having different setups for the RPi that can be changed quickly.

After playing around with the RPi for a while, I think it makes sense to have some kind of case that protects the device and prevents short circuits. It has to be able to contain the GrovePi+ if you use it.

Although it is not strictly needed, it is a good idea to have a monitor or TV set with HDMI input and cable and a USB mouse and keyboard available. This makes it easier to play around with the Raspberry Pi to get started. There are multiple options to get the Raspberry Pi up and running the first time. Some require a mouse and keyboard directly connected to the RPi for the first steps, e.g. to see and monitor the first boot process, choose the operating system to be installed and set some defaults for the RPi.

When I started, I used an old USB keyboard but found the cable irritating and pulling at the RPi. I had an old mouse that uses a wireless USB connector that also allows hooking up a keyboard. The “unifying” interface has been stable for some years now, and I ended up buying a small keyboard in addition to the existing mouse, similar to this combo. This is ideal as I can move these devices out of the way if they are not needed.

Wireless network is not necessarily supported by all packages or configured when bringing the RPi up the first time, so an Ethernet cable is important for the first steps. In any case you need to be able to find the IP address of the Raspberry Pi. So having access to the DSL or cable modem router is a bonus and helps identifying this IP address.

If possible, I would suggest setting up your router to always provide the same IP address to the Raspberry Pi. In this scenario you can use the stable IP address of the RPi to establish a remote connection and can work without a connected display or TV set. If this is impossible, it is necessary to use special tools such as nmap or Zenmap to identify the IP address of the Raspberry Pi to establish the remote connection.

Getting Started with the Raspberry Pi

Follow the Quick Start Guide for the detailed setup information. This is just a short summary of the steps you have to do to get started with the Raspberry Pi, to give you an impression how quick and easy that actually is.

The first step would be to bring the Raspberry Pi up the first time. There are small differences depending on the operating system you are using.  The general steps are pretty much the same, but the number of auxiliary tools needed might be different.

  1. Download the Operating System (Raspbian aka Jessie) image / OS Setup tool (NOOBS) or other images for the Raspberry Pi
  2. Download the format program for the SD card
  3. Download a tool to write an image ISO file to the SD card
  4. Download remote connection tools if needed e.g. to be able to use SSH to connect to the RPi
  5. Put the Operating System for the Raspberry Pi on the SD card, e.g. for NOOBS or if you use an image
  6. Insert the SD card into the Raspberry Pi
  7. Connect network and peripherals to the Raspberry Pi
  8. Power up the Raspberry Pi
  9. If you use NOOBS, choose the OSes to install on the Raspberry Pi

That is all for now. You are done. There is a full blown operating system on your Raspberry Pi. Most likely a Linux based operating system, but not necessarily. There are other choices available. NOOBS has Windows 10 IoT available as well.

You can now work with the keyboard and mouse on the HDMI connected screen.

Remote Connection

Or you use an SSH client such as Putty, or whatever is built into your operating system, to connect to the Raspberry Pi. Most of the additional work I did was using an SSH shell from my laptop. The reason is that I then have two screens, and copy and paste as well as documenting with screenshots is so much easier. Make sure that SSH is installed and enabled; check the description of your image for hints on how you should connect.

First impressions

In comparison to 15 to 20 years ago, this process is so easy that anyone should be able to do it. The other aspect is that the Raspberry Pi is very affordable. The Raspberry Pi comes with images that directly support a media library or device to display videos or other media on a TV set, so there is some immediate purpose it can serve. In addition it provides you with a platform that can be used to develop and run applications. Linux has all the needed editors and compilers available and other language choices are possible. The system can act as a server as well as a client, depending on what is needed.

I am quite impressed. There is also infrastructure available to support class room use and a lot of example projects, YouTube videos and companies providing additional devices.

The image below shows my RPi with its peripherals. It is connected to Ethernet, but wireless is functional and could be used as well. The keyboard and the mouse are connected wirelessly. The Raspberry Pi is connected to a monitor and runs X-Windows. There are various flavors of operating systems and images available; shown here is a full blown Raspbian which can be used as a desktop with keyboard and mouse.

PIE_With_Peripherals

You can install media library and home automation support in addition, or get images pre-configured for media library and home automation services. There are other options available such as Windows 10 IoT and more if you search the internet.

You can also get, for example, a Raspbian Lite image which is stripped down to a minimal footprint and which you can then extend with what you need. It still supports access using mouse, keyboard and monitor, but only boots up into a terminal mode. It is not necessary to have a mouse, keyboard and monitor/TV connected; it is possible and sometimes easier to use a remote connection.

If I have to follow more complex tasks from a description to set up something, I usually don’t use the keyboard and mouse directly connected to the Raspberry Pi. I rather use an SSH shell connection (e.g. using Putty on Windows) to the RPi. This allows copy and paste as well as screenshots to create documentation. The image below shows a connection to the system above using Putty.

SSH_Connection

The default user for the Raspbian image above is pi, the password is raspberry. Consider your keyboard layout: depending on how you connect and what keyboard settings you have, the ‘y’ key could actually map to ‘z’.

If there is no need for a full blown operating system and you rather want to do some hardware-related work, have a look at the Arduino. It is cheaper and has a stronger hardware control focus. The Raspberry Pi can control hardware as well using the GPIO, but it is more expensive and has more overhead in development and OS.

Docker RPiCloud

In the first days after I received the Raspberry Pi I did not, due to shipping, yet have the GrovePi+ board and sensors. So I looked around on the internet for what you could do with a plain Raspberry Pi. I ran into this blog from Hypriot that talked about running Docker on the Raspberry Pi.

I had actually looked into Docker recently, so I decided I’d try it out. It really worked very well for me. If you follow this blog, you end up with a system that has a Docker host and a Docker daemon running on the Raspberry Pi and can run Docker images on that RPi. Here is a reference to the Docker architecture.

You have to keep in mind that Docker is not a full blown virtualization. That makes Docker images dependent on the architecture of the Docker host. To run Docker images on a Raspberry Pi, you have to provide them for the ARM platform. The blog authors already ported Docker and provide several images for the Raspberry Pi.

And there is more. Several people have created custom Raspberry Pi clusters. There is also a company that provides sets to build Raspberry Pi Pico clusters with 3 to 100 nodes. You can run Docker swarms on these clusters.

That is incredible: when I did my diploma thesis working on a parallel computing system based on Transputers, 32 nodes was some kind of supercomputer. I have to assume that in comparison to the Raspberry Pi, Transputers are actually not that fast anymore. So it is possible to set up a small “supercomputer” to explore parallel computing for a reasonable price and put it on your table.

So, if I have time, I will try to create a Docker image that contains the software required to run against Bluemix and has the GrovePi+ and the sensors configured. I am curious what happens if one tries to run multiple containers. As long as the sensors are only read, I assume everything is going to be OK. But we shall see.

Summary

The experience with the Raspberry Pi was very different from my experience with embedded development in the past. If you had more than a terminal on your embedded device, you were lucky.

I haven’t yet tried debugging or setting up a cross development environment on my laptop, but I am looking forward to that too. Getting the GrovePi+ I/O board and the sensors up and running will be the next challenge.

Stay tuned, if you are interested in the Internet of Things. If you like to tinker with it yourself, get started. It is very easy to approach these days and there are a lot of interesting example projects out there you could follow.

The next post will talk about bringing up the GrovePi+ and more tinkering. I don’t yet know the details.


Alien Skies II – JavaScript Yet Another Computer Language?


I tried to look into Web UI development, especially in the context of RTC extensions like dashboards and the like, several times in the past. I couldn’t get it to fly. It was a total disaster so far. I couldn’t understand how all of that works and what the fuss was all about. The more I looked into it, the more confused I was.

Surprisingly, looking into Doors Next Extensions seems to have revealed the missing link(s) I needed to better understand the whole topic around JavaScript. I have talked to other people who had similar issues with JavaScript, so I hope it is worth sharing my thoughts here.

I am well aware that the topic and my approach to it could be very controversial. If you would like to disagree with any of my assessments, please feel free to do so in the comments. I rarely dismiss comments, but please be aware that the comments in this blog are moderated xD.

I have a book about JavaScript, but I rarely see JavaScript like it is described in that book. So, how does this JavaScript stuff work?

It’s Not The Language

As far as I can tell, the language itself does not really matter that much. The syntax of JavaScript is very similar to that of other languages such as Java. The language has inheritance mechanisms and can support event-based, asynchronously processed and functional aspects. All in all, a collection of properties that are shared across several language families.

There are some issues with the language (at least from my, still uneducated, point of view). It is not type safe, to begin with. While this is very flexible, it is also asking for trouble in my experience. JavaScript is very dynamic and does not get compiled. A lot of issues will only be seen at run time and not during coding.

Anyway, having a common language with a common syntax and semantics is probably useful if you have to switch domains often. At least you don’t have to learn a new language over and over.

It’s Not The Development Environment

I am missing good library handling and context-sensitive editor support. There might be some cool development environment out there, but it has eluded me so far. Any suggestions are welcome, as always.

I have seen the first computers making their appearance in schools, with enormous keyboards and two lines of 40 characters as a display – nothing one would associate with a computer today. I was lucky enough to be able to write my first own programs on the Commodore PET, VIC-20 and 64 and on the first IBM PCs (ignoring the first program on a Texas Instruments calculator with 32 programmable steps). I have developed without a debugger, because debuggers were either not available or not affordable; you could buy in-circuit debuggers for embedded systems, but that was sometimes unaffordable even for companies back then.

I have seen bad development environments and debugging by printf. It is not that bad with JavaScript in browser-based applications. Whether there is a good debugging environment available or not basically depends on the use case.

Feeding The Confusion

I am usually pretty good at spotting patterns. Looking into the JavaScript domain, I had a hard time finding the basic ones. A lot of the confusion I felt is, in hindsight, probably related to the fact that there is not just one JavaScript. There are multiple, and they are used in different ways and with different patterns. In addition, the expression JavaScript is often used in contexts where it would probably be better to talk about Web or JavaScript libraries or frameworks instead.

  1. There is the JavaScript that runs in browsers. This JavaScript usually works together with CSS and HTML. It is used to create dynamic content and presentations on the Web, to be able to provide more or less usable UIs in a browser that was designed to show more or less static HTML documents. The JavaScript engine is built into and shipped with the browser, and there are different engines and development environments for different browsers. This type of JavaScript basically requires understanding how JavaScript, HTML and CSS work in combination and how to control their interaction.
  2. Browser-based JavaScript is integrated into several different frameworks and toolkits such as Ajax/Dojo and others.
  3. There is NodeJS, which claims to be JavaScript and actually is server-side JavaScript processing plus a ton of libraries and frameworks that are available for it. The only common part I can identify here is the language that is used. I did not look into the details, however, so maybe there is more; e.g. the run time used could be one of the common ones.

So when I started to look at the Doors Next extensions, which said JavaScript, I quickly realized that there were constructs in the examples that I had never seen before in any JavaScript example. The normal pattern again: every time I looked at something with JavaScript inside and thought I had some basic knowledge already, I saw something different. This time, however, I was able to figure out what I was missing, and this seems to help with all the other cases as well.

The Doors Next Extension Mechanism

The Doors Next extension mechanism is based on JavaScript, HTML and CSS. The API supports TypeScript, which provides a more type-safe and compiler-supported way of developing JavaScript. The Doors Next API itself comes with a TypeScript interface definition. Some of the examples that are provided use jQuery, a JavaScript-based library and framework. The latter means that the extensions can look a bit uncommon.

Uncommon Common

To some extent this seems to be fairly typical of my experience with JavaScript. Almost every solution I have seen has some common part that can be found in other solutions. But looking across all the examples there is only a tiny common core, which often is basic JavaScript functions. If the JavaScript-based solution deals with UIs, there is usually the common CSS and HTML; if it does not focus on UIs, this is missing. Since there are so many frameworks and libraries, JavaScript-based solutions can look very different, and it is important to understand the specific frameworks and resources that are used in a given example.

Summary

Granted, it took a while for me to come to terms with JavaScript, but looking into the Doors Next Generation extension mechanism provided me with some important insights. This will help me to understand these technologies in other contexts as well. I am looking forward to playing with Doors Next extensions and adding the results to this blog.


Alien Skies – Peeking Into Doors Next Extensibility


I started a recon mission into the alien territory of Doors Next extensions recently and almost lost my way.

Doors Next is one of the products I am responsible for. I have some experience with requirements management and I understand Doors Next usually well enough to find my way, but I am way more familiar with RTC, especially since I did a lot with its Java API and extensions. So I thought that if this is similar to the situation with RTC, capabilities to extend Doors Next will be in high demand soon and I had better get my hands dirty and into it. Since the language available to extend Doors Next is JavaScript, this also looked like a good opportunity to get some more exposure to JavaScript.

First Steps – Look into the documentation

So I browsed through the documentation a bit to learn what there is to learn! Unlike me, please start by reading the most recent version of the documentation. This will save you detours like looking at WAS Liberty Profile and publishing data. I followed the guidance and played around with the simple examples and the more complex ones provided. I became adventurous and wanted to modify the Hello World open social gadget example and create one that would be published in the widget catalog along with the other examples. The example already worked as an open social gadget and I thought I could use it as a widget as well.

That did not work so well. Why? With the same XML code, the open social gadget displayed its content while the dashboard widget showed blank content.

The Widget

This is how the basic widget looks. All the rest, such as CSS, JavaScript and the like, can be added based on this structure.

DashboardWidget

The Open Social Gadget

This is how the open social gadget looks. I tried for a long time to spot what the difference was, until I finally noticed that the difference between a working gadget and a working widget is the <html></html> tag that the open social gadget has and which is missing in the widget.

OpenSocialGadget
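For illustration, here is a rough sketch of the structure of such an open social gadget definition, based on my understanding; the title and the body content are placeholders, and the <html></html> tags inside the content are the part that made the difference for me:

<?xml version="1.0" encoding="UTF-8"?>
<Module>
  <ModulePrefs title="Hello World Gadget" />
  <Content type="html">
    <![CDATA[
      <html>
        <body>Hello World</body>
      </html>
    ]]>
  </Content>
</Module>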

It took me quite some time to figure out what the problem was. Embarrassing, but on the other hand it shows again that tool extensions, even the smallest, are complex business. Maybe there are better ways to debug these kinds of issues. Compared to RTC Java extensions, I felt like I was back in time, trying to debug C and Assembler with printf.

See this site for some additional introduction into open social gadgets.

Deployment Structure

Similar to my description in Publishing XML and Other Data With the WAS Liberty Profile, the extensions were deployed in the apps folder like this:

WidgetDeployment

Compared to RTC extensions, this is a relatively simple and accessible structure. As mentioned by Guido, in an enterprise context it would make sense to have your own web server to publish the extensions. This would allow more fine-grained control over who can change what.

It would be possible to use an RTC build workspace to publish these extensions. I think there is a need to come up with some naming pattern for the extensions, in order to avoid chaos and confusion over time.

This has been challenging in the past and I assume it will stay challenging. The structure above has a css folder, as well as a JavaScript folder just in case. There is also a need for some more documentation and the like.

Summary

Doors Next has means to extend its capabilities. Provided some fundamental knowledge, this should allow for a lot of the automation needed in enterprise environments. My first steps into this alien territory were a bit shaky, but all in all, I think I now have the basic understanding of where and how to get started with more complex extensions.


Publishing XML and Other Data With the WAS Liberty Profile


Since CLM now ships the WebSphere Application Server Liberty Profile as default, you might want to be able to publish XML and other data similar to the easy way that was available in Tomcat, by just using the application server that is available. This avoids having to set up an additional Web or application server.

Problem

Sometimes it is necessary to provide access to some files using the HTTP/HTTPS protocol.

This is interesting for the Process Enactment Workshop, or if you want to publish your own XML data to be used with HTTP Filtered Value Sets, host custom Doors Next Extensions, or if you want to publish Build results.

It is convenient if there is already an application server available to host these files and make them accessible, instead of having to set up a web server. This especially holds if it is necessary to provide the data using HTTPS and not HTTP, because browsers these days refuse to process content that is mixed from HTTP and HTTPS sources. The latter is important for Doors Next Generation extensions. Not having to set up a new server also avoids having to create a trusted certificate for it. It removes an additional failure point, reduces backup and maintenance effort, and provides URI stability as well, since you don’t want to change the URI of the hosted CLM applications anyway.

Tomcat

With Tomcat, it was relatively easy to do that by just adding a folder under \tomcat\webapps and placing your files there. Assuming the folder name is myfiles, Tomcat then serves the files under the public URI https://serverpath:port/myfiles/fileName.

Another way was to simply drop the file into the tomcat/webapps/ROOT/ or a sub folder, which has the same effect.

Is there a similar way to achieve this with the WAS Liberty Profile shipped with CLM?

Example Scenario

Let’s assume, like in the scenario explained in Publish and Host XML Data Using Tomcat – The Easy Way, we want to publish a file makers.xml to be accessible with a context root PEWEnactmentData on the server with the public URI root https://clm.example.com:9443/.

The expected outcome is to be able to access the XML file content using the URL https://clm.example.com:9443/PEWEnactmentData/makers.xml.

Solutions

There are two relatively simple solutions for WAS Liberty. The solutions below are based on information from Lars, one of our WebSphere experts, and the IBM WebSphere Application Server Liberty Profile Guide for Developers.

For WAS Liberty Profile, all solutions require to deploy an application. There are several ways to create and deploy an application in WAS Liberty Profile.

  1. Deploy an application in some folder and declare the context root and location of the application in the server.xml or specific location XML file
  2. Deploy an application in the dropins folder; for this to work, it is necessary to make sure the application server has the dropins monitoring enabled

Deploying an application in this context means

  1. Create a folder with some content in a location (or copy the folder into the location)
  2. Deploy a compressed folder with some content

Please note: Any changes to any of the server configuration XML files are automatically picked up by WAS Liberty Profile. There is no need to reboot the server for the steps below to work.

1. Deploying an Application – Standard

The trick in this solution is to pretend to deploy an uncompressed web application such as a WAR file. It is obviously possible to create a real WAR file or the structure that is contained in it, but that is not really necessary. See the last section here if you are interested in deploying the application as a compressed WAR file.

1.1 Deploying the Application

The only thing necessary to trick the WAS Liberty Profile into making the data available with a correct context root is to create a folder with the context root as name and an extension .war, and to make the application location known in an XML file.

In the default setup of WAS Liberty Profile with CLM, after the first launch, all installed CLM applications end up in the folder <Install Folder>\server\liberty\servers\clm\apps. The screen shot below shows this structure.

FolderStructure

The applications are deployed as compressed WAR files and have been extracted to folders with the name of the application. As an example, the CCM application (RTC) came shipped as ccm.war.zip and was extracted to a folder ccm.war. The context root of the CCM application is ccm, which is basically the folder name without the extension .war. The same pattern holds for all the other applications.

It is possible to create a folder to host the files to be published. The folder should be named <context root>.war to result in an application with the desired context root.

Files within this folder will be available with their file name.

A file in that folder with the name <filename> will be accessible as <public URI root>/<context root>/<filename>.

If the files are within sub folders, the folder path within the root folder and the file name compose the file name part of the URL. For example, if the root folder has a sub folder named myfiles with a file test.xml in it, the file is accessible as <public URI root>/<context root>/myfiles/test.xml.

In our example with the given public URI root being https://clm.example.com:9443/, the folder name <context root> being PEWEnactmentData and <filename> being makers.xml the URL would be https://clm.example.com:9443/PEWEnactmentData/makers.xml.

1.2 Declaring the Application Location

The WAS Liberty Profile needs to know which application has to be published and where to find its resources. WAS Liberty Profile defines an XML element to declare which application is located where. The element is <application …/>. It supports various attributes and can come in many different forms. The most important attributes are

  • id: Must be unique and is used internally by the server
  • location: The path to the application resource
  • context-root: The context root of the application
  • name: The name of the application
  • type: Specifies the type of application (war or ear)

The location and the id are required, and either a name equal to the context root or the context-root attribute has to be provided. The type can be omitted. I did not provide the id and it worked for me, but since the server requires it, it should be provided; use the same value as the name or context root for the id.

It is necessary to add the application declaration to the server configuration, for example by adding it to the server.xml file. In a CLM deployment a file application.xml is included in the server.xml, declaring all the applications, and it makes sense to use that file instead.

Save the XML files after any change to activate the new setting. The server will pick up the change.

2. Deploying an Application – dropins

The WAS Liberty Profile has the capability to monitor a special dropins folder on a regular basis. If new content is detected, it is automatically made available by the server. The automatic monitoring needs to be enabled for this to work. A standard CLM deployment disables the monitoring to reduce server load.

2.1 Deploying the Application – dropins

In a default CLM install the dropins folder is located here:

<Install Folder>\server\liberty\servers\clm\dropins

Create a folder named <context root>.war in the dropins folder and add the files to publish. After detecting the addition, the files will be accessible as explained in 1.1.

The folder <Install Folder>\server\liberty\servers\clm\dropins should now look like below.

Published application with file

2.2 Enable Automatic Monitoring for Dropins

The automatic monitoring of the dropins folder is disabled for a standard CLM setup with WAS Liberty Profile. This saves processor cycles and I/O during server operation. It has to be enabled once in the server.xml file for this solution to work. Find the XML element <applicationMonitor> and change the attribute dropinsEnabled from "false" to "true" and save the file. The image below shows the changed configuration file.

Enable Dropins Monitoring

The attribute pollingRate determines how often the server checks the location for changes. Adjust the value to your needs.
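For reference, the changed element will look roughly like the sketch below; the polling rate value is just an example:

<applicationMonitor dropinsEnabled="true" pollingRate="500ms" />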

Save the server.xml file after any change to activate the new setting. The server will pick up the change.

The XML file content should now be accessible.

Example 1 – Standard

In the example we want to publish a document with the URI https://clm.example.com:9443/<context root>/makers.xml, where <context root>=PEWEnactmentData. The public URI root is already given by the server setup.

1.1 Create the “Application”

As a first step, create the application. Create a new folder named PEWEnactmentData.war in the folder <Install Folder>\server\liberty\servers\clm\apps. The folder could be created anywhere, but it makes sense to put it where the other applications are. If it is necessary to use a different folder, for example to allow other users to modify the content, place the folder in a different location and note the path. The path to the location becomes important in the next step.

Put the files to be published underneath this folder. In the given example, put the file makers.xml into the folder.

The folder structure should look like the image below:

New Application folder
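
As a plain-text sketch, the layout for this example looks like this:

<Install Folder>\server\liberty\servers\clm\apps\
    PEWEnactmentData.war\
        makers.xml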

1.2 Publish the “Application”

As described in section 1.1, the WAS Liberty Profile needs an <application…/> element that declares where the application is located; at minimum, provide the id and location, plus either a name equal to the context root or the context-root attribute.

The element can be put into several places. In a CLM installation, by default, the applications are declared in the file

<Install Folder>\server\liberty\servers\clm\conf\application.xml

The file is included in the file <Install Folder>\server\liberty\servers\clm\server.xml. It would be possible to place the application declaration in the file server.xml, but it seems to make more sense to put it into the file application.xml instead.
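
The include in server.xml is typically a line similar to the following sketch; the exact path and attributes may differ in your installation:

<include location="${server.config.dir}/conf/application.xml"/>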

Add the following lines

<!-- My Application -->
<application id="PEWEnactmentData" name="PEWEnactmentData" location="${server.config.dir}/apps/PEWEnactmentData.war"/>

to the file application.xml and save the file.

The file should look like below:

Declare new Application

In this case the location is relative to the server configuration folder, which in our case is <Install Folder>\server\liberty\servers\clm\. The location could really be anywhere. If it is necessary to use a folder that, for example, can be accessed by regular users, you could change the location to somewhere with less restrictive read and modification permissions.

After saving application.xml, the published XML file should be accessible using the URL https://clm.example.com:9443/PEWEnactmentData/makers.xml.

The image below shows the browser access.

Browser Access To XML File
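
Access can also be verified from a command line, for example with curl; the -k option skips certificate validation, which is usually necessary for test installs with self-signed certificates (credentials are only needed if the deployment protects the resource):

curl -k https://clm.example.com:9443/PEWEnactmentData/makers.xml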

Example 2 – Deploying an Application – dropins

In this example we want to publish a document with the URI https://clm.example.com:9443/<context root>/makers.xml, where <context root>=PEWEnactmentData. The public URI root is already given by the server setup.

2.1 Create the “Application”

As a first step, create the application. Create a new folder named PEWEnactmentData.war in the folder <Install Folder>\server\liberty\servers\clm\dropins.

Put the files to be published underneath this folder. In the given example, put the file makers.xml into the folder.

The folder structure should look like the image below.

Published application with file

Assuming the automatic application monitoring is enabled, the file should be accessible after the next polling cycle (at the latest after waiting the full pollingRate). If automatic application monitoring is not enabled, enable it as described in 2.2 and save the configuration file.

The XML file content should now be accessible using the URL https://clm.example.com:9443/PEWEnactmentData/makers.xml.

Deploying Compressed Folder Structures

It is also possible to publish real WAR or EAR files similar to the way described above. For example, create a WAR file as described in Publish XML Data Using Tomcat – Hotfix for The Process Enactment Workshop and attached to that post. Deploy the compressed WAR file, for example PEWEnactmentData.war, in the dropins folder, or declare it as an application in another location as explained above. This automatically deploys the application and makes the contained files accessible.

However, simply zipping the folder structure created above, naming the archive PEWEnactmentData.war and trying to deploy it does not seem to work. In this case the WAR file needs to provide the required supporting structures of a real web application.
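
As an assumption based on the standard Java web application layout (the post does not spell out the exact requirements), a deployable WAR archive would typically contain at least something like the following, in addition to the published files:

PEWEnactmentData.war (ZIP archive)
    META-INF/MANIFEST.MF
    WEB-INF/web.xml
    makers.xml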

Related Posts

The RM Extensions Hosting Guide for CLM 6.0.1 and later versions explains the dropins example as well.

Summary

This post shows how easy it is to publish supporting files on the WAS Liberty Profile. This is important if you want to host custom Doors Next Extensions, or if you want to publish your own XML data to be used with HTTP Filtered Value Sets or with the Process Enactment Workshop, and it can possibly also be used for publishing build results. In the latter case, you would likely publish a root folder that contains all the build results.

As always I hope this post has some value to users out there, or is at least interesting to read.


Check out the RTC Timebox Planning Widget with SAFe Support


Take a look at this great community contribution. If you ever wanted to be able to easily see the status of a plan and balance the load in the dashboard, this could be the ideal solution for you.

I already blogged about the great predecessor and other community content in the post Some Community Extensions for RTC.

In the newest post, New RTC Timebox Planning Widget with SAFe Support, Markus Giacomuzzi from Siemens Switzerland explains the newest version of their RTC Timebox Planning Widget with SAFe Support. He also created videos on their YouTube channel explaining how to use and set up the Timebox Planning dashboard widget. The Timebox Planning widget is an excellent example of how community contributions can improve the usability of RTC and provide fresh ideas. Please read the post and view the videos. Don’t forget to share and give Markus a thumbs up.

The extension can be found here on IBM Bluemix DevOps Services, if I am not mistaken.

Markus is going to present this at IBM InterConnect 2016 in Las Vegas in session DOP-3145; you might want to consider visiting the session to see it and share your experiences and ideas.

Here is a screen shot of the program level planning.

TimeBox Planning with SAFe support program level

And here is a screen shot of the team level planning.

TimeBox Planning with SAFe support team level

Last but not least, I thought about delaying this reblog for a day, because I already created a post today, but I just could not restrain myself.
