The RTC Work Item Command Line on Bluemix


I was talking to a customer recently who uses the WorkItem Command Line for automation purposes. Since this automation can trigger e-mail notifications to a large number of users, they wanted to use the new Skip Mail save parameter introduced in RTC 6.0 iFix3.

I had the time and went ahead and implemented it. The resulting source code is available on IBM Bluemix DevOps Services in the project Jazz In Flight.

ibm-bluemix-devops-services-2016-10-24_17-55-35

Access the Source Code

License

The post contains published code, so our lawyers reminded me to state that the code in this post is derived from examples on Jazz.net as well as from the RTC SDK. The usage of code from those examples is governed by this license, and therefore this code is governed by this license as well. I found a section relevant to source code at the end of the license. Please also remember, as stated in the disclaimer, that this code comes with the usual lack of promise or guarantee. Enjoy!

RTC SCM Access

In the project you can access the source code of several extensions and automations I have created over the years. If you click Edit Code and you are not yet a member of the project, you have to request access, which I will grant.

The project contains a Stream called RTC Extensions with several components. One of the components is Work Item Command Line.

configure-eclipse-request-access-2016-10-24_18-13-14

To configure your RTC Eclipse client, follow the instructions in the Configure eclipse client link. You can then create a repository workspace for yourself and download the code. Please use the tracking and planning section (work items) to coordinate with me if you want to make any changes.

Changes

The current version uploaded there contains the capabilities described in A RTC WorkItem Command Line Version 3.0, plus a variety of bug fixes and a new switch /skipEmailNotification to disable work item update notifications for the commands that modify work items, such as

  • update
  • importworkitems
  • migrateattribute

The feature to suppress work item update notifications is implemented in RTC 6.0 iFix3, where a new Skip Mail save parameter was introduced in RTC. When this additional save parameter is provided, the work item change does not trigger a work item change notification mail. The adoption in the WorkItem Command Line is done in a way that does not break with the older API: it introduces the additional save parameter value into the WCL source code as a new String constant, instead of referencing the constant in the API. This way the WCL can be compiled with RTC Plain Java Client Library versions prior to 6.0 iFix3. If the WCL is run against a version earlier than 6.0 iFix3, e-mail notification is not suppressed; the behavior does not change in such versions of RTC, and the additional save parameter is simply ignored.
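The pattern can be sketched in a few lines of plain Python. This is an illustration of the idea only, not the real RTC Java API, and the parameter value below is a placeholder, not the real constant:

```python
# Illustration only: the WCL defines the save parameter as its own string
# constant instead of referencing the constant added to the API in 6.0 iFix3,
# so the code still compiles against older Plain Java Client Libraries.
SKIP_MAIL_SAVE_PARAM = "skipMail.placeholder"  # placeholder, not the real value

def build_save_parameters(skip_email_notification):
    """Collect the additional save parameters to pass along with a work item save."""
    params = []
    if skip_email_notification:
        params.append(SKIP_MAIL_SAVE_PARAM)
    return params

def server_save(known_parameters, save_parameters):
    """A server acts only on save parameters it knows; unknown ones are ignored,
    which is why the save also succeeds against pre-6.0 iFix3 servers."""
    return {p for p in save_parameters if p in known_parameters}
```

Against a 6.0 iFix3 server the parameter is in the known set and mail is suppressed; against an older server the same save simply goes through with the parameter ignored.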

Additional Download

You can also download the latest version 3.4 here:

Please note, access to Dropbox, and therefore to the code, might be restricted in your company or download location.

Usage and install

Please see the post A RTC WorkItem Command Line Version 3.0.

For the general setup follow the description in A RTC WorkItem Command Line Version 2.

For usage follow the description in A RTC WorkItem Command Line Version 2 and in A RTC WorkItem Command Line Version 2.1. Check the README.txt which is included in the downloads.

Summary

The work item command line is now available on IBM Bluemix DevOps Services and can be accessed and worked on there.


Raspberry Pi Unleashed


This is not the usual RTC API post, although it has something to do with RTC in the long run, I hope. I have been tinkering and found some interesting things I wanted to share.

Since the beginning of the year I have been part of what is now the IBM Watson Internet of Things (IoT) business unit, working in the Unleash the Labs team. Like my colleague Tim, I am currently involved with our CLM solution, which basically provides development teams with the required capabilities to plan, develop, build and test.

Internet of Things

Internet of Things means that more and more devices interact, provide and share data on the internet. This basically means these devices will have more and more software, sensors, processing and network capabilities. The software as well as the devices need to be planned, developed, integrated, run and maintained. That is basically what our CLM solution is for. Interaction also means making the data available and providing services based on it. Bluemix is a cloud platform that allows you to do this.

It is not obvious to everyone, but in the last 20 years software has crept into almost all devices. From airplanes to electric toothbrushes, software is embedded everywhere and provides more and more of the added value that makes the difference over the competition.

In the past, the software embedded in most products usually worked in isolation from its environment. It did what it was made for, maybe used sensors and motors to interact and control, but most of the data stayed inside the system. The devices were often not connected and rarely provided data outside of the system. For a long time, sensors in cars were added for specific subsystems, and the data was rarely available to or shared with other subsystems.

Since connectivity to the Internet, even mobile, has become reasonably cheap, more and more products can now also be connected to the Internet. Products in the past have usually been isolated and only used the data directly available to them. With reliable internet connections, products can also provide live data or use data provided somewhere else. The potential benefits for the user typically come from integrating the device data with other data or services available on the internet. It is nice that your runner's watch can record your GPS position. The real benefit is being able to see the pace data on a map and to understand how you improved over time. So the next value chain will be in integrating multiple systems and sharing data. Devices will become chatty and integrated in the Internet of Things.

Today, jet engines of huge passenger airplanes constantly report their status over satellite and other connections, and the companies that build and maintain them use this data to plan maintenance and detect possible issues before they become an expensive problem.

Similarly, today your toothbrush might be able to tell you that you are pressing it too hard or not hard enough against your teeth. Pretty soon it might be able to talk to your health insurance company about your tooth brushing habits and, together with the data of your runner's watch, hopefully get you a rebate.

There are a lot of new business models, benefits and services waiting to be found and implemented using the Internet of Things.

This can be great or not so great, so as a user it might be a good idea to carefully check which data you want to share, who benefits from that data, and who can see or use it. As the borders between local devices and data and the internet are getting thinner, it can be hard to even judge who could access which data and what data you share. If your local reality and the devices you rely upon are so interwoven with the internet, it might also be good to consider that this makes your devices and infrastructure vulnerable to breakdowns, errors and attacks. It is, to some extent, up to us users what the devices, services, data sharing and usage will look like in the future, and how dependent we become on them. The truth is, we are heading in the direction of the Internet of Things today, and fast.

Reading

Unlike in George Orwell's novel Nineteen Eighty-Four, we are actually paying for the cameras, microphones and network connections ourselves, rather happily, and carry them around with GPS tracking too! 8) It is worth knowing this book. With its content in mind, consider some of the measures and desires of governments, intelligence and security services performed or discussed in the past years.

Another author that discusses the consequences of these developments in his books is Daniel Suarez. Check the novels Daemon, Freedom and Kill Decision. Also check other SF, especially the Cyberpunk genre.

Obviously, most of the literature above shows the bad side of the possibilities. I have read most of the books above, so I can talk about them at least a bit. I will try to create a reading list here if I come across good books or get suggestions from my peers. Any suggestions in the comments are welcome as well.

Our technology is even more advanced than a lot of these authors expected it to be. And it is worth having and extending it, as well.

Anyone who has used an app that helps manage travel and connections does not want to miss it anymore. There is also a lot of potential to improve the value of this information and connectivity. Here in Europe we have something called "Public Transportation" in most areas, not only in the metropolitan regions. Bus, subway, train: it is a great system. The software, however, that is supposed to help me with public transportation in my region is suboptimal, to say it nicely. Basics such as access to favorites, clearing input fields, or searching for hubs based on the current position are nonexistent or hard to find. Frustrating. It does not require a genius to find a better design, I think. Today a lot of good ideas also suffer from inaccessibility of information. For example, travel apps are often not allowed to use data available for regional/city travel; the local company owns the IP for that data. Even if there were a better system across the country or Europe, the IP of the data prevents its more global usage and success.

My Background

As a student I was involved in the development of embedded software for print products.

Back then, embedded software development used to be challenging. The electronics were usually custom designed around a special CPU. Memory, usually static RAM, was expensive and scarce; DRAM was usually not supported in these embedded devices. There were all kinds of development environments, such as cross compilers and real time operating systems (RTOS), provided by specialized companies or developed in-house. If you were lucky you had a debugger. Debuggers for embedded devices often required special and expensive hardware to support them. The reasons were that there was no standard connection available that could be used, and the available resources were an issue too. Also keep in mind that embedded systems often control machines. You can't just set a breakpoint in the control code while the machine is continuously running without potentially breaking the controlled process. So often you could only debug by printf().

Later, in the last years before I joined IBM Rational in 2001, I was involved with developing the NexPress 2100 printer. The NexPress 2100 was a system of systems, with multiple CPUs and custom I/O electronics coordinating motors, chargers, heaters and other electrical devices over a network, also communicating with other devices that provide the printing data. I found some pictures and videos that show it in action. It was a huge machine with its own environment and air conditioning system and loads of moving parts. It was a very interesting task, and we at least had chosen various tools that supported debugging the system.

Still, setting up the development environment and bringing up your CPU board with the RTOS of your choice was often a challenge in itself. Running a full UNIX system on an embedded device was not an option; the processors simply did not have the performance or the resources for this.

It was also pretty unlikely to be able to do anything without electronics design support to build the boards and get the sensors and actuators connected back then.

Raspberry Pi Unleashed

So back in the day it was pretty hard to get embedded systems to work. But when I saw Tim's post "My first foray into IBM Internet of Things Foundation" I thought that I wanted to refresh my experience, play around with something like this, and see how things have progressed over the last 15 years. I always need to do some real work to learn how things work (one of the reasons I create working examples for the RTC API). So I decided to get stuck in.

The Plan

The idea for this series of posts is to

  1. Talk about the Internet of Things
  2. Get a small device up and running and share the experience and learnings
  3. Show some interesting things that one can do with these kinds of devices
  4. Get a development environment up and running that includes RTC/CLM to develop software for such a device on my laptop using a cross compiler
  5. Get the GrovePi+ up and running in a Docker image and connected to Bluemix

Related Posts

  1. This post
  2. Raspberry Pi Unleashed – Setup the GrovePi+

Shopping Options and Considerations

So I ordered a Raspberry Pi. The Raspberry Pi will also be referred to as RPi in these posts. I basically followed Tim's shopping list.

In hindsight, if someone wants to get into this I would suggest one of the following:

  1. Only order the Raspberry Pi and get some LEDs for the GPIO
  2. Get Tim's shopping list with the GrovePi+, but add at least one LED and possibly some of the cool additional sensor and output devices
  3. Go bananas and build an RPi cluster for a Docker Swarm

The robots are also tempting!

Option 1 basically allows you to more or less stay away from the hardware aspect, or to only touch it by blinking a GPIO-based LED. This is the least expensive start. If you find it interesting you can always add more later.
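As a sketch of what option 1 looks like in practice, here is a minimal Python LED blink. It assumes an LED wired to BCM pin 18 through a resistor; the RPi.GPIO library is only usable on a Raspberry Pi, so the sketch degrades gracefully elsewhere:

```python
import time

try:
    import RPi.GPIO as GPIO  # only importable/usable on a Raspberry Pi
except (ImportError, RuntimeError):
    GPIO = None

def blink(pin=18, times=3, interval=0.5):
    """Blink an LED on the given BCM pin; returns False without GPIO hardware."""
    if GPIO is None:
        return False
    GPIO.setmode(GPIO.BCM)
    GPIO.setup(pin, GPIO.OUT)
    try:
        for _ in range(times):
            GPIO.output(pin, GPIO.HIGH)  # LED on
            time.sleep(interval)
            GPIO.output(pin, GPIO.LOW)   # LED off
            time.sleep(interval)
    finally:
        GPIO.cleanup()
    return True

if __name__ == "__main__":
    blink()
```

The pin number and wiring are assumptions; adjust them to your setup.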

Option 2 with the additional LED allows you to follow the GrovePi+ setup and see the LED work very early. Additional input and output devices are certainly fun and there are interesting choices such as GPS and motion sensors! I was able to set up the GrovePi+ using the Python examples and the Grove – Barometer (High-Accuracy) as well, but a flashing LED is probably more impressive.

Option 3 is probably a bit odd, but maybe fun! See the section Docker RPiCloud below for more information.

It is necessary to have a card reader/writer that can be used to write the initial operating system to the micro SD card. If your computer does not have one built in, there are small, cheap USB devices available for the various common types of laptops and desktops.

8GB for the micro SD card should be enough. It makes more sense to buy two or more micro SD cards than one bigger card. Multiple SD cards allow you to have different setups for the RPi that can be changed quickly.

After playing around with the RPi for a while, I think it makes sense to have some kind of case that protects the device and prevents short circuits. It has to be able to contain the GrovePi+ if you use it.

Although it is not strictly needed, it is a good idea to have a monitor or TV set with HDMI input and cable, and a USB mouse and keyboard available. This makes it easier to play around with the Raspberry Pi to get started. There are multiple options to get the Raspberry Pi up and running the first time. Some require a mouse and keyboard directly connected to the RPi for the first steps, e.g. to see and monitor the first boot process, choose the operating system to be installed, and set some defaults for the RPi.

When I started, I used an old USB keyboard, but found the cable irritating and pulling at the RPi. I had an old mouse that uses a wireless USB connector that also allows hooking up a keyboard. The "unifying" interface has been stable for some years now, and I ended up buying a small keyboard in addition to the existing mouse, similar to this combo. This is ideal, as I can move these devices out of the way when they are not needed.

Wireless networking is not necessarily supported by all packages or configured when bringing the RPi up the first time, so an Ethernet cable is important for the first steps. In any case, you need to be able to find the IP address of the Raspberry Pi, so having access to the DSL or cable modem router is a bonus and helps in identifying this IP address.

If possible, I would suggest setting up your router to always provide the same IP address for the Raspberry Pi. In this scenario you can use the stable IP address of the RPi to establish a remote connection and work without a connected display or TV set. If this is not possible, it is necessary to use special tools such as nmap or Zenmap to identify the IP address of the Raspberry Pi to establish the remote connection.
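Once you have a candidate address from the router or from nmap, a tiny Python check (my own helper, not part of any tool mentioned above) tells you whether something is answering on the SSH port before you fire up a remote session:

```python
import socket

def ssh_reachable(host, port=22, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds, i.e. SSH is likely up."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: probe a candidate address your router reported (hypothetical address)
# ssh_reachable("192.168.1.42")
```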

Getting Started with the Raspberry Pi

Follow the Quick Start Guide for the detailed setup information. This is just a short summary of the steps you have to do to get started with the Raspberry Pi, to give you an impression how quick and easy that actually is.

The first step would be to bring the Raspberry Pi up the first time. There are small differences depending on the operating system you are using.  The general steps are pretty much the same, but the number of auxiliary tools needed might be different.

  1. Download the Operating System (Raspbian aka Jessie) image / OS Setup tool (NOOBS) or other images for the Raspberry Pi
  2. Download the format program for the SD card
  3. Download a tool to write an image ISO file to the SD card
  4. Download remote connection tools if needed e.g. to be able to use SSH to connect to the RPi
  5. Put the Operating System for the Raspberry Pi on the SD card, e.g. for NOOBS or if you use an image
  6. Insert the SD card into the Raspberry Pi
  7. Connect network and peripherals to the Raspberry Pi
  8. Power up the Raspberry Pi
  9. If you use NOOBS, choose the OSes to install on the Raspberry Pi
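On Linux or macOS, step 5 essentially boils down to copying the image file byte for byte onto the card device (Windows users would use a dedicated imaging tool instead). A minimal Python sketch of that copy follows; the device path is an assumption you must verify first, since writing to the wrong device destroys data:

```python
import shutil

def write_image(image_path, device_path, chunk_size=4 * 1024 * 1024):
    """Copy an OS image byte for byte onto a block device (or any target file).

    CAUTION: device_path is an assumption -- verify it first (lsblk on Linux,
    diskutil list on macOS); the target is overwritten completely.
    """
    with open(image_path, "rb") as src, open(device_path, "wb") as dst:
        shutil.copyfileobj(src, dst, chunk_size)

# Example (hypothetical paths!):
# write_image("raspbian-jessie.img", "/dev/sdX")
```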

That is all for now. You are done. There is a full blown operating system on your Raspberry Pi. Most likely a Linux based operating system, but not necessarily. There are other choices available. NOOBS has Windows 10 IoT available as well.

You can now work with the keyboard and mouse on the HDMI connected screen.

Remote Connection

Or you use an SSH client such as Putty, or whatever is built into your operating system, to connect to the Raspberry Pi. Most of the additional work I did was using an SSH shell from my laptop. The reason is that I then have two screens, and copy and paste and documenting with screenshots is so much easier. Make sure that SSH is installed and enabled; check the description of your image for hints on how you should connect.

First impressions

In comparison to 15 to 20 years ago, this process is so easy, anyone should be able to do this. The other aspect is that the Raspberry Pi is very affordable. The Raspberry Pi comes with images that directly support a media library or device to display videos or other media on a TV set. So there is some immediate purpose it can serve. In addition it provides you with a platform that can be used to develop and run applications. Linux has all the needed editors and compilers available and other language choices are possible. The system can act as a server as well as a client, dependent on what is needed.

I am quite impressed. There is also infrastructure available to support classroom use, and a lot of example projects, YouTube videos and companies providing additional devices.

The image below shows my RPi with its peripherals. It is connected to Ethernet, but wireless is functional and could be used as well. The keyboard and the mouse are connected wirelessly. The Raspberry Pi is connected to a monitor and runs X Windows. There are various flavors of operating systems and images available; shown here is a full blown Raspbian, which can be used as a desktop with keyboard and mouse.

PIE_With_Peripherals

You can additionally install media library and home automation support, or get images preconfigured for these services. There are other options available, such as Windows 10 IoT, and more if you search the internet.

You can also get, for example, a Raspbian Lite image, which is stripped down to a minimal footprint that you can then extend with what you need. It still supports access using mouse, keyboard and monitor, but only boots up into a terminal. It is not necessary to have a mouse, keyboard and monitor/TV connected; it is possible, and sometimes easier, to use a remote connection.

If I have to follow more complex setup tasks from a description, I usually don't use the keyboard and mouse directly connected to the Raspberry Pi. I rather use an SSH shell connection (e.g. using Putty on Windows) to the RPi. This allows copy and paste as well as screenshots to create documentation. The image below shows a connection to the system above using Putty.

SSH_Connection

The default user for the Raspbian image above is pi, the password raspberry. Consider your keyboard layout: depending on how you connect and what keyboard settings you have, the 'y' key could actually map to 'z'.

If there is no need for a full blown operating system and you would rather do some hardware-related work, have a look at the Arduino. It is cheaper and has more of a hardware-control focus. The Raspberry Pi can control hardware as well using the GPIO, but it is more expensive and has more overhead in development and OS.

Docker RPiCloud

In the first days I had received the Raspberry Pi but, due to shipping, not yet the GrovePi+ board and sensors. So I looked around on the internet for what you could do with a plain Raspberry Pi. I ran into this blog from Hypriot that talked about running Docker on the Raspberry Pi.

I had actually looked into Docker recently, so I decided I’d try it out. It really worked very well for me. If you follow this blog, you end up with a system that has a Docker host and a Docker daemon running on the Raspberry Pi and can run Docker images on that RPi. Here is a reference to the Docker architecture.

You have to keep in mind that Docker is not a full blown virtualization, which makes Docker images dependent on the architecture of the Docker host. To run Docker images on a Raspberry Pi, you have to provide them for the ARM platform. The blog authors already ported Docker and provide several images for the Raspberry Pi.
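Because images are host-architecture dependent, it can help to check what the local machine reports before pulling an image. Here is a small Python sketch; the mapping is my own simplification (modern multi-arch images handle this via manifest lists):

```python
import platform

def docker_arch_flavor():
    """Map the local machine architecture to a rough Docker image flavor."""
    machine = platform.machine().lower()
    if machine.startswith("arm") or machine == "aarch64":
        return "arm"   # Raspberry Pi and friends
    return "amd64"     # typical laptop/desktop/server
```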

And there is more. Several people have created custom Raspberry Pi clusters. There is also a company that provides kits to build Raspberry Pi Pico clusters with 3 to 100 nodes. You can run Docker swarms on these clusters.

That is incredible. When I did my diploma thesis, working on a parallel computing system based on Transputers, 32 nodes was some kind of supercomputer. I have to assume that, in comparison to the Raspberry Pi, Transputers are actually not that fast anymore. So it is possible to set up a small "supercomputer" to explore parallel computing for a reasonable price and put it on your table.

So, if I have time, I will try to create a Docker image that contains the software required to run against Bluemix and has the GrovePi+ and the sensors configured. I am curious what happens if one tries to run multiple containers. As long as the sensors are only read, I assume everything is going to be OK. But we shall see.

Summary

The experience with the Raspberry Pi was very different from my experience with embedded development in the past. Back then, if you had more than a terminal on your embedded device, you were lucky.

I haven't yet tried debugging or setting up a cross development environment on my laptop, but I am looking forward to that too. Getting the GrovePi+ I/O board and the sensors up will be the next challenge.

Stay tuned, if you are interested in the Internet of Things. If you like to tinker with it yourself, get started. It is very easy to approach these days and there are a lot of interesting example projects out there you could follow.

The next post will talk about bringing up the GrovePi+ and more tinkering. I don’t yet know the details.

Using RTC to Work with DevOps Services and With Bluemix


I recently had a look into Bluemix and how to use it with Eclipse to develop cloud applications. That blog post also mentions that there is an integration with DevOps Services that enables using work items for planning. It also allows using Git or Jazz SCM to manage the source code.

Recently I had a look into how that works, and I would like to share here what I learned. This post assumes you have already performed the first steps to set up your environment following the Getting started With Bluemix post.

Please note: DevOps Services as well as Bluemix are evolving quickly, adapting to new needs as they arise, and what is described here might not be the only possible solution, or might be outdated by the time you read it. It might be a good idea to check with the current documentation of DevOps Services.

Creating a new DevOps Services Project

The first step to get started with DevOps Services, is to create a new project to manage work items and the source code.

After signing into DevOps Services, using the IBM ID created for Bluemix, it is possible to create a project. The screenshot below shows the information needed to do this. Basically you provide a name, choose how the source code should be managed, and select a project template. There is also a choice to integrate Bluemix with the project.

For the following part of this blog I am assuming that Jazz SCM was chosen.

New DevOps Services Project

For the Bluemix integration, provide the organization – basically the Bluemix ID – and the password.

Clicking the Create button creates an RTC project (RTC is working under the hood of DevOps Services).

On the overview page, you can select to edit the code, track and plan work with work items, and configure and manage build and deployment.

Configure Eclipse Project

There is also a "Configure Eclipse Client" choice available. Clicking it provides an invitation that can be used in the RTC Eclipse client to set up the connection.

Configure Eclipse Client

Just copy the invitation data and paste it into the 'Accept Invitation' action, provide the password, and the connection is created. We will look into the next steps done with Eclipse later.

Enabling the Bluemix Integration

Switch to the Build & Deploy section using the button. This page allows you to configure the build and deploy mechanism, request a new build and deploy, and view the deployment status.

Configure Deploy and Build

Build and Deploy has basically two settings. Click Simple to select the Simple setting, which is adequate for now (this means I haven't been able to use the advanced settings). Then click the Configure button.

Configure Deployment

This basically defines the structure needed to deploy an app.

The integration expects the manifest.yml in the root folder of the Jazz SCM system. Since there is no example code yet, the first build and deploy attempts will probably fail.

Jazz SCM in the Project Web UI

Switching to the Edit Code page allows you to access the SCM information.

Please note: I had issues seeing the stream information, versioned files and other data with the latest version of the Firefox ESR browser (31.2.0).

Chrome worked for me, so I would suggest using that browser. It is unclear why, because other users apparently don't have that problem. It might as well be one of these weird effects we have to put up with in a browser-based world.

The project creation dialog created a Stream, a repository workspace and a component already. The names are based on the name of the project.

You can browse the repository workspace and create files and folders in the Orion editor in the web UI and deliver your changes to the stream to be deployed.

My task was doing this with the Eclipse client, so that is where I went first.

Jazz SCM in the Eclipse Client

There is a description for this step that I could find here in the documentation. However, I had problems performing those steps. This might be different today; however, if you run into anything, it might have similar reasons.

At this point the assumption is that the invitation from DevOps Services has been used to create a repository connection and the client is logged into the project.

As a first step, a new repository workspace is needed. The easiest way to create one is to find the stream in the Team Artifacts view and create the repository workspace from that. This creates the repository workspace and sets the default and current flow back to the stream. Tip: put e.g. 'Eclipse' into the repository workspace name, so as not to confuse this workspace with the one used by the web UI in the Orion editor. The reason is that repository workspaces are not designed to support one instance being loaded and modified multiple times in different places (streams are designed for this).

The next step would be to load the repository workspace. Before attempting this, keep in mind that the Build & Deploy step assumes the manifest.yml file to be in the root folder. To achieve that using the Eclipse client and RTC Jazz SCM, there is only one option: load the component as root folder, as shown below. Trying this, however, failed for me the first time around. The reason is that the default name of the component is derived from the project name and has a pipe '|' symbol in it. This is not allowed in a file or folder name on the filesystem (on Windows at least). The best approach is to rename the component to some useful name, or at least to replace the pipe symbol with a valid one, for example a dash.

After this has been done the component can be loaded.

Load Repo Workspace Component

In the second step of the load wizard, select the component to be loaded and press Finish.

Select Load Repo Workspace Component As Folder

While loading the data to disk, the RTC Eclipse client creates an artificial project file to mark this folder as an Eclipse project. Dependent on the scenarios one wants to perform later, one might or might not want this file to be checked into version control. If one would like to have Eclipse projects on a deeper level, the file could get in the way.

Since the file is always created when the data is loaded this way, I added the file to the Jazz ignore file.
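From memory, the generated .jazzignore uses `core.ignore` entries in curly braces, so adding the artificial project file looks roughly like this (let the client generate the file first, then edit it; the exact generated header comments are omitted here):

```
core.ignore= \
    {.project}
```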

It is now possible to add the files for the application. For example, the files from the Bluemix example in my last post can be used. This would look like below:

Example File Structure

Why this structure? The project.json file is from configuring the project. It contains the property for the project name. I left it there.

The manifest.yml file is needed for the boilerplate/runtime our sample is using. It needs to be in the root folder; this is specific to how Bluemix builds and deploys. In the example above, I basically moved the original manifest.yml from the enclosed Eclipse project rsjazz01 into the root folder, then changed the path to point into my Eclipse project/folder rsjazz01. The content is changed to reflect the path to the Node.js project in the sub folder rsjazz01.

Manifest File

If the path set above were just the root folder, the package.json file would also be required in the root folder. As it is above, the file is needed in the sub folder.
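The resulting manifest.yml might look roughly like this (the name and path come from this walkthrough; the memory value is illustrative, adjust it to your runtime):

```yaml
applications:
- name: rsjazz01    # the application name
  memory: 128M      # illustrative value
  path: rsjazz01    # sub folder containing the Node.js app and its package.json
```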

The way it is now allows loading the repository workspace, finding the rsjazz01 folder as a Node.js project, and doing local debugging on it.

Working with the Code

Once the general structure is set up, it is possible to edit the code in the Web UI as well as in the Eclipse client. Once you deliver the code to the stream it gets automatically built and deployed. Delivery would usually require a work item connected to the code change for traceability.

Build And Deploy

The application is also accessible for testing and, of course, monitoring in Bluemix.

Pros and Cons

Looking at this post and the Bluemix post, there are obviously several valid approaches. The approach described here allows developing one application with one DevOps Services RTC project and getting continuous build and deploy for free.

The approach described in the Bluemix post allows you to use Eclipse to work on several projects and to manage the work and code in one or more DevOps Services RTC projects, as best fits. If I want to manage multiple applications in one RTC project, the automatic build and deployment would not be available. That, however, can easily be scripted into continuous integration build scripts as well.

Summary

I hope this and the Bluemix post provide you with some insight into how the DevOps Services and Bluemix work together and how you can use Eclipse and RTC to develop your applications.

Getting started with BlueMix


Recently everyone has their heads in the clouds (no pun intended) and I decided to have a peek to find out what it is all about.

This post is a summary of my first experiences with the IBM BlueMix Cloud Computing offering and how I got started with developing my first applications for it.

Note: this is not an RTC API post. However, RTC is involved.

There are several posts by my peers. Look into Dan Toczala’s and Takehiko Amano-san’s blogs and see these posts about BlueMix:

There are more posts available.

BlueMix has been around for some time now here at IBM and I wanted to understand what it is providing. I have seen some high level presentations and demos already. Unfortunately I am not the kind of person who can learn to fly by reading books and looking at slides. I have to get things into my hands and experiment with them to understand how they work. This usually also involves accidents, painful crashes, and recovery from them. It is, however, the best way for me to understand how something works, and the most beneficial from my point of view.

You can read in the BlueMix documentation about what BlueMix provides.

Citing the web site, IBM® Bluemix is an open-standards, cloud-based platform for building, managing, and running apps of all types, such as web, mobile, big data, and smart devices. It can be used to develop and run server applications.

You can use your own development environment as well as IBM Dev Ops Services to develop the applications and manage the source code.

You should familiarize yourself with the architecture of BlueMix to understand the details. I will try to use the concepts described there in this post, with only a short summary of what they represent.

BlueMix provides several ways to start with developing applications.

  • Runtimes are a preconfigured set of resources used to run applications
  • Boilerplates are preconfigured containers used to run applications that usually also contain services
  • Services hosted by BlueMix provide capabilities such as session caches, persistence, and more

Looking at the Runtimes, there is support for Node.js applications, Liberty for Java (a lean profile of WebSphere Application Server), Ruby on Rails, and others.

Since I am not a web developer but have some JavaScript experience, I decided to go for Node.js. I got myself some material to learn about it first. After understanding the basic concepts by reading, I started to set up a local development environment.

Setup a local Node.js Development Environment

I followed http://www.nodeclipse.org/updates/ to install the local Node.js environment. I downloaded Node.js from http://www.nodejs.org/download/ and installed it as suggested in the previous link. I skipped CoffeeScript, and I had a JDK 7 already on my machine.
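After the install, a quick sanity check from a shell confirms Node.js is on the path (the version printed depends on what you downloaded):

```shell
# Print the installed Node.js version to confirm the install worked.
node --version
```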

Setup Eclipse for Node.js Development

I needed a local environment to be able to play with it and have a quick turn-around time. I downloaded Eclipse Juno, because I heard that would be the best option, and followed http://www.nodeclipse.org/updates/ to install what is needed into Eclipse.

Having done this, I was good to go and I was able to create Node.js projects in Eclipse and run and debug them locally.

I ran some examples until I felt reasonably familiar with how the language works and decided to pursue my quest to BlueMix.

Setup Eclipse with RTC

Since I intended to use Eclipse with RTC embedded to be able to use RTC against IBM Dev Ops Services, and not using Git for SCM (sorry guys, but I can’t do that), I downloaded the RTC 5.0 p2 install package from the RTC All downloads Page. After the download succeeded, I installed RTC into Eclipse. I logged into IBM Dev Ops Services from RTC using my Jazz.net ID and my IBM ID password. Weird.

However, now my local environment works with RTC and I can use any RTC repository, including IBM Dev Ops Services, to manage my work and source code.

Logging into BlueMix

I logged into BlueMix. Please note, you can use or create an IBM ID. This basically provides you with an evaluation period of some months. This should be easy to follow and work like a charm.

If you are an IBMer you can use your intranet ID, and I would suggest doing that. I unfortunately used my IBM ID and had to follow the explanation text and links to the right of the user and password fields to link the two up. There still seem to be problems with this, because I happened to end up on a staging version of BlueMix that did not work for me.

Note: After logging into BlueMix, make sure your URL is https://ace.ng.bluemix.net.

Creating a Sample Project on BlueMix

To get started, I created a sample project on BlueMix. I went into my dashboard and clicked the tile Create An App. I picked the Runtime SDK for Node.js, provided a unique host name, for example rsjazz01, and accepted all the default settings.

Note: The host name needs to be unique, which basically means anyone following this will have to pick a different name and replace it in the images and text below.

The project gets opened, but won’t run, since there is nothing in it yet. In the top section to the left, underneath the application icon and name, is a link named View Guide. This link provides more information about how to get started. The following is what it shows if you chose the project name RsJazzTest03. The project name is reflected in the downloaded sample files in some places.

BlueMix Sample

Install The CloudFoundry Commandline

BlueMix uses CloudFoundry to upload and deploy applications. Follow the link and description in the guide to download and install the CloudFoundry command-line.

Also download the example code for the application. Store the compressed code somewhere and extract it into a folder, for example c:\temp. Assuming the application name is rsjazz01, there would be a folder C:\temp\rsjazz01 that contains the source code of the project.

You can follow the instructions to push the example to BlueMix and run it. However, let’s get it into Eclipse so that we can look at it in a more convenient way.

Create an Eclipse Project

Create an Eclipse Node.js project. It can have any name as far as I can tell, but in the context of Eclipse it makes sense to choose the name of the application as the project name, e.g. rsjazz01.

From the folder C:\temp\rsjazz01 that contains the uncompressed example, select all files and folders. Copy them using CTRL+C and paste them into your Eclipse project. You can do this in the Eclipse project explorer or in the filesystem. If you did it in the filesystem, refresh the Eclipse project to see the files. The project content should look like this:

Example Project

The main application file is app.js. The folders public and views and the files they contain are used by the framework to create the web pages.

Run the Application on the Local Development Environment

Before trying to run the application in the cloud, let’s try to run it on the development environment. In order to do so, let’s examine the application first. The file app.js looks as follows:

Example App

The application prepares itself first, then gets some data, such as the host name and the port it is using, from the environment, or uses defaults if they are not set. Then it starts to listen as a server on that port and host.

The description of the sample mentions some other pieces it is using. Let’s look at what they are.

The files

  • manifest.yml
  • package.json

marked in the project screen shot above, are used by BlueMix to deploy and run the application. Any application that runs on BlueMix needs this kind of information to be able to deploy and run.

Let’s look at the manifest.yml file first. This is the content for our sample.

Manifest

This describes some of the properties of the application, such as the host, the application name, the command to start it as a Node.js application, the domain, the number of instances, and the required memory and disk. When creating an application from scratch, this is important information to look at.
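Since the screenshot is hard to read, here is a sketch of what such a manifest.yml for the rsjazz01 sample might contain (the concrete values are assumptions based on the defaults discussed in this post):

```
applications:
- disk_quota: 1024M
  host: rsjazz01
  name: rsjazz01
  command: node app.js
  path: .
  domain: mybluemix.net
  instances: 1
  memory: 128M
```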

The package.json file looks like this:

Package

This file describes the application and, more importantly, the packages that the application requires to be able to run. It needs the Express web application framework, version 3.4.7, and the Jade template engine, version 1.1.4, to run on a node engine.
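As a sketch, the dependency section of that package.json would look roughly like this (name and description are placeholders; the versions come from the dependencies mentioned above):

```json
{
	"name": "NodejsStarterApp",
	"version": "0.0.1",
	"description": "Sample Node.js starter app for BlueMix",
	"dependencies": {
		"express": "3.4.7",
		"jade": "1.1.4"
	},
	"engines": {
		"node": "0.10.26"
	},
	"repository": {}
}
```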

Install Express and Jade 

To install these packages on your local machine, so that you can run the application, open a shell and use the package manager. Type each line below and hit Enter. The versions needed are taken from the dependencies. Note that the newest version of Express won’t work; there have been changes to it that will break the application.

npm install express@3.4.7
npm install jade@1.1.4

Wait for Node.js to download and install the packages.

Now right-click app.js and select to run it as a Node application. Open http://localhost:3000/ and see the web page displayed.

It is now possible to develop the application further in the local development environment. You can use RTC to put it under version control, to share it, and to plan the work. Any other source control provider that Eclipse supports can be used as well.

Deploy the application on BlueMix

Let’s try to deploy the application on BlueMix. How this works is described in the guide above. Open a shell. The first three commands can be run from anywhere.

First set the API URL for Cloud Foundry:

cf api https://api.ng.bluemix.net

Log into the server (use your own ID):

cf login -u 

This prompts for a password. Provide your password and finish the login.

Set the target space for the application. By default the space is called dev.

cf target -o  -s dev

Now change the directory to the folder that represents the project on disk, named rsjazz01 in this example. This folder is directly inside the workspace folder you chose to use with Eclipse when you started it.

Now push the application to the BlueMix server:

cf push rsjazz01

The data gets uploaded, deployed, and started. In the BlueMix dashboard, the application tile should show that there are activities happening in BlueMix, while they also show up in the shell. Once the process finishes, the application is deployed and you can open the URL and see the same result you had from the local run.

Create a Simple Custom Sample Application

Running a sample application that is supposed to run is – relatively – easy. But what about running a custom application? What is needed to do that?

Create a new Node.js project and give it a name. In the example we will use rsjazz02. Pick a name that suits you if you want to follow along.

The new project is empty. Create a new JavaScript file and call it app.js. The file should have the following content:

/*jshint node:true*/

/**
 * New node file
 */
var http = require("http");

function onRequest(request, response){
	response.writeHead(200, {"Content-Type": "text/plain"});
	response.write("Hello World - this is rsjazz's first BlueMix application!");
	response.end();
}

//There are many useful environment variables available in process.env.
//VCAP_APPLICATION contains useful information about a deployed application.
var appInfo = JSON.parse(process.env.VCAP_APPLICATION || "{}");
//TODO: Get application information and use it in your app.

//VCAP_SERVICES contains all the credentials of services bound to
//this application. For details of its content, please refer to
//the document or sample of each service.
var services = JSON.parse(process.env.VCAP_SERVICES || "{}");
//TODO: Get service credentials and communicate with bluemix services.

//The IP address of the Cloud Foundry DEA (Droplet Execution Agent) that hosts this application:
var host = (process.env.VCAP_APP_HOST || 'localhost');
//The port on the DEA for communication with the application:
var port = (process.env.VCAP_APP_PORT || 3000);
console.log('Start my server on port ' + port);
//Start server
http.createServer(onRequest).listen(port,host);

console.log('App started on port ' + port);

This application basically waits for an HTTP request on a port on a host and responds with a simple text. It reuses the parsing of the environment variables we saw in the sample application to get the port and the host name.

Run the application on the local Node.js and connect to it using http://localhost:3000/. It should run and provide the expected output in the browser window.

It does not have any dependencies on other packages. However, it would not yet run on BlueMix. It lacks the information required to deploy and run it there.

Copy the manifest.yml and package.json files over from the sample application. You can also copy the readme files, but these are not required.

Open the manifest.yml file and edit it to use a new host name. To make sure the host name is unique you can create an empty project on BlueMix, but you don’t have to. BlueMix will tell you if the host name is already taken. In the code below I use rsjazz02 as the name of the application and as the host name.

applications:
- disk_quota: 1024M
  host: rsjazz02
  name: rsjazz02
  command: node app.js
  path: .
  domain: mybluemix.net
  instances: 1
  memory: 128M

The line

  command: node app.js

can stay as it is. If you chose a different name for the main JavaScript file, you would put that name here.

Open the package.json file and edit it to match the new situation. You can change the name and the description. Remove the dependencies, as there are no dependencies to other packages needed. Keep the rest as it is.

{
	"name": "RSJazzSampleApp",
	"version": "0.0.1",
	"description": "A sample nodejs app for Bluemix - by rsjazz",
	"dependencies": {},
	"engines": {
		"node": "0.10.26"
	},
	"repository": {}
}

Save all the changes to these files.

The application is now ready to deploy on BlueMix. Change the directory of your shell to the new folder e.g. using cd ../rsjazz02.

Now push the application to the BlueMix server using the shell command:

cf push rsjazz02

The data gets uploaded and the application deployed, and you can test it using http://rsjazz02.mybluemix.net/ once it is running and the health shows green (replace the name of the application in the URL with yours). The result should be the same as in the local run.

Use RTC and IBM Dev Ops Services

You can use IBM Dev Ops Services to develop and deploy BlueMix Applications with RTC. You would basically create a DevOps Services project to manage your source code and use it to deploy your application. I will try to blog about this later.

You would still do all the above steps to set up your local development environment.

Enable Eclipse To Deploy Directly to BlueMix

So far a local shell and the cf command is used to push the application up to BlueMix. As mentioned above you could also use IBM Dev Ops Services to do this.

There is a third option available. You can configure your Eclipse client to connect to BlueMix and, if you desire, deploy the application automatically whenever you make changes.

You can install the IBM Eclipse Tools for Bluemix into your local Eclipse Client.

Once you have done that, you can open the Eclipse View Servers and add a new server to it.

The server view would look like below. The overview shows the configured BlueMix connection. The Applications and Services section shows the applications and services you have configured. The Servers view shows the applications on the server as well as the locally connected ones.

BlueMix Eclipse Tools

To be able to deploy your Node.js application, you have to change it a bit first. You have to convert it to a faceted form, using Configure in the context menu of the project.

Configure Faceted Form

In the following dialog you have to select the application type, in this case Node.js Application. Once you have done that, you can see it in the Add and Remove dialog for the server.

Configure Server

You can add applications and remove them. If configured to do so, any save will trigger a deployment.

Summary

This post shows how you can use RTC and Eclipse to start developing Node.js applications for BlueMix. It shows how to configure the environment and the first basic steps, in a way that helps you get past the first questions. After reading this you should be able to do some basic experiments in half a day or so.

As always I hope this helps someone out there to save some time and I appreciate feedback.