Tips & tricks for installing and running IBM products

OAuth and OpenID Connect provider configuration for reverse proxy instances - reuse acl option

Tom Bosmans  10 October 2018 10:12:04
I have multiple reverse proxy instances configured on an appliance, and recently added a new one.

I performed the "OAuth and OpenID Connect Provider configuration", and did not select the options "Reuse ACL" or "Reuse Certificates".

After that, I noticed that my OpenID authentication no longer worked correctly on the other instances.
The reason was that the ACLs for the objects in /mga/sps/oauth/oauth20/ had disappeared.

So if you have already configured other instances on your appliance for "OAuth and OpenID Connect", always enable "Reuse ACL"!

What actually happens is easy to follow in the autocfg__oauth.log file in the Reverse Proxy log files:

If "Reuse ACL" is not checked, the configuration first detaches the ACLs from all objects, deletes each ACL and then recreates it, but attaches it only for the reverse proxy where you run the configuration.
So you lose all configuration that uses the isam_oauth_* ACLs in the other instances.
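To see which objects still have the OAuth ACLs attached (useful before and after running the wizard), you can query them with pdadmin; a sketch, where the exact ACL names should be taken from the `acl list` output on your own appliance (`acl find` lists the objects an ACL is attached to):

```
pdadmin> login -a sec_master -p <password>
pdadmin> acl list
pdadmin> acl find <one of the isam_oauth_* ACLs from the list>
```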

Moral of the story: always enable "Reuse ACL" when running the "OAuth and OpenID Connect Provider configuration".

Add a header X-LConn-UserId to all requests in Connections

Tom Bosmans  8 August 2018 11:51:30
By adding this generic property to LotusConnections-config.xml, all requests will contain a header X-LConn-UserId that identifies the logged-in user.

Depending on your configuration, this most likely is the email address of the logged in user.

<!-- To display email of logged in user in IHS: -->
<genericProperty name="">true</genericProperty>

You can then add this header value to the log configuration in Apache/IHS, so your access logs include the user. This is pretty helpful for tracing problems.
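For example, in httpd.conf you could extend the access-log format with the standard %{header}i escape; a sketch (the format name and log path are made-up examples, the rest is the usual combined format):

```apache
# Append the X-LConn-UserId request header to each access-log entry
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{X-LConn-UserId}i\"" lconn_combined
CustomLog logs/access_log lconn_combined
```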

Please note that this is not officially supported in any way !

IBM Cloud Private installation - Filebeat problem (CentOS7)

Tom Bosmans  13 July 2018 13:17:00
After installation of IBM Cloud Private, I noticed I did not see any log information in the ICP UI.

While checking the logs, I saw that filebeat did not start correctly (or rather, completely failed to start).

(on the master node: )
[root@icpboot ~]# journalctl -xelf

Jul 13 11:54:17 hyperkube[1825]: E0713 11:54:17.168699    1825 kuberuntime_manager.go:733] container start failed: RunContainerError: failed to start container "ab8344159739d06825c25c489dc09a0143f437b6be321804df06e59417d66a18": Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount.

Jul 13 11:54:17 hyperkube[1825]: E0713 11:54:17.168734    1825 pod_workers.go:186] Error syncing pod 2406dc66-85e4-11e8-8135-000c299e5111 ("logging-elk-filebeat-ds-wvxwb_kube-system(2406dc66-85e4-11e8-8135-000c299e5111)"), skipping: failed to "StartContainer" for "filebeat" with RunContainerError: "failed to start container \"ab8344159739d06825c25c489dc09a0143f437b6be321804df06e59417d66a18\": Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount."

Jul 13 11:54:28 hyperkube[1825]: I0713 11:54:28.083562    1825 kuberuntime_manager.go:513] Container {Name:filebeat Image:ibmcom/filebeat:5.5.1 Command:[] Args:[] WorkingDir: Ports:[] EnvFrom:[] Env:[{Name:NODE_HOSTNAME Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil}} {Name:POD_IP Value: ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,}}] Resources:{Limits:map[] Requests:map[]} VolumeMounts:[{Name:config ReadOnly:false MountPath:/usr/share/filebeat/filebeat.yml SubPath:filebeat.yml MountPropagation:} {Name:data ReadOnly:false MountPath:/usr/share/filebeat/data SubPath: MountPropagation:} {Name:container-log ReadOnly:true MountPath:/var/log/containers SubPath: MountPropagation:} {Name:pod-log ReadOnly:true MountPath:/var/log/pods SubPath: MountPropagation:} {Name:docker-log ReadOnly:true MountPath:/var/lib/docker/containers/ SubPath: MountPropagation:} {Name:default-token-kbdxx ReadOnly:true MountPath:/var/run/secrets/ SubPath: MountPropagation:}] VolumeDevices:[] LivenessProbe:nil ReadinessProbe:nil Lifecycle:nil TerminationMessagePath:/dev/termination-log TerminationMessagePolicy:File ImagePullPolicy:IfNotPresent SecurityContext:nil Stdin:false StdinOnce:false TTY:false} is dead, but RestartPolicy says that we should restart it.

Jul 13 11:54:28 hyperkube[1825]: I0713 11:54:28.083787    1825 kuberuntime_manager.go:757] checking backoff for container "filebeat" in pod "logging-elk-filebeat-ds-wvxwb_kube-system(2406dc66-85e4-11e8-8135-000c299e5111)"

This means that in the IBM Cloud Private UI, I don't see any logs .

Digging a bit further, I saw that the logging-elk-filebeat-ds indeed was not started.

[root@icpboot ~]# kubectl get ds --namespace=kube-system
NAME                                 DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
auth-apikeys                         1         1         1         1            1           role=master     20h
auth-idp                             1         1         1         1            1           role=master     20h
auth-pap                             1         1         1         1            1           role=master     20h
auth-pdp                             1         1         1         1            1           role=master     20h
calico-node                          3         3         3         3            3                           20h
catalog-ui                           1         1         1         1            1           role=master     20h
icp-management-ingress               1         1         1         1            1           role=master     20h
kube-dns                             1         1         1         1            1           master=true     20h
logging-elk-filebeat-ds              3         3         2         3            0                           20h
metering-reader                      3         3         2         3            2                           20h
monitoring-prometheus-nodeexporter   3         3         3         3            3                           20h
nginx-ingress-controller             1         1         1         1            1           proxy=true      20h
platform-api                         1         1         1         1            1           master=true     20h
platform-deploy                      1         1         1         1            1           master=true     20h
platform-ui                          1         1         1         1            1           master=true     20h
rescheduler                          1         1         1         1            1           master=true     20h
service-catalog-apiserver            1         1         1         1            1           role=master     20h
unified-router                       1         1         1         1            1           master=true     20h

Now the cause is of course right there in the log message, but I did not know how to fix it:

Error response from daemon: linux mounts: Path /var/lib/docker/containers is mounted on /var/lib/docker/containers but it is not a shared or slave mount.

On each node, execute these commands:

findmnt -o TARGET,PROPAGATION /var/lib/docker/containers

mount --make-shared /var/lib/docker/containers

The result looks something like this:

[root@icpworker1 ~]# findmnt -o TARGET,PROPAGATION /var/lib/docker/containers
TARGET                     PROPAGATION
/var/lib/docker/containers private
[root@icpworker1 ~]# mount --make-shared /var/lib/docker/containers
[root@icpworker1 ~]# findmnt -o TARGET,PROPAGATION /var/lib/docker/containers
TARGET                     PROPAGATION
/var/lib/docker/containers shared
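Note that mount --make-shared does not survive a reboot. One way to reapply it at boot (a sketch, assuming a systemd-based setup such as CentOS 7; the unit name is a made-up example) is a small oneshot unit:

```ini
# /etc/systemd/system/docker-containers-shared.service
[Unit]
Description=Make /var/lib/docker/containers a shared mount
After=docker.service

[Service]
Type=oneshot
ExecStart=/usr/bin/mount --make-shared /var/lib/docker/containers

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable docker-containers-shared.service.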

After that, the logging-elk-filebeat DaemonSet is available :

[root@icpboot ~]# kubectl get ds --namespace=kube-system
NAME                                 DESIRED   CURRENT   READY     UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
logging-elk-filebeat-ds              3         3         2         3            2                           20h


I don't know if this is a bug, or if this is caused by me trying to run ICP on CentOS7 (which is not a supported platform) ...

Synology TFTP server for PXE Boot

Tom Bosmans  9 July 2018 10:54:08
Something I've been meaning to do for a while now, is setup my Synology NAS as a PXE boot server.

I want to be able to easily install new Operating systems on any new hardware I get, but more importantly, to easily install multiple Virtual Machines on my primary workstation without too much hassle.
This will involve setting up the Synology as TFTP server , supplying the correct PXE files (images and configuration) , and also configuring my DHCP server .

The official documentation from Synology is woefully inadequate to get PXE up and running, it is missing a number of vital steps.

Luckily, there are other sources on the internet that fill in the missing steps.

Configure TFTP on Synology

Prepare a shared folder/volume.  In my case, I have a shared volume named "shared" , where I created a folder "PXEBOOT"

Go to Main Menu > Control Panel > File Services and select the TFTP tab.
Tick Enable TFTP service.
Image:Synology TFTP server for PXE Boot

Enter the folder you prepared earlier.

Now you need to add the folder structure for TFTP to be able to show a boot menu , and prepare the images.

Check out this excellent guide, which contains a link to a zip file with a configuration that contains CentOS and Ubuntu images.

(it is actually based on a GitHub repository)

The GitHub repository is not quite up to date, but it's easy to add newer images; I've added Ubuntu 18.04 and CentOS 7.5. It is configured to use the netinstall images (over HTTP), so you do need an internet connection.

Unzip it, and put it on your shared folder, on your Synology, so it looks like this:
Image:Synology TFTP server for PXE Boot

Verify TFTP

I'm using Red Hat 7.5, and I wanted to quickly test TFTP. Unfortunately, the tftp client is not part of my configured repositories, so I just downloaded a client.


tftp <ip address of synology>

tftp> verbose

Verbose mode on.

tftp> get pxelinux.0

getting from <ip address of synology>:pxelinux.0 to pxelinux.0 [netascii]

Received 26579 bytes in 0.2 seconds [1063767 bit/s]


This indicates that the location of pxelinux.0, that needs to be configured in the DHCP server, is in the root of the TFTP server and is accessible by everyone.
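If no tftp client is at hand, curl can perform the same check non-interactively, assuming your curl build includes TFTP support:

```
curl -s tftp://<ip address of synology>/pxelinux.0 -o /tmp/pxelinux.0 && echo "TFTP OK"
```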

Configure Ubiquiti Edge Router's DHCP

A very complete guide to do this can be found here. You need to use the Ubiquiti CLI to do it.

I've configured the following (result of show service dhcp-server )

shared-network-name LAN2 {

    authoritative disable

    subnet {

        bootfile-name /pxelinux.0

        bootfile-server <ip address of synology>






        subnet-parameters "filename &quot;/pxelinux.0&quot;;"



use-dnsmasq disable

Note that you must use the &quot; escape syntax!

The following commands were used :


edit service dhcp-server shared-network-name LAN2 subnet

set subnet-parameters "filename &quot;/pxelinux.0&quot;;"

set bootfile-name /pxelinux.0

set bootfile-server <ip address of synology>



show service dhcp-server
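Note that on EdgeOS, set commands only take effect after commit, and only persist across reboots after save. The full session therefore looks something like this (the subnet value is shown as a placeholder):

```
configure
edit service dhcp-server shared-network-name LAN2 subnet <your subnet>
set subnet-parameters "filename &quot;/pxelinux.0&quot;;"
set bootfile-name /pxelinux.0
set bootfile-server <ip address of synology>
commit
save
exit
```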

Issuing a new "set" command does not overwrite a value; instead it adds a new line. You need to remove any entries that are not correct (if you end up with multiple lines):

show service dhcp-server


edit service dhcp-server shared-network-name LAN2 subnet

delete subnet-parameters "filename &quot;/shared/PXEBOOT/pxelinux.0&quot;;"



show service dhcp-server

If you have multiple lines or wrong lines, you will see PXE errors in the boot screen .

VMWare workstation

Lastly, I need to configure VMWare workstation.
Two important things here:
- I added a bridged network adapter, to obtain a DHCP address from my home network. This adapter will receive the PXE boot instructions.
- I increased the memory size from 1024 MB to 2048 MB, because the CentOS 7.5 installer complained about "no space left on device" on /tmp during installation (which, since /tmp lives in memory there, effectively means running out of RAM).
Image:Synology TFTP server for PXE Boot

When booting, I now get the configured menu options from my PXE boot server.
Image:Synology TFTP server for PXE Boot

Then step through the installation options as you would perform a normal manual installation.  Of course it's also possible to prepare automated installations, but that is another topic .

Letsencrypt certificates for my own test servers

Tom Bosmans  26 June 2018 14:02:15
Yes, it's a bit over the top to use Let's Encrypt certificates for test systems, where a self-signed certificate would serve a similar purpose. Furthermore, a Let's Encrypt certificate has a short lifetime and needs to be replaced every 3 months.

But since Let's Encrypt brought us wildcard certificates fairly recently (March 2018), there is an advantage here: you only need this single certificate and you can use it on all your systems. Of course, in most cases you don't want to use wildcard certificates, but for my case (non-production test systems), this is perfectly valid.

You also must use the DNS challenge (instead of the more traditional HTTP challenge that Let's Encrypt uses for verification). The mechanism is similar to other DNS-based verification mechanisms, like DKIM for SMTP (see DKIM deployed on my mail servers).

My use case here is a test environment running ISAM - IBM Security Access Manager. Not having to trust the signer each time I access a page with a self-signed certificate is a huge plus when demoing a solution :-)

1. Prerequisites
  • You need a recent version of certbot (with support for the DNS challenge and for ACMEv2); I'm using certbot 0.24.0
  • certbot needs to run on a system with Internet access (outbound only; it needs to connect to the Let's Encrypt systems)
  • You also need a public DNS domain, because Let's Encrypt uses DNS for the verification. The only thing that needs to be in the domain records is a TXT record, by the way. You don't need to configure anything else.

2. DNS Preparation

I ordered a DNS domain from my preferred DNS provider.
I could get an EU domain for something like 3 Euro for the first year.

There is nothing to configure for now , the configuration is done during the certbot action.

3. Certbot

Run certbot with the option --preferred-challenges dns, and define your domain with a leading wildcard label (*.<your domain>).
You can also use certbot-auto, and you can do all of this in a single command line, but I used this method:

[root@system ~]# certbot certonly --manual --server --preferred-challenges dns
Saving debug log to /var/log/letsencrypt/letsencrypt.log

Plugins selected: Authenticator manual, Installer None

Starting new HTTPS connection (1):

Please enter in your domain name(s) (comma and/or space separated)  (Enter 'c'

to cancel): *

Obtaining a new certificate

Performing the following challenges:

dns-01 challenge for


NOTE: The IP of this machine will be publicly logged as having requested this

certificate. If you're running certbot in manual mode on a machine that is not

your server, please ensure you're okay with that.

Are you OK with your IP being logged?


(Y)es/(N)o: Y


Please deploy a DNS TXT record under the name with the following value:


Before continuing, verify the record is deployed.


Press Enter to Continue

So now you need to go to your DNS provider and create a TXT DNS record for the _acme-challenge name; in my case:

_acme-challenge 28800 IN TXT 9zE0cU5V1hiYo5HJWY-Zx6FW74gl1gd5P9dnS0G8cYw

In the interface of my DNS provider, it looks like this: I need to create a new subdomain.

Image:Letsencrypt certificates for my own test servers
In the next step , I can then enter the value that certbot provided , in a TXT field.

Now once you have saved your DNS entry, DO NOT continue immediately.

Give it at least 1 minute, so you're certain the DNS entry is available, or even better, verify that your nameserver is up to date by performing a DNS lookup, for instance using dig.

In my case, I can use this command to query the nameserver of my provider. Do this on another system than the one where your certbot command is running, or open a new session.

dig -t txt +short


It needs to return the value of the TXT record.  As long as it doesn't, DO NOT continue in the certbot session, because it will fail and you need to start over.
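You can also simply poll until the record shows up; a sketch, where the domain, nameserver and token are placeholders for your own values:

```
# Loop until the authoritative nameserver returns the challenge value
until dig -t txt +short _acme-challenge.example.org @ns1.example-provider.net \
      | grep -q '<token printed by certbot>'; do
  echo "TXT record not visible yet, waiting 30 seconds..."
  sleep 30
done
```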

But if it does return the key, continue.  

Waiting for verification...

Cleaning up challenges


- Congratulations! Your certificate and chain have been saved at:


Your key file has been saved at:


Your cert will expire on 2018-09-24. To obtain a new or tweaked

version of this certificate in the future, simply run certbot

again. To non-interactively renew *all* of your certificates, run

"certbot renew"

- If you like Certbot, please consider supporting our work by:

Donating to ISRG / Let's Encrypt:
Donating to EFF:           

Now the chain and certificate files are in the standard Let's Encrypt locations (under /etc/letsencrypt/live/).
Since this is a wildcard certificate, you likely want to copy it elsewhere and distribute it across your systems.

4. Let's Encrypt keys and ISAM

IBM Security Access Manager expects PKCS#12 certificates, so we first use openssl to convert the Let's Encrypt PEM files to a .p12.
I'm using ISAM 9.0.5, as an OVA.

openssl pkcs12 -export -out \

-inkey /etc/letsencrypt/live/ \

-in /etc/letsencrypt/live/ \

-certfile /etc/letsencrypt/live/

Enter Export Password:

Verifying - Enter Export Password:

Use a strong password to protect your key!
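If you want to sanity-check the resulting bundle, here is a self-contained sketch of the same conversion, using a throwaway self-signed key pair in place of the real /etc/letsencrypt/live files (all paths and the password are made-up examples):

```shell
# Throwaway key + certificate standing in for the Let's Encrypt files
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 90 \
  -subj "/CN=*.example.org" \
  -keyout "$tmp/privkey.pem" -out "$tmp/cert.pem" 2>/dev/null

# The conversion step: private key + certificate into a single .p12
openssl pkcs12 -export -out "$tmp/bundle.p12" \
  -inkey "$tmp/privkey.pem" -in "$tmp/cert.pem" \
  -passout pass:use-a-strong-password

# Read the bundle back to verify its integrity
openssl pkcs12 -in "$tmp/bundle.p12" -passin pass:use-a-strong-password \
  -noout && echo "p12 OK"
```

With the real files, you would add the chain with -certfile, as in the command above.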

Now get the certificate to a system where you can upload it to ISAM .

In the LMI, I want it in 2 places :
- the management certificate
- the default certificate for the reverse proxies

hosts file on ISAM

Add the IP addresses for the interfaces you want to use to the hosts file on ISAM. We could use DNS as well (since we have the public DNS domain), but since this is internal, I'll keep it simple and use hosts files.
Image:Letsencrypt certificates for my own test servers

Also, I use the following hosts file on my local machine to access my environment :

management certificate

Go to Manage System Settings/System Settings/Management SSL Certificate

Image:Letsencrypt certificates for my own test servers
The LMI will be restarted after this.

pdsrv keydb

Edit the pdsrv keydb, go to Personal certificates and select "Import"
Image:Letsencrypt certificates for my own test servers
Then select the "Let's Encrypt" certificate , click "Edit" and set it as the default certificate.

Image:Letsencrypt certificates for my own test servers

The DST Root CA X3 is missing from the IBM-provided key databases (this is actually a bug in my opinion - see this link), so you basically need to add it to the Signer Certificates in every key database. You can download it from the website in the link below, or you can export it from any modern browser (for example Firefox, below).

Image:Letsencrypt certificates for my own test servers

Note that the reverse proxy can't handle the missing root CA, while the LMI does not seem to require it. In any case, any server you want to protect using ISAM that uses TLS/SSL with a Let's Encrypt certificate will also require you to add this DST Root CA X3.

Note that HTTP/2  results in an ERR_SPDY_INADEQUATE_TRANSPORT_SECURITY  error at this point !

You need to restart the reverse proxies after saving and deploying this.

5. End result

I can now access the LMI on this url :

Image:Letsencrypt certificates for my own test servers

... and the reverse proxy (using Chrome this time round)
Image:Letsencrypt certificates for my own test servers

Everything is green, so everything is OK (at least , OK enough for my test environment).

Additional information

ISAM automation

To automate all these manual actions, I really should use automation tooling like Ansible.
Fortunately, there is a publicly available repository with Ansible roles and playbooks for ISAM. It would be relatively straightforward to automate the management of the certificates (generate a new one, use openssl to convert it, upload it to ISAM for the reverse proxies and for the management interface).

My zonefile, for your information

This zonefile is obviously pretty specific to my DNS provider and to my situation, but still, it may serve as an example for what you would need to have to make this work .
It's the _acme-challenge entry that does the trick .


@ 28800 IN SOA 2018062619 10800 3600 604800 28800

@ 28800 IN NS

@ 28800 IN NS

@ 28800 IN NS

_acme-challenge 28800 IN TXT 9zE0cU5V1hiYo5HJWY-Zx6FW74gl1gd5P9dnS0G8cYw

WebSphere liberty docker on Synology NAS

Tom Bosmans  21 June 2018 16:44:55
I've got a Synology DS415+ at home, and have Docker running on it.  I needed a quick way to install a WebSphere liberty server, and since the Synology NAS support Docker containers, why not ...  It's very easy to get up and running, you just need a few extra configuration settings.

Please note that I'm not sure whether this works on every Synology, though. I think you need one with an Intel CPU (mine is an Intel Atom C2538).


Install the Docker package on your Synology NAS using the Package Center.

Image:WebSphere liberty docker on Synology NAS - updated

Start Docker once it's installed.  In the Registry, you can search for "liberty".  Use the "Download" button to download the image .

The Synology uses Docker Hub, and this is the version you want to download:

There's more information there, for instance how to handle your certificates and key databases .  

Image:WebSphere liberty docker on Synology NAS - updated

Once the download of the image is complete, select Liberty and click "Launch".  This creates an actual container from the image.

Image:WebSphere liberty docker on Synology NAS - updated

You can then configure the container. In particular, the volumes and the ports need to be configured.
Since changes inside the container's filesystem do not survive re-creating it, you need volumes to persist data between restarts.

Image:WebSphere liberty docker on Synology NAS - updated

These 3 volumes are needed for the following paths:

/opt/ibm/wlp/output/ (or, more precisely, the path that's in the WLP_OUTPUT_DIR variable)

/logs (or, more precisely, the path in the LOG_DIR variable)

/config (the server configuration: server.xml and the apps directory)

The documentation states you just need /logs and /config, but I found that the output path is also necessary.

You can also choose to do this later , by using the "Edit" button:
This is my Volume configuration

Image:WebSphere liberty docker on Synology NAS - updated

The ports, by default, are set to Automatic. This means that they change after every restart, and that's not very handy.
I chose ports 19080 and 19443 for the HTTP and HTTPS ports respectively.

Image:WebSphere liberty docker on Synology NAS - updated

The environment variables can be used to give the Liberty container the correct startup options. A very useful one holds the Java options that are used to start the Liberty JVM.
By default, the JVM would be started in UTC time, and there's no "global" way to configure your Docker containers to start in the correct timezone by default.

So add -Duser.timezone=Europe/Brussels  (or your timezone specification of choice) to the IBM_JAVA_OPTIONS environment variable :

IBM_JAVA_OPTIONS    -XX:+UseContainerSupport -Duser.timezone=Europe/Brussels
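For reference, the same container could be created from the command line; the ports and environment variable mirror the Synology UI configuration above, the volume paths are made-up examples, and the stock websphere-liberty image listens on 9080/9443 inside the container:

```
docker run -d --name websphere-liberty1 \
  -p 19080:9080 -p 19443:9443 \
  -v /volume1/docker/liberty/config:/config \
  -v /volume1/docker/liberty/logs:/logs \
  -v /volume1/docker/liberty/output:/opt/ibm/wlp/output \
  -e IBM_JAVA_OPTIONS="-XX:+UseContainerSupport -Duser.timezone=Europe/Brussels" \
  websphere-liberty
```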

Image:WebSphere liberty docker on Synology NAS - updated
This concludes the configuration for the Docker container .  

Configure Liberty server (server.xml)

To get a meaningful Liberty server, you probably want to deploy your own configuration .
Using the File Station in Synology, I have the following folder structure (that contains the volume configuration of the container).

Image:WebSphere liberty docker on Synology NAS - updated

In the config directory, the magic happens.   As with a "normal" Liberty installation, you have a server.xml file here (that is empty by default).
There's also an "apps" directory , that contains your ear files.

In my case, I've used a simple configuration that you can download here : server.xml

Image:WebSphere liberty docker on Synology NAS - updated

This configuration contains a basic user registry, an LTPA configuration and has 2 applications installed : the adminCenter and the defaultApplication.ear (Snoop)

The LTPA keys are generated automatically when you first start the container.  Note that for LTPA SSO to work, you must configure your Liberty Server to run in the correct timezone (see previous topic) !

There are some specific steps to take , before everything will work :

SSL configuration

When you start the Docker image, a default key configuration is generated. You can of course use your own key database, but I chose the quick and easy solution.

Open the keystore.xml file that's in config/configDropins/defaults .  Use the password for the defaultKeyStore in the keystore parameter in your own server.xml.  

<keyStore id="defaultKeyStore" password="<replace with your keystore.xml password>" />


adminCenter

There are multiple ways to install the adminCenter; this is the method I followed:

Click on "Details" , with the websphere-liberty container selected.  
Switch to the "Terminal" tab .
Click on "Create" to create a new Bash terminal session .

Image:WebSphere liberty docker on Synology NAS - updated

Use the following commands to install the adminCenter :

root@websphere-liberty1:/# cd /opt/ibm/wlp/bin                                                                          
root@websphere-liberty1:/opt/ibm/wlp/bin# ./installUtility install adminCenter-1.0      

After restarting the Docker container, the adminCenter is available on the following url : https://:19443/adminCenter .
Image:WebSphere liberty docker on Synology NAS - updated

You need to log in using the admin user (if using the server.xml that's provided here, the password is : Passw0rd ) .

Image:WebSphere liberty docker on Synology NAS - updated

More information on the adminCenter application can be found here :

Default Application

WebSphere Application Server comes out of the box with a DefaultApplication (aka Snoop) that is handy for checking whether your server is working correctly. Unfortunately, there is no DefaultApplication.ear that comes with Liberty.
This version of DefaultApplication.ear works with Liberty.

So download this file, and upload it to your Synology, in the "apps" directory.  Your Liberty server will install it automatically (or restart the Docker image , so the server.xml also becomes active).

The Snoop Servlet is then available on https://:19443/snoop  .  You do need to login (if you use the server.xml that's provided here)

Image:WebSphere liberty docker on Synology NAS - updated

Log files

The log that's in the "Detail" page is not very useful.  
Image:WebSphere liberty docker on Synology NAS - updated
Fortunately, you can use File Station on the Synology to access the "logs" directory, where the standard messages.log lives (together with the other log files, like the ffdc logs, if you're interested in those).

How to convert Notes names to email addresses in the Notes client.

Tom Bosmans  12 June 2018 10:55:51
A golden oldie ...  I recently had to generate a list of email addresses to use in a Cloud application, based on a mail group in my personal addressbook.

The names in that group are in the Notes format, obviously, and I need the email address.

Now I didn't have Designer handy, nor did I feel like accessing the address books directly. And since this actually was a question from a colleague of mine, with no Notes knowledge at all, I needed something that works in a regular Notes client.

So I remembered that Notes includes a built-in @Formula tester.  If you put your @formula code in any (regular) field, and press SHIFT-F9, the content of the field is interpreted as Formula language and executed.

The solution :

Create a colon ( : ) separated list of @NameLookup formulas for all the Notes names you have. If these contain an @ part (Notes-specific routing information), strip that off. You can easily do that in a spreadsheet, or in a text editor.

I end up with a list of formulas , looking like this :

@NameLookup([Noupdate];"Tom Bosmans/GWBASICS";"InternetAddress"):
@NameLookup([Noupdate];"Joske Vermeulen/GWBASICS";"InternetAddress")

(The limit here is the maximum size of a text field in Notes, which comes down to about 900 entries. I had to process about 500, so not a problem.)
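The list-building step can also be scripted; a minimal shell sketch (the names are made-up examples) that strips the @DOMAIN routing suffix and joins the formulas with colons:

```shell
# Build the colon-separated @NameLookup formula list from plain Notes names
printf '%s\n' \
  'Tom Bosmans/GWBASICS@GWBASICS' \
  'Joske Vermeulen/GWBASICS' |
sed -e 's/@.*$//' \
    -e 's|.*|@NameLookup([Noupdate];"&";"InternetAddress")|' |
paste -sd: -
```

The single output line can then be pasted straight into the Subject field.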

Then, I open a new mail and paste the formulas in the "Subject" field.
Image:How to convert Notes names to email addresses in the Notes client.

Select the Subject field, Press "SHIFT-F9" , and the formulas will be executed.  The result is a list of email addresses .  

VMWare Workstation command line

Tom Bosmans  18 May 2018 10:11:38
Get a list of running virtual machines
vmrun list

Use that output to get the ip address of that guest.
vmrun getGuestIPAddress  /run/media/tbosmans/fast/Connections_6_ICEC/Connections_6.vmx

(note that this particular call is pretty buggy, and does not return the correct IP address if you have multiple interfaces : ...  still, it can be pretty useful)

You can run any command in the guest, as long as you authenticate properly (-gu <guest user> -gp <guest password>)

So for instance, this command lists all running processes, and you can use the output to actually do something with these processes in a next step (eg. kill them)

vmrun -gu root -gp listProcessesInGuest  /run/media/tbosmans/fast/Connections_6_ICEC/Connections_6.vmx

You can also run any command using that mechanism.
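Combining the two commands, a small sketch that prints the IP address of every running VM (this assumes VMware Tools in each guest, and that vmrun list prints a header line followed by one .vmx path per line):

```
vmrun list | tail -n +2 | while read -r vmx; do
  echo "$vmx -> $(vmrun getGuestIPAddress "$vmx")"
done
```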

IBM Connections Communities Replay events DB2 queries

Tom Bosmans  23 January 2018 17:33:25
Working on a recent problem where events were not processed, I was looking at the wsadmin commands that provide information.
The Jython code supplied, for instance CommunitiesQEventService.viewQueuedEventsByRemoteAppDefId("Blog", None, 100), is pretty useless in situations where you have hundreds of thousands of events in the queue. The Jython code in the wiki is also plain wrong (but that's a different story).

So I turned to the DB2 database, to examine the LC_EVENT_REPLAY table. Unfortunately, the interesting detailed information is stored as XML in a CLOB field called EVENT.
It took me quite a bit of time to figure out how to get the information out of that field in an SQL query.

In fact, the most puzzling part was the notation needed for the XML root element and the node elements: they all need to use the namespace. Using a wildcard for the namespace is sufficient in this case.
So this query (the SELECT/FROM wrapper around the XMLTABLE is reconstructed here, assuming the EVENT CLOB is parsed with XMLPARSE) would give you some detailed information about events in the replay table:

SELECT X."title", X."author", X."communityid", X."community"
FROM LC_EVENT_REPLAY T,
     XMLTABLE( '$tev/*:entry' PASSING XMLPARSE(DOCUMENT T.EVENT) AS "tev"
       COLUMNS
         "title"       VARCHAR(512) PATH '*:title/text()',
         "author"      VARCHAR(128) PATH '*:author/*:email/text()',
         "communityid" VARCHAR(128) PATH '*:container/@id',
         "community"   VARCHAR(128) PATH '*:container/@name'
     ) AS X
ORDER BY X."communityid";

Of course, you can show any information from the EVENT XML file you like, but using this query as a start, would help you immensely :-) .

Custom dynamic DNS on Ubiquiti router

Tom Bosmans  5 January 2018 16:57:04

Ubiquiti EdgeRouter X

The Ubiquiti EdgeRouter X is a very cheap but very powerful router with a lot of options. It's based on EdgeOS, which is a Linux-based distro.
That basically allows you to do "anything" you want.

I got it from Alternate, for around 54 Euros.

Dynamic DNS

I would finally like to set up a VPN solution, so I can safely access my systems from wherever I am. My EdgeRouter X has these capabilities, so I was looking for a way to set it up.

The first thing to do is look for a dynamic DNS provider.  In the past, I used (long, looong ago), but they don't offer dynamic DNS services anymore as far as I can tell.
I looked at several free dynamic DNS providers, but couldn't figure them out (it's probably me).

So I went looking at what my 'real' DNS provider ( ) has to offer.  It turns out they recently (27 December 2017) introduced a dynamic DNS service.

Dynamic DNS on

Really simple to do : the UI has a new section 'dynamic dns', where you add a new subdomain.  That subdomain is then listed among your regular subdomains.
I did seem to have problems when using longer passwords, but that may have been a different problem ...

More information :

Dynamic DNS configuration on Edgerouter


The Edgerouter uses a pretty standard ddclient package .  

Web UI

Through the web UI, the options are limited.  Specifically, the protocol is limited to a subset of what ddclient has to offer, even though the Service dropdown says "custom" ...

[Image: dynamic DNS configuration options in the Edgerouter web UI]
Bottom line : it doesn't work, and it is not as "custom" as I would like.


The Edgerouter allows SSH access; I have configured it to use SSH keys.

There is a series of commands to configure the dynamic DNS feature (as in the web UI), and although that offers a few more options, it's still not sufficient.

Custom ddclient

Luckily, ddclient is just a simple Perl script, so it's easy to modify.  The problem with the code is that it contains hardcoded elements (like the /update.php? part in the update URL).
There are three sections to change :
- variables
- examples
- update code

I copied the code from the duckdns sections and adapted it.

Open ddclient with a text editor, as root (sudo su - ).  The ddclient file is here :


Add the keysystems definitions at the end of the %services section (after the existing woima entry, in my case) :

   'woima' => {
       'updateable' => undef,
       'update'     => \&nic_woima_update,
       'examples'   => \&nic_woima_examples,
       'variables'  => merge(
           ...
       ),
   },
   'keysystems' => {
       'updateable' => undef,
       'update'     => \&nic_keysystems_update,
       'examples'   => \&nic_keysystems_examples,
       'variables'  => merge(
           $variables{'keysystems-common-defaults'},
           $variables{'service-common-defaults'},
       ),
   },

Add the variables to the %variables object (somewhere at the end is fine) :

'keysystems-common-defaults'       => {
    'server'              => setv(T_FQDNP,  1, 0, 1, '',       undef),
    'login'               => setv(T_LOGIN,  0, 0, 0, 'unused', undef),
    'password'            => setv(T_PASSWD, 1, 0, 0, '',       undef),
},

Copy the example code and update code to the end of the file.

## nic_keysystems_examples
sub nic_keysystems_examples {
    return <<EoEXAMPLE;
o 'keysystems'

The 'keysystems' protocol is used by the non-free
dynamic DNS service offered by and
Check for API

Configuration variables applicable to the 'keysystems' protocol are:
  protocol=keysystems          ##
  server=www.fqdn.of.service   ## defaults to
  password=service-password    ## password (token) registered with the service
  your-hostname                ## the host registered with the service.

Example ${program}.conf file entries:
  ## single host update
  protocol=keysystems,                       \\
  password=prettypassword                    \\
  your-hostname

EoEXAMPLE
}

## nic_keysystems_update
## by Tom Bosmans
## response contains "code = 200" on successful completion
sub nic_keysystems_update {
    debug("\nnic_keysystems_update -------------------");

    ## update each configured host
    ## should improve to update in one pass
    foreach my $h (@_) {
        my $ip = delete $config{$h}{'wantip'};
        info("KEYSYSTEMS setting IP address to %s for %s", $ip, $h);
        verbose("UPDATE:","updating %s", $h);

        # Build the URL that we're going to call to update
        my $url;
        $url  = "http://$config{$h}{'server'}/update.php";
        $url .= "?hostname=";
        $url .= $h;
        $url .= "&password=";
        $url .= $config{$h}{'password'};
        $url .= "&ip=";
        $url .= $ip;

        # Try to fetch the URL
        my $reply = geturl(opt('proxy'), $url);

        # No response : declare the update failed
        if (!defined($reply) || !$reply) {
            failed("KEYSYSTEMS updating %s: Could not connect to %s.", $h, $config{$h}{'server'});
            next;
        }
        last if !header_ok($h, $reply);

        if ($reply =~ /code = 200/) {
            $config{$h}{'ip'}     = $ip;
            $config{$h}{'mtime'}  = $now;
            $config{$h}{'status'} = 'good';
            success("updating %s: good: IP address set to %s", $h, $ip);
        } else {
            $config{$h}{'status'} = 'failed';
            failed("updating %s: Server said: '$reply'", $h);
        }
    }
}
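Before relying on the modified ddclient, it can help to reproduce the GET request by hand.  This small Python sketch (every value in it is hypothetical) builds the same URL the update sub concatenates, so you can paste the result into curl or a browser and look for "code = 200" in the reply :

```python
# Mirrors the URL assembled in nic_keysystems_update.
# All values are placeholders; substitute your provider's server,
# your own hostname and the token from the provider's web interface.
def build_update_url(server: str, hostname: str, password: str, ip: str) -> str:
    return (f"http://{server}/update.php"
            f"?hostname={hostname}&password={password}&ip={ip}")

url = build_update_url("dyndns.example.com", "home.example.com",
                       "secret-token", "203.0.113.7")
print(url)
# -> http://dyndns.example.com/update.php?hostname=home.example.com&password=secret-token&ip=203.0.113.7
```

If the manual request does not return a success code, the problem is on the provider side (wrong token or hostname), not in your ddclient changes.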

Save the file and restart the ddclient service.

sudo service ddclient restart

This just checks that the modified code is fine.  Now for the configuration.

We need 2 files:


Note that you can generate the second file by using the web UI of the Edgerouter, or the console commands.  The values you enter in the web UI or console don't matter, since you will replace everything anyway.
You need to edit these files as root (sudo su - ).

/etc/ddclient.conf :

# Configuration file for ddclient generated by debconf
# /etc/ddclient.conf



The important variables here are the password and, on the last line, the hostname you defined in the Domaindiscount24 web interface.
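For reference, a minimal sketch of what the protocol-specific part of /etc/ddclient.conf could look like (every value here is hypothetical; substitute the server, token and hostname from your provider's web interface) :

```
# hypothetical example - adjust server, password and hostname
protocol=keysystems,               \
server=dynamicdns.example.com,     \
password=secret-token              \
home.example.com
```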

# autogenerated by on Fri Jan  5 12:58:19 UTC 2018
use=if, if=eth0


Save both files.

You can now force an update of the dynamic DNS by issuing an EdgeOS command :

update dns dynamic interface eth0

You can put a tail on the messages log to see the results :

tail -f /var/log/messages

The result should be something like this :

Jan  5 15:20:06 ubnt ddclient[10616]: SUCCESS:  updating good: IP address set to
Jan  5 16:39:02 ubnt ddclient[13381]: SUCCESS:  updating good: IP address set to

Of course, instead of editing the files directly on your router, you could copy them off using scp, edit them on your own desktop machine, and copy them back.


Alas, no supportability : EdgeOS updates will likely wipe the changes away.
Also, using the web UI or console to update the dynamic DNS settings will wreak havoc on the configuration.  I am working on getting the updates into SourceForge (  / ), but don't hold your breath for these changes to make it all the way down to Ubiquity.
So the solution is not ideal, but it works for now ...